1 Introduction

Throughout this paper \({\mathcal {U}}\), \({\mathcal {E}}\) and \({\mathcal {Y}}\) are finite dimensional Hilbert spaces. Furthermore, \(H^\infty ({\mathcal {U}},{\mathcal {Y}})\) is the Hardy space formed by the set of all functions F(s), whose values are operators mapping \({\mathcal {U}}\) into \({\mathcal {Y}}\), that are analytic in the open right half plane \(\mathbb {C}_+ = \{s \in \mathbb {C}: \Re (s) >0\}\) and

$$\begin{aligned} \Vert F\Vert _\infty = \sup \{\Vert F(s)\Vert : \Re (s) >0\} <\infty . \end{aligned}$$
(1.1)

Moreover, \(L_+^2({\mathcal {U}})\) is the Hilbert space formed by the set of all Lebesgue measurable functions g(t), \( t \ge 0 \), with values in \({\mathcal {U}}\) such that

$$\begin{aligned} \Vert g\Vert _{L_+^2}^2 =\int _0^\infty \Vert g(t)\Vert _{\mathcal {U}}^2 dt <\infty . \end{aligned}$$

Let G in \(H^\infty ({\mathcal {U}},{\mathcal {Y}})\) and K in \(H^\infty ({\mathcal {E}},{\mathcal {Y}})\) be rational functions. Let \(T_G\) and \(T_K\) denote the corresponding Wiener–Hopf operators,

$$\begin{aligned} T_G:L^2_+({\mathcal {U}})\rightarrow L^2_+({\mathcal {Y}}) \quad \text{ and }\quad T_K:L^2_+({\mathcal {E}})\rightarrow L^2_+({\mathcal {Y}}). \end{aligned}$$

To be precise,

$$\begin{aligned}&\big (T_G u\big )(t) = D_1 u(t) + \int _0^t g_\circ (t-\tau ) u(\tau )d \tau \qquad (\text{ for } 0 \le t<\infty ) \nonumber \\&\big (T_K v\big )(t) = D_2 v(t) + \int _0^t k_\circ (t-\tau ) v(\tau )d \tau \qquad (\text{ for } 0 \le t <\infty ) \nonumber \\&G(s) = D_1 +\big ({\mathfrak {L}} g_\circ \big )(s) \quad \hbox {and}\quad K(s) = D_2 + \big ({\mathfrak {L}} k_\circ \big )(s) \quad (\text{ for } s\in \mathbb {C}_+). \end{aligned}$$
(1.2)

Here \( {\mathfrak {L}} g_\circ \) and \( {\mathfrak {L}} k_\circ \) denote the Laplace transforms of the functions \(g_\circ \) and \(k_\circ \), respectively, that is,

$$\begin{aligned} \big ({\mathfrak {L}} g_\circ \big )(s)= & {} \int _0^\infty g_\circ (t) e^{-st}dt \qquad (\text{ for } s\in \mathbb {C}_+) \nonumber \\ \big ({\mathfrak {L}} k_\circ \big )(s)= & {} \int _0^\infty k_\circ (t) e^{-st}dt \qquad (\text{ for } s\in \mathbb {C}_+). \end{aligned}$$
(1.3)

It is emphasized that in our problems \(g_\circ \) and \(k_\circ \) are integrable operator valued functions on the interval \([0,\infty )\). Following engineering notation, a time function g(t) is denoted by a lower case letter, while its Laplace transform G(s) is denoted by a capital letter.
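To make the action of \(T_G\) in (1.2) concrete, the following numerical sketch discretizes the causal convolution for invented scalar data (not from the paper): the hypothetical kernel \(g_\circ (t)=e^{-t}\) with \(D_1=1\). For the constant input \(u\equiv 1\) one has \((T_G u)(t) = 1 + \int _0^t e^{-(t-\tau )}d\tau = 2 - e^{-t}\), which the discretization reproduces.

```python
import numpy as np

# Invented scalar data (illustration only): D1 = 1 and kernel g_o(t) = exp(-t).
# For u(t) = 1, equation (1.2) gives
#   (T_G u)(t) = 1 + int_0^t exp(-(t - tau)) d tau = 2 - exp(-t).
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
g = np.exp(-t)                 # kernel g_o sampled on the grid
u = np.ones_like(t)            # input u(t) = 1

# Causal convolution int_0^t g_o(t - tau) u(tau) d tau as a Riemann sum.
conv = np.convolve(g, u)[: len(t)] * dt
Tg_u = u + conv                # D1 u(t) plus the convolution term

exact = 2.0 - np.exp(-t)       # closed form for comparison
```

The Riemann sum agrees with the closed form up to a discretization error of order dt.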

A function X in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\) is called a solution to the Leech problem associated with data G and K whenever

$$\begin{aligned} G(s)X(s)= K(s) \quad (s\in \mathbb {C}_+) \quad \hbox {and}\quad \Vert X\Vert _\infty \le 1. \end{aligned}$$
(1.4)

The Leech problem is an example of a metric constrained interpolation problem: the first part of (1.4) is the interpolation condition, and the second part is the metric constraint.

In an unpublished note from the early 1970s (eventually published as [20]) Leech proved that the problem is solvable if and only if the operator \(T_GT_G^*-T_KT_K^*\) is nonnegative. Later the Leech theorem was derived as a corollary of more general results; see, e.g., [21, page 107], [9, Sect. VIII.6], and [1, Sect. 4.7]. However, in Leech’s work and in the other results just mentioned the problem was solved for \(H^\infty \) functions on the open unit disc. In that case, \(T_G\) and \(T_K\) are Toeplitz operators. Here we work with \(H^\infty \) functions on the open right half plane.

One can use a Cayley transform to obtain a right half plane solution from an open unit disc solution. Moreover, one can directly apply the commutant lifting theorem, with the unilateral shift on the appropriate \(H^2\) space determined by the multiplication operator \(\frac{s-1}{s+1}\), to arrive at state space solutions to the Leech problem. However, these methods usually lead to cumbersome formulas that are not natural. Motivated by commutant lifting techniques in the discrete case, we derive a special state space solution for our continuous time Leech problem which avoids all the complications associated with the Cayley transform.
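For the record, the map \(s \mapsto (s-1)/(s+1)\) mentioned above carries the open right half plane onto the open unit disc. The small check below is illustrative only and assumes this particular convention for the Cayley transform (other conventions differ by rotations).

```python
import numpy as np

# One common convention for the Cayley transform; the paper's multiplication
# operator uses the symbol (s - 1)/(s + 1).
cayley = lambda s: (s - 1) / (s + 1)

rng = np.random.default_rng(2)
s = rng.uniform(0.01, 10.0, 100) + 1j * rng.uniform(-10.0, 10.0, 100)

# Points with Re(s) > 0 land strictly inside the unit disc ...
inside = np.abs(cayley(s))
# ... and the imaginary axis is carried onto the unit circle.
boundary = np.abs(cayley(1j * np.linspace(-5.0, 5.0, 50)))
```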

In what follows it is assumed that G and K are stable rational finite dimensional operator functions. In other words, G and K are rational operator-valued \(H^\infty \) functions. In that case, if the Leech problem associated with G and K is solvable, one expects the problem to have a stable rational finite dimensional operator solution as well. However, a priori this is not clear, and the existence of rational solutions in the discrete time case was proved only recently in [25] by reducing the problem to polynomials, in [24] by adapting the lurking isometry method used in [2], and in [15] by using a state space approach.

The special case of Leech’s theorem in the discrete time setting, with \( {\mathcal {E}} = {\mathcal {Y}} \) and K identically equal to the identity operator \(I_{{\mathcal {Y}}}\) is part of the Toeplitz-corona theorem, which is due to Carlson [7] for \({\mathcal {Y}} = {{\mathbb {C}}}\), and is due to Fuhrmann [16] for an arbitrary finite dimensional \( {\mathcal {Y}} \). The least squares solution of the Toeplitz-corona version of the equation can be found in [13] and a description of all solutions without any norm constraint in [14].

References [22, 23] present a nice state space approach, based on lossless systems, to provide all solutions to a general or two sided continuous time Leech problem. These papers rely mainly on state space and algebraic techniques. Our approach is different: we use operator methods to develop a solution to the Leech problem. Inspired by the central solution for the commutant lifting theorem, we present a solution to the Leech problem using the corresponding Wiener–Hopf operators; see Theorem 1.1 below. Then we apply classical state space methods to convert this operator formula to a state space solution and prove some stability results. To keep the presentation simple, we concentrate on one solution only.

For an engineering perspective on discrete time Toeplitz-corona and related problems with applications to signal processing we refer to [22, 23, 26, 27] and the references therein. See [3] for a nice state space presentation of many classical interpolation problems in both discrete and continuous time. However, [3] does not treat Leech or Bezout type problems.

Now let X in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\) be a solution to our Leech interpolation problem with data G and K. Since \(\Vert T_X\Vert = \Vert X\Vert _\infty \), we see that \(T_X\) is a contraction. Because \(GX = K\) is equivalent to \(T_G T_X = T_K\), we have

$$\begin{aligned} T_G T_G^* - T_K T_K^*&= T_G\big (I - T_X T_X^*\big )T_G^* \ge 0. \end{aligned}$$

In other words, \(T_G T_G^* - T_K T_K^*\) is a positive operator.

In this paper we will provide an explicit state space solution when the operator \(T_G T_G^* - T_K T_K^*\) is strictly positive, that is, when \(T_G T_G^* - T_K T_K^*\) is positive and invertible. In this case \(T_G T_G^*\) is strictly positive too.

In general, a positive and invertible operator M is called strictly positive, and in that case we write \(M \gg 0\). Thus \(T_G T_G^* - T_K T_K^* \gg 0 \) if and only if \(T_G T_G^* - T_K T_K^*\) is strictly positive.

Assume that the operator \(T_G\) is onto, or equivalently, assume that \(T_G T_G^*\) is strictly positive. Then

$$\begin{aligned} \Lambda := T_G^* \big (T_G T_G^*\big )^{-1} T_K \end{aligned}$$
(1.5)

is a well defined operator mapping \(L_+^2({\mathcal {E}})\) into \({\mathfrak {H}} = \ker (T_G)^\perp \) and satisfying \(T_G \Lambda = T_K\). We claim that \(\Vert \Lambda \Vert <1\), that is, \( \Lambda \) is strictly contractive, if and only if \(T_G T_G^* - T_K T_K^*\) is strictly positive. To see this, notice that \(\Lambda \) is strictly contractive if and only if

$$\begin{aligned} 1&> r_{spec}\left( \Lambda ^* \Lambda \right) = r_{spec}\left( T_K^* \big (T_G T_G^*\big )^{-1} T_K \right) = r_{spec}\left( \big (T_G T_G^*\big )^{-1} T_K T_K^*\right) \\&= r_{spec}\left( \big (T_G T_G^*\big )^{-{1/2}} T_K T_K^*\big (T_G T_G^*\big )^{-{1/2}} \right) . \end{aligned}$$

In other words, \(\Lambda \) is strictly contractive if and only if

$$\begin{aligned} I - \big (T_G T_G^*\big )^{-{1/2}} T_K T_K^*\big (T_G T_G^*\big )^{-{1/2}} \end{aligned}$$

is strictly positive. Multiplying both sides by \(\big (T_G T_G^*\big )^{{1/2}} \), we see that \(\Lambda \) is strictly contractive if and only if \(T_G T_G^* - T_K T_K^*\) is strictly positive.
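A finite dimensional analog of this equivalence can be checked numerically. In the sketch below random matrices stand in for \(T_G\) and \(T_K\); they are not the Wiener–Hopf operators themselves, only an illustration of the algebra behind the equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random finite-matrix stand-ins for T_G and T_K (illustration only).
TG = rng.standard_normal((4, 6))            # surjective, so TG TG* is invertible
TK = 0.3 * rng.standard_normal((4, 5))

P = TG @ TG.T                               # stand-in for T_G T_G*
Lam = TG.T @ np.linalg.inv(P) @ TK          # Lambda = T_G^* (T_G T_G^*)^{-1} T_K

norm_Lam = np.linalg.norm(Lam, 2)           # operator (spectral) norm of Lambda
gap = np.min(np.linalg.eigvalsh(P - TK @ TK.T))
```

By the argument above, `norm_Lam < 1` holds exactly when `gap > 0`, whatever the data.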

Throughout this paper we assume that \(\Lambda \) defined in (1.5) is strictly contractive, or equivalently, \(T_G T_G^* - T_K T_K^*\) is assumed to be strictly positive. In fact, we will develop a state space method involving an algebraic Riccati equation to determine when \(\Lambda \) is strictly contractive. Then motivated by the central solution for the commutant lifting theorem, we will compute a solution X for our Leech problem with data G and K. In fact what we shall present is an analog of Theorem IV.4.1 in [10], which uses a central solution based on a discrete time setting. For the non-discrete time setting the null space for the backward shift in Theorem IV.4.1 in [10] is replaced by the Dirac delta function. An explanation of the role of the Dirac delta is given in Sect. 14 “(Appendix 2)”. We will directly show that our solution is indeed a solution to our Leech problem. The next theorem is our first main result for the non-discrete case.

Theorem 1.1

Let G in \(H^\infty ({\mathcal {U}},{\mathcal {Y}})\) and K in \(H^\infty ({\mathcal {E}},{\mathcal {Y}})\) be two rational functions. Assume that \(T_GT_G^* - T_K T_K^*\) is strictly positive, or equivalently, \(\Lambda = T_G^*\left( T_G T_G^*\right) ^{-1} T_K\) is a strict contraction. Let X be the function defined by

$$\begin{aligned} X(s) = U(s)V(s)^{-1} \end{aligned}$$
(1.6)

where U(s) and V(s) are the functions defined by

$$\begin{aligned} U(s)= & {} \Big ({\mathfrak {L}} \Lambda \left( I - \Lambda ^* \Lambda \right) ^{-1} \delta \Big )(s) \nonumber \\ V(s)= & {} \Big ({\mathfrak {L}} \left( I - \Lambda ^* \Lambda \right) ^{-1} \delta \Big )(s). \end{aligned}$$
(1.7)

Here \(\delta (t)\) is the Dirac delta function and \({\mathfrak {L}}\) is the Laplace transform.

(i)

    Then X is a solution to our Leech problem. To be precise, X is a function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\) satisfying \(G(s) X(s) = K(s)\) and \(\Vert X\Vert _\infty < 1\).

(ii)

    Moreover, V(s) is an invertible outer function, that is, both V(s) and \(V(s)^{-1}\) are functions in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\). In fact, the function

    $$\begin{aligned} \Theta (s) = V(\infty )^{{1/2}} V(s)^{-1} \end{aligned}$$
    (1.8)

    is the outer spectral factor for the function \(I-X(-{\overline{s}})^*X(s)\), that is,

    $$\begin{aligned} I-X(-{\overline{s}})^*X(s)=\Theta (-{\overline{s}})^* \Theta (s). \end{aligned}$$
    (1.9)

In this paper we shall derive state space formulas for X(s) and \(\Theta (s)\). Furthermore, we will show directly that X(s) is indeed a strictly contractive solution to our Leech problem, that is, \(G(s) X(s) = K(s)\) and \(\Vert X\Vert _\infty <1\), and that \(\Theta \) is the outer spectral factor for \(I -X(-{\overline{s}})^*X(s)\).

The formula for \(X(s) = U(s)V(s)^{-1}\) in Eq. (1.7) was motivated by [10, Theorem IV.4.1], which is a consequence of the central solution to the Sz.-Nagy–Foias commutant lifting theorem. It is emphasized that the setting in [10, Theorem IV.4.1] was developed for the discrete time case, that is, when the Hardy spaces are over the open unit disc and the corresponding operators are Toeplitz operators. Similar formulas for U and V are also presented in the band method for the discrete time Nehari problem (see [18]). To arrive at \(X(s) = U(s)V(s)^{-1}\) in (1.7), we replaced the Toeplitz operators by Wiener–Hopf operators and the kernel of the backward shift by the Dirac delta function. This leads to the formula in (1.7) for our solution to the Leech problem. Of course, several issues arise when one makes these adjustments. First and foremost, the Dirac delta function \(\delta (t)\) is not a well defined function. Moreover, we have not presented any justification for why replacing Toeplitz operators with Wiener–Hopf operators and using the Laplace transform instead of a Fourier transform should lead to a solution \(X(s) = U(s)V(s)^{-1}\) of our Leech problem with data G(s) and K(s). However, we will put all of these issues to rest. First we use the formula \(X(s) = U(s)V(s)^{-1}\) in (1.7) to derive the state space realization for X(s) in Theorem 3.2. Then, once we have these state space formulas, we verify directly that \(X(s) = U(s)V(s)^{-1}\) in Theorem 3.2 is indeed a solution to our Leech problem. Moreover, we also verify directly that all the other results in Theorem 3.2 hold. This direct verification should eliminate any doubt one may have concerning our solution X(s) to the continuous time Leech problem. Finally, it is noted that one could adapt Theorem IV.4.1 in [10], with the corresponding unilateral shifts on \(H^2\) of the right half plane, to obtain Theorem 1.1. However, we decided to simply state the result and verify directly that it solves the Leech problem.

The paper consists of 14 sections including the introduction, which contains the first main theorem (Theorem 1.1). Throughout, beginning in Sect. 2, the emphasis will be on Leech problems that are based on finite dimensional state space realizations. In the third section the two main theorems are presented, and the first one is proved in the fourth section. The proof of Theorem 1.1 and the proof of the second theorem in Sect. 3 are based on Lemmas 5.1, 6.1 and 7.1, using formulas (5.3), (6.2), and (7.4). Furthermore, the solution X is given by (8.2), and then Sects. 10 and 11 prove Theorem 3.2. The proof of Theorem 1.1 is presented in Sect. 12. Appendices 1 and 2, in Sects. 13 and 14 respectively, present classical results that are used in the paper. “Appendix 1” treats a Riccati equation. In “Appendix 2” the definition of the Dirac delta function \( \delta \) for the specific class of operators that we need is developed. In fact, the results in Sect. 14 are a special case of the general theory of Dirac delta functions.

For completeness, assume that \(T_G T_G^* - T_K T_K^*\) is a positive operator, not necessarily strictly positive, or equivalently, \(\Vert T_G^* y\Vert \ge \Vert T_K^* y\Vert \) for all y in \(L_+^2({\mathcal {Y}})\). Then there exists a contraction \(\Lambda \) mapping \(L_+^2({\mathcal {E}})\) into \({\mathfrak {H}}= \ker (T_G)^\perp \) such that \(\Lambda ^* T_G^* = T_K^*\), or equivalently, \(T_G \Lambda = T_K\). By choosing the appropriate unilateral shifts one can use the Sz.-Nagy–Foias commutant lifting theorem to show that there exists a function X in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\) such that

$$\begin{aligned} \Lambda = P_{{\mathfrak {H}}}T_X \quad \hbox {and}\quad \Vert X\Vert _\infty = \Vert T_X\Vert \le 1. \end{aligned}$$
(1.10)

In particular, X is a solution to our Leech problem. Hence a solution to our Leech problem exists if and only if \(T_G T_G^* - T_K T_K^*\) is positive. Finally, we compute a solution to our Leech problem when \(T_G T_G^* - T_K T_K^*\) is strictly positive. The strict positivity hypothesis is imposed because of standard numerical issues in solving Riccati equations.

2 The State Space Setup

Throughout this paper, we assume that G and K are stable rational matrix functions. Given this we will establish a state space method involving a special Riccati equation to determine when the operator \(T_GT_G^*-T_KT_K^* \) is strictly positive, or equivalently, \(\Lambda = T_G^*\left( T_G T_G^*\right) ^{-1}T_K\) is strictly contractive.

To develop a solution to our rational Leech problem with data G and K, we use some classical state space realization theory from mathematical systems theory (see, e.g., Chapter 1 of [8] or Chapter 4 in [4]). For our G and K this means that the matrix function \(\begin{bmatrix}G&K \end{bmatrix}\) admits a state space representation of the following form:

$$\begin{aligned} \begin{bmatrix} G(s)&K(s) \end{bmatrix} = \begin{bmatrix} D_1&D_2 \end{bmatrix} + C(s I - A)^{-1}\begin{bmatrix} B_1&B_2 \end{bmatrix} . \end{aligned}$$
(2.1)

As expected, I denotes the identity operator. Throughout A is a stable operator on a finite dimensional space \({\mathcal {X}}\). By stable we mean that all the eigenvalues for A are in the open left half plane \(\{s \in \mathbb {C}: \Re (s) < 0\}\). Moreover, \(B_1\), \(B_2\), C, \(D_1\) and \(D_2\) operate between the appropriate spaces. Since G and K are stable rational operator valued functions, G and K have no poles in the closed right half plane \(\{s \in \mathbb {C}: \Re (s) \ge 0\}\). The realization (2.1) is called minimal if there exists no realization of \(\begin{bmatrix}G&K \end{bmatrix}\) as in (2.1) with the dimension of the state space \( {{\mathcal {X}}}\) smaller than the one in the given realization. In that case, the dimension of the state space \({\mathcal {X}}\) of A is called the McMillan degree of \(\begin{bmatrix}G&K \end{bmatrix}\). If the realization (2.1) is minimal, then the matrix A is automatically stable.
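As an illustration, a realization of the form (2.1) can be evaluated pointwise and its stability checked numerically. All data below are invented for a hypothetical two-state example; they are not from the paper.

```python
import numpy as np

# A hypothetical two-state realization of [G(s) K(s)] as in (2.1); data invented.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])     # stable: eigenvalues -1 and -2
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D1, D2 = np.array([[2.0]]), np.array([[0.5]])

def GK(s):
    """Evaluate [G(s) K(s)] = [D1 D2] + C (sI - A)^{-1} [B1 B2]."""
    resolvent_term = np.linalg.solve(s * np.eye(2) - A, np.hstack([B1, B2]))
    return np.hstack([D1, D2]) + C @ resolvent_term

stable = np.all(np.linalg.eigvals(A).real < 0)   # A stable: eigenvalues in Re(s) < 0
val = GK(1.0)                                    # evaluation at s = 1 in C_+
```

Here a direct computation gives \(G(1) = 5/2\) and \(K(1) = 11/12\) for this invented data.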

We will use the realization (2.1) of \(\begin{bmatrix} G(s)&K(s) \end{bmatrix}\) to obtain alternative formulas for the functions U(s) and V(s) in (1.7). These alternative state space formulas will be given in Eqs. (6.2) and (5.3) below.

The observability operator \(W_{obs}\) mapping \({\mathcal {X}}\) into \(L_+^2({\mathcal {Y}})\) is defined by

$$\begin{aligned} \bigl ( W_{obs} x \bigr ) (t) = C e^{A t} x \qquad (x \in {\mathcal {X}}). \end{aligned}$$
(2.2)

If the realization is minimal, then the pair \(\{C,A\}\) is observable, and thus the operator \(W_{obs}\) is one to one. We do not require the realization (2.1) to be minimal. All we need is that A is stable and the pair \(\{C,A\}\) is observable, or equivalently, \(W_{obs}\) is one to one. However, from a practical perspective, one would almost always work with a minimal realization: this guarantees that A is stable and makes the state space computations more efficient.
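In the finite dimensional setting, observability of the pair \(\{C,A\}\) can be tested by the classical Kalman rank criterion: \(\{C,A\}\) is observable if and only if the stacked observability matrix with block rows \(CA^k\), \(k=0,\dots ,n-1\), has rank n, where n is the dimension of \({\mathcal {X}}\). A small sketch with invented data:

```python
import numpy as np

def is_observable(C, A):
    """Kalman rank test: {C, A} is observable iff the stacked matrix
    [C; C A; ...; C A^(n-1)] has rank n = dim of the state space."""
    n = A.shape[0]
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(obs) == n

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
C_good = np.array([[1.0, 0.0]])   # invented observable pair
C_bad = np.array([[0.0, 1.0]])    # x1 never influences this output: unobservable
```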

As a first step towards our main result we obtain Theorem 3.1 below, which presents a necessary and sufficient condition for \(T_GT_G^*-T_KT_K^* \) to be strictly positive in terms of the operators in (2.1) and related matrices. To accomplish this we need the rational matrix function with values in \({\mathcal {Y}}\) given by

$$\begin{aligned} R(s)=G(s){\widetilde{G}}(s) -K(s){\widetilde{K}}(s). \end{aligned}$$
(2.3)

Here \({\widetilde{G}}(s) =G(-{\overline{s}})^*\) and \({\widetilde{K}}(s) =K(-{\overline{s}})^*\). Note that R has no pole on the imaginary axis \(\{i \omega : -\infty< \omega < \infty \}\).

In Sect. 13 “(Appendix 1)” we will show that R admits the following state space representation:

$$\begin{aligned} R(s) = C(s I - A)^{-1}\Gamma + R_0 - \Gamma ^*(sI + A^*)^{-1}C^*. \end{aligned}$$
(2.4)

Here \(R_0\) on \({\mathcal {Y}}\) and \(\Gamma \) mapping \({\mathcal {Y}}\) into \({\mathcal {X}}\) are defined by

$$\begin{aligned} R_0= & {} D_1D_1^* -D_2D_2^* , \end{aligned}$$
(2.5)
$$\begin{aligned} \Gamma= & {} B_1D_1^* -B_2D_2^* + \Delta C^*, \end{aligned}$$
(2.6)

where \(\Delta \) is the unique solution of the Lyapunov equation

$$\begin{aligned} A \Delta + \Delta A^*+ B_1B_1^* -B_2 B_2^*=0. \end{aligned}$$
(2.7)

Since A is stable, this Lyapunov equation is indeed solvable. The solution is unique and given by

$$\begin{aligned} \Delta = \int _0^\infty e^{A t} \left( B_1 B_1^* - B_2 B_2^*\right) e^{A^*t} dt. \end{aligned}$$
(2.8)
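Numerically, the Lyapunov equation (2.7) and the integral formula (2.8) can be cross-checked with invented stable data. The sketch below uses `scipy.linalg.solve_continuous_lyapunov`, which solves \(AX + XA^* = Q\), so (2.7) is passed with the negated right-hand side.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Invented stable data (not from the paper).
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B1 = np.array([[1.0], [0.5]])
B2 = np.array([[0.2], [1.0]])
W = B1 @ B1.T - B2 @ B2.T

# solve_continuous_lyapunov(A, Q) solves A X + X A^H = Q, so the Lyapunov
# equation (2.7), A Delta + Delta A* + W = 0, is passed with Q = -W.
Delta = solve_continuous_lyapunov(A, -W)

# Cross-check against the integral formula (2.8), truncated at t = 30
# (the integrand decays exponentially) and evaluated by the midpoint rule.
dt = 0.01
mid = np.arange(0.0, 30.0, dt) + dt / 2
Delta_int = sum(expm(A * s) @ W @ expm(A * s).T * dt for s in mid)
```

Both the residual of (2.7) and the agreement with the truncated integral (2.8) can then be verified directly.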

With our realization for R(s) we associate the following algebraic Riccati equation:

$$\begin{aligned} A^* Q +Q A + ( C - \Gamma ^* Q )^* R_0^{-1} ( C - \Gamma ^* Q ) =0. \end{aligned}$$
(2.9)

For the moment, let us assume that \(R_0\) is strictly positive. Then we say that Q is a stabilizing solution to the algebraic Riccati equation (2.9) if Q is a strictly positive operator solving (2.9) and the operator \(A_0\) on \({\mathcal {X}}\) defined by

$$\begin{aligned} A_{0} = A-\Gamma C_0, \quad \hbox {and}\quad C_0 = R_0^{-1} ( C - \Gamma ^* Q ) \end{aligned}$$
(2.10)

is stable. If a stabilizing solution exists, then it is unique. If Q is a stabilizing solution, then \(W_0\) denotes the observability operator mapping \({\mathcal {X}}\) into \(L_+^2({\mathcal {Y}})\) corresponding to the pair \(\{C_0,A_0\}\), defined by

$$\begin{aligned} \bigl ( W_0 x \bigr ) (t) = C_0 e^{A_0 t} x \qquad (x\in {\mathcal {X}}). \end{aligned}$$
(2.11)
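In the scalar case the stabilizing solution of (2.9) can be computed by hand. With the invented data \(A=-1\) and \(\Gamma = C = R_0 = 1\), equation (2.9) reads \(-2q + (1-q)^2 = 0\), that is, \(q^2 - 4q + 1 = 0\), with roots \(2 \pm \sqrt{3}\). Only \(q = 2 - \sqrt{3}\) is strictly positive with stable \(A_0 = q - 2\), so it is the stabilizing solution:

```python
import numpy as np

# Scalar illustration of (2.9)-(2.10) with invented data
# A = -1, Gamma = 1, C = 1, R_0 = 1.  Then (2.9) reads
#   2*a*q + (c - gamma*q)**2 / r0 = 0,  i.e.  q**2 - 4*q + 1 = 0.
a, gamma, c, r0 = -1.0, 1.0, 1.0, 1.0
roots = np.sort(np.roots([1.0, -4.0, 1.0]).real)   # 2 - sqrt(3) and 2 + sqrt(3)

q_stab = roots[0]                      # candidate stabilizing solution
c0 = (c - gamma * q_stab) / r0         # C_0 = R_0^{-1}(C - Gamma^* Q)
a0 = a - gamma * c0                    # A_0 = A - Gamma C_0, here equal to q - 2
```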

Finally, \(T_R\) denotes the non-causal Wiener–Hopf operator determined by R, that is,

$$\begin{aligned} \big (T_R y\big )(t) = R_0 y(t) + \int _0^t C e^{A(t-\tau )} \Gamma y(\tau ) d \tau + \int _t^\infty \Gamma ^* e^{A^*(\tau -t)} C^* y(\tau ) d \tau \end{aligned}$$
(2.12)

where y is a function in \(L_+^2({\mathcal {Y}})\).

Remark 2.1

Theorem 13.4 in Sect. 13 “(Appendix 1)” shows that \(T_R\) is a strictly positive operator on \(L_+^2({\mathcal {Y}})\) if and only if \(R_0 \gg 0 \) and there exists a stabilizing solution to the algebraic Riccati Eq. (2.9). In this case, the unique stabilizing solution Q is given by \(Q= W_{obs}^*T_R^{-1}W_{obs}\).

3 Main Theorems

The first main theorem presented in this paper is Theorem 1.1 in the Introduction. The two other main theorems are Theorems 3.1 and 3.2 in the present section. The proofs of these three main theorems are given in different sections. The proof of Theorem 1.1 is given in Sect. 12, the proof of Theorem 3.1 is the main content of Sect. 4, and the proof of Theorem 3.2 is based on the results of Sects. 5–10 and 13 “(Appendix 1)” and is completed in Sect. 11.

Theorem 3.1

Let G and K be stable rational matrix functions, and assume that \(\begin{bmatrix} G&K\end{bmatrix}\) is given by the minimal realization (2.1). Let R be the function defined in (2.3), with the state space representation (2.4). Then the operator

$$\begin{aligned} T_G T_G^* - T_K T_K^* \end{aligned}$$

is strictly positive if and only if the following two conditions hold.

(i)

    The operator \(T_R\) is strictly positive, or equivalently, \( R_0 \) given by (2.5) is strictly positive and there exists a stabilizing solution Q to the algebraic Riccati equation (2.9), that is, \(Q \gg 0\) and the operator \( A_0 \) defined by (2.10), i.e.,

    $$\begin{aligned} A_{0} = A-\Gamma R_0^{-1} ( C - \Gamma ^* Q ) \end{aligned}$$
    (3.1)

    is stable.

(ii)

    The operator \(Q^{-1} - \Delta \) is strictly positive.

In this case, the Wiener–Hopf operator \(T_R\) is strictly positive and the unique stabilizing solution to the algebraic Riccati equation is given by

$$\begin{aligned} Q = W_{obs}^*T_R^{-1}W_{obs}. \end{aligned}$$
(3.2)

The inverse of the operator \(T_G T_G^* - T_K T_K^*\) is determined by

$$\begin{aligned} \Big (T_GT_G^*-T_KT_K^*\Big )^{-1} = T_R^{-1} + W_{0}\Delta \big (I - Q\Delta \big )^{-1} W_{0}^*. \end{aligned}$$
(3.3)

Finally, \(R(s) = {\widetilde{\Phi }}(s)\Phi (s)\) where \(\Phi \) is the invertible outer function given by

$$\begin{aligned} \Phi (s)= & {} R_0^{{1/2}}\left( I + C_0(sI - A)^{-1}\Gamma \right) \nonumber \\ \Phi (s)^{-1}= & {} \left( I - C_0(sI - A_0)^{-1}\Gamma \right) R_0^{-{1/2}}\nonumber \\ C_0= & {} R_0^{-1}\left( C-\Gamma ^*Q\right) . \end{aligned}$$
(3.4)

Equation (3.2) and the outer spectral factorization \(R(s) = {\widetilde{\Phi }}(s)\Phi (s)\) with \(\Phi \) in (3.4) are special cases of Theorem 13.4 in Sect. 13 “(Appendix 1)”. Using Theorem 13.4, we will directly prove the following result.

Theorem 3.2

Let G and K be stable rational matrix functions, and let \(\begin{bmatrix} G&K\end{bmatrix}\) be given by the minimal (stable) realization (2.1). Furthermore, assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, assume that items (i) and (ii) of Theorem 3.1 hold. Then the following holds.

(i)

    A solution X to the Leech problem with data G and K is given by the following stable state space realization:

    $$\begin{aligned} X(s)= & {} D_1^\dag D_2 + C_x \big (s I - {\mathbf {A}}\big )^{-1} \big ( B_2 - B_1 D_1^\dag D_2\big )\nonumber \\ C_x= & {} D_1^\dag C + \big (I - D_1^\dag D_1\big ) B_1^*Q\big (I - \Delta Q\big )^{-1} \nonumber \\ {\mathbf {A}}= & {} A - B_1 C_x \nonumber \\ D_1^\dag= & {} D_1^*(D_1D_1^*)^{-1}. \end{aligned}$$
    (3.5)

    The operator \({\mathbf {A}}\) is stable, and \(\Vert X\Vert _\infty <1\). Finally, the McMillan degree of X is less than or equal to the McMillan degree of \(\begin{bmatrix} G&K \end{bmatrix}\).

(ii)

    Let \(\Theta (s)\) be the rational function in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\) defined by

    $$\begin{aligned} \Theta (s)= & {} D_v^{-{1/2}} - D_v^{-{1/2}}C_\theta \big (sI - {\mathbf {A}})^{-1} \big ( B_2 - B_1 D_1^\dag D_2\big )\nonumber \\ C_\theta= & {} (D_2^* C_0 + B_2^*Q) \big (I - \Delta Q\big )^{-1}\nonumber \\ D_v= & {} I + D_2^* R_0^{-1}D_2. \end{aligned}$$
    (3.6)

    Then \(\Theta \) is an invertible outer function, that is, both \(\Theta (s)\) and \(\Theta (s)^{-1}\) are functions in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\). Furthermore, \(\Theta \) is the outer spectral factor for \(I - {\widetilde{X}}X\). To be precise,

    $$\begin{aligned} {\widetilde{\Theta }}(s) \Theta (s) = I - {\widetilde{X}}(s)X(s). \end{aligned}$$
    (3.7)

Theorem 3.1 will be proved in the next section. The proof of Theorem 3.2 will be completed at the end of Sect. 11.

4 Proof of Theorem 3.1

Throughout the section G in \(H^\infty ({\mathcal {U}},{\mathcal {Y}})\) and K in \(H^\infty ({\mathcal {E}},{\mathcal {Y}})\) are rational functions, and we assume that \(\begin{bmatrix} G&K\end{bmatrix}\) is given by the minimal stable realization in (2.1). We first prove three lemmas. The first deals with the rational matrix function R(s) defined by (2.3).

Lemma 4.1

Let R be the \(L^\infty ({\mathcal {Y}},{\mathcal {Y}})\) rational matrix function defined by (2.3). If \(T_G T_G^* - T_K T_K^*\) is strictly positive, then \(T_R\) is strictly positive.

Proof

Assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive. For each \(\alpha \in \mathbb {C}_+\) set \(\varphi _{\alpha }(t) = e^{-\alpha t}\). Notice that \(G(s) = D_1 + ({\mathfrak {L}}g_o)(s)\) where \(g_o(t) = C e^{At}B_1\). For y in \({\mathcal {Y}}\), we have

$$\begin{aligned} \left( T_G^* y \varphi _\alpha \right) (t)&= D_1^* y e^{-\alpha t}+ \int _t^\infty g_o(\tau -t)^* ye^{-\alpha \tau } d\tau \\&= D_1^*y e^{-\alpha t} + \int _t^\infty g_o(\tau -t)^* y e^{-\alpha (\tau -t)} e^{-\alpha t} d\tau \\&= D_1^* y e^{-\alpha t} + e^{-\alpha t} \int _0^\infty g_o(u)^* y e^{-\alpha u} du\\&= \Big (D_1^* + \big ({\mathfrak {L}}g_o^*\big )(\alpha ) \Big )y \varphi _\alpha (t) = G(\alpha )^*y \varphi _\alpha (t). \end{aligned}$$

In other words, for \(\Re (\alpha ) >0\):

$$\begin{aligned} \big (T_G^*y \varphi _\alpha \big )(t) =\varphi _\alpha (t) G(\alpha )^* y \qquad (\text{ when } \varphi _\alpha (t) = e^{-\alpha t} \text{ and } y \in {\mathcal {Y}}).\quad \end{aligned}$$
(4.1)

Using this with the corresponding result for K, we have

$$\begin{aligned} T_G^* y \varphi _{\alpha }=\varphi _{\alpha }G(\alpha )^* y \quad \text{ and }\quad T_K^* y \varphi _{\alpha }= \varphi _{\alpha }K(\alpha )^* y \qquad (\text{ when } y \in {\mathcal {Y}}). \end{aligned}$$
(4.2)

Notice that

$$\begin{aligned} \Vert \varphi _{\alpha }\Vert ^2 = (\varphi _{\alpha }, \varphi _{\alpha }) = \int _0^\infty e^{-\alpha t} e^{-{\overline{\alpha }} t} dt = \frac{1}{\alpha +{\overline{\alpha }}}. \end{aligned}$$

Since \(T_G T_G^* - T_K T_K^*\) is strictly positive (by assumption), there exists \(\eta >0\) such that \(T_GT_G^*-T_KT_K^*\ge \eta I\). Hence for y in \({\mathcal {Y}}\), we have

$$\begin{aligned} ((T_GT_G^*-T_KT_K^*)y\varphi _{\alpha },y \varphi _{\alpha }) \ge \eta (y\varphi _{\alpha },y\varphi _{\alpha }). \end{aligned}$$

This with (4.2) readily implies that

$$\begin{aligned} \frac{\eta \Vert y\Vert ^2}{\alpha + {\overline{\alpha }}}&= \eta \Vert y\varphi _{\alpha }\Vert ^2 \le \Vert T_G^* y \varphi _{\alpha }\Vert ^2 -\Vert T_K^* y \varphi _{\alpha }\Vert ^2\\&= \Vert \varphi _{\alpha } G(\alpha )^* y\Vert ^2 -\Vert \varphi _{\alpha } K(\alpha )^* y\Vert ^2\\&= \Vert \varphi _{\alpha }\Vert _{L_+^2}^2 \Big (\Vert G(\alpha )^* y\Vert _{\mathcal {U}}^2 -\Vert K(\alpha )^* y\Vert _{\mathcal {E}}^2\Big )\\&= \frac{(G(\alpha ) G(\alpha )^* y,y) -(K(\alpha ) K(\alpha )^* y,y)}{\alpha +{\overline{\alpha }}} . \end{aligned}$$

In other words, we have

$$\begin{aligned} \frac{G(\alpha )G(\alpha )^*-K(\alpha )K(\alpha )^*}{\alpha +{\overline{\alpha }}} \ge \frac{\eta }{\alpha +{\overline{\alpha }}}I \qquad (\alpha \in \mathbb {C}_+). \end{aligned}$$

Multiplying by \(\alpha +{\overline{\alpha }}\) and taking the limit as \(\alpha \rightarrow i \omega \) on the imaginary axis shows that

$$\begin{aligned} R( i\omega )&= G( i\omega ) {\widetilde{G}}( i\omega ) - K( i\omega ) {\widetilde{K}} ( i\omega ) \\&= G( i\omega )G( i\omega )^*-K( i\omega )K( i\omega )^* \ge \eta I, \qquad (\text{ for } -\infty< \omega < \infty ). \end{aligned}$$

This is equivalent to \(T_R\) being strictly positive. See [21, Chapter 3, Examples and Addenda] and [21, Sect. 6.2, Theorem B].\(\square \)

Lemma 4.2

Let \(W_{obs}\) be defined by (2.2), and let \(\Delta \) be the unique solution of the Lyapunov Eq. (2.7). Then

$$\begin{aligned} T_GT_G^*-T_KT_K^* =T_R - W_{obs}\Delta W_{obs}^*. \end{aligned}$$
(4.3)

In particular, the operator \(T_GT_G^*-T_KT_K^*\) is strictly positive if and only if the operator \(T_R - W_{obs}\Delta W_{obs}^*\) is strictly positive.

Proof

We first recall some elementary facts concerning Hankel operators. To this end, let

$$\begin{aligned} \left( H_G w \right) (t) = \int _0^\infty g_o(t+\tau ) w(\tau ) d\tau \qquad ( t \ge 0 ) \end{aligned}$$

be the Hankel operator determined by \(G(s) = D_1+\big ({\mathfrak {L}}g_o\big )(s)\). In a similar way, let \(H_K\) be the corresponding Hankel operator mapping \(L_+^2({\mathcal {E}})\) into \(L_+^2({\mathcal {Y}})\) determined by K. Let \(W_{con,1}\) mapping \(L_+^2({\mathcal {U}})\) into \({\mathcal {X}}\) and \(W_{con,2}\) mapping \(L_+^2({\mathcal {E}})\) into \({\mathcal {X}}\) be the controllability operators defined by

$$\begin{aligned} W_{con,j} w = \int _0^\infty e^{A t}B_j w(t) dt \qquad (\text{ for } j=1,2). \end{aligned}$$

Let \(P_j\) be the controllability Gramian defined by \(P_j = W_{con,j}W_{con,j}^*\) for \(j =1,2\). Notice that \(P_j\) is the unique solution to the Lyapunov equation

$$\begin{aligned} A P_j + P_j A^* + B_jB_j^* = 0 \qquad (\text{ for } j=1,2). \end{aligned}$$

Hence \(\Delta = P_1 - P_2\). Using \(G(s) = D_1 + \big ({\mathfrak {L}}g_o\big )(s)\) where \(g_o(t)= C e^{At} B_1\) and the corresponding result \(K(s) = D_2 + \big ({\mathfrak {L}}k_o\big )(s)\) where \(k_o(t)= C e^{At} B_2\), we see that \(H_G = W_{obs} W_{con,1}\) and \(H_K = W_{obs} W_{con,2}\). Finally,

$$\begin{aligned} H_G H_G^* = W_{obs} P_1 W_{obs}^* \quad \text{ and }\quad H_K H_K^* = W_{obs} P_2 W_{obs}^*. \end{aligned}$$
(4.4)

The Wiener–Hopf operators \(T_{GG^*}\) and \(T_{KK^*}\) are given by the following identities:

$$\begin{aligned} T_{GG^*} = T_GT_G^* + H_GH_G^*\quad \hbox {and}\quad T_{KK^*} = T_KT_K^* + H_K H_K^*. \end{aligned}$$
(4.5)

Using \(R = G {\widetilde{G}} - K {\widetilde{K}} \), we have

$$\begin{aligned} T_R= & {} T_GT_G^*-T_KT_K^*+H_GH_G^*-H_KH_K^* \nonumber \\= & {} T_GT_G^*-T_KT_K^* + W_{obs}(P_1 - P_2)W_{obs}^*; \end{aligned}$$
(4.6)

see (4.4). This, together with \(\Delta = P_1-P_2\), yields (4.3). \(\square \)

Lemma 4.3

Let M and T be self-adjoint operators on a Hilbert space \({\mathfrak {H}}\), and let T be strictly positive. Assume that \(M=T-WNW^*\), where N is a self-adjoint operator on a Hilbert space \({\mathfrak {X}}\), and W is a one-to-one operator with a closed range from \({\mathfrak {X}}\) into \({\mathfrak {H}}\). Put \(Q=W^*T^{-1}W\). Then \(Q \gg 0\). Furthermore \(M\gg 0\) if and only if \(Q^{-1}-N \gg 0\). In this case

$$\begin{aligned} M^{-1} = T^{-1} + T^{-1}W N \big (I - Q N\big )^{-1} W^*T^{-1}. \end{aligned}$$
(4.7)

Proof

Multiplying \(M = T - W N W^*\) on both sides by \(T^{-{1/2}}\) yields

$$\begin{aligned} T^{-{1/2}}M T^{-{1/2}} = I - T^{-{1/2}} WNW^* T^{-{1/2}}. \end{aligned}$$

Let \(\mathbb {P}\) be the orthogonal projection onto the range of \(T^{-{1/2}}W\). Then

$$\begin{aligned} T^{-{1/2}}M T^{-{1/2}} = I - \mathbb {P} + \mathbb {P} - T^{-{1/2}} WNW^* T^{-{1/2}}. \end{aligned}$$

Notice that

$$\begin{aligned} \mathbb {P} = T^{-{1/2}} W\big (W^* T^{-1}W\big )^{-1} W^* T^{-{1/2}} = T^{-{1/2}} W Q^{-1} W^* T^{-{1/2}}. \end{aligned}$$

This readily implies that

$$\begin{aligned} T^{-{1/2}}M T^{-{1/2}} = (I - \mathbb {P}) + T^{-{1/2}} W\big (Q^{-1}-N\big )W^* T^{-{1/2}}. \end{aligned}$$

The previous equation decomposes \(T^{-{1/2}}M T^{-{1/2}}\) into the orthogonal sum of two self-adjoint operators. Therefore M is strictly positive if and only if \(T^{-{1/2}}M T^{-{1/2}}\) is strictly positive, or equivalently, \(Q^{-1}-N\) is strictly positive.

Assume that M is strictly positive, or equivalently, \(Q^{-1} -N\) is strictly positive. Then

$$\begin{aligned} M^{-1}&= \big (T- W N W^*\big )^{-1}\\&= T^{-1}\big (I - W N W^*T^{-1}\big )^{-1}\\&= T^{-1} + T^{-1}\big (I - W N W^*T^{-1}\big )^{-1} W N W^*T^{-1}\\&= T^{-1} + T^{-1}W N \big (I - W^*T^{-1}W N\big )^{-1} W^*T^{-1}\\&= T^{-1} + T^{-1}W N \big (I - Q N\big )^{-1} W^*T^{-1}. \end{aligned}$$

In other words, \(M^{-1}\) is given by the formula in (4.7). \(\square \)
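The inversion formula (4.7) is easy to sanity-check on finite-dimensional data. The sketch below (all matrices are arbitrary illustrative choices; N is scaled small so that \(M \gg 0\) and \(I - QN\) is invertible) compares the formula against a direct inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3

G = rng.standard_normal((n, n))
T = G @ G.T + n * np.eye(n)            # T >> 0
W = rng.standard_normal((n, k))        # generically one-to-one with closed range
N = rng.standard_normal((k, k))
N = 0.05 * (N + N.T)                   # small self-adjoint N, so M = T - W N W^* >> 0

Tinv = np.linalg.inv(T)
Q = W.T @ Tinv @ W                     # Q = W^* T^{-1} W >> 0
M = T - W @ N @ W.T

# Formula (4.7): M^{-1} = T^{-1} + T^{-1} W N (I - Q N)^{-1} W^* T^{-1}.
M_inv = Tinv + Tinv @ W @ N @ np.linalg.inv(np.eye(k) - Q @ N) @ W.T @ Tinv
print(np.allclose(M_inv, np.linalg.inv(M)))  # True
```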

Proof of Theorem 3.1

Assume the operator \(T_GT_G^*-T_KT_K^*\) is strictly positive. Then Lemma 4.1 tells us that \(T_R\) is strictly positive and item (i) in Theorem 3.1 is fulfilled. Furthermore, applying Lemma 4.3 with

$$\begin{aligned} M = T_GT_G^*-T_KT_K^*, \quad T = T_R, \quad W = W_{obs} \quad \hbox {and}\quad N = \Delta , \end{aligned}$$

we see that the operator \(Q^{-1}- \Delta \) is strictly positive, and hence item (ii) in Theorem 3.1 is fulfilled. Furthermore, in this case the inversion formula (4.7) yields the formula to compute the inverse of \(T_GT_G^*-T_KT_K^*\) in (3.3).

Conversely, assume items (i) and (ii) are satisfied. Item (i) gives \( T_R \gg 0 \). Then again using Lemma 4.3, we see that item (ii) implies that \(T_GT_G^*-T_KT_K^*\) is strictly positive. \(\square \)

The rest of this paper is devoted to establishing the state space formulas (3.5) for our solution X of the Leech problem and proving both Theorems 1.1 and 3.2.

5 A State Space Realization for V(s)

Our solution to the Leech problem is motivated by \(X(s) = U(s)V(s)^{-1}\) where U(s) and V(s) are defined in (1.7). The operator \(\Lambda \) is defined by

$$\begin{aligned} \Lambda = T_G^*\big (T_GT_G^*\big )^{-1}T_K , \quad \hbox {so that}\quad T_G \Lambda = T_K. \end{aligned}$$
(5.1)

Recall that \(\Lambda \) is a strict contraction if and only if \(T_G T_G^* - T_K T_K^*\) is strictly positive. The following result provides a state space realization for V(s).
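In a finite-dimensional toy analogue, with plain matrices \(T_g, T_k\) standing in for the Wiener–Hopf operators (an illustrative simplification, not the operator setting of the paper), the definition (5.1) and this equivalence can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3

Tg = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # Tg Tg^* invertible
Tk = 0.3 * rng.standard_normal((n, m))                    # small: Tg Tg^* - Tk Tk^* >> 0

GG = Tg @ Tg.T
Lam = Tg.T @ np.linalg.solve(GG, Tk)       # Lambda = Tg^*(Tg Tg^*)^{-1} Tk

print(np.allclose(Tg @ Lam, Tk))           # True: Tg Lambda = Tk
print(np.linalg.norm(Lam, 2) < 1)          # True: Lambda is a strict contraction
print(np.linalg.eigvalsh(GG - Tk @ Tk.T).min() > 0)  # True: Tg Tg^* - Tk Tk^* >> 0
```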

Lemma 5.1

Assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, that items (i) and (ii) of Theorem 3.1 hold. Consider the function V(s) defined by

$$\begin{aligned} V(s) = \big ({\mathfrak {L}} \left( I - \Lambda ^* \Lambda \right) ^{-1} \delta \big )(s) \end{aligned}$$
(5.2)

where \(\delta (t)\) is the Dirac delta function. Then a state space realization for V(s) is given by

$$\begin{aligned} V(s)= & {} (I + D_2^* R_0^{-1}D_2) + \left( D_2^* C_0 + B_2^*Q \right) \big (sI - A_0\big )^{-1}\mathbb {B} , \nonumber \\ \mathbb {B}= & {} \big (I - \Delta Q\big )^{-1} \Big ( B_2 - B_1 D_1^\dag D_2\Big )\Big ( I+ D_2^*R_0^{-1} D_2\Big ) , \end{aligned}$$
(5.3)

where \(D_1^\dag = D_1^*(D_1 D_1^*)^{-1}\) and \(\Delta \) is the unique solution of the Lyapunov Eq. (2.7). Because \(A_0\) is stable, V is a function in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\).

Recall that \( A_0 \) and \( C_0 \) are given by (2.10) and that Q solves (2.9). Formula (5.3) for V(s) will play a fundamental role in computing our solution X(s) for the Leech problem.

Proof of Lemma 5.1

By assumption, the operator \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, conditions (i) and (ii) of Theorem 3.1 hold. Hence \(T_R\) is strictly positive, the unique stabilizing solution to the algebraic Riccati Eq. (2.9) is given by \(Q = W_{obs}^* T_R^{-1}W_{obs}\) and (see Remark 2.1) \(Q^{-1} - \Delta \) is also strictly positive. Moreover, Theorem 3.1 shows that

$$\begin{aligned} T_G T_G^* - T_K T_K^* = T_R - W_{obs} \Delta W_{obs}^*. \end{aligned}$$

Furthermore, the inverse is given by

$$\begin{aligned} \Big (T_G T_G^* - T_K T_K^*\Big )^{-1} = T_R^{-1} + T_R^{-1}W_{obs} \Delta \big (I - Q \Delta \big )^{-1} W_{obs}^*T_R^{-1}. \end{aligned}$$

Recall that \(\Lambda = T_G^*\big (T_G T_G^*\big )^{-1} T_K\) is a strict contraction. Using this we obtain

$$\begin{aligned} \big (I - \Lambda ^* \Lambda \big )^{-1}&= \Big (I - T_K^* \big (T_G T_G^*\big )^{-1} T_K \Big )^{-1}\\&= I + \Big (I - T_K^* \big (T_G T_G^*\big )^{-1} T_K \Big )^{-1}T_K^* \big (T_G T_G^*\big )^{-1} T_K \\&= I + T_K^* \Big (I - \big (T_G T_G^*\big )^{-1} T_K T_K^* \Big )^{-1}\big (T_G T_G^*\big )^{-1} T_K. \end{aligned}$$

In other words,

$$\begin{aligned} \big (I - \Lambda ^* \Lambda \big )^{-1} = I + T_K^* \Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K. \end{aligned}$$
(5.4)

This readily implies that

$$\begin{aligned} \big (I - \Lambda ^* \Lambda \big )^{-1}&= I + T_K^* \Big (T_R^{-1} + T_R^{-1}W_{obs} \Omega W_{obs}^*T_R^{-1} \Big ) T_K\\ \Omega&= \Delta \big (I - Q \Delta \big )^{-1}. \end{aligned}$$

Recall that \(W_0 = T_R^{-1}W_{obs}\) where \(\left( W_0 x\right) (t) = C_0 e^{A_0 t}x\) and \(x\in {\mathcal {X}}\); see Part (iii) of Theorem 13.4 in Sect. 13 “(Appendix 1)” for a discussion of \(W_0\) in the general Riccati setting. In particular, \(W_{obs}^* W_0 =Q\). Hence

$$\begin{aligned} \big (I - \Lambda ^* \Lambda \big )^{-1} = I + T_K^* \Big (T_R^{-1} + W_0 \Omega W_0^* \Big ) T_K. \end{aligned}$$

Let us compute \(\left( (I - \Lambda ^* \Lambda )^{-1} \delta \right) (t)\). By employing \(R(s) = \Phi (-{\overline{s}})^* \Phi (s)\), with \( \Phi (s)^{-1} = \left( I - C_0(sI - A_0)^{-1}\Gamma \right) R_0^{-{1/2}}\) [see (3.4)], we obtain

$$\begin{aligned} T_R^{-1} \delta = T_\Phi ^{-1}T_\Phi ^{-*} \delta = T_\Phi ^{-1} R_0^{-{1/2}}\delta =R_0^{-1}\delta - W_0 \Gamma R_0^{-1}. \end{aligned}$$

The calculations involving the Dirac delta function \(\delta (t)\) are explained in Sect. 14 “(Appendix 2)”. Using this we have

$$\begin{aligned}&\Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K \delta = \Big (T_R^{-1} + W_0 \Omega W_0^* \Big ) T_K \delta \\&\quad = \Big (T_R^{-1} + W_0 \Omega W_0^* \Big ) \Big (D_2 \delta + W_{obs} B_2\Big )\\&\quad = T_R^{-1} \delta D_2 + T_R^{-1} W_{obs} B_2 + W_0 \Omega W_0^* \Big (D_2 \delta + W_{obs} B_2\Big )\\&\quad = \delta R_0^{-1} D_2 - W_0 \Gamma R_0^{-1} D_2 + W_0 B_2 + W_0 \Omega C_0^* D_2 + W_0 \Omega Q B_2 \\&\quad = \delta R_0^{-1} D_2 + W_0 \Big ( B_2- \Gamma R_0^{-1} D_2 + \Omega C_0^* D_2 + \Omega Q B_2 \Big )\\&\quad = \delta R_0^{-1} D_2 + W_0 \Big (\Omega C_0^* D_2 - \Gamma R_0^{-1} D_2 + (I + \Omega Q )B_2 \Big ). \end{aligned}$$

In other words,

$$\begin{aligned}&\Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K \delta (t) = \delta R_0^{-1} D_2 + W_0 \mathbb {B} \nonumber \\&\mathbb {B} = \Omega C_0^* D_2 - \Gamma R_0^{-1} D_2 + (I + \Omega Q )B_2\nonumber \\&\Omega = \Delta \big (I - Q \Delta \big )^{-1}. \end{aligned}$$
(5.5)

Let us simplify the expression for \(\mathbb {B}\). To this end,

$$\begin{aligned} I + \Omega Q = I + \Delta \big (I - Q \Delta \big )^{-1} Q = I+ \Delta Q \big (I - \Delta Q \big )^{-1} = \big (I - \Delta Q \big )^{-1}. \end{aligned}$$

Using this and (5.5) gives

$$\begin{aligned} \mathbb {B} = \Big ( \Omega C_0^* - \Gamma R_0^{-1} \Big ) D_2 + \big ( I - \Delta Q \big )^{-1} B_2, \end{aligned}$$

and

$$\begin{aligned}&\big (\Omega C_0^* - \Gamma R_0^{-1} \big )D_2 = \left( \Delta (I - Q \Delta )^{-1} ( C^*- Q \Gamma ) - \Gamma \right) R_0^{-1} D_2 \\&\quad = \big ( \Omega C^* - (I - \Delta Q)^{-1} \Gamma \big ) R_0^{-1} D_2 \\&\quad = \big ( (I - \Delta Q)^{-1} \Delta C^* - (I - \Delta Q)^{-1} \Gamma \big ) R_0^{-1} D_2 . \end{aligned}$$

Thus

$$\begin{aligned} \mathbb {B}&= \big ( (I - \Delta Q)^{-1} \Delta C^* -(I - \Delta Q)^{-1} \Gamma \big ) R_0^{-1} D_2 + \big (I - \Delta Q \big )^{-1}B_2\\&=(I - \Delta Q)^{-1} \big ( (\Delta C^* - \Gamma ) R_0^{-1} D_2 + B_2 \big ). \end{aligned}$$

Next

$$\begin{aligned}&(\Delta C^* - \Gamma )R_0^{-1} D_2 + B_2 = \big ( \Delta C^* - (B_1 D_1^* - B_2 D_2^* + \Delta C^*) \big ) R_0^{-1} D_2 + B_2 \\&\quad = ( B_2 D_2^* - B_1 D_1^* )R_0^{-1} D_2 + B_2 = B_2 \big ( I+ D_2^*R_0^{-1} D_2\big ) - B_1 D_1^* R_0^{-1} D_2. \end{aligned}$$

And therefore

$$\begin{aligned} \mathbb {B} = (I - \Delta Q)^{-1} \Big ( B_2 \big ( I+ D_2^*R_0^{-1} D_2\big ) - B_1 D_1^* R_0^{-1} D_2\Big ). \end{aligned}$$
(5.6)

Now observe that

$$\begin{aligned} D_1^* R_0^{-1} D_2 \big ( I+ D_2^*R_0^{-1} D_2\big )^{-1}&= D_1^* \big ( I+ R_0^{-1} D_2 D_2^*\big )^{-1}R_0^{-1} D_2\\&=D_1^* \big (R_0+ D_2 D_2^*\big )^{-1} D_2\\&=D_1^* \big (D_1 D_1^*\big )^{-1} D_2 = D_1^\dag D_2. \end{aligned}$$

This readily implies that

$$\begin{aligned} D_1^* R_0^{-1} D_2 \big ( I+ D_2^*R_0^{-1} D_2\big )^{-1} = D_1^\dag D_2 \end{aligned}$$
(5.7)

where \(D_1^\dag = D_1^* \big (D_1 D_1^*\big )^{-1}\) is the Moore–Penrose inverse of \(D_1\). Using this in (5.6), we obtain the desired formula for \(\mathbb {B}\) in Eq. (5.3) of Lemma 5.1, that is,

$$\begin{aligned} \mathbb {B} = \big (I - \Delta Q\big )^{-1} \Big ( B_2 - B_1 D_1^\dag D_2\Big )\Big ( I+ D_2^*R_0^{-1} D_2\Big ). \end{aligned}$$
(5.8)
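The identity (5.7) only involves the constant matrices \(D_1\) and \(D_2\), so it can be verified numerically as is. In the sketch below, \(D_1\) is built to be surjective and \(D_2\) is scaled so that \(R_0 = D_1D_1^* - D_2D_2^* \gg 0\) (both constructions are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

D1 = np.hstack([2.0 * np.eye(2), 0.5 * rng.standard_normal((2, 2))])  # surjective
D2 = 0.5 * rng.standard_normal((2, 3))     # small: R0 = D1 D1^* - D2 D2^* >> 0

R0 = D1 @ D1.T - D2 @ D2.T
D1dag = D1.T @ np.linalg.inv(D1 @ D1.T)    # D1^dag = D1^*(D1 D1^*)^{-1}

# Identity (5.7): D1^* R0^{-1} D2 (I + D2^* R0^{-1} D2)^{-1} = D1^dag D2.
lhs = D1.T @ np.linalg.solve(R0, D2) @ np.linalg.inv(
    np.eye(3) + D2.T @ np.linalg.solve(R0, D2))
print(np.allclose(lhs, D1dag @ D2))        # True
```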

Now that we have a formula for \(\mathbb {B}\), notice that

$$\begin{aligned} \left( T_K^* W_0 \mathbb {B}\right) (t) = D_2^* W_0(t) \mathbb {B} + \int _t^\infty B_2^* e^{A^*(\tau -t)}C^* C_0e^{A_0 \tau } \mathbb {B} d\tau . \end{aligned}$$

Further notice

$$\begin{aligned}&\int _t^\infty B_2^* e^{A^*(\tau -t)}C^* C_0e^{A_0 \tau } \mathbb {B} d\tau = \int _t^\infty B_2^* e^{A^*(\tau -t)}C^* C_0e^{A_0 (\tau -t)} e^{A_0 t} \mathbb {B} d\tau \\&\quad = \int _0^\infty B_2^* e^{A^*\xi }C^* C_0e^{A_0 \xi } e^{A_0 t} \mathbb {B} d\xi = B_2^*W_{obs}^* W_0 e^{A_0 t} \mathbb {B} = B_2^*Q e^{A_0 t} \mathbb {B}. \end{aligned}$$

For the last equality consult the formulas (13.16) and (13.17) in Sect. 13. Thus we have

$$\begin{aligned} \left( T_K^* W_0 \mathbb {B}\right) (t) = D_2^* W_0(t) \mathbb {B} + B_2^*Q e^{A_0 t} \mathbb {B} \quad ( t \ge 0 ). \end{aligned}$$
(5.9)

This with (5.5) and (5.4) yields

$$\begin{aligned} (I-\Lambda ^*\Lambda )^{-1} \delta&= I \delta + T_K^* \Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K \delta \\&= I \delta + T_K^* \Big (\delta R_0^{-1} D_2 + W_0 \mathbb {B}\Big )\\&= \big ( I + D_2^* R_0^{-1} D_2\big ) \delta + \Big (D_2^* C_0 + B_2^* Q\Big ) e^{A_0 t} \mathbb {B}. \end{aligned}$$

By taking the Laplace transform of the previous result, we arrive at the desired state space realization for \(V(s) = \big ({\mathfrak {L}} (I-\Lambda ^*\Lambda )^{-1} \delta \big )(s)\) in Eq. (5.3) of Lemma 5.1, that is,

$$\begin{aligned} V(s)= & {} (I + D_2^* R_0^{-1}D_2) + \left( D_2^* C_0 + B_2^*Q \right) \big (sI - A_0\big )^{-1}\mathbb {B}. \end{aligned}$$
(5.10)

This completes the proof of Lemma 5.1. \(\square \)

6 A State Space Realization for U(s)

The following result provides a state space formula for the function U(s).

Lemma 6.1

Assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, that items (i) and (ii) of Theorem 3.1 hold. Consider the function U(s) defined by

$$\begin{aligned} U(s) = \big ({\mathfrak {L}} \Lambda \left( I - \Lambda ^* \Lambda \right) ^{-1} \delta \big )(s). \end{aligned}$$
(6.1)

Then a state space realization for U(s) is given by

$$\begin{aligned} U(s)= & {} D_1^* R_0^{-1} D_2 + (D_1^* C_0 + B_1^*Q)(sI - A_0)^{-1}\mathbb {B}\nonumber \\ \mathbb {B}= & {} \big (I - \Delta Q\big )^{-1} \Big ( B_2 - B_1 D_1^\dag D_2\Big ) \Big ( I+ D_2^*R_0^{-1} D_2\Big ). \end{aligned}$$
(6.2)

Because \(A_0\) is stable, U is a function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\).

Once we have established our state space realizations for U(s) and V(s), we will directly verify that \(U(s)V(s)^{-1}\) is a stable rational solution to our Leech problem, that is,

$$\begin{aligned} X(s) = U(s)V(s)^{-1} \quad \hbox {and}\quad G(s) X(s) = K(s) \quad \hbox {and}\quad \Vert X\Vert _\infty < 1. \end{aligned}$$
(6.3)

Proof of Lemma 6.1

By assumption \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, conditions (i) and (ii) of Theorem 3.1 hold, or equivalently, \(\Lambda \) is a strict contraction. By employing (5.4), we have

$$\begin{aligned}&\Lambda (I- \Lambda ^* \Lambda )^{-1} = \Lambda + \Lambda T_K^* \Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K\\&\quad = T_G^*\big (T_GT_G^*\big )^{-1}T_K + T_G^* \big ( T_G T_G^*\big )^{-1} T_K T_K^*\Big ( T_G T_G^* - T_K T_K^* \Big )^{-1} T_K\\&\quad = T_G^* \big ( T_G T_G^*\big )^{-1} \Big ( I+ T_K T_K^* \big ( T_G T_G^* - T_K T_K^* \big )^{-1} \Big ) T_K \\&\quad =T_G^*\big (T_GT_G^*\big )^{-1} \Big (T_G T_G^* - T_K T_K^*+T_K T_K^*\Big )\big (T_G T_G^* - T_K T_K^* \big )^{-1}T_K\\&\quad =T_G^*\big (T_G T_G^* - T_K T_K^* \big )^{-1}T_K. \end{aligned}$$

In other words,

$$\begin{aligned} \Lambda (I- \Lambda ^* \Lambda )^{-1} = T_G^*\big (T_G T_G^* - T_K T_K^* \big )^{-1}T_K. \end{aligned}$$
(6.4)
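The identity (6.4) is purely algebraic, so it too can be sanity-checked with matrices in place of the Wiener–Hopf operators (a toy finite-dimensional analogue; all sizes and scalings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 5, 3

Tg = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # Tg Tg^* invertible
Tk = 0.3 * rng.standard_normal((n, m))                    # Tg Tg^* - Tk Tk^* >> 0

GG = Tg @ Tg.T
Lam = Tg.T @ np.linalg.solve(GG, Tk)

# Identity (6.4): Lambda (I - Lambda^* Lambda)^{-1} = Tg^*(Tg Tg^* - Tk Tk^*)^{-1} Tk.
lhs = Lam @ np.linalg.inv(np.eye(m) - Lam.T @ Lam)
rhs = Tg.T @ np.linalg.solve(GG - Tk @ Tk.T, Tk)
print(np.allclose(lhs, rhs))  # True
```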

By consulting (5.5), we have

$$\begin{aligned} \Lambda (I- \Lambda ^* \Lambda )^{-1} \delta= & {} T_G^*\Big (T_G T_G^* - T_K T_K^* \Big )^{-1} T_K \delta \nonumber \\= & {} T_G^*\Big (\delta R_0^{-1} D_2 + W_0 \mathbb {B}\Big )\nonumber \\= & {} D_1^* R_0^{-1} D_2 \delta + T_G^*W_0 \mathbb {B}. \end{aligned}$$
(6.5)

The last equality follows by using the properties of the Dirac delta function in Sect. 14 “(Appendix 2)”. Replacing K with G in (5.9), we obtain

$$\begin{aligned} \left( T_G^* W_0 \mathbb {B}\right) (t) = \Big (D_1^* C_0 + B_1^* Q\Big ) e^{A_0 t} \mathbb {B}. \end{aligned}$$
(6.6)

By taking the Laplace transform with (6.1), we have

$$\begin{aligned} U(s)= & {} D_1^* R_0^{-1}D_2 + C_u \big (sI - A_0\big )^{-1}\mathbb {B}\nonumber \\ C_u= & {} D_1^* C_0 + B_1^*Q. \end{aligned}$$
(6.7)

Here \(U(s) = \left( {\mathfrak {L}}\Lambda (I - \Lambda ^* \Lambda )^{-1}\delta \right) (s)\). This yields the state space realization for U(s) in Eq. (6.2) above, and completes the proof of Lemma 6.1. \(\square \)

7 A State Space Realization for \(V(s)^{-1}\)

To compute our solution \(X(s) = U(s)V(s)^{-1}\) to the Leech problem, we need to take the inverse of V(s). Recall that

$$\begin{aligned} V(s)= & {} D_v + C_v(s I - A_0)^{-1}\mathbb {B}\nonumber \\ D_v= & {} (I + D_2^* R_0^{-1}D_2)\nonumber \\ C_v= & {} D_2^* C_0 + B_2^*Q\nonumber \\ \mathbb {B}= & {} \big (I - \Delta Q\big )^{-1} \Big ( B_2 - B_1 D_1^\dag D_2\Big )D_v\nonumber \\ \Xi= & {} \big (I - \Delta Q\big )^{-1}. \end{aligned}$$
(7.1)

Using standard state space techniques,

$$\begin{aligned} V(s)^{-1} = D_v^{-1} - D_v^{-1}C_v (sI - \mathbb {A})^{-1}\mathbb {B}D_v^{-1}. \end{aligned}$$
(7.2)

The “feedback operator” \(\mathbb {A}\) is defined by

$$\begin{aligned} \mathbb {A} = A_0 - \mathbb {B}D_v^{-1}C_v = A_0 - \Xi \Big ( B_2 - B_1 D_1^\dag D_2\Big )C_v. \end{aligned}$$
(7.3)
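The formulas (7.2) and (7.3) are the standard state space inversion of a realization with invertible feedthrough. A minimal numerical sketch (with arbitrary illustrative matrices standing in for the data of the paper) checks \(V(s)V(s)^{-1} = I\) at a test point:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 2

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = 0.3 * rng.standard_normal((m, n))
D = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # invertible feedthrough

Dinv = np.linalg.inv(D)
Afb = A - B @ Dinv @ C                              # the "feedback operator"

s = 10.0 + 1.0j                                     # a point where both resolvents exist
V = D + C @ np.linalg.solve(s * np.eye(n) - A, B)
Vinv = Dinv - Dinv @ C @ np.linalg.solve(s * np.eye(n) - Afb, B) @ Dinv

print(np.allclose(V @ Vinv, np.eye(m)))             # True
```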

The following result expresses \(V(s)^{-1}\) in a slightly different form.

Lemma 7.1

Assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, that items (i) and (ii) of Theorem 3.1 hold. Consider the function V(s) in (7.1), or equivalently, \(V(s)=\big ({\mathfrak {L}} \left( I - \Lambda ^* \Lambda \right) ^{-1} \delta \big )(s)\). Then a state space realization for \(V(s)^{-1}\) is given by

$$\begin{aligned} V(s)^{-1}= & {} D_v^{-1} - D_v^{-1} C_v (I- \Delta Q)^{-1} \big (sI - {\mathbf {A}}\big )^{-1}\big ( B_2 - B_1 D_1^\dag D_2 \big ), \nonumber \\ {\mathbf {A}}= & {} \Xi ^{-1} \mathbb {A} \Xi = A - B_1 D_1^\dag C - B_1\big (I- D_1^\dag D_1\big ) B_1^*Q \Xi . \end{aligned}$$
(7.4)

Moreover, the operator \({\mathbf {A}}\) is stable. In particular, V(s) is an invertible outer function, that is, both V(s) and its inverse \(V(s)^{-1}\) are functions in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\).

Proof of (7.4)

Let us derive the formula for \({\mathbf {A}}\) in (7.4). Later in Sect. 9, we will show that \({\mathbf {A}}\) is stable. Recall that \(\Delta \) is the solution to the Lyapunov equation

$$\begin{aligned} A \Delta + \Delta A^* +B_1 B_1^* - B_2B_2^* =0 \end{aligned}$$

and that Q is the stabilizing solution of the Riccati Eq. (2.9); see also (13.14) in Sect. 13. To this end, let us first compute

$$\begin{aligned} \Xi ^{-1} A - A\Xi ^{-1}&= \big (I - \Delta Q\big )A - A\big (I - \Delta Q\big )\\&= - \Delta Q A + A \Delta Q\\&= \Delta A^* Q + \Delta C_0^* R_0 C_0 - \Delta A^* Q - B_1 B_1^* Q + B_2 B_2^* Q\\&=\Delta C_0^* R_0 C_0 - B_1 B_1^* Q + B_2 B_2^* Q. \end{aligned}$$

In other words,

$$\begin{aligned} \Xi ^{-1} A - A\Xi ^{-1} = \Delta C_0^* R_0 C_0 - B_1 B_1^* Q + B_2 B_2^* Q. \end{aligned}$$
(7.5)

Using this we have

$$\begin{aligned} \Xi ^{-1} A_0&= \Xi ^{-1} A - \Xi ^{-1} \Gamma C_0\\&= A \Xi ^{-1} +\Delta C_0^* R_0 C_0 - B_1 B_1^* Q + B_2 B_2^* Q -\Xi ^{-1} \Gamma C_0\\&= A \Xi ^{-1} + \Big (\Delta C_0^* R_0 - \Gamma + \Delta Q \Gamma \Big ) C_0 - B_1 B_1^* Q + B_2 B_2^* Q . \end{aligned}$$

Furthermore, using (2.10) (see also (13.9) in Sect. 13 “(Appendix 1)”) and (13.4) yields the following

$$\begin{aligned} \Delta C_0^* R_0 - \Gamma + \Delta Q \Gamma&= \Delta \big (C^* -Q \Gamma \big )- \Gamma + \Delta Q \Gamma = \Delta C^* - \Gamma \\&= - B_1D_1^* + B_2D_2^* . \end{aligned}$$

Thus,

$$\begin{aligned} \quad \Xi ^{-1} A_0 = A \Xi ^{-1} - \big (B_1D_1^* - B_2D_2^*\big ) C_0 - \big (B_1 B_1^* - B_2 B_2^*\big ) Q. \end{aligned}$$
(7.6)

Notice that

$$\begin{aligned} \Xi ^{-1} A_0 = A \Xi ^{-1} - B_1 \big ( D_1^* C_0 + B_1^* Q \big ) + B_2 \big ( D_2^* C_0 + B_2^* Q \big ). \end{aligned}$$

By applying the definitions of \(C_u\) and \(C_v\) in (6.7) and (7.1), we have

$$\begin{aligned} \Xi ^{-1} A_0 = A \Xi ^{-1} - B_1C_u + B_2C_v. \end{aligned}$$
(7.7)

Using this we obtain from (7.3) that

$$\begin{aligned} \Xi ^{-1} \mathbb {A}&= \Xi ^{-1} A_0 - \Big ( B_2 - B_1 D_1^\dag D_2\Big )C_v\\&= A \Xi ^{-1} - B_1 C_u + B_2C_v- \Big ( B_2 - B_1 D_1^\dag D_2\Big )C_v\\&= A \Xi ^{-1} - B_1 C_u + B_1 D_1^\dag D_2 C_v. \end{aligned}$$

In other words,

$$\begin{aligned} \Xi ^{-1} \mathbb {A} = A \Xi ^{-1} - B_1\Big (C_u - D_1^\dag D_2 C_v \Big ). \end{aligned}$$
(7.8)

A formula for \(C_u-D_1^\dag D_2 C_v\). To simplify the formula involving \(\mathbb {A}\) in (7.8), we need to work on \(C_u-D_1^\dag D_2 C_v\). To this end, notice that

$$\begin{aligned}&C_u-D_1^\dag D_2 C_v = D_1^*C_0 + B_1^*Q - D_1^\dag D_2\big (D_2^*C_0 + B_2^*Q\big )\\&\quad = D_1^*\big (I - (D_1D_1^*)^{-1} D_2D_2^*\big )C_0 + \big (B_1^* -D_1^\dag D_2 B_2^*\big )Q. \end{aligned}$$

To simplify the last expression, observe that

$$\begin{aligned}&D_1^*\big (I - (D_1D_1^*)^{-1} D_2D_2^*\big )C_0\\&\quad = D_1^*\big (I - (D_1D_1^*)^{-1} D_2D_2^*\big )R_0^{-1}\big (C - \Gamma ^*Q\big )\\&\quad = D_1^*(D_1D_1^*)^{-1}\big (D_1D_1^* - D_2D_2^*\big )R_0^{-1}\big (C - \Gamma ^*Q\big )\\&\quad = D_1^\dag \big (C - \Gamma ^*Q\big ). \end{aligned}$$

Using this in our previous formula, we have

$$\begin{aligned}&C_u-D_1^\dag D_2 C_v = D_1^\dag \big (C - \Gamma ^*Q\big ) + \big (B_1^* -D_1^\dag D_2 B_2^*\big )Q \\&\quad = D_1^\dag C + B_1^*Q - D_1^\dag \big (\Gamma ^* + D_2 B_2^*\big )Q\\&\quad = D_1^\dag C + B_1^*Q- D_1^\dag \big (D_1 B_1^* + C\Delta \big )Q\\&\quad = D_1^\dag C\Big (I - \Delta Q\Big ) + \big (I - D_1^\dag D_1\big ) B_1^*Q. \end{aligned}$$

In other words,

$$\begin{aligned} C_u-D_1^\dag D_2 C_v = D_1^\dag C\Big (I - \Delta Q\Big ) + \big (I - D_1^\dag D_1\big ) B_1^*Q. \end{aligned}$$
(7.9)

Using \(\Xi = \big (I - \Delta Q\big )^{-1}\) in (7.8), we obtain the result that we have been looking for, that is,

$$\begin{aligned} \Xi ^{-1} \mathbb {A} = A \Xi ^{-1} - B_1 D_1^\dag C \Xi ^{-1} - B_1\big (I- D_1^\dag D_1\big ) B_1^*Q. \end{aligned}$$
(7.10)

Multiplying by \(\Xi \) on the right, yields the following formula for \({\mathbf {A}}\) in (7.4):

$$\begin{aligned} {\mathbf {A}} = \Xi ^{-1} \mathbb {A} \Xi = A - B_1 D_1^\dag C - B_1\big (I- D_1^\dag D_1\big ) B_1^*Q \Xi . \end{aligned}$$
(7.11)

To complete the proof of Lemma 7.1, it remains to show that \(\mathbb {A}\) is stable. This will be proved in Sect. 9. \(\square \)

Because \({\mathbf {A}}\) is similar to \(\mathbb {A}\), it follows that the operator \({\mathbf {A}}\) is also stable. In particular, \(V(s)^{-1}\) is a function in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\). Using the state space formula for \(V(s)^{-1}\) in (7.2), with \({\mathbf {A}}\) defined in (7.11), we arrive at the state space formula for \(V(s)^{-1} \) in (7.4).

8 The State Space Realization for \(X(s) = U(s)V(s)^{-1}\)

We are now ready to compute \(X(s) = U(s)V(s)^{-1}\), which will turn out to be our solution to the Leech problem. For convenience recall that V and U are given by

$$\begin{aligned} U(s)= & {} D_1^* R_0^{-1}D_2 + C_u \big (sI - A_0\big )^{-1}\mathbb {B}\nonumber \\ V(s)= & {} D_v + C_v(s I - A_0)^{-1}\mathbb {B}\nonumber \\ C_u= & {} D_1^* C_0 + B_1^*Q\nonumber \\ C_v= & {} D_2^* C_0 + B_2^*Q\nonumber \\ D_v= & {} (I + D_2^* R_0^{-1}D_2)\nonumber \\ \mathbb {B}= & {} \big (I - \Delta Q\big )^{-1} \Big ( B_2 - B_1 D_1^\dag D_2\Big )D_v. \end{aligned}$$
(8.1)

Proposition 8.1

Let U(s) and V(s) be given by the formulas in (8.1), and set \(X(s) = U(s)V(s)^{-1}\). Then a state space realization for X is given by

$$\begin{aligned} X(s)= & {} D_1^\dag D_2 + C_x\big (sI - {\mathbf {A}}\big )^{-1}\big ( B_2 - B_1 D_1^\dag D_2\big )\nonumber \\&\text{ where } \end{aligned}$$
(8.2)
$$\begin{aligned} C_x= & {} D_1^\dag C + \big (I - D_1^\dag D_1\big ) B_1^*Q \Xi = \big (C_u-D_1^\dag D_2 C_v\big )\Xi \end{aligned}$$
(8.3)
$$\begin{aligned} {\mathbf {A}}= & {} A- B_1 C_x. \end{aligned}$$
(8.4)

Finally, the operator \({\mathbf {A}}\) is stable. In particular, X(s) is a rational function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\).

This establishes the formula for X(s) in (3.5) in Theorem 3.2. In Sect. 11 we will show that \(\Vert X\Vert _\infty <1\).

Derivation of X(s) in Proposition 8.1. To compute \(X(s) = U(s)V(s)^{-1}\), first notice that

$$\begin{aligned} D_1^* R_0^{-1} D_2 D_v^{-1}&= D_1^* R_0^{-1} D_2\big (I + D_2^* R_0^{-1}D_2\big )^{-1}\\&= D_1^* \big (I + R_0^{-1} D_2 D_2^*\big )^{-1} R_0^{-1}D_2 = D_1^* \big (R_0 + D_2 D_2^*\big )^{-1}D_2\\&= D_1^* \big (D_1 D_1^*\big )^{-1}D_2 = D_1^\dag D_2. \end{aligned}$$

In other words, the constant term \(X_\infty \) of X(s) is

$$\begin{aligned} D_1^* R_0^{-1} D_2 D_v^{-1} = D_1^\dag D_2. \end{aligned}$$
(8.5)

By employing standard state space techniques, we obtain

$$\begin{aligned} V(s)^{-1} = D_v^{-1} - D_v^{-1}C_v\big (sI - \mathbb {A})^{-1}\mathbb {B}D_v^{-1}. \end{aligned}$$
(8.6)

This with the definition of \(\mathbb {A} = A_0-\mathbb {B} D_v^{-1} C_v\) in (7.3) yields

$$\begin{aligned} X(s)&= \Big (D_1^* R_0^{-1} D_2 + C_u\big (s I - A_0\big )^{-1}\mathbb {B} \Big ) \Big (D_v^{-1} - D_v^{-1}C_v\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}\Big )\\&= D_1^\dag D_2 + C_u \big (s I - A_0)^{-1}\mathbb {B}D_v^{-1} - D_1^\dag D_2 C_v\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1} \\&\quad - C_u\big (s I - A_0\big )^{-1}\mathbb {B}D_v^{-1}C_v \big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}\\&= D_1^\dag D_2 - D_1^\dag D_2 C_v\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1} \\&\quad + C_u \big (s I - A_0\big )^{-1}\Big ((sI - \mathbb {A}) - \mathbb {B}D_v^{-1}C_v\Big )\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}\\&= D_1^\dag D_2 + \big (C_u- D_1^\dag D_2 C_v\big )\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}. \end{aligned}$$

Recall that \(\Xi = (I- \Delta Q)^{-1}\). Using this we have

$$\begin{aligned} X(s)&= D_1^\dag D_2 + \big (C_u- D_1^\dag D_2 C_v\big )\big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}\\&= D_1^\dag D_2 + \Big (D_1^\dag C + \big (I - D_1^\dag D_1\big ) B_1^*Q \Xi \Big ) \Xi ^{-1} \big (sI - \mathbb {A}\big )^{-1}\mathbb {B}D_v^{-1}\\&= D_1^\dag D_2 + \Big (D_1^\dag C + \big (I - D_1^\dag D_1\big ) B_1^*Q \Xi \Big ) \big (sI - {\mathbf {A}}\big )^{-1}\Xi ^{-1}\mathbb {B}D_v^{-1}; \end{aligned}$$

see (7.9) and (7.11). This yields the formula for X(s) in (8.2). In Sect. 9 we will prove that \({\mathbf {A}}\) is stable.
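The cancellation argument above is the generic rule for a product \(U(s)V(s)^{-1}\) of two realizations sharing the same pair \((A_0, \mathbb {B})\). A finite-dimensional sketch (arbitrary illustrative data; here \(D_u\) plays the role of the constant term \(D_1^*R_0^{-1}D_2\)) checks the resulting realization pointwise:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 4, 2, 3

A0 = rng.standard_normal((n, n))
B  = rng.standard_normal((n, m))
Cu = 0.3 * rng.standard_normal((p, n))
Cv = 0.3 * rng.standard_normal((m, n))
Du = rng.standard_normal((p, m))
Dv = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # invertible

def U(s): return Du + Cu @ np.linalg.solve(s * np.eye(n) - A0, B)
def V(s): return Dv + Cv @ np.linalg.solve(s * np.eye(n) - A0, B)

Dvinv = np.linalg.inv(Dv)
Afb = A0 - B @ Dvinv @ Cv          # feedback operator, as in (7.3)
Xinf = Du @ Dvinv                  # constant term of X(s)

def X(s):
    return Xinf + (Cu - Xinf @ Cv) @ np.linalg.solve(s * np.eye(n) - Afb, B) @ Dvinv

s = 10.0 + 1.0j
print(np.allclose(X(s), U(s) @ np.linalg.inv(V(s))))  # True
```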

9 Proof of the Stability of \(\mathbb {A}\) using a Lyapunov Equation

In this section, we will directly show that \(V^{-1}\) is analytic on the closed right half plane. To accomplish this we will prove that \(\mathbb {A}\) is stable. Since \({\mathbf {A}}\) is similar to \(\mathbb {A}\), the operator \({\mathbf {A}}\) is also stable. This guarantees that \(V(s)^{-1}\) is a function in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\); see (8.6). Since \(A_0\) is stable, V(s) is also a function in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\); see (8.1). In particular, V(s) is an invertible outer function. Because U(s) is in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\), the function \(X(s) =U(s)V(s)^{-1}\) is a function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\). In Sect. 11 we will show that \(\Vert X\Vert _\infty <1\).

Recall

$$\begin{aligned} V(s)= & {} D_v + C_v(s I - A_0)^{-1}\mathbb {B} \nonumber \\ V(s)^{-1}= & {} D_v^{-1} - D_v^{-1} C_v (s I - \mathbb {A} )^{-1} \mathbb {B} D_v^{-1} , \end{aligned}$$
(9.1)

where

$$\begin{aligned} \mathbb {A}= & {} A_0 - \mathbb {B} D_v^{-1} C_v = A_0 - \Xi \Big ( B_2 - B_1 D_1^\dag D_2\Big )C_v, \nonumber \\ \mathbb {B}= & {} \Xi \Big ( B_2 - B_1 D_1^\dag D_2 \Big ) D_v,\nonumber \\ \Xi= & {} \big (I - \Delta Q \big )^{-1} . \end{aligned}$$
(9.2)

Throughout this section Q is the stabilizing solution for the algebraic Riccati Eq. (2.9).

In order to show that \(\mathbb {A}\) is stable, we will first establish the following lemma.

Lemma 9.1

Let \(\mathbb {P}\) be the strictly positive operator defined by

$$\begin{aligned} \mathbb {P} = Q- Q\Delta Q = Q \Xi ^{-1}. \end{aligned}$$
(9.3)

Then \( \mathbb {A} \) satisfies the Lyapunov equation

$$\begin{aligned} \mathbb {A}^*\mathbb {P} +\mathbb {P}\mathbb {A} + F^* F = 0 \end{aligned}$$
(9.4)

where the operator F, the strict contraction Z and the isometry E are given by

$$\begin{aligned} F= & {} \begin{bmatrix} (I - Z^* Z )^{{1/2}} C_v \\ (I - E E^* )^{{1/2}} C_u \\ Z C_v - E^* C_u \end{bmatrix}, \end{aligned}$$
(9.5)
$$\begin{aligned} Z= & {} \big ( D_1 D_1^* \big )^{-{1/2}} D_2 \quad \hbox {and}\quad E = D_1^* \big ( D_1 D_1^* \big )^{-{1/2}}. \end{aligned}$$
(9.6)

Proof

Because \(Q^{-1} - \Delta \) is strictly positive (see item (ii) in Theorem 3.1), the operator

$$\begin{aligned} \mathbb {P} = Q - Q \Delta Q = Q\left( Q^{-1} - \Delta \right) Q \end{aligned}$$

is also strictly positive. The first step is to prove that

$$\begin{aligned} C_v^* C_v - C_u^*C_u = A_0^*(Q - Q\Delta Q) + (Q - Q\Delta Q)A_0. \end{aligned}$$
(9.7)

Using the formulas for \(C_u\) and \(C_v\) in (8.1) and the definition of \( \Gamma \) in (2.6) we have that

$$\begin{aligned} D_1 C_u - D_2 C_v&= R_0 C_0 + D_1 B_1^* Q - D_2 B_2^* Q \\&= C - \Gamma ^* Q + \Gamma ^* Q - C \Delta Q = C ( I - \Delta Q ) = C \Xi ^{-1} . \end{aligned}$$

Equation (7.7) states that

$$\begin{aligned} \Xi ^{-1} A_0 = A \Xi ^{-1} - B_1 C_u + B_2 C_v. \end{aligned}$$

Thus

$$\begin{aligned} (Q - Q\Delta Q) A_0&= Q \Xi ^{-1} A_0 = Q A \Xi ^{-1} - Q B_1C_u + Q B_2 C_v\\&= Q A \Xi ^{-1} - C_u^* C_u + C_0^* D_1 C_u + C_v^*C_v - C_0^*D_2 C_v\\&=Q A \Xi ^{-1} + C_v^*C_v - C_u^*C_u + C_0^*\big (D_1 C_u -D_2 C_v\big )\\&=Q A \Xi ^{-1} + C_v^*C_v - C_u^*C_u + C_0^* C \Xi ^{-1}. \end{aligned}$$

Recall that \(Q = W_{obs}^*W_0\). This identity is a consequence of the algebraic Riccati Eq. (13.14); see the definition of \(W_0\) in (2.11), the Lyapunov Eqs. (13.20) and (13.23) with Theorem 13.4 in Sect. 13 “(Appendix 1)”. Because Q is self-adjoint, \(Q = W_0^* W_{obs}\). This yields the Lyapunov equation \(Q A + A_0^*Q + C_0^*C =0\). Using this equation, we obtain

$$\begin{aligned} (Q - Q\Delta Q)A_0&= Q A \Xi ^{-1} + C_v^*C_v - C_u^*C_u - \big (Q A +A_0^*Q\big ) \Xi ^{-1}\\&= C_v^*C_v - C_u^*C_u - A_0^*Q \Xi ^{-1}\\&= C_v^*C_v - C_u^*C_u - A_0^* (Q - Q\Delta Q). \end{aligned}$$

Therefore (9.7) holds.

Recall [see (9.3)] that \(\mathbb {P} = Q- Q\Delta Q = Q \Xi ^{-1}\), and observe that

$$\begin{aligned} \mathbb {P}\mathbb {A}&= \mathbb {P} (A_0 - \mathbb {B}D_v^{-1}C_v)\\&=-A_0^* \mathbb {P} + C_v^*C_v - C_u^*C_u - \mathbb {P}\Xi \Big ( B_2 - B_1 D_1^\dag D_2\Big ) C_v\\&=-A_0^* \mathbb {P} + C_v^*C_v - C_u^*C_u - \Big ( Q B_2 - Q B_1 D_1^\dag D_2\Big ) C_v \\&=-A_0^* \mathbb {P} + C_v^*C_v - C_u^*C_u - \Big ( C_v^* - C_0^* D_2 - Q B_1 D_1^\dag D_2\Big ) C_v. \end{aligned}$$

For the last equality in this calculation, we used the formula for \(C_v\) in (8.1). So we have

$$\begin{aligned} \mathbb {P}\mathbb {A} = -A_0^* \mathbb {P} - C_u^* C_u + \Big ( C_0^* D_2 + Q B_1 D_1^\dag D_2\Big ) C_v. \end{aligned}$$
(9.8)

By taking the adjoint we see that

$$\begin{aligned} \mathbb {A}^* \mathbb {P} = - \mathbb {P} A_0 - C_u^* C_u + C_v^* \Big ( C_0^* D_2+ Q B_1 D_1^\dag D_2\Big )^*. \end{aligned}$$
(9.9)

Note that (9.7) gives

$$\begin{aligned} 0 = \mathbb {P} A_0 + A_0^* \mathbb {P} - C_v^* C_v + C_u^* C_u . \end{aligned}$$
(9.10)

Adding the equalities (9.8), (9.9) and (9.10) yields

$$\begin{aligned} \mathbb {A}^*\mathbb {P} +\mathbb {P}\mathbb {A}&= - C_v^* C_v - C_u^* C_u + \Big ( C_0^* D_2 + Q B_1 D_1^\dag D_2\Big ) C_v +\\&\quad + C_v^* \Big ( C_0^* D_2 + Q B_1 D_1^\dag D_2\Big )^*. \end{aligned}$$

Now observe that, using the formula for \(C_u\) in (8.1) with \(D_1 D_1^\dag =I\),

$$\begin{aligned} C_0^* D_2 + Q B_1 D_1^\dag D_2 = C_0^* D_2 + (C_u^* - C_0^*D_1)D_1^\dag D_2 = C_u^*D_1^\dag D_2. \end{aligned}$$

This yields the Lyapunov equation that we have been looking for, that is,

$$\begin{aligned} \quad \mathbb {A}^*\mathbb {P} +\mathbb {P}\mathbb {A} + C_v^* C_v + C_u^* C_u = C_u^*D_1^\dag D_2 C_v + C_v^*\big (D_1^\dag D_2\big )^* C_u. \end{aligned}$$
(9.11)

Next we will transform (9.11) into (9.4). Recall that \(D_1D_1^* - D_2D_2^* = R_0 \gg 0\). Thus

$$\begin{aligned} I - \big (D_1 D_1^*\big )^{-{1/2}} D_2 D_2^*\big (D_1 D_1^* \big )^{-{1/2}} \gg 0. \end{aligned}$$

In other words, \(Z=\big (D_1D_1^*\big )^{-{1/2}}D_2\) is a strict contraction, that is, \(\Vert Z\Vert <1\). The operator E given by

$$\begin{aligned} E=D_1^*\big (D_1D_1^*\big )^{-{1/2}} \end{aligned}$$

is an isometry. Using \(D_1^\dag D_2 = E Z\), we see that \(D_1^\dag D_2\) is a strict contraction.
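The properties of Z and E used here are easy to confirm numerically; the sketch below (with an illustrative surjective \(D_1\) and a small \(D_2\)) checks that Z is a strict contraction, that E is an isometry, and that \(EZ = D_1^\dag D_2\):

```python
import numpy as np

rng = np.random.default_rng(6)

D1 = np.hstack([2.0 * np.eye(2), 0.5 * rng.standard_normal((2, 2))])  # surjective
D2 = 0.5 * rng.standard_normal((2, 3))     # R0 = D1 D1^* - D2 D2^* >> 0

M = D1 @ D1.T
w, S = np.linalg.eigh(M)
M_inv_half = S @ np.diag(w ** -0.5) @ S.T  # (D1 D1^*)^{-1/2}

Z = M_inv_half @ D2                        # Z = (D1 D1^*)^{-1/2} D2
E = D1.T @ M_inv_half                      # E = D1^*(D1 D1^*)^{-1/2}
D1dag = D1.T @ np.linalg.inv(M)

print(np.linalg.norm(Z, 2) < 1)            # True: Z is a strict contraction
print(np.allclose(E.T @ E, np.eye(2)))     # True: E is an isometry
print(np.allclose(E @ Z, D1dag @ D2))      # True: D1^dag D2 = E Z
```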

Notice that

$$\begin{aligned}&C_v^*C_v + C_u^*C_u - C_u^*E Z C_v - C_v^* Z^* E^* C_u \\&\quad = C_v^*(I- Z^* Z)C_v+ C_u^*(I- E E^*) C_u \\&\qquad + C_v^* Z^* ZC_v + C_u^*E E^* C_u -C_u^*E Z C_v - C_v^* Z^* E^* C_u\\&\quad = C_v^*(I- Z^* Z)C_v+ C_u^*(I- E E^*) C_u \\&\qquad + \big (ZC_v - E^*C_u\big )^*\big (ZC_v - E^*C_u\big ). \end{aligned}$$

So we can rewrite the Lyapunov Eq. (9.11) as

$$\begin{aligned}&\mathbb {A}^*\mathbb {P} +\mathbb {P}\mathbb {A} + C_v^*(I- Z^* Z)C_v + C_u^*(I- E E^*) C_u \nonumber \\&\quad + \big (ZC_v - E^*C_u\big )^*\big (ZC_v - E^*C_u\big ) =0, \end{aligned}$$
(9.12)

which can be rewritten as (9.4). \(\square \)

Proposition 9.2

The operator \( \mathbb {A} \) in (9.2) is stable.

Proof

Recall that \(\mathbb {A}\) satisfies the Lyapunov equation

$$\begin{aligned} \mathbb {A}^*\mathbb {P} +\mathbb {P}\mathbb {A} + F^*F =0, \end{aligned}$$
(9.13)

where \( \mathbb {P} \) and F are given by (9.3) and (9.5), respectively. Assume that \(\lambda \) is an eigenvalue for \(\mathbb {A}\) with eigenvector x, that is, \(\mathbb {A} x = \lambda x\). Using this in the Lyapunov Eq. (9.13), we have

$$\begin{aligned} 0 = (\mathbb {A}^*\mathbb {P}x,x) +(\mathbb {P}\mathbb {A}x,x) + (F^* Fx,x) = ({\overline{\lambda }} + \lambda )(\mathbb {P} x,x) + \Vert F x\Vert ^2. \end{aligned}$$

Notice that \(\mathbb {P} = Q\big (Q^{-1} - \Delta \big )Q\) is strictly positive. Hence \((\mathbb {P} x,x) > 0\) and

$$\begin{aligned} 2\Re (\lambda ) = -\frac{\Vert F x\Vert ^2}{(\mathbb {P} x,x)}. \end{aligned}$$

We will prove that \(\Re (\lambda )\) is nonzero. Assume that \( \Re (\lambda ) = 0 \); then \(F x =0\). So in particular \((I-Z^*Z)^{1/2}C_v x =0\). Because Z is strictly contractive, \((I-Z^*Z)^{1/2}\) is invertible, and thus \(C_v x=0\). Using this with the definition of \(\mathbb {A}\), we see that

$$\begin{aligned} \lambda x = \mathbb {A} x = \big (A_0 - \mathbb {B}D_v^{-1}C_v \big )x = A_0 x. \end{aligned}$$

Thus \(\lambda \) is an eigenvalue for the stable operator \(A_0\). But this means that \( \Re (\lambda ) < 0 \), which contradicts the assumption that \( \Re (\lambda ) = 0 \).

We conclude that \(\Re (\lambda ) <0\) and therefore \(\lambda \) is a stable eigenvalue for \(\mathbb {A}\). This proves that the operator \(\mathbb {A}\) is stable. \(\square \)
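The mechanism behind Proposition 9.2 can be illustrated numerically (this is a hedged sketch with randomly generated matrices, not the operators of the paper, and numpy is assumed available): solve the Lyapunov equation \(\mathbb {A}^*\mathbb {P} + \mathbb {P}\mathbb {A} + F^*F = 0\) by vectorization and verify the eigenvalue identity \(2\Re (\lambda )(\mathbb {P}x,x) = -\Vert Fx\Vert ^2\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
R = rng.standard_normal((n, n))
A = R - (np.max(np.linalg.eigvals(R).real) + 0.5) * np.eye(n)  # a stable A
F = rng.standard_normal((2, n))                                # generically observable pair (F, A)

# Solve A* P + P A + F* F = 0 via vec(A X B) = (B^T kron A) vec(X)
I = np.eye(n)
M = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(M, -(F.T @ F).flatten('F')).reshape((n, n), order='F')

assert np.allclose(A.T @ P + P @ A + F.T @ F, 0)       # Lyapunov equation holds
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P strictly positive

# 2 Re(lambda) (P x, x) = -||F x||^2 for each eigenpair of A
lams, X = np.linalg.eig(A)
for lam, x in zip(lams, X.T):
    lhs = 2 * lam.real * (x.conj() @ P @ x).real
    assert np.isclose(lhs, -np.linalg.norm(F @ x) ** 2)
```

Since \(\mathbb{P}\) is strictly positive, the identity forces \(\Re (\lambda ) \le 0\) for every eigenvalue, exactly as in the proof.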

Finally, since \(\mathbb {A}\) is similar to \({\mathbf {A}}\), the operator \({\mathbf {A}}\) is also stable.

10 A Direct Proof of \(G(s) X(s) = K(s)\)

So far we have derived a state space formula for X(s); see (3.5) in Theorem 3.2. Because \(\mathbb {A}\) is stable and similar to \({\mathbf {A}}\), the operator \({\mathbf {A}}\) is also stable, and X(s) is a rational function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\). In this section, we will directly prove that \(G(s)X(s) = K(s)\). For convenience recall that

$$\begin{aligned} X(s)= & {} D_1^\dag D_2 + C_x \big (s I - {\mathbf {A}}\big )^{-1} \big ( B_2 - B_1 D_1^\dag D_2\big )\nonumber \\ C_x= & {} D_1^\dag C + \big (I - D_1^\dag D_1\big ) B_1^*Q \Xi \nonumber \\ {\mathbf {A}}= & {} A - B_1 C_x \nonumber \\ \Xi= & {} \big (I - \Delta Q\big )^{-1} \quad \hbox {and}\quad D_1^\dag = D_1^*\big (D_1 D_1^*\big )^{-1}. \end{aligned}$$
(10.1)

Let \({\mathfrak {S}} = (sI-A)^{-1}\) and \(\Upsilon = (s I - {\mathbf {A}})^{-1}\). Then using \(D_1 C_x =C\) with \(B_1 C_x = A- {\mathbf {A}}\), we obtain

$$\begin{aligned} G X =&\, \Big (D_1 + C {\mathfrak {S}} B_1\Big ) \Big (D_1^\dag D_2 + C_x \Upsilon \big ( B_2 - B_1 D_1^\dag D_2\big )\Big )\\ =&\,D_2 + C{\mathfrak {S}} B_1 D_1^\dag D_2 + C\Upsilon \big ( B_2 - B_1 D_1^\dag D_2\big )\\&+C{\mathfrak {S}} B_1C_x \Upsilon \big ( B_2 - B_1 D_1^\dag D_2\big )\\ =&D_2 + C{\mathfrak {S}} B_1 D_1^\dag D_2 + C\Upsilon \big ( B_2 - B_1 D_1^\dag D_2\big ) \\&+ C{\mathfrak {S}} \Big (A - {\mathbf {A}}\Big ) \Upsilon \Big ( B_2 - B_1 D_1^\dag D_2\Big ). \end{aligned}$$

In other words,

$$\begin{aligned} G X&= D_2 + C {\mathfrak {S}} B_2 - C{\mathfrak {S}} \big (B_2- B_1 D_1^\dag D_2\big ) + C\Upsilon \Big ( B_2 - B_1 D_1^\dag D_2\Big )\\&\quad +C{\mathfrak {S}} \Big (A - {\mathbf {A}}\Big ) \Upsilon \Big ( B_2 - B_1 D_1^\dag D_2\Big ). \end{aligned}$$

Using \(K = D_2 + C {\mathfrak {S}} B_2\) with \(B_3= B_2- B_1 D_1^\dag D_2\), we have

$$\begin{aligned} G X&= K - C{\mathfrak {S}} B_3 + C\Upsilon B_3 + C{\mathfrak {S}} \big (A - {\mathbf {A}}\big )\Upsilon B_3\\&= K - C{\mathfrak {S}} B_3 +C\Upsilon B_3 + C{\mathfrak {S}} \big ( \Upsilon ^{-1} - {\mathfrak {S}}^{-1} \big )\Upsilon B_3\\&= K - C{\mathfrak {S}} B_3 +C\Upsilon B_3 + C{\mathfrak {S}} B_3 -C\Upsilon B_3 =K. \end{aligned}$$

Therefore we have

$$\begin{aligned} G(s) X(s) = K(s). \end{aligned}$$

So to show that X(s) is indeed a solution to our Leech problem, it remains to show that \(\Vert X\Vert _\infty \le 1\). In fact, we will show in the next section that \(\Vert X\Vert _\infty <1\).
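The computation above uses only the two cancellations \(D_1 C_x = C\) and \(B_1 C_x = A - {\mathbf {A}}\), so \(GX = K\) is an algebraic identity valid for any \(C_x\) satisfying \(D_1 C_x = C\). The following hedged numerical sketch checks this at one point with randomly generated matrices (the matrix W stands in for the term \(B_1^*Q\Xi\) of (10.1), which is annihilated by \(D_1\)); numpy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m, q = 4, 2, 3, 2          # state, dim Y, dim U, dim E
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, m)); B2 = rng.standard_normal((n, q))
C  = rng.standard_normal((p, n))
D1 = rng.standard_normal((p, m)); D2 = rng.standard_normal((p, q))
W  = rng.standard_normal((m, n))                       # hypothetical stand-in for B1* Q Xi

D1dag = D1.T @ np.linalg.inv(D1 @ D1.T)                # D1† = D1*(D1 D1*)^{-1}
Cx = D1dag @ C + (np.eye(m) - D1dag @ D1) @ W          # guarantees D1 Cx = C
Ab = A - B1 @ Cx                                       # the operator A-bold of (10.1)

s = 1.3 + 0.7j                                         # any point off both spectra
G = D1 + C @ np.linalg.solve(s * np.eye(n) - A, B1)
K = D2 + C @ np.linalg.solve(s * np.eye(n) - A, B2)
X = D1dag @ D2 + Cx @ np.linalg.solve(s * np.eye(n) - Ab, B2 - B1 @ D1dag @ D2)

assert np.allclose(D1 @ Cx, C)   # the key cancellation D1 Cx = C
assert np.allclose(G @ X, K)     # G(s) X(s) = K(s)
```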

11 The Outer Spectral Factor for \(I - {\widetilde{X}}(s) X(s)\)

In this section, we will show that \(\Theta (s) = D_v^{{1/2}}V(s)^{-1}\) is the invertible outer spectral factor for \(I - {\widetilde{X}}(s) X(s)\), that is, \({\widetilde{\Theta }}\Theta = I - {\widetilde{X}} X\). (Recall that for an operator valued \( H^\infty \) function we defined \( {\widetilde{F}}(s) = F(-{\overline{s}})^* \).) The state space representations of U, V, and X are given by

$$\begin{aligned} U(s)= & {} D_1^* R_0^{-1}D_2 + C_u \big (sI - A_0\big )^{-1}\mathbb {B}\nonumber \\ V(s)= & {} D_v + C_v(s I - A_0)^{-1}\mathbb {B}\nonumber \\ X(s)= & {} D_1^\dag D_2 + C_x\big (sI - {\mathbf {A}}\big )^{-1}\big ( B_2 - B_1 D_1^\dag D_2\big ) \end{aligned}$$
(11.1)

where

$$\begin{aligned} {\mathbf {A}}= & {} A- B_1 C_x , \nonumber \\ C_u= & {} D_1^* C_0 + B_1^*Q, \qquad C_v = D_2^* C_0 + B_2^*Q , \nonumber \\ C_x= & {} \big (C_u-D_1^\dag D_2 C_v\big )\Xi , \quad \text{ where } \quad \Xi = \big (I - \Delta Q\big )^{-1} , \nonumber \\ D_v= & {} (I + D_2^* R_0^{-1}D_2) , \nonumber \\ \mathbb {B}= & {} \Xi \Big ( B_2 - B_1 D_1^\dag D_2\Big )D_v. \end{aligned}$$
(11.2)

Because V(s) is an invertible outer function, \( \Theta (s) \) is also an invertible outer function, that is, both \(\Theta (s)\) and \(\Theta (s)^{-1}\) are functions in \(H^\infty ({\mathcal {E}},{\mathcal {E}})\). This sets the stage for the following result.

Proposition 11.1

Assume that \(T_G T_G^* - T_K T_K^*\) is strictly positive, or equivalently, that items (i) and (ii) of Theorem 3.1 hold. Consider the function V(s) in (11.1) and set \(\Theta (s) = D_v^{{1/2}}V(s)^{-1}\). Then \(\Theta \) admits a state space realization of the form

$$\begin{aligned} \Theta (s)= & {} D_v^{-{1/2}} - D_v^{-{1/2}}C_v\Xi \big (sI - {\mathbf {A}})^{-1} \big ( B_2 - B_1 D_1^\dag D_2\big )\nonumber \\ C_v \Xi= & {} (D_2^* C_0 + B_2^*Q) \Xi . \end{aligned}$$
(11.3)

Furthermore, \(\Theta (s)\) is the invertible outer spectral factor satisfying

$$\begin{aligned} {\widetilde{\Theta }}(s)\Theta (s) = I - {\widetilde{X}}(s)X(s). \end{aligned}$$
(11.4)

In particular, because \(\Theta (s)\) is an invertible outer function,

$$\begin{aligned} \Vert X\Vert _\infty <1. \end{aligned}$$
(11.5)

Proof

The state space realization for \(\Theta (s) = D_v^{{1/2}}V(s)^{-1}\) follows from the state space realization for \(V(s)^{-1}\) in (7.4).

To prove that (11.4) holds, set

$$\begin{aligned} \Psi (s)&= (s I - A_0)^{-1} \quad \hbox {and}\quad {\widetilde{\Psi }}(s) = - (s I + A_0^*)^{-1} = \Psi (-{\overline{s}})^*\\ {\mathfrak {B}}&= \mathbb {B}D_v^{-1} = \Xi \big ( B_2 - B_1 D_1^\dag D_2\big ). \end{aligned}$$

Using this notation, the state space formulas for U(s) and V(s) in (11.1), and the identity \( D_u D_v^{-1} = D_x = D_1^\dag D_2 \), we see that

$$\begin{aligned} V D_v^{-1} = I + C_v \Psi {\mathfrak {B}} , \quad \hbox {and}\quad U D_v^{-1} = D_1^\dag D_2+ C_u \Psi {\mathfrak {B}}. \end{aligned}$$
(11.6)

We claim that

$$\begin{aligned} D_v^{-1} = D_v^{-1} V(-{\overline{s}})^*V(s)D_v^{-1} - D_v^{-1} U(-{\overline{s}})^*U(s)D_v^{-1}. \end{aligned}$$
(11.7)

To verify this, notice that the formulas for V(s) and U(s) in (11.6) yield

$$\begin{aligned}&D_v^{-1} {\widetilde{V}} V D_v^{-1} - D_v^{-1} {\widetilde{U}} U D_v^{-1}\nonumber \\&\quad = \big (I + {\mathfrak {B}}^*{\widetilde{\Psi }} C_v^*\big ) \big (I + C_v \Psi {\mathfrak {B}}\big ) - \big (D_2^*D_1^{\dag *} + {\mathfrak {B}}^*{\widetilde{\Psi }} C_u^*\big ) \big (D_1^\dag D_2+ C_u \Psi {\mathfrak {B}}\big )\nonumber \\&\quad = I - D_2^*(D_1D_1^*)^{-1}D_2 + {\mathfrak {B}}^*{\widetilde{\Psi }}\big (C_v^*- C_u^* D_1^\dag D_2 \big ) + \big (C_v - D_2^*D_1^{\dag *} C_u \big )\Psi {\mathfrak {B}}+ {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^* C_v - C_u^*C_u\Big )\Psi {\mathfrak {B}}\nonumber \\&\quad =D_v^{-1} + {\mathfrak {B}}^*{\widetilde{\Psi }}\big (C_v^*- C_u^* D_1^\dag D_2 \big ) + \big (C_v - D_2^*D_1^{\dag *} C_u \big )\Psi {\mathfrak {B}}+ {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^* C_v - C_u^*C_u\Big )\Psi {\mathfrak {B}}. \end{aligned}$$
(11.8)

The last equality follows from

$$\begin{aligned} D_v^{-1} = I - D_2^*(D_1D_1^*)^{-1}D_2. \end{aligned}$$
(11.9)

To prove this equality, observe that

$$\begin{aligned} D_v^{-1}&= (I + D_2^*R_0^{-1}D_2)^{-1} = I - D_2^*R_0^{-1}D_2 (I + D_2^*R_0^{-1}D_2)^{-1}\\&=I - D_2^* (I + R_0^{-1}D_2 D_2^*)^{-1}R_0^{-1}D_2\\&=I - D_2^* (R_0 + D_2 D_2^*)^{-1} D_2 = I - D_2^*(D_1D_1^*)^{-1}D_2. \end{aligned}$$

This yields (11.9).
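The identity (11.9) is a standard push-through computation; a minimal numerical sketch with randomly generated matrices, where \(D_2\) is deliberately scaled (a hypothetical choice) so that \(R_0 = D_1D_1^* - D_2D_2^*\) stays strictly positive, and numpy is assumed available:

```python
import numpy as np

rng = np.random.default_rng(3)
D1 = rng.standard_normal((2, 3))                      # full row rank
M = rng.standard_normal((2, 2))
# scale D2 so that R0 = D1 D1* - D2 D2* is strictly positive
D2 = 0.5 * M / np.linalg.norm(M, 2) * np.sqrt(np.min(np.linalg.eigvalsh(D1 @ D1.T)))
R0 = D1 @ D1.T - D2 @ D2.T
assert np.all(np.linalg.eigvalsh(R0) > 0)             # R0 >> 0

Dv = np.eye(2) + D2.T @ np.linalg.solve(R0, D2)       # D_v = I + D2* R0^{-1} D2
assert np.allclose(np.linalg.inv(Dv),
                   np.eye(2) - D2.T @ np.linalg.solve(D1 @ D1.T, D2))  # (11.9)
```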

To complete the proof, it remains to show that the right hand side of (11.8) is equal to \( D_v^{-1} \), or equivalently,

$$\begin{aligned} 0={\mathfrak {B}}^*{\widetilde{\Psi }}\big (C_v^*- C_u^* D_1^\dag D_2 \big ) + \big (C_v - D_2^*D_1^{\dag *} C_u \big )\Psi {\mathfrak {B}} + {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^* C_v - C_u^*C_u\Big )\Psi {\mathfrak {B}}. \end{aligned}$$

By consulting the Lyapunov Eq. in (9.7), we obtain

$$\begin{aligned}&{\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^* C_v - C_u^*C_u\Big )\Psi {\mathfrak {B}} = {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (A_0^*(Q - Q\Delta Q) + (Q - Q\Delta Q)A_0\Big )\Psi {\mathfrak {B}}\\&\quad = {\mathfrak {B}}^*{\widetilde{\Psi }}\Big ( - {\widetilde{\Psi }}^{-1} (Q - Q\Delta Q) - (Q - Q\Delta Q) \Psi ^{-1} \Big )\Psi {\mathfrak {B}}\\&\quad = - {\mathfrak {B}}^*(Q - Q\Delta Q)\Psi {\mathfrak {B}} - {\mathfrak {B}}^*{\widetilde{\Psi }}(Q - Q\Delta Q){\mathfrak {B}}. \end{aligned}$$

Since \(Q - Q\Delta Q = Q \Xi ^{-1}\), we have

$$\begin{aligned} {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^* C_v - C_u^*C_u\Big )\Psi {\mathfrak {B}} = - {\mathfrak {B}}^*\Xi ^{-*}Q\Psi {\mathfrak {B}} - {\mathfrak {B}}^*{\widetilde{\Psi }}Q\Xi ^{-1}{\mathfrak {B}}. \end{aligned}$$

Moreover, we also have

$$\begin{aligned}&{\mathfrak {B}}^*{\widetilde{\Psi }}\big (C_v^*- C_u^* D_1^\dag D_2 \big ) - {\mathfrak {B}}^*{\widetilde{\Psi }}Q\Xi ^{-1}{\mathfrak {B}} = {\mathfrak {B}}^*{\widetilde{\Psi }}\Big (C_v^*- C_u^* D_1^\dag D_2 - Q\Xi ^{-1}{\mathfrak {B}}\Big ). \end{aligned}$$

Now observe that

$$\begin{aligned}&C_v^*- C_u^* D_1^\dag D_2 - Q\Xi ^{-1}{\mathfrak {B}}\\&\quad = C_0^*D_2+Q B_2 - C_0^*D_1 D_1^\dag D_2-Q B_1 D_1^\dag D_2 -Q \Xi ^{-1}\Xi \big ( B_2 - B_1 D_1^\dag D_2\big )\\&\quad = C_0^*D_2+Q B_2 - C_0^* D_2 -Q B_2 =0. \end{aligned}$$

Therefore

$$\begin{aligned} {\mathfrak {B}}^*{\widetilde{\Psi }}\big (C_v^*- C_u^* D_1^\dag D_2 \big ) - {\mathfrak {B}}^*{\widetilde{\Psi }}Q\Xi ^{-1}{\mathfrak {B}} =0. \end{aligned}$$

Likewise, taking adjoints we obtain

$$\begin{aligned} \big (C_v - D_2^* D_1^{\dag *}C_u \big )\Psi {\mathfrak {B}} - {\mathfrak {B}}^*\Xi ^{-*} Q \Psi {\mathfrak {B}} =0. \end{aligned}$$

This with (11.8) shows that

$$\begin{aligned} D_v^{-1} = D_v^{-1} V(-{\overline{s}})^*V(s)D_v^{-1} - D_v^{-1} U(-{\overline{s}})^*U(s)D_v^{-1}. \end{aligned}$$

Recall that \(X(s) = U(s)V(s)^{-1}\), and hence \(U(s) = X(s)V(s)\). Multiplying both sides of (11.7) by \(D_v\) on the left and on the right, we see that

$$\begin{aligned} D_v = V(-{\overline{s}})^*V(s) - V(-{\overline{s}})^* X(-{\overline{s}})^*X(s)V(s). \end{aligned}$$

Multiplying on the left by \(V(-{\overline{s}})^{-*}\) and on the right by \(V(s)^{-1}\), and writing \(D_v = D_v^{{1/2}}D_v^{{1/2}}\), we arrive at

$$\begin{aligned} I - X(-{\overline{s}})^*X(s) = \Theta (-{\overline{s}})^*\Theta (s) \quad (\text{ where } \Theta (s) = D_v^{{1/2}} V(s)^{-1}). \end{aligned}$$

Therefore \(\Theta \) is the outer spectral factor for \(I - X(-{\overline{s}})^*X(s)\). Finally, because \(\Theta \) is an invertible outer function, we also have \(\Vert X\Vert _\infty <1\). This completes the proof of Proposition 11.1. \(\square \)
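The final step, from \(D_v = {\widetilde{V}} V - {\widetilde{V}}{\widetilde{X}} X V\) to \({\widetilde{\Theta }}\Theta = I - {\widetilde{X}} X\), can be illustrated at a boundary point \(s = i\omega \), where \({\widetilde{F}}(s) = F(s)^*\). The sketch below constructs \(D_v\) from this relation using randomly generated matrices (a hedged illustration of the implication, not of the actual functions U, V, X); numpy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(4)

def psd_sqrt(M):
    # Hermitian square root of a positive definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

q = 3
Vm = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))   # plays V(i w), invertible
Xm = rng.standard_normal((q, q))
Xm = 0.8 * Xm / np.linalg.norm(Xm, 2)                                 # plays X(i w), ||X|| < 1

# On the imaginary axis, (11.7) multiplied out reads D_v = V* V - V* X* X V
Dv = Vm.conj().T @ Vm - Vm.conj().T @ Xm.conj().T @ Xm @ Vm
Theta = psd_sqrt(Dv) @ np.linalg.inv(Vm)                              # Theta = D_v^{1/2} V^{-1}

assert np.allclose(Theta.conj().T @ Theta, np.eye(q) - Xm.conj().T @ Xm)  # (11.4)
```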

With Proposition 11.1 established, the proof of Theorem 3.2 is also complete.

12 Proof of Theorem 1.1

The proof of Theorem 1.1 concerns rational functions G in \(H^\infty ({\mathcal {U}},{\mathcal {Y}})\) and K in \(H^\infty ({\mathcal {E}},{\mathcal {Y}})\) with the additional property that \(T_G T_G^*-T_K T_K^*\) is strictly positive. Furthermore, X is the function in \(H^\infty ({\mathcal {E}},{\mathcal {U}})\) defined by (1.6), where U(s) and V(s) are the functions defined by (1.7). Moreover, the rational functions G and K admit a minimal stable realization (2.1). Given these data, we have to prove items (i) and (ii).

According to Lemma 6.1 the function U(s) has a realization (6.2) and according to Lemma 5.1 the function V(s) has a realization (5.3). Thus it follows from Proposition 8.1 that \( X(s) = U(s) V(s)^{-1} \) has a realization (8.2) and X(s) is analytic on \( {{\mathbb {C}}}_+ \). Moreover, by Sect. 10, we have \( GX=K \). Proposition 11.1 gives that \( \Vert X \Vert _\infty <1 \) and completes the proof that X is indeed a solution of the Leech problem for G and K. Thus item (i) is proved. Finally, Proposition 11.1 also shows us that formula (1.9) holds true with \( \Theta \) given by (1.8). This tells us that item (ii) of the theorem is proved. Thus items (i) and (ii) are proved, and hence Theorem 1.1 is proved. \(\square \)