Characterization of the dual problem of linear matrix inequality for H-infinity output feedback control problem via facial reduction

This paper deals with the minimization of the H∞ norm in H∞ output feedback control. This minimization can be formulated as a linear matrix inequality (LMI) problem via a result of Iwasaki and Skelton (1994). The strict feasibility of the dual of such an LMI problem is a valuable property that guarantees the existence of an optimal solution of the LMI problem. If this property fails, the LMI problem may not have any optimal solution. Even if one can compute controller parameters from a computed solution of the LMI problem, the computed H∞ norm may be very sensitive to a small change of parameters in the controller. In other words, non-strict feasibility of the dual tells us that the considered design problem may be poorly formulated. We reveal that strict feasibility of the dual is closely related to invariant zeros of the given generalized plant. Facial reduction, an iterative algorithm that converts a non-strictly feasible problem into a strictly feasible one, is useful in analyzing this relationship. We also show that facial reduction spends only one iteration for so-called regular H∞ output feedback control.
In particular, we can obtain a strictly feasible problem by using null vectors associated with some invariant zeros. This reduction is more straightforward than the direct application of facial reduction.


Introduction
The H∞ control is one of the most important control theories and has attracted much interest from the viewpoints of not only theory but also application and computation. In particular, two approaches, the algebraic Riccati equation/inequality approach (e.g., [6]) and the linear matrix inequality (LMI) approach (e.g., [8,9,17]), were proposed and investigated thoroughly. These approaches find a so-called suboptimal controller, i.e., an admissible controller such that the H∞ norm of the closed-loop system consisting of the plant and the controller is less than a given value. Naturally, one seeks an optimal controller, i.e., an admissible controller that minimizes the H∞ norm. This paper deals with an LMI formulation of this minimization.
The strict feasibility of the dual problem of such an LMI problem is a valuable property that guarantees the existence of an optimal solution of the LMI problem. If strict feasibility fails in the dual, then the optimal value is finite, but no solution may attain the optimal value of the LMI problem. In this case, it is often observed that the computed controller has very high gains. Also, the controller will be fragile; that is, the computed H∞ norm may be very sensitive to a small change of parameters in the controller. In other words, non-strict feasibility of the dual tells us that the considered design problem may be poorly formulated.
The purpose of this paper is to present a necessary and sufficient condition for the dual of the LMI problem obtained from the H∞ output feedback control problem to be strictly feasible. In addition, we analyze the condition in terms of system theory. In particular, we reveal that the condition is closely related to invariant zeros of the generalized plant. Facial reduction [2] for LMI problems is useful for characterizing strict feasibility. It converts a non-strictly feasible LMI problem into a strictly feasible one in finitely many iterations; in doing so, it finds a certificate of the non-strict feasibility of the dual. We prove non-strict feasibility of the dual by constructing such a certificate from null vectors associated with some invariant zeros.
We introduce some of the literature on facial reduction. The concept and algorithm of facial reduction for convex programs were proposed in [2]. Facial reduction for LMI (also known as semidefinite programming (SDP)) problems was discussed in [13]. In particular, [13] proposed the construction of Ramana's extended dual, for which strong duality holds without any constraint qualification. Pataki et al. [12] simplified and extended the facial reduction for LMI problems discussed in [13]. The facial reduction proposed in [23] can also detect the infeasibility of a given problem. More details of facial reduction and its applications outside control theory are introduced in the monograph [5].
Another contribution of this paper is to prove that facial reduction for an LMI problem associated with a so-called regular H∞ output feedback control problem (see 1 of Remark 2 for the definition) terminates in at most one iteration. The number of iterations of facial reduction is called the singularity degree in [4,19]. For instance, the singularity degree is used for perturbation analysis in [19] and error bounds for LMI problems in [4]. In particular, the singularity degree is regarded as a measure of the numerical difficulty of solving the LMI problem with an SDP solver. We cannot expect accurate computation for an LMI problem with a high singularity degree. Also, from the viewpoint of theory, the characterization of the singularity degree is interesting. The work [20] provides characterizations of positive semidefinite matrix completion problems via the singularity degree. Our second contribution is a characterization of regular H∞ output feedback control problems via the singularity degree.
We introduce some work related to this paper. Balakrishnan and Vandenberghe [1] discussed the strict feasibility of LMI problems arising from a few problems associated with linear time-invariant systems. There, the authors used theorems of alternatives for LMI problems, which play an essential role in facial reduction for LMI problems. Waki and Sebe [24,25] provided a necessary and sufficient condition for strict feasibility in the case of H∞ state feedback control. The author of [7] provided an elimination via the Kronecker canonical form (KCF) for a given design problem. This elimination focuses on the unboundedness of some variables in the associated LMI problem. The KCF structure is used to find such unbounded variables, and the proposed approach then eliminates them from the LMI problem. Using this elimination, we can not only obtain a more accurate solution but also reduce the order of the controller. Our approach can be regarded as the dual of this elimination. In fact, that elimination focuses on the LMI problem, whereas we investigate the dual problem of the LMI problem. Nevertheless, one can see a relationship between invariant zeros of the given generalized plant and unbounded variables.
The paper consists of four sections and appendices. Section 2 is devoted to preliminaries on LMIs, their duals, facial reduction and the H∞ output feedback control problem. We give the first result, i.e., the necessary and sufficient condition for the dual problem to be strictly feasible, in Sect. 3. We discuss the singularity degree of the dual of the LMI problem obtained from the regular H∞ output feedback control problem in Sect. 4. Technical lemmas and their proofs are given in the appendices.

Notation and symbols
For a positive integer m, we define [m] = {1, . . . , m}. Let Re(λ) be the real part of a complex number λ. C_+ and C_− denote the closed right and left half-planes, respectively.
Let S n , S n + and S n ++ be the sets of n × n real symmetric matrices, positive semidefinite matrices and positive definite matrices, respectively.
For G ∈ R^{m×n}, G^⊥ denotes a matrix whose column space is ker G^T; in general, G^⊥ is not unique for a given matrix G. We have Im G = ker (G^⊥)^T and ker G^T = Im G^⊥. G^{⊥T} stands for the transpose of G^⊥ throughout this paper.
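As a small numerical sketch (our own toy data, not from the paper), a matrix G^⊥ with the two properties above can be obtained from an orthonormal basis of ker G^T:

```python
import numpy as np
from scipy.linalg import null_space

def orth_complement(G):
    """Return a G_perp whose columns span ker(G^T),
    so that Im G_perp = ker G^T and ker(G_perp^T) = Im G."""
    return null_space(G.T)  # orthonormal basis of ker G^T

# toy example: G in R^{4x3} with rank 2
G = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 0.],
              [0., 0., 0.]])
Gp = orth_complement(G)
assert Gp.shape == (4, 2)            # m - rank G = 2 columns
assert np.allclose(G.T @ Gp, 0)      # columns of Gp lie in ker G^T
assert np.allclose(Gp.T @ G, 0)      # equivalently, ker(Gp^T) contains Im G
```

Any other basis of ker G^T works equally well, reflecting the non-uniqueness of G^⊥.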
For a nonempty set F ⊂ S n , F ⊥ denotes the set {X ∈ S n : X • Y = 0 (∀Y ∈ F)}. In particular, if F = {S} where S ∈ S n + , then F ⊥ = {S} ⊥ denotes the set {X ∈ S n : S • X = 0}.

Linear matrix inequality problem, its dual and strict feasibility
For b ∈ R^m and L_j ∈ S^n (j ∈ {0} ∪ [m]), we consider the linear matrix inequality (LMI) problem

p* = inf_y { b^T y : ∑_{j=1}^m L_j y_j − L_0 ∈ S^n_+ }.   (1)

We assume that L_j (j ∈ [m]) are linearly independent. The dual of LMI (1) can be formulated as

d* = sup_X { L_0 • X : L_j • X = b_j (j ∈ [m]), X ∈ S^n_+ }.   (2)

LMI (1) is said to be strictly feasible if there exists ỹ ∈ R^m such that ∑_{j=1}^m L_j ỹ_j − L_0 ∈ S^n_{++}. The dual (2) is said to be strictly feasible if there exists X̄ ∈ S^n_{++} such that L_j • X̄ = b_j (j ∈ [m]). It is well known that strong duality holds for the LMI problem (1) and its dual (2). Theorem 1 (Strong duality for LMI; for instance, see [21]) If (2) is strictly feasible and (1) is feasible, then p* = d* and (1) has an optimal solution.
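As a toy illustration (the data L_0, L_1, L_2 below are our own, not the paper's), strict feasibility of (1) at a candidate point ỹ amounts to a positive-definiteness test on ∑_j L_j ỹ_j − L_0:

```python
import numpy as np

# toy data: m = 2, n = 2
L0 = np.zeros((2, 2))
L1 = np.array([[1., 0.], [0., 0.]])
L2 = np.array([[0., 0.], [0., 1.]])

def is_strictly_feasible(y, tol=1e-9):
    """Check whether sum_j L_j y_j - L0 is positive definite at candidate y."""
    S = y[0] * L1 + y[1] * L2 - L0
    return np.linalg.eigvalsh(S).min() > tol

assert is_strictly_feasible([1.0, 1.0])       # interior point: S = I
assert not is_strictly_feasible([1.0, 0.0])   # boundary: smallest eigenvalue is 0
```

This only certifies strict feasibility at a given point; deciding it globally requires solving an auxiliary SDP.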
We give a characterization of the dual problem (2) that is not strictly feasible. Theorem 2 ([12,22,23]) The dual problem (2) is not strictly feasible if and only if there exists a pair (y, W) ∈ R^m × S^n such that y ≠ 0, b^T y ≤ 0 and W = ∑_{j=1}^m L_j y_j ∈ S^n_+. If such a pair (y, W) satisfies b^T y < 0, then (2) is infeasible. Otherwise, the dual (2) is equivalent to the following problem over S^n_+ ∩ {W}^⊥, which we refer to as (3). We convert (3) into the form of the dual problem (2). Since W in Theorem 2 is positive semidefinite, it can be decomposed as W = [P_1 P_2] diag(Λ, O) [P_1 P_2]^T, where r is the rank of W, Λ ∈ S^r_{++} and [P_1 P_2] is an orthogonal matrix with blocks of appropriate sizes. Then it is clear that any element X of S^n_+ ∩ {W}^⊥ has the form X = P_2 X_3 P_2^T for some X_3 ∈ S^{n−r}_+. Substituting this form of X into (3), it is equivalently converted to a reduced problem (4) over the smaller cone S^{n−r}_+.
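The reduction in Theorem 2 can be sketched numerically (with a toy certificate W of our own choosing): eigendecompose W, split the eigenvectors into the range part P_1 and the kernel part P_2, and observe that X = P_2 X_3 P_2^T is orthogonal to W:

```python
import numpy as np

# a rank-1 certificate W in S^3_+
w = np.array([1., 1., 0.])
W = np.outer(w, w)

# eigendecomposition W = P diag(lam) P^T; split eigenvectors by lam > 0
lam, P = np.linalg.eigh(W)
pos = lam > 1e-9
P1, P2 = P[:, pos], P[:, ~pos]      # bases of Im W and ker W, respectively

# any X in S^3_+ with W . X = 0 has the form P2 X3 P2^T with X3 psd
X3 = np.array([[2., 1.], [1., 2.]])
X = P2 @ X3 @ P2.T
assert np.isclose(np.trace(W @ X), 0.0)        # W . X = 0
assert np.linalg.eigvalsh(X).min() >= -1e-9    # X is psd
assert np.allclose(W @ X, 0)                   # in fact W X = O
```

The last assertion reflects a standard fact: for positive semidefinite W and X, the trace condition W • X = 0 forces WX = O.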

Facial reduction for LMI problem
In general, the reduced problem (4) may not yet be strictly feasible. If (4) is not strictly feasible, we can apply Theorem 2 to the reduced problem again, and repeat until the reduced problem becomes strictly feasible. This reduction is the essence of facial reduction for the dual problem (2). Facial reduction terminates in finitely many iterations because the dimension of the positive semidefinite cone S^n_+ is reduced in each iteration. After facial reduction terminates, if the original problem is feasible, we can convert it into a strictly feasible problem by the conversion discussed in the previous subsection. Otherwise, the original problem is infeasible.
Facial reduction generates a face of S^n_+ in each iteration. In other words, if the original problem is feasible, facial reduction generates a sequence of faces F_0 = S^n_+, F_1, . . . , F_k of S^n_+ with F_k ⊊ · · · ⊊ F_1 ⊊ F_0. For a convex set C ⊂ R^n, a convex subset F of C is a face of C if, for all x_1, x_2 ∈ C, nonemptiness of the intersection of the open line segment (x_1, x_2) with F implies that x_1 and x_2 are both in F. It is known, e.g., in [11, Example 2.5], that any face of S^n_+ is either the empty set ∅, S^n_+ itself, or a set of the form

{ P diag(X̂, O) P^T : X̂ ∈ S^r_+ },   (5)

where P ∈ R^{n×n} is a nonsingular matrix and r is the maximum rank over all matrices in the face. Each iteration of facial reduction consists of two steps: at the ℓ-th iteration, (i) find a certificate (y, W) ∈ R^m × S^n such that

b^T y ≤ 0, W = ∑_{j=1}^m L_j y_j ∈ F*_{ℓ−1} \ F^⊥_{ℓ−1},   (6)

and (ii) generate the face F_ℓ = F_{ℓ−1} ∩ {W}^⊥. Facial reduction repeats these two steps and terminates when (6) has no solutions. Here the sets F* and F^⊥ for a cone F ⊂ S^n_+ are defined, respectively, by F* = {X ∈ S^n : X • Y ≥ 0 (∀Y ∈ F)} and F^⊥ = {X ∈ S^n : X • Y = 0 (∀Y ∈ F)}. In particular, F* is called the dual cone of F. For instance, (S^n_+)* = S^n_+ and (S^n_+)^⊥ = {O}. If W ∈ F^⊥_{ℓ−1} in (6), then it does not reduce the face F_{ℓ−1}, i.e., F_ℓ = F_{ℓ−1}. This is why we choose the certificate from the set F*_{ℓ−1} \ F^⊥_{ℓ−1}. Algorithm 1 summarizes facial reduction for (2). It should be noted that (y, W) in Theorem 2 is the certificate of the first iteration of facial reduction for (2). We can see this because F*_0 = S^n_+ and F^⊥_0 = {O}. In other words, if facial reduction cannot find any certificate satisfying (6) at the first iteration, the original problem (2) is strictly feasible. Otherwise, after Algorithm 1 terminates, we reduce S^n_+ to the form (5) by using all the obtained certificates. The details are similar to the discussion in Sect. 2.1.

Algorithm 1: Facial reduction algorithm for (2)
Input: (2). Output: a face of S^n_+, or detection of infeasibility.

We give a remark on the practical computation of facial reduction. We need to find a solution satisfying (6) in each iteration. As mentioned in [5, Section 4.6], solving (6) is not easier than solving the original LMI problem. For this, some techniques have been proposed, e.g., in [10,27]. The approach of [10] approximates the convex cone by simpler convex cones and yields a solvable approximation of (6); however, this approach may miss solutions of (6). The approach of [27] finds a solution of (6) more efficiently if the original LMI problem has a so-called block-diagonal structure; if the original LMI does not have such a structure, however, it cannot find any solution of (6). On the other hand, facial reduction works well in more restricted settings, e.g., matrix completion problems and sums-of-squares relaxations of polynomial optimization. See [5, Part II] for more details.
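The two steps (i) and (ii) can be walked through on a tiny dual system where the certificate is known by inspection (the data below are ours, not from the paper): the constraint L_1 • X = 0 with L_1 = diag(1, 0, 0) forces the (1,1) entry of any feasible psd X to vanish, so no feasible X is positive definite.

```python
import numpy as np

# dual system: X psd, L1 . X = 0, L2 . X = 2
L1 = np.diag([1., 0., 0.])
L2 = np.diag([0., 1., 1.])
b = np.array([0., 2.])

# step (i): certificate (y, W) with y = (1, 0), W = 1*L1 + 0*L2
y = np.array([1., 0.])
W = y[0] * L1 + y[1] * L2
assert np.linalg.eigvalsh(W).min() >= 0   # W is psd
assert b @ y <= 0                          # b^T y <= 0

# step (ii): F1 = S^3_+ ∩ {W}^⊥ = {diag(0, X3) : X3 in S^2_+};
# restrict L2 to the lower 2x2 block and exhibit a strictly feasible X3
L2_red = L2[1:, 1:]
X3 = np.eye(2)                             # candidate with trace 2
assert np.isclose(np.trace(L2_red @ X3), 2.0)
assert np.linalg.eigvalsh(X3).min() > 0    # strictly feasible after one step
```

Here one iteration suffices, so this toy system has singularity degree one, mirroring the situation established later for regular H∞ problems.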
We have flexibility in choosing the certificate W because it is not uniquely defined. Consequently, the number of iterations of facial reduction depends on the choice of the certificates. The minimum number of iterations is called the singularity degree of (2). For instance, the singularity degree of any strictly feasible LMI problem is zero because such an LMI problem has no certificate (y, W) in Theorem 2. Also, if (2) is feasible and the singularity degree of (2) is one, then the reduced problem (4) is strictly feasible.
From the viewpoint of theory, the singularity degree is used for perturbation analysis [4] and error bounds of LMI problems [19]. We can use the singularity degree as a measure of the numerical difficulty of solving LMI problems. In fact, we cannot expect accurate computation for LMI problems with a high singularity degree if we do not apply facial reduction. We can see this phenomenon in [27]. In particular, the LMI problems discussed in [27, for instance, Tables 1 and 2] have high singularity degrees, and SDP solvers return values significantly different from the correct optimal values. Moreover, the work [20] provides characterizations of positive semidefinite matrix completion problems by using the singularity degree.

LMI problem for H ∞ output feedback control problem
We introduce the H∞ output feedback control problem. To this end, we consider the following state-space model of a generalized plant. We impose the following assumptions on (7): the former is necessary for the existence of a stabilizing controller, and the latter is only for simplicity.
For (7), we consider a full-order controller whose state-space model is as follows. The state-space representation of the closed-loop system G_cl(s), which consists of the plant (7) and the controller (8), is described as follows.
G_zu(s) and G_yw(s) denote the open-loop transfer functions from u to z and from w to y in (7), respectively. In fact, G_zu(s) = C_1(sI − A)^{−1}B_2 + D_12 and G_yw(s) = C_2(sI − A)^{−1}B_1 + D_21. We define the functions L_1 and L_2 as follows.

Theorem 3 ([8, Theorem 3])
For the state-space system (7) and a given γ > 0, there exists a full-order controller (8) such that the closed-loop system G_cl(s) is Hurwitz stable and its H∞ norm is less than γ if and only if there exists (X, Y) satisfying the following linear matrix inequalities. For a fixed γ > 0, we can find (X, Y) satisfying all linear matrix inequalities in Theorem 3 via, e.g., primal-dual interior-point methods. The solution is suboptimal, and we can construct the parameters A_k, B_k, C_k and D_k of the controller (8) from the suboptimal solution. Then the H∞ norm of the closed-loop system is less than γ.
As we mentioned in the Introduction, it is natural to find a solution (X, Y) that minimizes ‖G_cl‖_∞ and makes the closed-loop system Hurwitz stable. The corresponding LMI problem (9), the infimum of γ over all (γ, X, Y) satisfying the LMIs of Theorem 3, can then be formulated via Theorem 3. We set matrices in (9) as follows.
We have the following relationship between D_12 and E_1 (Lemma 1). We give a proof in "Appendix A".
By using E_1, E_2, F_1 and F_2, (9) can be reformulated as (10). The dual of (10) can be formulated as (11). Here, U, V and Z in (11) are the decision variables, and * stands for the transpose of the lower triangular part. It is clear that (11) is feasible; in fact, the following solution (U, V, Z) is feasible for (11).
We can also see that the optimal value of (11) is nonnegative because the objective value of this solution is zero. In addition, under Assumption 1, (10) is strictly feasible. This fact can be proved by using the elimination lemma (e.g., [3, Section 2.6.2]). Hence, strong duality between (10) and (11) holds due to Theorem 1, and the dual problem (11) has an optimal solution.

Strict feasibility of the dual problem
We discuss the strict feasibility of the dual problem (11). It follows from Theorem 2 that (11) is not strictly feasible if and only if there exists a solution (γ, X, Y, W_zu, W_yw, W_c) of the conic system (12). Clearly, γ in (12) must be zero. In addition, since W_c is block diagonal, at least one of X and Y must be nonzero; otherwise, all the matrices W_zu, W_yw and W_c are zero. Substituting γ = 0 into (12), it is equivalent to the following conic system.
We see that (12) has a solution if and only if (13) has a solution. Moreover, we can separate (13) into the following two conic systems.

Remark 1 If both (14) and (15) have solutions, then we can construct a solution of (12) from them.
Also, if (14) has a solution (X, Ŵ_zu) but (15) has no solutions, then the following (17) is a solution of (12). In fact, when (15) has no solutions, the last constraint of (12) holds, and hence we can choose (17) as a solution of (12).
In the remainder of this section, we give an interpretation of (14) and (15) in terms of system theory. The following lemma is useful for this purpose. We give a proof of the lemma in "Appendix B".

Lemma 2
The following holds.
(D2) If there exists a solution (X, Ŵ_zu) of (14) with rank X = r, then there exist U ∈ C^{n×r}, V ∈ C^{m_2×r} and a Jordan matrix Λ ∈ C^{r×r} such that U is of full column rank, all eigenvalues of Λ are in the closed left half-plane, and (19) holds. (D3) Assume that E_1 is of full row rank. Then there exists a solution (X, Ŵ_zu) of (14) with rank X = rank Ŵ_zu = r if and only if there exist U ∈ C^{n×r}, V ∈ C^{m_2×r} and a Jordan matrix Λ ∈ C^{r×r} such that U is of full column rank, all eigenvalues of Λ are in the open left half-plane, and (20) holds. (D4) If E_1 is not of full row rank, then there exists a solution (X, Ŵ_zu) of (14).

Remark 2 Equation (18) implies that there exist λ ∈ C and a nonzero pair (u, v) such that Au + B_2 v = λu and C_1 u + D_12 v = 0. This λ is called an invariant zero of the realization (A, B_2, C_1, D_12) of G_zu(s). For simplicity, we call λ an invariant zero of G_zu(s) throughout this paper. In particular, if an invariant zero λ is in the open left half-plane, then it is said to be stable.
The multiplicity of an invariant zero can be defined as the multiplicity of the corresponding eigenvalue of Λ in (20); this can be seen from (19) and (20). We can obtain results similar to (D1)-(D4) and (D1′)-(D3′) with respect to G_yw(s) by the following replacements.
Moreover, we can define an invariant zero of the realization (A, B_1, C_2, D_21) of G_yw(s) in a similar manner to G_zu(s). All the above remarks for G_zu(s) hold for G_yw(s).
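Numerically, the invariant zeros defined above are the finite generalized eigenvalues of the Rosenbrock pencil [[A, B_2], [C_1, D_12]] − λ [[I, O], [O, O]]. A minimal sketch (the realization below is our own toy example, a state-space model of G_zu(s) = (s+1)/(s+2), whose single invariant zero is −1):

```python
import numpy as np
from scipy.linalg import eig

# toy realization of G_zu(s) = (s+1)/(s+2): one invariant zero at s = -1
A   = np.array([[-2.]])
B2  = np.array([[1.]])
C1  = np.array([[-1.]])
D12 = np.array([[1.]])

n, m = A.shape[0], B2.shape[1]
p = C1.shape[0]
# Rosenbrock system matrix pencil M - lambda * N
M = np.block([[A, B2], [C1, D12]])
N = np.block([[np.eye(n), np.zeros((n, m))],
              [np.zeros((p, n)), np.zeros((p, m))]])
zeros = eig(M, N, right=False)
finite = zeros[np.abs(zeros) < 1e6]   # discard infinite generalized eigenvalues
assert len(finite) == 1
assert np.isclose(finite[0].real, -1.0) and np.isclose(finite[0].imag, 0.0)
```

The null vector (u, v) associated with an invariant zero λ is the corresponding generalized eigenvector of this pencil.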
We see from the proofs of (D2) and (D3) of Lemma 2 that rank X ≥ rank Ŵ_zu holds for any solution (X, Ŵ_zu) of (14). We obtain the following corollary from this observation, (D2′) and (D3′).

Corollary 1
Assume that E_1 is of full column rank. If there exists a solution (X, Ŵ_zu) of (14) with rank X > rank Ŵ_zu, then G_zu(s) has an invariant zero on the imaginary axis.
Proof Let r = rank X. It follows from (D2′) that G_zu(s) has r invariant zeros in the closed left half-plane. If all invariant zeros are in the open left half-plane, then rank X = rank Ŵ_zu = r due to (D3′). However, this contradicts rank X > rank Ŵ_zu.
We conclude the characterization of strict feasibility of (11) from Theorem 2 and Lemma 2 in Theorem 4. Proof If all the assumptions of Theorem 4 hold, then it follows from (D3′) of Remark 2 that (14) has no solutions. Similarly, (15) has no solutions under the assumption on the invariant zeros of G_yw(s). Therefore, (12) has no solutions, and (11) is strictly feasible due to Theorem 2.
Assume that D_12 is not of full column rank. It follows from Lemma 1 and (D4) of Lemma 2 that (11) is not strictly feasible. The case where D^T_21 is not of full column rank can be proved in a similar manner.
Assume that G_zu(s) has a stable invariant zero. It follows from Theorem 2 and (D1) of Lemma 2 that (11) is not strictly feasible. The case where G_yw(s) has a stable invariant zero can be proved in a similar manner. The generalized plant (7) is said to be singular if (i) D_12 is not of full column rank, (ii) D_21 is not of full row rank, (iii) G_zu(s) has an invariant zero on the imaginary axis, or (iv) G_yw(s) has an invariant zero on the imaginary axis. Otherwise it is said to be regular. Some remedies for singular cases have already been proposed in [15,16,18], although the algebraic Riccati equation and inequality approach is not available there. Theorem 4 implies that singular cases are difficult to handle even via the LMI approach. The plant is said to have infinite zeros when D_12 or D^T_21 is not of full column rank. To distinguish the two types of infinite zeros, we say that G_zu(s) (resp. G_yw(s)) has an infinite zero if D_12 (resp. D^T_21) is not of full column rank.

Singularity degree of the dual problem
In this section, we deal with a regular H∞ output feedback control problem and prove that the singularity degree of the dual problem (11) is at most one. In particular, when both G_zu(s) and G_yw(s) have no stable invariant zeros, the dual problem (11) is strictly feasible due to Theorem 4, and thus the singularity degree of (11) is zero. Hence, it remains to consider the case where stable invariant zeros exist. Theorem 5 Consider a regular H∞ output feedback control problem for (7). We assume that G_zu(s), G_yw(s) or both have at least one stable invariant zero. Then the singularity degree of the dual problem (11) is one.

Remark 4 1. As we have seen in the proof of Lemma 2, we can construct a solution (γ, X, Y, W_zu, W_yw, W_c) of (12). For simplicity, we describe the construction in the case where G_zu(s) has r_zu stable invariant zeros that are distinct from each other. Let λ_j and (u_j, v_j) (j = 1, . . . , r_zu) be the stable invariant zeros of G_zu(s) and the null vectors associated with λ_j, respectively. We define the matrices X and W_zu as follows, where ū_j denotes the complex conjugate of the vector u_j. Similarly, we can construct Y and W_yw from the stable invariant zeros of G_yw(s) and their null vectors if such zeros exist; otherwise, set Y = O and W_yw = O. We also define γ and W_c as follows. Then we can see that (γ, X, Y, W_zu, W_yw, W_c) is a certificate of the non-strict feasibility of (11), i.e., it satisfies (12). See the proof of Lemma 2 for the details. Theorem 5 ensures that if the H∞ output feedback control problem is regular, then the LMI problem reduced by this certificate is strictly feasible. 2. As we mentioned in Sect. 2.2, [10,27] propose computational methods for facial reduction.
However, these computations may not work for (12) because (12) does not necessarily have the structure required in [10,27]. On the other hand, as discussed above, we can construct a solution of (12) directly. It would also be natural to reformulate (12) as an LMI problem and solve it to find a solution; however, the direct construction is simpler than solving such an LMI problem. 3. The singularity degree often expresses the difficulty of H∞ output feedback control problems. Indeed, Theorem 5 implies that regular H∞ output feedback control problems are relatively easy to handle because facial reduction requires at most one iteration and the certificate can be constructed easily. In contrast, singular H∞ output feedback control problems are not necessarily so: the singularity degree of (11) may then be more than one. In this case, as we mentioned in Sect. 2.2, it is difficult to solve LMI problems with a high singularity degree accurately, and SDP solvers may return solutions completely different from the optimal ones. Hence, singular H∞ output feedback control problems are difficult to handle from the viewpoint of the theory of LMI problems.
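The construction of a certificate from null vectors can be sketched numerically. The realization below is our own toy example (G_zu(s) = (s²+2s+2)/(s²+3s+2), with a stable complex pair of invariant zeros −1 ± i), and the formula X = ∑_j (u_j u_j^* + conjugate) is an illustration of the rank and positive semidefiniteness of the X-block of the certificate, not the paper's exact formulas for W_zu (which involve E_1 and the functions defined earlier):

```python
import numpy as np
from scipy.linalg import eig

# toy realization with stable invariant zeros -1 +/- i
A   = np.array([[0., 1.], [-2., -3.]])
B2  = np.array([[0.], [1.]])
C1  = np.array([[0., -1.]])
D12 = np.array([[1.]])

# Rosenbrock pencil; null vectors (u_j, v_j) are its generalized eigenvectors
M = np.block([[A, B2], [C1, D12]])
N = np.zeros((3, 3)); N[:2, :2] = np.eye(2)
lam, vecs = eig(M, N)
idx = np.abs(lam) < 1e6          # keep finite zeros: -1 +/- i
U = vecs[:2, idx]                # u_j = first n entries of each null vector

# X = sum_j (u_j u_j^H + conjugate): real, symmetric, psd, rank <= 2
X = 2.0 * (U @ U.conj().T).real
assert np.allclose(X, X.T)
assert np.linalg.eigvalsh(X).min() >= -1e-8
assert np.linalg.matrix_rank(X, tol=1e-8) == 2
```

Since the two zeros form a conjugate pair, the sum over both null vectors is automatically real, which is why a real symmetric certificate block X can be built from complex invariant zeros.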
For Theorem 5, it is sufficient to prove that the conic system (6) at the second iteration of facial reduction for (11) has no solutions. As we have already seen in Sect. 3, we simplified the conic system (12), which corresponds to (6) at the first iteration of facial reduction, by using its block-diagonal structure. We similarly simplify the conic system (6) at the second iteration in the next subsection.

Simplification of the conic system at the second iteration of the facial reduction
Since (11) involves the three positive semidefinite cones S^{n_1}_+, S^{n_2}_+ and S^{2n}_+, the faces generated by facial reduction consist of Cartesian products of faces of these positive semidefinite cones. As at least one of G_zu(s) and G_yw(s) has a stable invariant zero, (12) has a solution (γ, X, Y, W_zu, W_yw, W_c) due to (D1′). In fact, as we have already seen in Remark 1, we can construct a solution of (12) from (14) and (15).
The face generated at the first iteration of the facial reduction is the Cartesian product of the following sets.
Here the solution (γ, X, Y, W_zu, W_yw, W_c) has the following form. We can see the block-diagonal structure in the matrices W_zu, W_yw and W_c. Using this structure, we simplify the conic system (6) at the second iteration of facial reduction for (11), which can be formulated as follows.
We summarize the simplification of (21) in the following theorem. We will give a proof of this theorem in "Appendix C".

Remark 6
We can see the following from Theorem 6.
1. We can separate the conic system (22) into the following two conic systems.

Proof of Theorem 5
To prove Theorem 5, we assume for simplicity that G_zu(s) has at least one stable invariant zero but G_yw(s) has no invariant zeros. In fact, as we have seen in Remarks 5 and 6, if both G_zu(s) and G_yw(s) have a stable invariant zero, then we can reduce the faces simultaneously, and the proof of that case proceeds similarly. Under this assumption, (14) has a solution (X_1, Ŵ_zu), whereas (15) has no solutions; thus, (24) also has no solutions. Moreover, we obtain the following corollary from Theorem 6 and Remark 6.

Corollary 2 Assume that G_zu(s) has at least one stable invariant zero, but G_yw(s) has no invariant zeros. Then (21) has no solutions if and only if (23) has no solutions. In addition, if every solution (X_2, Ŵ_zu,2) of the following conic system satisfies X_2 ∈ F̃^⊥_{c,X} and Ŵ_zu,2 ∈ F̃^⊥_zu, then (23) has no solutions.

The following lemma plays an essential role in the proof of Theorem 5. We give a proof of Lemma 3 in "Appendix D".

Lemma 3 Consider a regular H∞ output feedback control problem for (7). Let (X_1, Ŵ_zu) be a solution of (14). We assume that (i) rank X_1 = rank Ŵ_zu and that (ii) rank X_1 ≥ rank X for each solution (X, W) of (14). Then (23) has no solutions.

Proof of Theorem 5
Assume that G_zu(s) has r stable invariant zeros, counting multiplicities. Then it follows from (D3′) that there exist X_1 ∈ S^n_+ and Ŵ_zu ∈ S^{n_1−m_1}_+ such that rank X_1 = rank Ŵ_zu = r and Ŵ_zu = −He(E^T_1 X_1 G_1). Thus, (i) of Lemma 3 holds.
Suppose that there exists a solution (X, W) of (14) such that rank X > rank X_1 = r. As we consider a regular H∞ output feedback control problem, it follows from Corollary 1 that rank X = rank W > r. The number of stable invariant zeros of G_zu(s) is then more than r due to (D3′), a contradiction to the assumption imposed in this proof. Therefore, since (X_1, Ŵ_zu) satisfies all the assumptions of Lemma 3, it follows from Corollary 2 that (21) has no solutions. This implies that facial reduction spends only one iteration for the dual.

Conclusion
We discussed the strict feasibility of the dual problem (11) of the LMI problem (10) for the H∞ output feedback control problem. We provided in Theorem 4 a necessary and sufficient condition for the dual to be strictly feasible. In particular, we have seen in Lemma 2 that a certificate of the non-strict feasibility of the dual can be constructed from the null vectors associated with the stable invariant zeros of G_zu(s) or G_yw(s). Using this certificate, we can apply facial reduction to the dual problem. Furthermore, we proved that the singularity degree of the dual is at most one for any regular H∞ output feedback control problem. This property ensures that the reduced dual problem is strictly feasible whenever the H∞ output feedback control problem is regular and G_zu(s) or G_yw(s) has at least one stable invariant zero.
On the other hand, facial reduction is more complicated for singular H∞ output feedback control because the singularity degree may be more than one. Future work includes a deeper understanding of such cases.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Proof of Lemma 1
We prove the contraposition of the only-if part. There exists u ≠ 0 such that u^T D^T_12 = 0. It follows from B^T_2 E_1 + D^T_12 E_2 = O that u^T B^T_2 E_1 = 0. Suppose that E_1 is of full row rank; then u^T B^T_2 = 0. From 2 of Assumption 1, we obtain u = 0, which contradicts u ≠ 0. Therefore, if E_1 is of full row rank, then D_12 is of full column rank.
For the if part, we assume that E_1 is not of full row rank. Then there exists v ≠ 0 with the property below. Since D_12 is of full column rank, we obtain v = 0, which contradicts v ≠ 0. This completes the proof of the if part.

B Proof of Lemma 2
We use the following lemma to prove Lemma 2. An extension of this lemma was proved in [14, (iii) of Lemma 3].

Lemma 4 Let F and G be m × n matrices. Assume F is of full column rank. Then He(FG^T) ∈ S^m_+ if and only if there exists an n × n matrix Λ such that G = FΛ and He(Λ) ∈ S^n_+.

For simplicity of the proof, we prove (D1), (D2), and (D3) over R instead of C, that is, we consider only the case where all invariant zeros and their null vectors are real.
Otherwise, it is sufficient to apply a quasi-diagonal transformation to U , V and , simultaneously.
For (D1), we have Au + B_2 v = λu and C_1 u + D_12 v = 0, and the claimed conclusion follows. We prove (D2). We decompose X = Ũ Ũ^T with a matrix Ũ ∈ R^{n×r} of full column rank. It follows from Lemma 4 that there exists Λ̃ ∈ R^{r×r} such that He(Λ̃) ∈ S^r_+ and the corresponding identity holds. There exists an m_2 × r matrix Ṽ such that the triplet (Ũ, Ṽ, −Λ̃) satisfies (19).
For Λ̃, there exist an r × r nonsingular matrix P and a Jordan matrix Λ such that Λ = P^{−1} Λ̃ P. Define U = Ũ P and V = Ṽ P. Then, clearly, the triplet (U, V, −Λ) satisfies (19). Since He(Λ̃) ∈ S^r_+, all eigenvalues of −Λ are in C_−. In addition, since P is nonsingular and Ũ is of full column rank, U is also of full column rank.
To prove (D3), we use the following fact to construct the desired matrix $X$.

Proof of Claim 1 If $\Lambda$ is diagonal, then it is enough to set $D = I_r$. Otherwise, it is enough to consider the case where $\Lambda$ is a single Jordan block with eigenvalue $\lambda$.
We note that $-2\lambda$ is positive. Hence, if all of $d_2/d_1, \ldots, d_r/d_{r-1}$ are sufficiently close to zero, then the matrix $-\operatorname{He}(D\Lambda^T D^{-1})$ is positive definite. For instance, we set $d_k = \varepsilon^k$ ($k = 1, \ldots, r$) for sufficiently small $\varepsilon > 0$. Then the matrix $-\operatorname{He}(D\Lambda^T D^{-1})$ converges to $-2\lambda I_r \in \mathbb{S}^r_{++}$ as $\varepsilon$ goes to zero. Therefore, the matrix $-\operatorname{He}(D\Lambda^T D^{-1})$ is positive definite for sufficiently small $\varepsilon > 0$.
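The scaling argument of Claim 1 can be checked numerically: for a single Jordan block $\Lambda$ with eigenvalue $\lambda < 0$ and $D = \operatorname{diag}(\varepsilon, \varepsilon^2, \ldots, \varepsilon^r)$, the similarity $D\Lambda^T D^{-1}$ shrinks the off-diagonal entries to $\varepsilon$, so $-\operatorname{He}(D\Lambda^T D^{-1})$ approaches $-2\lambda I_r$. A sketch, assuming NumPy and illustrative values of $r$, $\lambda$, $\varepsilon$:

```python
import numpy as np

r, lam, eps = 4, -1.0, 0.1   # illustrative values, not from the paper

# Single Jordan block with eigenvalue lam (ones on the superdiagonal).
Lam = lam * np.eye(r) + np.diag(np.ones(r - 1), k=1)

# Diagonal scaling d_k = eps^k from Claim 1.
D = np.diag(eps ** np.arange(1, r + 1))
Dinv = np.diag(eps ** -np.arange(1, r + 1))

# D Lam^T D^{-1} = lam*I + eps*N^T, so
# -He(D Lam^T D^{-1}) = -2*lam*I - eps*(N + N^T) > 0 for small eps.
M = D @ Lam.T @ Dinv
He = M + M.T
min_eig = np.linalg.eigvalsh(-He).min()
assert min_eig > 0
```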
We prove the if part of (D3). Let $X = UD^2U^T$ with the diagonal matrix $D$ in Claim 1. Clearly, $X \in \mathbb{S}^n_+$ and $\operatorname{rank} X = r$. In addition, we eliminate $V$ from (20). Since all eigenvalues of $\Lambda$ lie in the open left half-plane, it follows from Claim 1 that $\operatorname{rank} \hat{W}_{zu} = r$.
Next we prove the only-if part of (D3). It follows from (D2) that there exists a triplet $(U, V, \Lambda)$ satisfying (19).

C Proof of Theorem 6
Assume that $(\gamma_2, X_2, Y_2, W_{zu,2}, W_{yw,2}, W_{c,2})$ is a solution of (22). Then $W_{zu,2} \bullet H = m_1\gamma_2 \ge 0$, and thus $\gamma_2 = 0$. Substituting $\gamma_2 = 0$ into (21), we obtain the following conic system.

Proof We prove only (P1) because (P2) can be proved in a similar manner to (P1). First, we assume that $W \in F^*$ and take any $U_{11} \in F_1$. On the other hand, we assume that $W_j \in F_j^*$ for $j = 1, 2$ and take any $U \in F$. The following claim ensures that we can apply Lemma 5 to (26).

Proof of Claim 2
We prove only the first statement. Since $F_{zu} \subset \mathbb{S}^{n_1}_+$, we have $U \in \mathbb{S}^{n_1}_+$ and $U_{11} \in \mathbb{S}^{n_1 - m_1}_+$. In addition, we have $0 = U \bullet W_{zu} = U_{11} \bullet \hat{W}_{zu}$, and thus $U_{11} \in \hat{F}_{zu}$. Similarly, we have $U_{22} \in \mathbb{S}^{m_1}_+$. Hence (C1) holds.
In addition, for any $U_{11} \in \hat{F}_{zu}$, we have $U_{11} \in \mathbb{S}^{n_1 - m_1}_+$. Therefore, (C1) and (C2) of Lemma 5 hold.
The conic system (26) can be reformulated as follows.

Proof of Claim 3
Finally, applying Lemma 5 to $F_c$, which is justified by Claim 3, we can equivalently reformulate (27) into (22). This completes the proof of Theorem 6.

D Proof of Lemma 3
Let $(X_2, \hat{W}_{zu,2})$ be a solution of (25). We consider two cases: (I) $X_2 \in \hat{F}^\perp_{c,X}$ and (II) $X_2 \in \hat{F}^*_{c,X} \setminus \hat{F}^\perp_{c,X}$. In case (I), we will prove $\hat{W}_{zu,2} \in \hat{F}^\perp_{zu}$, which is the desired result of Lemma 3. In case (II), we can construct a solution $(\tilde{X}, -\operatorname{He}(E_1^T \tilde{X} G_1))$ of (14) whose rank is higher than that of $X_1$. However, this contradicts assumption (ii) of Lemma 3, so case (II) does not occur. It then follows from case (I) and Corollary 2 that (23) has no solutions.
Before giving the proof, we discuss the structure of $\hat{F}_{zu}$, $\hat{F}_{c,X}$ and their dual cones. For simplicity, we use $F$ and $G$ to denote $\hat{F}_{zu}$ and $\hat{F}_{c,X}$, respectively. We decompose $X_1$ as $X_1 = P \begin{bmatrix} \Sigma & O \\ O & O \end{bmatrix} P^T$, where $r$ is the rank of $X_1$, $\Sigma \in \mathbb{S}^r_{++}$, and $P := [\,P_1\ P_2\,]$ is an orthogonal matrix. Then $G$, $G^*$ and $G^\perp$ can be written explicitly in terms of the blocks $\bar{X}_1 \in \mathbb{S}^r$, $\bar{X}_2 \in \mathbb{R}^{(n-r)\times r}$ and $\bar{X}_3 \in \mathbb{S}^{n-r}$ of $P^T X P$.

We consider $\hat{W}_{zu} = -\operatorname{He}(E_1^T X_1 G_1)$ to provide the structure of $F$. Since we have already assumed that $D_{12}$ is of full column rank, $E_1$ is of full row rank due to Lemma 1. It follows from Lemma 4 that there exists a matrix $\Lambda \in \mathbb{R}^{r \times r}$ such that $\operatorname{He}(\Lambda) \in \mathbb{S}^r_+$ and $-G_1^T P_1 \Sigma = E_1^T P_1 \Lambda$. In particular, we can see that $\operatorname{He}(\Lambda) \in \mathbb{S}^r_{++}$; otherwise, assumption (i) of Lemma 3 fails because $Q$ is non-singular. Using this representation, we can give explicit forms of $F$, $F^*$ and $F^\perp$, where $R$, with row blocks $R_1$, $R_2$ and $R_3$, denotes the inverse of $Q$. We characterize the two dual faces $F^*$ and $G^*$ in the following claim.

Claim 4
Let $X \in G^*$ be as in (29). Then $-\operatorname{He}(E_1^T X G_1) \in F^*$ if and only if $-\operatorname{He}(\bar{X}_3 P_2^T G_1 R_2^T) \in \mathbb{S}^{n-r}_+$ and $\bar{X}_3 P_2^T G_1 R_3^T = O$. In particular, if $X \in G^\perp$, then $-\operatorname{He}(E_1^T X G_1) \in F^\perp$.
From the inequality, we obtain $-\operatorname{He}(\bar{X}_3 P_2^T G_1 R_2^T) \in \mathbb{S}^{n-r}_+$ by setting $S_5 = O$. In addition, we define $S_5 = (\bar{X}_3 P_2^T G_1 R_3^T)^T$ and $\alpha > \sigma_{\max}(S_5)$, where $\sigma_{\max}(S_5)$ stands for the maximum singular value of $S_5$. Substituting these into the inequality implies $S_5 = (\bar{X}_3 P_2^T G_1 R_3^T)^T = O$. Therefore we can conclude the first statement.
For the second statement, if $X \in G^\perp$, then $\bar{X}_3 = O$, and thus $-\operatorname{He}(E_1^T X G_1) \bullet S = 0$ for any $S \in F$. This implies the second statement.
We consider case (I), i.e., $X_2 \in G^\perp$. Then $\hat{W}_{zu,2} \in F^\perp$ follows from Claim 4.

We next consider case (II), i.e., $X_2 \in G^* \setminus G^\perp$, so that $\bar{X}_3 \in \mathbb{S}^{n-r}_+ \setminus \{O\}$ in the representation of $X_2 \in G^*$. We can decompose $\bar{X}_3 = FF^T$ with a full column rank matrix $F \in \mathbb{R}^{(n-r)\times k}$, where $k$ is the rank of $\bar{X}_3$. Then it follows from Claim 4 that $F^T P_2^T G_1 R_3^T = O$. In addition, it follows from Claim 4 and Lemma 4 that there exists $\Lambda_F \in \mathbb{R}^{k \times k}$ such that $\operatorname{He}(\Lambda_F) \in \mathbb{S}^k_+$ and $-R_2 G_1^T P_2 F = F\Lambda_F$.

Claim 5 The Sylvester equation $S^T \Sigma^{-1}\Lambda^T + \Lambda_F S^T = F^T P_2^T G_1 R_1^T$ has a unique solution $S \in \mathbb{R}^{r \times k}$.

We define $\tilde{X}$ by $\tilde{X} = P \begin{bmatrix} \alpha\Sigma & SF^T \\ FS^T & FF^T \end{bmatrix} P^T$, where $\alpha$ is a sufficiently large positive number and $S$ is the solution in Claim 5. Then $\tilde{X} \in \mathbb{S}^n_+$ if we select $\alpha > 0$ such that $\alpha\Sigma - SS^T \in \mathbb{S}^r_{++}$. The following claim ensures that $(\tilde{X}, -\operatorname{He}(E_1^T \tilde{X} G_1))$ is also a solution of (14). In particular, the rank of $\tilde{X}$ is greater than $\operatorname{rank} X_1$. This, however, contradicts assumption (ii) of Lemma 3. Therefore it follows from case (I) and Corollary 2 that (23) has no solutions.

Proof of Claim 6 Note that $-G_1^T P_1 \Sigma = E_1^T P_1 \Lambda$, $-R_2 G_1^T P_2 F = F\Lambda_F$, $F^T P_2^T G_1 R_3^T = O$ and $RQ = I$. Using these equations, we obtain the desired expression for $-R\operatorname{He}(E_1^T \tilde{X} G_1)R^T$.

The (2, 1)st block of $-R\operatorname{He}(E_1^T \tilde{X} G_1)R^T$ is zero because $S$ is a solution of the Sylvester equation in Claim 5. Since $\operatorname{He}(\Lambda) \in \mathbb{S}^r_{++}$, $\operatorname{He}(\Lambda_F) \in \mathbb{S}^k_+$ and $\alpha$ is a sufficiently large positive number, the matrix $-\operatorname{He}(E_1^T \tilde{X} G_1)$ is positive semidefinite.
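The unique solvability asserted in Claim 5 is the standard condition for Sylvester equations: $AX + XB = C$ has a unique solution exactly when $A$ and $-B$ share no eigenvalue, which holds in the setting of Claim 5 because the two coefficient spectra lie in opposite half-planes. A numeric sketch, assuming SciPy and illustrative coefficient matrices (not the ones from Claim 5):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
r, k = 4, 2   # illustrative dimensions

# A X + X B = C has a unique solution X exactly when A and -B have
# no common eigenvalue; here A is positive definite and B is PSD,
# so their spectra cannot collide.
A_ = rng.standard_normal((r, r))
A = A_ @ A_.T + r * np.eye(r)   # eigenvalues with positive real part
B_ = rng.standard_normal((k, k))
B = B_ @ B_.T                   # eigenvalues with nonnegative real part
C = rng.standard_normal((r, k))

X = solve_sylvester(A, B, C)
assert np.allclose(A @ X + X @ B, C)
```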