A limiting analysis on regularization of singular SDP and its implication to infeasible interior-point algorithms

We consider primal-dual pairs of semidefinite programs and assume that they are singular, i.e., both primal and dual are either weakly feasible or weakly infeasible. Under such circumstances, strong duality may break down and the primal and dual might have a nonzero duality gap. Nevertheless, there are arbitrarily small perturbations to the problem data which would make them strongly feasible, thus zeroing the duality gap. In this paper, we conduct an asymptotic analysis of the optimal value as the perturbation for regularization is driven to zero. Specifically, we fix two positive definite matrices I_p and I_d, say (typically the identity matrices), and regularize the primal and dual problems by shifting their associated affine spaces by ηI_p and εI_d, respectively, to recover interior feasibility of both problems, where ε and η are positive numbers. Then we analyze the behavior of the optimal value of the regularized problem when the perturbation is reduced to zero keeping the ratio between η and ε constant. A key feature of our analysis is that no further assumptions such as compactness or constraint qualifications are ever made. It will be shown that the optimal value of the perturbed problem converges to a value between the primal and dual optimal values of the original problems.
Furthermore, the limiting optimal value changes “monotonically” from the primal optimal value to the dual optimal value as a function of θ, if we parametrize (ε, η) as (ε, η) = t(cos θ, sin θ) and let t → 0. Finally, the analysis leads us to the relatively surprising consequence that some representative infeasible interior-point algorithms for SDP generate sequences converging to a number between the primal and dual optimal values, even in the presence of a nonzero duality gap. Though this result is more of theoretical interest at this point, it might be of some value in the development of infeasible interior-point algorithms that can handle singular problems.


Introduction
Strong feasibility of primal and dual problems is a standard regularity condition in convex optimization, e.g., [24], [36, Chapter 3]. Once this condition is satisfied, powerful algorithms such as interior-point algorithms and the ellipsoid algorithm can be applied to solve them efficiently, at least in theory. On the other hand, if a problem at hand does not satisfy this condition, it can be much harder to solve. For instance, the problem may have a positive duality gap. Owing to advances in optimization modelling techniques, there are many problems which do not satisfy primal-dual strong feasibility by nature.
A first attempt to apply interior-point algorithms to such problems would be to perturb the problem to recover strong feasibility on both sides, i.e., "regularization." But it is not clear how this perturbation affects the optimal value. In this paper, we focus on semidefinite programs (SDP) and conduct an asymptotic analysis of the optimal value function when the problem is perturbed slightly to recover primal-dual strong feasibility. The analysis is general enough to be applicable to any ill-behaved problem without assuming constraint qualifications, and has interesting implications for the convergence theory of interior-point algorithms.
It is known that every SDP falls into one of four statuses: strongly feasible, weakly feasible, weakly infeasible and strongly infeasible, e.g., [23]. Difficult situations like a positive duality gap may occur when the problem is either weakly feasible or weakly infeasible. We may call such problems "singular." A standard method to deal with singular problems in semidefinite programming and general conic convex programming is facial reduction [4-7, 15, 40, 43, 46]. This approach recovers strong feasibility by finding the minimal face containing the feasible region. While many of the earlier papers on facial reduction focused on weakly feasible problems, it is relatively recent that weak infeasibility has been analyzed in this context [15, 18, 29]. Along this line of developments, the paper [20] showed that any SDP can be solved completely just by calling an interior-point oracle polynomially many times by using facial reduction, where the interior-point oracle is an idealized interior-point algorithm which returns a primal-dual optimal solution given a primal-dual strongly feasible SDP. In the context of SDPs with positive duality gaps, Ramana developed an extended Lagrangian dual SDP for which strong duality always holds [34]. Later it was shown in [35] that this dual problem is strongly related to facial reduction; see also [28].
Implementation of a facial reduction algorithm is subtle and not easy, being vulnerable to rounding errors. Nevertheless, it is worth mentioning that there are several recent works focused on practical issues regarding facial reduction or on heuristics based on facial reduction [9, 30, 31, 49].
So far, we have discussed approaches based on (or related to) facial reduction in order to deal with singular SDPs. Unrelated to that, the paper [16] considered an application of the Douglas-Rachford algorithm to the analysis of pathological behavior in SDPs. Interestingly, they show it is sometimes possible to identify the presence of positive duality gaps by observing whether certain sequences converge to 0 or to ∞; see [16, Figure 1, Sections 2.8 and 2.9].
As mentioned previously, in this paper we will consider yet another approach for analyzing singular SDPs: regularization. The idea is to perturb the problem slightly to recover strong feasibility on both primal and dual sides. Once strong feasibility is recovered, we may, say, apply interior-point algorithms to the regularized problems. However, the resulting approximate optimal solution is not guaranteed to be close to the optimal solution of the original problem, though intuitively we might expect or hope so. In particular, if we consider an SDP problem with a finite and nonzero duality gap, it is not clear what happens with the optimal value and the optimal solutions of the regularized problem, viewed as functions of the perturbation, when the perturbation is reduced to zero.
Analyzing this problem is one of the main topics of the current paper. We consider primal and dual pairs of semidefinite programs and assume they are singular, i.e., either weakly feasible or weakly infeasible (see Section 2.1 for definitions). Under these circumstances, there are arbitrarily small perturbations which make the perturbed pair primal-dual strongly feasible. Then, we fix two positive definite matrices, and shift the associated affine spaces of the primal and dual slightly in the direction of these matrices so that the perturbed problems have interior feasible solutions. Under this setting, we analyze the behavior of the optimal value of the perturbed problem when the perturbation is reduced to zero while keeping the ratio between the primal and dual perturbations fixed.
First, we demonstrate that, if a perturbation is added only to the primal problem to recover strong feasibility, then the optimal value of the perturbed problem converges to the dual optimal value as the perturbation is reduced to zero, even in the presence of a nonzero duality gap. An analogous proposition holds for the dual problem. We derive them as a significantly simplified version of the classical asymptotic strong duality theorem (see, for instance, [1, 3, 8, 23, 24, 36] and Chapter 2 of [42]).
Then we analyze the case where perturbation is added to both the primal and dual sides of the problem. We will demonstrate that in that case the limiting optimal value of the perturbed problems converges to a value between the primal and dual optimal values of the original problem even in the presence of a nonzero duality gap. The limiting optimal value is a function of the relative weight of the primal and dual perturbations, and decreases monotonically from the primal optimal value to the dual optimal value as the relative weight shifts from the dual side to the primal side.
The result provides an interesting implication for the behavior of infeasible interior-point algorithms applied to general SDPs [12, 13, 25, 27, 32, 44, 48]. In particular, we focus on two well-known polynomial-time infeasible interior-point algorithms by Zhang [48] and Potra and Sheng [32], and prove the following (see Theorems 5 and 6):
1. If neither the primal nor the dual is strongly infeasible, then: (a) the algorithms always generate sequences (X^k, S^k, y^k) that are asymptotically primal-dual feasible and such that the "duality gap" X^k • S^k converges to zero;
(b) the sequence of modified (primal and dual) objective values converges to a number in [θ_D, θ_P], where θ_P and θ_D are the primal optimal value and the dual optimal value, respectively.
2. Otherwise (i.e., if either the primal or the dual is strongly infeasible), the algorithms fail to generate a sequence such that the duality gap X^k • S^k converges to zero. (Needless to say, there is no way to generate an asymptotically primal-dual feasible sequence in this case.)
One implication of the result above is that, at least in theory, these interior-point algorithms generate sequences converging to the optimal value as long as strong feasibility is satisfied on one side of the problem. Furthermore, even in the presence of a finite duality gap, they still generate sequences converging to values between the primal and dual optimal values. It is also worth mentioning that our analysis shows that, by setting appropriate initial iterates, it is possible to control how close the limit value will be to the primal or the dual optimal values. Though this result is more of theoretical interest, it might be of some value if one wants to solve mixed-integer SDP (MISDP) through branch-and-bound and linear SDP relaxations. As discussed in [10], it is quite possible that the relaxations eventually fail to satisfy strong feasibility on at least one side of the problem.
Nevertheless, the values obtained by the infeasible interior-point methods described above can still be used as bounds on the optimal values of the relaxed linear SDPs regardless of regularity assumptions or constraint qualifications (at least in theory).
This paper is organized as follows. In Section 2, we describe our main results. Section 3 is a preliminary section where we review asymptotic strong duality, infeasible interior-point algorithms, and semialgebraic geometry. In Section 4, we develop the main analysis when both primal and dual problems are perturbed. In Section 5, we apply the developed results to an analysis of infeasible primal-dual algorithms. In Section 6, illustrative instances are presented.

Main Results
In this section, we introduce our main results after providing the setup and some preliminaries. We also review existing related results.

Setup and Terminology
First we introduce the notation. The space of n × n real symmetric matrices will be denoted by S^n. We denote the cone of n × n real symmetric positive semidefinite matrices and the cone of n × n real symmetric positive definite matrices by S^n_+ and S^n_{++}, respectively. For U, V ∈ S^n, we define the inner product U • V as ∑_{i,j} U_{ij} V_{ij}, and we use U ⪰ 0 and U ≻ 0 to denote that U ∈ S^n_+ and U ∈ S^n_{++}, respectively. The n × n identity matrix is denoted by I. We denote the Frobenius norm and the operator norm by ‖X‖_F and ‖X‖, respectively. For v ∈ R^k, we denote by ‖v‖ its Euclidean norm.
In this paper, we deal with the following standard form primal-dual semidefinite programs:

P: min C • X s.t. A_i • X = b_i, i = 1, . . ., m, X ⪰ 0,
D: max b^T y s.t. ∑_{i=1}^m A_i y_i + S = C, S ⪰ 0,

where C, A_i, i = 1, . . ., m, X, S are real symmetric n × n matrices and y ∈ R^m. For ease of notation, we define the mapping A from S^n to R^m:

A(X) ≡ (A_1 • X, . . ., A_m • X)^T, (1)

and introduce the affine space V ≡ {X ∈ S^n | A(X) = b} associated with P. We denote by v(P) and v(D) the optimal values of P and D, respectively. We use analogous notation throughout the paper to denote the optimal value of an optimization problem. For a maximization problem, the optimal value +∞ means that the optimal value is unbounded above and the optimal value −∞ means that the problem is infeasible. For a minimization problem, the optimal value −∞ means that the optimal value is unbounded below and the optimal value +∞ means that the problem is infeasible.
It is well-known that v(P) = v(D) holds under suitable regularity conditions, although, in general, we might have v(P) ≠ v(D), i.e., the problem may have a nonzero duality gap. We also note that v(P) and v(D) might not be attained.
In general, P is known to be in one of the following four mutually exclusive statuses (see [24]).
1. Strongly feasible: there exists a positive definite matrix satisfying the constraints of P, i.e., V ∩ S^n_{++} ≠ ∅. This is the same as Slater's condition.
2. Weakly feasible: P is feasible but not strongly feasible, i.e., V ∩ S^n_+ ≠ ∅ but V ∩ S^n_{++} = ∅.
3. Weakly infeasible: P is infeasible but the distance between S^n_+ and the affine space V is zero, i.e., V ∩ S^n_+ = ∅ but the zero matrix belongs to the closure of S^n_+ − V.
4. Strongly infeasible: P is infeasible but not weakly infeasible. Note that this includes the case where V = ∅.
The status of D is defined analogously by replacing V by the affine set {S ∈ S^n | S = C − ∑_{i=1}^m A_i y_i for some y ∈ R^m}. We say that a problem is asymptotically feasible if it is either feasible or weakly infeasible.
As a reminder, we say that a problem is singular if it is either weakly feasible or weakly infeasible.

Main Results
Now we introduce the main results of this paper. We say that a problem is asymptotically primal-dual feasible (or asymptotically pd-feasible, for short) if both P and D are asymptotically feasible. Evidently, the problem is asymptotically pd-feasible if and only if each of P and D is either feasible or weakly infeasible. The analysis in this paper is conducted mainly under this condition.
Note that asymptotic pd-feasibility is a rather weak condition. Many difficult situations such as finite nonzero duality gaps and weak infeasibility of both P and D are covered under this condition. Furthermore, since strong infeasibility can be detected by solving auxiliary SDPs that are both primal and dual strongly feasible (see [19]), whether a given problem is asymptotically pd-feasible can also be checked by solving SDPs that are primal and dual strongly feasible.
We consider the following primal-dual pair P(ε, η) and D(ε, η) obtained by perturbing P and D with two positive definite matrices I_p and I_d and two nonnegative parameters ε and η:

P(ε, η): min (C + εI_d) • X s.t. A(X) = b + ηA(I_p), X ⪰ 0, (2)

and

D(ε, η): max (b + ηA(I_p))^T y s.t. C + εI_d − ∑_{i=1}^m A_i y_i = S, S ⪰ 0, (3)

where we used (1) to write the perturbed primal constraints as A(X) = b + ηA(I_p). While I_p and I_d represent the direction of perturbation, ε and η represent the amount of perturbation. In particular, we could take, for example, I_p = I_d = I, where I is the n × n identity matrix. We note that the perturbed pair (2) and (3) was used in the study of infeasible interior-point algorithms [32] and facial reduction [40].
If the problem is asymptotically pd-feasible, D(ε, η) is strongly feasible for any ε > 0 and P(ε, η) is strongly feasible for any η > 0. To see the strong feasibility of P(ε, η), we observe that there always exists X ⪰ −ηI_p/2 satisfying A_i • X = b_i, i = 1, . . ., m, since P is weakly infeasible or feasible. Then, we see that the matrix X̄ = X + ηI_p is positive definite and a feasible solution to P(ε, η). We emphasize that the primal-dual pair P(ε, η) and D(ε, η) is a natural and possibly one of the simplest regularizations of P and D which ensures primal-dual strong feasibility under perturbation.
We define v(ε, η) to be the common optimal value of P(ε, η) and D(ε, η) if they coincide. If the optimal values differ, v(ε, η) is not defined. Suppose that P and D are asymptotically pd-feasible. In this case, from the duality theory of convex programs, the function v(ε, η) has the following properties:
1. v(ε, η) is well-defined and finite whenever ε > 0 and η > 0.
2. v(ε, 0) is well-defined as long as ε > 0, and it takes the value +∞ if P is infeasible.
3. v(0, η) is well-defined as long as η > 0, and it takes the value −∞ if D is infeasible.
Therefore, although the regularized pair P(ε, η) and D(ε, η) satisfies primal-dual strong feasibility if ε > 0 and η > 0, it is not clear whether this is actually useful in solving SDPs under notorious situations such as the presence of nonzero duality gaps. This is precisely one of the main topics of this paper: an analysis of the behavior of the regularized problems without imposing any restrictive assumptions.
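To make the role of the regularized pair concrete, the following is a minimal numerical sketch (an illustration, not part of the original development) of how v(ε, η) can be evaluated for fixed ε > 0 and η > 0, taking I_p = I_d = I as suggested above. It assumes the Python modelling package cvxpy; the function name regularized_value and the data format (a list of matrices A_i and a vector b) are illustrative choices only.

    import numpy as np
    import cvxpy as cp

    def regularized_value(C, A_list, b, eps, eta):
        """Optimal value of the regularized primal P(eps, eta) of (2):
        min (C + eps*I) . X  s.t.  A_i . X = b_i + eta*(A_i . I),  X positive semidefinite.
        For eps, eta > 0 the pair P(eps, eta), D(eps, eta) is primal-dual strongly
        feasible, so this value coincides with v(eps, eta)."""
        n = C.shape[0]
        I = np.eye(n)
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0]
        constraints += [cp.trace(A @ X) == bi + eta * np.trace(A)
                        for A, bi in zip(A_list, b)]
        prob = cp.Problem(cp.Minimize(cp.trace((C + eps * I) @ X)), constraints)
        prob.solve()
        return prob.value

Solving such a regularized instance along a ray (ε, η) = t(cos θ, sin θ) with decreasing t > 0 gives a numerical view of the limits studied in the remainder of the paper.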
In this context, it is worth mentioning the following classical asymptotic strong duality results [1, 8]: (i) if we regularize only the dual problem, then the optimal value v(ε, 0) converges to v(P) as ε ↓ 0, and (ii) if we regularize only the primal problem, then the optimal value v(0, η) converges to v(D) as η ↓ 0. This theory received renewed attention with the emergence of conic linear programming; see, for instance, [3, 23, 24, 36] and Chapter 2 of [42]. We will prove (i) and (ii) in the next section, see Theorem 3. In comparison with the classical asymptotic strong duality theorem, Theorem 3 considers a smaller perturbation space. Now we are ready to describe the main results. They are developed to interpolate between (i) and (ii). The first result is the following theorem.
Here we remark that Theorem 1 includes the case where the limit is ±∞. Theorem 1 implies that the limit of the optimal value of the perturbed system exists, but it is a function of the direction used to approach (0, 0). For θ ∈ [0, π/2], let us consider the function v_a(θ) ≡ lim_{t↓0} v(t cos θ, t sin θ), which is the limiting optimal value of v(·) when the perturbation approaches zero along the direction making an angle of θ with the ε axis. With that, v_a(0) and v_a(π/2) are the special cases corresponding to dual-only perturbation and primal-only perturbation, respectively. So we abuse notation slightly and define

v_a(D) ≡ v_a(0) and v_a(P) ≡ v_a(π/2). (5)

Below is our second main result.
Theorem 2 If the problem is asymptotically pd-feasible, the following statements hold.
Theorem 2 is proved by using Theorem 4, which establishes monotonicity and convexity of lim_{t→0} v(t, tβ). Now we turn our attention to the connection of these main results to the convergence analysis of primal-dual infeasible interior-point algorithms. Indeed, the pair (2) and (3) appears often in the analysis of infeasible interior-point algorithms. In particular, primal-dual infeasible interior-point algorithms typically generate a sequence of feasible solutions to P(t_k, t_k) and D(t_k, t_k), where I_p and I_d are determined by the initial value of the algorithm and t_k is a positive sequence converging to 0. By Theorem 2, the common optimal value v(t_k, t_k) of P(t_k, t_k) and D(t_k, t_k) converges to v_a(π/4), which is between v(P) and v(D). Therefore, if we can show that an infeasible interior-point algorithm generates a sequence which approaches v(t_k, t_k) as k → ∞, we can prove that the sequence converges to v_a(π/4) in the end.
Exploiting this idea, we obtain the following convergence results without any assumption on the feasibility status of the problem. We consider two typical well-known polynomial-time algorithms by Zhang [48] and Potra and Sheng [32], but the idea can be applied to a broad class of infeasible interior-point algorithms to obtain analogous results. They are stated formally in Theorem 5 and Theorem 6, and summarized as follows:
1. The algorithms [32, 48] generate asymptotically pd-feasible sequences with the duality gap X^k • S^k and t_k converging to zero if and only if P and D are asymptotically pd-feasible.
2. If P and D are asymptotically pd-feasible, the sequence of modified primal and dual objective values converges to a common value between the primal optimal value v(P) and the dual optimal value v(D) even in the presence of nonzero duality gap.
The modified primal and dual objective values mentioned in the statements can be easily computed using the current iterate and do not require any extra knowledge.
If P and D are not asymptotically pd-feasible, namely, if one of the problems is strongly infeasible, the algorithms get stuck at a certain point: they fail to generate an asymptotically pd-feasible sequence and fail to drive the duality gap and t_k to 0. But the algorithms never fail to generate asymptotically pd-feasible sequences as long as the problems are asymptotically pd-feasible.
We note that Theorems 5 and 6 are to some extent surprising in that infeasible interior-point algorithms work in a meaningful manner without making any restrictive assumptions, at least in theory. This might have interesting implications when solving SDP relaxations arising from hard optimization problems such as MISDP by using infeasible interior-point algorithms. The theorems guarantee that the modified objective function value converges to a value between the primal and dual optimal values. Therefore, the limiting modified objective value can always be used to bound the optimal value of linear SDP relaxations obtained when solving MISDP via, say, branch-and-bound as in [10]. We should mention, however, that if one tries to implement this idea, one would still need to find a way to overcome the severe numerical difficulties that may happen when attempting to solve singular SDPs directly.
Finally, while the results of this paper clarify some aspects of the limiting behavior of infeasible interior-point algorithms when applied to a problem with a nonzero duality gap, we remark that deriving similar results for self-dual embedding approaches is still an open problem.

Related Work
Our work is closely related to perturbation theory and sensitivity analysis, which are, of course, classic topics in the optimization literature. In particular, there are a number of results on perturbation of semidefinite programs, including [3, 23, 24, 42], which were mentioned in the introduction. The book by Bonnans and Shapiro [3], for instance, has many results on the perturbation and sensitivity analysis of general conic programs that are also applicable to SDPs. See also [37] for earlier results in the context of convex optimization. However, many of those results require that some sort of constraint qualification holds.
In particular, in Chapter 4 of [3] there is a discussion of a family of optimization problems of the form

min_x f(x, u) s.t. G(x, u) ∈ K, (6)

where f and G are functions depending on the parameter u and K is a closed convex set in some Banach space. Denote by v(u) the optimal value of (6). For some fixed u_0, many results are proved about the continuity of v(·) [3, Proposition 4.4], or the directional derivatives of v(·) in a neighborhood of u_0 [3, Theorem 4.24]. However, these existing results do not cover the situations we deal with in this paper. [3, Proposition 4.4], for example, requires a condition called inf-compactness, which implies, in particular, that the set of optimal solutions of the problem associated with v(u_0) be compact. [3, Theorem 4.24], on the other hand, requires that the set of optimal solutions associated with v(u_0) be non-empty. In contrast, neither compactness nor non-emptiness is assumed in this paper.
The perturbation we consider is closely related to the infeasible central path appearing in primal-dual infeasible interior-point algorithms. In fact, we use some properties of the infeasible central path in our proof. The papers [21, 33] showed the analyticity of the entire trajectory, including the end point at the optimal set, under the existence of primal-dual optimal solutions satisfying strict complementarity conditions. A very recent paper [41] analyzes the limiting behavior of singular infeasible central paths taking into account the singularity degree. Therein, the authors analyze the speed of convergence under the assumption that the feasible region exists and is bounded. No strong feasibility assumption is made, although we remark that if the feasible region of a primal SDP is non-empty and bounded, then its dual counterpart must satisfy Slater's condition. While their work conducts a detailed limiting analysis on the asymptotic behavior of the central path, our analysis deals with the limiting behavior of the optimal value of the perturbed system under weaker assumptions.
In practice, it may be necessary to estimate the error of an approximate optimal solution to a problem with a finite perturbation. In this regard, an interesting and closely related topic to the limiting perturbation analysis is error bounds. Error bound analysis is relatively easy under primal-dual strong feasibility, but it becomes much harder for singular SDPs. See [22, 43] for SDP and SOCP, and [17] for a more general class of convex programs. The relationship between forward and backward errors of a semidefinite feasibility system is closely related to its singularity degree, which, roughly, is defined as the number of facial reduction steps necessary for regularizing the problem. Recently, some analysis of limiting behaviors of the external (or infeasible) central path involving the singularity degree was developed in [41]. Finally, we mention [39], which conducted a sensitivity analysis of SDP under perturbation of the coefficient matrices A_i.

Preliminaries
In this section, we introduce three ingredients of this paper, namely, asymptotic strong duality, infeasible interior-point algorithms and semialgebraic geometry.

Asymptotic Strong Duality
A main difference between the duality theory in linear programming and that in general convex programming is that the latter requires some regularity conditions for strong duality to hold. If such a regularity condition is violated, then the primal and dual may have a nonzero duality gap [34]. Nevertheless, the so-called asymptotic strong duality holds even in such singular cases [1, 3, 8, 23, 24, 36, 42]. Here we quickly review the result and work on it a bit to derive a modified and simplified version suitable for our purposes.
Note that the Asymptotic Duality Theorem includes the cases where a-val(·) = ±∞. Now we develop a simplified version of the Asymptotic Duality Theorem. Let ε ≥ 0, and let D(ε) be D(ε, 0), i.e., the relaxed dual problem

max b^T y s.t. C + εI_d − ∑_{i=1}^m A_i y_i ⪰ 0. (8)

According to the notation introduced in Section 2.2, the optimal value of (8) is written as v(ε, 0). Recall also that lim_{ε↓0} v(ε, 0) = v_a(0) = v_a(D). Next we consider an analogous relaxation on the primal side. Notice that (8) is obtained by shifting the semidefinite cone by −εI_d. The analogous perturbation of the primal problem is given by

min C • X s.t. A(X) = b, X ⪰ −ηI_p, (9)

where η ≥ 0. Letting X̄ ≡ X + ηI_p, we obtain

min C • X̄ − ηC • I_p s.t. A(X̄) = b + ηA(I_p), X̄ ⪰ 0. (10)

The optimal value of (9) is monotone decreasing in η, because the feasible region enlarges as η is increased (strictly speaking, it does not shrink). Observe also that this problem is P(0, η) with the objective function shifted by a constant −ηC • I_p. Since this constant vanishes as η → 0, the optimal values of (9) and (10) have the same limit as η ↓ 0. Now we prove Theorem 3, which is a simplified version of the asymptotic duality theorem discussed above. Compared with the asymptotic duality results discussed in [1, 3, 8, 23, 24, 36], the key difference is that we only consider perturbations along a single direction in each of the primal and dual problems, while in the aforementioned works the perturbation space is larger. Indeed, in the Asymptotic Duality Theorem (as stated above), the perturbation space is ‖∆b‖ < ε and ‖∆C‖ < ε at the primal and dual sides, respectively. In contrast, in Theorem 3 below, we only consider perturbations along a single direction in each of the primal and dual problems (i.e., along I_p and I_d, respectively). Since it is not a priori obvious that the smaller perturbation space is still enough to close the duality gap, we provide a detailed proof showing how to go from the Asymptotic Duality Theorem to Theorem 3.

Theorem 3
The following statements hold.
1. If D is asymptotically feasible, then v_a(D) = v(P).
2. If P is asymptotically feasible, then v_a(P) = v(D).
Proof. Recall that by definition (see (5)), we have v_a(0) = v_a(D) and v_a(π/2) = v_a(P). First we show that v_a(D) = v(P). From the Asymptotic Duality Theorem, a-val(D) = v(P) holds, including the special cases where a-val(D) = ±∞. We observe that a-val(D) is unchanged if the condition ‖∆C‖ < ε in (7) is replaced by ‖∆C‖ ≤ ε. Since v_a(0) is obtained by restricting the condition on ∆C from "‖∆C‖ ≤ ε" to "∆C = εI_d/‖I_d‖", we obtain v_a(D) = v_a(0) ≤ a-val(D). The converse inequality a-val(D) ≤ v_a(D) also holds; its derivation uses the relation I ⪯ ‖I_d^{-1}‖ I_d. The proof of item 1 is complete. We proceed to prove item 2. From the Asymptotic Duality Theorem again, we have v(D) = a-val(P). Hence, for the sake of proving assertion 2, it suffices to show that v_a(P) = a-val(P). The proof of the inequality v_a(P) ≥ a-val(P) is analogous to the proof for v_a(D) ≤ a-val(D). We will now show the converse inequality. If a-val(P) = +∞, then v_a(P) ≥ a-val(P) implies that v_a(P) = +∞. Therefore, in what follows we assume that a-val(P) < +∞.
By assumption, P is not strongly infeasible (see Section 2.1). By the definition of a-val(P), for every ε > 0 sufficiently small, there exist X_ε and ∆b_ε such that ‖∆b_ε‖ ≤ ε, X_ε is feasible to "A(X) = b + ∆b_ε, X ⪰ 0", and

C • X_ε ≤ a-val(P) + ε. (13)

Note that this is still valid even when a-val(P) = −∞.
In addition, the fact that P is not strongly infeasible implies the existence of a solution to the system "A(X′) = b". As a consequence, "A(Y) = ∆b_ε" also has a solution when ∆b_ε is as described above. Otherwise, "A(X) = b + ∆b_ε" would be infeasible, contradicting the existence of X_ε above.
Next, we show that there exists M > 0 depending only on A such that "A(Y) = ∆b_ε" has a solution with norm bounded by M‖∆b_ε‖. Let V denote the set of solutions to "A(Y) = ∆b" and let S be a symmetric matrix. Denote by dist(S, V) the Euclidean distance between S and V. Hoffman's lemma (e.g., [11, Theorem 11.26]) says that there exists a constant M depending on A but not on ∆b such that for every S, we have that dist(S, V) is bounded above by M‖∆b − A(S)‖. Taking S = 0, we conclude the existence of Y satisfying A(Y) = ∆b and ‖Y‖ ≤ M‖∆b‖.
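As a side illustration (not part of the original argument), the constant M can be realized concretely for this particular linear system: assuming the system A(Y) = ∆b is consistent, the minimum-norm solution obtained from the pseudo-inverse of the matrix whose rows are the vectorized A_i satisfies the required bound, with M equal to the spectral norm of that pseudo-inverse. A minimal numpy sketch (the function name is an illustrative choice):

    import numpy as np

    def min_norm_solution(A_list, delta_b):
        # Stack vec(A_i) as rows: (A_i . Y) equals (row i of A) @ vec(Y).
        n = A_list[0].shape[0]
        A = np.vstack([Ai.reshape(1, n * n) for Ai in A_list])
        # Least-norm solution of A y = delta_b (assumes the system is consistent).
        y_vec = np.linalg.pinv(A) @ delta_b
        Y = y_vec.reshape(n, n)  # lies in span{A_i}, hence symmetric
        M = np.linalg.norm(np.linalg.pinv(A), 2)  # one admissible constant
        # ||Y||_F <= M * ||delta_b|| by construction of the least-norm solution.
        return Y, M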
Let Y_ε be one such solution. Then ‖Y_ε‖ ≤ M‖∆b_ε‖ ≤ Mε for each sufficiently small ε > 0 and hence

lim_{ε↓0} ‖Y_ε‖ = 0. (14)

Observing that Y_ε ⪯ ‖Y_ε‖ I ⪯ ‖Y_ε‖ ‖I_p^{-1}‖ I_p, we set η_ε ≡ ‖Y_ε‖ ‖I_p^{-1}‖ and X′_ε ≡ X_ε − Y_ε + η_ε I_p. With that, X′_ε is positive semidefinite and is a feasible solution to P(0, η_ε), where η_ε approaches 0 as ε → 0 because of (14).
We are now ready to show the desired assertion. Notice that we have

lim_{ε↓0} (C • X′_ε − C • X_ε) = 0, (15)

since X′_ε − X_ε = η_ε I_p − Y_ε and (14) holds. The feasibility of X′_ε for P(0, η_ε), combined with (13) and (15), implies v_a(P) ≤ a-val(P). The proof is complete.
Theorem 3 motivates our subsequent discussion and leads naturally to an examination of what happens when P and D are simultaneously perturbed, which is the focus of Theorems 1, 2 and 4.

Infeasible Primal-dual Interior-point Algorithms
We introduce some basic concepts of infeasible primal-dual interior-point algorithms for SDP [32, 45, 47, 48]. This is because our analysis leads to a novel convergence property of infeasible primal-dual interior-point algorithms when applied to singular problems. We also need some theoretical results about infeasible interior-point algorithms in the proof of Theorem 1. In this subsection, we assume that A_i (i = 1, . . ., m) are linearly independent. This assumption is not essential, but it ensures uniqueness of y and ∆y in systems of equations of the form S = ∑_i A_i y_i + C′ and ∆S = ∑_i A_i ∆y_i + R′ with respect to (S, y) and (∆S, ∆y), respectively, where C′ and R′ are constants; such systems appear throughout the analysis.

Outline of infeasible primal-dual interior-point algorithms
Primal-dual interior-point methods for P and D are based on the following optimality conditions:

XS = 0, A(X) = b, ∑_{i=1}^m A_i y_i + S = C, X ⪰ 0, S ⪰ 0. (16)

Rather than solving this system directly, a relaxed system is considered:

XS = νI, A(X) = b, ∑_{i=1}^m A_i y_i + S = C, X ≻ 0, S ≻ 0, (17)

where ν > 0. The algorithm solves (16) by solving (17) approximately and reducing ν gradually to zero repeatedly. This amounts to following the central path towards "ν = 0". Let us take a closer look at the algorithm proposed by Zhang, more precisely, Algorithm-B of [48]. Let (X, S, y) be the current iterate such that X ≻ 0 and S ≻ 0. The method employs the Newton direction to solve the system (17). More precisely, the first equation XS = νI is replaced with an equivalent symmetric reformulation

H_P(XS) ≡ (PXSP^{-1} + (PXSP^{-1})^T)/2 = νI, (19)

where P is a constant nonsingular matrix. In Zhang's algorithm, the constant matrix P is set to S^{1/2}. Then we consider the nonlinear system of equations obtained from (17) by replacing XS = νI with (19). The Newton direction (∆X, ∆S, ∆y) for that modified system at the point (X, S, y) is the unique solution to the following system of linear equations:

A(∆X) = b − A(X),
∑_{i=1}^m A_i ∆y_i + ∆S = C − S − ∑_{i=1}^m A_i y_i,
H_P(∆X S + X ∆S) = νI − H_P(XS). (20)
Starting from the kth iterate (X^k, S^k, y^k) = (X, S, y), the next iterate (X^{k+1}, S^{k+1}, y^{k+1}) is determined as

(X^{k+1}, S^{k+1}, y^{k+1}) = (X, S, y) + s_k(∆X, ∆S, ∆y). (21)

The stepsize 0 < s_k ≤ 1 is chosen not only so that X^{k+1} and S^{k+1} are positive definite but also carefully so that they stay close to the central path in order to ensure good convergence properties. Then ν is updated appropriately and the iteration continues. Now we briefly describe another representative polynomial-time infeasible primal-dual interior-point algorithm, developed by Potra and Sheng [32]. Let (X^0, S^0, y^0) be a point satisfying X^0 ≻ 0 and S^0 ≻ 0 and consider the path, parametrized by t ∈ (0, 1], consisting of points (X, S, y) with X ≻ 0 and S ≻ 0 whose infeasibility residuals and complementarity are reduced proportionally to t from those of the initial point (X^0, S^0, y^0). The algorithm follows this path by driving t → 0 and using a predictor-corrector method. We note that polynomial-time convergence is proved for both algorithms [32, 48] assuming the existence of optimal solutions (X^*, S^*, y^*) to P and D. In the analysis, the initial iterate (X^0, S^0, y^0) is set to (ρ_0 I, ρ_1 I, 0), where ρ_0 and ρ_1 are selected to be large enough in order to satisfy the conditions X^0 − X^* ≻ 0 and S^0 − S^* ≻ 0. Although the polynomial convergence analysis was conducted using this initial iterate, the algorithms themselves can be applied to any SDP problem by choosing (X^0, S^0, y^0) such that X^0 ≻ 0 and S^0 ≻ 0 as the initial iterate.
In many practical implementations of the algorithms [45, 47], different stepsizes are taken in the primal and dual spaces for the sake of practical efficiency. For simplicity of presentation, we only analyze the case (21), which corresponds to taking the same stepsize in the primal and dual spaces.
The following well-known property connects Theorems 1 and 2 to the analysis of infeasible interior-point algorithms.
Proposition 1 Let X^0 ≻ 0 and S^0 ≻ 0, and let {(X^k, S^k, y^k)} be a sequence generated by the primal-dual infeasible interior-point algorithms in [32, 48] with initial iterate (X^0, S^0, y^0). Let I′_d ≡ S^0 − (C − ∑_i A_i y^0_i) and let I′_p ≡ X^0 − X̂, where A(X̂) = b. Then, there exists a nonnegative sequence {t_k} such that the following equations hold (cf. the linear equality constraints of (2) and (3)):

C + t_k I′_d − ∑_{i=1}^m A_i y^k_i = S^k, A(X^k) = b + t_k A(I′_p). (23)

Proof. This result is a fundamental tool used in the analysis of the algorithms in [32, 48]. For the sake of completeness, here we prove the result only for Zhang's algorithm.
We prove the first relation of (23) by induction. For k = 0, the proposition holds by taking t_0 ≡ 1. Suppose that the relation (23) holds for k. Then, the search direction (∆X, ∆S, ∆y) is the solution to the linear system of equations (20) with (X, S, y) = (X^k, S^k, y^k). Because of the second equation of (20), we have

∑_{i=1}^m A_i (y^k_i + s_k ∆y_i) + (S^k + s_k ∆S) = (1 − s_k)(∑_{i=1}^m A_i y^k_i + S^k) + s_k C.

Since C + t_k I′_d − ∑_i A_i y^k_i = S^k holds by the induction assumption, the relation holds for k + 1 with t_{k+1} ≡ (1 − s_k)t_k. The primal relation, i.e., the right side in (23), follows similarly. Remark. In view of Proposition 1, by convention, we treat t_k as a part of the iterates of the algorithms. By its construction, we have t_0 = 1 and t_{k+1} = (1 − s_k)t_k for k = 0, 1, . . .

Path formed by points on the central path of perturbed problems
We fix ν to be a positive number and α, β to be positive constants, and consider the following system of equations and semidefinite conditions parametrized by t > 0:

A(X) = b + tβA(I_p), C + tαI_d − ∑_{i=1}^m A_i y_i = S, XS = νI, X ≻ 0, S ≻ 0. (25)

We denote by w_ν(t) ≡ (X_ν(t), S_ν(t), y_ν(t)) the solution of (25) (if it exists). If the problem is asymptotically pd-feasible, then for any t > 0, P(tα, tβ) and D(tα, tβ) are strongly feasible.
Then the solution of (25) defines a point on the central path with parameter ν of the primal-dual pair of strongly feasible SDPs

min (C + tαI_d) • X s.t. A(X) = b + tβA(I_p), X ⪰ 0, (26)
max (b + tβA(I_p))^T y s.t. C + tαI_d − ∑_{i=1}^m A_i y_i = S, S ⪰ 0, (27)

where we note that t is fixed in (26) and (27). In this case, w_ν(t) is ensured to exist and is uniquely determined for all t ∈ (0, ∞) (due to the assumption of linear independence of A_i, i = 1, . . ., m). Moreover, the set C ≡ {w_ν(t) | t ∈ (0, ∞)} forms an analytic path running through S^n_{++} × S^n_{++} × R^m. The existence and analyticity of C is a folklore result (e.g., [21, 33]), but we outline a proof in Appendix 7 based on a result in [26]. We note that the existence and analyticity of the path rely only on local conditions, so the existence of optimal solutions of P and D is not necessary. A special case where ν = 1 and C = 0 is analyzed in [40] in the context of facial reduction.
Since A(X_ν(t)) = b + tβA(I_p), C + tαI_d − ∑_i A_i y_{νi}(t) = S_ν(t), and X_ν(t)S_ν(t) = νI hold, we have

(C + tαI_d) • X_ν(t) − (b + tβA(I_p))^T y_ν(t) = X_ν(t) • S_ν(t) = nν. (29)

Let us denote by v_opt(t) the common optimal value of (26) and (27). Since

(b + tβA(I_p))^T y_ν(t) ≤ v_opt(t) ≤ (C + tαI_d) • X_ν(t) (30)

holds by weak duality, we see, together with (29), that

|v_opt(t) − (C + tαI_d) • X_ν(t)| ≤ nν (31)

holds for each t > 0.
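To see why (29) holds in the form given above, substitute the second equation of (25) into the primal objective and then use the first equation:

(C + tαI_d) • X_ν(t) − (b + tβA(I_p))^T y_ν(t)
  = (S_ν(t) + ∑_i A_i y_{νi}(t)) • X_ν(t) − ∑_i y_{νi}(t)(b_i + tβ A_i • I_p)
  = S_ν(t) • X_ν(t) + ∑_i y_{νi}(t)(A_i • X_ν(t) − b_i − tβ A_i • I_p)
  = S_ν(t) • X_ν(t) = trace(νI) = nν.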

Semialgebraic sets and the Tarski-Seidenberg Theorem
A set S in R^k is called basic semialgebraic if it can be written as the set of solutions of finitely many polynomial equalities and strict polynomial inequalities. Then, a set is said to be semialgebraic if it is a union of finitely many basic semialgebraic sets. In particular, a semialgebraic set in R is a union of finitely many points and intervals. For x = (x_1, . . ., x_k) ∈ R^k, let T(x) be the coordinate projection to R^{k−1} defined as T(x) ≡ (x_2, . . ., x_k). The Tarski-Seidenberg Theorem states that a coordinate projection of a semialgebraic set is again a semialgebraic set in the lower-dimensional space.

Proof of the Main Results
In this section, we prove Theorems 1 and 2. We start with some basic properties of v(ε, η).
Proposition 2 If the problem is asymptotically pd-feasible, the following statements hold.
There are two sub-cases to consider: when ε_1 = 0 and when ε_1 > 0. In the latter sub-case, D(ε_1, η) and D(ε_2, η) are both feasible, since ε_1, ε_2 and η are all positive and asymptotic primal-dual feasibility was assumed. For simplicity, we define b̄ as the vector corresponding to the objective function of D(ε, η), so that b̄^T y = ∑_{i=1}^m (b_i + ηA_i • I_p) y_i. We let {y^k} and {ȳ^k} be sequences of feasible solutions of D(ε_1, η) and D(ε_2, η) whose objective values converge to v(ε_1, η) and v(ε_2, η), respectively. Then, for t ∈ [0, 1], we have that t y^k + (1 − t)ȳ^k is feasible for D(tε_1 + (1 − t)ε_2, η), so that

v(tε_1 + (1 − t)ε_2, η) ≥ t b̄^T y^k + (1 − t) b̄^T ȳ^k.

Taking the limit with respect to k, we obtain

v(tε_1 + (1 − t)ε_2, η) ≥ t v(ε_1, η) + (1 − t) v(ε_2, η), (32)

as we desired. Now we deal with the sub-case where ε_1 = 0. By assumption, we have ε_2 > ε_1 = 0, implying that v(ε_2, η) is finite. Then, we can proceed analogously except that D(0, η) may be infeasible, so that v(0, η) may equal −∞. Since v(tε_1 + (1 − t)ε_2, η) is finite for all t ∈ [0, 1), we see that (32) indeed holds. This concludes the proof for the case where η > 0.
Finally, we deal with the case η = 0. In this case, we may assume that ε_1 is positive, since v(ε_1, 0) might not be well-defined otherwise. By assumption, D is asymptotically feasible, so D(ε, 0) is always feasible for ε > 0. Thus the optimal value of D(ε_1, 0) is either finite or +∞.

Moreover, for each fixed t > 0, the system (34) uniquely determines (X, S, y, δX, δS, δy), where (X, S, y) = w_ν(t) and (δX, δS, δy) = (dX_ν/dt(t), dS_ν/dt(t), dy_ν/dt(t)). The reason for that is as follows. By the discussion in Section 3.2.2, for fixed t > 0, X_ν(t), S_ν(t) are uniquely defined. Since the A_i are linearly independent, y_ν(t) must be unique as well. In order to see that δX, δS, δy are also uniquely determined, we take a look at the first three equations of (34) for fixed positive definite matrices X and S. They become linear equations in δX, δS, δy and determine a unique solution if and only if the kernel of the associated linear map φ(U, V, z) is trivial. Considering the first component of φ, we have the equation XU = −VS, which implies that νU = −SVS. Taking the inner product with V, and noting that U • V = 0 (which follows from the remaining components of φ), we obtain 0 = V • (SVS) = ‖S^{1/2}VS^{1/2}‖_F^2. Therefore, S^{1/2}VS^{1/2} = 0 and, since S is invertible, V = 0. By νU = −SVS, we have U = 0. Now we are ready to proceed. Let us denote by D the set of solutions to (34):

D = {(t, X, S, y, δX, δS, δy) | (t, X, S, y, δX, δS, δy) satisfies (34)}.

Each element of D can be seen as a pair consisting of a point on C and its tangent. Since the semidefinite conditions S ⪰ 0 and X ⪰ 0 can be written as the solution set of finitely many polynomial inequalities, D is a semialgebraic set. Now we claim that (C + tαI_d) • X_ν(t) is either monotonically increasing or monotonically decreasing for sufficiently small t. To this end, we analyze the set of local minimum points and local maximum points of (C + tαI_d) • X_ν(t) over (0, ∞). A necessary condition for a local minimum or maximum point is that the derivative vanishes:

αI_d • X_ν(t) + (C + tαI_d) • (dX_ν/dt)(t) = 0.

Recall that, for t̄ > 0, ((dX_ν/dt)(t̄), (dS_ν/dt)(t̄), (dy_ν/dt)(t̄)) is the tangent part (δX, δS, δy) of the unique solution to (34) with t = t̄. With that in mind, a necessary condition for (C + tαI_d) • X_ν(t) to have an extreme value at t̄ is that t̄ is in the set

T_1 ≡ {t > 0 | (t, X, S, y, δX, δS, δy) ∈ T for some (X, S, y, δX, δS, δy)},

where

T ≡ {(t, X, S, y, δX, δS, δy) ∈ D | αI_d • X + (C + tαI_d) • δX = 0}.

Since D is a semialgebraic set, so is T. Since T_1 is the projection of T onto the t coordinate, by applying the Tarski-Seidenberg Theorem, we see that T_1 is a semialgebraic set. Thus, T_1 is a semialgebraic set contained in R, and therefore T_1 can be expressed as a union of finitely many points and intervals over R. Since (C + tαI_d) • X_ν(t) is an analytic function (see Section 3.2.2), the same is true for its derivatives. Therefore, if T_1 contains an interval, then the derivative of (C + tαI_d) • X_ν(t) with respect to t must, in fact, be zero throughout (0, ∞). In particular, (C + tαI_d) • X_ν(t) is constant for all t > 0. Thus, (C + tαI_d) • X_ν(t) is a monotonically increasing/decreasing function in this case. Now we deal with the case where T_1 consists of a finite number of points only. We recall that (C + tαI_d) • X_ν(t) takes an extreme value at t̄ only if t̄ ∈ T_1. This implies that the number of extremal points of (C + tαI_d) • X_ν(t) is finite and hence (C + tαI_d) • X_ν(t) is monotonically increasing or monotonically decreasing for sufficiently small t.

(Step 2)
It follows from Step 1 that there are three possibilities: (i) (C + tαI_d) • X_ν(t) → +∞ as t ↓ 0, (ii) (C + tαI_d) • X_ν(t) → −∞ as t ↓ 0, or (iii) (C + tαI_d) • X_ν(t) has a finite limit as t ↓ 0.
First we consider cases (i) and (ii). Recalling (31), we have |v_opt(t) − (C + tαI_d) • X_ν(t)| ≤ nν for every t > 0. Therefore, in cases (i) and (ii), v_opt(t) diverges to +∞ and −∞, respectively. This corresponds to the case of the theorem where the limit is ±∞.
Next, we proceed to case (iii). In this case, v_opt(t) is bounded for sufficiently small t > 0 because of (31). Therefore, there exist constants M_1 < M_2 and t̄ > 0 such that

M_1 ≤ v_opt(t) ≤ M_2 for all t ∈ (0, t̄].

For the sake of obtaining a contradiction, we assume that v_opt(t) does not have a limit as t → 0. Then, there exists an infinite sequence {t_k} with lim_{k→∞} t_k = 0 where {v_opt(t_k)} has two distinct accumulation points, v_1 and v_2, say. Without loss of generality, we let v_1 > v_2 and set z ≡ v_1 − v_2 > 0. Let ν ≡ z/(6n). By Step 1, it follows that (C + tαI_d) • X_ν(t) is a monotone function for sufficiently small t > 0. Furthermore, since v_opt(t) is bounded for sufficiently small t, (31) implies that (C + tαI_d) • X_ν(t) does not diverge and has a limit as t ↓ 0. Let us denote by c*_ν the limit value, and let t̄ > 0 be such that

|(C + tαI_d) • X_ν(t) − c*_ν| ≤ z/6 (35)

holds for any t ∈ (0, t̄]. On the other hand,

|v_opt(t) − (C + tαI_d) • X_ν(t)| ≤ nν = z/6 (36)

holds due to (31). Adding (35), (36) and using the triangle inequality, we see that

|v_opt(t) − c*_ν| ≤ z/3

holds for any t ∈ (0, t̄]. Together with the fact that v_1 > v_2 are two accumulation points of {v_opt(t)}, the above relation yields

v_1 − v_2 ≤ |v_1 − c*_ν| + |v_2 − c*_ν| ≤ 2z/3.

This implies z = v_1 − v_2 ≤ 2z/3 and hence z ≤ 0, which, however, contradicts z > 0. Therefore, the accumulation point of v_opt(t) is unique and the limit of v_opt(t) exists as t ↓ 0. Now we are ready to prove Theorem 2. Let ṽ(β) ≡ lim_{t↓0} v(t, tβ). We note that v_a(θ) = lim_{t↓0} v(t cos θ, t sin θ) = ṽ(tan θ).
Theorem 2 is a direct consequence of the following theorem.
Theorem 4 If the problem is asymptotically pd-feasible, then ṽ(β) is a monotone decreasing function in β in the interval [0, +∞] and the following relation holds.
Now we prove convexity of ṽ(β). We define the function v_k by v_k(β) ≡ v(1/k, β/k) for β ∈ (0, ∞) and k = 1, 2, . . .. Then it follows from Theorem 1 that, for any β ∈ (0, ∞), lim_{k→∞} v_k(β) = ṽ(β). Thus, {v_k} converges pointwise to ṽ. By item 3 of Proposition 2, v_k is convex on (0, ∞), so it follows from [38, Theorem 10.8] that ṽ is also a convex function on (0, ∞). Since ṽ is monotone decreasing on [0, ∞), ṽ is convex on [0, ∞). This completes the proof of the theorem.
(Proof of Theorem 2) We recall that a convex function is continuous over the relative interior of its domain, e.g., [38, Theorem 10.1], so the function ṽ in Theorem 4 is continuous over (0, ∞). We also recall that v_a(θ) = lim_{t↓0} v(t cos θ, t sin θ). We have, for θ ∈ (0, π/2), v_a(θ) = ṽ(tan θ). Since tan is a strictly monotone increasing function of θ on (0, π/2), Theorem 2 readily follows.

Application to Infeasible Interior-point Algorithms
The analysis in the previous section indicates that the limiting common optimal value of P(tα, tβ) and D(tα, tβ) exists as t → 0 and the value is between v(D) and v(P). In this section, we discuss an application to the convergence analysis of infeasible primal-dual interior-point algorithms.
While the efficiency of infeasible interior-point algorithms is supported by a powerful polynomial-convergence analysis when applied to primal-dual strongly feasible problems, their behavior for singular problems was not clear. Our analysis leads to a clearer picture of what happens when infeasible interior-point algorithms are applied to arbitrary SDP problems. As indicated in Subsection 3.2, we focus on two polynomial-time algorithms by Zhang [48] and Potra and Sheng [32], but the idea and the analysis can be applied to many other variants.
Suppose that X̂ is a solution to A(X) = b, (Ŝ, ŷ) is a solution to S = C − ∑_i A_i y_i, and let (X^0, S^0, y^0) ≡ (X̂ + ρ sin θ I_p, Ŝ + ρ cos θ I_d, 0), where θ ∈ (0, π/2) and ρ > 0 is sufficiently large so that X^0 ≻ 0 and S^0 ≻ 0 hold. This is an interior feasible point to the primal-dual pair P(ρ cos θ, ρ sin θ) and D(ρ cos θ, ρ sin θ), see (2) and (3). In the following, we analyze infeasible primal-dual interior-point algorithms started from this point. For simplicity of notation, we let α ≡ cos θ and β ≡ sin θ. As discussed in Section 3.2.1, in particular as stated in Proposition 1, the infeasible primal-dual interior-point algorithms we are considering generate a sequence (X^k, S^k, y^k) of interior feasible points to the perturbed system

A(X^k) = b + t_k βA(I_p), C + t_k αI_d − ∑_{i=1}^m A_i y^k_i = S^k, X^k ⪰ 0, S^k ⪰ 0, (37)

for t_k ≥ 0. We define

(C + t_k αI_d) • X^k and (b + t_k βA(I_p))^T y^k (38)

as the modified primal objective function and the modified dual objective function, respectively. If (X^k, S^k, y^k, t_k) is a sequence satisfying (37) for every k and t_k ↓ 0, then it is an asymptotically pd-feasible sequence in the sense that X^k, S^k satisfy the conic constraints of P and D and the distance between (X^k, S^k, y^k) and the set of solutions to the linear constraints of P and D goes to 0 as k → ∞. Now we are ready to describe and prove our first result on infeasible interior-point algorithms.
Theorem 5 Suppose that X̂ is a solution to A(X̂) = b, (Ŝ, ŷ) is a solution to C − ∑_i A_i y_i = S, and let (X^0, S^0, y^0) ≡ (X̂ + ρ sin θ I_p, Ŝ + ρ cos θ I_d, 0), where θ ∈ (0, π/2) and ρ > 0 is sufficiently large so that X^0 ≻ 0 and S^0 ≻ 0 hold. Also, let t_0 ≡ 1. Apply Algorithm-B of [48] or Algorithm 2.1 of [32] to solve P and D, and let {(X^k, S^k, y^k, t_k)} be the generated sequence. Then the following statements hold.
1. t_k → 0 and X^k • S^k → 0 hold if and only if P and D are asymptotically pd-feasible; namely, the algorithms generate an asymptotically pd-feasible sequence with duality gap converging to zero if and only if P and D are asymptotically pd-feasible. See the remark after the proof of the theorem for the behavior of the algorithms when P and D are not asymptotically pd-feasible.
2. If the problem is asymptotically pd-feasible, then the generated sequence of modified primal and dual objective values (38) converges to the value v_a(θ) ∈ [v(D), v(P)].
3. In item 2., as θ gets closer to 0 the limiting modified objective values of the infeasible primal-dual algorithm get closer to the primal optimal value v(P) of the original problem.As θ gets closer to π/2 the limiting modified objective value gets closer to the dual optimal value v(D).
Proof. First, we discuss item 1. If {(X^k, S^k, y^k, t_k)} is an asymptotically pd-feasible sequence, then P and D must be asymptotically pd-feasible. Next, we take a look at the converse. In the analysis conducted in [32, 48], although both papers assume the existence of a solution to (16), the existence of a solution is in fact not necessary for showing convergence of t_k and X^k • S^k to zero under asymptotic pd-feasibility. Under asymptotic pd-feasibility, for any t > 0 the perturbed problems are strongly feasible. This is enough for showing t_k → 0 and X^k • S^k → 0 in these algorithms. We give more details of the proof in Appendix 7. Now we prove items 2 and 3. The following relations hold at the k-th iteration:

(C + t_k αI_d) • X^k − (b + t_k βA(I_p))^T y^k = X^k • S^k, (39)
(b + t_k βA(I_p))^T y^k ≤ v(t_k α, t_k β) ≤ (C + t_k αI_d) • X^k. (40)

(See also (29) and (30) for the derivation of these relations.) Then it follows from (39), (40) and X^k • S^k → 0 that the sets of accumulation points of the modified primal and dual objective values (38) coincide with that of v(t_k α, t_k β), which converges to v_a(θ) by Theorem 2. Hence the sequences of the modified objective functions (38) also converge to v_a(θ). Remark. When P and D are not asymptotically pd-feasible, lim_{k→∞} t_k is positive for both algorithms [32, 48]. But the behavior of the duality gap X^k • S^k is a bit different. In the case of Zhang's algorithm, the sequence X^k • S^k also converges to a positive value, but in the case of Potra and Sheng's algorithm, what we can say is that lim inf_k X^k • S^k is positive. This is because the sequence X^k • S^k is not necessarily monotonically decreasing in Potra and Sheng's algorithm. Now we present the last theorem. A typical choice of the initial iterate (X^0, S^0, y^0) for primal-dual infeasible interior-point algorithms is (X^0, S^0, y^0) = (ρ_0 I, ρ_1 I, 0) with ρ_0 > 0 and ρ_1 > 0 sufficiently large. This is different from the one adopted in Theorem 5. In concluding this section, we discuss how our results can be adapted to this case.
Let X̂ be a solution to A(X) = b. If we set I_p ≡ ρ_0 I − X̂ and I_d ≡ ρ_1 I − C with ρ_0 and ρ_1 sufficiently large so that I_p ≻ 0 and I_d ≻ 0 hold, then (X^0, S^0, y^0) is a feasible solution to P(1, 1) and D(1, 1). Now, we are ready to apply an argument analogous to the one developed earlier for Theorem 5 with this choice of I_p and I_d to obtain the following theorem.

Examples
In this section, we present three examples with nonzero duality gaps to illustrate Theorems 1 and 2. The optimal values of P and D are both finite in Example 1, the optimal value of P is finite but D is weakly infeasible in Example 2, and both problems are weakly infeasible in Example 3. In the latter two cases the duality gaps are infinite.

Example 1
We start with a simple instance with a finite nonzero duality gap taken from Ramana's famous paper [34]. The following problem has a duality gap of one.
The problem D is max y_1 s.t.
The optimal value is v(D) = 0 for this problem, since y_1 = 0 is the only possible value for which the lower-right 2 × 2 submatrix is positive semidefinite. The optimal value is v(P) = 1 for this problem, since x_23 = 0 must hold for positive semidefiniteness of the lower-right 2 × 2 submatrix, which drives x_11 to be 1. Now we consider the problem D(ε, η). Since the objective is linear, there is an optimal solution such that at least one of the inequality constraints is active. Taking into account that the second constraint is quadratic, we analyze the following three subproblems and take the maximum of them.
(Case 1) In this case, the second constraint yields a relation which, together with y_1 = 1 + ε, reduces the problem to a linear program, and it follows that the maximum can be computed accordingly.
(Case 2) Under this condition, the objective function can be written as a function f(y_2) of y_2 alone. By computing the derivative, we see that the function takes its unique maximum at the point given by (41) and (42). Then, we see that the maximum value is (43). But we should recall that this maximum is obtained by ignoring the constraint 1 + ε − y_1 ≥ 0. By substituting (41) and (42) into this constraint, (43) is the maximum only if (44) is satisfied. If (44) does not hold, then the maximum of f(y_2) is attained at the boundary of the constraint 1 + ε − y_1 ≥ 0, i.e., at the y_2 satisfying the corresponding boundary equation. Solving this equation with respect to y_2, we obtain the boundary maximizer. In summary, the maximum value in (Case 2) is as follows.
(Case 3) In this case, 1 + ε − y_1 ≥ 0 holds trivially. Therefore, the maximization problem in this case is a maximization over y_2 under the condition that y_2 ≤ ε. The function is monotone increasing, so the maximum is attained when y_2 = ε.
Now we are ready to combine the three results to complete the evaluation of ṽ and v_a. By letting ε = tα, η = tβ with t > 0 and letting t ↓ 0, we compare the three cases. The maximum among the three corresponds to ṽ. Comparing the three, we see that (Case 2) always gives the maximum. This means that ṽ is given by the expression obtained in (Case 2).
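As a purely numerical companion to this example, the following sketch evaluates the regularized value along rays (ε, η) = t(cos θ, sin θ) using the regularized_value function sketched in Section 2.2. The matrices below form a standard instance with v(P) = 1 and v(D) = 0; they are stated only for illustration and are not necessarily the data used in Example 1.

    import numpy as np

    # A standard instance with v(P) = 1 and v(D) = 0 (duality gap of one):
    #   P: min x_11  s.t.  x_22 = 0,  x_11 + 2*x_23 = 1,  X positive semidefinite (3x3).
    C = np.zeros((3, 3)); C[0, 0] = 1.0
    A1 = np.zeros((3, 3)); A1[1, 1] = 1.0                             # A1 . X = x_22
    A2 = np.zeros((3, 3)); A2[0, 0] = 1.0; A2[1, 2] = A2[2, 1] = 1.0  # A2 . X = x_11 + 2*x_23
    A_list, b = [A1, A2], np.array([0.0, 1.0])

    for theta in (np.pi / 8, np.pi / 4, 3 * np.pi / 8):
        for t in (1e-1, 1e-2, 1e-3):
            eps, eta = t * np.cos(theta), t * np.sin(theta)
            val = regularized_value(C, A_list, b, eps, eta)  # cvxpy sketch from Section 2.2
            print(f"theta={theta:.3f}  t={t:.0e}  v(eps, eta)={val:.4f}")
    # As t decreases, each printed value approaches a limit v_a(theta) in [v(D), v(P)] = [0, 1];
    # by Theorem 2 this limit is nonincreasing in theta.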

Example 2
The next example is such that D is weakly infeasible but P is weakly feasible and has a finite optimal value.
The problem D is given below. This system is weakly infeasible, so v(D) = −∞. For the regularized problem, the maximum value can then be computed explicitly. Now we are ready to evaluate ṽ and v_a. By letting ε = tα, η = tβ with t > 0 and letting t ↓ 0, we obtain the corresponding limit. Finally, we deal with a pathological case where both primal and dual are weakly infeasible.

Example 3
The problem D is as follows. The optimal value is v(D) = −∞ for this problem, since y_1 = −2 should hold for feasibility, but then the (2,2) element becomes −1 and, therefore, the matrix cannot be positive semidefinite. By letting y_2 be large and y_1 = 0, we confirm the problem is weakly infeasible. Since the objective is linear, there is an optimal solution such that at least one of the inequality constraints is active. Taking into account that the second constraint is quadratic, we analyze the following three subproblems and take the maximum of them. This implies that y_1 = 2(−1 ± ε(ε + y_2)).
By differentiation, we see that the function attains its maximum at a point that can be written explicitly. We see that the first constraint is always satisfied at the maximum. The third constraint 1 + y_1 + ε ≥ 0 is satisfied under an additional condition on ε and η. If this condition is not satisfied, then 1 + y_1 + ε = 0 holds at the maximum, so we can leave the analysis to the third case. Substituting y_1 and y_2 into the objective, we conclude that, if ε/η ≥ 1, then the maximum takes the resulting value, and if the aforementioned condition is not satisfied, then we can leave the analysis to the third case below.
(Case 3) We have y_1 = −1 − ε. After simple manipulation, we see that the other two inequalities are satisfied if and only if an explicit condition holds, and the maximum then follows. Now we are ready to combine the three results to complete the evaluation of ṽ and v_a. By letting ε = tα, η = tβ with t > 0 and letting t ↓ 0, we obtain the limiting value, where we used the convention 1/0 = ∞.

Concluding Discussion
In this paper, we developed a perturbation analysis for singular primal-dual semidefinite programs. We assumed that the primal and dual problems are asymptotically feasible and added positive definite perturbations to recover strong feasibility. A major innovation was that we considered perturbations of the primal and dual problems simultaneously. It was shown that the primal-dual common optimal value of the perturbed problem has a directional limit when the perturbation is reduced to zero along a line. Representing the direction of approach by an angle θ between 0 and π/2, where the former and the latter correspond to the dual-only perturbation and the primal-only perturbation, respectively, we demonstrated that the limiting objective value is a monotone decreasing function of θ which takes the primal optimal value v(P) at θ = 0 and the dual optimal value v(D) at θ = π/2. Based on this result, we showed that the modified objective values of the two infeasible primal-dual interior-point algorithms by Zhang and by Potra and Sheng converge to a value between the optimal values of P and D. The modified primal and dual objective functions are easily computed from the current iterate. The development of analogous results for homogeneous self-dual interior-point algorithms and the design of robust infeasible primal-dual interior-point algorithms reflecting the theory developed in this paper are interesting topics for further research.