Abstract
Let \(A\in {\mathbb R}^{m\times n}\setminus \{0\}\) and \(P:=\{x:Ax\le 0\}\). This paper provides a procedure to compute an upper bound on the following homogeneous Hoffman constant
\(H_0(A):= \sup _{u\in {\mathbb R}^n\setminus P} \frac{\textrm{dist}(u,P)}{\Vert (Au)^+\Vert }.\)
In sharp contrast to the intractability of computing more general Hoffman constants, the procedure described in this paper is entirely tractable and easily implementable.
1 Introduction
Hoffman constants for systems of linear inequalities, and more general error bounds for feasibility problems, play a central role in mathematical programming. In particular, Hoffman constants provide a key building block for the convergence of a variety of algorithms [1, 3, 10, 11, 13, 23]. Since Hoffman’s seminal work [7], Hoffman constants and more general error bounds have been widely studied [2, 4, 6, 12, 14, 18, 24, 25]. However, there has been very limited work on algorithmic procedures that compute or bound Hoffman constants. The only two references that appear to tackle this computational challenge are the 1995 article by Klatte and Thiere [9] and the more recent 2021 article by Peña, Vera, and Zuluaga [16]. However, as discussed in both [9] and [16], the algorithmic schemes proposed in these articles have significant limitations.
The central goal of this paper is to devise a procedure that computes an upper bound on the following homogeneous Hoffman constant \(H_0(A)\). Suppose \(A\in {\mathbb R}^{m\times n}\). Let \(P:=\{x:Ax\le 0\}\) and define \(H_0(A)\) as
\(H_0(A):= \sup _{u\in {\mathbb R}^n\setminus P} \frac{\textrm{dist}(u,P)}{\Vert (Au)^+\Vert }.\)
For notational convenience, by convention let \(H_0(A):= 0\) when \(P={\mathbb R}^n\). This occurs precisely when \(A = 0\).
To position this work in the context of Hoffman constants, we next recall the local and global Hoffman constants H(A, b) and H(A) associated to linear systems of inequalities defined by A. The homogeneous Hoffman constant \(H_0(A)\) is a special case of the following local Hoffman constant H(A, b). Suppose \(A\in {\mathbb R}^{m\times n}\) and \(b\in A{\mathbb R}^n + {\mathbb R}^m_+\). Let \(P_A(b):=\{x\in {\mathbb R}^n: Ax\le b\}\) and define H(A, b) as
\(H(A,b):= \sup _{u\in {\mathbb R}^n\setminus P_A(b)} \frac{\textrm{dist}(u,P_A(b))}{\Vert (Au-b)^+\Vert }.\)
It is evident that \(H_0(A) = H(A,0)\) and thus \(H_0(A)\) is bounded above by the following global Hoffman constant H(A). Suppose \(A\in {\mathbb R}^{m\times n}\). Define
\(H(A):= \sup _{b\in A{\mathbb R}^n + {\mathbb R}^m_+} H(A,b).\)
In his seminal paper [7], Hoffman showed that H(A) is finite and consequently so are \(H_0(A)\) and H(A, b) for all \(b\in A{\mathbb R}^n + {\mathbb R}^m_+\).
The articles [9, 16] propose algorithms to compute or estimate the global Hoffman constant H(A). These algorithms readily yield a computational procedure to bound \(H_0(A)\). However, as detailed in [9, 16], except for very special cases the computation or even approximation of H(A) is an extremely challenging problem. Indeed, the recent results in [15] show that the Stewart-Todd condition measure \(\chi (A)\) [20, 21] is the same as \(H(\textbf{A})\) where \(\textbf{A} = \begin{bmatrix} A\\ -A \end{bmatrix}\). Since the quantity \(\chi (A)\) is known to be NP-hard to approximate [8], so is H(A). The computation of the (non-homogeneous) local Hoffman constant H(A, b), as discussed in [2, 25], also poses similar computational challenges. In sharp contrast, the procedure proposed in this paper for upper bounding the more specialized Hoffman constant \(H_0(A)\) is entirely tractable and easily implementable for any \(A\in {\mathbb R}^{m\times n}\). The bound is a formalization of the following three-step approach detailed in Sect. 2.
First, upper bound \(H_0(A)\) in the following two special cases:
- (i) When \(A\hat{x} < 0\) for some \(\hat{x} \in {\mathbb R}^n\) or equivalently when \(A^{\textsf{T}}y = 0, y\ge 0 \Rightarrow y =0\). (See Proposition 1.)
- (ii) When \(A^{\textsf{T}}\hat{y} = 0\) for some \(\hat{y} > 0\) or equivalently when \(Ax\le 0\Rightarrow Ax=0\). (See Proposition 2.)
Second, use a canonical partition \(A = \begin{bmatrix} A_B\\ A_N \end{bmatrix}\) of the rows of A such that \(A_N\) is as in case (i) and \(A_B\) is as in case (ii) above. (See Proposition 3.)
Third, upper bound \(H_0(A)\) by stitching together the Hoffman constants \(H_0(A_B)\), \(H_0(A_N),\) and a third Hoffman constant \(\mathcal {H}(L,K)\) associated to the intersection of the subspace \(L:=\{x: A_Bx = 0\}\) and the cone \(K:=\{x: A_N x \le 0\}\). (See Theorem 1.)
The above steps suggest the following computational procedure to upper bound \(H_0(A)\): First, compute the partition B, N. Second, compute upper bounds on \(H_0(A_B)\) and on \(H_0(A_N)\). Third, upper bound \(\mathcal {H}(L,K)\). Section 3 details this procedure. As explained in Sect. 3, the total computational work in the entire procedure consists of two linear programs, two quadratic programs, a convex program, and a singular value calculation, all of which are computationally tractable. This is noteworthy in light of the challenges associated to estimating the Hoffman constants H(A) and H(A, b). A Python implementation and some illustrative examples of this procedure are publicly available at https://github.com/javi-pena.
For ease of notation and computability, we assume throughout the paper that the norm in \({\mathbb R}^m\) satisfies the following componentwise compatibility condition: if \(y,z\in {\mathbb R}^m\) and \(|y|\le |z|\) componentwise then \(\Vert y\Vert \le \Vert z\Vert \). The componentwise compatibility condition in particular implies that for all \(u\in {\mathbb R}^n\)
\(\Vert (Au)^+\Vert = \min \{\Vert Au - v\Vert : v\in -{\mathbb R}^m_+\} = \textrm{dist}(Au, -{\mathbb R}^m_+),\)
where \((Au)^+ = \max \{Au,0\}\) componentwise. Consequently,
\(H_0(A) = \sup _{u\in {\mathbb R}^n\setminus P} \frac{\textrm{dist}(u,P)}{\textrm{dist}(Au,-{\mathbb R}^m_+)}.\)
Observe that most of the usual norms in \({\mathbb R}^m\), including the \(\ell _p\) norms for \(1\le p \le \infty \), satisfy the componentwise compatibility condition.
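The componentwise compatibility condition is straightforward to check numerically. The following sketch (our own illustration, not part of the paper's implementation) spot-checks it for the \(\ell _1\), \(\ell _2\), and \(\ell _\infty \) norms on random pairs with \(|y|\le |z|\) componentwise:

```python
import numpy as np

def compatible(y, z, p):
    # componentwise compatibility: |y| <= |z| componentwise
    # should imply ||y||_p <= ||z||_p
    return np.linalg.norm(y, p) <= np.linalg.norm(z, p) + 1e-12

rng = np.random.default_rng(0)
ok = all(
    compatible(z * rng.uniform(0.0, 1.0, size=5), z, p)  # shrinking => |y| <= |z|
    for z in rng.normal(size=(200, 5))
    for p in (1, 2, np.inf)
)
print(ok)
```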
We conclude this introduction by highlighting that our developments for bounding \(H_0(A)\) rely critically on the features of homogeneous systems of inequalities. In contrast to non-homogeneous systems of inequalities and more general affine cone inclusions, homogeneous systems of inequalities and more general homogeneous affine cone inclusions possess a number of attractive properties as discussed in [5, 17, 19, 22]. In particular, although it is tempting to conjecture that a bound on the non-homogeneous Hoffman constant H(A, b) could be obtained from some \(H_0(A_b)\) via homogenization, that is not the case as we next detail. Indeed, consider the natural homogenization \(A_b z \le 0\) of the system of inequalities \(Ax\le b\) where
\(A_b:=\begin{bmatrix} A & -b\\ 0 & -1 \end{bmatrix}.\)
The following example shows that H(A, b) cannot be bounded above by any reasonable multiple of \(H_0(A_b)\). Suppose \(0< \epsilon < 1\) and let
Then
For ease of computation, suppose all relevant spaces are endowed with the \(\ell _\infty \) norm. Hence the remarks following Proposition 1 below imply that \(H_0(A_b) \le 1/(1-\epsilon )\le 2\). On the other hand, \(H(A,b) \ge 1/\epsilon \) because \(Ax \le b\) implies that \(x_2\le 1/\epsilon \) and thus for \(u =\begin{bmatrix} 0&2/\epsilon \end{bmatrix}^{\textsf {T}}\) we have \(\Vert (Au-b)^+\Vert _{\infty }=1\) but \(\Vert u-x\Vert _{\infty }\ge 1/\epsilon = 1/\epsilon \cdot \Vert (Au-b)^+\Vert _{\infty }\) for any x such that \(Ax \le b\). Since this holds for any \(0<\epsilon < 1\), it follows that H(A, b) cannot be bounded above in terms of \(H_0(A_b)\).
2 Upper bounds on \(H_0(A)\)
2.1 Upper bounds on \(H_0(A)\) in two special cases
We next consider two special cases that can be seen as dual counterparts of each other.
Proposition 1
Suppose \(A\in {\mathbb R}^{m\times n}\) and \(A\hat{x} < 0\) for some \(\hat{x}\in {\mathbb R}^n\) or equivalently \(A^{\textsf{T}}y = 0, y \ge 0 \Rightarrow y=0\). Then
\(H_0(A) \le \max _{y\in {\mathbb R}^m_+,\ \Vert y\Vert \le 1}\ \min \{\Vert x\Vert : Ax \le -y\}. \qquad (1)\)
Proof
For ease of notation, let H denote the right-hand side expression in (1), that is,
\(H:= \max _{y\in {\mathbb R}^m_+,\ \Vert y\Vert \le 1}\ \min \{\Vert x\Vert : Ax \le -y\}.\)
Observe that \(H < +\infty \) because the assumption on A implies that \(A{\mathbb R}^n + {\mathbb R}^m_+ = {\mathbb R}^m\).
We need to show that \(H_0(A) \le H\). To that end, let \(P:=\{x\in {\mathbb R}^n: Ax \le 0\}\) and suppose that \(u \in {\mathbb R}^n{\setminus } P\). Let \(y:=(Au)^+ \in {\mathbb R}^m\). The construction of H implies that there exists \(x \in {\mathbb R}^n\) such that \(Ax\le -y\) and \(\Vert x\Vert \le H \cdot \Vert y\Vert = H \cdot \Vert (Au)^+\Vert \). Thus \(x+u \in P\) because
\(A(x+u) = Ax + Au \le -y + Au = Au - (Au)^+ \le 0.\)
Furthermore \(\Vert (x+u) - u\Vert = \Vert x\Vert \le H \cdot \Vert (Au)^+\Vert \). Since this holds for all \(u\in {\mathbb R}^n\setminus P\), it follows that \(H_0(A) \le H\).
In addition to the simple direct proof above, an alternative proof of Proposition 1 can also be obtained from [16]. Indeed, [16, Proposition 2] implies that when \(A\in {\mathbb R}^{m\times n}\) satisfies the assumption in Proposition 1, the right-hand side in (1) is precisely the global Hoffman constant H(A) which is at least as large as \(H_0(A)\) as previously noted.
For computational purposes, it is useful to note that when \({\mathbb R}^m\) is endowed with the \(\ell _\infty \) norm, the upper bound in Proposition 1 can be computed via the following convex optimization problem:
\(\min _x \{\Vert x\Vert : Ax \le -\textbf{1}\}.\)
In particular, any \(\bar{x} \in {\mathbb R}^n\) such that \(A\bar{x} \ge \textbf{1}\) yields the upper bound
\(H_0(A) \le \Vert \bar{x}\Vert .\)
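When \({\mathbb R}^m\) carries the \(\ell _\infty \) norm, the remark above takes only a few lines of code. The following sketch (our own illustration, not the paper's implementation; the toy matrix and the choice of an \(\ell _1\)-minimal \(\bar{x}\) are assumptions) finds \(\bar{x}\) with \(A\bar{x}\ge \textbf{1}\) via a linear program and reports \(\Vert \bar{x}\Vert _2\):

```python
import numpy as np
from scipy.optimize import linprog

def prop1_upper_bound(A):
    """Upper bound on H_0(A) in the spirit of the remark after Proposition 1:
    find xbar with A xbar >= 1 (feasible precisely when A xhat < 0 has a
    solution) and return ||xbar||_2.  Among such points we take one of
    minimal l1 norm so that the LP has a well-defined solution."""
    m, n = A.shape
    # variables (x, t): minimize sum(t) subject to A x >= 1 and -t <= x <= t
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([
        [-A, np.zeros((m, n))],       # -A x   <= -1
        [np.eye(n), -np.eye(n)],      #  x - t <=  0
        [-np.eye(n), -np.eye(n)],     # -x - t <=  0
    ])
    b_ub = np.concatenate([-np.ones(m), np.zeros(2 * n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
    if not res.success:
        return None  # the assumption of Proposition 1 does not hold
    return float(np.linalg.norm(res.x[:n], 2))

# toy matrix (our own example): P = {x : x >= 0}
A = np.array([[-1.0, 0.0], [0.0, -1.0]])
bound = prop1_upper_bound(A)  # xbar = (-1, -1), so the bound is sqrt(2)
print(bound)
```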
The following proposition, which can be seen as a dual counterpart of Proposition 1, relies on the dual norms in \({\mathbb R}^m\) and \({\mathbb R}^n\). More precisely, suppose both \({\mathbb R}^m\) and \({\mathbb R}^n\) are endowed with their canonical inner products. In each case let \(\Vert \cdot \Vert ^*\) denote the norm defined as
\(\Vert u\Vert ^* := \max \{\langle u,v\rangle : \Vert v\Vert \le 1\}.\)
Proposition 2
Suppose \(A\in {\mathbb R}^{m\times n}\) is such that \(A^{\textsf{T}}\hat{y} = 0\) for some \(\hat{y} >0 \) or equivalently \(Ax \le 0 \Rightarrow Ax = 0\). Then
\(H_0(A) \le \max _{v\in A^{\textsf T}{\mathbb R}^m,\ \Vert v\Vert ^*\le 1}\ \min \{\Vert y\Vert ^*: A^{\textsf T}y = v,\ y\in {\mathbb R}^m_+\}. \qquad (2)\)
Proof
We shall assume that \(A\ne 0\) as otherwise \(H_0(A) = 0\) and (2) trivially holds. Again for ease of notation, let H denote the right-hand side expression in (2), that is,
\(H:= \max _{v\in A^{\textsf T}{\mathbb R}^m,\ \Vert v\Vert ^*\le 1}\ \min \{\Vert y\Vert ^*: A^{\textsf T}y = v,\ y\in {\mathbb R}^m_+\}.\)
Observe that \(H < +\infty \) because the assumption on A implies that \(A^{\textsf{T}}{\mathbb R}^m_+= A^{\textsf{T}}{\mathbb R}^m\).
We need to show that \(H_0(A) \le H\). To that end, let \(P:=\{x\in {\mathbb R}^n: Ax \le 0\}= \{x \in {\mathbb R}^n: Ax = 0\}\) and suppose that \(u \in {\mathbb R}^n{\setminus } P\). Let
\(\textrm{dist}(u,P) = \min _x \{\Vert u - x\Vert : Ax = 0\}.\)
The optimality conditions of the latter problem imply that there exists \(v\in A^{\textsf{T}}{\mathbb R}^m\) with \(\Vert v\Vert ^*=1\) such that
\(\langle v, u\rangle = \textrm{dist}(u,P).\)
The construction of H implies that there exists \(y \in {\mathbb R}^m_+\) such that \(A^{\textsf T}y =v\) and \(\Vert y\Vert ^* \le H.\) Since \(v = A^{\textsf T}y\) we have
\(\langle y, Au\rangle = \langle A^{\textsf T}y, u\rangle = \langle v, u\rangle = \textrm{dist}(u,P).\)
In addition, since \(y \in {\mathbb R}^m_+\) and \(\Vert y\Vert ^* \le H\), we also have
\(\langle y, Au\rangle \le \langle y, (Au)^+\rangle \le \Vert y\Vert ^*\cdot \Vert (Au)^+\Vert \le H\cdot \Vert (Au)^+\Vert .\)
Since this holds for all \(u\in {\mathbb R}^n\setminus P\), it follows that \(H_0(A) \le H\).
For computational purposes, it is useful to note that when \({\mathbb R}^m\) is endowed with the \(\ell _\infty \) norm, the upper bound in Proposition 2 can be computed as follows
\(\max _{v\in A^{\textsf T}{\mathbb R}^m,\ \Vert v\Vert ^*\le 1}\ \min \{\Vert y\Vert _1: A^{\textsf T}y = v,\ y\in {\mathbb R}^m_+\}.\)
The reciprocal of the latter quantity in turn is the radius of the largest ball in \(A^{\textsf T}({\mathbb R}^m)\) centered at 0 and contained in the set
\(\{A^{\textsf T}y: y\in {\mathbb R}^m_+,\ \textbf{1}^{\textsf T}y \le 1\}.\)
Therefore, if in addition \({\mathbb R}^n\) is endowed with the \(\ell _2\) norm then any \(\bar{y}\in {\mathbb R}^m_{++}\) with \(\textbf{1}^{\textsf{T}}\bar{y} =1\) and \(A^{\textsf T}\bar{y} = 0\) yields the upper bound
\(H_0(A) \le \frac{2}{\sigma _{\min }^+(A^{\textsf T}\bar{Y})}, \qquad (3)\)
where \(\bar{Y} = \text {Diag}(\bar{y})\) and \(\sigma _{\min }^+(A^{\textsf T}\bar{Y})\) denotes the smallest positive singular value of \(A^{\textsf T}\bar{Y}\). To see why (3) holds, observe that if \(v \in A^{\textsf T}{\mathbb R}^m\) and \(\Vert v\Vert _2 \le \frac{\sigma _{\min }^+(A^{\textsf T}\bar{Y})}{2}\) then \(2v = A^{\textsf T}\bar{Y} z\) for some \(\Vert z\Vert _2\le 1\). The latter implies that \(|\bar{Y}z| \le \bar{y}\) componentwise and thus \(2v = A^{\textsf T}(\bar{y} + \bar{Y}z)\) with
\(\frac{\bar{y} + \bar{Y}z}{2}\in {\mathbb R}^m_+ \quad \text {and}\quad \textbf{1}^{\textsf T}\,\frac{\bar{y} + \bar{Y}z}{2} \le 1.\)
In particular, \(v\in \{A^{\textsf T}y: y\in {\mathbb R}^m_+, \textbf{1}^{\textsf T}y \le 1\}\). Since this holds for any \(v\in A^{\textsf T}{\mathbb R}^m\) with \(\Vert v\Vert _2 \le \frac{\sigma _{\min }^+(A^{\textsf T}\bar{Y})}{2}\), it follows that the radius of the largest ball in \(A^{\textsf T}({\mathbb R}^m)\) centered at 0 and contained in the set
\(\{A^{\textsf T}y: y\in {\mathbb R}^m_+,\ \textbf{1}^{\textsf T}y \le 1\}\)
is at least \(\frac{\sigma _{\min }^+(A^{\textsf{T}}\bar{Y})}{2}\).
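The singular-value bound above is a one-liner with NumPy. In the sketch below (our own toy example, not the paper's code) the matrix has rows \(1\) and \(-1\), so \(\bar{y}=(1/2,1/2)\) satisfies \(A^{\textsf T}\bar{y}=0\):

```python
import numpy as np

def prop2_upper_bound(A, ybar, tol=1e-10):
    """Bound of the form 2 / sigma_min^+(A^T Ybar) for ybar > 0 with
    1^T ybar = 1 and A^T ybar = 0, where sigma_min^+ denotes the
    smallest positive singular value."""
    M = A.T @ np.diag(ybar)
    sigma = np.linalg.svd(M, compute_uv=False)
    return float(2.0 / sigma[sigma > tol].min())

A = np.array([[1.0], [-1.0]])            # {x : Ax <= 0} = {0}
bound = prop2_upper_bound(A, np.array([0.5, 0.5]))
print(bound)                             # 2 / sqrt(1/2) = 2*sqrt(2)
```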
2.2 Upper bound on \(H_0(A)\) for general A
An upper bound on \(H_0(A)\) for general \(A\in {\mathbb R}^{m\times n}\) follows by stitching together the cases in the above two propositions via the canonical partition result in Proposition 3 and the additional Hoffman constant \(\mathcal {H}(L,K)\) defined in (4) below.
The following result is a consequence of the classical Goldman-Tucker partition theorem. To make our exposition self-contained, we include a proof.
Proposition 3
Let \(A\in {\mathbb R}^{m\times n}\). There exists a unique partition \(B\cup N = \{1,\dots ,m\}\) such that
\(A_Bx = 0,\ A_Nx < 0\ \text { for some } x\in {\mathbb R}^n\)
and
\(A_B^{\textsf T}y_B = 0,\ y_B > 0\ \text { for some } y_B\in {\mathbb R}^B.\)
Proof
Let \(N\subseteq \{1,\dots ,m\}\) be the largest subset of \(\{1,\dots ,m\}\) such that
\(Ax\le 0,\quad A_Nx < 0\)
has a solution. In other words,
\(N = \{i\in \{1,\dots ,m\}: Ax\le 0 \text { and } (Ax)_i < 0\ \text { for some } x\in {\mathbb R}^n\}.\)
Observe that N is well-defined and unique and thus so is \(B:=\{1,\dots ,m\}\setminus N\). Furthermore, the construction of N implies that \(A_Bx= 0\) and \(A_Nx<0\) for some \(x\in {\mathbb R}^n\). Hence to finish the proof it suffices to show that
\(A_B^{\textsf T}y_B = 0,\quad y_B > 0\)
has a solution. To that end, for \(i\in \{1,\dots ,m\}\) let \(e_i\in {\mathbb R}^m\) be the vector with i-th component equal to one and all others equal to zero. Observe that \(i \in B\) if and only if the following system of equations and inequalities does not have a solution:
\(Ax\le 0,\quad \langle e_i, Ax\rangle = -1.\)
Farkas Lemma thus implies that \(i\in B\) if and only if the following system of equations and inequalities has a solution:
\(A^{\textsf T}y = 0,\quad y\ge 0,\quad y_i > 0.\)
Since this holds for each \(i\in B\), it follows that \(A_B^{\textsf{T}}y_B =0, y_B > 0\) has a solution.
We should note that, depending on A, the set N in Proposition 3 could be any subset of \(\{1,\dots ,m\}\). In particular, \(N = \emptyset \) if \(A^{\textsf{T}}y = 0\) for some \(y > 0\), and \(N = \{1,\dots ,m\}\) if \(Ax<0\) for some \(x\in \mathbb {R}^n\). For instance, \(N = \emptyset \) if \(A = \begin{bmatrix} 1\\ -1 \end{bmatrix}\) and \(N = \{1,2\}\) if \(A = \begin{bmatrix} 1\\ 1 \end{bmatrix}\).
Suppose \(L\subseteq {\mathbb R}^n\) is a linear subspace and \(K \subseteq {\mathbb R}^n\) is a closed convex cone. Let
\(\mathcal {H}(L,K):= \sup _{u\in {\mathbb R}^n\setminus (L\cap K)} \frac{\textrm{dist}(u, L\cap K)}{\max \{\textrm{dist}(u,L),\ \textrm{dist}(u,K)\}} \qquad (4)\)
with the convention that \(\mathcal {H}(L,K) =0\) when \(L\cap K = {\mathbb R}^n\).
In the remainder of this paper, we will use the following notation for \(A\in {\mathbb R}^{m\times n}\): Let B, N denote the canonical partition defined by A as in Proposition 3 and let \(L\subseteq {\mathbb R}^n, K\subseteq {\mathbb R}^n\) be defined as
\(L:=\{x\in {\mathbb R}^n: A_Bx = 0\},\quad K:=\{x\in {\mathbb R}^n: A_Nx \le 0\},\)
with the convention that \(L = {\mathbb R}^n\) if \(B=\emptyset \) and \(K = {\mathbb R}^n\) if \(N=\emptyset \).
Observe that L is a linear subspace, K is a closed convex cone, and \(\{x: Ax\le 0\} = L\cap K\). We now have all the necessary ingredients to upper bound \(H_0(A)\).
Theorem 1
Suppose \(A\in {\mathbb R}^{m\times n}\) and the norm in \({\mathbb R}^m\) satisfies the componentwise compatibility condition. Let B, N and L, K be as above. Then
\(H_0(A) \le \mathcal {H}(L,K)\cdot \max \{H_0(A_B),\ H_0(A_N)\}. \qquad (5)\)
Proof
Suppose \(u\in {\mathbb R}^n \setminus P\). The construction of \(\mathcal {H}(\cdot ,\cdot )\) and \(H_0(\cdot ),\) and the componentwise compatibility condition imply that there exists \(x \in P=L\cap K\) such that
\(\Vert u - x\Vert \le \mathcal {H}(L,K)\cdot \max \{\textrm{dist}(u,L),\ \textrm{dist}(u,K)\} \le \mathcal {H}(L,K)\cdot \max \{H_0(A_B),\ H_0(A_N)\}\cdot \Vert (Au)^+\Vert .\)
Since this holds for all \(u\in {\mathbb R}^n\setminus P\), the inequality in (5) follows.
Observe that unlike \(H_0(A)\) that depends on the data representation \(A\in {\mathbb R}^{m\times n}\) of the cone \(P=\{x: Ax\le 0\}\), the constant \(\mathcal {H}(L,K)\) only depends on the sets \(L\subseteq {\mathbb R}^n\) and \(K\subseteq {\mathbb R}^n\). In particular, \(\mathcal {H}(L,K)\) does not depend on the norm in \({\mathbb R}^m\) while \(H_0(A)\) evidently does.
The next proposition provides an upper bound on \(\mathcal {H}(L,K)\) analogous to the upper bounds on \(H_0(A)\) in Propositions 1 and 2. It will be useful for the computational procedure in Sect. 3.
Proposition 4
Suppose \(L\subseteq {\mathbb R}^n\) is a linear subspace and \(K \subseteq {\mathbb R}^n\) is a closed convex cone. Then
Proof
To ease notation, let
We need to show that \(\mathcal {H}(L,K) \le 1 + 2H.\) To that end, suppose \(u \in {\mathbb R}^n{\setminus } L\cap K\). Let \(u_L:=\mathop {\hbox {arg min}}\limits _v\{\Vert u-v\Vert : v \in L\}\) and \(u_K:=\mathop {\hbox {arg min}}\limits _v\{\Vert u-v\Vert : v \in K\}\). The construction of H implies that there exist \(x\in L, y\in K\) such that \(\Vert x\Vert \le H\cdot \Vert u_K-u_L\Vert \) and \( x-y = u_K-u_L. \) Hence \( u_L+x = u_K+y \in L\cap K \) and
Since this holds for any \(u\in {\mathbb R}^n\setminus L\cap K\), it follows that
\(\mathcal {H}(L,K) \le 1 + 2H.\)
For computational purposes, it is useful to note that if \(\bar{x} \in L \cap \textrm{int}(K)\) is such that \(\bar{x} + u \in K\) for all \(\Vert u\Vert \le 1\) then Proposition 4 implies that
\(\mathcal {H}(L,K) \le 1 + 2\Vert \bar{x}\Vert .\)
3 A computable procedure to bound \(H_0(A)\)
We next describe a procedure to compute an upper bound on \(H_0(A)\). The procedure consists of four main steps. First, compute the partition B, N. Second, compute an upper bound on \(H_0(A_B)\). Third, compute an upper bound on \(H_0(A_N)\). Fourth, compute an upper bound on \(\mathcal {H}(L,K)\). An upper bound on \(H_0(A)\) thereby follows from Theorem 1. For computational convenience, throughout this section we assume that \({\mathbb R}^m\) is endowed with the \(\ell _\infty \) norm and \({\mathbb R}^n\) is endowed with the \(\ell _2\) norm. A Python implementation and some illustrative examples of this procedure are publicly available at https://github.com/javi-pena.
3.1 Step 1: Partition B, N
The partition B, N can be obtained from any point (x, y, s, t) that satisfies the following systems of equations and inequalities for some \(t > 0\):
\(Ax + s = 0,\quad A^{\textsf T}y = 0,\quad y\ge 0,\quad s\ge 0,\quad y + s \ge t\textbf{1}. \qquad (6)\)
More precisely, if (x, y, s, t) satisfies (6) with \(t>0\) then B, N can be obtained as follows:
\(B = \{i: y_i \ge t\},\quad N = \{i: s_i \ge t\}.\)
Proposition 3 guarantees that a solution (x, y, s, t) to (6) with \(t>0\) always exists and that the associated partition B, N is unique. Such a point (x, y, s, t) can be computed via the following linear program:
\(\max _{x,y,s,t}\ t \quad \text {s.t.}\quad Ax + s = 0,\ A^{\textsf T}y = 0,\ y\ge 0,\ s\ge 0,\ y + s \ge t\textbf{1},\ \textbf{1}^{\textsf T}(y+s)\le 1.\)
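As an alternative to a single linear program, the partition can also be computed one row at a time: \(i\in N\) precisely when some \(x\) satisfies \(Ax\le 0\) with \((Ax)_i<0\). The sketch below (our own per-row-LP variant, not the paper's implementation) uses this characterization; the box \(-1\le x\le 1\) merely keeps each LP bounded.

```python
import numpy as np
from scipy.optimize import linprog

def partition(A):
    """Canonical partition B, N of Proposition 3 (0-based indices):
    i is in N iff min { (Ax)_i : Ax <= 0, -1 <= x <= 1 } < 0."""
    m, n = A.shape
    N = []
    for i in range(m):
        res = linprog(c=A[i], A_ub=A, b_ub=np.zeros(m),
                      bounds=[(-1.0, 1.0)] * n)
        if res.success and res.fun < -1e-9:
            N.append(i)
    B = [i for i in range(m) if i not in N]
    return B, N

# the two examples following Proposition 3 (0-based indices):
print(partition(np.array([[1.0], [-1.0]])))  # ([0, 1], []): N is empty
print(partition(np.array([[1.0], [1.0]])))   # ([], [0, 1]): N is everything
```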
3.2 Step 2: Upper bound on \(H_0(A_N)\)
Suppose \(N \ne \emptyset \) as otherwise \(H_0(A_N) = 0\). The remarks following Proposition 1 show that
\(H_0(A_N) \le \Vert \bar{x}\Vert _2\)
for any \(\bar{x}\in {\mathbb R}^n\) such that \(A_N\bar{x} \ge \textbf{1}\). The best such upper bound can be computed via the following quadratic program
\(\min _x\ \Vert x\Vert _2^2 \quad \text {s.t.}\quad A_N x \ge \textbf{1}.\)
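A minimal sketch of this step (our own illustration using SciPy's SLSQP rather than a dedicated QP solver; the matrix \(A_N\) is a made-up example):

```python
import numpy as np
from scipy.optimize import minimize

def h0_AN_bound(A_N):
    """Solve min ||x||_2^2 subject to A_N x >= 1 and return ||xbar||_2,
    an upper bound on H_0(A_N) by the remark after Proposition 1."""
    n = A_N.shape[1]
    cons = {"type": "ineq", "fun": lambda x: A_N @ x - 1.0}
    res = minimize(lambda x: x @ x, x0=np.zeros(n),
                   constraints=[cons], method="SLSQP")
    return float(np.linalg.norm(res.x, 2))

A_N = np.array([[-1.0, 0.0], [0.0, -1.0]])  # toy example
bound = h0_AN_bound(A_N)                    # xbar = (-1, -1)
print(bound)
```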
3.3 Step 3: Upper bound on \(H_0(A_B)\)
Suppose \(B \ne \emptyset \) as otherwise \(H_0(A_B) = 0\). The remarks following Proposition 2 show that
\(H_0(A_B) \le \frac{2}{\sigma _{\min }^+(A_B^{\textsf T}\bar{Y})}, \quad \text {where } \bar{Y} = \text {Diag}(\bar{y}),\)
for any \(\bar{y}\in {\mathbb R}^B_{++}\) such that \(\textbf{1}_B^{\textsf{T}}\bar{y} = 1\) and \(A_B^{\textsf{T}}\bar{y} = 0\). Although the best such upper bound is challenging to compute, an upper bound of this kind that is within a factor of \(\sqrt{|B|}\) of the best possible one can be computed via the following convex program
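As a hedged substitute for the convex program of this step (the substitute is our own heuristic, not the paper's program), the sketch below picks \(\bar{y}\) by maximizing its smallest entry over \(\{y: A_B^{\textsf T}y = 0,\ \textbf{1}^{\textsf T}y = 1,\ y\ge 0\}\) via a linear program and then evaluates \(2/\sigma _{\min }^+(A_B^{\textsf T}\bar{Y})\):

```python
import numpy as np
from scipy.optimize import linprog

def h0_AB_bound(A_B, tol=1e-10):
    """Pick ybar maximizing its smallest entry subject to A_B^T y = 0 and
    1^T y = 1, then return 2 / sigma_min^+(A_B^T Ybar)."""
    m, n = A_B.shape
    # variables (y, d): maximize d  s.t.  A_B^T y = 0, 1^T y = 1, y_i >= d
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_eq = np.block([[A_B.T, np.zeros((n, 1))],
                     [np.ones((1, m)), np.zeros((1, 1))]])
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    A_ub = np.hstack([-np.eye(m), np.ones((m, 1))])   # d - y_i <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(None, None)])
    ybar = res.x[:m]
    sigma = np.linalg.svd(A_B.T @ np.diag(ybar), compute_uv=False)
    return float(2.0 / sigma[sigma > tol].min())

A_B = np.array([[1.0], [-1.0]])   # toy example: ybar = (1/2, 1/2)
bound = h0_AB_bound(A_B)
print(bound)                      # 2*sqrt(2)
```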
3.4 Step 4: Upper bound on \(\mathcal {H}(L,K)\)
Suppose both \(N \ne \emptyset \) and \(B \ne \emptyset \) as otherwise \(\mathcal {H}(L,K)=1\) or \(\mathcal {H}(L,K) = 0\). Let Q be an orthonormal basis for \(L:=\{x:A_Bx =0\}\) and \(M = DA_NQ\) where D is the diagonal matrix with positive diagonal entries such that all rows of \(DA_N\) have Euclidean norm equal to one. Then the remarks following Proposition 4 imply that
\(\mathcal {H}(L,K) \le 1 + 2\Vert \bar{z}\Vert _2\)
for any \(\bar{z} \ge 0\) such that \(M\bar{z} \ge \textbf{1}\). The best such upper bound can be computed via the following quadratic program
\(\min _z\ \Vert z\Vert _2^2 \quad \text {s.t.}\quad Mz \ge \textbf{1},\ z \ge 0.\)
3.5 Putting it all together: A procedure to bound \(H_0(A)\)
Theorem 1 allows us to stitch together the partition B, N and the upper bounds on \(H_0(A_B),\) \(H_0(A_N),\) and \(\mathcal {H}(L,K)\) to obtain an upper bound on \(H_0(A)\) as detailed in Algorithm 1 below.
Data availability
The developments on this paper followed a theoretical mathematical approach and did not analyze or generate any datasets.
References
Applegate, D., Hinder, O., Lu, H., Lubin, M.: Faster first-order primal-dual methods for linear programming using restarts and sharpness. Math. Program. 1–52 (2022)
Azé, D., Corvellec, J.: On the sensitivity analysis of Hoffman constants for systems of linear inequalities. SIAM J. Optim. 12(4), 913–927 (2002)
Beck, A., Shtern, S.: Linearly convergent away-step conditional gradient for non-strongly convex functions. Math. Program. 164(1), 1–27 (2017)
Burke, J., Tseng, P.: A unified analysis of Hoffman’s bound via Fenchel duality. SIAM J. Optim. 6(2), 265–282 (1996)
Burke, J.V., Deng, S.: Weak sharp minima revisited, part III: Error bounds for differentiable convex inclusions. Math. Program. 116(1–2), 37–56 (2009)
Güler, O., Hoffman, A., Rothblum, U.: Approximations to solutions to systems of linear inequalities. SIAM J. Matrix Anal. Appl. 16(2), 688–696 (1995)
Hoffman, A.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49(4), 263–265 (1952)
Khachiyan, L.: On the complexity of approximating extremal determinants in matrices. J. Complex. 11(1), 138–153 (1995)
Klatte, D., Thiere, G.: Error bounds for solutions of linear equations and inequalities. Z. Oper. Res. 41(2), 191–214 (1995)
Lacoste-Julien, S., Jaggi, M.: On the global linear convergence of Frank-Wolfe optimization variants. In: Advances in Neural Information Processing Systems (NIPS) (2015)
Leventhal, D., Lewis, A.: Randomized methods for linear constraints: convergence rates and conditioning. Math. Oper. Res. 35, 641–654 (2010)
Li, W.: The sharp Lipschitz constants for feasible and optimal solutions of a perturbed linear program. Linear Algebra Appl. 187, 15–40 (1993)
Luo, Z., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46(1), 157–178 (1993)
Mangasarian, O., Shiau, T.-H.: Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems. SIAM J. Control. Optim. 25(3), 583–595 (1987)
Peña, J., Vera, J., Zuluaga, L.: Equivalence and invariance of the chi and Hoffman constants of a matrix. arXiv preprint arXiv:1905.06366 (2019)
Peña, J., Vera, J., Zuluaga, L.: New characterizations of Hoffman constants for systems of linear constraints. Math. Program. 187(1), 79–109 (2021)
Robinson, S.: Normed convex processes. Trans. Am. Math. Soc. 174, 127–140 (1972)
Robinson, S.: Bounds for error in the solution set of a perturbed linear program. Linear Algebra Appl. 6, 69–81 (1973)
Robinson, S.: Regularity and stability for convex multivalued functions. Math. Oper. Res. 1(2), 130–143 (1976)
Stewart, G.: On scaled projections and pseudoinverses. Linear Algebra Appl. 112, 189–193 (1989)
Todd, M.: A Dantzig-Wolfe-like variant of Karmarkar’s interior-point linear programming algorithm. Oper. Res. 38(6), 1006–1018 (1990)
Ursescu, C.: Multifunctions with convex closed graph. Czechoslov. Math. J. 25(3), 438–441 (1975)
Wang, P., Lin, C.: Iteration complexity of feasible descent methods for convex optimization. J. Mach. Learn. Res. 15(1), 1523–1548 (2014)
Zalinescu, C.: Sharp estimates for Hoffman’s constant for systems of linear inequalities and equalities. SIAM J. Optim. 14(2), 517–533 (2003)
Zheng, X., Ng, K.: Hoffman’s least error bounds for systems of linear inequalities. J. Global Optim. 30(4), 391–403 (2004)
Funding
Open access funding provided by Carnegie Mellon University. This research has been supported by the Bajaj Family Chair at the Tepper School of Business.
Ethics declarations
Conflict of interest
The author has no competing interests to declare that are relevant to the content of this article.
Cite this article
Peña, J.F. An easily computable upper bound on the Hoffman constant for homogeneous inequality systems. Comput Optim Appl 87, 323–335 (2024). https://doi.org/10.1007/s10589-023-00514-y