A comparative note on the relaxation algorithms for the linear semi-infinite feasibility problem

Abstract

The problem (LFP) of finding a feasible solution to a given linear semi-infinite system arises in different contexts. This paper provides an empirical comparative study of relaxation algorithms for (LFP). In this study we consider, together with the classical algorithm implemented with different values of the fixed parameter (the step size), a new relaxation algorithm with a random parameter, which outperforms the classical one in most test problems, whatever fixed parameter is taken. This new algorithm converges geometrically to a feasible solution under mild conditions. The relaxation algorithms under comparison have been implemented using the extended cutting angle method for solving the global optimization subproblems.

References

  • Agmon, S. (1954). The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6, 382–392.

  • Auslender, A., Ferrer, A., Goberna, M. A., & López, M. A. (2015). Comparative study of RPSALG algorithm for convex semi-infinite programming. Computational Optimization and Applications, 60, 59–87.

  • Bagirov, A. M., & Rubinov, A. M. (2000). Global minimization of increasing positively homogeneous functions over the unit simplex. Annals of Operations Research, 98, 171–187.

  • Bagirov, A. M., & Rubinov, A. M. (2001). Modified versions of the cutting angle method. In N. Hadjisavvas & P. M. Pardalos (Eds.), Advances in convex analysis and global optimization (pp. 245–268). Dordrecht: Kluwer.

  • Bartle, R. G. (1964). The elements of real analysis. New York: Wiley.

  • Basu, A., De Loera, J. A., & Junod, M. (2013). On Chubanov’s method for linear programming. INFORMS Journal on Computing, 26, 336–350.

  • Beliakov, G. (2003). Geometry and combinatorics of the cutting angle method. Optimization, 52, 379–394.

  • Beliakov, G. (2004). Cutting angle method. A tool for constrained global optimization. Optimization Methods and Software, 19, 137–151.

  • Beliakov, G. (2005). A review of applications of the cutting angle method. In A. M. Rubinov & V. Jeyakumar (Eds.), Continuous optimization (pp. 209–248). New York: Springer.

  • Beliakov, G. (2008). Extended cutting angle method of global optimization. Pacific Journal of Optimization, 4, 153–176.

  • Beliakov, G., & Ferrer, A. (2010). Bounded lower subdifferentiability optimization techniques: Applications. Journal of Global Optimization, 47, 211–231.

  • Ben-Tal, A., El Ghaoui, L., & Nemirovski, A. (2009). Robust optimization. Princeton: Princeton University Press.

  • Ben-Tal, A., & Nemirovski, A. (1999). Robust solutions of uncertain linear programs. Operations Research Letters, 25(1), 1–13.

  • Betke, U. (2004). Relaxation, new combinatorial and polynomial algorithms for the linear feasibility problem. Discrete and Computational Geometry, 32, 317–338.

  • Borwein, J. M., & Tam, M. K. (2014). A cyclic Douglas–Rachford iteration scheme. Journal of Optimization Theory and Applications, 160, 1–29.

  • Cánovas, M. J., López, M. A., Parra, J., & Toledo, F. J. (2005). Distance to ill-posedness and the consistency value of linear semi-infinite inequality systems. Mathematical Programming, 103A, 95–126.

  • Cheney, E. W., & Goldstein, A. A. (1959). Newton method for convex programming and Tchebycheff approximation. Numerische Mathematik, 1, 253–268.

  • Combettes, P. L. (1996). The convex feasibility problem in image recovery. In P. Hawkes (Ed.), Advances in imaging and electron physics (Vol. 95, pp. 155–270). New York: Academic Press.

  • Dinh, N., Goberna, M. A., & López, M. A. (2006). From linear to convex systems: Consistency, Farkas lemma and applications. Journal of Convex Analysis, 13, 279–290.

  • Dolan, E. D., & Moré, J. J. (2002). Benchmarking optimization software with performance profiles. Mathematical Programming, 91, 201–213.

  • Eriksson, K., Estep, D., & Johnson, C. (2004). Applied mathematics: Body and soul. Berlin: Springer.

  • Ferrer, A., & Miranda, E. (2013). Random test examples with known minimum for convex semi-infinite programming problems. E-prints UPC. http://hdl.handle.net/2117/19118. Accessed 28 February 2015.

  • Goberna, M. A., Jeyakumar, V., Li, G., & Vicente-Pérez, J. (2014). Robust solutions of uncertain multi-objective linear semi-infinite programming. SIAM Journal on Optimization, 24, 1402–1419.

  • Goberna, M. A., Jeyakumar, V., Li, G., & Vicente-Pérez, J. (2015). Robust solutions to multi-objective linear programs with uncertain data. European Journal of Operational Research, 242, 730–743.

  • Goberna, M. A., & López, M. A. (1998). Linear semi-infinite optimization. Chichester: Wiley.

  • Goberna, M. A., & López, M. A. (2014). Post-optimal analysis in linear semi-infinite optimization. New York: Springer.

  • González-Gutiérrez, E., Rebollar, L. A., & Todorov, M. I. (2011a). Rate of convergence of a class of numerical methods solving linear inequality systems. Optimization, 60, 947–957.

  • González-Gutiérrez, E., Rebollar, L. A., & Todorov, M. I. (2011b). Under and over projection methods for solving linear inequality systems. Comptes Rendus de l’Académie Bulgare des Sciences, 64, 785–790.

  • González-Gutiérrez, E., Rebollar, L. A., & Todorov, M. I. (2012). Relaxation methods for solving linear inequality systems: Converging results. TOP, 20, 426–436.

  • González-Gutiérrez, E., & Todorov, M. I. (2012). A relaxation method for solving systems with infinitely many linear inequalities. Optimization Letters, 6, 291–298.

  • González-Gutiérrez, E., & Todorov, M. I. (2014). Generalized step iteration in the relaxation method for the feasibility problem. Preprint D113/2014, IMPA, Rio de Janeiro. https://institucional.impa.br/preprint/lista.action?serie=4

  • Horst, R., Pardalos, P., & Thoai, N. V. (2000). Introduction to global optimization (1st ed.). Dordrecht: Kluwer.

  • Hu, H. (1994). A projection method for solving infinite systems of linear inequalities. In D.-Z. Du & J. Sun (Eds.), Advances in optimization and approximation (pp. 186–194). Dordrecht: Kluwer.

  • Jeroslow, R. G. (1979). Some relaxation methods for linear inequalities. Cahiers du CERO, 21, 43–53.

  • Kelley, J. E., Jr. (1960). The cutting-plane method for solving convex programs. Journal of the Society for Industrial and Applied Mathematics, 8, 703–712.

  • Rubinov, A. M. (2000). Abstract convexity and global optimization. Dordrecht/Boston: Kluwer.

Acknowledgments

The authors are grateful to the referees for their constructive comments and helpful suggestions which have contributed to the final preparation of the paper.

Author information

Corresponding author

Correspondence to M. I. Todorov.

Additional information

M.I. Todorov: On leave from IMI-BAS, Sofia, BG.

This research was partially supported by MICINN of Spain, Grant MTM2014-59179-C2-1-P, and by the Sistema Nacional de Investigadores, Mexico.

Appendices

Appendix 1: Extended cutting angle method

The extended cutting angle method (ECAM for short), due to Beliakov (2008), solves hard global optimization problems of the form

$$\begin{aligned} \inf \left\{ f(x)\,{:}\,x\in X\right\} , \end{aligned}$$
(19)

where f is Lipschitz continuous and X is a polytope. For simplicity, we assume that \(\dim X=n\). Since any full-dimensional polytope can be expressed as a finite union of non-overlapping simplices, X will be a simplex throughout this appendix.

In ECAM the objective function is optimized by building a sequence of piecewise linear underestimates. ECAM is inspired by the classical cutting plane method of Kelley (1960) and Cheney and Goldstein (1959) for solving linearly constrained convex programs of the form (3), where X is the solution set of a given linear system and \(f\,{:}\,\mathbb {R}^{n}\rightarrow \mathbb {R}\) is convex. Since f is lower semicontinuous, it is the upper envelope of the set of all its affine minorants, i.e.

$$\begin{aligned} f=\text{ sup } \{h\,{:}\,h \text{ affine } \text{ function, } h\le f\}. \end{aligned}$$
(20)

Indeed, it is enough to consider in (20) the affine functions of the form \(h(x)=f(z)+\left\langle u,x-z\right\rangle \), where \(u\in \partial f\left( z\right) \), the graph of h being a hyperplane which supports the epigraph of f at \(\left( z,f(z)\right) \). Let \(x^{1},\ldots ,x^{k}\in X\) be given and consider the affine functions \(h^{j}(x)=f(x^{j})+\left\langle u^{j},x-x^{j}\right\rangle \), for some \(u^{j}\in \partial f\left( x^{j}\right) ,\,j=1,\ldots ,k.\) The function

$$\begin{aligned} f_{k}:=\max _{j=1,\ldots ,k}h^{j} \end{aligned}$$
(21)

is a convex piecewise affine underestimate of the objective function f, in other words, a polyhedral convex minorant of f. The kth iteration of the cutting plane method consists of computing an optimal solution \(x^{k+1}\) of the approximating problem \(\inf \left\{ f_{k}(x)\,{:}\,x\in X\right\} \), which results from replacing f with \(f_{k}\) in (3) or, equivalently, solving the linear programming problem in \(\mathbb {R}^{n+1}\)

$$\begin{aligned} \inf \left\{ x_{n+1}\,{:}\,x\in X,x_{n+1}\ge h^{j}(x),j=1,\ldots ,k\right\} , \end{aligned}$$
(22)

where \(x=\left( x_{1},\ldots ,x_{n}\right) \). Then the next underestimate of f,

$$\begin{aligned} f_{k+1}:=\max \left\{ f_{k},h^{k+1}\right\} , \end{aligned}$$
(23)

is a more accurate approximation to f, and the method iterates.
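To make the iteration (21)–(23) concrete, here is a minimal Python sketch of a Kelley-type cutting plane loop, assuming a convex differentiable objective (its gradient plays the role of the subgradient \(u^{j}\)) and a box-shaped X handled through scipy.optimize.linprog; the function name cutting_plane and its interface are our own illustration, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane(f, grad_f, bounds, x0, max_iter=50, tol=1e-6):
    """Kelley-type cutting plane sketch: minimize a convex f over a box X.

    f, grad_f : callables returning f(x) and a (sub)gradient of f at x
    bounds    : list of (lower, upper) pairs describing the box X
    """
    n = len(bounds)
    cuts = []                                   # pairs (u^j, f(x^j) - <u^j, x^j>)
    x = np.asarray(x0, dtype=float)
    f_best, x_best = np.inf, x.copy()
    for _ in range(max_iter):
        fx, ux = f(x), np.asarray(grad_f(x), dtype=float)
        if fx < f_best:
            f_best, x_best = fx, x.copy()
        cuts.append((ux, fx - ux @ x))
        # LP (22) in the variables (x, x_{n+1}): minimize x_{n+1} s.t. x_{n+1} >= h^j(x)
        c = np.zeros(n + 1); c[-1] = 1.0
        A_ub = np.array([np.append(u, -1.0) for u, cj in cuts])
        b_ub = np.array([-cj for u, cj in cuts])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=list(bounds) + [(None, None)])
        x, t = res.x[:n], res.x[-1]
        if f_best - t <= tol:                   # f_k underestimates f, so this bounds the gap
            break
    return x_best
```

A call such as cutting_plane(lambda x: x @ x, lambda x: 2 * x, [(-1.0, 1.0)] * 2, [1.0, 1.0]) illustrates the interface on a toy quadratic over a box.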

The generalized cutting plane method for (3), where \(f\,{:}\,\mathbb {R}^{n}\rightarrow \mathbb {R}\) is now a non-convex function while \(X=\left\{ x\in \mathbb {R}_{+}^{n}\,{:}\,\sum \nolimits _{i=1}^{n}x_{i}=1\right\} \) is the unit simplex, follows the same script, except that the underestimate \(f_{k}\) is built using the so-called H-subgradients (see Rubinov 2000) instead of ordinary subgradients, so that minimizing \(f_{k}\) on X is no longer a convex problem. The cutting angle method (Bagirov and Rubinov 2000, 2001), of which ECAM is a variant, is an efficient numerical method for minimizing the underestimates when f belongs to a certain class of abstract convex functions. Assume that f is Lipschitz continuous with Lipschitz constant \(M>0\) and take a scalar \(\gamma \ge M.\) Let \(x^{1},\ldots ,x^{k}\in X\) be given. For \(j=1,\ldots ,k,\) we define the support vector \(l^{j}\in \mathbb {R}^{n} \) by

$$\begin{aligned} l_{i}^{j}:=\frac{f(x^{j})}{\gamma }-x_{i}^{j},\quad i=1,\ldots ,n, \end{aligned}$$
(24)

and the support function \(h^{j}\) by

$$\begin{aligned} h^{j}(x):=\min _{i=1,\ldots ,n}\left( f(x^{j})-\gamma (x_{i}^{j}-x_{i})\right) = \min _{i=1,\ldots ,n}\gamma \left( l_{i}^{j}+x_{i}\right) . \end{aligned}$$
(25)

Since the functions \(h^{j}\) are concave piecewise affine underestimates of f (i.e. polyhedral concave minorants of f), the underestimate \(f_{k}\) defined in (21) is now a saw-tooth underestimate of f and its minimization becomes a hard problem, as (22) is no longer a linear program. ECAM locates the set \(V^{k}\) of all local minima of the function \(f_{k}\) which, after sorting, yields the set of global minima of \(f_{k}\) (see Beliakov 2005, 2008; Beliakov and Ferrer 2010 for additional information). A global minimum \(x^{k+1}\) of \(f_{k}\) is added to the set \(\left\{ x^{1},\ldots ,x^{k}\right\} \) and the method iterates with \(f_{k+1}:=\max \left\{ f_{k},h^{k+1}\right\} \).
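As a small illustration of (24)–(25) (not taken from the paper), the following Python sketch builds the support vectors \(l^{j}\) and evaluates the saw-tooth underestimate \(f_{k}\) at a point of the unit simplex; the toy objective and the value \(\gamma =2\) are our own assumptions.

```python
import numpy as np

def support_vectors(points, f_values, gamma):
    """Support vectors (24): l^j_i = f(x^j)/gamma - x^j_i, one row per x^j."""
    return f_values[:, None] / gamma - points          # shape (k, n)

def sawtooth(x, L, gamma):
    """Saw-tooth underestimate (21)+(25): f_k(x) = max_j min_i gamma*(l^j_i + x_i)."""
    return np.max(np.min(gamma * (L + x), axis=1))

# Toy usage on the unit simplex of R^3 with an (assumed) Lipschitz objective
f = lambda x: np.sum(np.abs(x - 1.0 / 3.0))
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
vals = np.array([f(p) for p in pts])
L = support_vectors(pts, vals, gamma=2.0)              # gamma >= Lipschitz constant of f
print(sawtooth(np.array([1 / 3, 1 / 3, 1 / 3]), L, gamma=2.0))
```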

As shown in Beliakov (2003, 2005, 2008), a necessary and sufficient condition for a point \(x^{*}\in \hbox {ri}\,X\) to be a local minimizer of the function \(f_{k}\) defined by (21) with the support functions (25) is that there exists an index set \(J=\{k_{1},k_{2},\ldots ,k_{n+1}\}\) such that

$$\begin{aligned} d=f_{k}(x^{*})=\gamma \left( l_{1}^{k_{1}}+x_{1}^{*}\right) = \gamma \left( l_{2}^{k_{2}}+x_{2}^{*}\right) =\cdots = \gamma \left( l_{n+1}^{k_{n+1}}+x_{n+1}^{*}\right) , \end{aligned}$$

and \(\forall i\in \{1,\ldots ,n+1\}\),

$$\begin{aligned} \left( l_{i}^{k_{i}}+x_{i}^{*}\right) <\left( l_{j}^{k_{i}}+x_{j}^{*}\right) ,\quad j\ne i. \end{aligned}$$

Let \(x^{*}\) be a local minimizer of \(f_{k}\), which corresponds to some index set J satisfying the above conditions. Form the ordered combination of the support vectors \(L=\{l^{k_{1}}, l^{k_{2}},\ldots , l^{k_{n+1}}\}\) that corresponds to J. It is helpful to represent this combination with a matrix L whose rows are the support vectors \(l^{k_{i}}\):

$$\begin{aligned} L:=\left( \begin{matrix} l_{1}^{k_{1}} &{}\quad l_{2}^{k_{1}} &{}\quad \ldots &{}\quad l_{n+1}^{k_{1}}\\ l_{1}^{k_{2}} &{}\quad l_{2}^{k_{2}} &{}\quad \ldots &{}\quad l_{n+1}^{k_{2}}\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ l_{1}^{k_{n+1}} &{}\quad l_{2}^{k_{n+1}} &{}\quad \ldots &{}\quad l_{n+1}^{k_{n+1}} \end{matrix}\right) , \end{aligned}$$
(26)

so that its components are given by \(L_{ij}=\frac{f(x^{k_{i}})}{\gamma }-x_{j}^{k_{i}}\).

Let the support vectors \(l^{k},k=1,\ldots ,K\) be defined as in (24). Let \(x^{*}\) denote a local minimizer of \(f_{k}\) and \(d=f_{k}(x^{*})\). Then the matrix (26) corresponding to \(x^{*}\) enjoys the following properties (see Beliakov 2008):

  1. \(\forall i,j \in \{1,\ldots ,n+1\}, i \ne j: l_{i}^{k_{j}}>l_{i}^{k_{i}}\),

  2. \(\forall r \not \in \{k_{1},k_{2},\ldots ,k_{n+1} \} \; \exists i\in \{1,\ldots ,n+1\}: L_{ii}=l_{i}^{k_{i}} \ge l_{i}^{r}\),

  3. \(d=\frac{\gamma }{n+1}\left( \hbox {Trace}\,(L)+1\right) \), and

  4. \(x^{*}_{i}=\frac{d}{\gamma }-l^{k_{i}}_{i},\ i=1,\ldots ,n+1\).

Property 1 states that the diagonal elements of the matrix L are dominated by the remaining entries of their respective columns, and Property 2 states that no support vector \(l^{r}\) outside L strictly dominates the diagonal of L. The approach taken in Beliakov (2004, 2005) is to enumerate all combinations L satisfying Properties 1–2, which gives the positions of the local minima \(x^{*}\) and their values d through Properties 3–4.
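A hedged sketch of this test: given all support vectors and a candidate index set J, the function below (our own helper, named check_combination) verifies Properties 1–2 and, when they hold, recovers d and \(x^{*}\) from Properties 3–4.

```python
import numpy as np

def check_combination(support, J, gamma):
    """support : array of shape (K, n+1) whose rows are the vectors l^k of (24)
       J       : candidate index set (k_1, ..., k_{n+1})
       Returns (d, x*) if Properties 1-2 hold for the matrix L of (26), else None."""
    L = support[list(J)]                        # matrix (26): rows l^{k_1}, ..., l^{k_{n+1}}
    diag = np.diag(L)
    n1 = L.shape[1]
    # Property 1: every off-diagonal column entry exceeds the diagonal entry of its column
    for i in range(n1):
        for j in range(n1):
            if i != j and not (L[j, i] > L[i, i]):
                return None
    # Property 2: no support vector outside J strictly dominates the diagonal of L
    for r in range(support.shape[0]):
        if r not in J and np.all(support[r] > diag):
            return None
    # Properties 3-4: value d and position x* of the local minimum
    d = gamma * (np.trace(L) + 1.0) / n1
    x_star = d / gamma - diag
    return d, x_star
```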

From (23), combinations of L-matrices can be built incrementally: start with the first \({n+1}\) support vectors (which yield the unique combination \(L=\{l^{1},l^{2},\ldots ,l^{n+1}\}\)) and then add one new support vector at a time. Suppose we have already identified the local minima of \(f_{k}\), i.e., all the required combinations. When the support vector \(l^{k+1}\) is added, most of these local minima are inherited by \(f_{k+1}\) (a few are lost because Property 2 may fail with \(l^{k+1}\) playing the role of \(l^{r}\)), and only a few new local minima need to be added, namely new combinations that necessarily involve \(l^{k+1}\). These new combinations are simple modifications of the combinations for which Property 2 fails with \(l^{r}=l^{k+1}\).
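For small instances the combinations can simply be enumerated by brute force, which is enough to illustrate the idea (ECAM's incremental scheme avoids re-examining every combination at each step); the sketch below is ours, not the authors' implementation, and reuses check_combination from the previous listing.

```python
from itertools import combinations

def local_minima(support, gamma):
    """Brute-force enumeration of the local minima of f_k (exponential in general;
    the incremental scheme described above only re-examines the combinations
    that the newest support vector l^{k+1} creates or destroys)."""
    K, n1 = support.shape
    minima = []
    for J in combinations(range(K), n1):
        result = check_combination(support, J, gamma)
        if result is not None:
            minima.append(result)                       # pair (d, x*)
    return sorted(minima, key=lambda pair: pair[0])     # global minimum comes first
```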

When ECAM is applied to solve the global optimization subproblem (4) at step r of ERA, the procedure finishes as soon as \(f_{{ best}}-d^{*}\le \beta \), so that a \(\beta \)-global optimal solution is obtained.

Remark 13

Notice that the transformation of variables

  1. \(\bar{x}_{i}=x_{i}-a_{i},\ i=1,\dots ,n\), and \(d=\sum \nolimits _{i=1}^{n}(b_{i}-a_{i})\), with \(\bar{x}_{i}\ge 0\) and \(\sum \nolimits _{i=1}^{n}\bar{x}_{i}\le d\),

  2. \(z_{i}=\frac{\bar{x}_{i}}{d},\ i=1,\dots ,n\), and \(z_{n+1}=1-\sum \nolimits _{i=1}^{n}z_{i},\)

allows us to replace the program

$$\begin{aligned} \min \{f(x):x\in [a,b]\} \end{aligned}$$

by the following one:

$$\begin{aligned} \min \{g(z_{1},\dots ,z_{n+1}):(z_{1},\dots ,z_{n+1})\in X\}, \end{aligned}$$

where X denotes the unit simplex in \(\mathbb {R}^{n+1}\) and g is f expressed in the new variables.
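A minimal sketch of this change of variables, assuming a box \([a,b]\subset \mathbb {R}^{n}\); the helper names box_to_simplex and to_x are ours.

```python
import numpy as np

def box_to_simplex(f, a, b):
    """Remark 13: rewrite min f(x), x in [a,b], as a problem on the unit simplex
    of R^{n+1}.  Returns g and the map recovering x from z = (z_1, ..., z_{n+1})."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = np.sum(b - a)
    to_x = lambda z: a + d * np.asarray(z, dtype=float)[:-1]   # drop the slack z_{n+1}
    g = lambda z: f(to_x(z))
    return g, to_x

# Toy usage on the box [0,1]^2: z_3 = 1 - z_1 - z_2 is the slack coordinate
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
g, to_x = box_to_simplex(f, [0.0, 0.0], [1.0, 1.0])
print(g([0.15, 0.35, 0.5]), to_x([0.15, 0.35, 0.5]))           # 0.0 at x = (0.3, 0.7)
```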

Appendix 2: Performance profiles

In this paper we compare, on the one hand, 8 implementations of the classical fixed-step relaxation algorithm, corresponding to 8 choices of \(\lambda \), on a battery of 27 feasibility problems and, on the other hand, 5 implementations of the new relaxation algorithm with variable step size, corresponding to 5 choices of \(\upsilon \), on the same set of test problems. Denote by \(\mathcal {S}\) the set of implementations to be compared, so that the cardinality of \(\mathcal {S}\), denoted by \(\hbox {size}\,\mathcal {S}\), is 8 and 5 for the classical and the new relaxation algorithms, respectively. Denote also by \(\mathcal {P}\) the set of test feasibility problems, with \(\hbox {size}\,\mathcal {P}=27\) for both algorithms.

The notion of performance profile (Dolan and Moré 2002) allows us to compare the performance of the implementations from \(\mathcal {S}\) on \(\mathcal {P}\). For each pair \(({\normalsize p},{\normalsize s})\in \mathcal {P\times S}\) we define

$$\begin{aligned} f_{{\normalsize p},{\normalsize s}} :=\text{ number } \text{ of } \text{ function } \text{ evaluations } \text{ required } \text{ to } \text{ solve } \text{ problem }\,p\, \text{ by } \text{ solver }\,s. \end{aligned}$$

Consider a fixed problem \({\normalsize p}\in \mathcal {P}\). The performance of a solver \({\normalsize s}\in \mathcal {S}\) able to solve \({\normalsize p}\) is compared with the best performance of any solver of \(\mathcal {S}\) on the same problem through the performance ratio

$$\begin{aligned} r_{{\normalsize p},{\normalsize s}}:=\frac{f_{{\normalsize p},{\normalsize s} }}{\min \{f_{{\normalsize p},{\normalsize s}}\,{:}~{\normalsize s}\in \mathcal {S}\}}\ge 1. \end{aligned}$$

Obviously, \(r_{{\normalsize p},{\normalsize s}}=1\) means that s is a winner for p, as it is at least as good at solving p as any other solver of \(\mathcal {S}\). For any solver \({\normalsize s}\) unable to solve problem \({\normalsize p}\) we define \(r_{{\normalsize p},{\normalsize s}}=r_{M}\), where \(r_{M}\) denotes an arbitrary scalar such that

$$\begin{aligned} r_{M}>\max \left\{ r_{{\normalsize p},{\normalsize s}}\,{:}\,\text { }s\text { solves }p,\,({\normalsize p},{\normalsize s})\in \mathcal {P\times S}\right\} . \end{aligned}$$

The evaluation of the overall performance of \({\normalsize s}\in \mathcal {S}\) is based on the stepwise non-decreasing function \(\rho _{{\normalsize s}}\,{:}\,\mathbb {R}_{+}\rightarrow [0,1],\) called performance profile of s,  defined as follows:

$$\begin{aligned} \rho _{{\normalsize s}}(t)=\frac{ \text{ size }\{{\normalsize p}\in \mathcal {P}\,{:}~r_{{\normalsize p},{\normalsize s}}\le t\}}{ \text{ size }\mathcal {P}},\quad t\ge 0. \end{aligned}$$

Obviously, \(\rho _{{\normalsize s}}\left( t\right) =0\) for all \(t\in [0,1[\) and \(\rho _{{\normalsize s}}(1)\) is the relative frequency (which could be interpreted as a probability when p is taken at random from \(\mathcal {P}\)) of wins of solver s over the rest of the solvers. We say in brief that \(\rho _{{\normalsize s}}(1)\) is the probability of win for s.

Analogously, for \(t>1, \rho _{{\normalsize s}}(t)\) represents the probability for solver \({\normalsize s}\in \mathcal {S}\) that a performance ratio \(r_{{\normalsize p},{\normalsize s}}\) is within a factor \(t\in \mathbb {R}\) of the best possible ratio, so that \(\rho _{{\normalsize s}}\) can be interpreted as a distribution function and the number

$$\begin{aligned} \rho _{{\normalsize s}}^{*}:=\lim _{t\nearrow r_{M}}\rho _{{\normalsize s}}(t) \end{aligned}$$

as the probability of solving a problem of \(\mathcal {P}\) with \({\normalsize s}\in \mathcal {S}\).
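The quantities of this appendix are straightforward to compute from the matrix of function-evaluation counts. The following Python sketch (our own notation; failed runs are marked with NaN and the toy data are invented) returns the ratios \(r_{{\normalsize p},{\normalsize s}}\) and the profile \(\rho _{{\normalsize s}}(t)\).

```python
import numpy as np

def performance_profiles(F):
    """F[p, s] = function evaluations of solver s on problem p (NaN = failure).
    Returns the ratio matrix r and the profile function rho(s, t)."""
    best = np.nanmin(F, axis=1, keepdims=True)      # best solver on each problem
    r = F / best
    r_M = 2.0 * np.nanmax(r) + 1.0                  # any value larger than every ratio
    r = np.where(np.isnan(r), r_M, r)               # failures get the ratio r_M
    n_problems = F.shape[0]
    def rho(s, t):
        return np.sum(r[:, s] <= t) / n_problems
    return r, rho

# Toy data: 3 problems, 2 solvers; the second solver fails on the third problem
F = np.array([[100.0, 150.0],
              [200.0,  80.0],
              [300.0, np.nan]])
r, rho = performance_profiles(F)
print(rho(0, 1.0), rho(1, 1.0))                     # probabilities of win: 2/3 and 1/3
```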

Cite this article

Ferrer, A., Goberna, M.A., González-Gutiérrez, E. et al. A comparative note on the relaxation algorithms for the linear semi-infinite feasibility problem. Ann Oper Res 258, 587–612 (2017). https://doi.org/10.1007/s10479-016-2135-2
