Facets of a mixed-integer bilinear covering set with bounds on variables

Published in: Journal of Global Optimization

Abstract

We derive a closed-form description of the convex hull of a mixed-integer bilinear covering set with bounds on the integer variables. This convex hull description is obtained by considering certain orthogonal disjunctive sets. The description does not introduce any new variables, but it consists of exponentially many inequalities. We also present an extended formulation with a few extra variables and a much smaller number of constraints, and we derive a linear-time separation algorithm for finding the facet-defining inequalities of this convex hull. We study the effectiveness of the new inequalities and the extended formulation using some examples.



Acknowledgements

We are grateful to three anonymous reviewers for their insightful comments and suggestions, especially the idea of the extended formulation and the simplification of proofs. These suggestions have improved this paper significantly. We also thank Vishnu Narayanan for the insightful discussions we had with him during this work.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Ashutosh Mahajan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices


Optimization over S and separation on conv(S)

Pseudocode for separation of facets of \(conv \left( S \right) \) is provided in Algorithm 1. Now we consider the problem of minimizing a linear function \(c^T x + d^T y\) over S (or equivalently over conv(S)).


If one of the components of c or d is negative, then the problem is unbounded. Suppose \(c \ge 0, d \ge 0\), and one of the components of c, say \(c_t\), is zero. If \(d_t = 0\), then \(\min _{(x,y) \in S} c^T x + d^T y = 0\), and if \(d_t > 0\), then \(\inf _{(x,y) \in S} \ c^T x + d^T y = 0\). This is because, in either case, we can choose \(y_t\) arbitrarily small (and \(x_t\) correspondingly large) such that \(x_t y_t = r\) while all other components are zero. Now let \(c \ge 0, d = 0\), and let \(c_t \le c_j, \forall j \in N\). Then \({\mathcal {L}}(t, 1, r)\) is an optimal solution with optimal value \(c_t\).

The only remaining case is when \(c > 0, d \ge 0, d \ne 0\). We consider it next.

Proposition 7

Consider the orthogonal disjunctive subset \(S_i\) of the set S. Then we can solve the optimization problem \(\min _{(x,y) \in S_i} c_i x_i + d_i y_i\) in polynomial time.

Proof

From the definition, each \((x,y) \in S_i\) is of the form \({\mathcal {L}}(i, x_i, y_i), x_i \in {\mathbb {N}}\). If \(c_i \ge 0, d_i = 0\), then \({\mathcal {L}}(i, 1, r)\) is an optimal solution with optimal value \(c_i\).

Now, we only have to consider \(c_i> 0, d_i > 0\). Let \({\mathcal {L}}(i, x^*_i, y^*_i)\) be an extreme point optimal solution of \(conv \left( S_i \right) \). Clearly, this point should lie on the surface \(x_i y_i = r\). Since the continuous relaxation of the set \(S_i\) is a strictly convex set, the optimal solution \({\mathcal {L}}(i, {\bar{x}}_i, {\bar{y}}_i)\) (say) over the continuous relaxation is unique, and we have,

$$\begin{aligned} {\bar{x}}_i = \sqrt{\frac{r d_i}{c_i}}, \ \text {and} \ {\bar{y}}_i = \frac{r}{{\bar{x}}_i}. \end{aligned}$$

If \(\sqrt{\frac{r d_i}{c_i}}\) is an integer, then \(x^*_i = {\bar{x}}_i, y^*_i = {\bar{y}}_i\). If not, then from the geometry, it is clear that at the optimal solution either \(x^*_i = \left\lceil \sqrt{\frac{r d_i}{c_i}} \right\rceil \) or \(x^*_i = \left\lfloor \sqrt{\frac{r d_i}{c_i}} \right\rfloor \) whichever minimizes the objective function and is nonzero.

So, to find an optimal solution, we just have to check the signs of the objective coefficients and compute the value of \(\sqrt{\frac{r d_i}{c_i}}\). This can be done in constant time. \(\square \)

Now we consider the set S. If an optimal solution exists, there must be an extreme point of conv(S) that is optimal over S. By Theorem 2, there must then be an optimal solution that is an extreme point of \(conv(S_i)\) for some \(i \in N\). Solving the n problems \(\min _{{\mathcal {L}}(i, x_i, y_i) \in S_i} c_i x_i + d_i y_i\) for \(i \in N\) and picking the minimum of the n objective values yields the optimal value and a corresponding optimal solution. Since each subproblem takes constant time to solve, we can solve the whole problem in linear time.
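The procedure above can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation: the function name is hypothetical, and the set S is assumed to have the standard bilinear covering form \(\{(x,y) \in {\mathbb {Z}}^n_+ \times {\mathbb {R}}^n_+ : \sum _i x_i y_i \ge r\}\) with \(r > 0\), which is consistent with the points \({\mathcal {L}}(i, x_i, y_i)\) used in the text.

```python
import math

def optimize_over_S(c, d, r):
    """Minimize c^T x + d^T y over S, assumed to be
    {(x, y) in Z^n_+ x R^n_+ : sum_i x_i y_i >= r}, r > 0.
    Follows the case analysis in the appendix; raises if unbounded.
    Returns (optimal value, (x, y)); (0, None) when only an infimum
    of 0 is attained in the limit."""
    n = len(c)
    if any(ci < 0 for ci in c) or any(di < 0 for di in d):
        raise ValueError("problem is unbounded below")
    # Some c_t = 0: the optimal value (or infimum) is 0, taking
    # y_t arbitrarily small and x_t large with x_t * y_t = r.
    if any(ci == 0 for ci in c):
        return 0.0, None
    # d = 0: pick the cheapest coordinate t and use L(t, 1, r).
    if all(di == 0 for di in d):
        t = min(range(n), key=lambda j: c[j])
        x = [0] * n; y = [0.0] * n
        x[t], y[t] = 1, float(r)
        return c[t], (x, y)
    # General case c > 0, d >= 0, d != 0: for each i, solve the
    # one-variable subproblem min c_i x_i + d_i y_i with
    # x_i y_i = r, x_i a positive integer, by rounding the
    # continuous minimizer sqrt(r d_i / c_i) (Proposition 7).
    best = None
    for i in range(n):
        if d[i] == 0:
            candidates = [1]
        else:
            s = math.sqrt(r * d[i] / c[i])
            candidates = sorted({max(1, math.floor(s)), math.ceil(s)})
        for xi in candidates:
            yi = r / xi
            val = c[i] * xi + d[i] * yi
            if best is None or val < best[0]:
                x = [0] * n; y = [0.0] * n
                x[i], y[i] = xi, yi
                best = (val, (x, y))
    return best
```

Each subproblem inspects at most two integer candidates, so the loop runs in O(n) time overall, matching the linear-time claim above.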

Optimization over \(S^U\) and separation on \(conv \left( S^U \right) \)

Pseudocode for separation of facets of \(conv \left( S^U \right) \) is provided in Algorithm 2. Now we consider the following problem:

$$\begin{aligned} (P) \qquad \zeta = \min \left\{ c^T x + d^T y : (x,y) \in S^U \right\} . \end{aligned}$$

This problem is equivalent to minimizing \(c^T x + d^Ty\) over \(conv \left( S^U \right) \) which is polyhedral and whose extreme points are known. If an optimal solution exists, we will find an extreme point optimal solution to \(conv \left( S^U \right) \).

When \(d_t < 0\) for some \(t \in N\), the problem is unbounded. Otherwise, if \(c \le 0, d = 0\), then clearly \(\left( u_1, \frac{r}{u_1}, u_2, 0, \ldots , u_n, 0 \right) \) is an extreme point optimal solution.

Now the remaining case is \(d \ge 0\). We first partition the set of extreme points of \(conv \left( S^U \right) \) and optimize over those partitions. Let us define the following set for each \(i \in N\).

$$\begin{aligned} E_i = \left\{ (x,y) \in {\mathbb {R}}^{2n}_+ : x_i \in \{1, \ldots , u_i \}, y_i = \frac{r}{x_i}, x_j \in \{ 0, u_j \}, y_j = 0, \forall j \in N, j \ne i \right\} . \end{aligned}$$

From the discussion in Sect. 4.1, all the points in \(E_i\) are extreme points of \(conv \left( S^U \right) \) and \(E = \bigcup _{i \in N} E_i\) is the set of all extreme points of \(conv \left( S^U \right) \). We minimize \(c^T x + d^Ty\) over each set \(E_i, i \in N\) and pick the minimum. Now our goal is to solve the following problem.

$$\begin{aligned} (P_i) \qquad \zeta _i = \min \left\{ c^T x + d^T y : (x,y) \in E_i \right\} . \end{aligned}$$

Clearly \(\zeta = \min \{ \zeta _i : i \in N \}\). Note that for each point in \(E_i\), only the \(i^{th}\) component of the variable y is non-zero and the rest are zero. Therefore, the objective function of the above problem (\(P_i\)) reduces to \(c_i x_i + d_i y_i + \sum _{j \in N, j \ne i} c_j x_j\). For any point \((x,y) \in E_i\), the choices of the components \(x_j \in \{0, u_j\}, j \in N, j \ne i\) are independent of the choice of \(x_i \in \{1, \ldots , u_i \}\). Let \(({\bar{x}}, {\bar{y}})^i \in E_i\) be an optimal solution of (\(P_i\)). Then we must have \({\bar{y}}^i_i = \frac{r}{{\bar{x}}^i_i}, {\bar{y}}^i_j = 0, \forall j \in N, j \ne i\). Let us consider the following choices of the x components of \(({\bar{x}}, {\bar{y}})^i\).

$$\begin{aligned}&{\bar{x}}^i_i \in \{1, \ldots , u_i \} \ \text {such that } ({\bar{x}}^i_i, {\bar{y}}^i_i) \ \text {minimizes } c_i x_i + d_i y_i, \\&{\bar{x}}^i_j = {\left\{ \begin{array}{ll} 0, \ \text {if } c_j > 0, \\ u_j, \ \text {if } c_j \le 0, \end{array}\right. } \forall j \in N, j \ne i. \end{aligned}$$

Clearly, this choice of the components of \(({\bar{x}}, {\bar{y}})^i\) minimizes the objective function. Now, to find the value of \({\bar{x}}^i_i \in \{1, \ldots , u_i \}\), we consider the following cases.

  • Case 1 When \(c_i \le 0\), then \({\bar{x}}^i_i = u_i\). This is because, since \(c_i \le 0\), the maximum value of \(x_i\) in the domain minimizes \(c_i x_i\). Moreover, for this choice of \({\bar{x}}^i_i\), the value \({\bar{y}}^i_i = \frac{r}{u_i}\) is also minimal, and since \(d_i \ge 0\), the point \(\left( u_i, \frac{r}{u_i} \right) \) minimizes \(c_i x_i + d_i y_i\).

  • Case 2 If \(c_i > 0\) and \(d_i = 0\), then \({\bar{x}}^i_i = 1\) and \({\bar{y}}^i_i = r\), since \({\bar{x}}^i_i \ge 1\) and \(c_i x_i\) is minimized at the smallest feasible \(x_i\).

  • Case 3 The remaining case is \(c_i> 0, d_i > 0\). Since the points \({\mathcal {L}} \left( i, p_i, \frac{r}{p_i} \right) , p_i \in \{ 1, \ldots , u_i \}\) are the extreme points of \(conv \left( S^U_i \right) \), minimizing \(c_i x_i + d_i y_i\) over \(conv \left( S^U_i \right) \) and over \(E_i\) are equivalent. To solve this, we use the same analysis as in the proof of Proposition 7, with a slight modification since there is an upper bound \(u_i\) on the variable \(x_i\). So, in this case, we have the following choice of \({\bar{x}}^i_i\), and consequently \({\bar{y}}^i_i = \frac{r}{{\bar{x}}^i_i}\).

    $$\begin{aligned} {\bar{x}}^i_i = {\left\{ \begin{array}{ll} \sqrt{\frac{r d_i}{c_i}}, \ \text {if } \sqrt{\frac{r d_i}{c_i}} \in \{ 1, \ldots , u_i \}, \\ 1, \ \text {if } \sqrt{\frac{r d_i}{c_i}}< 1, \\ \left\lceil \sqrt{\frac{r d_i}{c_i}} \right\rceil \ \text {or } \left\lfloor \sqrt{\frac{r d_i}{c_i}} \right\rfloor , \ \text {whichever minimizes } c_i x_i + d_i \frac{r}{x_i}, \\ \text {if } 1< \sqrt{\frac{r d_i}{c_i}} < u_i \ \text {and } \sqrt{\frac{r d_i}{c_i}} \notin {\mathbb {Z}}_+,\\ u_i, \ \text {if } \sqrt{\frac{r d_i}{c_i}} > u_i. \end{array}\right. } \end{aligned}$$

From the above analysis, we can solve problem (\(P_i\)) in time linear in the input size: we just have to check the signs of the \(n - 1\) entries \(c_j, j \ne i\), and, when \(c_i> 0, d_i > 0\), compute the value of \(\sqrt{\frac{r d_i}{c_i}}\); otherwise only the signs of \(c_i\) and \(d_i\) are needed. Since there are n such subproblems and finding \(\zeta = \min \{ \zeta _i : i \in N \}\) takes O(n) time, we can solve (P) in \(O(n^2)\) time in the input size.
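The case analysis for (P) can be sketched in Python as follows. As before, this is an illustrative sketch: the function name is hypothetical, \(S^U\) is assumed to be S with the added bounds \(x_i \le u_i\), and each set \(E_i\) is taken exactly as defined above.

```python
import math

def optimize_over_SU(c, d, r, u):
    """Minimize c^T x + d^T y over the extreme points E = union of
    E_i of conv(S^U): coordinate i has x_i in {1,...,u_i} and
    y_i = r / x_i; every other j has x_j in {0, u_j} and y_j = 0.
    Assumes the bounded case d >= 0; any d_t < 0 is unbounded."""
    n = len(c)
    if any(di < 0 for di in d):
        raise ValueError("problem is unbounded below")
    best = None
    for i in range(n):
        # Optimal x_j for j != i: u_j if c_j <= 0, else 0.
        base = sum(c[j] * u[j] for j in range(n) if j != i and c[j] <= 0)
        # Optimal x_i in {1, ..., u_i}, by Cases 1-3.
        if c[i] <= 0:                      # Case 1: push x_i to u_i
            xi = u[i]
        elif d[i] == 0:                    # Case 2: smallest x_i
            xi = 1
        else:                              # Case 3: round and clamp
            s = math.sqrt(r * d[i] / c[i])
            if s <= 1:
                xi = 1
            elif s >= u[i]:
                xi = u[i]
            else:
                f, g = math.floor(s), math.ceil(s)
                cost = lambda t: c[i] * t + d[i] * r / t
                xi = f if cost(f) <= cost(g) else g
        zeta_i = base + c[i] * xi + d[i] * r / xi
        if best is None or zeta_i < best[0]:
            x = [u[j] if (j != i and c[j] <= 0) else 0 for j in range(n)]
            y = [0.0] * n
            x[i], y[i] = xi, r / xi
            best = (zeta_i, (x, y))
    return best
```

Each pass computes the base term in O(n) and the coordinate \(x_i\) in constant time, so the loop runs in \(O(n^2)\) overall, matching the bound stated above.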

About this article

Cite this article

Rahman, H., Mahajan, A. Facets of a mixed-integer bilinear covering set with bounds on variables. J Glob Optim 74, 417–442 (2019). https://doi.org/10.1007/s10898-019-00783-0
