# A storm of feasibility pumps for nonconvex MINLP

## Authors

D’Ambrosio, C., Frangioni, A., Liberti, L. et al. Math. Program. (2012) 136: 375.

DOI: 10.1007/s10107-012-0608-x

## Abstract

One of the foremost difficulties in solving Mixed-Integer Nonlinear Programs, either with exact or heuristic methods, is to find a feasible point. We address this issue with a new feasibility pump algorithm tailored for nonconvex Mixed-Integer Nonlinear Programs. Feasibility pumps are algorithms that iterate between solving a continuous relaxation and a mixed-integer relaxation of the original problem. Such approaches currently exist in the literature for Mixed-Integer Linear Programs and convex Mixed-Integer Nonlinear Programs: both cases exhibit the distinctive property that the continuous relaxation can be solved in polynomial time. In nonconvex Mixed-Integer Nonlinear Programming such a property does not hold, and therefore special care has to be exercised in order to allow feasibility pump algorithms to rely only on local optima of the continuous relaxation. Based on a new, high level view of feasibility pump algorithms as a special case of the well-known successive projection method, we show that many possible different variants of the approach can be developed, depending on how several different (orthogonal) implementation choices are taken. A remarkable twist of feasibility pump algorithms is that, unlike most previous successive projection methods from the literature, projection is “naturally” taken in two different norms in the two different subproblems. To cope with this issue while retaining the local convergence properties of standard successive projection methods we propose the introduction of appropriate *norm constraints* in the subproblems; these actually seem to significantly improve the practical performance of the approach. We present extensive computational results on the MINLPLib, showing the effectiveness and efficiency of our algorithm.

### Keywords

Feasibility pump · MINLP · Global optimization · Nonconvex NLP

### Mathematics Subject Classification

90C11 Mixed integer programming · 90C26 Nonconvex programming, global optimization · 90C59 Approximation methods and heuristics

## 1 Introduction

The exact solution of nonconvex MINLP is only possible for certain classes of functions \(f,g\) (e.g. if \(f\) is linear and \(g\) involves bilinear terms \(xy\) [2, 11]). In general, the spatial Branch-and-Bound (sBB) algorithm is used to obtain \(\varepsilon \)-approximate solutions for a given positive constant \(\varepsilon \). The sBB computes upper and lower bounds on the objective function value within sets belonging to an iteratively refined partition of the feasible region. The search is pruned when the lower bound on the current set is worse than the best feasible value so far (the incumbent), when the problem restricted to the current set is infeasible, or when the two bounds for the current set are within \(\varepsilon \). Otherwise, the current set is partitioned and the search continues recursively [6, 35]. Heuristic approaches to solving MINLPs include Variable Neighbourhood Search [30], automatically tuned variable fixing strategies [7], Local Branching [31] and others; notably, most exact approaches for convex MINLPs [8, 21] work as heuristics for nonconvex MINLPs. In heuristic approaches, however, one of the main algorithmic difficulties connected to MINLPs is to find a feasible solution. From the worst-case complexity point of view, finding a feasible MINLP solution is as hard as finding a feasible Nonlinear Programming (NLP) solution, which is NP-hard [36].
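The bound/prune/partition loop of sBB can be illustrated on a toy one-dimensional problem. In the sketch below, a Lipschitz-constant lower bound stands in for the convex relaxations a real sBB solver would construct; the function, interval, and constant in the usage example are illustrative assumptions, not taken from the paper.

```python
import math

def sbb_min(f, lo, hi, L, eps=1e-3):
    """Toy sBB loop for min f(x) on [lo, hi]: compute an upper bound
    (incumbent) from feasible points, a lower bound on each subinterval,
    prune, and otherwise partition.  L is a Lipschitz constant for f."""
    incumbent = min(f(lo), f(hi))            # best feasible value found so far
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid = 0.5 * (a + b)
        incumbent = min(incumbent, f(mid))   # upper bound: any point is feasible here
        # valid lower bound on [a, b]: f cannot drop faster than L allows
        lower = min(f(a), f(b), f(mid)) - L * (b - a) / 2
        if lower >= incumbent - eps:         # prune: no improvement beyond eps possible
            continue
        stack.append((a, mid))               # partition and recurse on both halves
        stack.append((mid, b))
    return incumbent
```

For instance, `sbb_min(lambda t: math.sin(t) + 0.1 * t, 0.0, 10.0, 1.1)` returns a value within the tolerance of the global minimum near \(x \approx 4.61\).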

In this paper we address the issue of MINLP feasibility by extending a well-known approach, namely the Feasibility Pump (FP) to the nonconvex MINLP case. The FP algorithm was originally proposed for Mixed-Integer Linear Programming (MILP) [20], where \(f, g\) are linear forms, and then extended to convex MINLPs [8], where \(g\) are convex functions. In both cases the feasible region is partitioned so that two subproblems are iteratively solved: a problem \(P_1\) involving the continuous variables \(y\) with relaxed integer variables \(x\), and a problem \(P_2\) involving both integer and continuous variables \(x,y\) targeting, through its objective function, the continuous solution of \(P_1\). The two subproblems are iteratively solved, generating sequences of values for \(x\) and \(y\). One of the main theoretical issues in FP is to show that these sequences do not cycle, i.e., are not periodic but converge to some feasible point \((x,y)\). This is indeed the case for the FP version proposed for convex MINLP [8] where \(P_2\) is a MILP, while cycling might happen for the original FP version proposed for MILP [20] where randomization is effectively (and cheaply) used as an escaping mechanism. In the FP for MILPs, \(P_1\) is a Linear Program (LP) and \(P_2\) a rounding phase; in the FP for convex MINLPs, \(P_1\) is a convex NLP and \(P_2\) a MILP iteratively updated with Outer Approximation (OA) constraints derived from the optimum of the convex NLP. In both cases one of the subproblems (\(P_1\)) can be solved in polynomial time; in the FP for convex MINLPs, \(P_2\) is NP-hard in general. Extensions for both FPs exist, addressing solution quality in some cases [1] and CPU time in others [9]. 
The added difficulty in the extension proposed in this paper is that \(P_1\) is a nonconvex NLP, and is therefore NP-hard: thus, in our decomposition, both subproblems are difficult, and special care has to be exercised in order to allow FP algorithms to rely only on local optima of the continuous relaxation.
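In its most abstract form, the alternation between \(P_1\) and \(P_2\) is a two-set projection loop. The following toy sketch (an illustration only, not the algorithm of this paper: the “continuous relaxation” is a simple disk and \(P_2\) is plain rounding) shows the mechanics of generating the two sequences and detecting feasibility when they coincide.

```python
import math

def feasibility_pump(project_cont, round_int, max_iter=100, tol=1e-6):
    """Alternate P1 (projection onto the continuous relaxation) and
    P2 (projection onto the integer lattice) until both agree."""
    x = project_cont(None)                # start from a point of the relaxation
    for _ in range(max_iter):
        x_int = round_int(x)              # P2: nearest integer point
        x = project_cont(x_int)           # P1: relaxation point closest to x_int
        if max(abs(a - b) for a, b in zip(x, x_int)) < tol:
            return x_int                  # integer AND inside the relaxation
    return None                           # cycling or iteration limit

def make_disk_projector(center, radius):
    """Projection onto a disk, standing in for the continuous relaxation."""
    def project(p):
        if p is None:
            return center
        d = math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, center)))
        if d <= radius:
            return tuple(p)
        return tuple(ci + radius * (pi - ci) / d for pi, ci in zip(p, center))
    return project
```

With `feasibility_pump(make_disk_projector((2.8, 2.8), 0.5), lambda p: tuple(round(v) for v in p))` the loop returns `(3, 3)`, an integer point inside the disk.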

A contribution of the present paper is to present FP algorithms as a special case of the well-known Successive Projection Method (SPM). By doing so we show that many possible different variants of the approach can be developed, depending on how several different (orthogonal) implementation choices are taken. A remarkable twist of FP algorithms is that, unlike most previous SPMs from the literature, projection is “naturally” taken in two different norms in \(P_1\) and \(P_2\). To cope with this issue while retaining the local convergence properties of standard SPMs we propose the introduction of appropriate *norm constraints* in the subproblems, an idea that could be generalized to other nonconvex applications of the SPM. In particular, adding a norm constraint to \(P_1\), besides providing nice theoretical convergence properties, actually seems to significantly improve the practical performance of the approach.

The rest of this paper is organized as follows. In Sect. 2 we frame the FP algorithm within the class of Successive Projection Methods, describing their convergence properties. In Sect. 3 we discuss the use of different norms within the two subproblems of the FP algorithm. In Sect. 4 we list our solution strategies for both subproblems. In Sect. 5 we present comparative computational results illustrating the efficiency of the proposed approach. Section 6 concludes the paper.

## 2 A view on feasibility pumps

While in the vast majority of the applications \(\fancyscript{A}\) and \(\fancyscript{B}\) are “easy” convex sets, and it was intended that optimization was to be exact, our nonconvex FP setting also fits under the basic assumption of the approach. Indeed, let \(\fancyscript{X}\) be the coarsest relaxation of the feasible region of (1), let \(C \subseteq \{ 1, \ldots , m \}\) be the set of constraint indices such that \(g_i(x, y)\) is a convex function of \((x, y)\) (note that these do not include the linear defining inequalities of \(\fancyscript{X}\), if any), and let \(N = \{ 1, \ldots , m \}\backslash C\). We denote the list of all convex constraints by \(g_C\), so that \(\fancyscript{C} = \{ (x,y) \;|\; g_C(x,y)\le 0\} \subseteq \mathbb R ^{p+q}\) is also a convex relaxation of the feasible region of (1). We also denote by \(g_N\) the constraints indexed by \(N\) and let \(\fancyscript{N} = \{ (x,y) \;|\; g_N(x,y) \le 0 \}\). We remark that deciding whether \(\fancyscript{N}\) is empty involves the solution of a nonconvex NLP and is therefore a hard problem. This hardness, by inclusion, extends to the continuous relaxation of the feasible region \(\fancyscript{P} = \fancyscript{C} \cap \fancyscript{N} \cap \fancyscript{X}\). Now, let \(\fancyscript{Z} = \{ (x,y) \;|\; x \in \mathbb Z ^p \}\), so that \(\fancyscript{I} = \fancyscript{C} \cap \fancyscript{X} \cap \fancyscript{Z}\) is the relaxation of the feasible region involving all the convex and integrality constraints of (1). Deciding emptiness of \(\fancyscript{I}\) involves solving a convex MINLP and is therefore also hard, but for different reasons than \(\fancyscript{P}\). More specifically, solving nonconvex NLPs globally requires solving nonconvex NLPs locally as a sub-step, whereas solving convex MINLPs involves the solution of convex NLPs (globally) as a sub-step.
The numerical difficulties linked to these two tasks are very different, in particular with respect to the reliability of finding the solution: with nonconvex NLPs, for example, Sequential Quadratic Programming (SQP) algorithms might yield an infeasible linearization step even though the original problem is feasible. It therefore makes sense to decompose \(\fancyscript{F} = \fancyscript{I} \cap \fancyscript{P}\), the feasible region of (1), into its two components \(\fancyscript{I}\) and \(\fancyscript{P}\), in order to address each of the difficulties separately.

Thus, by taking e.g. \(\fancyscript{A} = \fancyscript{P}\) and \(\fancyscript{B} = \fancyscript{I}\) one can fit the FP approach under the generic SPM framework. Note that with this choice the (nonlinear) convex constraints \(g_C\) are included in the definition of both \(\fancyscript{P}\) and \(\fancyscript{I}\) (although they can possibly be outer-approximated in the latter, as discussed below). This makes sense since \(\fancyscript{C}\) represents, in this context, an “easy” part of (1): adding it to either set of constraints does not fundamentally change the difficulty of the corresponding problems, while clearly helping to convey as much information of \(\fancyscript{F}\) as possible. Yet, other decompositions could make sense as well. For instance, one may alternatively set \(\fancyscript{B} = \fancyscript{X} \cap \fancyscript{Z}\) in order to keep \(P_2\) a linear problem without having to resort to outer approximation techniques (assuming that \(C\) only contains the *nonlinear* convex constraints, with all the linear ones represented in \(\fancyscript{X}\)). Alternatively, \(\fancyscript{B} = \fancyscript{Z}\) could also make sense, since then \(P_2\) actually simplifies to a simple rounding operation (the choice of the original FP [20], cf. §4.2). Thus, different variants of FP for (1) can be devised which can all be interpreted as special cases of SPM for proper choices of \(\fancyscript{A}\) and \(\fancyscript{B}\). Therefore, in the following we will keep the “abstract” notation with the generic sets \(\fancyscript{A}\) and \(\fancyscript{B}\) whenever we discuss general properties of the approach which do not depend on specific choices within the FP application.

*(over/under)relaxation* parameters and \(\lambda ^i_h \ge 0\) are the *weights* for \(h = 0,1\), with \(\lambda ^i_0 + \lambda ^i_1 = 1\); one then speaks of *weighted simultaneous projection* and *relaxation*. We mention in passing that these algorithms bear more than a casual resemblance with *subgradient methods* [18], as discussed in [5, §7]. The scheme (3)–(4) clearly corresponds to \(\alpha ^i_h = 1\) (“unrelaxed”) and \(\lambda ^i_{(i \mod 2)} = 1\) (“cyclic control”), so that only one of the two projections actually needs to be computed at any iteration (\(z^i = v^{2i - 1}\) and \(w^i = v^{2i}\)). While simultaneous projection is unlikely to be attractive in the FP setting, relaxation is known to improve the practical performance of SPMs in some cases, and it could be considered.

*block Gauss-Seidel* approaches applied to the minimization of a block-structured objective function \(Q(z,w)\). These approaches, based on the same idea of iteratively minimizing over one block of variables at a time, can be shown to be (locally) convergent under much less stringent conditions than convexity, especially in the two-block case of interest here. Different convergence results, under different assumptions, can be found e.g. in [23, 34] for the even more general setting where the objective function is *regular*.

*stabilized* version

The algorithm alternates between solving the nonconvex NLP (8) and the convex MINLP (9). In order to retain the local convergence property, both problems would need to be solved exactly: a difficult task in both cases.

Since (9) is a mixed-integer program, it would be very attractive to use the efficient MILP solvers available to tackle it. However, in order to do that one would, as the very first step, need to substitute the Euclidean norms with “linear” ones (\(L_1\), \(L_{\infty }\)).
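The standard reformulations are worth recalling: with a few auxiliary continuous variables, both linear norms admit exact LP models. The following is a textbook derivation, stated here for the distance \(\Vert \bar{x} - x\Vert \) used in the subproblems.

```latex
% L1 norm: one auxiliary variable t_j per component
\min \textstyle\sum_{j=1}^{p} t_j
\quad \text{s.t.} \quad -t_j \le \bar{x}_j - x_j \le t_j , \qquad j = 1, \ldots, p

% L-infinity norm: a single auxiliary variable t
\min \; t
\quad \text{s.t.} \quad -t \le \bar{x}_j - x_j \le t , \qquad j = 1, \ldots, p
```

At an optimal solution each \(t_j\) (respectively \(t\)) equals \(|\bar{x}_j - x_j|\) (respectively \(\max_j |\bar{x}_j - x_j|\)), so the linear model computes the norm exactly.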

In the standard FP approach [8, 20] the distance is actually only measured on the integer (\(x\)) variables, as opposed to the full pair \((x, y)\).

## 3 Using different norms


*Example 1*

Hence, the modification (12) of the FP still guarantees convergence of the \(\delta _i\) sequence, and therefore (at least for \(\beta < 1\)) ensures that no cycling can occur. Convergence may occur to a local minimum when using “nonsmooth” norms such as \(L_1\) and \(L_{\infty }\) even if \(\fancyscript{A}\) and \(\fancyscript{B}\) were convex, but this is not a major issue since the sets are nonconvex, and therefore there is no guarantee of convergence to a global minimum anyway. Other mechanisms in the algorithm (cf. §4.2) are designed to take care of this.

### 3.1 Partial norms

## 4 Approximate solution of the subproblems

The convergence theory for SPMs would require solving (8) and (9) to global optimality. As already remarked, this is extremely challenging and not very likely to be effective in the context of what, overall, remains a heuristic method, which at any rate does not provide any theoretical guarantee of success. Furthermore, even if the subproblems were actually solved to global optimality, several variants of the FP approach, most notably those employing two different norms, would still not entirely fit into the theoretical framework for which convergence proofs are readily available. This frees us to consider several different options and strategies to solve both (8) and (9), as discussed in this section, which give rise to “a storm” of many different configurations that we extensively tested computationally. The results are reported in Sect. 5, either in detail for the most successful algorithms or in summary for the unsuccessful ones.

### 4.1 Addressing the nonconvex NLP (8)

- 1.
a simple stochastic multi-start approach [33] in which the NLP solver is provided with different randomly generated starting points in order to try to escape from possible local minima;

- 2.

- i.
we forget about such a difference in norms and we hope for the best;

- ii.
we amend (8) by the norm constraint (16), and solve it as usual. We remark here that preliminary computational experiments have shown that the value of \(\beta \) does not strongly influence the results, thus we used \(\beta = 1\) in the computational results of Sect. 5.

- I.
solution algorithm: multi-start (1. above) versus VNS (2. above),

- II.
additional fixing step: NO (a. above) versus YES (b. above), and

- III.
norm correction: NO (i. above) versus YES (ii. above).
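The multi-start strategy of option 1. above can be sketched as follows. The local solver below is a plain gradient descent on an illustrative one-dimensional nonconvex function; it merely stands in for the NLP solver applied to (8), and all names and parameters are assumptions of the sketch.

```python
import random

def multistart(local_solve, sample_start, n_starts=20, seed=0):
    """Stochastic multi-start: run a local solver from several random
    starting points and keep the best local optimum found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        point, value = local_solve(sample_start(rng))
        if best is None or value < best[1]:
            best = (point, value)
    return best

# Illustrative local solver: gradient descent on f(x) = (x^2 - 1)^2,
# whose local minima are x = -1 and x = 1 (f = 0); x = 0 is a stationary
# point (f = 1) that a single badly placed start could get stuck on.
def local_solve(x0, steps=200, lr=0.05):
    x = x0
    for _ in range(steps):
        x -= lr * 4.0 * x * (x * x - 1.0)   # gradient of (x^2 - 1)^2
    return x, (x * x - 1.0) ** 2
```

Calling `multistart(local_solve, lambda rng: rng.uniform(-2.0, 2.0))` returns a point with objective value near 0, i.e., one of the two local (here also global) minima.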

### 4.2 Addressing the convex MINLP (9)

- 1.
Of course, the most trivial option is to keep the Euclidean norm, so that (9) remains a convex MINLP.

- 2.
As discussed in Sect. 3, the main alternative is to employ either the \(L_1\) or the \(L_\infty \) norm in the objective function, so that it can be linearly reformulated in standard ways (via the introduction of a few auxiliary continuous variables). The aim is to replace (9) with a MILP relaxation, because MILP solution technology is currently more advanced than its convex MINLP equivalent. This, however, requires the constraints to be linearized as well, which can be done by means of standard *Outer Approximation* approaches. That is, assuming \(C\) contains only *nonlinear* convex constraints (the linear ones being left in \(\fancyscript{X}\)), one can approximately solve (9) at the generic iteration \(i \in \mathbb N \) by means of its MILP relaxation
$$\begin{aligned}&\min \Vert \bar{x}^i-x\Vert _B&\end{aligned}$$
(17)
$$\begin{aligned}&g_\ell (\bar{x}^k,\bar{y}^k)+ \nabla g_\ell (\bar{x}^k,\bar{y}^k) \left(\begin{array}{c} x-\bar{x}^k \\ y-\bar{y}^k \end{array}\right) \le 0&\quad \ell \in \bar{C}^k, \; k \le i \end{aligned}$$
(18)
$$\begin{aligned}&(x,y) \in \fancyscript{X},\;\; x \in \mathbb Z ^p&\end{aligned}$$
(19)
where the norm \(B\) in (17) can be either \(L_1\) or \(L_\infty \) and \(\bar{C}^k \subseteq C\) is the set of convex nonlinear constraints that are active at \((\bar{x}^k,\bar{y}^k)\). In other words, one keeps collecting the classical *Outer Approximation cuts* (18) [19] along the iterations and uses them to define a polyhedral outer approximation of \(\fancyscript{I}\). Note that while (18) could seem to require that each \(g_\ell \) for \(\ell \in C\) be a differentiable function, this is only assumed for the sake of notational simplicity: notoriously, subgradients of nondifferentiable convex functions can be used as well (e.g. [17]).

- a.
If the Euclidean norm is used in (9), then we investigate three options:

- 1.
we solve the convex MINLP as is by means of a sophisticated general-purpose MINLP solver, in our case the Bonmin solver [10];

- 2.
we solve a convex mixed-integer quadratic problem (MIQP) relaxation of the MINLP. Precisely, the MIQP is obtained by using the objective function \(\min \Vert \bar{x}^i-x\Vert _2\) instead of (17), but with the same set of (linear) constraints (18)–(19). This is done to simplify the problem and to be able to use a sophisticated general-purpose MIQP solver, in our case CPLEX [25];

- 3.
we remove all constraints (18)–(19), only keeping \(x\in \mathbb Z ^p\) and the bound constraints, and solve (9) by rounding. This is in the spirit of both [20] and [9].

- b.
If instead the \(L_1\)/\(L_\infty \) norm is used and the MILP relaxation (17)–(19) is defined, we solve the MILP as is by means of a sophisticated general-purpose MILP solver, in our case CPLEX [25].

In the nonconvex case, however, OA cuts are not enough, as discussed in Example 2. In addition, in the testbed we used to computationally test our approach, the number of OA cuts we could generate is somewhat limited, as discussed in detail in Sect. 5.1.

*Example 2*

In Fig. 5 a nonconvex feasible region and its current linear approximation are depicted. Let \(\bar{x}\) be the current solution of subproblem (8). In this case, only one OA cut can be generated, i.e., the one corresponding to the convex constraint \(\gamma \). However, it does not cut off \(\hat{x}\), i.e., the solution of (9) at the previous iteration. In this example, the FP would not immediately cycle, because \(\hat{x}\) is not the solution of (9) which is closest to \(\bar{x}\). This shows that there is a distinction between cutting off and cycling. In general, however, failure to cut off previously visited integer solutions might lead to cycling, as shown in Fig. 6. \(\square \)

*no-good cuts* at iteration \(i\) to make \((\hat{x}^k, \hat{y}^k)\) infeasible for all \(k < i\). This is possible if (as happens in some of the variants) any of the minimum distance problems is solved (even if only approximately) with an *exact* approach, which not only provides good feasible solutions, but also a *lower bound* on the optimal value of the problem, giving a guarantee of accuracy. Indeed, if the solution method proves that inequality (20) holds, then the *nonlinear and nonconvex “cut”* (20) can be added to \(\fancyscript{B}\) without changing the feasible set of the problem. The interesting part is that, of course, \(\hat{x}^i\) *violates* (20), and therefore (20) provides, at least in theory, a convenient globalization mechanism.

- i.
We employ a tabu list in order to prevent a MILP solver from finding the same solutions \((\hat{x},\hat{y})\) at different iterations.

- ii.
We configure our solver to find a pool of solutions from which we choose the best non-forbidden one.

- I.
the norm to be used in the formulation of (9): \(L_2\) (1. above) versus \(L_1\)/\(L_\infty \) (2. above),

- II.
how to define the feasible region of (9) and solve it: MINLP (1 above) versus MIQP (2 above) versus rounding (3 above) or MILP (b above), and

- III.
how to avoid cycling: tabu list (i. above) versus solution pool (ii. above).

## 5 Computational results

In this section we discuss the outcome of our extensive computational investigation.

### 5.1 Computational setting

The algorithms were implemented within the AMPL environment [22]. We chose this framework to make it easy to change subsolvers: in practice, the user can select the preferred solver for NLPs, MINLPs, MIQPs or MILPs, exploiting their respective advantages.

- Model analysis: getting information about the nonlinearity and convexity of the constraints and the integrality requirements of the variables, so as to define subproblems (8) and (9).

- Solution feasibility analysis: necessary to verify the feasibility of the provided solutions.

- OA cut generation: necessary to update (9).

In order to determine whether a constraint is convex, ROSE performs a recursive analysis of its expression tree [26] to determine whether it is an affine combination of convex functions. We call such a function “evidently convex” [28]. Evident convexity is a stricter notion than convexity: evidently convex functions are convex, but the converse may not hold. Thus, it might happen that a convex constraint is labeled nonconvex; the information provided is in any case safe for our purposes, i.e., we generate OA cuts only from constraints which are certified to be convex. Unfortunately, the number of problems in the testbed (see next section) in which we are able to generate OA cuts is limited, around 15% of them, surely because of such a conservative (but *safe*) policy adopted by ROSE.
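To illustrate the flavor of such a recursive test, consider the following sketch on a toy expression tree. The node types and composition rules here are an illustrative reconstruction, not ROSE's actual implementation; the point is that convexity is certified only when it follows from simple rules, so the answer is safe but conservative.

```python
def affine(node):
    """True if the toy expression tree is evidently affine."""
    kind = node[0]
    if kind in ('const', 'var'):
        return True
    if kind == 'sum':
        return all(affine(c) for c in node[1])
    if kind == 'scale':
        return affine(node[2])          # any scaling preserves affinity
    return False

def evidently_convex(node):
    """Certify convexity by recursive composition rules.
    Nodes: ('const', c), ('var', name), ('square', child),
    ('sum', [children]), ('scale', coeff, child).
    A False answer does NOT prove nonconvexity (cf. the safe policy)."""
    kind = node[0]
    if kind in ('const', 'var'):
        return True                     # affine atoms are convex
    if kind == 'square':
        return affine(node[1])          # (affine)^2 is convex
    if kind == 'sum':
        return all(evidently_convex(c) for c in node[1])
    if kind == 'scale':
        coeff, child = node[1], node[2]
        if affine(child):
            return True                 # scaled affine stays affine, hence convex
        return coeff >= 0 and evidently_convex(child)
    return False
```

For example, \(x^2\) and \(x^2 + 2y\) are certified, while \(-x^2\) is correctly rejected; \(x^4\) written as \((x^2)^2\) is convex but not *evidently* so under these rules, mirroring the strict-inclusion remark above.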

### 5.2 FP variants and preliminary results

Because of the multiple options which can be put in place to solve both (8) and (9), we implemented and tested more than twenty FP versions/variants to assess the effectiveness of each of the algorithmic decisions discussed in the two previous sections. Some of these options were ruled out after a preliminary set of experiments involving 243 MINLP instances from MINLPLib [12], used, among others, in [16, 30]. Of these 243 instances, 65 are those on which the open-source Global Optimization solver COUENNE 0.1 [6] (available from COIN-OR [14]) is *unable* to find a feasible solution within a time limit of 5 min on an Intel Xeon 2.4 GHz with 8 GB RAM running Linux.

Thus, the goal of the preliminary set of computational experiments was twofold. On the one hand, we wanted to be quick and competitive on the “easy” instances, i.e., the 178 instances on which COUENNE is able to find a solution within 5 min of CPU time. This is because FP can clearly be used as a stand-alone heuristic algorithm for nonconvex MINLP, and must be competitive with a general-purpose solver likewise used as a heuristic, i.e., truncated within a short time limit. That was achieved by the “best” FP versions/variants that will be discussed in the remainder of the section. To give an example, the version denoted as FP-1 (see Sect. 5.4) finds a feasible solution for 156 of the 178 “easy” instances within 5 min, encounters numerical troubles in 13 of them (because of the NLP solver) and requires more than 5 min in the remaining 9 instances. Because COUENNE 0.1 (like most GO solvers) mainly implemented simple heuristics based on reformulations and linearizations, it would have been relatively easy to recover those 9 instances with longer computing times by ad-hoc policies. On the other hand, however, we wanted to be effective (possibly within longer computing times) on the 65 “hard” instances where simple heuristics and partial enumeration failed. In particular, FP should be effective on the instances in which the nonlinear aspects of the problems play a crucial role, thus suggesting its fruitful integration within COUENNE or any other GO solver (as happened for FP algorithms in MILP). Indeed, the current trunk version of COUENNE is more sophisticated in terms of heuristics also thanks to our investigation, preliminarily reported in [15, 16], and some results at the end of Sect. 5.4 seem promising in this respect.

The options that were ruled out are those that, at the same time, did not perform particularly well on the “easy” instances and did not add anything special on the “hard” ones. Namely,

- 1.
Solving (8) by VNS was always inferior to solving it by the stochastic multi-start approach. Such poor performance of the VNS approach might be due to its iterative implementation within AMPL: at each iteration, a different search space is defined, starting from a small one and enlarging it so that at the last iteration the entire feasible region is considered. In particular, this approach seems to be too “conservative” with respect to the previous solution.

- 2.
The additional fixing step, which can be performed in case of failure when solving (8) by fixing the integer variables, has a slight positive effect when the norm constraint is added, but turns out to be crucial in case it is not. In a sense, the theoretical convergence guaranteed by the use of norm constraints seems to make problems (8) easier; thus the benefit of the fixing step is particularly high if such constraints are not added. We then decided to always include the fixing step as well.

- 3.
In case the Euclidean norm is kept in problem (9), we decided to solve the convex MIQP instead of the convex MINLP. The main reason (besides some technical issues related to modifying a convex MINLP solver like Bonmin to implement mechanisms to prevent cycling) is that the number of evidently convex constraints discovered by ROSE is very limited in the testbed. Thus, if the constraints in (9) are linear, the MIQP solver of CPLEX is clearly more efficient than a fully general convex MINLP solver like Bonmin.

- 4.
Preventing cycling by using a pool of solutions was always inferior to using the tabu list. Again, this might be due to the lack of flexibility of the (nice) solution pool feature of CPLEX 11 that we used in our experiments. Every time we need to solve (9), we ask CPLEX to produce a number of solutions equal to the number of tabu list solutions plus one. Once the solution pool is obtained, we analyze the solutions starting from the first and set \((\hat{x}^{i},\hat{y}^{i})\) to the first solution of the pool which is not present in the tabu list. However, we have to consider the two following drawbacks: (i) the solution pool is populated after the branch and bound is finished; because we have a time limit for solving (9), it is not guaranteed that we would have a number of solutions sufficient to provide a non-forbidden solution (especially because populating a solution pool is a time-consuming feature); (ii) we cannot force CPLEX to measure the diversity of the solutions in the pool by neglecting the continuous part of the problem. Unfortunately, CPLEX can provide us a set of solutions which have the same integer values but different continuous values. More generally, it might happen that only forbidden solutions are generated, for example if the continuous relaxation of (9) is integer feasible but forbidden. In this case the solution would be discarded, but no further solution can be generated.

*Implementing a tabu list in CPLEX.* Discarding a solution in the tabu list within the CPLEX branch and bound is possible using the incumbent callback function. The tabu list is stored in a text file which is then exchanged between AMPL and CPLEX. Every time CPLEX finds an integer feasible solution, a specialized incumbent callback function checks whether the new solution appears in the tabu list. If this is the case, the solution is rejected, otherwise the solution is accepted. CPLEX continues executing until either the optimal solution (excluding those forbidden) is found or a time limit is reached. In the case where an integer solution found by CPLEX at the root node appears in the tabu list, CPLEX stops and no new integer feasible solution is provided to FP^{4}. In such a case, we amend problem (9) with a no-good cut [17] which excludes the solution and we call CPLEX again.
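The check performed inside the callback can be sketched as follows. This is a simplified stand-alone reconstruction for illustration only: the actual implementation exchanges the tabu list between AMPL and CPLEX via a text file and runs inside the incumbent callback.

```python
def in_tabu_list(x_int, tabu, tol=1e-6):
    """True if the integer part of a candidate matches some stored
    solution componentwise within the integrality tolerance."""
    return any(all(abs(a - b) <= tol for a, b in zip(x_int, entry))
               for entry in tabu)

def accept_or_reject(x_int, tabu):
    """Mimic the incumbent-callback logic: reject solutions found in the
    tabu list, accept (and record) all others."""
    if in_tabu_list(x_int, tabu):
        return False            # rejected: already visited
    tabu.append(list(x_int))
    return True                 # accepted as new incumbent
```

A solution is thus rejected exactly when its integer part coincides with a previously visited one, which is what prevents (9) from returning the same point twice.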

*Avoid cycling when solving* (9) *by rounding.* When the MILP relaxation of (9) is solved by rounding the fractional values of the vector \(\bar{x}\) to the nearest integers, the methods for preventing cycling cannot be implemented in the way described above. The method adopted is taken from the original FP paper [20]: whenever a forbidden solution is found, the algorithm randomly flips some of the integer values so as to obtain a new solution.
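A minimal sketch of this rounding-with-restarts step follows. The flip rule here (a random subset of components flipped by \(\pm 1\)) is an illustrative simplification of the original FP's rule of flipping the most fractional components.

```python
import random

def round_with_flips(x_frac, tabu, max_tries=50, seed=0):
    """Round to the nearest integers; while the resulting point is
    forbidden (present in the tabu list), randomly flip some
    components by +/-1 and retry."""
    rng = random.Random(seed)
    x = [round(v) for v in x_frac]
    for _ in range(max_tries):
        if x not in tabu:
            return x                               # non-forbidden point found
        k = rng.randint(1, len(x))                 # how many components to flip
        for j in rng.sample(range(len(x)), k):
            x[j] += rng.choice((-1, 1))
    return None                                    # give up after max_tries
```

For instance, with `tabu = [[0, 2]]` the fractional point `[0.4, 1.6]` first rounds to the forbidden `[0, 2]`, and the flips then produce a nearby non-forbidden integer point.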

### 5.3 Code tuning

The algorithm terminates after the first MINLP feasible solution is found or a time limit is reached. The parameters are set as follows: a time limit of 2 h of user CPU time, an absolute feasibility tolerance of 1e-6 to evaluate constraints, and a relative feasibility tolerance of 1e-3 (used if the absolute feasibility test fails). The tabu list length was set adaptively, inversely proportional to the number of integer variables of the instance (i.e., the number of values to be stored for each solution in the tabu list): namely, 60,000 divided by the number of integer variables. The actual mean number of solutions stored in the tabu list, over the full set of 243 instances, was 35.

### 5.4 Results

The *six* surviving FP variants have been extensively tested on the full set of 243 MINLP instances; in particular, we discuss the results on the 65 “hard” instances introduced in Sect. 5.2. More precisely, the six variants have the characteristics reported in Table 1.

Table 1: FP variants

| Variant | (8) Algorithm | (8) Fixing step | (8) Norm constraint | (9) Norm | (9) Algorithm | (9) Cycling |
|---|---|---|---|---|---|---|
| FP-1 | multi-start | YES | YES | \(L_1\) | MILP | tabu list |
| FP-2 | multi-start | YES | NO | \(L_1\) | MILP | tabu list |
| FP-3 | multi-start | YES | YES | \(L_2\) | Rounding | tabu list |
| FP-4 | multi-start | YES | N/A | \(L_2\) | MIQP | tabu list |
| FP-5 | multi-start | YES | YES | \(L_{\infty }\) | MILP | tabu list |
| FP-6 | multi-start | YES | NO | \(L_{\infty }\) | MILP | tabu list |

Comparing the *six* FP variants, aggregated results

| | FP-1 | FP-2 | FP-3 | FP-4 | FP-5 | FP-6 |
|---|---|---|---|---|---|---|
| Successes | 49 | 45 | 22 | 23 | 44 | 46 |
| Successes alone | 0 | 3 | 0 | 0 | 1 | 1 |
| Time limit reached | 11 | 14 | 42 | 32 | 2 | 12 |
| Fails | 5 | 6 | 1 | 9 | 19 | 7 |
| Wins | 26 | 20 | 10 | 4 | 10 | 8 |
| Time geomean | 151.02 | 104.45 | 17.59 | 76.14 | 23.25 | 14.99 |

**Table 3** Comparing FP variants FP-1, FP-2, FP-3 and FP-4, detailed results (“–”: time limit reached without finding a feasible solution; “++”: run failed, e.g., for numerical problems)

| Instance | FP-1 value | FP-1 time (s) | FP-1 it.s | FP-2 value | FP-2 time (s) | FP-2 it.s | FP-3 value | FP-3 time (s) | FP-3 it.s | FP-4 value | FP-4 time (s) | FP-4 it.s |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| beuster | ++ | ++ | ++ | ++ | ++ | ++ | – | 7,200 | 28 | ++ | ++ | ++ |
| csched2a | \(-\)102,002.02 | 5 | 2 | \(-\)102,867.73 | 4 | 2 | ++ | ++ | ++ | \(-\)112,174.73 | 624 | 2 |
| csched2 | \(-\)120,042.73 | 138 | 2 | \(-\)120,066.02 | 241 | 2 | – | 7,200 | 41 | ++ | ++ | ++ |
| deb10 | ++ | ++ | ++ | ++ | ++ | ++ | 223.29 | 25 | 14 | ++ | ++ | ++ |
| deb6 | 234.78 | 197 | 29 | 237.11 | 4 | 4 | 290.90 | 7 | 2 | 235.81 | 9 | 4 |
| deb7 | 411.00 | 139 | 4 | 345.76 | 10 | 3 | 419.78 | 218 | 8 | 451.05 | 13 | 2 |
| deb8 | 8,453,005,065.59 | 23 | 2 | 185,839,836.37 | 2 | 1 | 8,453,005,005.71 | 30 | 1 | 416,332.32 | 3 | 1 |
| deb9 | 444.67 | 33 | 2 | 425.34 | 16 | 4 | 444.67 | 39 | 1 | 438.39 | 59 | 2 |
| detf1 | 11,497.56 | 368 | 2 | 15,976.03 | 131 | 2 | 8,455.75 | 961 | 1 | 15,976.03 | 731 | 2 |
| eg_all_s | 223.14 | 27 | 3 | 100,003.77 | 52 | 5 | 94,165.69 | 10 | 1 | ++ | ++ | ++ |
| eg_disc2_s | 65,822.96 | 7 | 1 | 100,004.34 | 5 | 1 | 65,822.96 | 7 | 1 | 100,004.34 | 5 | 1 |
| eg_disc_s | 94,165.42 | 8 | 1 | 100,003.69 | 7 | 1 | 94,165.42 | 8 | 1 | 100,003.69 | 7 | 1 |
| eg_int_s | 94,167.12 | 10 | 1 | 100,005.46 | 7 | 1 | 94,167.12 | 10 | 1 | 100,005.46 | 7 | 1 |
| fo8_ar25_1 | 994,207.06 | 185 | 124 | – | 7,200 | 6 | – | 7,200 | 3,211 | – | 7,200 | 2 |
| fo8_ar3_1 | 994,235.33 | 784 | 367 | – | 7,200 | 6 | – | 7,200 | 3,210 | – | 7,200 | 2 |
| fo8 | 894,678.42 | 9 | 8 | 1,400,000.00 | 1,543 | 533 | – | 7,200 | 3,110 | 1,400,000.00 | 860 | 444 |
| fo9_ar2_1 | 1,136,279.49 | 1,286 | 167 | – | 7,200 | 8 | – | 7,200 | 2,619 | – | 7,200 | 2 |
| fo9_ar25_1 | 1,136,997.73 | 635 | 97 | – | 7,200 | 14 | – | 7,200 | 2,620 | – | 7,200 | 2 |
| fo9_ar4_1 | 9,959.68 | 202 | 68 | 1,599,990.28 | 4,212 | 699 | – | 7,200 | 2,616 | – | 7,200 | 2 |
| fo9_ar5_1 | 1,428,148.20 | 17 | 2 | 1,599,993.97 | 14 | 2 | – | 7,200 | 2,610 | – | 7,200 | 2 |
| fo9 | 1,006,964.21 | 61 | 32 | 1,600,000.00 | 221 | 153 | – | 7,200 | 2,552 | 1,600,000.00 | 1,387 | 657 |
| johnall | \(-\)201.15 | 615 | 2 | \(-\)201.29 | 614 | 2 | \(-\)201.16 | 2 | 2 | \(-\)221.92 | 618 | 2 |
| lop97ic | – | 7,200 | 9 | – | 7,200 | 120 | – | 7,200 | 94 | – | 7,200 | 13 |
| mbtd | 91.33 | 4,266 | 2 | 98.53 | 4,045 | 2 | – | 7,200 | 3 | 1,000,005.67 | 5,834 | 2 |
| nuclear104 | – | 7,200 | 2 | ++ | ++ | ++ | – | 7,200 | 2 | ++ | ++ | ++ |
| nuclear10a | – | 7,200 | 2 | – | 7,200 | 5 | – | 7,200 | 3 | – | 7,200 | 4 |
| nuclear10b | – | 7,200 | 2 | – | 7,200 | 4 | – | 7,200 | 2 | – | 7,200 | 3 |
| nuclear14a | \(-\)1.09 | 1,602 | 3 | \(-\)1.11 | 2,641 | 173 | – | 7,200 | 38 | – | 7,200 | 14 |
| nuclear14b | \(-\)1.10 | 646 | 2 | \(-\)1.10 | 686 | 7 | – | 7,200 | 34 | \(-\)1.09 | 1,922 | 81 |
| nuclear14 | \(-\)1.12 | 1,645 | 3 | \(-\)1.12 | 847 | 15 | – | 7,200 | 107 | – | 7,200 | 14 |
| nuclear24a | \(-\)1.09 | 1,602 | 3 | \(-\)1.11 | 2,730 | 173 | – | 7,200 | 38 | – | 7,200 | 14 |
| nuclear24b | \(-\)1.05 | 2,626 | 4 | \(-\)1.09 | 1,584 | 59 | – | 7,200 | 35 | \(-\)1.09 | 1,959 | 81 |
| nuclear24 | \(-\)1.12 | 1,655 | 3 | \(-\)1.12 | 1,649 | 51 | – | 7,200 | 101 | – | 7,200 | 14 |
| nuclear25a | \(-\)1.06 | 6,501 | 8 | – | 7,200 | 372 | – | 7,200 | 39 | – | 7,200 | 14 |
| nuclear25b | \(-\)0.99 | 1,666 | 3 | \(-\)1.05 | 707 | 8 | – | 7,200 | 33 | – | 7,200 | 19 |
| nuclear49a | – | 7,200 | 8 | \(-\)1.11 | 6,266 | 67 | – | 7,200 | 15 | – | 7,200 | 12 |
| nuclear49b | \(-\)1.06 | 4,367 | 4 | \(-\)1.13 | 4,980 | 27 | – | 7,200 | 8 | – | 7,200 | 34 |
| nuclear49 | \(-\)1.14 | 1,165 | 2 | – | 7,200 | 13 | – | 7,200 | 5 | – | 7,200 | 11 |
| nuclearva | \(-\)1.01 | 133 | 2 | \(-\)1.01 | 244 | 2 | \(-\)1.01 | 496 | 35 | – | 7,200 | 71 |
| nuclearvb | \(-\)1.03 | 614 | 2 | \(-\)1.02 | 710 | 3 | \(-\)1.01 | 1,107 | 181 | \(-\)1.02 | 613 | 2 |
| nuclearvc | \(-\)1.00 | 2,064 | 6 | \(-\)0.99 | 110 | 4 | \(-\)0.99 | 1,702 | 149 | – | 7,200 | 51 |
| nvs08 | 24,116.94 | 0 | 1 | 24,119.23 | 1 | 1 | 24,116.94 | 0 | 1 | 24,119.23 | 0 | 1 |
| nvs20 | 146,475,177.22 | 0 | 1 | 138,691,481.67 | 0 | 1 | 146,475,177.22 | 0 | 1 | 138,691,481.67 | 0 | 1 |
| nvs24 | \(-\)342.20 | 1 | 4 | \(-\)536.20 | 1 | 4 | \(-\)517.80 | 0 | 2 | – | 7,200 | 2 |
| o8_ar4_1 | 5,822,973.45 | 22 | 10 | 8,199,969.73 | 736 | 236 | – | 7,200 | 3,214 | – | 7,200 | 2 |
| o9_ar4_1 | 6,877,522.82 | 198 | 59 | 8,199,964.62 | 6,206 | 698 | – | 7,200 | 2,611 | – | 7,200 | 2 |
| qapw | 468,078.00 | 372 | 2 | 460,118.00 | 637 | 2 | – | 7,200 | 21 | 464,259.68 | 684 | 2 |
| saa_2 | 11,497.56 | 377 | 2 | 15,976.03 | 252 | 2 | 8,455.75 | 978 | 2 | 15,976.03 | 721 | 2 |
| space25a | 661.97 | 376 | 3 | 650.69 | 245 | 18 | – | 7,200 | 124 | 1,124.32 | 612 | 2 |
| space25 | 661.97 | 413 | 3 | 650.69 | 773 | 18 | – | 7,200 | 45 | 1,124.38 | 619 | 2 |
| space960 | 24,070,000.00 | 3,629 | 7 | – | 7,200 | 7 | – | 7,200 | 8 | – | 7,200 | 8 |
| super1 | ++ | ++ | ++ | ++ | ++ | ++ | – | 7,200 | 18 | – | 7,200 | 2 |
| super2 | ++ | ++ | ++ | ++ | ++ | ++ | – | 7,200 | 18 | – | 7,200 | 2 |
| super3 | ++ | ++ | ++ | ++ | ++ | ++ | – | 7,200 | 18 | – | 7,200 | 2 |
| super3t | – | 7,200 | 18 | – | 7,200 | 19 | – | 7,200 | 20 | ++ | ++ | ++ |
| tln12 | – | 7,200 | 14 | – | 7,200 | 10 | – | 7,200 | 2,403 | – | 7,200 | 2 |
| tls12 | – | 7,200 | 10 | – | 7,200 | 10 | – | 7,200 | 445 | – | 7,200 | 9 |
| tls2 | 5.30 | 720 | 6 | 5.30 | 1 | 5 | – | 7,200 | 6,569 | ++ | ++ | ++ |
| tls5 | – | 7,200 | 24 | 22.50 | 58 | 21 | – | 7,200 | 2,783 | – | 7,200 | 100 |
| tls6 | – | 7,200 | 9 | – | 7,200 | 26 | – | 7,200 | 2,226 | – | 7,200 | 17 |
| tls7 | – | 7,200 | 9 | 37.80 | 2,892 | 38 | – | 7,200 | 1,464 | – | 7,200 | 8 |
| uselinear | 1,951.37 | 188 | 1 | 227,751.06 | 47 | 1 | 1,951.37 | 187 | 1 | 1,951.37 | 48 | 1 |
| var_con10 | 475.36 | 449 | 54 | 463.17 | 12 | 5 | 562.62 | 29 | 4 | ++ | ++ | ++ |
| var_con5 | 397.21 | 110 | 16 | 315.16 | 6 | 3 | 434.66 | 21 | 3 | ++ | ++ | ++ |
| waste | 62,025.78 | 50 | 1 | 306,239.04 | 19 | 1 | 62,025.78 | 50 | 1 | 306,232.46 | 70 | 2 |

**Table 4** Comparing FP variants FP-1, FP-2, FP-5 and FP-6, detailed results (“–”: time limit reached without finding a feasible solution; “++”: run failed, e.g., for numerical problems)

| Instance | FP-1 value | FP-1 time (s) | FP-1 it.s | FP-2 value | FP-2 time (s) | FP-2 it.s | FP-5 value | FP-5 time (s) | FP-5 it.s | FP-6 value | FP-6 time (s) | FP-6 it.s |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| beuster | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ |
| csched2a | \(-\)102,002.02 | 5 | 2 | \(-\)102,867.73 | 4 | 2 | \(-\)102,932.87 | 1 | 2 | \(-\)102,932.87 | 0 | 2 |
| csched2 | \(-\)120,042.73 | 138 | 2 | \(-\)120,066.02 | 241 | 2 | \(-\)120,064.55 | 2 | 2 | \(-\)120,066.02 | 2 | 2 |
| deb10 | ++ | ++ | ++ | ++ | ++ | ++ | 331.39 | 1 | 2 | ++ | ++ | ++ |
| deb6 | 234.78 | 197 | 29 | 237.11 | 4 | 4 | 950.00 | 1 | 2 | 949.94 | 2 | 2 |
| deb7 | 411.00 | 139 | 4 | 345.76 | 10 | 3 | 447.77 | 9 | 2 | 449.38 | 7 | 2 |
| deb8 | 8,453,005,065.59 | 23 | 2 | 185,839,836.37 | 2 | 1 | 185,839,836.37 | 3 | 1 | 185,839,836.37 | 2 | 1 |
| deb9 | 444.67 | 33 | 2 | 425.34 | 16 | 4 | 443.97 | 8 | 2 | 434.33 | 7 | 2 |
| detf1 | 11,497.56 | 368 | 2 | 15,976.03 | 131 | 2 | 15,976.27 | 113 | 2 | 15,976.03 | 110 | 2 |
| eg_all_s | 223.14 | 27 | 3 | 100,003.77 | 52 | 5 | 15.45 | 406 | 4 | 100,003.00 | 85 | 7 |
| eg_disc2_s | 65,822.96 | 7 | 1 | 100,004.34 | 5 | 1 | 100,004.34 | 8 | 1 | 100,004.34 | 5 | 1 |
| eg_disc_s | 94,165.42 | 8 | 1 | 100,003.69 | 7 | 1 | 100,003.69 | 11 | 1 | 100,003.69 | 8 | 1 |
| eg_int_s | 94,167.12 | 10 | 1 | 100,005.46 | 7 | 1 | 100,005.46 | 12 | 1 | 100,005.46 | 8 | 1 |
| fo8_ar25_1 | 994,207.06 | 185 | 124 | – | 7,200 | 6 | 1,399,992.47 | 18 | 16 | 1,399,991.20 | 240 | 222 |
| fo8_ar3_1 | 994,235.33 | 784 | 367 | – | 7,200 | 6 | 1,333,314.45 | 10 | 9 | 1,399,992.80 | 2 | 4 |
| fo8 | 894,678.42 | 9 | 8 | 1,400,000.00 | 1,543 | 533 | 1,400,000.00 | 3 | 6 | 1,400,000.00 | 11 | 29 |
| fo9_ar2_1 | 1,136,279.49 | 1,286 | 167 | – | 7,200 | 8 | 1,599,990.04 | 1,245 | 58 | 1,599,990.04 | 1,896 | 642 |
| fo9_ar25_1 | 1,136,997.73 | 635 | 97 | – | 7,200 | 14 | 1,599,989.57 | 527 | 264 | 1,599,988.83 | 343 | 232 |
| fo9_ar4_1 | 9,959.68 | 202 | 68 | 1,599,990.28 | 4,212 | 699 | 55,204.13 | 1,267 | 227 | 1,599,989.86 | 20 | 10 |
| fo9_ar5_1 | 1,428,148.20 | 17 | 2 | 1,599,993.97 | 14 | 2 | 1,598,839.97 | 3 | 2 | 1,599,992.22 | 3 | 2 |
| fo9 | 1,006,964.21 | 61 | 32 | 1,600,000.00 | 221 | 153 | 1,600,000.00 | 13 | 16 | 1,600,000.00 | 47 | 76 |
| johnall | \(-\)201.15 | 615 | 2 | \(-\)201.29 | 614 | 2 | \(-\)201.15 | 1 | 2 | \(-\)201.15 | 1 | 2 |
| lop97ic | – | 7,200 | 9 | – | 7,200 | 120 | – | 7,501 | 20 | 5,401.15 | 3,652 | 97 |
| mbtd | 91.33 | 4,266 | 2 | 98.53 | 4,045 | 2 | 89.53 | 5,253 | 2 | 89.53 | 6,103 | 2 |
| nuclear104 | – | 7,200 | 2 | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ |
| nuclear10a | – | 7,200 | 2 | – | 7,200 | 5 | ++ | ++ | ++ | – | 9,078 | 4 |
| nuclear10b | – | 7,200 | 2 | – | 7,200 | 4 | \(-\)1.10 | 3,794 | 2 | – | 8,769 | 4 |
| nuclear14a | \(-\)1.09 | 1,602 | 3 | \(-\)1.11 | 2,641 | 173 | ++ | ++ | ++ | \(-\)1.10 | 142 | 19 |
| nuclear14b | \(-\)1.10 | 646 | 2 | \(-\)1.10 | 686 | 7 | \(-\)1.00 | 30 | 2 | \(-\)1.07 | 25 | 3 |
| nuclear14 | \(-\)1.12 | 1,645 | 3 | \(-\)1.12 | 847 | 15 | \(-\)1.12 | 11 | 2 | – | 7,202 | 251 |
| nuclear24a | \(-\)1.09 | 1,602 | 3 | \(-\)1.11 | 2,730 | 173 | ++ | ++ | ++ | \(-\)1.10 | 271 | 19 |
| nuclear24b | \(-\)1.05 | 2,626 | 4 | \(-\)1.09 | 1,584 | 59 | \(-\)1.00 | 28 | 2 | \(-\)1.07 | 45 | 3 |
| nuclear24 | \(-\)1.12 | 1,655 | 3 | \(-\)1.12 | 1,649 | 51 | \(-\)1.12 | 13 | 2 | – | 7,200 | 216 |
| nuclear25a | \(-\)1.06 | 6,501 | 8 | – | 7,200 | 372 | ++ | ++ | ++ | \(-\)1.05 | 214 | 16 |
| nuclear25b | \(-\)0.99 | 1,666 | 3 | \(-\)1.05 | 707 | 8 | \(-\)1.06 | 18 | 2 | \(-\)1.07 | 303 | 13 |
| nuclear49a | – | 7,200 | 8 | \(-\)1.11 | 6,266 | 67 | ++ | ++ | ++ | – | 7,334 | 18 |
| nuclear49b | \(-\)1.06 | 4,367 | 4 | \(-\)1.13 | 4,980 | 27 | \(-\)1.03 | 454 | 2 | – | 7,288 | 47 |
| nuclear49 | \(-\)1.14 | 1,165 | 2 | – | 7,200 | 13 | \(-\)1.14 | 439 | 2 | – | 7,492 | 30 |
| nuclearva | \(-\)1.01 | 133 | 2 | \(-\)1.01 | 244 | 2 | ++ | ++ | ++ | \(-\)1.01 | 5 | 4 |
| nuclearvb | \(-\)1.03 | 614 | 2 | \(-\)1.02 | 710 | 3 | \(-\)1.02 | 2 | 2 | \(-\)1.02 | 2 | 2 |
| nuclearvc | \(-\)1.00 | 2,064 | 6 | \(-\)0.99 | 110 | 4 | \(-\)0.99 | 2 | 2 | \(-\)0.99 | 3 | 3 |
| nvs08 | 24,116.94 | 0 | 1 | 24,119.23 | 1 | 1 | 24,119.23 | 0 | 1 | 24,119.23 | 0 | 1 |
| nvs20 | 146,475,177.22 | 0 | 1 | 138,691,481.67 | 0 | 1 | 138,691,481.67 | 0 | 1 | 138,691,481.67 | 0 | 1 |
| nvs24 | \(-\)342.20 | 1 | 4 | \(-\)536.20 | 1 | 4 | \(-\)506.60 | 360 | 4 | \(-\)413.80 | 0 | 3 |
| o8_ar4_1 | 5,822,973.45 | 22 | 10 | 8,199,969.73 | 736 | 236 | 8,171,278.66 | 947 | 305 | 8,199,970.33 | 85 | 108 |
| o9_ar4_1 | 6,877,522.82 | 198 | 59 | 8,199,964.62 | 6,206 | 698 | 43,754.16 | 1,800 | 285 | 8,199,964.17 | 20 | 10 |
| qapw | 468,078.00 | 372 | 2 | 460,118.00 | 637 | 2 | 459,102.00 | 611 | 2 | 459,102.00 | 610 | 2 |
| saa_2 | 11,497.56 | 377 | 2 | 15,976.03 | 252 | 2 | 15,976.27 | 117 | 2 | 15,976.03 | 116 | 2 |
| space25a | 661.97 | 376 | 3 | 650.69 | 245 | 18 | ++ | ++ | ++ | 671.09 | 5 | 5 |
| space25 | 661.97 | 413 | 3 | 650.69 | 773 | 18 | ++ | ++ | ++ | 671.09 | 17 | 4 |
| space960 | 24,070,000.00 | 3,629 | 7 | – | 7,200 | 7 | 24,070,000.00 | 1,513 | 7 | 24,070,000.00 | 2,558 | 9 |
| super1 | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ |
| super2 | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ |
| super3 | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ | ++ |
| super3t | – | 7,200 | 18 | – | 7,200 | 19 | ++ | ++ | ++ | ++ | ++ | ++ |
| tln12 | – | 7,200 | 14 | – | 7,200 | 10 | – | 7,215 | 1,114 | – | 7,203 | 1,034 |
| tls12 | – | 7,200 | 10 | – | 7,200 | 10 | ++ | ++ | ++ | – | 7,280 | 92 |
| tls2 | 5.30 | 720 | 6 | 5.30 | 1 | 5 | ++ | ++ | ++ | 15.30 | 3 | 11 |
| tls5 | – | 7,200 | 24 | 22.50 | 58 | 21 | ++ | ++ | ++ | – | 7,201 | 504 |
| tls6 | – | 7,200 | 9 | – | 7,200 | 26 | ++ | ++ | ++ | – | 7,211 | 497 |
| tls7 | – | 7,200 | 9 | 37.80 | 2,892 | 38 | ++ | ++ | ++ | – | 8,171 | 19 |
| uselinear | 1,951.37 | 188 | 1 | 227,751.06 | 47 | 1 | 227,751.06 | 58 | 1 | 227,751.06 | 96 | 1 |
| var_con10 | 475.36 | 449 | 54 | 463.17 | 12 | 5 | 589.80 | 4 | 2 | 589.43 | 4 | 2 |
| var_con5 | 397.21 | 110 | 16 | 315.16 | 6 | 3 | 532.55 | 3 | 2 | 532.50 | 5 | 2 |
| waste | 62,025.78 | 50 | 1 | 306,239.04 | 19 | 1 | 306,239.04 | 23 | 1 | 306,239.04 | 38 | 1 |

The results of Tables 2, 3 and 4 show that FP-1 is the most successful FP variant: remarkably, it finds a feasible solution in limited CPU time on 75 % of the “hard” instances in the testbed. A direct comparison with the closest variant, FP-2, shows that the norm constraint is useful: although FP-1 does not dominate FP-2, it is superior on virtually all aggregate entries, and on many instances FP-2 converges slowly whereas FP-1 reaches feasibility in a very small number of iterations.

Variant FP-3 is very fast but seems too “unsophisticated” for the more difficult instances of the “hard” testbed. However, it might be a viable option as a “cheap” FP variant executed extensively within a GO solver. Variant FP-4 does not look very competitive at the moment, although it is not fully dominated: it finds the smallest solution value four times, in one case (deb8) a much smaller one than the other variants. One relevant issue for FP-4 seems to be that the MIQP solved as problem (9) is time consuming, which allows only a limited number of FP iterations; this might change in the future, depending on the solver and its settings.

Finally, variants FP-5 and FP-6 are very close to FP-1 and FP-2, respectively, and indeed lead to similar results. Specifically, FP-5 runs into numerical trouble much more often than FP-1 (due to the NLP solves, see Sect. 3) and is inferior in terms of the quality of the solutions obtained (wins), but it is much faster. Variant FP-6 is almost equivalent, perhaps superior, to FP-2: the two main differences are the number of wins (20 for FP-2 versus 8 for FP-6) and the speed (a geometric mean of 104.45 CPU seconds for FP-2 versus 14.99 for FP-6). Overall, both FP-5 and FP-6 seem promising for further investigation.

Concerning the interaction of FP-1 with a GO solver such as COUENNE, note that in 14 cases FP-1 finds a feasible solution within 1 min of CPU time (in 24 cases within 5 min), which suggests that integrating it within the solver would be profitable.

## 6 Conclusion

We have presented the theoretical foundation of an abstract Feasibility Pump scheme interpreted as a Successive Projection Method in which, roughly speaking, the set of constraints of the original problem is split (possibly in different ways) into two sets, and the overall algorithm aims at deciding whether the intersection of the two sets is empty. This scheme has been specialized to nonconvex Mixed-Integer Nonlinear Programming problems, the hardest class of (deterministic) optimization problems.
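The successive-projection view admits a compact sketch: alternate between projecting onto the continuous feasible set (problem (8)) and onto the integrality set (problem (9)), stopping when the two projections coincide and maintaining a tabu list against cycling. The operator and parameter names below are ours, a toy illustration of the abstract scheme rather than the authors' implementation:

```python
def feasibility_pump(x0, project_onto_A, project_onto_B, dist,
                     tol=1e-6, max_iter=1000):
    """Abstract FP as a successive projection method between two sets.

    project_onto_A: projection onto the continuous relaxation (problem (8)),
                    e.g. a local NLP solve in the nonconvex case.
    project_onto_B: projection onto the integrality constraints (problem (9)),
                    e.g. rounding or a MILP; it receives the tabu list so it
                    can avoid previously visited integer points (anti-cycling).
    """
    tabu = []
    x_B = x0
    for _ in range(max_iter):
        x_A = project_onto_A(x_B)        # point feasible for the first set
        if dist(x_A, x_B) <= tol:        # projections coincide: x_A is feasible
            return x_A
        x_B = project_onto_B(x_A, tabu)  # point feasible for the second set
        tabu.append(x_B)
    return None                          # no feasible point within the budget
```

On a toy one-dimensional instance with the continuous set \([0.6, 2]\) and the integers as the second set, starting from 0 the pump converges to the feasible point 1 in two iterations.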

Because the devil is in the details, we analyzed a large number of options for (i) formulating and solving the two distinct problems originated by the above split and (ii) guaranteeing convergence of the global algorithm. The result has been more than twenty FP variants, which have been computationally tested on a large number of MINLP instances from the literature to assess the viability of FP both as a stand-alone approximation algorithm and as a primal heuristic within a global optimization solver. Six especially interesting variants have been discussed in detail, and extensive results have been presented on a set of 65 “hard” instances. The results show that feasibility pumps are indeed successful in finding feasible solutions for nonconvex MINLPs.

## Acknowledgments

One of the authors (LL) is grateful for the following financial support: ANR 07-JCJC-0151 “ARS” and 08-SEGI-023 “AsOpt”; Digiteo Emergence “ARM”, Emergence “PASO”, Chair “RMNCCO”; Microsoft-CNRS Chair “OSD”. The remaining three authors are grateful to the other author (LL) for not making them feel too “poor”. We thank two anonymous referees for a careful reading and useful comments. Part of this work was conducted while the first author was a post-doctoral student at the ISyE, University of Wisconsin-Madison and DEIS, University of Bologna. The support of both institutions is gratefully acknowledged.