1 Introduction

Hierarchical decision making by different agents occurs in a number of real-life problems, including, e.g., energy policy and markets [62], pricing schemes [37], supply chain management [60], infrastructure interdiction [10, 58], taxation [36], and transportation network design [6]. The authors were in particular motivated by using bilevel modelling to determine optimal graph aggregations for network design problems as in [3], and by introducing controllable energy generation facilities with corresponding discrete decisions on the lower level of the energy supply tariff design problems described in [25]. In this paper we consider the setting with two agents, the leader on the upper level and the follower on the lower level, known in game theory as a Stackelberg game (see [48]). Furthermore, discrete decisions are allowed on both levels.

Thus, we want an exact solution method for problems from the following class:

$$\begin{aligned} \begin{aligned} \max _{x^u, y^u} \quad&F(x^u, y^u, x^l, y^l) \\ \mathrm {s.t.}\quad&G(x^u, y^u, x^l, y^l) \le 0 \\&x^u\in \mathbb {R}^{m_R}_+, \; y^u\in \mathbb {Z}^{m_Z}_+ \\&\begin{aligned} \max _{x^l, y^l} \quad&f(x^u,y^u,x^l,y^l)\\ \mathrm {s.t.}\quad&g(y^u,x^l,y^l)\le 0 \\&x^l\in \mathbb {R}^{n_R}_+, \; y^l\in \mathbb {Z}^{n_Z}_+. \end{aligned} \end{aligned} \end{aligned}$$
(1.1)

Here we denote the leader’s variables, objective function and constraints by \(x^u\in \mathbb {R}^{m_R}_+\) and \(y^u\in \mathbb {Z}^{m_Z}_+\), \( F\), and \( G(x^u, y^u, x^l, y^l) \le 0\), respectively. The follower’s optimization problem, parameterized by the upper-level variables \(\left( x^u, y^u\right) \), consists in maximizing the objective function \( f\) over the variables \(x^l\in \mathbb {R}^{n_R}_+\) and \(y^l\in \mathbb {Z}^{n_Z}_+\) subject to the constraints \( g(y^u,x^l,y^l)\le 0\).

A detailed set of additional assumptions that we consider in this paper will be introduced later, see Assumptions 1–4 at the beginning of Sect. 2. It should be stressed in particular that a number of the applications mentioned above, e.g., pricing schemes, taxation, interdiction and energy policy problems, fall into the setting given by our set of assumptions. Note that the majority of these applications are special cases that typically include either nonlinearities or lower-level integer variables, but not both.

1.1 Contribution

We show on a very small example that the algorithm proposed in [60], due to its approximative nature, may lead to incorrect results just because some constraint is scaled in an unfortunate way.

We then propose an exact algorithm version under the additional assumption that continuous leader variables do not appear in the follower constraints. Without this assumption a compact bilevel feasible set cannot be guaranteed from a theoretical point of view, i.e., a bilevel optimum may be unattainable and therefore finite termination with the exact optimum cannot be expected. More restrictive assumptions on this point are present in [23], where continuous leader variables are not allowed to appear in the follower’s problem at all. The assumption from [23] is also present in [49] and the corresponding software MibS [45].

Furthermore, we dispense with the assumption of feasibility of the bilevel problem required in [60], i.e., our algorithm can detect whether the bilevel problem is infeasible.

Moreover, we extend the algorithm to a nonlinear setting in which any kind of MINLP that an off-the-shelf solver can handle is allowed as the leader problem, while the follower problem, considered in the lower-level continuous variables only, has to be convex, bounded, and satisfy Slater’s condition in case of feasibility. Thus we also provide a constructive proof that a finite optimum is always attained within this class of bilevel problems.

In two recent surveys on bilevel optimization [12, 29], no exact global solution method for this problem class is mentioned, nor is any other existing method—apart from complete enumeration of the follower’s integer variables—known to the authors of this paper. Neither are the authors aware of any other proof that an optimum is always attained for the class of bilevel problems considered.

Although there are \(\varepsilon \)-optimal global optimization methods for bilevel problems requiring partially different assumptions than ours and, e.g., allowing more general nonconvex follower problems (see Sect. 1.3 on existing literature below), the algorithm proposed in this paper is able to find exact global optimal solutions or prove infeasibility for more complex bilevel problem classes than methods with the same property suggested in the literature, e.g., [23, 49].

Finally, we demonstrate the performance of the first implementation on newly constructed bilevel test instances. Note that the algorithm proposed in this paper builds upon a MINLP solver of the user’s choice, which iteratively derives upper and lower bounds on the bilevel optimum by solving corresponding single-level optimization problems. Thus our method incorporates all merits of an established solver implementation (stability, efficiency, multi-threading support, etc.) and automatically profits from any further development of MINLP solvers.

1.2 Paper structure

Section 1.3 contains a short overview of selected methods and existing literature for bilevel optimization problems, with a focus on problems with integer follower variables. Our setting, including the necessary assumptions, is presented in Sect. 2. In Sect. 2.1 we briefly state how the key features of Algorithm 1 described in Sect. 2 are realized in [60], and then describe and illustrate its approximative property in Sect. 2.2. In Sects. 3.1–3.2 we propose an exact realization which guarantees that a bilevel optimum of a feasible bilevel problem is always found. We initially restrict ourselves to the case in which the follower problem is linear. At the end of Sect. 3 we give a proof that our algorithm either finds a global optimal solution of a bilevel problem satisfying Assumptions 1–4 or proves infeasibility in finitely many iterations. Numerical results are presented in Sect. 4. In Sect. 5 we provide details on how to extend our algorithm to nonlinear follower problems. Finally, we conclude with Sect. 6.

1.3 Existing literature

Note that whenever the follower’s optimal solution is not unique, the bilevel optimal solution, and possibly even its objective value, is not well defined, see, e.g., [2]. As this issue is hard to deal with in practice, the majority of bilevel problem formulations rely on the so-called optimistic assumption, i.e., whenever multiple optimal lower-level solutions exist, the one allowing the best result for the leader is chosen. The opposite approach is to consider the follower’s optimal solution that is least advantageous to the leader, cf., e.g., [53, 59] and the survey [12] for more references. In the remainder of the paper we consider only bilevel problem formulations under the optimistic assumption.

A common way of solving bilevel problems is reformulating them into a single-level optimization problem, which is obtained by adding optimality conditions for the lower-level problem as constraints to the so-called High Point Relaxation (HPR) of the original bilevel problem, see, e.g., [2, 11]. However, this approach requires that general optimality conditions for the lower-level problem can be stated in such a way that some general global optimization solver is able to handle the reformulated single-level optimization problem. Consequently, this method is suitable only for special classes of bilevel problems and in particular does not allow for lower-level integer variables.

A number of methods have been proposed for handling bilevel problems with lower-level integer variables, most of them concentrating on various linear problem classes. Branch and cut approaches for approximating the optimal-value function have been used in [15, 16, 49], while [21,22,23] extend and apply intersection cuts to bilevel problems. An implementation of the approach published in [16] is publicly available in the solver MibS [45]. In addition, the algorithm presented in [21] is accessible as the software [20].

Global optimization approaches exploiting sensitivity analysis in the lower-level problem for the solution of various classes of bilevel programming problems are suggested in [18, 19]. The idea is to solve the lower-level problem as a multi-parametric programming problem, with parameters being the variables of the upper-level problem. Then by inserting the obtained rational reaction sets in the upper-level problem, the overall problem is transformed into a set of independent quadratic, linear or mixed-integer linear programming problems, which can be solved to global optimality. Another parametric approach for linear bilevel problems is presented in [35], while [50] solves bilevel problems without integer variables by min-max reformulation.

There exist some algorithms for nonlinear bilevel problems that are proven to yield \(\varepsilon \)-optimal solutions, such as the methods proposed in [39, 40] and the Branch-and-Sandwich algorithm from [31, 32]. They have been extended to the mixed-integer case in [38] and [33], respectively. Recent advances for these approaches allow, e.g., to lift restrictions on coupling equality constraints [17] or to improve algorithm performance [42]. Computational results for a bilevel solver based on the implementation of the Branch-and-Sandwich algorithm are presented in [43]. These algorithms have different sets of assumptions, but some allow continuous nonconvexities on the lower level and are thus more general than ours in this respect. However, in contrast to the \(\varepsilon \)-optimal methods mentioned above, our aim is an exact global optimal solution algorithm for nonlinear mixed-integer bilevel problems under a set of assumptions that we prove sufficient to guarantee attainment of the bilevel optimum. Note that the \(\varepsilon \) in the definition of bilevel \(\varepsilon \)-optimality serves as a tolerance for bilevel feasibility as well as for bilevel optimality, since a bilevel optimum may not always be attained in the general case. For a more comprehensive list of currently available solution methods for bilevel optimization problems, the reader is referred to [12].

In [61] the authors adapt the above-mentioned classical approach of solving bilevel problems via formulating necessary and sufficient optimality conditions of the lower level to a setting where both levels are mixed-integer problems. They show that if a mixed-integer linear bilevel optimization problem has the so-called relatively complete response property, iteratively solving its HPR with successively added follower optimality conditions for some fixed values of lower-level integer variables produces an exact solution of the original bilevel problem. Problem (1.1) has the relatively complete response property if every combination of HPR-feasible \(\left( x^u, y^u\right) \) and HPR-feasible \(y^l\) can be extended to a HPR-feasible solution \(\left( x^u, y^u, x^l, y^l\right) \) by a suitable HPR-feasible \(x^l\). Henceforth we shall call the HPR with optimality conditions of the lower level for some \(y^{l,k}\) with iteration index k the master problem (MP). The relatively complete response property guarantees that the master problem constructed in this way is a relaxation of the original bilevel problem. Successively adding follower optimality conditions for different \(y^{l,k}\) to (MP) produces a more precise approximation of the optimal-value function of the original bilevel problem, which becomes exact if all HPR-feasible discrete decisions for the follower have been enumerated. Note that complete enumeration of all possible follower integer configurations is not always necessary for solving the bilevel problem via the procedure described above.

A similar approach is also used for computing binary quasi-equilibria in [28], where the authors propose a transformation of a mixed-integer equilibrium problem into a mixed-integer bilevel problem, which they then solve by enumerating all possible integer configurations and formulating corresponding sets of follower optimality conditions. In this case the feasibility of the follower problem is not affected by the leader’s decisions, and the relatively complete response property holds.

In [60] the authors propose an extension of an algorithm from [61], which aims at handling lower-level integer variables in linear bilevel problems without the relatively complete response property. They consider a set of upper-level decisions which allow certain lower-level integer configurations, and add corresponding lower-level optimality conditions to the HPR only for this upper-level set.

Note that the idea of iteratively adding a bound for the lower-level objective function value to the HPR and making this bound valid only for a certain set of upper-level configurations was previously proposed in [38]. However, the particular formulation of this bound as well as the way of defining and finding the corresponding upper-level set and imposing the implied bound differs from the one suggested in [61].

2 General setting and algorithm structure

In this section we first state our assumptions on the class of bilevel problems the approach proposed in this paper can solve. Afterwards we introduce some necessary bilevel terminology in order to describe the general algorithm idea as proposed in [60]. Then, in Sect. 2.1 we state how the key features of the general projection algorithm are realized in [60]. Finally, in Sect. 2.2 we give an example on how this realization makes the algorithm prone to failure.

The algorithm proposed in the current paper requires the following assumptions:

Assumption 1

All upper-level variables have finite bounds in the HPR. All lower-level variables have finite bounds in the follower problem. The objective and constraint functions \( F, G, f\) and \( g\) are continuous in their respective closed boxes.

Assumption 1 is a standard assumption which ensures that all terms occurring in the formulations are bounded. We will make use of it when reformulating various indicator constraints as well as for showing that the proposed algorithm terminates after finitely many iterations.

Assumption 2

For any fixed upper-level continuous and integer decisions \(\bar{x}^{u}\) and \(\bar{y}^{u}\), respectively, and lower-level integer decisions \(\bar{y}^{l}\), the follower problem is convex and in case of feasibility satisfies Slater’s condition, i.e., \( f(\bar{x}^{u}, \bar{y}^{u}, \cdot , \bar{y}^{l})\) is concave, \( g(\bar{y}^{u}, \cdot , \bar{y}^{l})\) are convex, and there exists a strictly feasible point (satisfying all nonlinear constraints with strict inequality) for all \(\bar{x}^{u}, \bar{y}^{u}\) and \(\bar{y}^{l}\) if \( g(\bar{y}^{u}, x^l, \bar{y}^{l}) \le 0\) has a solution in \(x^l\).

Moreover, functions \( f\) and \( g\) are continuously differentiable within lower-level variable bounds.

The main consequence of Assumption 2 is that, considering all upper-level variables as parameters, we can formulate necessary and sufficient optimality conditions for the follower problem with fixed lower-level integer variables. Note that although there are optimal-value function formulations [54, 55] as well as optimality conditions based on extended duality [1] for ILPs/MILPs, they do not allow a single-level reformulation of the formulation (BLP) below that can be directly handed over to an off-the-shelf solver.
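Concretely, for fixed \(\left( \bar{x}^{u}, \bar{y}^{u}, \bar{y}^{l}\right) \) these are the KKT conditions of the continuous follower problem. A standard statement (our notation, with multipliers \(\lambda \) for the constraints \( g\) and \(\mu \) for the bounds \(x^l\ge 0\), ignoring for brevity the finite upper bounds from Assumption 1) reads:

$$\begin{aligned}&\nabla _{x^l} f(\bar{x}^{u}, \bar{y}^{u}, x^l, \bar{y}^{l}) - \nabla _{x^l} g(\bar{y}^{u}, x^l, \bar{y}^{l})^\top \lambda + \mu = 0, \\&g(\bar{y}^{u}, x^l, \bar{y}^{l}) \le 0, \quad x^l\ge 0, \quad \lambda \ge 0, \quad \mu \ge 0, \\&\lambda ^\top g(\bar{y}^{u}, x^l, \bar{y}^{l}) = 0, \quad \mu ^\top x^l= 0. \end{aligned}$$

By Assumption 2 (concavity of \( f\), convexity of \( g\), and Slater’s condition), these conditions are both necessary and sufficient for lower-level optimality.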

Assumption 3

The high point relaxation (HPR) of the original bilevel problem together with the necessary and sufficient optimality conditions for the lower level as assumed in Assumption 2 is of a problem class that can be handled by some off-the-shelf optimization solver.

The algorithm presented in this paper builds upon solvers for problems of the type given in Assumption 3. Hence we need this general assumption. In practice (and in our computational experiments), we mainly think of this problem class as mixed-integer nonlinear problems (MINLPs) with polynomial nonlinearities. Note that Assumptions 1–3 are also present in [60].

Assumption 4

The lower-level constraints do not contain any continuous upper-level variables.

Without Assumption 4 we might not be able to attain the bilevel optimum despite Assumption 1, see [51]. Indeed, this assumption justifies using ‘\(\max \)’ instead of ‘\(\sup \)’ in the problem formulation. Assumptions with this effect are widely used in the literature, at different levels of restrictiveness. For instance, the requirement of an all-integer leader and/or follower problem is frequently encountered. The main point of Assumption 4 is that continuous leader variables do not influence the follower’s feasible set. This is very similar to, yet slightly less restrictive than, Assumption 2 from [23] and Assumption 1 from [49], where continuous leader variables are also banned from the follower objective function, in addition to being banned from the follower constraints. This assumption has already been mentioned in earlier work by the same authors (e.g., see [22]) as being very important for their bilevel solver based on intersection cuts. Indeed, for the linear case, in which no continuous leader variables are present in the follower problem at all, [51] shows that a bilevel optimum is attained. To the best of the authors’ knowledge, no such statement exists yet for nonlinear cases satisfying Assumption 4, but possibly with continuous leader variables appearing in the follower objective function. This paper offers a constructive proof that a bilevel optimum is attained in this setting too. See Corollary 3.17 in Sect. 3 for more details.

Using the optimistic assumption described at the beginning of Sect. 1.3, we can employ the following reformulation of the original bilevel problem (1.1):

figure a

where

$$\begin{aligned} \theta \left( x^u, y^u\right) = \max _{x^l, y^l} \left\{ f(x^u,y^u,x^l,y^l): g(y^u,x^l,y^l)\le 0, \; x^l\in \mathbb {R}^{n_R}_+, \; y^l\in \mathbb {Z}^{n_Z}_+ \right\} \end{aligned}$$
(2.2)

is an optimal-value function, cf. Section 1.2 in [14].

The High Point Relaxation (HPR) mentioned earlier is obtained from (BLP) by dropping the (optimal-value-function constraint).
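Written out (our sketch, since the display of (BLP) is not reproduced here), the HPR optimizes the leader objective over all variables jointly, keeping the follower constraints but dropping the follower’s optimality:

$$\begin{aligned} \max _{x^u, y^u, x^l, y^l} \quad&F(x^u, y^u, x^l, y^l) \\ \mathrm {s.t.}\quad&G(x^u, y^u, x^l, y^l) \le 0, \quad g(y^u,x^l,y^l)\le 0, \\&x^u\in \mathbb {R}^{m_R}_+, \; y^u\in \mathbb {Z}^{m_Z}_+, \; x^l\in \mathbb {R}^{n_R}_+, \; y^l\in \mathbb {Z}^{n_Z}_+. \end{aligned}$$

Any bilevel feasible point is feasible for the HPR, so its optimal value bounds the bilevel optimum from above.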

Next, we provide some definitions from [60] that are needed to state their general algorithm, including a master problem and two subproblems. By k we denote the iteration index in the following.

Definition 2.1

For any lower-level feasible \(y^{l,k}\), let \(P \left( y^{l,k} \right) \) be defined as

$$\begin{aligned} P \left( y^{l,k} \right) = \left\{ \left( y^u, x^l\right) : g(y^u, x^l, y^{l,k}) \le 0, \; y^u\in \mathbb {Z}^{m_Z}_+, \; x^l\in \mathbb {R}^{n_R}_+ \right\} . \end{aligned}$$
(2.3)

Definition 2.2

For any lower-level feasible \(y^{l,k}\), by \(Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \) we denote

$$\begin{aligned} Proj _{\left( y^u \right) } P \left( y^{l,k} \right) = \left\{ y^u\in \mathbb {Z}^{m_Z}_+ : \exists x^l\in \mathbb {R}^{n_R}_+ \text { with } \left( y^u, x^l\right) \in P \left( y^{l,k} \right) \right\} . \end{aligned}$$
(2.4)

We consider \(y^{l,k}\) to be lower-level feasible if it can be extended to a feasible solution \((x^l, y^{l,k})\) of the follower’s optimization problem by suitable \(x^l\) for some \(y^u\), and, similarly, \(y^{l,k}\) to be HPR-feasible if it can be extended to a HPR-feasible solution \((x^u, y^u, x^l, y^{l,k})\) by suitable \(x^u, y^u, x^l\). We slightly abuse the notion of lower-level and HPR feasibility in the same way at various places throughout the paper.

The HPR of the original bilevel problem together with optimality conditions of the lower level for temporarily fixed lower-level feasible \(y^{l,k}\) is our master problem, denoted by (MP):

figure b

where

$$\begin{aligned} \theta \left( \bar{x}^{u}, \bar{y}^{u}, y^{l,k}\right) = \max _{x^l} \left\{ f(\bar{x}^{u}, \bar{y}^{u}, x^l, y^{l,k}) : g(\bar{y}^{u}, x^l, y^{l,k}) \le 0, \; x^l\in \mathbb {R}^{n_R}_+ \right\} \end{aligned}$$
(2.6)

is the optimal objective function value of the lower-level problem in which all but the continuous lower-level variables \(x^l\) are fixed, and \(Y^L\) is a set of lower-level feasible integer configurations, whose formation will be shown in Algorithm 1. We deviate from notational purity for the sake of more intuitive understanding and use the same symbol \(\theta \) as for the optimal-value function defined in (2.2). Particular realizations of the projection test, implication and optimality package will be discussed later in this paper.

The following subproblem provides the optimal objective function value of the lower level for some fixed upper-level decision \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \), i.e., the value of \(\theta \) at the point \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \). We call it follower optimality (FO) problem:

$$\begin{aligned} \begin{aligned} \theta \left( \bar{x}^{u}, \bar{y}^{u}\right) = \max _{x^l, y^l} \quad&f(\bar{x}^{u}, \bar{y}^{u}, x^l, y^l) \\ \mathrm {s.t.}\quad&g(\bar{y}^{u}, x^l, y^l) \le 0 \\&x^l\in \mathbb {R}^{n_R}_+, \; y^l\in \mathbb {Z}^{n_Z}_+. \end{aligned} \end{aligned}$$
(FO)

The last required subproblem checks whether for certain fixed \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \) there exists an optimal solution of the lower-level problem that together with \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \) satisfies the upper-level constraints, i.e., whether \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \) can be extended to a bilevel feasible solution. In addition, if such a lower-level solution exists, the one that produces the best result for the upper level will be chosen, thus realizing the optimistic assumption. We call this subproblem the bilevel feasibility (BF) problem:

figure c

Now we are ready to state a general form of the algorithm as proposed in [60] for feasible bilevel problems:

figure d

All of our assumptions apart from Assumption 4 stem directly from this general algorithm form and are also assumed in [60]. They guarantee that we are able to solve the master problem (MP) or determine that it is infeasible (Assumption 3), formulate optimality packages for fixed \(y^{l,k}\) (Assumption 2) and, if the bilevel problem is feasible, arrive at a bilevel optimal solution after finitely many iterations (Assumption 1). Assumption 4, which has already been motivated from a theoretical viewpoint, will be shown to be essential in practice too in Sect. 3.1.

Remark 2.3

For all HPR-feasible \(\left( \bar{x}^{u}, \bar{y}^{u}\right) \) we have

$$\begin{aligned} \theta \left( \bar{x}^{u}, \bar{y}^{u}\right) = \max _{y^{l,k}} \left\{ \theta \left( \bar{x}^{u}, \bar{y}^{u}, y^{l,k}\right) :\text {(FO) is feasible for } \left( \bar{x}^{u}, \bar{y}^{u}, y^{l,k}\right) \right\} . \end{aligned}$$

The following statements—Lemmas 2.4–2.6 and Theorem 2.7—were already shown in or follow directly from [60]. We list them here in our notation for completeness. Corollary 2.5 is not explicitly stated in [60] (due to their assumption that a feasible solution exists), but is a direct consequence of Lemma 2.4.

Lemma 2.4

For any set \(Y^L\) of lower-level feasible integer variable configurations the master problem (MP) is a relaxation of the original bilevel problem (BLP).

If \(Y^L\) comprises a complete set of lower-level feasible integer variable configurations, the master problem (MP) is equivalent to the original bilevel problem (BLP).

Corollary 2.5

If the master problem (MP) is infeasible, then the original bilevel problem (BLP) is infeasible. If the original bilevel problem (BLP) is infeasible, the master problem (MP) for a complete set of HPR-feasible lower-level integer variable configurations \(Y^L\) is infeasible.

Lemma 2.6

Algorithm 1 generates some \(y^{l,k}\) to be added to \(Y^L_k\) at the end of each iteration \(k\). If the thus generated \(y^{l,k}\) is already contained in \(Y^L_k\), i.e., it has been generated in some previous iteration, Algorithm 1 terminates with a bilevel optimal solution in iteration \(k+1\) at the latest.

Theorem 2.7

Given a feasible bilevel problem (BLP) satisfying Assumptions 1–4, Algorithm 1 finds a bilevel optimal solution in finitely many iterations.

The proofs of statements analogous to Lemma 2.6 and Theorem 2.7 that are needed for the exact projection test described in Sect. 3.1 can be found in Sect. 3.3.

Note that in the worst case, Algorithm 1 can result in a complete enumeration of all lower-level feasible \(y^l\). See Example A.1 in the appendix for an unfavorable instance in this regard.

For practical purposes, in the remainder of this paper we focus on the case in which the follower problem is linear, i.e., \( f\) and \( g\) are linear functions for fixed upper-level decisions \(\bar{y}^{u}\) and \(\bar{x}^{u}\). In this case we can write

$$\begin{aligned} f(x^u,y^u,x^l,y^l)= f_R(x^u, y^u) x^l+ f_Z(x^u, y^u) y^l\end{aligned}$$

(a lower-level objective function term that is constant for fixed \(y^u\) and \(x^u\) would be meaningless) and

$$\begin{aligned} g(y^u,x^l,y^l)= g_R(y^u) x^l+ g_Z(y^u) y^l+ g_c(y^u) \end{aligned}$$

for vector-valued functions \( f_R, f_Z, g_c\) and matrix-valued functions \( g_R, g_Z\) of the upper-level decisions. Extensions to the general case characterized by Assumptions 1–4 will be discussed in Sect. 5.

2.1 Projection test, implications and optimality packages as realized in the literature

In this section we describe the setting and the way the projection test, the implication of the optimality package, and the optimality package for \(y^{l,k}\), all necessary for Algorithm 1, are realized in [60].

Their set of assumptions includes Assumptions 1–3 and the additional assumption that the inducible region is nonempty, i.e., the bilevel problem is feasible. However, Assumption 4 is not present, and the setting treats only bilevel problems where the corresponding HPR is linear.

The projection test (PT), determining whether \(y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \), is realized with the following subproblem, formed at the end of each iteration for the resulting \(y^{l,k}\):

figure e

where \(\mathbb {1}\) is an \(s\)-dimensional all-ones vector \(\left( 1, \ldots , 1 \right) ^\top \), \(s\) is the number of constraints in the follower’s problem, and \(x^{l,k}\) are copies of the lower-level continuous variables \(x^l\), introduced specifically for the projection test in iteration \(k\).
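In the spirit of the instance-specific (PT\(_0\)) appearing in Example 2.9 below, the test minimizes the total violation of the follower constraints for \(y^{l,k}\); a slack-based formulation consistent with the quantities just introduced can be sketched as (our reconstruction, writing \(\mathbb {1}\) for the all-ones vector and \(t^{k}\) for the slack variables):

$$\begin{aligned} \min _{x^{l,k},\, t^{k}} \quad&\mathbb {1}^\top t^{k} \\ \mathrm {s.t.}\quad&g(y^u, x^{l,k}, y^{l,k}) \le t^{k}, \quad x^{l,k}\in \mathbb {R}^{n_R}_+, \; t^{k}\in \mathbb {R}^{s}_+, \end{aligned}$$

so that the optimal value is 0 exactly when \(y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \), as stated in Remark 2.8.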

Remark 2.8

Let \(y^{l,k}\) be generated by Algorithm 1 and let \(y^u\) be HPR-feasible. If the optimal value of (PT\(_0\)) is 0, then \(y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \), and otherwise \(y^u\notin Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \).

Then (PT\(_0\)), its dual feasibility conditions as well as both types of complementarity constraints are added to (MP) in order to replace

$$\begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \right] \implies \left[ f(x^u,y^u,x^l,y^l)\ge \theta \left( x^u, y^u, y^{l,k}\right) \right] \end{aligned}$$

by

(2.7a)
(2.7b)
(2.7c)
(2.7d)

The authors of [60] claim that constraint (2.7a) cannot be handled by off-the-shelf solvers directly and propose an approximative reformulation dependent on some \(\varepsilon \ge 0\):

(2.8a)
(2.8b)

In Sect. 2.2 we elaborate on the theoretical meaning of this approximation and illustrate how it can lead to algorithm failure in practice.

The algorithm from [60] then utilizes a feature for specifying indicator constraints provided by some mixed-integer solvers such as, e.g., CPLEX, in order to realize implication (2.8a).

In total the approximative projection test, implication and optimality package add one new binary as well as \((2n_R + 2s)\) new continuous variables, and \((6n_R + 4s+ 2)\) new constraints to (MP) in each iteration.

All \((3n_R + 2s)\) complementarity constraints from both projection test and optimality package are then linearized using a big-M technique, as the master problem has to remain linear in [60] in order to be able to employ the indicator constraints as mentioned above. Note that, unless upper bounds for dual variables of the lower level are known, the big-M linearization technique may lead to suboptimal solutions in the bilevel context, see, e.g., [44] for more information.
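For illustration, a single complementarity condition \(\lambda _i g_i = 0\) arising from the optimality package (with \( g_i \le 0\) and \(\lambda _i \ge 0\)) is typically linearized with an auxiliary binary \(u_i\) and sufficiently large constants as

$$\begin{aligned} \lambda _i \le M_\lambda \, u_i, \qquad -g_i \le M_g \left( 1 - u_i \right) , \qquad u_i \in \{0,1\}, \end{aligned}$$

which is exact only if \(M_\lambda \) and \(M_g\) are valid upper bounds on \(\lambda _i\) and \(-g_i\), respectively. Since dual variables of the lower level need not admit known a priori bounds, an underestimated \(M_\lambda \) can cut off optimal solutions, which is precisely the pitfall discussed in [44].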

2.2 Consequences of the approximative projection test

In this section we first explain some theoretical background regarding constraint (2.7a). Then we show how an inappropriate choice of \(\varepsilon \) used for the approximation (2.8a)–(2.8b) can lead to algorithm failure even on a very simple bilevel problem satisfying Assumption 4. In particular, we show that a value for \(\varepsilon \) ensuring correctness of Algorithm 1 with projection test as described in [60] depends also on the constraint representation of the bilevel problem.

First of all, note that constraint (2.7a) to be approximated in the projection test implies optimizing over non-closed sets, as will be explained in more detail in Sect. 3, see (3.2) and (3.3) therein. Hence, its realization poses problems not just from a practical viewpoint of, e.g., unavailable solver features. Recall that, as stated in Sect. 2, sometimes no bilevel optimum can be attained precisely because the bilevel feasible region is not compact, cf. Example 6.2.1(ii) from [2]. Assumption 4 eliminates this theoretical problem, i.e., when upper-level variables do not influence the follower’s feasible region, the bilevel optimum under the optimistic assumption can always be attained.

However, the approximation (2.8a)–(2.8b) can still cause problems even for bilevel problems satisfying Assumptions 1–4. An \(\varepsilon \) that is on the one hand small enough to ensure correctness of Algorithm 1 with the projection test as described in [60], and on the other hand big enough so as not to lead to numerical intractability, may be hard to find in practice. The following example shows how all bilevel feasible points can be cut off and a problem instance is erroneously classified as infeasible due to constraint scaling.

Example 2.9

Let \(\nu \ge 0\) be some scaling parameter. We solve the following bilevel problem parameterized by \(\nu \) employing the algorithm from [60] and show that the outcome is incorrect for \(\nu < \varepsilon \):

$$\begin{aligned} \begin{aligned} \max _{y^u} \quad&y^u- y^l\\ \mathrm {s.t.}\quad&y^u+ 4y^l\le 12, \quad y^u\in \mathbb {Z}_+ \\&\begin{aligned} \max _{y^l} \quad&y^l\\ \mathrm {s.t.}\quad&\nu y^u+ \frac{\nu }{2} y^l\le 3\nu , \quad 2y^u- y^l\le 2, \quad y^l\in \mathbb {Z}_+. \end{aligned} \end{aligned} \end{aligned}$$
figure f
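Before tracing the algorithm, the intended outcome of this instance can be verified by brute force. The following script (our hypothetical helper, not part of the algorithm or its implementation) enumerates the small integer grid and computes the optimistic bilevel optimum for any scaling \(\nu > 0\):

```python
# Brute-force evaluation of the bilevel instance from Example 2.9:
# leader maximizes y^u - y^l subject to y^u + 4 y^l <= 12, follower
# maximizes y^l subject to nu*y^u + (nu/2)*y^l <= 3*nu and 2*y^u - y^l <= 2.

def follower_feasible(yu, yl, nu):
    return nu * yu + nu / 2 * yl <= 3 * nu and 2 * yu - yl <= 2

def bilevel_optimum(nu, grid=range(0, 13)):
    best = None  # (leader objective, y^u, y^l)
    for yu in grid:
        # follower's rational reaction: maximize y^l over its feasible set
        responses = [yl for yl in grid if follower_feasible(yu, yl, nu)]
        if not responses:
            continue
        yl = max(responses)  # unique follower optimum here
        # bilevel feasibility: the leader constraint must hold as well
        if yu + 4 * yl <= 12:
            cand = (yu - yl, yu, yl)
            if best is None or cand > best:
                best = cand
    return best

print(bilevel_optimum(nu=1.0))   # (0, 2, 2): optimum (y^u, y^l) = (2, 2)
print(bilevel_optimum(nu=1e-9))  # same optimum for any nu > 0
```

This confirms the claim made later in the example: \((y^u, y^l) = (2,2)\) is the only bilevel feasible point, with leader objective 0, independently of the scaling \(\nu > 0\).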

We apply Algorithm 1 to this example problem and start with \(k=0\), \(\textit{UB} = \infty \), \(\textit{LB} = -\infty \):

Problem (MP)

$$\begin{aligned} \max _{y^u,y^l} \quad&y^u- y^l\\ \text {s.t.} \quad&y^u+ 4y^l\le 12, \quad \nu y^u+ \frac{\nu }{2} y^l\le 3\nu , \quad 2y^u- y^l\le 2, \quad y^u,y^l\in \mathbb {Z}_+ \end{aligned}$$

yields \(\left( y_{0}^{\text {u},*}, y_{0}^{\text {l},*} \right) = (1,0)\) together with \(\textit{UB} = 1\). Problem (FO) for \(y_{0}^{\text {u},*} = 1\)

$$\begin{aligned} \max _{y^l} \quad&y^l\\ \text {s.t.} \quad&\quad \nu y_{0}^{\text {u},*} + \frac{\nu }{2} y^l\le 3\nu , \quad 2y_{0}^{\text {u},*} - y^l\le 2, \quad y^l\in \mathbb {Z}_+ \end{aligned}$$

yields \(\hat{y}^{l}_0 = 4\) and \(\theta _0(y_{0}^{\text {u},*}=1) = 4\). Finally, problem (BF)

$$\begin{aligned}&\max _{y^l} -y^l\\&\text {s.t.} \quad \nu y_{0}^{\text {u},*} + \frac{\nu }{2} y^l\le 3\nu , 2y_{0}^{\text {u},*} - y^l\le 2, \\&\qquad y_{0}^{\text {u},*}+ 4y^l\le 12, y^l\ge \theta _0(y_{0}^{\text {u},*}=1) = 4, y^l\in \mathbb {Z}_+ \end{aligned}$$

is infeasible as the last two constraints imply \(4 \le y^l\le \frac{11}{4}\). So we add \(y^{l,0} = \hat{y}^{l}_0 = 4\) to \(Y^L_1\). Thus, the approximative projection test (PT\(_0\)) reads:

$$\begin{aligned} \begin{aligned}&\min _{ t^{0}_1, t^{0}_2} \quad t^{0}_1 + t^{0}_2 \\&\quad \text {s.t.} \quad - t^{0}_1 + \nu y^u+ \frac{\nu }{2} y_{}^{l,0} \le 3\nu , \quad - t^{0}_2 + 2y^u- y_{}^{l,0} \le 2, \quad t^{0}_1, t^{0}_2 \in \mathbb {R}_+. \end{aligned} \end{aligned}$$

For any optimal solution \(\left( t^{0,*}_1, t^{0,*}_2 \right) \) of (PT\(_0\)) we have

$$\begin{aligned} t^{0,*}_1 = \max \left\{ \nu \left( y^u- 1\right) , 0 \right\} , \quad t^{0,*}_2 = \max \left\{ 2\left( y^u- 3\right) , 0 \right\} . \end{aligned}$$

For \(0 \le t^{0}_1 + t^{0}_2 < \varepsilon \), constraint (2.8b)

$$\begin{aligned} \varepsilon (1-\psi ^0) \le t^{0}_1 + t^{0}_2, \quad \psi ^0 \in \{0,1\} \end{aligned}$$

implies \(\psi ^0 = 1\).

With \( t^{0}_1 = t^{0,*}_1\) and \( t^{0}_2 = t^{0,*}_2\) fixed to the optimal solution of (PT\(_0\)), constraint (2.8a) for \(y^{l,0} = 4\), given by

$$\begin{aligned} \left[ \max \left\{ \nu \left( y^u- 1\right) , 0 \right\} + \max \left\{ 2\left( y^u- 3\right) , 0 \right\} < \varepsilon \right] \rightarrow \left[ y^l\ge 4\right] , \end{aligned}$$

is added to problem (MP). Observe that there are only three HPR-feasible values of \(y^u\) in this example, namely \(\{0,1,2\}\). For \(y^u=0\) and \(y^u=1\) the optimal objective function value of (PT\(_0\)) is 0 irrespective of the value of \(\nu \). In these cases the above constraint correctly activates the corresponding optimality package, i.e., \(\left[ y^l\ge 4\right] \) for \(y^u\in \{0,1\}\). In particular, this means that there are no bilevel feasible solutions with \(y^u\in \{0,1\}\). The correct rational response of the follower to the only remaining HPR-feasible point \(y^u=2\) is \(y^l= 2\). However, for \(\nu < \varepsilon \) the above constraint also imposes the implication \(\left[ y^u= 2\right] \rightarrow \left[ y^l\ge 4\right] \), which cuts off the only bilevel feasible point (2, 2) and leads to the instance being classified as infeasible.
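The failure mode just described can be reproduced numerically. The following Python sketch (our illustration, not part of any implementation from the paper) evaluates the closed-form optimal value \(t^{0,*}_1 + t^{0,*}_2\) of (PT\(_0\)) and checks which HPR-feasible integer points survive the approximate implication \(\left[ t^{0,*}_1 + t^{0,*}_2 < \varepsilon \right] \rightarrow \left[ y^l \ge 4\right] \); all function names are ours:

```python
def pt0_value(yu, nu):
    """Optimal value t1* + t2* of (PT_0) in closed form (cf. the text)."""
    t1 = max(nu * (yu - 1), 0.0)
    t2 = max(2 * (yu - 3), 0.0)
    return t1 + t2

def surviving_points(nu, eps):
    """HPR-feasible integer points (y^u, y^l) not excluded by the
    approximate implication [t1* + t2* < eps] -> [y^l >= 4]."""
    points = []
    for yu in range(0, 13):
        for yl in range(0, 13):
            hpr = (yu + 4 * yl <= 12
                   and nu * yu + (nu / 2) * yl <= 3 * nu
                   and 2 * yu - yl <= 2)
            if not hpr:
                continue  # HPR-infeasible point
            if pt0_value(yu, nu) < eps and yl < 4:
                continue  # cut off by the activated optimality package
            points.append((yu, yl))
    return points

eps = 1e-6
# Well-scaled instance (nu = 1): the bilevel feasible point (2, 2) survives.
assert (2, 2) in surviving_points(1.0, eps)
# Badly scaled instance (nu < eps): every point is cut off,
# so the instance appears infeasible.
assert surviving_points(1e-8, eps) == []
```

For \(\nu = 1\) the bilevel feasible point (2, 2) survives, while for \(\nu = 10^{-8} < \varepsilon \) no point survives, exactly as argued above.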

So, for any \(\varepsilon > 0\) chosen for the approximative projection test there exists a half-space representation of the lower-level problem from Example 2.9 such that Algorithm 1 erroneously classifies the corresponding bilevel problem as infeasible.

For numerical reasons, optimization solvers usually apply some kind of scaling procedure to the problem before the actual solving process starts, see, e.g., [8]. This obviously changes the original representation of the problem and thus, as shown in Example 2.9, influences the suitability of the choice of \(\varepsilon \). Unfortunately, in most cases the exact scaling process of a solver under the most performant parameter settings is neither known nor accessible to the user.

3 Exact algorithm realization suitable for nonlinearities

In this section we describe our realization of projection test, implication and optimality package for each \(y^{l,k}\in Y^L_k\) in the master problem (MP), i.e.,

$$\begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \right] \implies \left[ f(x^u,y^u,x^l,y^l)\ge \theta \left( x^u, y^u, y^{l,k}\right) \right] , \end{aligned}$$

which makes Algorithm 1 exact and allows its extension to a nonlinear setting. Analogously to the implementation described in Sect. 2.1, we employ a binary variable \(\psi ^k\) for every \(y^{l,k}\in Y^L_k\) to separate the above routines of the algorithm:

$$\begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \right]&\implies \left[ \psi ^k= 1 \right] \end{aligned}$$
(3.1a)
$$\begin{aligned} \left[ \psi ^k= 1 \right]&\implies \left[ f(x^u,y^u,x^l,y^l)\ge \theta \left( x^u, y^u, y^{l,k}\right) \right] . \end{aligned}$$
(3.1b)

Section 3.1 deals with (3.1a), and Sect. 3.2 handles (3.1b). At the end of this section we present our algorithm version with the proof of its correctness and finite termination.

3.1 Exact projection test

As has been seen in Example 2.9, approximative projection test applies optimality package for \(y^{l,k}\) also to some \(y^u\notin Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \). Here we aim at precise handling by adding the implication for \(y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \), but abstaining from adding it for any \(y^u\notin Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \). Note that \(Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \) is a discrete set, so the required implication (3.1a) could be imposed by some kind of enumeration procedure. However, this would mean that Algorithm 1 would enumerate not only \(y^l\), but combinations of \(y^l\) and \(y^u\). Therefore we want to avoid enumeration of \(y^u\) whenever possible and use dual formulations for projection test, similar to the technique described in Sect. 2.1.

For this we utilize the linear relaxation of \(P \left( y^{l,k} \right) \), denoted by \(P_{\mathrm{lin}} \left( y^{l,k} \right) \), i.e.,

$$\begin{aligned} P_{\mathrm{lin}} \left( y^{l,k} \right) = \left\{ \left( y^u, x^l\right) : g(y^u, x^l, y^{l,k}) \le 0, y^u\in \mathbb {R}^{^{m_Z} }_+, x^l\in \mathbb {R}^{^{n_R} }_+ \right\} \end{aligned}$$

(cf. Definition 2.1). Correspondingly, we denote the projection of \(P_{\mathrm{lin}} \left( y^{l,k} \right) \) to the space of upper-level discrete variables by

$$\begin{aligned} Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) = \left\{ y^u\in \mathbb {R}^{^{m_Z} }_+ : \exists x^l\in \mathbb {R}^{^{n_R} }_+ \text { with } \left( y^u, x^l\right) \in P_{\mathrm{lin}} \left( y^{l,k} \right) \right\} . \end{aligned}$$

Note that while \(Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \) and \(Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \) contain exactly the same integer points, \(Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \) is a discrete set whereas \(Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \) is a continuous closed set.

The implication

$$\begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \right] \implies \left[ \psi ^k= 1 \right] , \end{aligned}$$
(3.2)

which is equivalent to the disjunction

$$\begin{aligned} \left[ y^u\notin Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \right] \vee \left[ \psi ^k= 1 \right] , \end{aligned}$$
(3.3)

however, cannot be modeled directly, as the complement of \(Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \) is an open set. Consequently, for linearly relaxed \(y^u\in \mathbb {R}^{m_Z}_{+}\) the set on which (3.2) is true is not a closed set.

One option to handle this obstacle is to use an approximation as it is done in [60], described in Sect. 2.1. However, as discussed in Sect. 2.2, we want to avoid this and obtain an exact algorithm. So our idea is the following:

  1. (a)

    Find an open subset \(U \subseteq Proj _{\left( y^u \right) }P_{\mathrm{lin}} \left( y^{l,k} \right) \) for which the implication \(\left[ y^u\in U \right] \implies \left[ \psi ^k= 1 \right] \) can be modeled using standard duality theory.

  2. (b)

    Model the remaining requirement \(\left[ y^u\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \right] \implies \left[ \psi ^k= 1 \right] \) via disjunction over \(y^u\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \).

Since (b) will involve some kind of enumeration, U should be chosen as large as possible such that (a) can still be modeled conveniently. We choose \(U = Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \), i.e., the interior of \(P_{\mathrm{lin}} \left( y^{l,k} \right) \) projected to the space of discrete upper-level variables. Note that the projection is an open map, so U is indeed an open set in the target space.

Remark 3.1

Since we would like U to be as large as possible, \(U=Proj _{\left( y^u \right) }^\circ \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) \) seems to be the canonical candidate as it is by definition the largest open subset of \(Proj _{\left( y^u \right) }P_{\mathrm{lin}} \left( y^{l,k} \right) \). However, this set is more difficult to describe. In particular, note that \(Proj _{\left( y^u \right) }^\circ \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) \ne Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \) in general as can be seen from the following example:

Suppose \( g :\mathbb {R}^3 \rightarrow \mathbb {R}\) is a scalar function with \( g(y^u, x^l, y^{l,k}) = x^l- \left( y^u- 1 \right) ^2\). In this case, for any \(y^{l,k}\) we have

$$\begin{aligned} Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) = \left\{ y^u\in \mathbb {R}_+ : \exists x^l\in \mathbb {R}_+ \text { with } x^l\le \left( y^u- 1 \right) ^2 \right\} = \mathbb {R}_+, \end{aligned}$$

and hence its interior is equal to \(\mathbb {R}_{>0}\). However, for \(y^u= 1\) there is no \(x^l\in \mathbb {R}_+\) such that \(x^l< \left( y^u- 1 \right) ^2\) and therefore

$$\begin{aligned} 1 \notin \left\{ y^u\in \mathbb {R}_+ : \exists x^l\in \mathbb {R}_+ \text { with } x^l< \left( y^u- 1 \right) ^2 \right\} = Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) . \end{aligned}$$

One may object that \(U = \emptyset \) if \(P_{\mathrm{lin}} \left( y^{l,k} \right) \) is not full-dimensional, which happens regularly in practical instances due to equations in the follower problem. Nonetheless, an adjustment for U in this case is possible and will be presented in Sect. 3.1.1.

W.l.o.g. we assume in Sects. 3.1 and 3.2 that there are no lower-level constraints which depend on \(y^l\) only, i.e., where neither \(x^l\) nor \(y^u\) are present. Indeed, as such constraints are always satisfied for a \(y^{l,k}\) that resulted from solving (FO) or (BF) and have no influence on other variables, they can be safely disregarded once discrete lower-level variables are fixed to \(y^{l,k}\).

We introduce the following auxiliary optimization problem for projection test:

$$\begin{aligned} \begin{aligned} \max _{x^{l,k}, t^{k}} \quad&t^{k}\\ \text {s.t.} \quad&g_R(y^u)x^{l,k}+ \left( t^{k}\right) _s\le - g_c(y^u)- g_Z(y^u)y^{l,k}\\&x^{l,k}\in \mathbb {R}_+^{n_R}, \; t^{k}\in \mathbb {R}, \end{aligned} \end{aligned}$$
(PT)

where \(\left( t^{k}\right) _s= \left( t^{k}, \ldots , t^{k}\right) ^\top \in \mathbb {R}^s\) is the vector having \( t^{k}\) in every component.

It is easy to see that \(y^u\in Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \) if and only if (PT) has a feasible solution with \( t^{k}\ge 0\). Furthermore, we have the following:

Lemma 3.2

For any fixed \(y^{l,k}\) and \(\bar{y}^{u}\) the problem (PT) has a nonempty feasible region. Moreover, the optimal solution part \( t^{k,*}\) is unique, and the following equivalences hold:

  1. Case 1:

    \( t^{k,*}> 0 \Longleftrightarrow \bar{y}^{u}\in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) = U\),

  2. Case 2:

    \( t^{k,*}= 0 \Longleftrightarrow \bar{y}^{u}\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \),

  3. Case 3:

    \( t^{k,*}< 0 \Longleftrightarrow \bar{y}^{u}\notin Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \).

Proof

Recall that due to Assumption 1 all variables of both levels have finite bounds. Since \( t^{k}\) is a free slack variable present in every constraint of (PT), setting \(x^{l,k}= 0\) and choosing \( t^{k}\) small enough satisfies all constraints. Since the objective function of (PT) depends only on the single decision variable \( t^{k}\), the optimal solution part \( t^{k,*}\) is unique. For the optimal solution \( t^{k,*}\) at least one of the constraints is satisfied with equality. Thus \( t^{k,*}\) is by construction the maximal slack applicable to all lower-level constraints with lower-level variables fixed to \(y^{l,k}\). Since \(y^{l,k}\) is a constant, Case 1 is equivalent to the existence of some \(\bar{x}^{l,k}\) such that

$$\begin{aligned} (\bar{y}^{u}, \bar{x}^{l,k}) \in \left\{ \left( y^u, x^l\right) : g(y^u, x^l, y^{l,k}) < 0, y^u\in \mathbb {R}^{^{m_Z} }_+, x^l\in \mathbb {R}^{^{n_R} }_+ \right\} = P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \end{aligned}$$

if \( g\) does not contain any tautological equations for the particular \(y^{l,k}\). Recall that w.l.o.g. we have excluded the possibility of lower-level equations containing no other variables apart from \(y^l\), so this translates to \(\bar{y}^{u}\in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \). Consequently, Case 2 corresponds to

$$\begin{aligned} \bar{y}^{u}\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) . \end{aligned}$$

This completes the proof. \(\square \)
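Lemma 3.2 can be illustrated numerically. The sketch below (our illustration, using the data of Example 2.9 with \(\nu = 1\) and \(y^{l,0} = 4\); all names are ours) solves (PT) with SciPy's LP solver — here the lower level has no continuous variables, so only the common slack \(t^{0}\) remains — and classifies \(y^u\) by the sign of \(t^{0,*}\):

```python
from scipy.optimize import linprog

def pt_value(yu, nu=1.0, yl0=4):
    """Optimal value t^{0,*} of (PT) for fixed y^u and y^{l,0}.

    (PT) reduces to: max t  s.t.  t <= slack of each lower-level
    constraint after fixing y^u and y^{l,0}."""
    b = [3 * nu - nu * yu - (nu / 2) * yl0,  # slack of nu*y^u + (nu/2)*y^l <= 3*nu
         2 - 2 * yu + yl0]                   # slack of 2*y^u - y^l <= 2
    res = linprog(c=[-1.0],                  # maximize t == minimize -t
                  A_ub=[[1.0], [1.0]], b_ub=b,
                  bounds=[(None, None)],     # t is a free variable
                  method="highs")
    return -res.fun

assert pt_value(0) > 0           # Case 1: y^u = 0 lies in the projected interior
assert abs(pt_value(1)) < 1e-9   # Case 2: y^u = 1 lies on the boundary
assert pt_value(2) < 0           # Case 3: y^u = 2 is outside the projection
```

The three assertions correspond exactly to the three cases of the lemma (and to the case analysis in Example 3.10 below, there with general \(\nu \)).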

Let us now consider task (a), corresponding to the first case from Lemma 3.2. For some sufficiently large number \(\text {M}_{\mu }\) (details of which we discuss in Sect. 3.1.1), we introduce the following inequality:

$$\begin{aligned} t^{k,*}\le \psi ^k\text {M}_{\mu }. \end{aligned}$$
(3.4)

It ensures that \( \left[ \bar{y}^{u}\in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \right] \implies \left[ \psi ^k= 1 \right] \) due to Case 1 of Lemma 3.2.

We can formulate the dual problem to (PT):

$$\begin{aligned} \begin{aligned} \min _{\mu } \quad&-\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu \\ \text {s.t.} \quad&g_R(y^u)^\top \mu \ge 0 \\&\mathbb {1}_s^\top \mu = 1 \\&\mu \in \mathbb {R}^{s}_+. \end{aligned} \end{aligned}$$
(PT dual)

Here, \(\mathbb {1}_s \in \mathbb {R}^s\) is the \(s\)-dimensional all-ones vector.

We will show that we need only dual feasibility constraints to be able to reformulate (3.4) as

$$\begin{aligned} -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu \le \psi ^k\text {M}_{\mu }. \end{aligned}$$
(3.4a)

Lemma 3.3

Let \(\psi ^k\in \{0,1\}\) and \(\text {M}_{\mu }\) be some sufficiently large number. Let strong duality hold for (PT) and (PT dual). Adding (PT dual) and (3.4a) to the master problem (MP) implies

$$\begin{aligned} \begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \right]&\implies \left( \psi ^k= 1 \right) \\ \left[ y^u\notin Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \right]&\implies \mathrm{(PT dual)}\,\text {and}\,\mathrm{(3.4a)}\,{\text {impose no restrictions on}}\,(\mathrm{MP}). \end{aligned} \end{aligned}$$

Proof

The weak duality theorem implies

$$\begin{aligned} -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu \ge t^{k}\end{aligned}$$

for every feasible solution of (PT) and (PT dual). Lemma 3.2 states that \( t^{k,*}> 0\) if and only if \(y^u\in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \) and, therefore,

$$\begin{aligned} -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu \ge t^{k,*}> 0 \end{aligned}$$

for every \(\mu \) satisfying (PT dual). Then \(\psi ^k\) is set to 1 due to (3.4a).

Note that in the master problem (MP) the variables \(\mu \) appear only in constraints from (PT dual) and in (3.4a), which do not involve any other variables apart from \(\psi ^k\). Thus \(\psi ^k\) is the only variable that links these constraints to the rest of (MP). Consequently, if \(\mu \) can be chosen such that no restriction is imposed on the value of \(\psi ^k\), i.e., such that the left-hand side of (3.4a) is nonpositive, these constraints exert no influence on (MP). This is indeed possible due to strong duality and Cases 2 and 3 of Lemma 3.2. \(\square \)

Remark 3.4

Note that unless strong duality holds between (PT) and (PT dual), the above approach may erroneously classify some upper-level variable values as belonging to a certain projection when in fact they do not. However, the required strong duality is satisfied due to Assumption 2.

Thus, adding (PT dual) and (3.4a) to (MP) ensures correct handling of Cases 1 and 3 from Lemma 3.2 and completes task (a) above.

Next we consider task (b) handling Case 2. We are unable to add the implication

$$\begin{aligned} \left[ y^u\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \right] \implies \left[ \psi ^k= 1 \right] \end{aligned}$$
(3.5)

directly to the formulation for the same reason we could not do so for implication (3.2).

Instead, we may add a weaker implication to (MP) that considers only the optimal solution values \(y_{k}^{\text {u},*}\) of the upper-level discrete variables in the current algorithm iteration:

$$\begin{aligned} \left[ y^u= y_{k}^{\text {u},*} \right] \implies \left[ \psi ^k= 1 \right] \end{aligned}$$
(3.6)

An obvious way to enforce implication (3.6) is to employ a so-called no-good cut [9, 27], which cuts off a specific solution. A similar approach was pursued in [52] for a Benders' decomposition algorithm with a mixed-integer linear subproblem. There, each assignment of discrete variables corresponds to a nonconvex set, namely the union of the complement and the boundary of a polyhedron. The no-good cut is used to exclude, in further iterations, cases that lead to points on faces of the polyhedron.

A way to realize a no-good cut for implication (3.6) is to use a quadratic constraint:

$$\begin{aligned} 1 - \sum _{i = 1 }^{m_Z} {\left( y_{i}^{\text {u}}-y_{k,i}^{\text {u},*} \right) ^2 } \le \psi ^k. \end{aligned}$$
(3.7)

In case all upper-level integer variables are binary, instead of (3.7) one may use the linear constraint

$$\begin{aligned} 1 + \sum _{y_{k,i}^{\text {u},*}=1} {\left( y_{i}^{\text {u}}-1 \right) } - \sum _{y_{k,i}^{\text {u},*}=0} y_{i}^{\text {u}} \le \psi ^k. \end{aligned}$$
(3.8)
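As a small illustration (variable and function names are ours), the following sketch checks that both realizations of the no-good cut force \(\psi ^k = 1\) exactly at the recorded point \(y_{k}^{\text {u},*}\) for binary upper-level variables: the left-hand sides of (3.7) and (3.8) equal 1 at \(y_{k}^{\text {u},*}\) and are nonpositive elsewhere, so the constraint \(\text {lhs} \le \psi ^k\) binds \(\psi ^k\) only there:

```python
def quadratic_cut_lhs(yu, yu_star):
    """Left-hand side of (3.7): 1 - sum_i (y_i - y*_i)^2."""
    return 1 - sum((a - b) ** 2 for a, b in zip(yu, yu_star))

def linear_cut_lhs(yu, yu_star):
    """Left-hand side of (3.8), valid for binary y^u only."""
    ones = sum(yu[i] - 1 for i in range(len(yu)) if yu_star[i] == 1)
    zeros = sum(yu[i] for i in range(len(yu)) if yu_star[i] == 0)
    return 1 + ones - zeros

yu_star = (1, 0, 1)  # recorded optimal point y^{u,*}_k (illustrative data)
for yu in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    forced = (yu == yu_star)  # psi_k must be forced to 1 iff y^u = y^{u,*}_k
    assert (quadratic_cut_lhs(yu, yu_star) >= 1) == forced
    assert (linear_cut_lhs(yu, yu_star) >= 1) == forced
```

For integer points, each mismatching coordinate decreases both left-hand sides by at least 1, which is why the cut is inactive everywhere except at \(y_{k}^{\text {u},*}\).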

Remark 3.5

Note that (3.5) can be imposed on the master problem gradually by adding suitable no-good constraints realizing the implication (3.6) for some \(y^{u,k}\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \cap \mathbb {Z}_+^{m_Z}\), where \(y^{u,k}\) is the upper-level discrete part of the optimal solution of (MP) in iteration \(k\).

The required implication (3.1a) can be imposed on the master problem by adding (PT dual), (3.4a) and a finite number of no-good constraints realizing (3.5), which at the same time imposes no implications on any \(y^u\notin Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \). The number of required no-good constraints is finite because the number of HPR-feasible \(y^u\) is finite. Note that Assumption 4 is essential for this to work as it ensures that only discrete upper-level variables influence the set of feasible follower responses.

3.1.1 Projection test with equalities

If the description of \(P_{\mathrm{lin}} \left( y^{l,k} \right) \) contains equations, \(P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \) is empty, i.e., Case 1 from Lemma 3.2 never occurs. Thus, the originally desired implication (3.2) would be imposed by (3.6) only, which would require adding a large number of no-good cuts and, consequently, a large number of algorithm iterations. To avoid this, we propose a special handling for the case of equality constraints being present in the lower-level problem formulation.

First, we distinguish between equality and inequality constraint functions, denoted by \( h\) and \( g\), respectively. Matrix-valued functions \( h_R\), \( h_c\), \( h_Z\) and \( g_R\), \( g_c\), \( g_Z\) are used accordingly. We denote the number of inequality constraints of the lower level by \(s^\text {I} \le s\).

Our aim is to substitute

$$\begin{aligned} U = Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) ^\circ \right) \text { by } U^\text {I} = Proj _{\left( y^u \right) } \left( P^I_{\text {lin}} \left( y^{l,k}\right) ^\circ \cap P_{\mathrm{lin}} \left( y^{l,k} \right) \right) , \end{aligned}$$

where \(P^I_{\text {lin}} \left( y^{l,k}\right) \!=\! \left\{ \left( y^u, x^l\right) : g(y^u, x^l, y^{l,k}) \!\le \! 0, y^u\in \mathbb {R}^{^{m_Z} }_+, x^l\in \mathbb {R}^{^{n_R} }_+ \right\} \) and \( g(y^u,x^l,y^l)\le 0\) are the inequality constraints of the lower level. Hence,

$$U^\text {I} = \left\{ y^u\in \mathbb {R}^{^{m_Z} }_+ \mid \exists x^l\in \mathbb {R}^{^{n_R} }_+: g(y^u, x^l, y^{l,k}) < 0,~ h(y^u, x^l, y^{l,k}) = 0 \right\} . $$

To achieve this we modify the primal projection test problem (PT) by abstaining from adding slack variables to equality constraints of the lower level:

$$\begin{aligned} \begin{aligned} \max _{x^{l,k}, t^{k}} \quad&t^{k}\\ \text {s.t.} \quad&g_R(y^u)x^{l,k}+ \left( t^{k}\right) _{s^\text {I}} \le - g_c(y^u)- g_Z(y^u)y^{l,k}\\&h_R(y^u)x^{l,k}= - h_c(y^u)- h_Z(y^u)y^{l,k}\\&x^{l,k}\in \mathbb {R}_+^{n_R}, \; t^{k}\in \mathbb {R}, \end{aligned} \end{aligned}$$
(PTEq)

where \(\left( t^{k}\right) _{s^\text {I}} = \left( t^{k}, \ldots , t^{k}\right) ^\top \in \mathbb {R}^{s^\text {I}}\) is a vector having \( t^{k}\) in every component.

Note that in contrast to (PT), (PTEq) is not necessarily feasible. However, we will show that its dual is always feasible.

With the dual variables \(\lambda \) and \(\mu \) corresponding to equality and inequality constraints of (PTEq), respectively, we formulate the dual problem to (PTEq):

$$\begin{aligned} \begin{aligned} \min _{\mu , \lambda } \quad&-\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu -\left( h_c(y^u)+ h_Z(y^u)y^{l,k}\right) ^\top \lambda \\ \text {s.t.} \quad&g_R(y^u)^\top \mu + h_R(y^u)^\top \lambda \ge 0 \\&\mathbb {1}_{s^\text {I}}^\top \mu = 1 \\&\mu \in \mathbb {R}^{s^\text {I}}_+, \; \lambda \in \mathbb {R}^{s - s^\text {I}}. \end{aligned} \end{aligned}$$
(PTEq dual)

Again, \(\mathbb {1}_{s^\text {I}} \in \mathbb {R}^{s^\text {I}}\) is the \(s^\text {I}\)-dimensional all-ones vector.

Lemma 3.6

The optimization problem (PTEq dual) always has a nonempty feasible region.

Proof

Recall that according to Lemma 3.2, (PT) always has a nonempty feasible region. Thus, although (PTEq) is not necessarily feasible, its infeasibility can arise only from violated equality constraints. Therefore (PTEq) without equality constraints is always feasible with a finite optimal value, and by LP duality so is its dual

$$\begin{aligned} \begin{aligned} \min _{\mu } \quad&-\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu \\ \text {s.t.} \quad&g_R(y^u)^\top \mu \ge 0 \\&\mathbb {1}_{s^\text {I}}^\top \mu = 1 \\&\mu \in \mathbb {R}^{s^\text {I}}_+. \end{aligned} \end{aligned}$$
(PTwithoutEq dual)

For any feasible solution \(\bar{\mu }\) of (PTwithoutEq dual), \(\left( \bar{\mu }, \lambda = 0 \right) \) constitutes a feasible solution of (PTEq dual). \(\square \)

Corollary 3.7

If (PTEq) is infeasible, (PTEq dual) is unbounded.

Proof

The dual of an infeasible linear optimization problem is either infeasible or unbounded according to LP theory, see Corollary 2.5 in [57]. Lemma 3.6 shows that the first case cannot occur for the considered primal–dual pair. Therefore the second case applies, and infeasibility of (PTEq) implies unboundedness of (PTEq dual). \(\square \)

Analogously to (3.4a), we introduce the following constraint to complete the projection test (PTEq dual) with equalities:

$$\begin{aligned} -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu -\left( h_c(y^u)+ h_Z(y^u)y^{l,k}\right) ^\top \lambda \le \psi ^k\text {M}_{\mu }. \end{aligned}$$
(3.4b)

Now we are ready to provide the analogue of Lemma 3.3 for the case of equality constraints being present on the lower level:

Lemma 3.8

Let \(\psi ^k\in \{0,1\}\) and \(\text {M}_{\mu }\) be some sufficiently large number. If strong duality holds for (PTEq) and (PTEq dual) whenever (PTEq) is feasible, adding (PTEq dual) and (3.4b) to the master problem (MP) implies:

$$\begin{aligned} \begin{aligned}&\left[ \bar{y}^{u}\in Proj _{\left( y^u \right) } \left( P^I_{\text {lin}} \left( y^{l,k}\right) ^\circ \cap P_{\mathrm{lin}} \left( y^{l,k} \right) \right) \right] \implies \psi ^k= 1 \\&\left[ \bar{y}^{u}\notin Proj _{\left( y^u \right) } \left( P^I_{\text {lin}} \left( y^{l,k}\right) ^\circ \cap P_{\mathrm{lin}} \left( y^{l,k} \right) \right) \right] \\&\quad \implies \text {no additional restrictions on the master problem.} \end{aligned} \end{aligned}$$

Proof

If \(\bar{y}^{u}\in U^\text {I} = Proj _{\left( y^u \right) } \left( P^I_{\text {lin}} \left( y^{l,k}\right) ^\circ \cap P_{\mathrm{lin}} \left( y^{l,k} \right) \right) \), all constraints of (PTEq) can be satisfied and the maximal slack \( t^{k,*}\) for inequality constraints is strictly positive, analogously to the first case of Lemma 3.2. Thus the dual (PTEq dual) is also feasible and, due to weak duality, (3.4b) enforces \(\psi ^k= 1\):

$$\begin{aligned} 0 < t^{k,*}\le -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top \mu -\left( h_c(y^u)+ h_Z(y^u)y^{l,k}\right) ^\top \lambda \le \psi ^k\text {M}_{\mu }. \end{aligned}$$

Thus we obtain the first implication of the lemma and proceed with the second one.

If \(\bar{y}^{u}\notin U^\text {I}\), then (PTEq) is either infeasible or has \( t^{k,*}\le 0\). If (PTEq) is infeasible, then the minimization problem (PTEq dual) is unbounded according to Corollary 3.7. For \( t^{k,*}\le 0\), strong duality holds between (PTEq) and (PTEq dual):

$$\begin{aligned} -\left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top {\mu }^* -\left( h_c(y^u)+ h_Z(y^u)y^{l,k}\right) ^\top {\lambda }^* = t^{k,*}\le 0. \end{aligned}$$

Thus, for \(\bar{y}^{u}\notin U^\text {I}\), a feasible solution \(\left( \mu , \lambda \right) \) of (PTEq dual) can be chosen such that its objective function value, which is also the left-hand side of (3.4b), is nonpositive regardless of the feasibility of (PTEq). Therefore (3.4b) imposes no restrictions on the value of \(\psi ^k\) if \(\bar{y}^{u}\notin U^\text {I}\), which completes the proof analogously to the proof of Lemma 3.3. \(\square \)

Now we address the computation of \(\text {M}_{\mu }\). Note that it suffices to choose \(\text {M}_{\mu }\) large enough that for every HPR-feasible \(y^u\) there is a feasible solution of (PTEq dual) satisfying (3.4b). Then an optimal solution of (PTEq dual) also satisfies (3.4b) for every HPR-feasible \(y^u\) and, consequently, adding (3.4b) to the master problem (MP) produces the effect desired by Lemma 3.8. In the proof of Lemma 3.6 we established that for any \(y^u\) there is always a feasible solution of (PTEq dual) with \(\lambda = 0\). Therefore, a suitable \(\text {M}_{\mu }\) is, e.g., the optimal objective function value of the problem obtained by combining (PTwithoutEq dual) with the HPR constraint set and variables of (BLP). This auxiliary optimization problem is always feasible, as (PTwithoutEq dual) is feasible for any HPR-feasible \(y^u\), and its optimal objective function value is finite as all variables involved are bounded.
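For the instance of Example 2.9 with \(\nu = 1\), this computation can be sketched as follows (our illustration: instead of solving the combined auxiliary problem, we simply enumerate the HPR-feasible \(y^u \in \{0,1,2\}\) and take the largest optimal value of (PT dual), which yields a valid, in fact tightest, constant for this instance):

```python
from scipy.optimize import linprog

def pt_dual_opt(yu, nu=1.0):
    """Optimal value of (PT dual) for Example 2.9 with y^{l,0} = 4:
    min nu*(1-y^u)*mu1 + 2*(3-y^u)*mu2  s.t.  mu1 + mu2 = 1, mu >= 0."""
    res = linprog(c=[nu * (1 - yu), 2 * (3 - yu)],
                  A_eq=[[1.0, 1.0]], b_eq=[1.0],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    return res.fun

# M_mu must dominate the optimal dual value for every HPR-feasible y^u.
M_mu = max(pt_dual_opt(yu) for yu in (0, 1, 2))
assert abs(M_mu - 1.0) < 1e-9  # here M_mu = nu suffices
```

Enumerating \(y^u\) is viable only for tiny instances; the auxiliary optimization problem described above achieves the same bound without enumeration.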

Remark 3.9

The above proposed projection test including no-good cuts allows an exact realization of Algorithm 1 and also results in some more changes compared to the approximative projection test described in Sect. 2.1:

  • We need only dual feasibility constraints together with (3.4a) or (3.4b), respectively, instead of all necessary optimality conditions for (PT) or (PTEq), respectively.

  • Consequently, neither \(x^{l,k}\) as ‘copies’ of continuous lower-level variables, nor \( t^{k}\) for each \(y^{l,k}\) are needed.

  • We introduce one additional implication, i.e., one no-good constraint, per iteration \(k\) of the algorithm. Altogether, we introduce one new binary variable and \(s\) continuous variables as well as \(n_R + 3\) constraints for the exact projection test in each iteration.

  • We may have to enumerate many or, in the worst case, all integer points of certain sets \(Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U^\text {I}\).

Example 3.10

We revisit Example 2.9 using the improved algorithm proposed above. For \(k=0\), the steps solving (MP), (FO) and (BF) are identical, so we start from (PT). As the lower level does not contain equations, we employ projection test with U:

(PT) ‘Checking which \(y^u\) allow \(y^{l,0} = 4\)’

$$\begin{aligned} \begin{aligned} \max _{ t^{0}} \quad&t^{0} \\ \text {s.t.} \quad&t^{0} \le \nu \left( 1 - y^u\right) \\&t^{0} \le 2 \left( 3 - y^u\right) \\&t^{0} \in \mathbb {R}. \\ \end{aligned} \end{aligned}$$

Let us consider all three possible cases regarding \(Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y_{}^{l,0} \right) \):

  1. Case 1:

    \( t^{0,*} > 0 \text { for } y^u= 0 \Longleftrightarrow \left( y^u= 0 \right) \in Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y_{}^{l,0} \right) ^\circ \right) ,\)

  2. Case 2:

    \( t^{0,*} = 0 \text { for } y^u= 1 \Longleftrightarrow \left( y^u= 1\right) \in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y_{}^{l,0} \right) \right) {\setminus } U \right) ,\)

  3. Case 3:

    \( t^{0,*} < 0 \text { for } y^u\ge 2 \Longleftrightarrow \left( y^u\ge 2 \right) \notin Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y_{}^{l,0} \right) \).

The dual problem (PT dual) reads:

$$\begin{aligned} \begin{aligned} \min _{ \mu ^{0}_1, \mu ^{0}_2} \quad&\nu (1 - y^u) \mu ^{0}_1 + 2(3-y^u) \mu ^{0}_2 \\ \text {s.t.} \quad&\mu ^{0}_1+ \mu ^{0}_2 = 1 \\&\mu ^{0}_1, \mu ^{0}_2 \in \mathbb {R}_+. \\ \end{aligned} \end{aligned}$$

Taking dual feasibility constraints together with the proposed inequality (3.4a)

$$\begin{aligned} \nu (1 - y^u) \mu ^{0}_1 + 2(3-y^u) \mu ^{0}_2 \le \psi ^0 \text {M}_{\mu }\end{aligned}$$

we obtain that \(\psi ^0 = 1\) is implied exactly for \(y^u= 0\) corresponding to Case 1, and \(\psi ^0 \in \{0,1\}\) for larger values of \(y^u\). Indeed, consider all three possible cases regarding \(Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y_{}^{l,0} \right) \):

  1. Case 1:

    for \(y^u= 0\) we have \(\nu \mu ^{0}_1 + 6 \mu ^{0}_2 \le \psi ^0 \text {M}_{\mu }\) which implies \(\psi ^0 = 1\) since \( \mu ^{0}_1+ \mu ^{0}_2 = 1\) and \( \mu ^{0}_1, \mu ^{0}_2 \ge 0\).

  2. Case 2:

    for \(y^u= 1\) we have \(4 \mu ^{0}_2 \le \psi ^0 \text {M}_{\mu }\). Hence, \(\psi ^0 = 0\) is possible as \(( \mu ^{0}_1, \mu ^{0}_2) = (1,0)\) is a feasible solution for (PT dual).

  3. Case 3:

    for \(y^u\ge 2\) the objective coefficient of \( \mu ^{0}_1\) is negative such that the feasible solution \(( \mu ^{0}_1, \mu ^{0}_2) = (1,0)\) even gives a negative objective value for (PT dual), again allowing \(\psi ^0 = 0\).

By adding the no-good cut corresponding to \(y_{0}^{\text {u},*}\), i.e., the implication \(\left[ y_{}^{\text {u}} = y_{0}^{\text {u},*} = 1 \right] \implies \left[ \psi ^0 = 1 \right] \), Case 2 is covered as well. Altogether, projection test, implication and optimality package for \(y^{l,0}\) to be added to the master problem (MP) comprise

$$\begin{aligned} \left[ 0 \le y^u\le 1 \right] \rightarrow \left[ y^l\ge 4\right] . \end{aligned}$$

Taken together with the leader constraint \(y^u+ 4y^l\le 12\), this implies \(y^u\ge 2\), which leaves just one integer feasible point in (MP) for \(k=1\), namely \(\left( y_{1}^{\text {u},*}, y_{1}^{\text {l},*} \right) = (2,2)\), UB \(= 0\). As the point (2, 2) is bilevel-feasible, LB resulting from (FO) and (BF) is also 0, and the algorithm terminates with an optimal solution.
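The case analysis of Example 3.10 can be verified numerically. The sketch below (our illustration; names are ours) solves (PT dual) for \(\nu = 1\) with SciPy and checks the sign of its optimal value, which decides whether (3.4a) forces \(\psi ^0 = 1\):

```python
from scipy.optimize import linprog

def pt_dual_min(yu, nu=1.0):
    """Optimal value of (PT dual) from Example 3.10:
    min nu*(1-y^u)*mu1 + 2*(3-y^u)*mu2  s.t.  mu1 + mu2 = 1, mu >= 0."""
    res = linprog(c=[nu * (1 - yu), 2 * (3 - yu)],
                  A_eq=[[1.0, 1.0]], b_eq=[1.0],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    return res.fun

# Case 1: every feasible mu yields a positive left-hand side of (3.4a),
# so psi^0 = 1 is forced.
assert pt_dual_min(0) > 0
# Case 2: the minimum is 0, attained at mu = (1, 0); psi^0 = 0 stays possible.
assert abs(pt_dual_min(1)) < 1e-9
# Case 3: the minimum is negative; again psi^0 = 0 stays possible.
assert pt_dual_min(2) < 0
```

Case 2 is the one that the subsequent no-good cut has to cover, matching the discussion above.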

3.2 Implications and optimality packages

Recall that optimality package for \(y^{l,k}\) in the master problem (MP) consists of the following constraint:

$$\begin{aligned} f(x^u,y^u,x^l,y^l)= f_R(x^u, y^u)x^l+ f_Z(x^u, y^u)y^l\ge \theta \left( x^u, y^u, y^{l,k}\right) , \end{aligned}$$

where \(\theta \left( x^u, y^u, y^{l,k}\right) \) is the optimal objective function value of the follower problem with all but continuous lower-level variables \(x^l\) fixed. Assumption 2 ensures that \(\theta \left( x^u, y^u, y^{l,k}\right) \) can be formulated by using necessary and sufficient optimality conditions of the lower level with fixed \(y^l\), such as, e.g., KKT conditions. Note that we are only interested in globally optimal solutions of the original bilevel problem and thus do not require reformulations to be equivalent also in terms of locally optimal solutions. For the latter, the situation is actually more complicated and equivalence may not hold [13].

Let us denote the lower-level dual variables in iteration \(k\) by \(\pi ^{k}\). For a linear follower problem we can choose optimality package comprised of primal and dual feasibility constraints for the lower level as well as the strong duality equality with fixed integer variables \(y^{l,k}\):

$$\begin{aligned} \begin{aligned} f_R(x^u, y^u)x^l+ f_Z(x^u, y^u)y^l&\ge f_R(x^u, y^u)x^{l,k}+ f_Z(x^u, y^u)y^{l,k}\\ g_R(y^u)x^{l,k}&\le - g_c(y^u)- g_Z(y^u)y^{l,k}\\ g_R(y^u)^\top \pi ^{k}&\ge f_R(x^u, y^u)\\ f_R(x^u, y^u)x^{l,k}&= \left( - g_c(y^u)- g_Z(y^u)y^{l,k}\right) ^\top \pi ^{k}\\ x^{l,k}&\in \mathbb {R}^{n_R}_+, \pi ^{k}\in \mathbb {R}^{s}_+. \end{aligned} \end{aligned}$$
(3.9)

For our approach we need to implement logical implications of the following form:

$$\begin{aligned} \begin{aligned} \left[ \psi ^k= 1 \right] \implies&\text {the }\textsc {optimality package} \text { for } y^{l,k}\text { is } \textit{active}\text {, i.e., (3.9) is added to (MP)}, \\ \left[ \psi ^k= 0 \right] \implies&\text {the }\textsc {optimality package} \text { for } y^{l,k}\text { is } \textit{inactive}\text {,} \\&\text {i.e., no additional restrictions are imposed on the master problem.} \end{aligned} \end{aligned}$$
(3.10)

Implications such as (3.10) are often realized by so-called indicator constraints [5, 7]. An indicator constraint expresses a logical relationship among variables by designating a binary variable that controls whether a specified constraint is active. Some solvers, e.g., CPLEX, provide facilities for indicator constraints, which are utilized in [60]. It is also common to implement indicator constraints via SOS1 conditions [4], which is the approach taken by Gurobi, for example. So far, solvers do not handle arbitrary nonlinearities together with indicator constraints, and since indicator constraints usually lead to weaker relaxations, we follow a big-M approach instead.
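The big-M switching mechanism behind such implications can be illustrated on a one-constraint toy example (the numbers below are our own, not from the paper): the relaxed constraint \(a x \le b + \text{M}(1-\psi)\) coincides with \(a x \le b\) for \(\psi = 1\) and is vacuous over the variable's domain for \(\psi = 0\).

```python
# Toy illustration of realizing [psi = 1] => a*x <= b via a big-M constraint
# a*x <= b + M*(1 - psi); all data here are illustrative.

def bigM_constraint_holds(a, x, b, M, psi):
    """Relaxed form of the constraint a*x <= b, switched by psi in {0, 1}."""
    return a * x <= b + M * (1 - psi)

# Suppose x is known to lie in [0, 10]; then M = a*10 - b suffices.
a, b = 2.0, 6.0
M = a * 10 - b  # 14.0: the worst violation of a*x <= b over the domain

# psi = 1: the big-M term vanishes and the original constraint is enforced.
assert bigM_constraint_holds(a, 3.0, b, M, 1)       # 2*3 <= 6: satisfied
assert not bigM_constraint_holds(a, 4.0, b, M, 1)   # 2*4 > 6: violated

# psi = 0: every x in the domain satisfies the relaxed constraint,
# i.e., the constraint is effectively switched off.
assert all(bigM_constraint_holds(a, x / 10.0, b, M, 0) for x in range(0, 101))
```

The same mechanism, with \(\text{M}_{\pi}\) in place of the toy constant, drives the activation of the optimality packages below.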

If we use big-M formulations for (3.10), we have to deduce a suitable big-M coefficient for arbitrary lower-level problems. A similar problem arises in a popular solution method for linear bilevel optimization problems in which the lower level is reformulated using KKT optimality conditions, which are then linearized using big-M formulations [60]. Finding a suitable big-M coefficient in this case is already challenging, as is indicated in [30, 44]. One possibility is to derive the correct big-M coefficient based on bound propagation. The dual variables of the lower level, however, do not have finite bounds in general. Nevertheless, in the following we show that we do not require bounds for the dual variables \(\pi ^{k}\) of the lower level in iteration \(k\) in order to realize the implication (3.10).

Consider the following reformulation of the implication (3.10):

$$\begin{aligned} \begin{aligned} f_R(x^u, y^u)x^l+ f_Z(x^u, y^u)y^l&\ge f_R(x^u, y^u)x^{l,k}+ f_Z(x^u, y^u)y^{l,k}- \text {M}_{\pi }\left( 1 - \psi ^k\right) \\ g_R(y^u)x^{l,k}&\le - g_c(y^u)- g_Z(y^u)y^{l,k}+ \text {M}_{\pi }\left( 1 - \psi ^k\right) \mathbf {1}\\ g_R(y^u)^\top \pi ^{k}&\ge f_R(x^u, y^u)- \text {M}_{\pi }\left( 1 - \psi ^k\right) \mathbf {1}\\ f_R(x^u, y^u)x^{l,k}&= \left( - g_c(y^u)- g_Z(y^u)y^{l,k}\right) \pi ^{k}\\ x^{l,k}&\in \mathbb {R}^{n_R}_+, \pi ^{k}\in \mathbb {R}^{s}_+ \end{aligned} \end{aligned}$$
(3.11)

with

$$\begin{aligned} \begin{aligned} \text {M}_{\pi }= \max \bigg \{&\max _{x^u, y^u, x^l, y^l, y^{l,k}} \left\{ f_Z(x^u, y^u)y^{l,k}- f_R(x^u, y^u)x^l- f_Z(x^u, y^u)y^l\right\} , \\&\max _{i, y^u, y^{l,k}} \left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) ^\top e_{i}, \ \max _{i, x^u, y^u} f_R(x^u, y^u) e_{i}, \ 0 \bigg \}, \end{aligned} \end{aligned}$$
(3.12)

where \( e_{i}\) are the corresponding standard basis vectors. Similarly to the calculation of \(\text {M}_{\mu }\) described in Sect. 3.1.1, the calculation of \(\text {M}_{\pi }\) can be done by solving three auxiliary optimization problems with the objectives as given in (3.12) and the constraints and variables from the HPR of the original bilevel problem. Note that in these auxiliary problems, \(y^{l,k}\) are variables ranging over the set of all lower-level feasible integer configurations.
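The three maximizations in (3.12) can be illustrated by brute-force enumeration over small finite grids. The data below are a hypothetical toy instance of our own; in the actual method each maximum is a separate auxiliary problem over the HPR, not an enumeration.

```python
from itertools import product

# Hypothetical toy data: f_R, f_Z, g_c, g_Z are constant vectors here, so the
# three auxiliary problems of (3.12) collapse to enumeration over small grids.
f_R = [1.0, -2.0]          # objective coefficients of x^l
f_Z = [3.0]                # objective coefficients of y^l
g_c = [-4.0, 1.0]          # constant part of the lower-level constraints
g_Z = [[2.0], [-1.0]]      # coefficients of y^l in the lower-level constraints

X_L = [(a / 2, b / 2) for a in range(5) for b in range(5)]  # grid for x^l in [0, 2]^2
Y_L = [(v,) for v in range(4)]                              # grid for y^l and y^{l,k}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# First term: max f_Z*y^{l,k} - f_R*x^l - f_Z*y^l
t1 = max(dot(f_Z, ylk) - dot(f_R, xl) - dot(f_Z, yl)
         for ylk, xl, yl in product(Y_L, X_L, Y_L))
# Second term: max over constraint rows i of (g_c + g_Z*y^{l,k})_i
t2 = max(g_c[i] + dot(g_Z[i], ylk) for i in range(len(g_c)) for ylk in Y_L)
# Third term: max over components i of (f_R)_i
t3 = max(f_R)

M_pi = max(t1, t2, t3, 0.0)
assert M_pi == 13.0   # t1 = 9 + 4 + 0 dominates the other terms here
```

With this value, switching an optimality package off can never cut an HPR-feasible point, which is exactly what the proofs of Lemmas 3.11 and 3.12 require.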

Terms appearing in the \(\text {M}_{\pi }\) formula (3.12) that do not include any variables can instead be multiplied by \(\psi ^k\) directly in (3.11); such terms can then be disregarded in the calculation of \(\text {M}_{\pi }\).

Lemma 3.11

For \(\psi ^k\in \left\{ 0,1\right\} \) the inequality system (3.11) is equivalent to (3.10).

Proof

The first part of the implication (3.10), i.e., the one for \(\psi ^k= 1\), is fulfilled trivially: the big-M terms vanish and (3.11) reduces to the optimality package (3.9).

In the case of \(\psi ^k= 0\), the optimality package corresponding to \(y^{l,k}\) is inactive, i.e., constraints (3.11) have to be satisfied for any HPR-feasible \(\left( x^u, y^u, x^l, y^l\right) \). This is true with, e.g., \(x^{l,k}= 0\) and \(\pi ^{k}= 0\). As both \(x^{l,k}\) and \(\pi ^{k}\) appear only in the optimality package for \(y^{l,k}\) and nowhere else in the master problem, their values can be chosen freely in an inactive optimality package; by the choice of \(\text {M}_{\pi }\), the first constraint of (3.11) then remains valid for all HPR-feasible \(\left( x^u, y^u, x^l, y^l\right) \). \(\square \)

The optimality package formulation (3.11) can be simplified even further:

$$\begin{aligned} \begin{aligned} f_R(x^u, y^u)x^l+ f_Z(x^u, y^u)y^l&\ge \left( - g_c(y^u)- g_Z(y^u)y^{l,k}\right) \pi ^{k}+ f_Z(x^u, y^u)y^{l,k}- \text {M}_{\pi }\left( 1 - \psi ^k\right) \\ g_R(y^u)^\top \pi ^{k}&\ge f_R(x^u, y^u)- \text {M}_{\pi }\left( 1 - \psi ^k\right) \mathbf {1}\\ \pi ^{k}&\in \mathbb {R}^{s}_+ \end{aligned} \end{aligned}$$
(3.13)

with

$$\begin{aligned} \text {M}_{\pi }= \max \bigg \{&\max _{x^u, y^u, x^l, y^l, y^{l,k}} \left\{ f_Z(x^u, y^u)y^{l,k}- f_R(x^u, y^u)x^l- f_Z(x^u, y^u)y^l\right\} , \\&\max _{i, x^u, y^u} f_R(x^u, y^u) e_{i}, \ 0 \bigg \}. \end{aligned}$$

Lemma 3.12

Assume that strong duality holds for the follower problem in continuous lower-level variables. Then, for \(\psi ^k\in \left\{ 0,1\right\} \) the inequality system (3.13) is equivalent to (3.10).

Proof

The case of \(\psi ^k= 0\) can be treated analogously to Lemma 3.11. All inequalities of (3.13) are satisfied for \(\pi ^{k}= 0\). As \(\pi ^{k}\) appears only in the optimality package corresponding to \(y^{l,k}\), which is inactive for \(\psi ^k= 0\), the values of \(\pi ^{k}\) can be chosen freely without influencing the result of the overall optimization problem.

Now consider the case \(\psi ^k= 1\), where the idea of the proof is similar to that of Lemma 3.2. Due to weak duality, the relation

$$\begin{aligned} f_R(x^u, y^u)x^{l,k}\le - \left( g_c(y^u)+ g_Z(y^u)y^{l,k}\right) \pi ^{k}\end{aligned}$$
(3.14)

is always true and is even satisfied with equality for each optimal solution pair of the primal and dual follower problem. For \(\psi ^k= 1\), the first inequality of (3.11) as well as of (3.13) already ensures lower-level optimality for fixed \(y^{l,k}\) by imposing a lower bound on the lower-level objective function value. Consequently, in the optimal solution of the master problem, \(\pi ^{k}\) must constitute an optimal solution of the dual lower-level problem with fixed \(y^{l,k}\). Indeed, for \(\psi ^k= 1\), the master problem is feasible only if (3.14) is satisfied with equality, which is possible due to strong duality. Thus only the dual feasibility conditions are required to correctly impose the lower bound on the lower-level objective function for fixed \(y^{l,k}\). \(\square \)
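Relation (3.14) is ordinary LP weak duality. It can be verified on a hand-solved toy LP (data of our own, with \(c\) playing the role of \(f_R(x^u,y^u)\) and \(b\) the role of \(-g_c(y^u)-g_Z(y^u)y^{l,k}\)): every feasible primal/dual pair satisfies the inequality, and the optimal pair attains it with equality.

```python
# Hand-solved toy LP illustrating (3.14): weak duality for every feasible
# primal/dual pair, with equality exactly at the optimal pair.
# Primal: max c^T x  s.t.  A x <= b, x >= 0.
# Dual:   min b^T pi s.t.  A^T pi >= c, pi >= 0.
c = [3.0, 2.0]
A = [[1.0, 1.0], [1.0, 0.0]]
b = [4.0, 3.0]

def primal_obj(x): return sum(ci * xi for ci, xi in zip(c, x))
def dual_obj(pi): return sum(bi * p for bi, p in zip(b, pi))

x_feas, pi_feas = [1.0, 1.0], [3.0, 0.0]   # feasible but non-optimal pair
x_opt, pi_opt = [3.0, 1.0], [2.0, 1.0]     # optimal primal/dual pair

# Weak duality: the primal objective never exceeds the dual objective.
assert primal_obj(x_feas) <= dual_obj(pi_feas)   # 5 <= 12
# Strong duality: equality at the optimal pair.
assert primal_obj(x_opt) == dual_obj(pi_opt) == 11.0
```

This is the mechanism exploited in Lemma 3.12: the master problem is feasible for \(\psi^k = 1\) only if the dual value closes the weak duality gap.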

Remark 3.13

The optimality package formulation (3.13) dispenses with the copies \(x^{l,k}\) of the lower-level continuous variables and the corresponding primal feasibility constraints, together with the explicit strong duality constraint. Thus, only \(s\) continuous variables and \(n_R + 1\) constraints are added to the master problem in each iteration to realize the optimality package.

Note that the complete elimination of \(x^{l,k}\) from the problem formulations is possible only in combination with the specific form of the projection test described in Sect. 3.1. The absence of the primal lower-level feasibility constraints makes the calculation of \(\text {M}_{\pi }\) easier and the constant itself potentially smaller.

3.3 Algorithm form, correctness and finite termination

In the following we present Algorithm 2, a modification of Algorithm 1 with our exact projection test that can also decide feasibility of (BLP). In order to handle Case 2 from Lemma 3.2, we need to keep track of all encountered upper-level integer solutions \(y^{u,k}\) corresponding to every generated lower-level integer configuration \(y^{l,k}\); this set is denoted by \(Y^U\left( y^{l,k}\right) \). For each of these \(y^{u,k}\), a no-good constraint of the form (3.7) or, if all leader integer variables are binary, (3.8) is added to the master problem (MP) as part of the exact projection test. The rest of the projection test is composed of (PTEq dual) and (3.4b), or (PT dual) and (3.4a), depending on whether a special equality constraint treatment is needed or not. The optimality package comprises (3.13).

[Algorithm 2: pseudocode figure]

Analogously to Lemma 2.4, we formulate the following statements:

Lemma 3.14

For any set \(Y^L\) of lower-level feasible integer variable configurations and corresponding sets \(Y^U(y^{l,k})\), the master problem (MP) is a relaxation of the original bilevel problem (BLP).

If \(Y^L\) comprises a complete set of lower-level feasible integer variable configurations while for the corresponding sets \(Y^U(y^{l,k})\) the inclusion \(\left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \cap \mathbb {Z}_+^{m_Z} \subseteq Y^U(y^{l,k})\) holds, the master problem (MP) is equivalent to the original bilevel problem (BLP).

Proof

Lemma 2.4 shows the first statement for the master problem which incorporates

$$\begin{aligned} \left[ y^u\in Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \right] \implies \left[ f(x^u,y^u,x^l,y^l)\ge \theta \left( x^u, y^u, y^{l,k}\right) \right] \text { for all } y^{l,k}\in Y^L. \end{aligned}$$

As Lemma 3.8 and the first part of Remark 3.5 indicate, the exact projection test imposes \(\left[ f(x^u,y^u,x^l,y^l)\ge \theta \left( x^u, y^u, y^{l,k}\right) \right] \) for a subset of \(Proj _{\left( y^u \right) } P \left( y^{l,k} \right) \). Therefore, for any set \(Y^L\) of lower-level feasible integer variable configurations and corresponding sets \(Y^U(y^{l,k})\), our master problem (MP) with the exact projection test is a relaxation of the master problem from Lemma 2.4, and as such a relaxation of (BLP).

The second statement of this lemma is inferred from the second part of Lemma 2.4 and the second part of Remark 3.5. Note that not necessarily all \(\bar{y}^{u}\in \left( Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U \right) \cap \mathbb {Z}_+^{m_Z}\) for a complete set of lower-level feasible integer variable configurations \(y^{l,k}\) have to be enumerated in order to construct a single-level reformulation of (BLP). \(\square \)

Even fewer no-good constraints may be needed to find a bilevel optimal solution of (BLP), as Algorithm 2 needs to add no-good constraints only for those \(\bar{y}^{u}\) that form part of an optimal solution of (MP) in some iteration.
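In the binary case, no-good constraints of this kind are commonly realized as the classical binary no-good cut \(\sum_{i:\, \bar{y}_i = 0} y_i + \sum_{i:\, \bar{y}_i = 1} (1 - y_i) \ge 1\); we assume here that (3.8) has this standard shape. The following sketch builds the cut in linear form and checks that it excludes exactly one binary point.

```python
from itertools import product

def no_good_cut(y_bar):
    """Coefficients (a, rhs) of the classical binary no-good cut
    sum_{i: y_bar_i = 0} y_i + sum_{i: y_bar_i = 1} (1 - y_i) >= 1,
    returned in the linear form a^T y >= rhs."""
    a = [1 if v == 0 else -1 for v in y_bar]
    rhs = 1 - sum(y_bar)
    return a, rhs

def satisfies(cut, y):
    a, rhs = cut
    return sum(ai * yi for ai, yi in zip(a, y)) >= rhs

y_bar = (1, 0, 1)
cut = no_good_cut(y_bar)
# The cut excludes exactly y_bar and keeps every other binary point.
for y in product((0, 1), repeat=3):
    assert satisfies(cut, y) == (y != y_bar)
```

One such cut per encountered \(\bar{y}^{u}\) is all that the additional enumeration iterations described at the end of this section add to the master problem.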

To show correctness and finite termination of Algorithm 2 we need the following pendant to Lemma 2.6:

Lemma 3.15

As long as the master problem (MP) remains feasible, Algorithm 2 generates some \(y^{l,k}\) and \(y^{u,k}\) to be added to \(Y^L_k\) and \(Y^U\left( y^{l,k}\right) \), respectively, at the end of each iteration \(k\) of the while-loop. If the \(y^{l,k}\) generated in this way is already contained in \(Y^L_k\) and \(y^{u,k}\) is already contained in \(Y^U\left( y^{l,k}\right) \), i.e., the pair \(\left( y^{u,k}, y^{l,k}\right) \) has been generated in some previous iteration, then Algorithm 2 terminates with a bilevel optimal solution no later than in iteration \(k+1\).

Proof

Let \(\left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, x_{k}^{\text {l},*}, y_{k}^{\text {l},*} \right) \) be the solution of (MP) in iteration \(k\). As the follower optimality subproblem (FO) is always feasible given an HPR-feasible \(\left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*} \right) \), a \(y^{u,k}= y_{k}^{\text {u},*}\) and a \(y^{l,k}\) are generated in each iteration as long as the master problem (MP) remains feasible.

Now we prove the second part of the Lemma. If the bilevel feasibility subproblem (BF) is infeasible in iteration \(k\), then \(y^{l,k}= \hat{y}^{\text {l}}_k\) is part of the optimal solution of (FO) computed before, and consequently \( \theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}\right) = \theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, y^{l,k}\right) \) holds. If (BF) is feasible in iteration \(k\), then \(y^{l,k}= \tilde{y}^{\text {l}}_k\) is part of its optimal solution and as such also satisfies \(\theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}\right) =\theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, y^{l,k}\right) \).

If the \(y^{l,k}\) generated in this way is already contained in \(Y^L_k\) and \(y^{u,k}\) is already contained in \(Y^U\left( y^{l,k}\right) \), then the corresponding projection test together with the implication and the optimality package for \(y^{l,k}\) are also already present in the master problem (MP) in iteration \(k\). In particular, the no-good constraint for \(\left( y^{u,k}, y^{l,k}\right) \) as part of the projection test implies the optimality package for \(y^{l,k}\), i.e., the constraint \( f(x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, x_{k}^{\text {l},*}, y_{k}^{\text {l},*}) \ge \theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, y^{l,k}\right) \) is active in (MP) in iteration \(k\). As \(\theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}\right) = \theta \left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, y^{l,k}\right) \), for the upper-level decision variables \(\left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*} \right) \) this optimality package corresponding to \(y^{l,k}\) is exactly the optimal-value-function constraint. Therefore \(\left( x_{k}^{\text {u},*}, y_{k}^{\text {u},*}, x_{k}^{\text {l},*}, y_{k}^{\text {l},*} \right) \) is a bilevel feasible solution. According to the first part of Lemma 3.14, it is also an optimal solution of a relaxation of the original bilevel problem (BLP), and thus it is a bilevel optimal solution of (BLP). \(\square \)

Theorem 3.16

Algorithm 2 with the projection test as described in Sect. 3.1 and the implications and optimality packages as described in Sect. 3.2 either finds a bilevel optimal solution or shows infeasibility of the original problem (BLP) in finitely many iterations.

Proof

From Lemma 3.15 we can see that Algorithm 2 has three possible outcomes in each iteration of the while-loop:

  • The master problem (MP) is infeasible, which by Corollary 2.5 implies infeasibility of the original bilevel problem (BLP).

  • Some \(y^{l,k}\) and \(y^{u,k}\) generated in iteration \(k\) are already in \(Y^L_k\) and \(Y^U\left( y^{l,k}\right) \), respectively, which by the last statement of Lemma 3.15 leads to termination of the algorithm with a bilevel optimal solution.

  • A \(y^{l,k}\) generated in iteration \(k\) is not yet in \(Y^L_k\), or \(y^{u,k}\notin Y^U\left( y^{l,k}\right) \).

This means that after each iteration the algorithm either terminates or generates a pair \(\left( y^{u,k}, y^{l,k}\right) \) that has not been encountered before. Unless the algorithm terminates earlier according to the first two cases listed above, it enumerates all lower-level feasible integer configurations \(y^{l,k}\) and, according to Remark 3.5, some of the upper-level feasible integer configurations \(y^{u,k}\), thus constructing a master problem (MP) which is equivalent to the original bilevel problem (BLP) due to Part 2 of Lemma 3.14. As the number of lower-level feasible integer configurations \(y^{l,k}\) as well as the number of HPR-feasible integer configurations \(y^{u,k}\) is finite by Assumption 1, Algorithm 2 terminates after finitely many iterations. \(\square \)
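The termination bookkeeping used in this proof can be sketched as follows. This is a schematic of the loop logic only, not the authors' implementation: the solves of (MP), (FO) and (BF) are replaced by a stub iterator yielding hypothetical master-problem solutions.

```python
# Schematic sketch of the bookkeeping behind Theorem 3.16: the loop stops as
# soon as the master problem is infeasible or a pair (y^{u,k}, y^{l,k}) repeats.

def algorithm2_skeleton(mp_solutions):
    """mp_solutions yields (y_u, y_l) pairs, or None for an infeasible (MP)."""
    Y_L = set()                   # encountered lower-level integer configs
    Y_U = {}                      # Y^U(y^l): upper-level configs per y^l
    for sol in mp_solutions:      # stands in for: solve (MP), then (FO), (BF)
        if sol is None:
            return "infeasible"
        y_u, y_l = sol
        if y_l in Y_L and y_u in Y_U[y_l]:
            return "optimal"      # repeated pair: terminate (Lemma 3.15)
        # otherwise: add projection test, no-good cut and optimality package,
        # and record the new pair
        Y_L.add(y_l)
        Y_U.setdefault(y_l, set()).add(y_u)
    raise RuntimeError("oracle exhausted")  # cannot happen: configs are finite

# A run that revisits the pair ((2,), (2,)) terminates with "optimal" ...
assert algorithm2_skeleton(iter([((1,), (2,)), ((2,), (2,)), ((2,), (2,))])) == "optimal"
# ... and an infeasible master problem is reported as such.
assert algorithm2_skeleton(iter([((1,), (1,)), None])) == "infeasible"
```

Since the sets of feasible integer configurations are finite (Assumption 1), every run of this loop terminates in one of the two return branches.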

Corollary 3.17

An optimum is attained for a feasible bilevel problem satisfying Assumptions 1–4.

Suitable realizations of the projection test, implication and optimality package that are necessary to confirm this result for nonlinear follower problems will be given in Sect. 5.

Altogether, the projection test, implication and optimality package as described in this section add 1 binary and \(2s\) continuous variables as well as \(2n_R +4\) constraints to the master problem per iteration. Note that this calculation is based on bilevel problems with only inequality constraints on the lower level. In case the lower level has some equality constraints, the growth of the master problem reduces further with the application of the projection test as described in Sect. 3.1.1. Note that via the elimination of \(x^{l,k}\) as suggested in this paper, the number of additionally introduced nonlinearities is reduced compared to the realization from [60]. Also, \(2n_R\) fewer continuous variables and \(4n_R + 4s- 2\) fewer constraints are added to the master problem per iteration of the algorithm.

However, the number of iterations can be higher due to possibly enumerating many or, in the worst case, all integer points of some sets \(Proj _{\left( y^u \right) } \left( P_{\mathrm{lin}} \left( y^{l,k} \right) \right) {\setminus } U^\text {I}\). If such enumeration occurs, only one constraint is added per additional iteration of the algorithm, namely a no-good constraint for the encountered upper-level integer configuration. Thus, unless their number is very high, the impact of these additional iterations on the size and complexity of the master problem is moderate compared to the projection test, implication and optimality package routines for each newly encountered lower-level integer configuration.

4 Computational results

Given a suitable MINLP solver for the master problem, our implementation is able to handle an MINLP on the upper level and a follower problem that is linear in the continuous lower-level variables. To the best of the authors' knowledge, no library of bilevel instances exists in which discrete variables and nonlinearities such as products of upper- and lower-level variables are present on both levels. Therefore, instances of the desired class were created based on the first 10 MIBLP instances from [46].

The original instances miblp_20_15_50_0110_10_1 to miblp_20_15_50_0110_10_10 are all-integer with 5 upper-level variables and 10 lower-level variables each. There are 20 lower-level and no upper-level constraints in each instance; all constraints as well as both objective functions are linear. Notice that, as the main difficulty in solving bilevel problems comes from the structure of the lower level, i.e., the partial construction of its optimal-value function, the absence of upper-level constraints does not impede the representativeness of the computations. However, as the ability of the proposed algorithm to detect infeasible instances has to be tested as well, some instances with upper-level constraints were constructed too.

We also included a nonlinear toy example (Example A.2) in the appendix with constraints and integer variables on both levels, where the behavior of our implementation can be clearly retraced and the solution found can easily be verified.

Computational results given in this section comprise altogether 120 instances derived from [46]. In order to obtain mixed-integer nonlinear bilevel problems, each of the original 10 instances was modified as follows:

  • adding \(m_R \in \{5,10,20\}\) continuous variables to the upper level and redeclaring every second lower-level integer variable as a continuous variable,

  • adding \(m_R \in \{5,10,20\}\) continuous variables to the upper level and redeclaring every fourth lower-level integer variable as a continuous variable,

  • redeclaring every second or, respectively, fourth upper-level integer variable as a continuous variable and redeclaring every second lower-level integer variable as a continuous variable,

as well as adding a bilinear term to both the lower- and upper-level objective functions. Thus, 80 new instances were created, where every modification produced a distinct combination of the numbers of upper- and lower-level integer and continuous variables. In order to stay as close to the original bilevel library instances from [46] as possible, no nonlinearities apart from the above-mentioned terms in the objective functions of both levels were added. If an existing integer variable was redeclared as continuous, it retained all its bounds as well as its coefficients in all constraints and objective functions. An exception is made if an upper-level integer variable is redeclared as a continuous one, in which case its coefficients in the lower-level constraints are set to 0 in order to comply with Assumption 4.

The lower bound for added continuous upper-level variables is set to 0, and their upper bound is set to the maximum of the upper bounds of upper-level integer variables. No constraint coefficients need to be produced while adding continuous upper-level variables, since there are no upper-level constraints in the original instances, and lower-level constraints should not contain upper-level continuous variables due to Assumption 4.

Coefficients for the added continuous upper-level variables in the linear part of the objective functions on both levels are generated by rearranging the corresponding objective function coefficients of the existing discrete upper-level variables. We describe only the construction of the continuous part of the upper-level objective function, as the procedure for the continuous part of the lower-level objective function is exactly the same:

$$\begin{aligned}&m_R = m_Z = 5: \underbrace{(1,2,3,4,5)}_{\text {upper-level obj coefs of}\, y^u} \rightarrow \underbrace{(3,4,5,1,2)}_{\text {upper-level obj coefs of}\, x^u}\\&m_R = 10 > m_Z = 5: \underbrace{(1,2,3,4,5)}_{\text {upper-level obj coefs of}\, y^u}\ \rightarrow \underbrace{(3,4,5,1,2,3,4,5,1,2)}_{\text {upper-level obj coefs of}\, x^u} \end{aligned}$$

Note that the number of added upper-level continuous variables is always a positive integer multiple of \(m_Z = 5\), the number of upper-level integer variables.
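The rearrangement shown above matches a cyclic left shift of the discrete-variable coefficients by two positions, tiled to the required length. Under that reading, which is our own interpretation of the displayed pattern, the construction can be sketched as:

```python
# Our reading of the coefficient rearrangement: cyclic left shift by 2, tiled
# to m_R entries; this helper is illustrative, not code from the paper.

def continuous_obj_coefs(discrete_coefs, m_R, shift=2):
    m_Z = len(discrete_coefs)
    assert m_R % m_Z == 0, "m_R is a positive integer multiple of m_Z"
    rotated = discrete_coefs[shift:] + discrete_coefs[:shift]
    return rotated * (m_R // m_Z)

# Reproduces both displayed examples:
assert continuous_obj_coefs([1, 2, 3, 4, 5], 5) == [3, 4, 5, 1, 2]
assert continuous_obj_coefs([1, 2, 3, 4, 5], 10) == [3, 4, 5, 1, 2, 3, 4, 5, 1, 2]
```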

Regardless of the way original instances have been made mixed-integer, the following bilinear term is added to the upper-level objective function of each instance:

$$\begin{aligned} \frac{1}{ub(x^u)} \left( x^u\right) ^\top B x^l, \end{aligned}$$

with B a matrix comprised of an identity matrix \(I_{\min \{m_R,n_R\}}\) extended with 0-entries to fit the required dimensions, and \(ub(x^u)\) the largest upper bound of all \(x^u\) variables. The lower-level objective function of each instance receives the above bilinear term with a minus sign.
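With \(B\) an identity block padded with zeros, the term reduces to a scaled dot product over the first \(\min\{m_R, n_R\}\) components; the sketch below uses illustrative data of our own.

```python
# Sketch of the added bilinear coupling term: with B = I_{min(m_R, n_R)} padded
# by zeros, (x^u)^T B x^l / ub(x^u) is a truncated, scaled dot product.

def bilinear_term(x_u, x_l, ub_xu):
    k = min(len(x_u), len(x_l))
    return sum(x_u[i] * x_l[i] for i in range(k)) / ub_xu

# x^u has 3 components, x^l has 5: only the first 3 products survive.
assert bilinear_term([1.0, 2.0, 0.0], [4.0, 1.0, 2.0, 9.0, 9.0], ub_xu=10.0) == 0.6
```

The lower-level objective receives the negative of this value, making the bilevel instances genuinely nonlinear while leaving all constraints linear.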

As none of the instances described so far proved to be infeasible, bilevel optimization problems with upper-level constraints were constructed, again based on instances miblp_20_15_50_0110_10_1 to miblp_20_15_50_0110_10_10 from [46]. For each of the original 10 instances, three instances were produced by shifting all but every second, fifth or tenth lower-level constraint to the upper level. No other modifications were done to obtain these 30 new instances, which therefore remain all-integer and linear on both levels.

We used Gurobi 9.0 [26] as MINLP solver, Pyomo 5.6.9 for modeling and CPython 3.7.6 for the implementation of Algorithm 2. Notice that Gurobi allows only products of variables as nonlinearities. To extend Algorithm 2 to more general nonconvex MINLPs, other global nonlinear solvers such as, e.g., SCIP [24] or BARON [47] can be used.

Computations were performed on Xeon E3-1240 v6 CPUs (4 cores, HT disabled, 3.7 GHz base frequency) with 32 GB RAM. Runtimes are stated excluding instance loading from MPS and AUX files as well as their modification, but including big-M calculations. The time limit for each instance is 2 h.

The duality gap measures the maximum relative deviation from optimality of the best feasible solution found. As proposed in [34], we have calculated the gap for a given lower bound LB and upper bound UB by the following formula:

$$\begin{aligned} \text {gap}(\textit{LB},\textit{UB})= \left\{ \begin{array}{ll} 0, &{} \text {if}\quad \textit{LB}=\textit{UB}, \\ \infty , &{}\text {if}\quad \textit{LB}\cdot \textit{UB}\le 0 \quad \text {and not}\quad \textit{LB}=\textit{UB}=0,\\ \frac{\vert \textit{LB}-\textit{UB}\vert }{\min \{\vert \textit{LB}\vert ,\vert \textit{UB} \vert \}}&{} \text {else}. \end{array} \right. \end{aligned}$$
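The gap formula translates directly into code:

```python
import math

# Direct transcription of the gap formula above.
def gap(LB, UB):
    if LB == UB:
        return 0.0
    if LB * UB <= 0:          # bounds of opposite sign, or one of them zero
        return math.inf
    return abs(LB - UB) / min(abs(LB), abs(UB))

assert gap(10.0, 10.0) == 0.0
assert gap(0.0, 0.0) == 0.0          # covered by the first case
assert gap(-1.0, 2.0) == math.inf
assert gap(0.0, 5.0) == math.inf     # LB*UB = 0 and the bounds differ
assert gap(8.0, 10.0) == 0.25        # |8 - 10| / min(8, 10)
```

Note that checking `LB == UB` first makes the LB = UB = 0 exception of the second case automatic.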

The original 10 instances from [46] were solved to optimality, and key characteristics of the runs are listed in Table 1 separately for each instance.

Table 1 Computational results for original ILP instances (without upper-level constraints) miblp_20_15_50_0110_10_1 to miblp_20_15_50_0110_10_10 from [46]

From the 80 MINLP instances, all but 5 were solved to optimality, while the remaining 5 instances had a relative optimality gap under 0.35%. Consolidated statistics of the runs on all 80 MINLP instances are given in Table 2 with the corresponding minimum, maximum, arithmetic mean and standard deviation of the runtime. Detailed data for each individual instance is listed separately in Table 6 in Appendix A.3.

Table 2 Computational results for 80 MINLP instances

From the 30 instances with upper-level constraints, 14 were solved to optimality and 12 were proven infeasible. For 3 of the 4 remaining instances no decision on feasibility could be made, while the last instance was proven feasible but not solved to optimality within the time limit. Consolidated statistics of the runs on these 30 instances are given in Table 3, and the full data for each instance can be found in Table 7 in Appendix A.3.

Each of altogether 120 instances is uniquely identified by the columns instance number, \(m_Z\), \(m_R\), \(n_Z\), \(n_R\), \(r\) and \(s\) of the detailed computational result Tables 1, 6 and 7. The instance number refers to the number of the original instances miblp_20_15_50_0110_10_1 to miblp_20_15_50_0110_10_10 from [46], while \(m_Z\), \(m_R\), \(n_Z\), \(n_R\), \(r\) and \(s\) denote the number of upper-level discrete, upper-level continuous, lower-level discrete and lower-level continuous variables as well as upper- and lower-level constraints, respectively.

Table 3 Computational results for 30 ILP instances with upper-level constraints

From altogether 120 instances, thereof 10 original library ILP, 80 MINLP and 30 ILP with upper-level constraints, only 9 were neither solved to optimality nor proven infeasible within the time limit. In the case of these 9 instances we can observe two possible causes of the algorithm not terminating within the time limit.

First, the number of no-good cuts can be too large, which raises the iteration count to several thousand. This behavior is present in particular in all 5 MINLP instances that were not solved to optimality and in the penultimate ILP instance with upper-level constraints. The exact number of no-good cuts for each instance can be inferred by subtracting the number of optimality packages from the total number of iterations listed in Appendix A.3. However, for all 5 instances the algorithm found a solution with a gap of at most 0.35%, which can be considered an acceptable result for such a challenging problem type.

Second, the number of optimality packages can cause the master problem (MP) to become too hard and thus demand too much time to solve. This is the case with the remaining 3 instances, which are all linear on both levels and with upper-level constraints. Two of these instances acquired more than 30 optimality packages, and in one case the last completed master problem solve took over an hour. See Appendix A.3 for details on each instance.

The mean run time of the algorithm over all 120 instances is 10 min 20 s; the median run time is less than 10 s. More than 90% of the test instances were either solved to optimality or proven infeasible by Algorithm 2 within 2 h, and over 95% of the instances were either solved up to a relative optimality gap under 1% or proven infeasible before hitting the time limit.

5 Algorithm extension for nonlinear follower

For the presentation of the results so far we have assumed the follower problem to be linear for all \(x^u\), \(y^u\). In this section, we describe how to extend Algorithm 2 to the more general setting described by Assumptions 1–4. The pseudo-code description of Algorithm 2 on p. 24 stays exactly the same, but we have to generalize the projection test, implication and optimality package for the nonlinear setting. To do so, we will use the Wolfe dual in place of linear programming duality, which also provides us with the required strong duality statements.

5.1 Exact projection test for nonlinear follower

The general concept behind our exact projection test as described in Sect. 3.1 remains the same, and the adaptation of (PT) to the nonlinear setting is straightforward:

[(PT), nonlinear version: figure]

Note, however, that this is now a convex continuous problem—by Assumption 2—instead of a linear program. Its Wolfe dual is given as follows:

[(PT W-dual): figure]

where the partial derivative \(\nabla _{x^l} g\) exists by Assumption 2. Note that the Lagrangian multipliers for the non-negativity conditions for \(x^{l,k}\) have been eliminated from the formulation already, as well as the slack variable \( t^{k}\).

Eliminating all primal variables from the Wolfe dual is unfortunately not possible in general, but only in special cases, e.g., for problems with linear constraints and a strictly convex quadratic objective [41, Example 12.12]. It requires regularity of the partial derivative of the Lagrangian w.r.t. the primal variables, thus enabling the use of the implicit function theorem in order to consider \(x^{l,k}\) as a function of \(\mu \). Thus, in contrast to (PT dual) for the case of a linear follower problem discussed in Sect. 3.1, primal variable copies are in general needed for (PT W-dual).

The statement of Lemma 3.2 holds also for the nonlinear version of the projection test (PT), since the arguments in its proof do not depend on the linearity of the lower-level constraints. In particular, (PT) is feasible for any fixed \(y^{l,k}\), \(y^u\), and \(x^{l,k}\). Additionally, Assumption 1 guarantees that its optimum is finite and attained by some \((x^{l,k}_0, t^{k}_0)\). Furthermore, (PT) satisfies Slater's condition in case of feasibility due to Assumption 2. Therefore, there exists \(\mu _0 \in \mathbb {R}_+^s\) such that \((x_0^{l,k},\mu _0)\) is optimal for (PT W-dual), i.e., we have strong duality; cf. [56]. Thus we obtain Lemma 3.3 for an accordingly adapted version of (3.4a). Hence, as no-good cuts handling Case 2 from Lemma 3.2 are independent of the lower-level problem class, Remark 3.5 holds for the nonlinear follower problem when (PT W-dual) and the corresponding analogue of (3.4a) are employed.

Note that using the Wolfe dual for solving bilevel problems with nonlinear lower level is not limited to our particular algorithm framework. The Wolfe dual can be employed to obtain single-level reformulations of bilevel optimization problems if, e.g., all variables have finite bounds on their respective levels, and the lower level is convex and satisfies Slater’s condition. So far the majority of the single-level reformulations of bilevel problems in the literature rely on optimality conditions of the lower level expressed either with strong duality in the linear case or the full KKT system in the nonlinear case. In contrast, it seems that using the Wolfe dual in bilevel optimization has been explored very little.
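As a minimal illustration of the Wolfe dual and the strong duality it provides, consider a one-dimensional toy problem of our own (not from the paper): minimize \(x^2\) subject to \(1 - x \le 0\). Its Wolfe dual maximizes the Lagrangian \(x^2 + \mu(1-x)\) over \((x, \mu)\) subject to stationarity \(2x - \mu = 0\) and \(\mu \ge 0\).

```python
# Wolfe dual of: min x^2 s.t. 1 - x <= 0 (optimal x* = 1, value 1).
# Substituting the stationarity condition x = mu/2 into the Lagrangian gives
# the dual objective mu - mu^2/4, to be maximized over mu >= 0.

def wolfe_dual_obj(mu):
    x = mu / 2.0                      # stationarity: d/dx [x^2 + mu*(1 - x)] = 0
    return x * x + mu * (1.0 - x)

# A crude grid search over mu in [0, 5] recovers the dual optimum.
best = max(wolfe_dual_obj(m / 100.0) for m in range(0, 501))
primal_opt = 1.0                      # f(x*) = 1 at x* = 1
assert abs(best - primal_opt) < 1e-9  # strong duality: dual optimum equals 1
```

The convexity and Slater assumptions that make this equality hold in the toy case are exactly the ones Assumption 2 imposes on the follower problem.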

5.2 Projection test with equalities for nonlinear follower

We can implement special handling of equations as discussed in Sect. 3.1.1 also for the more general case. However, the situation is slightly more complicated.

Let \( g\) and \( h\) denote the functions defining the lower-level inequalities and equations, respectively. Note that by convexity due to Assumption 2, any equations must be linear in the continuous follower variables, so we can write \( h(y^u,x^l,y^l)\) as \( h(y^u,x^l,y^l)= h_R(y^u, y^l)x^l+ h_c(y^u, y^l)\) for coefficient functions \( h_R\) and \( h_c\).

We again modify the primal problem (PT) by applying slack variables only to the inequality constraints:

[(PTEq), nonlinear version with slacks on the inequalities only: figure]

The Wolfe dual of this problem is given by

[(PTEq W-dual): figure]

We again face the problem that (PTEq) could—in contrast to (PT)—be infeasible. We have to show that also in this case, (PTEq W-dual) is feasible and admits a solution with objective value \(\le 0\). This would ensure that the strong duality constraint of the form of (3.4b) does not impose any restriction to the master problem if (PTEq) is infeasible. In fact, we will be able to show that (PTEq W-dual) is unbounded in that case, matching Corollary 3.7.

Lemma 5.1

If (PTEq) is infeasible, (PTEq W-dual) is unbounded.

Proof

We can show that (PTEq W-dual) is feasible in a way that is completely analogous to the proof of Lemma 3.6, obtaining a feasible solution of the form \(\left( \bar{x}^{l,k}, \bar{\mu }, \lambda = 0 \right) \). However, this does not directly imply the statement since the Wolfe dual can in general have a finite optimum even if the primal is infeasible [56].

To complete the proof, we will find an unbounded ray of (PTEq W-dual). Consider a version of the primal problem without the inequalities \( g(y^u, x^{l,k}, y^{l,k}) + \left( t^{k}\right) _s\le 0\). It is a linear problem and still infeasible if (PTEq) is. Due to the results in Sect. 3.1.1, its dual

$$\begin{aligned} \begin{aligned} \min _{\lambda } \quad&- {\lambda }^\top h_c(y^u, y^{l,k})\\ \text {s.t.} \quad&h_R(y^u, y^{l,k})^\top \lambda \ge 0^{n_R} \\ \end{aligned} \end{aligned}$$
(PTwithoutIneq dual)

is unbounded. Let \(\bar{\lambda }\) be an unbounded ray, i.e., \(- ({\bar{\lambda }})^\top h_c(y^u, y^{l,k})< 0\) and \( h_R(y^u, y^{l,k})^\top \bar{\lambda }\ge 0\). Then \((x^{l,k}= 0, \mu = 0,\bar{\lambda })\) is an unbounded ray for (PTEq W-dual). In combination with the feasible solution \(\left( \bar{x}^{l,k}, \bar{\mu }, 0 \right) \) for (PTEq W-dual), we obtain feasible points \((\bar{x} ^{l,k},\bar{\mu },\alpha \bar{\lambda })\) with arbitrarily small objective value as \(\alpha \rightarrow \infty \). \(\square \)
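The mechanism of the proof can be illustrated numerically. The following sketch (not part of the paper's implementation, with hypothetical coefficient data for \(h_R\) and \(h_c\)) verifies for a tiny infeasible equation system that the corresponding dual LP is unbounded and exhibits an explicit ray \(\bar{\lambda }\) as in Lemma 5.1:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: equations h_R x + h_c = 0 with x >= 0 read
#   x1 + x2 = -1,  which is clearly infeasible for nonnegative x.
h_R = np.array([[1.0, 1.0]])   # one equation, two continuous variables
h_c = np.array([1.0])

# Primal feasibility check:  h_R x = -h_c, x >= 0  -> infeasible
primal = linprog(c=np.zeros(2), A_eq=h_R, b_eq=-h_c, bounds=(0, None))
assert not primal.success      # no feasible point exists

# Dual (PTwithoutIneq dual):  min -lambda^T h_c  s.t.  h_R^T lambda >= 0
dual = linprog(c=-h_c, A_ub=-h_R.T, b_ub=np.zeros(2), bounds=(None, None))
assert dual.status == 3        # scipy status 3: problem is unbounded

# An explicit unbounded ray, as constructed in the proof:
lam_bar = np.array([1.0])
assert -lam_bar @ h_c < 0            # objective decreases along the ray
assert np.all(h_R.T @ lam_bar >= 0)  # ray stays dual feasible
```

Scaling \(\bar{\lambda }\) by \(\alpha \rightarrow \infty \) then drives the dual objective to \(-\infty \), exactly as used in the lemma.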

Therefore, we also have the result of Lemma 3.8 for the nonlinear case. Note that we may assume \(\bar{x}^{l,k}=0\) in the above proof, which will allow us to reuse the primal variable copies \(x^{l,k}\) in the optimality package for \(y^{l,k}\).

Choosing \(\text {M}_{\mu }\) works similarly to the situation in Sect. 3. For example, we can choose \(\text {M}_{\mu }\) to be equal to the optimal objective function value of the dual of (PTEq) with inequalities only, combined with the HPR constraint set and variables of (BLP). It still holds that this problem always has a finite optimum and admits a feasible solution for (PTEq dual) with \(\lambda = 0\).

5.3 Implications and optimality packages for nonlinear follower

Assumption 2 ensures that we can still express optimality of the follower’s continuous decisions via strong duality. As before, we denote the lower-level dual variables in iteration \(k\) by \(\pi ^{k}\). The optimality package consists of primal and dual feasibility constraints for the lower level as well as the strong duality equation for fixed integer variables \(y^{l,k}\):

$$\begin{aligned} \begin{aligned} f(x^u,y^u,x^l,y^l)&\ge f(x^u, y^u, x^{l,k}, y^{l,k}) \\ g(y^u, x^{l,k}, y^{l,k})&\le 0 \\ \nabla _{x^l} g(y^u, x^{l,k}, y^{l,k})^\top \pi ^{k}&\ge \nabla _{x^l} f(x^u, y^u, x^{l,k}, y^{l,k})^\top \\ \nabla _{x^l} f(x^u, y^u, x^{l,k}, y^{l,k}) x^{l,k}&= -{\pi ^{k}}^\top g(y^u, x^{l,k}, y^{l,k}) +{\pi ^{k}}^\top \nabla _{x^l} g(y^u, x^{l,k}, y^{l,k}) x^{l,k}\\ x^{l,k}&\in \mathbb {R}^{n_R}_+, \pi ^{k}\in \mathbb {R}^{s}_+. \end{aligned} \end{aligned}$$
(5.1)
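The constraints of (5.1) can be checked on a toy instance. The following sketch (hypothetical data, not from the paper: one continuous follower variable, integer variables already fixed) verifies primal feasibility, dual feasibility and the strong duality equation at an optimal follower response:

```python
import numpy as np

# Toy follower problem:  max f(x) = -x^2 + 4x  s.t.  g(x) = x - 1 <= 0,  x >= 0.
# The unconstrained maximizer is x = 2, so the constraint is active at x = 1,
# and the KKT multiplier is pi = 2.
f  = lambda x: -x**2 + 4*x
g  = lambda x: x - 1.0
df = lambda x: -2*x + 4.0   # gradient of f (a row vector, here a scalar)
dg = lambda x: 1.0          # Jacobian of g

x_k, pi_k = 1.0, 2.0        # optimal primal/dual pair

# Primal feasibility:  g(x^{l,k}) <= 0
assert g(x_k) <= 0

# Dual feasibility:  grad_x g^T pi^k >= grad_x f^T
assert dg(x_k) * pi_k >= df(x_k)

# Strong duality equation of (5.1):
#   grad_x f * x^{l,k} = -pi^k g(x^{l,k}) + pi^k grad_x g * x^{l,k}
lhs = df(x_k) * x_k
rhs = -pi_k * g(x_k) + pi_k * dg(x_k) * x_k
assert np.isclose(lhs, rhs)
```

Note that the strong duality equation collapses to complementary slackness here: \(\pi ^{k} g = 0\) because the constraint is active, and the remaining terms cancel by stationarity.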

Note that we consider the gradient \(\nabla _{x^l} f\) to be a row vector. This is for reasons of consistency with the Jacobian matrix \(\nabla _{x^l} g\), which has gradients as its rows. Recall that our formulation needs to implement the following implications:

$$\begin{aligned} \begin{aligned} \left[ \psi ^k= 1 \right] \implies&\textsc {optimality package} \text { for } y^{l,k}\text { is \emph{active}, i.e., (5.1) is added to (MP)} \\ \left[ \psi ^k= 0 \right] \implies&\textsc {optimality package} \text { for } y^{l,k}\text { is \emph{inactive},} \\&\text {i.e., no additional restrictions are imposed on the master problem.} \end{aligned} \end{aligned}$$
(5.2)

In order to do this we again use a big-M formulation, adding (or, respectively, subtracting) a term \((1 - \psi ^k) \text {M}_{\pi }\) to (from) each inequality in (5.1).

Just as described in Sect. 3.2, a sufficiently large \(\text {M}_{\pi }\) can then be found by solving auxiliary problems for the maximal constraint violations under the given bounds for \(x^u, y^u, x^l, y^l\), while the package-specific variable copies \(x^{l,k}\) and \(\pi ^{k}\) are fixed to 0. This way, bounds for the dual variables of the lower level (which in general are not available) are not required for realizing the desired implications.
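The big-M computation can be sketched as follows (a minimal illustration with hypothetical data, continuing the toy follower from above; the functions, bounds and the resulting value of \(\text {M}_{\pi }\) are assumptions for the example, not values from the paper):

```python
import numpy as np

# Goal: find M_pi large enough that each inequality of (5.1) is trivially
# satisfied when psi^k = 0 and the copies x^{l,k}, pi^k are fixed to 0.
# With pi^k = 0, the relaxed dual feasibility inequality reads
#   0 >= grad_x f(x^u, y^u, 0) - M_pi,
# so M_pi must dominate the maximal gradient value over the variable box.

df0 = lambda x_u: 2.0 * x_u + 4.0   # hypothetical grad_x f at x^{l,k} = 0,
                                    # as a function of the upper-level variable
x_u_lo, x_u_hi = 0.0, 10.0          # finite bounds on x^u

# Maximal violation over the box; for this affine example it is attained at a
# bound, whereas in general one solves a small auxiliary problem instead.
M_pi = max(df0(x_u_lo), df0(x_u_hi))
assert M_pi == 24.0

# With psi^k = 0, the relaxed inequality  dg^T pi^k + M_pi >= grad_x f
# then holds everywhere in the box, i.e., it imposes no restriction:
for x_u in np.linspace(x_u_lo, x_u_hi, 101):
    assert 0.0 + M_pi >= df0(x_u)
```

The point of fixing the copies to 0 is visible here: no bounds on \(\pi ^{k}\) enter the auxiliary problem, only the (available) bounds on the original variables.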

With similar arguments as in Lemma 3.12, the optimality package can be simplified to

(5.3)

explicitly enforcing only dual feasibility and exploiting strong duality. Recall that we were able to eliminate all primal variables from (3.11), which, unfortunately, is not possible in general for (5.3). This, however, does not impede the reduction step to (5.3). One should just be aware that \(x^{l,k}\) are not necessarily optimal solutions of the primal follower problem with fixed \(y^l= y^{l,k}\). Note that we can use the same primal variable copies \(x^{l,k}\) as in (PT W-dual). This is because setting \(x^{l,k}\) to any optimal follower response given fixed \(y^{l,k}\) and \(y^u\) will work for both (PT W-dual) and (5.3) at the same time if \(y^u\in Proj _{\left( y^u \right) } P_{\mathrm{lin}} \left( y^{l,k} \right) \), i.e., if the optimality package for \(y^{l,k}\) is supposed to be active. Otherwise, \(x^{l,k}=0\) is always possible in both problems without imposing any relevant implications.

Remark 5.2

Note that the KKT-based tightening that was proposed in [38] and also used in [61] can be incorporated into the algorithm presented in the current paper too. Indeed, any suitable necessary optimality conditions for the lower-level problem can be added to the master problem (MP) in order to obtain a tighter relaxation of the original bilevel problem.

6 Conclusions

In this work, we proposed an exact algorithm for solving problems from the challenging class of mixed-integer nonlinear bilevel optimization problems in which integer variables are present on both levels. Our method is based on recent work of Yue, Gao, Zeng and You [60] and follows the same projection-based scheme described in Algorithm 1. We turned it into an exact method under the additional Assumption 4, which bans continuous upper-level variables from lower-level constraints. In conjunction with the other assumptions, it guarantees that a bilevel optimum is attained if the problem is feasible in the first place, a fact for which our algorithm also provides a constructive proof. Assumption 4 is relatively mild compared to other assumptions with the above-mentioned effect that are commonly made in the literature. The key enhancement of our algorithm is to separately realize implication (3.2) for an open subset \(U\) of the relevant projections, chosen to be as large as possible, and for the remaining boundary cases, as outlined on p. 14.

Furthermore, we extended the algorithm from [60] from a purely linear setting to a more general bilevel problem class allowing nonlinearities. The limiting requirements are given by Assumptions 2 and 3, which essentially ensure that optimality of the continuous follower decisions can be expressed via strong duality and that the HPR with these optimality conditions can be handled by an off-the-shelf solver. The nonlinear version of our method as described in Sect. 5 may be particularly attractive if primal variable copies can be eliminated from the Wolfe dual, though this is not a strict requirement for using it.

Proof-of-concept computational results have been presented for the case in which the lower level is linear in the follower variables, but products of lower- and upper-level variables may be present in the objective functions of both levels. Our implementation therefore covers a problem class for which, to the best of the authors’ knowledge, no solver currently exists. Our method is able to solve many bilevel library instances that have been modified to also include continuous variables and nonlinear terms on both levels (unfortunately, no established library exists for this problem class yet). Still, there are clear limitations in terms of instance size, which is not surprising given the extremely challenging problem class. We consider the experiments quite encouraging, especially regarding the number of optimality packages that still allow the master problem to be solved. Since our framework relies on established MINLP solvers, any performance improvement in the underlying MINLP solver will automatically benefit our approach as well. However, the growth of the master problem due to additional optimality packages is not the only limiting factor. We also observed instances for which optimal solutions of the master problem regularly ended up being boundary cases, i.e., in the relevant projection but not in \(U\). They consequently required a large number of no-good cuts, showing that our modification for exactness in general comes at a price. Moreover, this observation highlights the importance of avoiding such situations and consequently of our equation-handling modifications from Sects. 3.1.1 and 5.2 for solving problems with equations on the lower level.

In future work, further performance gains might be achieved using cutting planes specifically designed for the structure of the master problem as it evolves during the solution process and/or by warmstarting. Perturbing the follower problem could increase the chance of the master problem solution lying in \(U\). However, problem-specific knowledge will be necessary in order not to run into the very same problem that is illustrated in Example 2.9. In preliminary computations, we observed that good bounds on the dual variables can help the solver immensely. While such bounds cannot be given in general, dual variables often have a natural interpretation in applications, which might allow for a suitable estimation.