Adaptive discretization-based algorithms for semi-infinite programs with unbounded variables

The proof of convergence of adaptive discretization-based algorithms for semi-infinite programs (SIPs) usually relies on compact host sets for the upper- and lower-level variables. This assumption is violated in some applications, and we show that indeed convergence problems can arise when discretization-based algorithms are applied to SIPs with unbounded variables. To mitigate these convergence problems, we first examine the underlying assumptions of adaptive discretization-based algorithms. We do this paradigmatically using the lower-bounding procedure of Mitsos [Optimization 60(10–11):1291–1308, 2011], which uses the algorithm proposed by Blankenship and Falk [J Optim Theory Appl 19(2):261–281, 1976]. It is noteworthy that the considered procedure and assumptions are essentially the same in the broad class of adaptive discretization-based algorithms. We give sharper, slightly relaxed, assumptions with which we achieve the same convergence guarantees. We show that the convergence guarantees also hold for certain SIPs with unbounded variables based on these sharpened assumptions. However, these sharpened assumptions may be difficult to prove a priori. For these cases, we propose additional, stricter, assumptions which might be easier to prove and which imply the sharpened assumptions. Using these additional assumptions, we present numerical case studies with unbounded variables. Finally, we review which applications are tractable with the proposed additional assumptions.


Introduction
Adaptive discretization-based algorithms are widely used for the solution of semi-infinite programs (SIPs), generalized semi-infinite programs, and bilevel programs, e.g., Hettich and Kortanek (1993), Winterfeld (2008), Mitsos and Barton (2007), López and Still (2007), Küfer et al (2008). For a recent review of applications and adaptive discretization-based algorithms for SIPs, we refer to the review literature. In this paper, we consider SIPs of the form

min_{x ∈ X} f(x) s.t. ∀ y ∈ Y g(x, y) ≤ 0 (SIP)

with the sets X ⊆ R^{n_x} and Y ⊆ R^{n_y} with |Y| = ∞; the constraint function g : X × Y → R; and the objective function f : X → R with f^* being the optimal objective value. Note that the semi-infinite constraint ∀ y ∈ Y g(x, y) ≤ 0 is often written in the literature as ∀ y ∈ Y : g(x, y) ≤ 0.
Conceptually, adaptive discretization-based algorithms for (SIP) replace the infinite index set Y with a finite one, i.e., Y^k ⊂ Y, with k being the iteration index. The resulting finite program is an approximation of (SIP). Using an adaptive refinement strategy for the finite index set, Y^k ⊂ Y^{k+1} ⊂ ... ⊂ Y, one obtains an improving approximation of (SIP), which usually yields a converging lower bound. Hence, one obtains a lower-bounding procedure.
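The conceptual loop can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the toy SIP, the feasibility tolerance, and the SciPy subproblem solvers (an LP solver for (LBP), a bounded scalar search for (LLP)) are illustrative assumptions; the actual algorithms require global subproblem solutions.

```python
# Minimal sketch of the adaptive discretization lower-bounding loop
# (Blankenship & Falk 1976 / Mitsos 2011) on a toy SIP:
#   min_{x in [-1, 1]} -x   s.t.  g(x, y) = x*y - 0.5 <= 0  for all y in [0, 1].
# The toy problem, tolerance, and SciPy solvers are illustrative choices only.
from scipy.optimize import linprog, minimize_scalar

f = lambda x: -x                  # upper-level objective
g = lambda x, y: x * y - 0.5      # semi-infinite constraint
X_BOUNDS, Y_BOUNDS = (-1.0, 1.0), (0.0, 1.0)
EPS = 1e-4                        # feasibility tolerance eps_a

Yk = []                           # discretization Y^k, here started empty
for k in range(100):
    # (LBP): min f(x) s.t. g(x, y_j) <= 0 for the finitely many y_j in Y^k.
    # For this toy problem (LBP) is a 1-D LP: -x -> min, y_j * x <= 0.5.
    res = linprog(c=[-1.0],
                  A_ub=[[y] for y in Yk] or None,
                  b_ub=[0.5] * len(Yk) or None,
                  bounds=[X_BOUNDS])
    xk, fk = res.x[0], res.fun    # iterate x^k and lower bound f^k
    # (LLP): max_y g(x^k, y), solved here by a bounded scalar search.
    llp = minimize_scalar(lambda y: -g(xk, y), bounds=Y_BOUNDS, method="bounded")
    yk, gmax = llp.x, -llp.fun
    if gmax <= EPS:               # x^k is SIP-(eps_a-)feasible -> done
        break
    Yk.append(yk)                 # refine the discretization: Y^{k+1} = Y^k u {y^k}

print(f"lower bound {fk:.4f} after {k + 1} iterations")
```

For this toy problem the loop terminates after a couple of iterations with a lower bound of -0.5, which is the SIP optimum.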
In this paper, we paradigmatically consider the lower-bounding procedure of Mitsos (2011). Mitsos applies a convergent lower-bounding procedure as well as proposes a convergent upper-bounding procedure to compute in finite time a feasible point of (SIP) with a certificate of ε_f-optimality. The upper-bounding procedure of Mitsos is a slight adaptation of the lower-bounding procedure. The lower-bounding procedure in turn is equivalent to the algorithm by Blankenship and Falk (1976), which can be considered the main representative of adaptive discretization-based algorithms. Moreover, many adaptive discretization-based algorithms for (generalized) semi-infinite programs and bilevel programs have identical predecessors, i.e., the previously mentioned Blankenship and Falk (1976), which itself is based on Remez (1962). These algorithms are conceptually closely related to, e.g., Reemtsen (1991), Still (1999, 2001), Stein (2003), Mitsos et al (2008a), Guerra Vázquez et al (2008), Tsoukalas and Rustem (2011), Mitsos and Tsoukalas (2015), Djelassi and Mitsos (2017, 2021), Seidel and Küfer (2020), Djelassi et al (2019), Schwientek et al (2021). Therefore, and also based on our additional findings, cf. Appendix B, we expect that the results within this paper, i.e., the convergence guarantees under the sharper, slightly relaxed, assumptions and the recovered convergence guarantees in the case of unbounded variables, carry over directly from one algorithm to the others.
The proof of convergence of algorithms for the global solution of SIPs often relies, among other assumptions, on compact host sets. This is true both for discretization-based and other methods as well as for local methods allowing nonconvex lower-level problems (Bhattacharjee et al 2005a, b; Floudas and Stein 2008; Mitsos et al 2008b). Another popular assumption in the literature is that of a compact lower-level set (Reemtsen and Görner 1998). This assumption is fulfilled if (SIP) is feasible, the host sets are compact, and the functions f and g are continuous. The advantage of this assumption is that it already allows special cases of SIPs with unbounded sets X. However, unbounded sets Y are not covered. Typically, the upper- and lower-level variables x and y have a technical or physical meaning and thus, in most cases, inherit finite bounds from their physical or technical origin. Furthermore, these finite bounds are usually attainable, yielding closed and therefore compact upper- and lower-level host sets X and Y.
However, one might not know the finite bounds of the variables. Furthermore, SIPs stemming from specific applications or reformulations may exhibit unbounded upper- and/or lower-level variables, and hence unbounded upper- and/or lower-level host sets. In the following, we give examples of problem classes where SIPs with unbounded host sets can arise. Note that in some of the applications, finite bounds of the variables may be computed or generated by additional assumptions. In most of the following examples, (arbitrary) bounds are usually used in practice. An unbounded lower-level set Y may occur, e.g., in approximation theory, where one wants to approximate a function with a minimal estimation error not only over a compact set, but over all of R^n (Chebyshev approximation). An unbounded upper-level host set X occurs, e.g., in design centering, in epigraph reformulations of min-max programs, or in approximation theory, e.g., the classical Chebyshev problem, where the parameter values are unbounded, or reverse Chebyshev approximation, where the approximation error is fixed and the region where the approximation is not worse than the fixed approximation error is computed (Still 1999; Guerra Vázquez et al 2008).
In order to address such applications, we investigate whether adaptive discretization-based algorithms are directly applicable to SIPs with unbounded host sets, i.e., whether the assumption of compact host sets can be relaxed. By relaxing the assumption of compact host sets, we also consider SIPs with bounded but noncompact host sets, e.g., host sets consisting of half-open or open intervals. For cases where the assumptions are not directly applicable, we derive additional assumptions, which are possibly easier to prove, to enable the application. In Sect. 2, we first introduce the basic notation used throughout this paper and review the assumptions in Mitsos (2011). Second, we prove convergence of the lower-bounding procedure. In the proof, we use sharper and slightly relaxed assumptions compared to Mitsos (2011), which in turn are already relaxed compared to Blankenship and Falk (1976). We show that our relaxed assumptions are implied by the original assumptions of Mitsos (2011). Section 3 shows that the lower-bounding procedure may exhibit convergence problems if the host sets are not compact. In Sect. 4, we give additional assumptions to apply the lower-bounding procedure to SIPs with noncompact and unbounded host sets. In Sect. 5, we present two case studies as a proof-of-concept for our findings. Finally, we give a conclusion and outlook in Sect. 6.

Preliminaries
In this section, we briefly review the notation, formulation, definitions, algorithm description, and assumptions of the lower-bounding procedure in Mitsos (2011). Note that this procedure can be seen as paradigmatic for the class of adaptive discretization-based algorithms.

Notation, formulation, definitions and algorithm description
The iterative lower-bounding procedure is illustrated in Fig. 1. At the start of the procedure, the iteration index k is set k ← 0 and an (arbitrary) finite set Y^0 ⊂ Y is chosen. Then the discrete lower-bounding problem (LBP) is solved.
Definition 1 (Discrete lower-bounding problem) The discrete lower-bounding problem (LBP) is

f^k = min_{x ∈ X} f(x) s.t. ∀ y ∈ Y^k g(x, y) ≤ 0. (LBP)

Due to Y^k ⊂ Y, (LBP) is a relaxation of (SIP) and it holds f^k ≤ f^*. For the optimal solutions of (LBP), we use the following notation.

Notation 1 (Optimal solution of (LBP)) Assuming the optimal solution of (LBP) exists in all iterations, we denote the optimal solution point by x^k and the sequence of optimal solutions by (x^k)_{k=0}^m = x^0, x^1, x^2, ..., x^m. To simplify the notation, we omit index ranges wherever possible, i.e., (x^k).

If (LBP) is determined unbounded, then by Assumption 4 (to follow), (SIP) is also unbounded. If (LBP) is determined infeasible, (SIP) is also infeasible. Otherwise, the lower-level problem (LLP) is solved to determine SIP-feasibility of the current iterate.

Definition 2 (Lower-level problem) The lower-level problem (LLP) for fixed upper-level variables x^k is

sup_{y ∈ Y} g(x^k, y). (LLP)

The iterate x^k is SIP-feasible if sup_{y ∈ Y} g(x^k, y) ≤ 0. Note that according to the later used Assumption 3, (LLP) may only be solved approximately. For the optimal and approximate solutions of (LLP), we use the following notation.

Notation 2 (Optimal solution of (LLP)) Assuming the optimal solution of (LLP) exists in all iterations, we denote the optimal solution point by y^{*,k} and the sequence of optimal solutions by (y^{*,k})_{k=0}^m = y^{*,0}, y^{*,1}, y^{*,2}, ..., y^{*,m}. To simplify the notation, we omit index ranges wherever possible, i.e., (y^{*,k}).

Notation 3 (Approximate solution of (LLP)) Assuming the approximate solution of (LLP) exists in all iterations, we denote the approximate solution point by y^k and the sequence of approximate solutions by (y^k)_{k=0}^m = y^0, y^1, y^2, ..., y^m. To simplify the notation, we omit index ranges wherever possible, i.e., (y^k).

Fig. 1 Algorithm flowchart of the adaptive discretization-based lower-bounding procedure (Blankenship and Falk 1976; Mitsos 2011)

To ease notation, we also use Notations 1 to 3 for infinite sequences. Note that this is an abuse of notation, as in these cases no maximum iteration index m exists.
When (LLP) is solved approximately according to the later used Assumption 3, it is either determined that g(x^k, y^{*,k}) ≤ 0, or an approximate solution point y^k is furnished which fulfills g(x^k, y^k) ≥ α · g(x^k, y^{*,k}) for some α ∈ (0, 1]. As finite termination at a feasible point is not guaranteed, in practice a feasibility tolerance ε_a > 0 is usually introduced as a termination criterion. In this case, the algorithm may terminate with an SIP-ε_a-feasible point, which satisfies g(x^k, y^{*,k}) ≤ ε_a. If x^k is not SIP-(ε_a-)feasible, the approximate solution of (LLP) is used to populate the discretization set Y^k for subsequent iterations.
A feasibility tolerance ε a > 0 is used in the numerical case studies in Sect. 5. However, note that in the theoretical considerations, no feasibility tolerance is introduced, or equivalently the feasibility tolerance is set to zero, i.e., ε a = 0. Therefore, in the following theoretical analysis, especially in the proof of convergence in Sect. 2.3, one does not terminate early, but considers the full (infinite) sequence of iterates.
The following is a summary of additional notation and the definition of compact sets used in this work.
Notation 4 (Feasible set) For the set of all feasible points in the host set of (SIP), we use the notation X_feas := {x ∈ X : ∀ y ∈ Y g(x, y) ≤ 0}.

Notation 5 (Infeasible set) For the set of all infeasible points in the host set of (SIP), we use the notation X_infeas := X \ X_feas.

Assumptions
The existing global discretization-based algorithms use a global solver to compute the subproblems (LBP) and (LLP), which are (mixed-integer) (non)linear problems ((MI)(N)LPs). By considering SIPs with unbounded host sets, we obviously inherit the need for global solvers that can handle optimization problems with unbounded host sets. In the case of linear programs, this does not pose a problem as, e.g., the simplex method can handle unbounded host sets (Nocedal and Wright 1999). In the more general case of (MI)NLPs, e.g., BARON is able to treat some problems with unbounded variables systematically by trying to compute appropriate bounds from problem constraints (Khajavirad and Sahinidis 2018), but substantial theoretical and practical challenges remain. In what follows, we focus on the convergence properties of the SIP algorithm for unbounded host sets and do not discuss these challenges. The presented lower-bounding procedure in Mitsos (2011), which is equivalent to the procedure proposed by Blankenship and Falk (1976), relies on the following assumptions (cf. Lemma 2.2 in Mitsos (2011), revised in Lemma 2 in Harwood et al (2021)).

Assumption 1 (Compactness of sets) The sets X ⊂ R^{n_x} and Y ⊂ R^{n_y} are compact.
Assumption 2 (Continuous functions) The functions f and g are continuous on X and X × Y, respectively.
Assumption 3 (Approximate solution of (LLP)) At each iteration k, (LLP) is solved approximately for the solution x^k of the lower-bounding problem, either establishing sup_{y ∈ Y} g(x^k, y) ≤ 0, or furnishing a point y^k such that g(x^k, y^k) ≥ α g(x^k, y^{*,k}) > 0, with α ∈ (0, 1] being constant over all iterations.

Assumption 3 is relaxed compared to the assumption in Blankenship and Falk (1976), where the exact solution of (LLP) is assumed. It is also slightly relaxed compared to Harwood et al (2021), wherein α is restricted to (0, 1). However, as will be shown below, the problems associated with unbounded host sets persist even if (LLP) is solved exactly.

Proof of convergence of the lower-bounding procedure
The proof presented in this paper relies on slightly relaxed assumptions compared to those made by Mitsos (2011) and Reemtsen and Görner (1998). Basically, we split the assumptions made by Mitsos (2011), and some properties which result from them, into multiple sharpened assumptions. Additionally, we use the idea of level sets of Reemtsen and Görner (1998). These sharpened assumptions are often challenging to prove a priori. For such cases, we present in Sect. 4 additional, stricter, assumptions for SIPs with unbounded host sets, which might be easier to prove and which imply the sharpened assumptions.
Assumption 1 is relaxed to the following assumption on the set

S^0 := {x ∈ X_infeas : f(x) ≤ f^* and ∀ y ∈ Y^0 g(x, y) ≤ 0},

i.e., the set of points that are super-optimal and feasible in the initial discretization Y^0 but infeasible in (SIP), where f^* is the optimal objective value of (SIP).

Assumption 4 (Compact excluded set) It holds (a) cl(S^0) ⊆ X and (b) the set S^0 is bounded.
Remark 1 Assumption 4 builds on the consideration that the algorithm must only exclude points which
• are super-optimal, i.e., belong to the lower level set {x ∈ X : f(x) ≤ f^*},
• are feasible within the initially chosen discretization Y^0, and
• belong to X_infeas.

If an iterate does not belong to this set of points, it is feasible and hence optimal. Note that if (SIP) is infeasible, we have by convention f^* = +∞. Assumption 4 allows for unbounded host sets and also for bounded but not closed host sets. Since the assumption is difficult to verify a priori, we give in Sect. 4 examples of stronger assumptions and checks which can be performed instead. Uniform continuity of f and g on X and X × Y, respectively, follows from Assumptions 1 and 2. In the following, we relax these two assumptions.
Assumption 5 The function f is lower semi-continuous at all x ∈ ∂S^0, the boundary of the set S^0 from Assumption 4.

Assumption 6 It holds ∀ε > 0 ∃δ > 0 : ∀x^k ∈ (x^k), ∀x ∈ cl(S^0) with ||x^k − x|| < δ: |g(x^k, y^k) − g(x, y^k)| < ε, with S^0 from Assumption 4 and y^k being the approximate (LLP) solutions from Assumption 3.
We first prove that the proposed assumptions are indeed relaxed compared to the original assumptions, i.e., the latter imply the former.
Lemma 1 Assumptions 4 to 6 hold if Assumptions 1 and 2 are satisfied.
Proof First, we show that Assumption 1 implies Assumption 4. X is compact according to Assumption 1. Since S^0 ⊆ X and X is closed, we directly get cl(S^0) ⊆ X. X is bounded by compactness and, therefore, S^0 is also bounded. Second, according to Assumption 2, f is continuous on X and hence lower semi-continuous on ∂S^0 ⊆ X, i.e., Assumption 5 holds.
Third, from Assumptions 1 and 2 follows uniform continuity of g on X × Y, i.e., ∀ε > 0 ∃δ > 0 : ∀x, x' ∈ X, ∀y ∈ Y with ||x − x'|| < δ: |g(x, y) − g(x', y)| < ε. Then, Assumption 6 also holds. □

Next, we prove convergence of the lower-bounding procedure using Assumption 3 and the relaxed Assumptions 4 to 6. Recall that no feasibility tolerance ε_a is introduced and, hence, the full (infinite) sequence of iterates is considered.
Theorem 1 If Assumptions 3 to 6 are satisfied, the adaptive discretization-based lower-bounding procedure in Mitsos (2011) terminates finitely with the optimal objective value or converges to the optimal objective value, i.e., f^k → f^* for k → ∞. If the SIP is infeasible or unbounded, the lower-bounding procedure terminates finitely with proof of infeasibility or unboundedness, respectively.
Proof We first show that we move away from any compact set of infeasible points within finitely many iterations. Second, we consider the case of an infeasible SIP. We show that the algorithm terminates finitely with proof of infeasibility. Third, we consider the case of a feasible SIP and show that the algorithm terminates finitely with a globally optimal solution, or the algorithm produces the optimal solution in the limit.
1. Consider a compact set of infeasible points K_infeas ⊆ cl(S^0), with S^0 from Assumption 4. In the following, we restrict the analysis to iterations whose iterates lie in this set, i.e., x^k ∈ K_infeas. Recall that g(x^k, y^{*,k}) ≤ 0 would imply that x^k is feasible, so x^k would not be a member of K_infeas and we would have left K_infeas. Due to Assumption 3, we have

g(x^k, y^k) ≥ α g(x^k, y^{*,k}) > 0. (8)

Since Assumption 6 holds for all ε > 0, it holds in particular for some ε_1 ∈ (0, α g(x^k, y^{*,k})), furnishing a corresponding δ_1 > 0. For x ∈ K_infeas with ||x^k − x|| < δ_1, we obtain from the deduction in (8) the two cases

Case 1: g(x^k, y^k) − g(x, y^k) ≥ 0: then g(x, y^k) > g(x^k, y^k) − ε_1 ≥ α g(x^k, y^{*,k}) − ε_1 > 0.
Case 2: g(x^k, y^k) − g(x, y^k) < 0: then g(x, y^k) > g(x^k, y^k) > 0.

Therefore, it holds g(x, y^k) > 0 for all x ∈ N_{δ_1}(x^k) ∩ K_infeas. Thus, with each iteration, the open neighborhood N_{δ_1}(x^k) ∩ K_infeas is infeasible for the following iterations (and therefore we do not (re)visit points in this neighborhood). Since the excluded neighborhoods cannot be revisited and K_infeas is compact, after at most finitely many iterations these neighborhoods form a finite cover of K_infeas. Therefore, we only need a finite number of iterations until we have covered K_infeas (cf. Definition 3), i.e., we prove infeasibility of all points in K_infeas after a finite number of iterations.

2. Consider now an infeasible SIP. cl(S^0) is compact by Assumption 4, as S^0 is bounded. The proof of finite termination with a proof of infeasibility follows directly from case 1 above with K_infeas = cl(S^0). It follows that (LBP) becomes infeasible after a finite number of iterations.

3. Finally, consider a feasible SIP.
(a) If (SIP) is unbounded, this is determined in the first iteration: (LBP) is a valid relaxation of (SIP), i.e., f^k ≤ f^*. As (SIP) is by assumption unbounded, (LBP) is unbounded in the first iteration as well, i.e., f^0 = −∞. By Assumption 4, the unbounded solution does not belong to X_infeas and is hence feasible.

(b) If the optimal objective value is finite, i.e., f^* < ∞, we show by contradiction that a feasible and optimal point is generated in the limit. By Assumption 4, the iterates remain in the compact set cl(S^0) ⊆ X unless the algorithm terminates finitely with an optimal point. By compactness, we can choose an infinite subsequence of (x^k) that converges to x̄ ∈ cl(S^0). Now, assume the limit point is infeasible, x̄ ∈ X_infeas. Then there exists a compact set K_infeas ⊆ cl(S^0) with x̄ ∈ K_infeas. By the proof of case 1 above, we move away from any compact infeasible set K_infeas within finitely many iterations. Therefore, the infeasible point x̄ cannot be a limit point, which gives the desired contradiction; the limit point is feasible. It remains to show that the feasible limit point is optimal. (LBP) is a valid relaxation of (SIP); hence, we have f^k ≤ f^*. The limit point x̄ is feasible, i.e., f(x̄) ≥ f^*, and lies on the boundary ∂S^0. With lower semi-continuity of f at all x ∈ ∂S^0 (Assumption 5) and f^k ≤ f^*, it follows f(x̄) = f^*, and thus f^k → f^*. □
Following the proof of Lemma 1 and Theorem 1, we can also prove that the lower-bounding procedure in Mitsos (2011) converges in the infeasible case under the original Assumptions 1 to 3. The proof is also applicable, with slight changes, for a constraint function g that is lower semi-continuous on X for all y ∈ Y. This property might be of interest in cases where the constraint function resembles the solution of an embedded optimization problem. The reader may refer to Appendix A.1 for the corresponding proposition and proof.
The upper-bounding procedure in Mitsos (2011) is conceptually similar to the lower-bounding procedure. The proof of convergence and the slightly changed assumptions for the case of unbounded host sets of the upper-bounding procedure are shown in Appendix B.3. This further supports our claim that the results of our work can be transferred directly to other conceptually similar procedures.

Illustrative examples of SIPs with unbounded host sets
We first show examples where Assumptions 3 to 6 do not hold and discuss how the lower-bounding procedure may fail. All examples follow the same pattern. They contain an upper- or lower-level variable that (i) has an unbounded host set and (ii) can be chosen arbitrarily in the sense that the variable does not affect the (LBP) or (LLP) objective, respectively. Using this property, we can choose a sequence of points that generates arbitrarily weak discretization cuts, thus violating Assumption 6. This leads to convergence to an infeasible point in the limit with no proof of infeasibility.

Illustrative example for X compact and Y unbounded
Consider the SIP (E1). Note that the corresponding (LLP) has infinitely many solutions. The optimal solutions of (LLP) are y^*(x) = (ỹ, x)^T or y^*(x) = (0, ỹ)^T with arbitrary ỹ ∈ R. We have g(x, y^*) = 1 > 0 for all x ∈ X. Therefore, (E1) is infeasible. Now, we show that the lower-bounding procedure may converge to an infeasible point. We start with an empty discretization Y^0 = ∅. The optimal solution of (LBP) is x^0 = 2 with an optimal objective value of f^0 = −2. Then, y^{*,0} = (2, 2)^T is a globally optimal solution of (LLP). Using y^{*,k} = (2^{k+1}, x^k)^T for all subsequent iterations and considering the (LBP) objective, the last introduced discretization point determines the optimal solution in the next iteration; see also Fig. 2 for a graphical illustration. Therefore, we have for any iteration k ≥ 1

x^k = x^0 − Σ_{i=1}^k 1/2^i.

With Σ_{i=1}^k 1/2^i being the geometric series, we obtain lim_{k→∞} (x^0 − Σ_{i=1}^k 1/2^i) = 1 with an objective value of lim_{k→∞} f^k = −1. Therefore, the lower-bounding procedure converges to an infeasible point in the limit, and no proof of infeasibility is given within finite time.
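The adversarial behavior of (E1) can be reproduced numerically. The sketch below iterates the closed-form recursion x^{k+1} = x^k − 1/2^{k+1} directly, rather than solving the (LBP) and (LLP) subproblems:

```python
# Reproduce the sequence x^k = x^0 - sum_{i=1}^k 1/2^i from example (E1).
# We iterate the closed-form recursion instead of solving the (LBP)/(LLP)
# subproblems; x^0 = 2 as in the text.
x = 2.0
for k in range(1, 60):
    x -= 0.5 ** k          # adversarial LLP cuts shave off 1/2^k each iteration
print(round(x, 12))        # -> 1.0: the infeasible limit point
```

Every iterate stays strictly greater than 1, the lower bound f^k = −x^k converges to −1, yet each iterate is infeasible and infeasibility is never proven.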
The same convergence issues arise when we consider an SIP with X compact and Y bounded but not closed, cf. Appendix A.2.1. For the sake of simplicity, we considered the infeasible SIP (E1). Recall that Mitsos (2011) and Reemtsen and Görner (1998) exclude infeasible SIPs. Therefore, (E1) would not be considered (assuming the extension to unbounded host sets). The reader may refer to Appendix A.2.2 for a feasible but conceptually equivalent SIP that exhibits the same convergence issues.
We showed with (E1) that in certain cases the lower-bounding procedure can converge to an infeasible point. Similarly, the upper-bounding procedure in Mitsos (2011) may never produce an SIP-feasible point, supporting our claim that our results carry over to other related adaptive discretization-based procedures. For a detailed example, the reader may refer to Appendix B.2.

Illustrative example for X unbounded and Y compact
Consider the SIP (E3). The optimal value function of the corresponding (LLP) is g(x, y^*) = 1 > 0 for all x ∈ X, with the optimal solution y^*(x) = x_2. Therefore, (E3) is infeasible. Note that the objective function only depends on x_2 and is constant in the interval x_2 ∈ [1, 2]. Again, we show below that the lower-bounding procedure can converge to an infeasible point in the limit. First, start with an empty discretization set Y^0 = ∅. For iteration k = 0, choose x^0 = (4, 2)^T as the optimal solution, with an objective value of f^0 = −1. For the corresponding (LLP), we have y^{*,0} = 2. Using for iteration k ≥ 1 and all following iterations x_1^k = 2^{k+2} and considering the (LBP) objective, we can choose x_2^k = 2 − Σ_{i=1}^k 1/2^i; see Fig. 3 for a graphical illustration.

With Σ_{i=1}^k 1/2^i being the geometric series, we obtain lim_{k→∞} x_2^k = lim_{k→∞} (x_2^0 − Σ_{i=1}^k 1/2^i) = 1 with an objective value of lim_{k→∞} f^k = −1. Therefore, the lower-bounding procedure does not prove infeasibility of (E3) within finite time.
Again, for the sake of simplicity, an infeasible SIP with a non-differentiable function f has been considered. Similar adaptations as in Appendix A.2.2 can be made to obtain a feasible SIP with n-times differentiable functions, which exhibits the same convergence issues. Furthermore, similar adaptations as in Appendix A.2.1 can be made to obtain an SIP with compact Y and bounded but non-compact X, which exhibits the same convergence issues.

Examples of assumptions that are easier to prove than Assumptions 4 to 6
As already shown in Sect. 2, the assumptions often encountered in the literature for SIPs imply our slightly relaxed assumptions. However, for SIPs with unbounded host sets, the former assumptions do not hold. In these cases, it is often not obvious whether the relaxed Assumptions 4 to 6 hold. This section discusses additional (stricter) assumptions and criteria for different cases of SIPs with unbounded host sets, which can often be checked a priori or during run-time to enable application.

Case 1: X unbounded and Y compact
First, note that if the lower-bounding procedure produces a point outside the set S^0 from Assumption 4, the algorithm terminates (Remark 1). Hence, we restrict our discussion to the following cases:

1.1 S^0 is unbounded,
1.2 S^0 is bounded.

Feasible SIPs can belong to case 1.1 or 1.2. Infeasible SIPs usually belong to case 1.1 if the initial discretization does not directly lead to an infeasible (LBP): in that case, it follows from X unbounded and X_feas = ∅ that S^0 is unbounded. In the general case 1.1, the existence of a finite cover of the set cl(S^0) cannot be guaranteed. Even if we exclude an open neighborhood of points whose size tends to infinity in each iteration, we cannot guarantee convergence. A different perspective is offered by Reemtsen and Görner (1998), who note that due to unboundedness the proof of convergence fails, as no limit point of (x^k) may exist. As a counterexample, revisit (E3) and replace g with the piecewise-defined function in Eq. (12). In this case, the open ball we exclude in each iteration is unbounded. However, the same convergence issues arise when using the same sequence of (LLP) solutions as in (E3). The function g in Eq. (12) is continuous but not differentiable on X × Y. Similar to Remark 2, we can obtain a function belonging to differentiability class C^n that produces the same convergence issues.

Therefore, we concentrate in the following on case 1.2, i.e., S^0 is bounded. We discuss stricter assumptions and checks which can often be applied to verify that case 1.2 holds irrespective of X being unbounded. In detail, we discuss the cases

1.2.1 X_feas ≠ ∅ and f is continuous and coercive on X, i.e., lim_{||x||→+∞} f(x) = +∞;

1.2.2 (SIP) stems from the min-max program

min_{x ∈ X} max_{y ∈ Y} q(x, y)

with q : X × Y → R and q being coercive in x for all y. In this case, we consider in the following the epigraph reformulation of this min-max program, which reads

min_{x ∈ X, μ ∈ R} μ s.t. ∀ y ∈ Y q(x, y) − μ ≤ 0; (14)

1.2.3 additional computational checks are performed to verify Assumption 4.
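To illustrate reformulation (14): for a finite discretization of Y, the epigraph program becomes an ordinary NLP in (x, μ), with one constraint per discretization point. The toy min-max problem and the SciPy solver below are illustrative assumptions, not part of the paper's setup:

```python
# Epigraph reformulation (14) of a toy min-max program
#   min_x max_{y in Y} q(x, y),  q(x, y) = (x - y)^2,  Y = {0, 1} (finite for brevity).
# Variables are (x, mu); each y in Y contributes one constraint q(x, y) - mu <= 0.
from scipy.optimize import minimize

Y = [0.0, 1.0]
res = minimize(
    lambda z: z[1],                       # objective: mu
    x0=[0.0, 10.0],
    constraints=[{"type": "ineq", "fun": lambda z, y=y: z[1] - (z[0] - y) ** 2}
                 for y in Y])             # mu - q(x, y) >= 0
x_opt, mu_opt = res.x
print(round(x_opt, 3), round(mu_opt, 3))  # -> approximately x = 0.5, mu = 0.25
```

At the optimum both constraints are active (equioscillation between y = 0 and y = 1), which is exactly the structure the discretization-based lower-bounding procedure exploits.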

Case 1.2.1: f is continuous and coercive on X and X_feas ≠ ∅
If f is continuous and coercive on X and X_feas ≠ ∅, we can prove convergence of the lower-bounding procedure.

Assumption 7 The set Y ⊂ R^{n_y} is compact and cl(X_infeas) ⊆ X ⊆ R^{n_x}. The function f is continuous and coercive on X and X_feas ≠ ∅.

Proposition 1 Under Assumptions 2, 3 and 7, the lower-bounding procedure converges.

Proof Because f is coercive on X and X_feas ≠ ∅, we have f^* < +∞ and the level set {x ∈ X : f(x) ≤ f^*} is bounded. Hence, the set S^0 from Assumption 4 is also bounded, irrespective of the initial discretization Y^0. From boundedness of S^0, S^0 ⊆ X_infeas, and cl(X_infeas) ⊆ X (Assumption 7) follows cl(S^0) ⊆ X. First, Assumption 4 is therefore satisfied. Second, Assumption 5 follows from Assumption 7, as f is continuous. Third, from continuity of g (Assumption 2) and compactness of cl(S^0) and Y follows uniform continuity of g on cl(S^0) × Y, and Assumption 6 is satisfied. By Theorem 1, the lower-bounding procedure converges. □
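The practical content of Proposition 1 is that coercivity confines all super-optimal points to a bounded region. A minimal sketch, assuming the toy objective f(x) = ||x||^2 − 1 and a known objective value f_ub of some feasible point (both hypothetical):

```python
# For a coercive objective, any feasible-point objective value f_ub yields a
# finite radius R with f(x) > f_ub for ||x|| > R, so all super-optimal points
# (f(x) <= f* <= f_ub) lie in the ball of radius R.  Toy: f(x) = ||x||^2 - 1.
import math

def radius_bound(f_ub):
    # f(x) = ||x||^2 - 1 > f_ub  whenever  ||x|| > sqrt(f_ub + 1)
    return math.sqrt(f_ub + 1.0)

R = radius_bound(3.0)   # e.g., a feasible point with objective value 3
print(R)                # -> 2.0: the level set {f <= 3} lies in ||x|| <= 2
```

Any such radius can then serve as a valid finite bound for the upper-level variables in (LBP).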

Case 1.2.2: SIP has the form of an epigraph reformulation

In the special case that (SIP) is the epigraph reformulation of a min-max problem with a coercive objective function, we can show that Assumption 4 holds irrespective of the unboundedness of X.
Assumption 8 (SIP) has the form of (14), q is continuous and coercive in x for all y, and Y is compact.

Proposition 2 Under Assumptions 3 and 8, the lower-bounding procedure converges.
Proof From continuity and coercivity of q in x for all y, it follows that the optimal point x^* of (14) exists and the optimal objective value μ^* = max_{y ∈ Y} q(x^*, y) is finite and attained. It also follows that there exists an M such that for all y and all x with ||x|| > M it holds q(x, y) ≥ μ^*. Because of the epigraph form of (14), the construction of (LBP), and the global solution of the subproblems, we also have for all k, l with k > l: μ^* ≥ q(x^k, y^l). Combining these results, there exists a compact set K with (x^k) ⊆ K ⊂ X and x^* ∈ K.

Using K instead of X in (14), we obtain a new SIP, denoted (SIP'). From compactness of K follows that Assumption 4 is satisfied. From Assumption 8 follows Assumption 5. From continuity of q and compactness of K and Y follows Assumption 6. By Theorem 1, the lower-bounding procedure converges for (SIP').

It remains to show that the solutions of (SIP') and of the original SIP are equivalent. (SIP') is a restriction of (14) because K ⊂ X. Since the optimal point of (14) is also feasible in (SIP'), the optimal solutions are equivalent. □

Case 1.2.3: Additional computational checks
In some instances, one can show analytically that Assumption 4 is satisfied by proving that lim_{k→∞} sup_{y ∈ Y} g(x^k, y) < 0 holds for any sequence (x^k) with ||x^k|| → ∞ for k → ∞. In some cases, this can also be done computationally by solving the two problems (15) and (16) a priori. The computational burden is in most cases tractable because the problems are adaptations of (LLP). However, one must assume that g is continuous on cl(X) × Y.

If x^{infeas,UBD} is bounded, the set S^0 from Assumption 4 is also bounded, i.e., Assumption 4(b) holds. We then verify whether Assumption 4(a) holds: if the optimal objective value of (16) is greater than zero, there exist no points that belong to cl(X_infeas) but not to X. Because cl(X_infeas) ⊇ cl(S^0), Assumption 4(a) then holds.
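Problems (15) and (16) are not reproduced here. The following sketch shows one plausible instantiation of such an a-priori check on toy data, computing an upper bound on the magnitude of candidate infeasible points; the function g, the host sets, and the one-sided search direction are all illustrative assumptions:

```python
# Bound the norm of candidate infeasible points by solving
#   max x  s.t.  g(x, y) >= 0 for some y in Y
# (one side of a norm bound; by symmetry of this toy g the other side is equal).
# Toy data: g(x, y) = y - x^2, Y = [0, 1]; infeasible points satisfy |x| < 1.
from scipy.optimize import minimize

g = lambda x, y: y - x * x
res = minimize(lambda z: -z[0],                      # maximize x
               x0=[0.5, 0.5],
               bounds=[(None, None), (0.0, 1.0)],    # x free, y in [0, 1]
               constraints=[{"type": "ineq", "fun": lambda z: g(z[0], z[1])}])
x_infeas_ubd = res.x[0]
print(round(x_infeas_ubd, 3))   # -> 1.0: infeasible points are confined to |x| <= 1
```

A finite optimal value certifies boundedness of the infeasible region, which is the essence of verifying Assumption 4(b) computationally.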

Case 2: X compact and Y unbounded
If one of the following assumptions holds instead of Assumption 1, no convergence issues occur.

Assumption 9 The set X ⊂ R^{n_x} is compact and it holds ∀ε > 0 ∃δ_2 > 0 : ∀x^k ∈ (x^k), ∀x ∈ X with ||x^k − x|| < δ_2: |g(x^k, y^k) − g(x, y^k)| < ε.

Assumption 10 The solution y^k of (LLP) exists for all iterates x^k, and the sequence (y^k) does not diverge, i.e., lim sup_{k→∞} ||y^k|| < ∞.
As shown before, Assumption 5 is satisfied if f is continuous on X . Furthermore, Assumption 9 holds if g is uniformly continuous on X × Y and X is compact.
Proof First, from compactness of X (Assumption 9) follows Assumption 4. Second, Assumption 5 is fulfilled, as f is continuous on X (cf. the remark above). Third, Assumption 6 is fulfilled by Assumption 9. By Theorem 1, the lower-bounding procedure converges. □

Proof Consider the modified SIP, denoted (SIP'), obtained from (SIP) by replacing Y with a compact set Y' ⊂ Y that contains the non-divergent sequence (y^k); such a set exists by Assumption 10. With Assumption 2 follows uniform continuity of g on X × Y'. (SIP') fulfills Assumptions 1 to 3. Therefore, Theorem 1 and its proof are applicable. We show by contradiction that the optimal objective values of (SIP') and of the original SIP are equivalent, i.e., f(x'^*) = f(x^*).

First, assume f(x'^*) < f(x^*). Since the functions g and f are the same for (SIP') and the original SIP, we have ∃y ∈ Y : g(x'^*, y) > 0, i.e., x'^* is infeasible in the original SIP. Assumption 10 and lim_{k→∞} g(x'^*, y^k) > 0 prove x'^* infeasible in (SIP') as well, which gives the desired contradiction. Second, assume f(x'^*) > f(x^*), which implies (due to the same functions g and f) X'_feas ⊊ X_feas. But from Y' ⊆ Y follows X'_feas ⊇ X_feas, which gives the desired contradiction. □
Assumption 10 generally cannot be proven a priori. Therefore, it is not directly applicable. However, stronger conditions than required for Assumption 10 can be checked. For example, one may choose some large number M a priori and check during runtime whether ‖y^k‖ < M holds. One way to achieve this may be to require the solver to minimize the magnitude of the variable values among all global solutions of (LLP). To this end, the auxiliary problem min_{y∈Y} ‖y‖_∞ s.t. g(x^k, y) = g(x^k, ȳ) (AUX) is solved, with ȳ being the optimal solution of (LLP). Note that only an approximate solution of (LLP) is required, cf. Assumption 3. Therefore, a relaxation of the equality constraint is possible. The optimal solution of (AUX) is then used to populate the set Y^k.
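A minimal sketch of this selection rule, with a hypothetical constraint function whose (LLP) has several global maximizers and with the global subproblems replaced by enumeration on a grid:

```python
import numpy as np

# Hypothetical lower-level problem: g(x, y) = cos(y) - 1, so every y = 2*pi*m
# is a global maximizer of (LLP). The (AUX) step then selects the maximizer of
# minimal magnitude; the equality constraint is relaxed by a small tolerance.
def g(x, y):
    return np.cos(y) - 1.0 + 0.0 * x

Y_grid = np.linspace(-20.0, 20.0, 4001)  # enumeration stand-in for Y
x_k = 0.5                                # current iterate
vals = g(x_k, Y_grid)
g_star = vals.max()                      # optimal (LLP) value
tol = 1e-4                               # relaxed equality constraint
near_opt = Y_grid[vals >= g_star - tol]  # approximate global (LLP) solutions
y_aux = near_opt[np.argmin(np.abs(near_opt))]  # min ||y||_inf among them
print(abs(y_aux) < 1e-2)
```

The point `y_aux`, rather than an arbitrary maximizer of large magnitude, is then used to populate Y^k.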
We must point out that if one chooses a too large M in combination with a too restrictive maximum number of iterations, the test may be inconclusive. Furthermore, populating the set Y^k with the solution of (AUX) may be counterproductive in certain cases. One could also prove that Assumption 10 holds by computing upper and lower bounds for each lower-level variable y_i a priori. The following optimization-based bound-tightening approach computes upper and lower bounds for all y_i, i.e., y_i^UBD and y_i^LBD, respectively. However, there is no guarantee of success.
Note the direction of the inequality in (17) and (18), as we want to compute the maximum and minimum value of y_i on (cl(X^infeas) ∩ {x ∈ X : ∀y ∈ Y_0 g(x, y) ≤ 0}) × Y.

Case 3: X unbounded and Y unbounded
For SIPs with unbounded upper- and lower-level host sets, convergence can be guaranteed if a suitable combination of the assumptions presented in Sects. 4.1 and 4.2 is adopted. Note that this is possible, as none of the corresponding pairs are mutually exclusive.

Numerical case studies
We present two illustrative case studies from Chebyshev approximation. These are a proof of concept for our findings rather than a complete numerical study, which is beyond the scope of this paper. The corresponding (LBP) and (LLP) of the considered case studies are written in the domain-specific language libALE (Djelassi and Mitsos 2020). The implementation of the lower-bounding procedure is provided by the library for discretization-based semi-infinite programming solvers (libDIPS) (Djelassi 2020). The termination tolerance of the lower-bounding procedure is set to ε_a = 10^-3. The default values for BARON are used, with the exception of the optimality tolerances EpsA and EpsR, which are set to 10^-9. The numerical case studies are carried out on a Windows Server 2016 Standard machine with an Intel(R) Xeon(R) CPU E5-2640 v3 @2.60GHz processor and 128GB of RAM. All subproblems are solved using BARON version 19.12.7, accessed through GAMS version 30.2.0 (Khajavirad and Sahinidis 2018; GAMS Development Corporation 2019). Note that BARON cannot handle trigonometric functions, which limits the applicability, and does not give a certificate of global optimality for complex subproblems.

Case study for X unbounded and Y compact
Consider the min-max problem (19), inspired by Chebyshev approximation, with the auxiliary functions p and h, where p is defined as p(x, y) = x_1 + x_2 y + x_3 y^2 + x_4 y^3 + x_5 y^4.
We reformulate (19) accordingly. Note that in this example we have the case of Sect. 4.1.2, i.e., the SIP has the form of an epigraph reformulation. The lower-bounding procedure terminates after 11 iterations (total CPU time 19.5 s) with the optimal solution x* = (−2.111, 6.471, −4.377, 1.091, −0.088)^T and a maximal error of the fit of 0.028 (Fig. 4).
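The adaptive discretization underlying this case study can be sketched as follows. This is a simplified stand-in, not the libDIPS/BARON setup: we assume a hypothetical target h(y) = exp(y) on Y = [0, 1] (the actual h of (19) is not reproduced here), solve the discretized (LBP) as a linear program, and replace the global (LLP) by a fine grid search:

```python
import numpy as np
from scipy.optimize import linprog

# Blankenship-and-Falk-style loop for min_x max_{y in [0,1]} |p(x, y) - h(y)|
# with p(x, y) = x1 + x2*y + x3*y^2 + x4*y^3 + x5*y^4 as in the text.
def h(y):
    return np.exp(y)  # hypothetical target function

def p(x, y):
    return x[0] + x[1]*y + x[2]*y**2 + x[3]*y**3 + x[4]*y**4

Y_grid = np.linspace(0.0, 1.0, 2001)   # grid search as surrogate for (LLP)
Yk = [0.0, 1.0]                        # initial discretization
for _ in range(40):
    # (LBP): min t  s.t.  |p(x, y) - h(y)| <= t  for all y in Yk  (an LP
    # in the variables x1..x5 and t).
    A, b = [], []
    for y in Yk:
        row = [1.0, y, y**2, y**3, y**4]
        A.append(row + [-1.0]); b.append(h(y))              # p - h <= t
        A.append([-v for v in row] + [-1.0]); b.append(-h(y))  # h - p <= t
    c = [0.0]*5 + [1.0]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)]*5 + [(0, None)])
    x, t = res.x[:5], res.x[5]
    # (LLP): find the worst-case y on the grid.
    err = np.abs(p(x, Y_grid) - h(Y_grid))
    j = int(np.argmax(err))
    if err[j] <= t + 1e-6:             # no significantly violated point: done
        break
    Yk.append(float(Y_grid[j]))        # populate the discretization
print(0.0 <= t < 1e-3)
```

Each pass adds the current worst-case y to the discretization, mirroring the Blankenship-and-Falk scheme; for a degree-four polynomial and this smooth target, the loop typically terminates after a few iterations with a small maximal error.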

Conclusion and outlook
Unbounded upper- and/or lower-level variable host sets in SIPs may arise, e.g., in (inverse) Chebyshev approximation, epigraph reformulations, or design centering. In this paper, we showed that adaptive discretization-based algorithms are not suitable for all SIPs with unbounded host sets, since convergence problems may arise.
Therefore, we investigated under which circumstances adaptive discretization-based algorithms are applicable to SIPs with unbounded host sets. The study was carried out using the lower-bounding procedure of Mitsos (2011), which can be considered representative of the class, since the procedure is equivalent to the one proposed by Blankenship and Falk (1976). First, we briefly reviewed the assumptions of the lower-bounding procedure. Instead of the original assumptions, we used sharper, (slightly) relaxed, assumptions for our proof of the convergence of the lower-bounding procedure. In essence, the sharpened, slightly relaxed, assumptions establish a weaker form of uniform continuity of the constraint function on the set of all infeasible points in the host set of the SIP. In addition, the objective function must be at least semi-continuous on the boundary of a subset of the infeasible points.
For SIPs with unbounded host sets, it is often not obvious whether the relaxed assumptions hold. We discussed additional (stricter) assumptions and criteria for different cases of SIPs with unbounded host sets, which can often be checked a priori or during run-time to enable application. The criteria are expected to be, at most, of the same computational burden as the subproblems of the SIPs. The additional assumptions and criteria are expected to apply to many applications. Finally, we gave two numerical case studies of SIPs with unbounded host sets as a proof of concept.
Moreover, we expect that our results are transferable to conceptually related adaptive discretization-based procedures for generalized semi-infinite programs and bilevel programs, e.g., the algorithms proposed by Mitsos and Tsoukalas (2015) and Djelassi et al (2019). We show that this expectation is justified by also considering the conceptually related upper-bounding procedure of Mitsos (2011) (Appendix B). However, for future work, we plan to investigate this for the named procedures in detail.

g(·, y^k) is lower semi-continuous in X iff the following holds: Therefore, it also holds: Now, we consider the two cases for x ∈ K^infeas. Case 2: g(x^k, y^k) − g(x, y^k) < 0. Therefore, it holds: Note the equivalence between (A6) and (11). The rest of the proof in Sect. 2.3 is conceptually the same and therefore omitted.

A.2.1: Illustrative example for X compact and Y bounded and not closed
The same convergence issues as in (E1) (Sect. 3.1) arise when we consider (E1.1) with X compact and Y bounded but not closed.
In general, one could use the closure of the non-compact host sets. However, this is not possible here.

A.2.2: Illustrative example for X compact and Y unbounded: feasible SIP
Consider the feasible SIP (E2) with unbounded host sets.
Remark 2 The function g is continuous but not differentiable on X × Y. This is not required according to Assumptions 1 to 3. The non-differentiability of g is of no relevance in this example, as we obtain the same convergence properties when using a function belonging to the differentiability class C^n instead. The optimal value function of (LLP) of (E2) attains its optimum at y*(x) = (ỹ, x)^T or y*(x) = (0, ỹ)^T with ỹ ∈ R. The feasible set is X^feas = [0, 2/3]. The optimal solution of (E2) is x* = 2/3 with f* = −2/3. Using the same sequence of points as in (E1), we converge to the same infeasible point x = 1 in the limit, cf. Fig. 6. We do not converge to the globally optimal solution, again contrary to our expectations.
In the following, we use the same Notations 1 to 3 as for the lower-bounding procedure. The proof of convergence of the upper-bounding procedure in Mitsos (2011) relies on the existence of an ε_f-optimal SIP-Slater point.
Assumption 12 (Existence of an ε_f-optimal SIP-Slater point) An ε_f-optimal SIP-Slater point x^s exists such that f(x^s) ≤ f* + ε_f and ∀y ∈ Y g(x^s, y) < 0.

B.2: Illustrative example for X compact and Y unbounded
Mitsos (2011) states that at each iteration of the upper-bounding procedure there are three potential outcomes, namely (i) (UBP) is infeasible, (ii) (UBP) is feasible and furnishes a SIP-feasible point, and (iii) (UBP) is feasible but furnishes a SIP-infeasible point. In the former two cases the restriction parameter ε_g is reduced, and in the latter case Y^{UBD,k} is populated. Mitsos (2011) shows that after a finite number of updates of ε_g, the upper-bounding procedure produces a SIP-feasible point. We show that in the case of unbounded sets, the upper-bounding procedure may never produce a SIP-feasible point. Consider (E2) and choose ε_g = 0.5 in (UBP). With the chosen ε_g, (UBP) of (E2) is equivalent to (LBP) of (E2). Figure 7 depicts the objective function f, f^{UBD,k}, g^{*,UBD}, g*, and the feasible sets of (UBP) and (E2), i.e., X^{UBD,feas} and X^feas, respectively.
Note that the (LLP) solution point of (E2) is independent of ε_g. Using the same sequence of points as in (E2), the outcome is always (iii), i.e., no SIP-feasible point is furnished. Therefore, ε_g is never reduced, and the upper-bounding procedure converges to the infeasible point x = 1 in the limit.

B.3: Proof of convergence of the upper-bounding procedure in Mitsos (2011)
In Mitsos (2011), the existence of a SIP-Slater point x^s, compactness of Y, and continuity of g (cf. Assumptions 1 and 2) imply that there exists an ε_s > 0 such that ∀y ∈ Y g(x^s, y) ≤ −ε_s. This also holds for the case of unbounded X and compact Y, but does not hold for the case of unbounded Y. For example, for Y = [1, +∞) and g(x, y) = −1/y, an SIP-Slater point exists, but there is no ε_s > 0 such that ∀y ∈ Y g(x^s, y) ≤ −ε_s. Therefore, for the case of unbounded Y, we slightly adapt Assumption 12 to Assumption 13.

Assumption 13 (Existence of an ε_f-optimal and strictly feasible point) An ε_f-optimal and strictly feasible point x^s exists such that there exists an ε_s > 0 with f(x^s) ≤ f* + ε_f and ∀y ∈ Y g(x^s, y) ≤ −ε_s.
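The counterexample can be made explicit with a short computation: strict feasibility holds pointwise, yet the supremum of g(x^s, ·) over the unbounded Y is zero, so no uniform margin ε_s exists:

```latex
% Y = [1, +\infty), \quad g(x, y) = -1/y:
\[
  g(x^{\mathrm{s}}, y) = -\tfrac{1}{y} < 0 \quad \forall y \ge 1,
  \qquad \text{but} \qquad
  \sup_{y \ge 1} g(x^{\mathrm{s}}, y)
    = \lim_{y \to \infty} \left(-\tfrac{1}{y}\right) = 0,
\]
% hence no \varepsilon_s > 0 satisfies g(x^s, y) <= -\varepsilon_s for all y in Y.
```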
Note that for compact X and Y, Assumption 13 directly follows from Assumption 12. For the upper-bounding procedure, Assumption 4 has to be slightly stricter because the upper-bounding procedure visits points with f(x^k) > f*.

Assumption 14 It holds that (a) cl(Y_0) ⊆ X and (b) Y_0 is bounded, with the set Y_0 defined as before.

To extend the applicability to unbounded SIPs, we adapt the original lemma to Theorem 2.

Theorem 2 Take any Y^{UBD,0} ⊆ Y, any ε_f > 0, and any r > 1. Assume that Assumptions 3 to 6 and 13 hold. Then the upper-bounding procedure in Mitsos (2011) furnishes a SIP-feasible point x* finitely, such that f(x*) ≤ f* + ε_f.
The proof of Theorem 2 is, apart from slight changes, equivalent to the original proof in Mitsos (2011).

Proof In each iteration, one of the following cases holds: (i) (UBP) is infeasible; (ii) (UBP) is feasible and a SIP-feasible point is furnished; (iii) (UBP) is feasible and a SIP-infeasible point is furnished.
In cases (i) and (ii), ε_g is reduced. In case (iii), Y^{UBD,k} is populated with the solution point of the corresponding (LLP). We first show that an infinite sequence of infeasible (UBP) is not possible. By Assumption 13, there exists a point x^s such that f(x^s) ≤ f* + ε_f and ∀y ∈ Y g(x^s, y) ≤ −ε_s. This point is feasible in (UBP) if ε_g ≤ ε_s, regardless of Y^{UBD,k}. Therefore, (UBP) is feasible if ε_g ≤ ε_s.
Because ε_g = ε_{g,0}/r^a, with a being the number of reductions, (UBP) becomes feasible after a = ⌈log_r(ε_{g,0}/ε_s)⌉ reductions. For (UBP) with ε_g < ε_s, it follows that f(x^k) ≤ f(x^s) ≤ f* + ε_f. Hence, the solution point x^k of (UBP) is a candidate point with ε_f-optimality. If x^k is SIP-feasible, the desired result holds. It remains to show that an infinite sequence of case (iii) is not possible, which also means that ε_g is no longer updated. It therefore holds for all iterations that ε_g ≥ ε_{g,min} ≡ ε_s/r. By construction of (UBP), we have ∀l, k : l > k g(x^l, y^k) ≤ −ε_g ≤ −ε_{g,min}. (B11) Due to Assumption 6, it also holds that ∃δ_1 > 0 : ∀x^l ∈ {x^k} ∩ X^infeas, x ∈ X^infeas with ‖x^l − x‖ < δ_1 ⇒ |g(x^l, y^k) − g(x, y^k)| < ε_{g,min}/2. (B12) Combining (B11) and (B12), we have ∀l, k : l > k g(x, y^k) < −ε_{g,min}/2 < 0. (B13) Due to Assumption 4, the limit point x* ∈ cl(X^infeas) exists. From x^k → x*, we have ‖x^l − x^k‖ < δ_1 for sufficiently large l > k. Substituting x = x^k in (B13), we obtain ∃K : g(x^K, y^K) < −ε_{g,min}/2 < 0.
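As a worked instance of the reduction count (with hypothetical values ε_{g,0} = 1, ε_s = 0.1, and r = 2):

```latex
\[
  a = \left\lceil \log_r \frac{\varepsilon_{g,0}}{\varepsilon_s} \right\rceil
    = \left\lceil \log_2 \frac{1}{0.1} \right\rceil
    = \lceil 3.32\ldots \rceil = 4,
  \qquad
  \varepsilon_g = \frac{\varepsilon_{g,0}}{r^{a}} = \frac{1}{16} = 0.0625
    \le \varepsilon_s = 0.1 .
\]
```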
Therefore, after a finite number of iterations K, the point x^K is SIP-feasible, which gives us the desired result.
Combining the results above, the upper-bounding procedure furnishes a point x * that satisfies f (x * ) ≤ f * + ε f after a finite number of iterations.