A unified approach to inverse robust optimization problems

A variety of approaches has been developed to deal with uncertain optimization problems. Often, they start with a given set of uncertainties and then try to minimize the influence of these uncertainties. Depending on the approach used, the corresponding price of robustness is different. The reverse view is to first set a budget for the price one is willing to pay and then find the most robust solution. In this article, we aim to unify these inverse approaches to robustness. We provide a general problem definition and a proof of the existence of its solution. We study properties of this solution such as closedness, convexity, and boundedness. We also provide a comparison with existing robustness concepts such as the stability radius, the resilience radius, and the robust feasibility radius. We show that the general definition unifies these approaches. We conclude with examples that demonstrate the flexibility of the introduced concept.


Introduction
In many real-world problems, one does not know exactly the input data of a formulated optimization problem. This may be due to the fact that we are dealing with forecasts, predictions, or simply unavailable information. To deal with this, it is essential to treat the given data as uncertain. In principle, there are two different ways to deal with uncertainty: either one knows some distribution of the uncertainty, or not. In the first case, this information can be used in the mathematical optimization problem, while in the second case, no additional information is given. Both approaches are widely used in many real-world applications, such as energy management, finance, scheduling, and supply chains. For a detailed overview of possible applications of robust optimization, we refer to [3]. In this article, we focus mainly on problems without information about the distribution of the uncertainty.
Fixing the uncertainty to solve the corresponding optimization problem may yield a solution that is infeasible for other scenarios of the uncertainty set. Therefore, one tries to find solutions that are feasible for all possible scenarios of the uncertainty set. The problem of finding an optimal solution, i.e. the solution with the best objective function value, among these feasible solutions is called the robust counterpart (cf. [1]).
There are many surveys on robust optimization, such as Ben-Tal et al. [1] or Bertsimas et al. [3]. For tractability reasons, the focus is often limited to robust linear or robust conic optimization. Robust optimization in the context of semi-infinite optimization can be found e.g. in [13], while [19, 23-25] consider general solution methods. For applications and results on robust nonlinear optimization, we refer to a survey by Leyffer et al. [17].
The question of how to construct an appropriate uncertainty set is often not addressed, and the uncertainty set is assumed to be given. A closely related question is which subset of the uncertainty set is covered by a given solution. Considering a larger uncertainty set may lead to overly conservative solutions, since more and more scenarios have to be considered. This trade-off between the probability of violation and the effect on the objective function value of the nominal problem is called the price of robustness and was introduced by Bertsimas and Sim [5]. Many robust concepts that have been formulated and analyzed in recent years try to deal with the price of robustness in order to avoid or reduce it.
Bertsimas and Sim [4,5] defined the Gamma robustness approach, where the uncertainty set is reduced by cutting out less likely scenarios. The concept of light robustness was first defined by Fischetti and Monaci [10] and later generalized by Schöbel [20]. Given a tolerable loss for the optimal value of the nominal solution, one tries to minimize the grade of infeasibility over all scenarios of the uncertainty set.
Another approach to deal with overly conservative solutions is to allow a second-stage decision. Ben-Tal et al. [2] introduced the idea of adjustable robustness, where the set of variables is divided into here-and-now variables and wait-and-see variables. While the former need to be chosen before the uncertainty is revealed, the latter need to be chosen only after the realization is known.
In this article, we pursue a different approach to dealing with the price of robustness, which we call inverse robustness. The main idea is to reverse the perspective of the approaches described above. Instead of finding a solution that minimizes (or maximizes) the objective function under a given set of uncertainties, we want to find a solution that maximizes the considered set of uncertainties under a given objective function. In this way, we do not depend on an a priori choice of the uncertainty set whose loss in objective value has to be accepted afterwards. Instead, we can set the price we are willing to pay and then find the most robust solution within this given budget. Furthermore, the study of the above approaches is often limited to the robust linear case. We want to define inverse robustness in a more general way and study the concept also for nonlinear problems.
Especially for the linear case, concepts have been introduced to measure the robustness of a given solution. The stability radius and the resilience radius of a solution can be seen as measures, for a fixed solution, of how much the uncertain data can deviate from a nominal value while still being an (almost) optimal solution. For a more detailed discussion of resilience we refer to [26]. Both concepts can be seen as properties of a given solution, and the shape of the uncertainty set must be specified in advance. A similar concept has been studied in the area of facility location problems. Labbé presented in [16] an approach to compute the sensitivity of a facility location problem. Several publications ([6-9]) deal with the question of how to find a solution that is least sensitive, and thus treat a concept quite similar to resilience. We will show that finding a point that maximizes the stability radius or the resilience radius, given a budget on the objective, can be seen as a special case of inverse robust optimization. However, the general definition of inverse robustness provides more flexibility. First, it allows us to define measures that can include distributional information about the uncertainty. Second, the shape of the considered uncertainty is not restricted to given shapes, but can be more complex.
The outline of the article is as follows. In Section 2 we define the inverse robust optimization problem (IROP) and discuss the properties of its solution. In Section 3 we discuss different possible choices and descriptions for the cover space that contains all potential uncertainty sets. Afterwards we compare our general definition with other inverse robustness concepts in Section 4. In Section 5 we provide and discuss examples. Finally, we conclude the article with a short outlook.

The inverse robust optimization problem
In this article, we consider parametric optimization problems of the form

(P_u)    min_{x ∈ X} f(x, u)    s.t. g(x, u) ≤ 0,

depending on an uncertain parameter u ∈ R^m. We assume that f(·, u), g(·, u) : X → R are at least continuous functions w.r.t. x for every fixed parameter u, which is also called a scenario, belonging to an uncertainty set U ⊆ R^m. The set X ⊆ R^n is given by further restrictions on x that do not depend on u.
For simplicity, we consider only one constraint that depends on the uncertain parameter u. However, the following results generalize to multiple constraints by considering their maximum. We assume that there is a special scenario ū ∈ U called the nominal scenario. This could be the average of the scenarios, or the most likely scenario. The nominal problem (P_ū) is defined as follows:

(P_ū)    min_{x ∈ X} f(x, ū)    s.t. g(x, ū) ≤ 0.

We call the objective function value of the optimization problem for the nominal scenario above the nominal objective value and denote it by f*. Throughout this article we assume that at least the nominal problem has a feasible solution and the nominal objective value f* is well-defined.
The idea of the inverse robust optimization problem (IROP) is to allow a nonnegative deviation ε ≥ 0 from the nominal objective value in order to cover the uncertainty set U as much as possible. We refer to the deviation ε as the budget.
The task to cover U as much as possible needs a more precise interpretation.
For this, we define a cover space W ⊆ 2^U and a merit function V : W → R which maps every subset of U in W to a value in R. With this, we obtain an instance of the IROP as follows:

(P_IROP)    sup_{x ∈ X, W ∈ W} V(W)    (2)
            s.t. f(x, u) ≤ f* + ε    for all u ∈ W,    (3)
                 g(x, u) ≤ 0    for all u ∈ W,    (4)
                 ū ∈ W.    (5)

We call the constraint (3) the budget constraint and the constraint (4) the feasibility constraint of the IROP.
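To make the definition concrete, the following minimal brute-force sketch enumerates a small cover space for a one-dimensional toy instance; the functions f, g, the grids, the budget eps, and the interval cover space are illustrative assumptions and not part of the general definition:

```python
import numpy as np

# Toy IROP instance (illustrative assumptions, not part of the general
# definition): U = [0, 2], nominal scenario u_bar = 0 with f* = 0,
# f(x, u) = x**2 + x, g(x, u) = u - x, X = [0, 2], cover space of
# intervals W(d) = [0, d] and merit V(W(d)) = d (the covered volume).
f = lambda x, u: x**2 + x
g = lambda x, u: u - x
f_star, eps = 0.0, 1.0

xs = np.linspace(0.0, 2.0, 201)   # candidate decisions x in X
ds = np.linspace(0.0, 2.0, 201)   # candidate interval lengths d

best_d, best_x = 0.0, 0.0
for d in ds:                       # ds is increasing, so the last hit is optimal
    us = np.linspace(0.0, d, 51)   # discretized scenarios in W(d)
    for x in xs:
        # budget constraint (3) and feasibility constraint (4) on the grid
        if np.all(f(x, us) <= f_star + eps) and np.all(g(x, us) <= 0.0):
            best_d, best_x = d, x
            break

print("largest covered interval [0, d*] with d* ≈", best_d, "at x* ≈", best_x)
```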
Please note that it is a non-trivial task to define a merit function V and a cover space W, since the optimal solution and the tractability depend on it. A bad choice can even lead to an ill-posed problem due to Vitali's theorem (cf. [14]). However, this should not be seen as a drawback. These two objects make the definition of an inverse robust optimization problem very general. The merit function can simply be the volume, but it can also contain information about the distribution of the uncertain parameter u. The cover space can either consist of sets of a concrete shape, e.g. ellipses or boxes, or it can be a generic set system like a σ-algebra.
In Section 3 we will discuss some concrete choices of the cover space. In this section we are going to show some general statements about the existence and shape of solutions for (P_IROP). One property we want to emphasize here is that the existence of a feasible solution is relatively easy to guarantee: as long as {ū} ∈ W, there is a feasible solution, as we assumed that the nominal problem is well-defined. Note that it can be hard to check this for an ordinary robust optimization problem. For the next statements we make some basic assumptions about the cover space W and the merit function. Given a compact subset C ⊆ U, we denote the set of all compact subsets of C by K(C).
Assumption 2.1. We assume that the cover space W satisfies the following conditions:
1. For any W ∈ W, its closure satisfies W̄ ∈ W.
2. K(C) ∩ W is complete w.r.t. the Hausdorff metric d_H for any compact subset C ⊆ U.
3. {ū} ∈ W.
In the following we let W_K := K(U) ∩ W denote the set of compact elements of the cover space. Note that, if U is itself compact, it suffices to check the second condition in Assumption 2.1 for C = U.
Assumption 2.2. Given a cover space W ⊆ 2^U, we assume that the merit function V : W → R satisfies the following conditions:
1. V : W_K → R is upper semi-continuous w.r.t. the topology induced by the Hausdorff metric, and
2. V is monotone, i.e. W_1 ⊆ W_2 implies V(W_1) ≤ V(W_2) for all W_1, W_2 ∈ W.

In the remainder of this section we study how the structure of the parametric problem (P_u) influences an optimally chosen set W* ∈ W. We start with a theorem that ensures the existence of a solution of (P_IROP).
Theorem 2.3. Let U ⊆ R^m be a compact uncertainty set, let f, g : X × U → R be continuous w.r.t. (x, u) ∈ X × U, let X ⊆ R^n be compact, and let the cover space W ⊆ 2^U and the merit function V fulfill Assumption 2.1 and Assumption 2.2. Then there exists a maximizer (x*, W*) ∈ X × W of (P_IROP), where W* is a compact set.
Proof. First we show that if a solution exists, then the corresponding solution set W* can be chosen as a compact set. Let W̄ be the closure of a set W ∈ W. Because of Assumption 2.1, we know that W̄ ∈ W holds. Due to the continuity of f, g w.r.t. u we can also conclude that for any feasible (x, W) ∈ F the pair (x, W̄) is feasible as well, and by the monotonicity of V we have V(W) ≤ V(W̄). Hence we can reduce the search space of the original optimization problem to the space of closed elements of the cover space W. As the uncertainty set U was assumed to be compact, every closed element is compact, so we reduce the search space to the space of compact elements of the cover space, which is by definition W_K.
In a second step, we show that the feasible set F ⊆ X × W_K is compact. Because we assumed that W_K is complete w.r.t. the Hausdorff metric d_H, we know that it is closed and, as a closed subset of the compact space K(U), compact, too. Consequently, the set X × W_K is a compact set as the Cartesian product of two compact sets.
Next we prove that F is a closed set. Therefore, we consider a convergent sequence (x_n, W_n)_{n∈N} ⊆ F with limit (x*, W*) ∈ X × W_K. We have to show that (x*, W*) ∈ F. We do this by showing that the constraints (3)-(5) are satisfied.
• Fix an arbitrary u* ∈ W*. As lim_{n→∞} d_H(W_n, W*) = 0, we can find a sequence (u_n)_{n∈N} with u_n ∈ W_n for all n ∈ N and u_n → u*. By continuity of g and feasibility of (x_n, W_n) for all n ∈ N we get

g(x*, u*) = lim_{n→∞} g(x_n, u_n) ≤ 0.

As u* ∈ W* was chosen arbitrarily, this implies max_{u∈W*} g(x*, u) ≤ 0.
• We can argue the same way as for the feasibility constraint (4) to show

f(x*, u) ≤ f* + ε    for all u ∈ W*.

• Since ū ∈ W_n for all n ∈ N and W* is closed, the Hausdorff convergence also yields ū ∈ W*.

This means that all constraints are satisfied and (x*, W*) ∈ F. As the sequence (x_n, W_n)_{n∈N} was arbitrarily chosen, we showed that F is closed.
In total we know that the feasible set F is compact as a closed subset of a compact set.
Because V was assumed to be upper semi-continuous w.r.t. W on W_K, we can ensure the existence of a maximizer of (P_IROP). Note that the feasible set F is non-empty, as the choice (x̄, {ū}) with a nominal optimal solution x̄ is feasible by definition of f* for all budgets ε ≥ 0.
In the statement above we assumed that U is compact. We will now drop this assumption, but demand that the function V is a finite measure on a σ-algebra.
Theorem 2.4. Assume that W is a σ-algebra on U and V : W → R is a finite measure. Let X be a compact set, let f, g : X × U → R be continuous functions, and let Assumption 2.1 hold. Moreover, assume that there is a sequence of compact sets (C_k)_{k∈N} with C_k ⊆ C_{k+1} ⊆ U for all k ∈ N and U = ∪_{k∈N} C_k. Then there exists a maximizer (x*, W*) ∈ X × W of (P_IROP).
Proof. As in the proof of Theorem 2.3 we can restrict our consideration to closed sets in W. Note that by assumption the feasible set of (P_IROP) is non-empty and we consider a finite measure, which fulfills Assumption 2.2 (2) by definition and guarantees that the objective is bounded. Thus the supremum V* exists and we can find a sequence of feasible elements (x_n, W_n)_{n∈N} with

lim_{n→∞} V(W_n) = V*.    (6)

As X is assumed to be compact, we can find a subsequence which converges towards an x* ∈ X. We can assume for the remainder that lim_{n→∞} x_n = x*. As we consider a finite measure and U = ∪_{k∈N} C_k, we can find for each δ > 0 a k ∈ N such that

V(W_n) − V(W_n ∩ C_k) ≤ δ    for all n ∈ N.    (7)

Now K(C_k) ∩ W is, as in the proof above, again a compact set. This implies that for a fixed k the sequence (W_n ∩ C_k)_{n∈N} has an accumulation point W*_k. W.l.o.g. we assume that this accumulation point is unique; otherwise, we switch notation to the corresponding subsequence.
As W*_k is a compact set and V is a finite measure, we conclude using Fatou's lemma that

V(W*_k) ≥ lim sup_{n→∞} V(W_n ∩ C_k).

Because of Equation (7) we moreover know that

lim sup_{n→∞} V(W_n ∩ C_k) ≥ lim sup_{n→∞} V(W_n) − δ.

Together with Equation (6) we receive

V(W*_k) ≥ V* − δ.

Choosing δ = 1/k for every k ∈ N and setting W* := ∪_{k∈N} W*_k, we can guarantee W* ∈ W as W is a σ-algebra, and by the monotonicity and continuity of measures we have V(W*) ≥ lim_{k→∞} (V* − 1/k) = V*. The feasibility of (x*, W*) follows as in the proof of Theorem 2.3, which completes the proof.

After ensuring the existence of a solution, we can ask which properties of the original problem described by f, g and U induce which structure of W*. One property that we will use later in the discussion of an example problem in Section 5 is the inheritance of convexity.

Lemma 2.5. If a given IROP instance has a maximizer (x*, W*), the functions f(x*, ·), g(x*, ·) are convex w.r.t. u ∈ conv(U) (where conv(U) denotes the convex hull of U), the merit function V satisfies Assumption 2.2, and the cover space satisfies W̃* := conv(W*) ∩ U ∈ W, then the decision (x*, W̃*) is also a maximizer of the problem.
Proof. Let us denote the optimal solution of the IROP instance by (x*, W*). We argue by showing that the choice (x*, W̃*) satisfies V(W*) ≤ V(W̃*) and that this choice is feasible w.r.t. the inverse robust constraints.
By definition we know W* ⊆ W̃* ⊆ U, and by Assumption 2.2 this implies V(W*) ≤ V(W̃*). In order to prove that W̃* is feasible, we choose an arbitrary u ∈ W̃*. By the definition of W̃* there exist u_1, …, u_k ∈ W* and λ_1, …, λ_k ≥ 0 with Σ_{i=1}^k λ_i = 1 such that

u = Σ_{i=1}^k λ_i u_i

holds. Due to the convexity of f w.r.t. u ∈ conv(U) and the feasibility of W* we know that

f(x*, u) ≤ Σ_{i=1}^k λ_i f(x*, u_i) ≤ f* + ε

holds as well. Since u ∈ W̃* was chosen arbitrarily, we know that the budget constraint holds on all of W̃*. Analogously we show g(x*, u) ≤ 0 for all u ∈ W̃*. Furthermore we know that ū ∈ W* ⊆ W̃*, and consequently (x*, W̃*) is feasible and the claim holds.
Next we will show that the continuity of the describing functions f, g w.r.t. u allows us to pass to closed solution sets, while coercivity yields bounded ones.

Lemma 2.6. If a given IROP instance has a maximizer (x*, W*), the functions f(x*, ·), g(x*, ·) are continuous w.r.t. u ∈ U, the merit function V satisfies Assumption 2.2 and the cover space satisfies W̄* ∩ U ∈ W, then the decision (x*, W̄* ∩ U) is also a maximizer of the problem.

Proof. By Assumption 2.2 we have V(W*) ≤ V(W̄* ∩ U). For the feasibility, note that every u ∈ W̄* ∩ U is the limit of a sequence (u_n)_{n∈N} ⊆ W*. By the continuity of f(x*, ·) and g(x*, ·), the budget and feasibility constraints carry over to the limit, and ū ∈ W* ⊆ W̄* ∩ U holds as well.

Lemma 2.7. If for every x ∈ X the function h(x, ·) := max{f(x, ·), g(x, ·)} is coercive w.r.t. u, then every feasible set W of the given IROP instance is bounded.

Proof. For a feasible (x, W), every u ∈ W satisfies f(x, u) ≤ f* + ε and g(x, u) ≤ 0, hence h(x, u) ≤ max{f* + ε, 0}. As h(x, ·) is coercive, the corresponding sublevel set is bounded and contains W. This settles the proof.

Choice of cover space
Given an optimization problem as in (P_IROP), we have to specify the cover space W to define the problem. This section illustrates some example cover spaces which satisfy Assumption 2.1, such as the whole power set, the Borel σ-algebra of the uncertainty set, or parameterized families of subsets. These cover spaces can be used together with Theorem 2.3 to generate a solution of (P_IROP).
The whole power set. At first we consider the whole power set W = 2^U and show that it satisfies Assumption 2.1. Therefore, we assume that the uncertainty set U is compact. We then know that for an arbitrary W ∈ 2^U the closure satisfies W̄ ⊆ U and therefore W̄ ∈ W. This means that the first condition of Assumption 2.1 holds. The second condition holds because U is compact (see [15]). The last condition holds because ū ∈ U and therefore {ū} ∈ 2^U = W. Consequently, W = 2^U satisfies Assumption 2.1 for any compact uncertainty set U ⊆ R^m. Because the power set is in a sense big enough to contain a solution for (P_IROP), it is not surprising that it satisfies Assumption 2.1. In the next steps we gradually decrease the size of the cover space.
Borel σ-algebra. A more suitable choice, especially if we want to consider measures, is a σ-algebra. We are interested in the cover space W = B(U), where B(U) denotes the Borel σ-algebra on the closed set U.
By definition the Borel σ-algebra contains all closed subsets of U, in particular all compact sets and {ū}. Therefore, the first and last condition of Assumption 2.1 hold if U is a closed set. As the Borel σ-algebra on a closed set also contains all compact sets, the completeness condition then follows as well. This means that for a closed set U, Assumption 2.1 is satisfied.
Also the additional assumption of Theorem 2.4 on the cover space W holds for the Borel σ-algebra: as the sequence (C_k)_{k∈N} of compact sets, the closed balls B_k(ū) ∩ U with increasing radius k ∈ N around the nominal scenario can be considered.
Sets described by continuous inequality constraints. Another step towards a numerically more controllable cover space is done by considering

W := {W(v) : v : U → R continuous}    with W(v) := {u ∈ U : v(u) ≤ 0}.

In this cover space each element is described by a continuous inequality constraint on U. Specifying δ_ū(u) := ||u − ū||, we can guarantee that W(δ_ū) = {ū} is in W. Furthermore, the inclusion K(U) ⊆ W holds, as for any compact set A ∈ K(U) the distance function

δ_A(u) := min_{a ∈ A} ||u − a||

is continuous. Because for a compact A the points satisfying δ_A(u) ≤ 0 are exactly the points u ∈ A, we can conclude that A = W(δ_A) ∈ W holds for an arbitrary A ∈ K(U).
Sets described by a family of continuous inequality constraints. Last, but not least, we consider cover spaces that are induced by elements of a design space D ⊆ R^q. Using so-called design variables d ∈ D, we focus on the cover space of induced sets

W := {W(d) : d ∈ D}    with W(d) := {u ∈ U : v(u, d) ≤ 0},

where v(·, d) : U → R is a continuous function w.r.t. u ∈ U for all d ∈ D. Consequently, all sets W(d) are closed for any d ∈ D, such that the first condition of Assumption 2.1 is fulfilled by construction. The other two conditions will not automatically hold and depend on the choice of the function v and the set D. As an easy positive example one can think of v(u, d) = ||u − ū|| − d. This way the function v induces the elements W(d) = B_d(ū) for d ∈ D. The choice d = 0 ensures {ū} ∈ W, and choosing D = [0, r] for some r ∈ R will then ensure that W satisfies Assumption 2.1.

However, it is possible to construct examples where there exists no solution to (P_IROP). Consider for example U = [−1, 1], ū = 0, D = [0, 1] and v(u, d) := max(u · (1 − d), −d − u). We then get W(d) = [−d, 0] for all d ∈ [0, 1), however for d = 1 one obtains W(1) = [−1, 1]. Given X = [−1, 1], the objective function f(x, u) := x and the constraint g(x, u) := 0.5 − x + u, consider the corresponding inverse robust problem. Here, we can find a feasible point (x, d) for every d ∈ [0, 1). If we take the length of the interval W(d) as a merit function, we are interested in the choice d = 1. Because (x, 1) is infeasible for any x ∈ [−1, 1], there exists no solution to (P_IROP).

This is not surprising. The choice of a finite-dimensional design space D ⊆ R^q with q ∈ N reduces the inverse robust optimization problem to a general semi-infinite problem (GSIP), as we can rewrite (P_IROP) in this case as:

sup_{x ∈ X, d ∈ D} V(W(d))
s.t. f(x, u) ≤ f* + ε    for all u ∈ W(d),
     g(x, u) ≤ 0    for all u ∈ W(d),
     ū ∈ W(d).

For GSIP it is well known that a solution might not exist. For a more detailed discussion we refer to [22]. A survey of GSIP solution methods is given in [23].
A possibility to ensure the existence of a solution and to design discretization methods is to assume the existence of a fixed compact set Z ⊆ R^m and a continuous transformation map t : D × Z → R^m with W(d) = t(d, Z) for all d ∈ D. In this case the GSIP reduces to a standard semi-infinite optimization problem, and a solution can be guaranteed by assuming compactness of X. This idea is used by the transformation-based discretization method introduced in [21].
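As a rough illustration of this reduction (not the method of [21] itself), the following sketch solves the resulting semi-infinite problem for ball-shaped sets W(d) = ū + d·Z by discretizing the fixed compact set Z; the instance f, g, the transformation t, and the discretization fineness are illustrative assumptions, reusing the toy problem from Section 2:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance: nominal scenario u_bar = 0, budget eps,
# f(x, u) = x**2 + x, g(x, u) = u - x, W(d) = t(d, Z) with
# t(d, z) = u_bar + d*z and fixed compact set Z = [0, 1]; merit V(W(d)) = d.
u_bar, eps = 0.0, 1.0
f_star = 0.0                      # nominal optimal value (x_bar = 0)
Z = np.linspace(0.0, 1.0, 51)     # discretization of the fixed compact set Z

def t(d, z):                      # transformation map: W(d) = t(d, Z)
    return u_bar + d * z

def neg_merit(var):               # maximize d  <=>  minimize -d
    return -var[1]

def constraints(var):             # all entries must be >= 0 for feasibility
    x, d = var
    u = t(d, Z)                   # discretized scenarios covering W(d)
    budget = (f_star + eps) - (x**2 + x) * np.ones_like(u)  # f(x,u) <= f* + eps
    feas = x - u                                            # g(x,u) = u - x <= 0
    return np.concatenate([budget, feas])

res = minimize(neg_merit, x0=[0.0, 0.0],
               constraints=[{"type": "ineq", "fun": constraints}],
               bounds=[(0.0, 2.0), (0.0, 2.0)])
print("x* ≈", res.x[0], ", d* ≈", res.x[1])   # analytically (sqrt(5)-1)/2 here
```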

Comparison to other robustness approaches
As we have pointed out in the introduction, there exist several concepts similar to inverse robustness. Here we briefly discuss how the stability radius, the resilience radius and the radius of robust feasibility fit in the context of inverse robustness.

Stability radius and resilience radius
The stability radius provides a measure, for a fixed solution, of how much the uncertain parameter can deviate from a nominal value while still being an (almost) optimal solution. There are many publications regarding the stability radius in the context of (linear) optimization. For an overview, we refer to [26].
Let x̄ ∈ R^n denote an optimal solution to a parametrized optimization problem with fixed parameter ū ∈ U of the form

min_{x ∈ X} f(x, u),

where the set of feasible solutions is denoted by X ⊆ R^n. The solution x̄ is called stable if there exists a ρ > 0 such that x̄ is ε-optimal, i.e.

f(x̄, u) ≤ f(x, u) + ε    for all feasible solutions x ∈ X

with an ε ≥ 0, for all uncertainty scenarios u ∈ B_ρ(ū). The stability radius is given as the largest such value ρ. Altogether, it can be calculated for a given solution x̄ ∈ X and a budget ε ≥ 0 by

max_{ρ ≥ 0} ρ    s.t. f(x̄, u) ≤ f(x, u) + ε    for all x ∈ X, u ∈ B_ρ(ū).

While the stability radius compares a fixed decision x̄ with all other feasible choices x ∈ X, the resilience radius allows to change the former optimal decision to gain feasibility. For an introduction into this topic we also recommend [26].
Given a budget w.r.t. the objective value, the resilience radius searches for the biggest ball centered at a given uncertainty scenario that satisfies feasibility with respect to some original problem. If we denote the optimal solution of a parametrized optimization problem with fixed parameter ū again by x̄, then x is called B-feasible for some budget B ∈ R and some scenario u if x ∈ X and f(x, u) ≤ B. Then, the resilience ball of a B-feasible solution x around a fixed scenario ū ∈ U is defined as the largest radius ρ ≥ 0 such that x is B-feasible for all scenarios in this ball. Finally, the resilience radius is the biggest radius of a resilience ball around some x ∈ X and can be calculated by solving the following optimization problem:

max_{x ∈ X, ρ ≥ 0} ρ    s.t. f(x, u) ≤ B    for all u ∈ B_ρ(ū).
To compare these concepts with the concept of inverse robustness, we fix the uncertainty set as U := R^m and define W(d) := B_d(ū) for design variables d ∈ D := [0, ∞). Furthermore, we want to measure V(W(d)) := vol(W(d)). If we assume that we can describe X by finitely many inequality constraints, i.e. there exist a finite index set I, |I| < ∞, and continuous functions g_i : X → R for all i ∈ I such that X = {x ∈ R^n : g_i(x) ≤ 0, i ∈ I} holds, we can define the problem as follows:

max_{x ∈ R^n, d ≥ 0} vol(B_d(ū))
s.t. f(x, u) ≤ f* + ε    for all u ∈ B_d(ū),
     g_i(x) ≤ 0,    i ∈ I.

This problem can be simplified to the following problem:

max_{x ∈ R^n, d ≥ 0} d
s.t. f(x, u) ≤ f* + ε    for all u ∈ B_d(ū),
     g_i(x) ≤ 0,    i ∈ I.
We see that the difference between the stability radius and the inverse robust problem is that the stability radius checks the budget constraint not only for all scenarios u ∈ B_ρ(ū), but also for all feasible x ∈ X, while the inverse robust concept allows to choose a new argument x ∈ X such that the radius is maximized while staying close to the nominal objective value f* := f(x̄, ū).
Furthermore, by defining the budget B := f* + ε, we obtain that the resilience radius can be seen as a variant of the IROP, where we are searching for an optimal set W in the set of balls around the nominal scenario ū.
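For intuition, a small bisection sketch for the stability radius formulation above in one dimension; the instance (f, X, ū, x̄, ε) is an illustrative assumption:

```python
import numpy as np

# Illustrative 1-D instance: f(x, u) = (x - u)**2, X = [0, 1], u_bar = 0,
# nominal optimal solution x_bar = 0, budget eps.
f = lambda x, u: (x - u)**2
X = np.linspace(0.0, 1.0, 1001)
u_bar, eps = 0.0, 0.09
x_bar = 0.0

def stable(rho, n_u=201):
    """Check f(x_bar, u) <= min_x f(x, u) + eps for all u in B_rho(u_bar)."""
    for u in np.linspace(u_bar - rho, u_bar + rho, n_u):
        if f(x_bar, u) > f(X, u).min() + eps:
            return False
    return True

lo, hi = 0.0, 10.0               # bisection on the radius rho
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable(mid) else (lo, mid)
print("stability radius ≈", lo)  # analytically sqrt(eps) = 0.3 here
```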

Radius of robust feasibility
The radius of robust feasibility is a measure for the maximal 'size' of an uncertainty set under which one can still ensure the feasibility of the given optimization problem. It is discussed for example in the context of convex programs [12], linear conic programs [11] and mixed-integer programs [18].
The radius of robust feasibility ρ_RFF is defined as

ρ_RFF := sup{α ≥ 0 : (P_R)_α is feasible},

where

(P_R)_α    min_{x ∈ R^n} c^T x    s.t. Ax ≤ b    for all (A, b) ∈ U_α,

with U_α := (Ā, b̄) + αZ for nominal values Ā ∈ R^{m×n}, b̄ ∈ R^m and Z being a compact and convex set. Since we are only interested in the feasibility of (P_R)_α, we can replace its objective function by 0. Therefore, given a fixed, convex, compact set Z, we can compute the radius of robust feasibility by solving the following optimization problem:

ρ_RFF := sup_{x ∈ R^n, α ≥ 0} α    s.t. Ax ≤ b    for all (A, b) ∈ U_α.

To compare this concept to the concept of inverse robustness, we define W(d) := ū + dZ as subsets of U := R^{mn+m} characterized by d ∈ D := [0, ∞). Furthermore, we use the merit function V(W(d)) := vol(W(d)). Since we do not consider an objective function, we drop the budget constraint. Thus, given a nominal scenario ū := (Ā, b̄) ∈ U and a function g(x, (A, b)) := Ax − b, we obtain the inverse robust problem

sup_{x ∈ R^n, d ≥ 0} vol(W(d))    s.t. g(x, u) ≤ 0    for all u = (A, b) ∈ ū + dZ,

which can be reformulated as

sup_{x ∈ R^n, d ≥ 0} d    s.t. Ax ≤ b    for all (A, b) ∈ U_d.

We see that this way to calculate the radius of robust feasibility can be interpreted as a special inverse robust optimization problem, where we are searching for sets of the form ū + αZ and where we are not interested in the budget constraint. The radius of robust feasibility allows us to analyze problems without any pre-defined values such as the given budget ε ≥ 0 or the nominal value f*. But the fixed structure of the set Z is rather restrictive, and we do not know how the objective value of a solution x with a large radius α deviates from the nominal solution value.
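As an illustration, a sketch for computing ρ_RFF by bisection when Z is chosen as the unit box in (A, b)-space; under this assumption the worst case of Ax ≤ b over U_α has the closed form Āx + α(||x||_1 + 1) ≤ b̄ row-wise, and each feasibility check becomes a linear program. The data Ā, b̄ and the choice of Z are our assumptions for the demo:

```python
import numpy as np
from scipy.optimize import linprog

# Demo data (assumed): m = 2 constraints, n = 2 variables.
A_bar = np.array([[1.0, 2.0], [-1.0, 1.0]])
b_bar = np.array([4.0, 2.0])
m, n = A_bar.shape

def robust_feasible(alpha):
    """LP check: does some x satisfy A_bar@x + alpha*(||x||_1 + 1) <= b_bar?
    This is the worst case of Ax <= b over (A, b) in (A_bar, b_bar) + alpha*Z
    for Z the entrywise unit box."""
    # variables z = (x, t, s): t >= |x| componentwise, s = max violation
    c = np.zeros(2 * n + 1); c[-1] = 1.0
    I = np.eye(n)
    A_ub, b_ub = [], []
    for sign in (1.0, -1.0):                  # sign*x - t <= 0
        A_ub.append(np.hstack([sign * I, -I, np.zeros((n, 1))]))
        b_ub.append(np.zeros(n))
    # A_bar@x + alpha*sum(t) - s <= b_bar - alpha
    A_ub.append(np.hstack([A_bar, alpha * np.ones((m, n)), -np.ones((m, 1))]))
    b_ub.append(b_bar - alpha)
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * n + [(0, None)] * n + [(None, None)])
    return res.status == 0 and res.fun <= 1e-9

lo, hi = 0.0, 100.0                           # bisection on alpha
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if robust_feasible(mid) else (lo, mid)
print("radius of robust feasibility ≈", lo)
```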

Examples
After introducing and investigating the concept from a mathematical point of view, we present some further properties using three examples.

Dependency on budget
The first example illustrates that the solution of an inverse robust optimization problem does, in general, not depend on the choice of the uncertainty set U, but instead on the available budget ε ≥ 0. Therefore, we focus on the following parametric optimization problem,

min_{x ∈ [0, 2]} x² + x    s.t. u − x ≤ 0,

where we consider a parametrized uncertainty set U(a) = [0, a] with a ≥ 1.
Choosing u = 0 leads to the nominal solution x̄ = 0 with nominal objective value f* = 0. The corresponding inverse robust optimization problem with W = {[0, d] : d ∈ [0, a]} and the merit function V(W) = vol(W) has the form

max_{x ∈ [0, 2], d ∈ [0, a]} d    s.t. x² + x ≤ ε,    u − x ≤ 0 for all u ∈ [0, d].

Please note that due to Lemmas 2.5-2.7 considering the cover space B(U) would lead to an equivalent problem.
The inverse robust optimization problem has the solution

x* = d* = (√(1 + 4ε) − 1)/2

for budgets ε ≤ 2. This solution is independent of the uncertainty set parameter a ≥ 1 and thus allows for modelling mistakes in the specification of U. On the contrary, the corresponding strict robust optimization problem

min_{x ∈ [0, 2]} x² + x    s.t. u − x ≤ 0 for all u ∈ U(a)

has the solution x*(a) = a and f*(a) = a² + a for a ∈ [1, 2] and no solution for a > 2. This dependence makes it crucial to think about the specification of U beforehand.
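A quick numerical check of this example; the reduction of the semi-infinite feasibility constraint to x ≥ d and the use of scipy are our illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

eps = 1.0
# IROP for the example: maximize d s.t. x**2 + x <= eps and x >= u for all
# u in [0, d]; the semi-infinite constraint collapses to x >= d here.
# The bound d <= a is inactive for a >= 1 and eps <= 2.
res = minimize(lambda v: -v[1], x0=[0.0, 0.0],
               constraints=[{"type": "ineq",
                             "fun": lambda v: [eps - v[0]**2 - v[0],  # budget
                                               v[0] - v[1]]}],       # x >= d
               bounds=[(0.0, 2.0), (0.0, 2.0)])
d_closed_form = (np.sqrt(1.0 + 4.0 * eps) - 1.0) / 2.0
print("numerical d* ≈", res.x[1], " closed form ≈", d_closed_form)
```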

Extreme scenarios
In the next example we want to study the effect of extreme scenarios that can occur especially in nonlinear optimization. We consider for u ∈ [0, 1] the following parameterized optimization problem:

min_{x ∈ R} x    s.t. u^100 − x ≤ 0.

If we consider the nominal scenario u = 0, then the nominal objective value is f* = 0, and we receive for a chosen budget ε ≥ 0 and the same cover space as before the following inverse robust optimization problem:

max_{x ∈ R, d ∈ [0, 1]} d    s.t. x ≤ ε,    u^100 − x ≤ 0 for all u ∈ [0, d].

The optimal solution is given by x* = ε and d* = ε^{1/100}. On the other hand, choosing an uncertainty set U = [0, a] with a ∈ [0, 1] before solving the classical robust counterpart

min_{x ∈ R} x    s.t. u^100 − x ≤ 0 for all u ∈ [0, a]

leads to the optimal solution x* = a^100.
A very conservative choice in classical robust optimization would be to choose a = 1, which would also lead to a high price of robustness and x* = 1 as the optimal robust solution. In inverse robustness we would first choose a budget ε, let's say ε = 0.1. The price of robustness we pay is then fixed. The maximal uncertainty set we can cover with this budget has a size of d* = 0.1^{1/100} ≈ 0.977. This means that we only pay a price of 0.1, but cover more than 95% of the area of the original uncertainty set.
Choosing a smaller a priori set with a = 0.5 leads to a very small price to pay to achieve robustness, (1/2)^100. However, if one is ready to pay more for robustness, e.g. ε = 0.001, one can cover more than 90% of the area of the original uncertainty set, which is a large part of all scenarios.
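The following trivial loop evaluates the closed-form coverage d* = ε^{1/100} from above for a few budgets, making the effect visible at a glance:

```python
# Coverage d* = eps**(1/100) of U = [0, 1] for several budgets eps,
# illustrating how cheaply the non-extreme scenarios are covered.
for eps in (0.001, 0.01, 0.1, 0.5):
    d_star = eps ** (1.0 / 100.0)
    print(f"eps = {eps:>5}: cover d* = {d_star:.3f} ({100 * d_star:.1f}% of [0, 1])")
```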
The reason for this phenomenon is that u = 1 is an extreme scenario for this problem. Covering it has a high price in optimality. In the inverse robust formulation we tend to leave out extreme scenarios and try to find a good solution on the remainder.
The first two examples show two differences to a robust counterpart. First, the solution depends directly on the price we are willing to pay to achieve robustness and not on the a priori choice of the uncertainty set. Second, the inverse robust optimization will leave out extreme scenarios, making it a less conservative approach to robust optimization.

A bi-criteria problem
In a final example, we want to demonstrate the flexibility of the new approach. Therefore, we consider a probability measure as the merit function and a parameterized bi-criteria optimization problem with an inequality constraint. This constraint is linear with respect to the decision variable x ∈ R, but nonlinear in the uncertainty u ∈ U, such that a solution for a nominal scenario can be easily computed, while the analysis of the behavior with respect to the uncertainty is not trivial. We consider the following bi-criteria optimization problem:

min_{x ∈ R} (f_1(x, u), f_2(x, u)) = (−x + u, 2x − u)    s.t. g(x, u) := x(u − 1) + exp(u) − 1 ≤ 0.

Fixing the nominal scenario u = 0, we can compute the Pareto front F* as

F* = {(−x, 2x) : x ≥ 0}.

After considering the original problem using a fixed nominal scenario, we now focus on the inverse robust problem. Therefore, we allow a generic budget ε = (ε_1, ε_2) ∈ R²_{≥0} and fix a point on the Pareto front, i.e. f* = (−2, 4) ∈ F*. Additionally, we assume that our uncertainty is given by a normally distributed random variable u ∼ N(0, 1). Therefore, we let W = B(R), where B(R) denotes the σ-algebra of Borel-measurable sets of R. We want to maximize the probability of uncertainties we can handle while not losing more than ε from our solution f*, which leads to:

sup_{x ∈ R, W ∈ B(R)} P(u ∈ W)
s.t. f_1(x, u) ≤ −2 + ε_1    for all u ∈ W,
     f_2(x, u) ≤ 4 + ε_2    for all u ∈ W,
     g(x, u) ≤ 0    for all u ∈ W,
     0 ∈ W.

The statements in Section 2 were all formulated for only one objective. However, it is easy to check that all statements carry over to the case of multiple objectives and can be used to investigate the present example.
As f_1 is decreasing and f_2 is increasing in x and 0 ∈ W, we know that, depending on the budget ε, we can restrict the search space for x to a bounded interval. According to Theorem 2.4, an optimal solution (x*, W*) then exists and we can replace the supremum of the last problem by a maximum.

As W ∈ B(R) is too large as a search space, we reduce the dimension by searching for intervals W(d) := [d_1, d_2] defined by elements of the design space D := {d ∈ R² : d_1 ≤ d_2}. Since f_1(x, u) = −x + u and f_2(x, u) = 2x − u are convex w.r.t. u ∈ R as linear functions, and g is convex w.r.t. u because of ∂²_u g(x, u) = exp(u) > 0 for all u ∈ R, we can use Lemma 2.5. As the describing functions f_1, f_2, g are continuous w.r.t. u, we can use Lemma 2.6, and by Lemma 2.7 we are looking for a bounded solution set, as h(x, u) = max{f_1(x, u), f_2(x, u), g(x, u)} is a coercive function w.r.t. u for any arbitrary x ∈ R. Consequently, the choice of W(d) = [d_1, d_2], d_1, d_2 ∈ R, to search for a convex, closed, bounded set in R is appropriate. We collect this simplification in the following proposition; a proof is given in the Appendix.
Proposition 5.1 (Reduced problem reformulation). The inverse robust example problem can be simplified to the reduced inverse robust example problem given as

(P_red)    max_{x ∈ R, d ∈ D} P(d_1 ≤ u ≤ d_2)
           s.t. −x + d_2 ≤ −2 + ε_1,
                2x − d_1 ≤ 4 + ε_2,
                x(d_2 − 1) + exp(d_2) − 1 ≤ 0,
                d_1 ≤ 0 ≤ d_2.

Furthermore, this problem is a convex optimization problem w.r.t. (x, d) ∈ X × D and has a solution for all ε ∈ R²_{≥0}.
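A numerical sketch for (P_red) with a standard NLP solver; the box bounds are a numerical safeguard motivated by the boundedness discussion above (d_2 ≤ 1 is shown in the Appendix), and the hypothetical helper name solve_p_red is ours:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def solve_p_red(eps1, eps2):
    """Solve the reduced problem (P_red) numerically for a given budget."""
    # variables v = (x, d1, d2); maximize P(d1 <= u <= d2) for u ~ N(0, 1)
    obj = lambda v: -(norm.cdf(v[2]) - norm.cdf(v[1]))
    cons = [{"type": "ineq", "fun": lambda v: [
        (-2.0 + eps1) - (-v[0] + v[2]),               # first budget constraint
        (4.0 + eps2) - (2.0 * v[0] - v[1]),           # second budget constraint
        -(v[0] * (v[2] - 1.0) + np.exp(v[2]) - 1.0),  # feasibility constraint
        -v[1], v[2]]}]                                # d1 <= 0 <= d2
    res = minimize(obj, x0=[2.0, 0.0, 0.0], constraints=cons,
                   bounds=[(0.0, 50.0), (-50.0, 0.0), (0.0, 1.0)])
    return res.x, -res.fun

(x, d1, d2), p = solve_p_red(1.0, 1.0)
print(f"x* ≈ {x:.3f}, W* ≈ [{d1:.3f}, {d2:.3f}], covered probability ≈ {p:.3f}")
```

The starting point (x, d_1, d_2) = (2, 0, 0) is feasible for every budget ε ∈ R²_{≥0}, matching the non-emptiness argument from Section 2.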
In Figure 1 the objective values for different budgets ε_1 and ε_2 are shown. We start with a solution that does not allow any uncertainty, i.e. P(W(d*)) = 0 for ε_1 = ε_2 = 0. If we allow to differ from the nominal values f*_1 or f*_2, we see that we can first gain more robustness by increasing ε_1. For each ε_2 there is an ε̄_1 such that for ε_1 ≥ ε̄_1 the solution does not change anymore. A proof of this can be found in Proposition A.1 in the appendix. For larger ε_2 the objective value converges towards P(u ≤ 1) ≈ 0.842. By the equivalent formulation (P_red) it is clear that this is an upper bound for the IROP. However, for large k ∈ N the point

(x, d_1, d_2) = (k(exp(1 − 1/k) − 1), −k − 4, 1 − 1/k)

is feasible for (P_red) with the budgets ε_1 = 0 and ε_2 = −2k(1 − exp(1 − 1/k)) + k. The objective value of this point converges towards P(u ≤ 1). Some of the optimal solution sets W* and the robustified decisions x* can be seen in Figure 2 and Figure 3 for different values of ε. One could think that the solution sets satisfy an ordering w.r.t. ⊆ if ε increases component-wise. But as one can see, this is in general not the case, as changes to the decision x*(ε) could destroy these inclusions.

Conclusion
Given a parameterized optimization problem, a corresponding nominal scenario, and a budget, one can ask for a solution that is close to optimal with respect to the objective function value of the nominal optimization problem, while being feasible for as many scenarios as possible.
In this article, we introduced an optimization problem to compute the best coverage of a given uncertainty set. In Section 2 we introduced the inverse robust optimization problem (IROP) and some structural properties of its solution. In Section 3 we discussed different cover spaces that satisfy the assumptions needed for the given structural results of Section 2. After comparing the IROP with the stability radius, the resilience radius, and the radius of robust feasibility in Section 4, we provided examples in Section 5 that demonstrate the flexibility of the concept of inverse robustness.

A Properties of Example 5.3
Proof of Proposition 5.1. As discussed in Section 5.3 it is enough to consider bounded intervals. Thus, we know that the problem is equivalent to

max_{x ∈ R, d_1, d_2 ∈ R} P(d_1 ≤ u ≤ d_2)
s.t. −x + u ≤ −2 + ε_1    for all u ∈ [d_1, d_2],
     2x − u ≤ 4 + ε_2    for all u ∈ [d_1, d_2],
     x(u − 1) + exp(u) − 1 ≤ 0    for all u ∈ [d_1, d_2],
     d_1 ≤ 0 ≤ d_2.

This problem can be reformulated by computing the maxima within the budget and feasibility constraints:

arg max_{u ∈ [d_1, d_2]} −x + u = {d_2},
arg max_{u ∈ [d_1, d_2]} 2x − u = {d_1},
arg max_{u ∈ [d_1, d_2]} x(u − 1) + exp(u) − 1 = {d_2}.
To determine the maximal argument in the feasibility constraint we used the identity ∂_u g(x, u) = x + exp(u) and that 0 ∈ [d_1, d_2] implies that g(x, 0) = −x ≤ 0 is a necessary condition for a feasible choice of x. Therefore ∂_u g(x, u) > 0 holds for all feasible choices of x and u ∈ R. In a last step, we obtain the maximizer d_2 of the feasibility constraint from this monotonicity of g(x, ·).

For the following proposition, denote for a given budget ε the optimal solution of the reduced problem (P_red) by x*(ε), d*_1(ε) and d*_2(ε).

Proposition A.1 (Behavior w.r.t. increasing budgets).
i) Fixing ε_0 := (0, 0) leads to the solution x* = 2, d* = (0, 0) and therefore V(W(d*(ε_0))) = 0.
ii) For any fixed ε_1 ≥ 0 we get lim_{ε_2→∞} V(W(d*((ε_1, ε_2)))) = P(u ≤ 1).
iii) For any fixed ε_2 ≥ 0 and ε_1 ≥ ε̄_1 := 3 the second budget constraint and the feasibility constraint are active. Since the feasibility constraint is independent of ε, it will not change w.r.t. an increasing budget, and therefore we obtain x*((ε_1, ε_2)) = x*((ε̄_1, ε_2)) and d*((ε_1, ε_2)) = d*((ε̄_1, ε_2)) for all ε_1 ≥ ε̄_1.

Proof of Proposition A.1.
i) Case ε = (0, 0). Given the budget ε := (0, 0), the reduced inverse robust example problem can be formulated as:

max_{x ∈ R, d ∈ D} P(d_1 ≤ u ≤ d_2)
s.t. −x + d_2 ≤ −2,    (15)
     2x − d_1 ≤ 4,    (16)
     x(d_2 − 1) + exp(d_2) − 1 ≤ 0,
     d_1 ≤ 0 ≤ d_2.

Considering the budget constraints (15) and (16), we conclude d_2 ≤ x − 2 and d_1 ≥ 2x − 4. Since d_1 ≤ 0, d_2 ≥ 0 has to hold, it follows directly that x = 2 and d_1 = d_2 = 0. Since this is the only feasible point, it is also the optimal solution of the given problem.
ii) Case lim ε_2 → ∞. We have seen in Section 5.3 that for ε_1 = 0 and ε_2 going to infinity there is a sequence of feasible points such that the objective value converges towards P(u ≤ 1). This means that for the optimal objective value we have

lim_{ε_2→∞} V(W(d*((ε_1, ε_2)))) ≥ P(u ≤ 1).

Considering the feasibility constraint we receive d_2 ≤ 1, since every feasible x satisfies x ≥ 0 and x(d_2 − 1) + exp(d_2) − 1 > 0 for every d_2 > 1. Consequently V(W(d)) ≤ P(u ≤ 1) for every feasible d, which proves the claim.
iii) Let us fix an arbitrary ε_2 ≥ 0. If we analyze the reduced inverse robust example problem again, we can rewrite its first budget constraint as d_2 ≤ x − 2 + ε_1. We know that the variable d_2 is bounded above by 1, and we already mentioned that a feasible x has to satisfy x ≥ 0. Consequently, the first budget constraint is fulfilled for all ε_1 ≥ 3. Because ε_1 only occurs in the first budget constraint of the reduced inverse robust example problem, we know that for ε_1 ≥ 3 the solution of the problem instance depends only on the choice of ε_2 ≥ 0, which proves the claim.
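A quick numerical sanity check of statements i) and iii), reusing the sketch solver solve_p_red from Section 5 (and therefore inheriting all of its assumptions):

```python
# Numerical sanity check (requires solve_p_red from the sketch in Section 5):
# i)   eps = (0, 0) should give x* = 2 and W* = {0}, i.e. probability 0.
# iii) for eps1 >= 3 the solution should only depend on eps2.
(x0, d10, d20), p0 = solve_p_red(0.0, 0.0)
(xa, d1a, d2a), pa = solve_p_red(3.0, 1.0)
(xb, d1b, d2b), pb = solve_p_red(10.0, 1.0)
print(f"i)   x* ≈ {x0:.3f}, P ≈ {p0:.3f}")
print(f"iii) P(eps1=3) ≈ {pa:.3f} vs P(eps1=10) ≈ {pb:.3f}")
```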

Figure 2: Optimal arguments x* as red line and W* as blue area for different values ε_1 while fixing ε_2 := 0.

Figure 3: Optimal arguments x* as red line and W* as blue area for different values ε_2 while fixing ε_1 := 0.