Robust Combinatorial Optimization Problems Under Budgeted Interdiction Uncertainty

In robust combinatorial optimization, we would like to find a solution that performs well under all realizations of an uncertainty set of possible parameter values. How we model this uncertainty set has a decisive influence on the complexity of the corresponding robust problem. For this reason, budgeted uncertainty sets are often studied, as they enable us to decompose the robust problem into easier subproblems. We propose a variant of discrete budgeted uncertainty for cardinality-based constraints or objectives, where a weight vector is applied to the budget constraint. We show that while the adversarial problem can be solved in linear time, the robust problem becomes NP-hard and not approximable. We discuss different possibilities to model the robust problem and show experimentally that despite the hardness result, some models scale relatively well in the problem size.


Introduction
Uncertainty can manifest in various forms, such as imprecise data or the inherent unpredictability of the future. A notable case study using linear programs [BTN00] demonstrated that even slight changes in problem data can significantly shift an optimal solution towards infeasibility, rendering it practically useless. Consequently, a range of decision-making approaches under uncertainty have been developed, including stochastic programming [KM05], fuzzy optimization [LK10], and robust optimization [BTEGN09]. Often, such approaches make the resulting decision-making problems more challenging to solve than their nominal counterparts. The focus of this paper is on robust combinatorial decision problems, which have the distinct advantage that a probability distribution on the uncertain data does not need to be known. More formally, consider some nominal combinatorial problem of the form

min { c⊺x : x ∈ X }  (Nom)

with a feasible set X ⊆ {0,1}^n, where the cost vector c is uncertain and stems from an uncertainty set U; in the robust (min-max) counterpart, we seek a solution x ∈ X minimizing the worst-case costs max_{c∈U} c⊺x. Many more variants of robust optimization problems exist, see, e.g., [GS16, KZ16, BK18] for an overview. What they have in common is that a set U containing all scenarios is formulated by the decision maker and made available to the optimization problem. Data-driven robust optimization [BGK18] aims at automating this step by constructing suitable uncertainty sets from available data (e.g., based on the risk preference of the decision maker).
There is typically a trade-off between the modeling capabilities of the uncertainty set U and the complexity of the resulting problem. A discrete scenario set U offers broad flexibility, as it allows direct utilization of any amount of historical data observations in the model. However, it comes with the drawback that the robust versions of relevant combinatorial problems are already computationally difficult (NP-hard) even when considering only two scenarios [KZ16]. Representing U using a general polyhedron, defined by its inner or outer description, suffers from the same limitation [GKZ22].
A significant breakthrough was made with the introduction of budgeted uncertainty sets, also known as the Bertsimas-Sim approach [BS03, BS04]. This approach addresses an uncertain linear objective c⊺x, where each coefficient c_i, i ∈ [n], is bounded by a lower bound c̲_i and an upper bound c̄_i. Moreover, only a fixed integer number Γ of coefficients are allowed to deviate simultaneously from their lower to their upper bounds. In other words, U incorporates a cardinality constraint of the following form:

U^Γ = { c : c_i = c̲_i + δ_i (c̄_i − c̲_i) for all i ∈ [n], δ ∈ {0,1}^n, Σ_{i∈[n]} δ_i ≤ Γ }

The introduction of this simple idea has had a profound impact on the field of robust optimization. The two papers that presented it continue to be widely cited, highlighting their significance. The appeal of uncertainty sets of this nature lies in their simplicity and intuitive interpretation. Furthermore, it has been demonstrated that the robust min-max problem can be decomposed into a manageable number (specifically, O(n)) of nominal-type problems (Nom). This decomposition allows for increased modeling flexibility without incurring significant computational complexity: if the nominal problem can be solved in polynomial time, the corresponding robust problem can be solved in polynomial time as well.
The advantages offered by budgeted uncertainty sets have resulted in their widespread and varied applications to real-world problems.These applications encompass a range of domains, including portfolio management [BP08], wine grape harvesting [BMV10], supply chain control [BT06], furniture production planning [AM12], train load planning [BGKS14], and many others.The versatility of budgeted uncertainty sets has made them a valuable tool in addressing uncertainty and optimizing decision-making in numerous practical scenarios.
A noteworthy characteristic of U^Γ is that if Γ is an integer, we can utilize continuous deviations δ ∈ [0, 1]^n without altering the problem. This is due to the fact that when finding an optimal strategy for the adversary in the problem max_{c∈U^Γ} c⊺x given a fixed solution x, it is sufficient to sort the items chosen by x based on the potential cost deviation c̄_i − c̲_i, and select the Γ largest values. Consequently, the equivalence between "discrete" and "continuous" budgeted uncertainty holds. However, this equivalence does not generally hold in the case of multi-stage robust problems, where recourse actions can be taken after the cost scenario has been revealed (see, e.g., the discussion in [GLW22]).
The effectiveness of budgeted uncertainty sets has led to the emergence of various variants and generalizations of this approach. In the paper [BPS04], norm-based uncertainty sets were introduced; it was demonstrated that the traditional budgeted uncertainty set can be constructed using a specific norm known as the D-norm. Another variant is multi-band uncertainty [BD12], which involves a system of deviation values d^1_ij < d^2_ij < … < d^K_ij with both lower and upper bounds on the number of possible deviations from each band k ∈ [K]. In variable budgeted uncertainty [Pos13], the number of deviations γ taken into account may depend on the size ∥x∥_1 of the solution x for which the adversarial problem is being solved.
Additionally, there is knapsack uncertainty [Pos18], which can be represented as follows:

U^knap = { c : c_i = c̲_i + δ_i (c̄_i − c̲_i) for all i ∈ [n], δ ∈ [0,1]^n, Aδ ≤ b }

with A ∈ R^{m×n}_{≥0} and b ∈ R^m_{≥0}. Here, the set of possible scenarios is bounded by m linear knapsack constraints. When the value of m is fixed, similar results to those obtained for the original set U^Γ can be derived.
A special case of this type of set is locally budgeted uncertainty, see [GL21,Yam23], where each of the knapsack constraints affects a subset of variables, and these subsets are disjoint between constraints.These variants and generalizations of budgeted uncertainty sets provide additional flexibility and adaptability to various problem settings, enhancing the robustness of decision-making under uncertainty.
In this paper we consider a new type of uncertainty set, applicable to an objective or constraints that involve the cardinality ∥x∥_1, e.g., to problems where the task is to maximize the size of a set, or where this cardinality is not allowed to fall below a certain threshold. The motivation to consider such sets comes from a real-world problem involving the composition of teams to take on a set of jobs under uncertain skill requirements (see [AM19, AGM20]). In such problems, one would like to compose teams that can take on the maximum possible number of jobs. From an adversarial perspective, the task is to change the job skill requirements in a way that minimizes the number of jobs that can be carried out successfully. From a more theoretical perspective, the study of robust combinatorial problems often makes use of selection-type problems (see, e.g., [Ave01, DK12, DW13, KKZ15]). In the most basic form, the selection problem requires us to select p out of n possible items, i.e., to solve

min { Σ_{i∈[n]} c_i x_i : Σ_{i∈[n]} x_i ≥ p, x ∈ {0,1}^n }

While this nominal problem is trivial to solve, its robust variants become more complex. A new perspective on problems of this type is to locate the uncertainty not (only) on the item costs; instead, items have different degrees of reliability, and an adversary tries to violate the constraint Σ_{i∈[n]} x_i ≥ p.
Motivated by these two problems, but applicable to a wider range of problems as well, the "budgeted interdiction" approach that we propose is to consider uncertainty sets of the form

U = { c ∈ {0,1}^n : Σ_{i∈[n]} w_i c_i ≤ B }

with w ∈ N^n and B ∈ N. The adversary can therefore interdict a solution (i.e., let items fail), but has a specified budget for this purpose. Throughout the paper, we assume that each w_i is not larger than B; otherwise, its coefficient cannot be attacked and is therefore not uncertain. The uncertainty can affect a cardinality objective function (1 − c)⊺x that should be maximized, or a cardinality constraint (1 − c)⊺x ≥ p. Note that in the corresponding nominal problems, the vector c is not present; it is introduced in the robust problem to model the uncertainty of the all-ones vector 1. Cardinality constraints also play a role in many optimization problems that allow a cut-based formulation. For example, the shortest path problem can be written as

min { Σ_{e∈E} d_e x_e : Σ_{e∈C} x_e ≥ 1 for all s-t cuts C, x ∈ {0,1}^E }

where each s-t cut must be crossed by at least one chosen edge.
Observe that this definition of uncertainty set is essentially the budgeted uncertainty set U^Γ "upside down": while U^Γ has a bound on the number of coefficients that can deviate and the effect of deviation is given by some parameter c̄_i − c̲_i, here we want to maximize the number of deviations and each deviation has a cost parameter w_i. Note that different to U^knap, there is a single budget constraint, we consider a discrete instead of a continuous deviation, and in particular, the vector c is binary.
As an example, consider the selection problem

min { 2x_1 + 3x_2 + 4x_3 + 5x_4 : Σ_{i∈[4]} x_i ≥ 1, x ∈ {0,1}^4 }

where the cardinality constraint Σ_{i∈[4]} x_i ≥ 1 is uncertain and thus can be attacked by an adversary. The cardinality constraint of the robust counterpart of this example becomes

min_{c∈U} Σ_{i∈[4]} (1 − c_i) x_i ≥ 1, i.e., Σ_{i∈[4]} x_i − ϕ(x) ≥ 1,

where the function ϕ(x) represents the number of items that can fail. To further illustrate this setting, let us assume that, in the definition of U, we use B = 10 and w = (3, 7, 4, 10)⊺. A possible solution to the robust problem is to pick items 1, 2 and 3 at cost 2 + 3 + 4 = 9. The adversary can attack items 1 and 3, but does not have sufficient budget to let all three items fail. An even better solution is to pick items 1 and 4 at cost 2 + 5 = 7. In this case, the adversary can only attack one of the two items. The remainder of this paper is structured as follows. In Section 2, we discuss the complexity of the robust problem with budgeted interdiction uncertainty and prove that the problem is not approximable. Furthermore, we provide five compact formulations to solve problems with cardinality constraints under the interdiction uncertainty set in Section 3. Experimental results illustrating the performance of the models for the selection, job assignment and 2-edge-connected subgraph problems are collected in Section 4. We summarize our findings and point out further research questions in Section 5. Details on how to model the compact formulations of the job assignment and cut-based problems are provided in Appendices A and B, respectively.

Complexity Analysis
In order to assess the complexity of the robust selection problem under budgeted interdiction uncertainty, we first introduce a compact formulation:

min { Σ_{i∈[n]} d_i x_i : Σ_{i∈[n]} x_i − ϕ(x) ≥ p, x ∈ {0,1}^n }  (ROSel)

where the adversarial problem is

ϕ(x) = max { Σ_{i∈[n]} x_i c_i : Σ_{i∈[n]} w_i c_i ≤ B, c ∈ {0,1}^n }

for a given x ∈ {0,1}^n. There is a simple algorithm to solve this problem: we sort the items i with x_i = 1 by non-decreasing weight w_i and pack items in this order until the budget B cannot accommodate any further item. Hence, the adversarial problem can be solved in O(n) time (as it is not necessary to sort the complete vector; see, e.g., [KV18, Chapter 17.1]). We now show that the decision version of the robust selection problem with interdiction uncertainty affecting the constraint (ROSel) is hard.
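The greedy evaluation of ϕ(x) described above can be sketched as follows; this is a minimal sketch using a full sort (O(n log n)) rather than the O(n) median-selection variant mentioned in the text, and the function name `phi` is ours:

```python
# Greedy evaluation of the adversarial problem phi(x): sort the selected
# items by non-decreasing weight and attack them until the budget B is
# exhausted.

def phi(x, w, B):
    """Number of selected items the adversary can interdict within budget B."""
    weights = sorted(w_i for x_i, w_i in zip(x, w) if x_i == 1)
    attacked, spent = 0, 0
    for w_i in weights:
        if spent + w_i > B:
            break
        spent += w_i
        attacked += 1
    return attacked

# Example from the introduction: B = 10, w = (3, 7, 4, 10).
w, B = [3, 7, 4, 10], 10
print(phi([1, 1, 1, 0], w, B))  # items 1-3: attack weights 3 and 4 -> 2
print(phi([1, 0, 0, 1], w, B))  # items 1 and 4: only one of them fits -> 1
```

On the introductory example this reproduces the discussion there: of items 1, 2, 3 two can fail, while of items 1 and 4 only one can fail.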
Theorem 1. The following decision problem is NP-complete: Given d, w ∈ N^n, B, p ∈ N and a value V ∈ N, is there a solution x ∈ {0,1}^n with Σ_{i∈[n]} d_i x_i ≤ V and Σ_{i∈[n]} x_i − ϕ(x) ≥ p?

Proof. Given a candidate solution x, both its objective value and ϕ(x) can be evaluated in polynomial time, which means that the decision problem is indeed in NP.
To show NP-completeness, we make use of the partition problem: Given positive integers a_1, …, a_n with Σ_{i∈[n]} a_i = 2V, is there a subset S ⊆ [n] with Σ_{i∈S} a_i = V? Given such an instance of the partition problem, we construct a robust problem with budgeted interdiction in the following way. Set d_i = w_i = a_i for all i ∈ [n], B = V − 1, and p = 1. The constraint Σ_{i∈[n]} x_i − ϕ(x) ≥ 1 then requires us to pack items of total weight strictly greater than V − 1 to avoid having all items interdicted. This means that the partition problem is a Yes-instance if and only if there is a feasible solution x ∈ {0,1}^n with objective value less than or equal to V. As the partition problem is well known to be NP-complete [GJ79], the claim follows.
This brief analysis shows that we lose a main advantage of classic budgeted uncertainty, where the robust problem can be decomposed into a set of nominal problems. Note that Theorem 1 applies to optimization problems with an uncertain cardinality constraint and an objective Σ_{i∈[n]} d_i x_i that should be minimized, but it also applies to the case of a single linear constraint Σ_{i∈[n]} d_i x_i ≤ V and an uncertain cardinality objective that should be maximized. In particular, in the latter case this means that it is NP-complete to find a solution with a non-zero objective value; in other words, it is not possible to find a polynomial-time approximation algorithm for this setting, unless P=NP. Hence we conclude the following result.

Corollary 2. The optimization problem max { Σ_{i∈[n]} x_i − ϕ(x) : x ∈ X } cannot be approximated within any factor α > 0, unless P=NP, even if X is given by a single linear constraint.
Proof. Given a partition problem as in the proof of Theorem 1, set X = {x ∈ {0,1}^n : Σ_{i∈[n]} a_i x_i ≤ V}. Then there is a solution with objective value greater than or equal to one if and only if the partition problem is a Yes-instance. Hence, there cannot be an α-approximation for any α > 0, unless P=NP.

Model Formulations
In this section, we introduce five compact formulations of the robust problem, where we focus on an uncertain cardinality constraint Σ_{i∈[n]} x_i ≥ p for ease of presentation. Additional constraints on x may be considered, which are assumed to be modeled in the set X ⊆ {0,1}^n. That is, we consider reformulations of the following type of robust problem with cardinality constraints:

min { Σ_{i∈[n]} d_i x_i : Σ_{i∈[n]} (1 − c_i) x_i ≥ p for all c ∈ U, x ∈ X }

where the nominal problem corresponds to the case c = 0. In addition, without loss of generality, we assume that the items are sorted by non-decreasing weights, i.e., w_1 ≤ w_2 ≤ … ≤ w_n.

IP-1
The first idea for a compact formulation is only applicable to the case p = 1 with integer weights w. This means that only one item needs to remain after the adversary attacks, a case that is still hard, as Theorem 1 shows. It therefore suffices to pack items of minimum cost whose total weight strictly exceeds the adversarial budget B. This idea can be formulated as follows:

min { Σ_{i∈[n]} d_i x_i : Σ_{i∈[n]} w_i x_i ≥ B + 1, x ∈ X }  (IP-1)
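The equivalence behind IP-1 can be sanity-checked by enumeration on a tiny instance: for p = 1, a solution is robust-feasible exactly when its total selected weight is at least B + 1, so the adversary cannot interdict every selected item. The following minimal sketch (exponential enumeration, for illustration only; the helper `phi` is our own name for the greedy adversary) verifies this on the introductory example:

```python
# Brute-force check of the IP-1 idea for p = 1: "one item survives every
# attack" coincides with "total selected weight >= B + 1".

from itertools import product

def phi(x, w, B):
    """Greedy adversary: attack cheapest selected items within budget B."""
    weights = sorted(wi for xi, wi in zip(x, w) if xi)
    attacked = spent = 0
    for wi in weights:
        if spent + wi > B:
            break
        spent += wi
        attacked += 1
    return attacked

d, w, B = [2, 3, 4, 5], [3, 7, 4, 10], 10
best = None
for x in product([0, 1], repeat=4):
    survives = sum(x) - phi(x, w, B) >= 1                     # robust constraint, p = 1
    exceeds = sum(wi * xi for wi, xi in zip(w, x)) >= B + 1   # IP-1 feasibility
    assert survives == exceeds                                # the criteria coincide
    if survives:
        cost = sum(di * xi for di, xi in zip(d, x))
        best = cost if best is None else min(best, cost)
print(best)  # optimal cost 7, as in the introductory example
```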

IP-2
We now consider the general case of arbitrary values of p. As noted, the adversarial problem ϕ(x) can be solved in polynomial time by packing items with smallest weight first. We therefore introduce variables λ_k ∈ {0,1} for k ∈ [n], where λ_k is active if and only if the adversary attacks the first k items (the case k = 0 can be ignored, as at least one item can always be attacked, due to each w_i being not larger than B). An attack only incurs costs on the interdiction budget if x_i = 1. Hence, we obtain the following integer program for the adversarial problem:

ϕ(x) = max Σ_{k∈[n]} (Σ_{i∈[k]} x_i) λ_k
s.t. (Σ_{i∈[k]} w_i x_i) λ_k ≤ B for all k ∈ [n]  (2)
Σ_{k∈[n]} λ_k ≤ 1  (3)
λ_k ∈ {0,1} for all k ∈ [n]

By Constraint (3), we can only choose one of the candidate attacks represented by λ_k. Due to Constraint (2), we cannot use attack k if its budget requirement Σ_{i∈[k]} w_i x_i exceeds B. It is easy to see that we can relax the integrality constraints on λ_k, which gives an LP formulation for ϕ(x). By using linear programming duality, we thus obtain a formulation for the robust problem under budgeted interdiction uncertainty. This formulation is nonlinear due to the products (w_i α_k − 1)x_i, where α_k denote the dual variables of Constraints (2). As x is binary, we can linearize the model using standard techniques; the resulting model is called (IP-2).

IP-3
We now consider a third option to model the robust problem. As an alternative to IP-2, here the adversarial problem is obtained by considering the ratio of the attacked item weights to the budget B, rounded down. Analogously to IP-2, ϕ(x) represents the largest number of items that can be attacked by the adversary. By using linear programming duality, we can derive a formulation for the robust problem. This formulation is not linear. To eliminate the floor function, we introduce a new integer variable y_k. This leads to products x_i y_k, which are linearized using additional variables z_ik with z_ik = x_i y_k. We hence obtain a compact formulation of the robust problem under budgeted interdiction uncertainty, called (IP-3).

IP-4
In the following, we assume that w is integer. Let W_k(x) be the smallest required weight to attack at least k items of x. If this is not possible, then Σ_{i∈[n]} x_i < k and we set W_k(x) = ∞.
To calculate this value, consider the following selection problem:

W_k(x) = min { Σ_{i∈[n]} w_i z_i : Σ_{i∈[n]} z_i ≥ k, z_i ≤ x_i for all i ∈ [n], z ∈ {0,1}^n }

where we define the minimum over an empty set to be infinity. This allows us to reformulate ϕ(x) as a minimization problem in the following way:

ϕ(x) = min { Σ_{k∈[n]} y_k : (B + 1) y_k ≥ B + 1 − W_k(x) for all k ∈ [n], y ∈ {0,1}^n }

where Σ_{k∈[n]} y_k represents the maximum number of items that can be attacked by the adversary. Therefore, if W_k(x) ≤ B, we need to set y_k = 1 to fulfill the constraint; otherwise, we can choose y_k = 0. As ϕ(x) is expressed as a minimization problem, the robust constraint Σ_{i∈[n]} x_i − ϕ(x) ≥ p can be imposed directly with y as additional variables. To replace W_k(x), note that we can relax the variables z_i without changing the value of the corresponding minimization problem. This means that we can use linear programming duality again to arrive at a problem formulation with dual variables β^k_i and Constraints (8)-(11), including y ∈ {0,1}^n (11). To avoid the nonlinearity between β^k_i and x_i, we increase the weight w_i sufficiently far if x_i = 0, so that item i cannot be part of an attack within the available budget B; that is, in Constraint (8) we replace w_i by w_i + (B + 1)(1 − x_i). The resulting model is called (IP-4).
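Since the items are sorted by non-decreasing weight, W_k(x) is simply the sum of the k smallest weights among the selected items, and ϕ(x) is the largest k with W_k(x) ≤ B. A minimal sketch (the function name `W` is ours):

```python
# W_k(x): minimum interdiction budget needed to attack at least k selected
# items, i.e. the sum of the k smallest weights among items with x_i = 1
# (infinity if fewer than k items are selected).

import math

def W(x, w, k):
    weights = sorted(wi for xi, wi in zip(x, w) if xi)
    if len(weights) < k:
        return math.inf
    return sum(weights[:k])

w, B = [3, 7, 4, 10], 10
x = [1, 1, 1, 0]
values = [W(x, w, k) for k in range(1, 5)]
print(values)  # [3, 7, 14, inf]
phi_x = sum(1 for k in range(1, 5) if W(x, w, k) <= B)
print(phi_x)  # 2: the adversary can afford at most two items
```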

IP-5
In this final model, we again make the assumption that the weights w are integer. For fixed k ∈ [n], consider the constraint

(B + 1) y_k ≥ (B + 1) x_k − Σ_{i∈[k]} w_i x_i

with binary variable y_k, which indicates an attack of the adversary on item k. As the adversary will attack items in the order from 1 to n if possible, the term Σ_{i∈[k]} w_i x_i gives the required budget to attack all items that x packed up to and including item k. This means that if x_k = 1 and Σ_{i∈[k]} w_i x_i ≤ B, then item k will be attacked. In the constraint, this corresponds to the case that (B + 1)y_k ≥ B + 1 − W with W ≤ B, so y_k = 1 is the only feasible choice. On the other hand, if x_k = 0 or Σ_{i∈[k]} w_i x_i > B, the right-hand side is non-positive and y_k = 0 is feasible. This discussion shows that the following compact formulation for the robust problem with budgeted interdiction uncertainty is correct:

min Σ_{i∈[n]} d_i x_i
s.t. Σ_{i∈[n]} x_i − Σ_{k∈[n]} y_k ≥ p
(B + 1) y_k ≥ (B + 1) x_k − Σ_{i∈[k]} w_i x_i for all k ∈ [n]
y ∈ {0,1}^n
x ∈ X

We refer to this model as (IP-5).
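The prefix-sum logic that forces y_k can be checked directly: with items sorted by non-decreasing weight, item k is attacked exactly if x_k = 1 and the budget covers all selected items up to and including k, and the forced y-values sum to ϕ(x). A minimal sketch (the function name `y_values` is ours):

```python
# Forced y_k values in the IP-5 logic: item k is attacked iff it is selected
# and the running budget for attacking items 1..k of x stays within B.

def y_values(x, w, B):
    y, prefix = [], 0
    for xk, wk in zip(x, w):
        prefix += wk * xk                     # budget needed to attack items 1..k of x
        y.append(1 if xk == 1 and prefix <= B else 0)
    return y

w, B = [3, 4, 7, 10], 15                      # weights already sorted non-decreasingly
x = [1, 1, 1, 1]
y = y_values(x, w, B)
print(y, sum(y))  # [1, 1, 1, 0] 3 -- the adversary can afford items 1-3 only
```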

Model Comparison
In Table 1, we summarize the five proposed models.
Column "# Cont. V." gives the additional number of continuous variables, while "# Disc. V." gives the additional number of discrete variables in the model (beyond the variables x ∈ X). Column "# Con." shows the additional number of constraints created in comparison to the nominal model. Some models require w to be integer. This property can be replaced with the more general requirement that we can determine some value ϵ > 0 in polynomial time such that for each c ∈ {0,1}^n, either w⊺c ≤ B or w⊺c ≥ B + ϵ holds. For integer weights, ϵ = 1 clearly fulfills this property. Having five candidate models available (four of which are general), a natural question is to compare the strengths of their respective linear programming relaxations: Is it possible that one model dominates another, in the sense that any feasible solution of the LP relaxation of the first model corresponds to a feasible solution of the LP relaxation of the second model with the same objective value? As we show experimentally in the next section, this is not the case: for any pairwise comparison of models, there are instances where the LP relaxation of one model outperforms that of the other.

Experiments
In this section we evaluate the performance of our IPs on three different problems under budgeted interdiction uncertainty. We start with the selection problem, as the models were derived with respect to this problem. We then present results for the job assignment and the 2-edge-connected subgraph problem.
To evaluate solution times for each problem, in addition to presenting plain solution times, we use performance profiles as introduced in [DM02]. We briefly recall this concept: Let S be the set of considered models, K the set of instances, and t_{k,s} the runtime of model s on instance k. We set t_{k,s} to infinity (or sufficiently large) if model s does not solve instance k within the time limit. The percentage of instances for which the performance ratio of solver s is within a factor τ ≥ 1 of the best of all solvers is given by

ρ_s(τ) = |{ k ∈ K : t_{k,s} ≤ τ · min_{s'∈S} t_{k,s'} }| / |K|

Hence, the function ρ_s can be viewed as the distribution function of the performance ratio, which is plotted in a performance profile for each model. For all experiments on the selection problem, we use CPLEX version 12.8 on an Intel Xeon Gold 6154 CPU server running at 3.00 GHz with 754 GB RAM. For the experiments on the job assignment and 2-edge-connected subgraph problems, we use CPLEX version 22.11 on an Intel pc-i440fx-7.2 CPU server running at 2.00 GHz with 15 GB RAM. All processes are restricted to one thread.
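The profile computation above can be sketched as follows; the runtime data in the example is hypothetical and only illustrates the mechanics:

```python
# Performance profiles as in [DM02]: rho_s(tau) is the fraction of instances
# on which model s is within a factor tau of the fastest model. Unsolved
# instances get runtime infinity.

import math

def performance_profile(times, tau):
    """times: dict model -> list of runtimes (math.inf if unsolved)."""
    models = list(times)
    n_inst = len(times[models[0]])
    best = [min(times[s][k] for s in models) for k in range(n_inst)]
    return {s: sum(times[s][k] <= tau * best[k] for k in range(n_inst)) / n_inst
            for s in models}

# Hypothetical runtimes of two models on three instances.
times = {"IP-3": [1.0, 2.0, 8.0], "IP-5": [2.0, 1.0, math.inf]}
print(performance_profile(times, 1.0))  # IP-3 is fastest on 2 of 3 instances
print(performance_profile(times, 2.0))  # within factor 2, IP-3 covers all 3
```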

Setup
We conduct three types of experiments on robust selection problems of the type

min { Σ_{i∈[n]} d_i x_i : Σ_{i∈[n]} x_i ≥ p, x ∈ {0,1}^n }

with budgeted interdiction uncertainty in the cardinality constraint. In the first experiment, we compare the tightness of the lower bounds obtained by all five models for fixed n and p. The remaining experiments compare the solution times of the proposed models.
In the second experiment, we vary the size of n.This is done in two ways: once for a fixed value of p, and once for a value of p that increases with n.In the third experiment, we fix n and vary only the size of p.This way, we can thoroughly evaluate the effect these two parameters have on the performance of the models.
We use two methods to generate instances for the selection problem under budgeted interdiction uncertainty, called Gen-1 and Gen-2. In Gen-1, the weights are chosen independently of the corresponding costs, which means that there may be both particularly good items (with low c_i and high w_i) and bad items. In Gen-2, the weights of items depend on their costs, which intuitively may lead to harder instances than Gen-1. The generation methods are as follows:

• Gen-1: for each i ∈ [n], we choose d_i and w_i from {1, …, 100} independently uniformly at random.

• Gen-2: for each i ∈ [n], the value of w_i depends on the value of d_i; we choose d_i from {1, …, 100} and then w_i from {max(1, d_i − 5), …, min(100, d_i + 5)} uniformly at random.

In both cases, the budget B is set in the same way.

Experiment 1

Here we focus on the LP relaxations of all models and compare the lower bounds obtained by each of them. In this experiment, we fix n = 10 and p = 1 to include all models. We solve the LP relaxations of 1000 instances for each combination of generation and solution methods using CPLEX and then perform a pairwise comparison of the resulting lower bounds. The results of this experiment are presented in Tables 2 and 3 for Gen-1 and Gen-2, respectively. Each number shows how many times the model in the respective row provided a strictly better (i.e., higher) lower bound than the model in the corresponding column. The last column shows the average percentage of cases per row in which the model has been better than the comparison models. Based on the information provided in Table 2 for Gen-1, we note that IP-1 dominates all other models in over 98 percent of cases (which is not surprising, as it is the most specialized model). The next best model is IP-5, followed by IP-3. With some gap behind these two models follow IP-4 and IP-2. The weakest model, IP-2, is stronger than another model in only around 12 percent of cases.
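The generators Gen-1 and Gen-2 described above can be sketched as follows; this is a minimal sketch in which the budget B is left out, since its value is set separately in the experiments, and the function names are ours:

```python
# Instance generators Gen-1 and Gen-2 for the robust selection problem:
# Gen-1 draws costs and weights independently, Gen-2 correlates weights
# with costs.

import random

def gen1(n, rng=random):
    d = [rng.randint(1, 100) for _ in range(n)]
    w = [rng.randint(1, 100) for _ in range(n)]  # weights independent of costs
    return d, w

def gen2(n, rng=random):
    d = [rng.randint(1, 100) for _ in range(n)]
    # weights correlated with costs: w_i in {max(1, d_i - 5), ..., min(100, d_i + 5)}
    w = [rng.randint(max(1, di - 5), min(100, di + 5)) for di in d]
    return d, w

d, w = gen2(10)
assert all(max(1, di - 5) <= wi <= min(100, di + 5) for di, wi in zip(d, w))
```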

Interestingly, this ordering changes when using instances of type Gen-2, see Table 3. While IP-1 still outperforms the other models in nearly all cases (over 99 percent), the second best model is IP-3, which performs relatively better than before. Similarly, IP-2 has improved in the ranking, while IP-5 (which was the second best choice in Table 2) falls behind.

Experiment 2
In this experiment, we vary the problem size n ∈ {20, 25, …, 100}. The experiment is divided into two parts. In the first part, we fix p = 1 so that all solution methods can be included, and also consider p = 5 to compare the methods that can be applied to cases with p > 1. In the second part, we use p = n/5, so that p grows linearly in n. For each combination of generation and solution methods, we solve 50 instances using CPLEX with a 600-second time limit. We always present a plot of average solution times and a performance profile. Figure 1 shows the solution times for p = 1. Clearly, IP-1 is the fastest model for both Gen-1 and Gen-2, followed by IP-3. The other models also show similar behavior for both generation methods. Interestingly, IP-2 may even become faster as n increases, which seems counterintuitive, but can be explained by the fact that p remains constant. The performance profiles (see Figure 2) reflect a similar relative performance of the five models.
In Figures 3 and 4, we show the average solution times and performance profiles for the case p = 5, where IP-1 is not included. A behavior similar to the case p = 1 can be seen. As IP-1 is excluded, here IP-3 has the best average solution time; the difference is that IP-3 fails to be faster than IP-5 for instances with smaller n. Similarly, in the performance profiles, IP-3 dominates the other models. The main difference compared to the case p = 1 is that none of the models is always superior to the others. We now consider the case p = n/5, i.e., p grows linearly with n. The average solution times and performance profiles of this experiment are presented in Figures 5 and 6, respectively. The results of the second part closely resemble those of the first part with p = 5. That is, IP-3 beats all formulations except IP-5 for n = 20, 30, 40 with Gen-1 and n = 20, 30 with Gen-2. In addition, the behavior of IP-2 and IP-4 is similar, although their comparison is difficult because of the time limit. In this experiment, IP-4 has better solution times than IP-2 for Gen-1, but for Gen-2, IP-2 is slightly faster than IP-4.

Experiment 3
In this experiment, we fix the number of items n and vary the number of items p we want to select. We consider the cases n = 20 and n = 40; in both cases, p is chosen from {1, 2, …, 10}. As before, we solve 50 instances using CPLEX with a 600-second time limit for each combination of generation and solution methods. The results of this experiment are provided in Figures 7-10. Interestingly, the experiment for instances with n = 20 shows that IP-2, IP-3 and IP-4 are dominated by IP-5, except for p = 1, for both Gen-1 and Gen-2; in this single case, IP-5 is outperformed by IP-3. The experiment also shows that the problem is solved faster for larger values of p, which can be explained by the observation that for larger values of p, nearly all items need to be selected (recall that we need to pack more than p items to respect the uncertainty). Similar to the other experiments, the performance profile (see Figure 8) shows that the ranking obtained for average solution times also holds for the instance-wise comparison of the models. The results depicted in Figures 9 and 10 show that the problems first tend to become harder to solve before solution times fall again. Notably, unlike the case n = 20, IP-3 shows the best performance with regard to solution time and is always superior to the other formulations. In this case, the problem is considerably harder to solve than for n = 20: some models hit the time limit even for p = 1, while IP-3 never comes close to it. The performance profile confirms that IP-3 is almost always the fastest IP.

Setup
We now consider a job assignment problem with m jobs and n workers. Each job j ∈ [m] has a profit p_j and a worker demand d_j. Here, instead of weights for items as in the selection problem, we have a weight w_i for each worker i ∈ [n], which the adversary must pay to let this worker fail. The robust job assignment problem under budgeted interdiction uncertainty can then be formulated as a compact integer program with assignment variables x_ij ∈ {0,1} and job-completion variables z_j ∈ {0,1}. To evaluate the performance of our IPs on the job assignment problem, we consider IP-2, IP-3, IP-4 and IP-5; the compact formulations of these four IPs are collected in Appendix A. Furthermore, we conduct two experiments: in Experiment 1, we fix the number of jobs and vary the number of workers, while in Experiment 2, we fix the number of workers and vary the number of jobs.
Similar to the experiments on the selection problem, we use two instance generation methods for the job assignment problem under budgeted interdiction uncertainty, called Gen-1 and Gen-2. In Gen-1, the job demands are chosen independently of the corresponding profits. In Gen-2, the profits of jobs depend on their demands. The generation methods are as follows:

Gen-1

• For each j ∈ [m], we choose d_j from {1, …, 2n/m} independently uniformly at random.
• For each j ∈ [m], we choose p_j from {1, …, 25} independently uniformly at random.

Gen-2
• For each j ∈ [m], we choose d_j from {1, …, 2n/m} independently uniformly at random.
• For each j ∈ [m], the value of p_j depends on the value of d_j: if d_j ≤ n/m, then p_j is chosen from {1, …, 25}, otherwise from {10, …, 34}, uniformly at random.
In both cases, we choose w_i from {101, …, 150} uniformly at random for all i ∈ [n], and set the budget B accordingly.

Experiment 1

In this experiment, we fix the number of jobs m and vary the number of workers n. To this end, we consider m = 5 and n ∈ {5, 10, …, 40}. For each combination, we solve 50 instances with a time limit of 600 seconds and show the average solution times. The results of this experiment are provided in Figures 11 and 12.
The solution times presented in Figure 11 show that all introduced IPs behave similarly for the given generation methods. The problem steadily becomes harder to solve from n = 5 to n = 30, after which the solution times of all IPs decrease slightly. Here, IP-4 and IP-5 (which is also one of the best models for the selection problem) are the fastest IPs; the reason is that they have fewer constraints.
Like the solution times, Figure 12 illustrates that IP-5 has the best performance profile, meaning that for most instances it is the fastest IP, followed by IP-4. In this setting, IP-2 is the worst IP in terms of solution times. In the second experiment of the job assignment problem, we fix the number of workers (n = 20) and vary the number of jobs (m ∈ {2, 3, …, 9}). Again, for each combination, we solve 50 instances with a time limit of 600 seconds and show the average solution times. The time performance of this experimental setting is shown in Figures 13 and 14.
Here, the same trend as in Experiment 1 (4.2.2) can be observed in terms of both solution times and the corresponding performance profile. The results for the two generation methods are equivalent. However, the drop in solution times for the larger instances is less noticeable. Likewise, IP-5 has both the best average and the best instance-wise solution times, and the slowest IP is again IP-2.

Setup

We finally consider the 2-edge-connected subgraph problem on a graph G = (V, E) with edge weights d_e, e ∈ E. The objective is to find a subset of edges E′ with minimum weight such that G[E′] is 2-edge-connected, i.e., there are two edge-disjoint paths between any pair of nodes (see, e.g., [Huh04]). We consider a robust version where edges can fail, but the subgraph is still required to remain 2-edge-connected.
The nominal problem can be formulated as follows:

min { Σ_{e∈E} d_e x_e : Σ_{e∈C} x_e ≥ 2 for all C ∈ C, x ∈ {0,1}^E }

where C denotes the set of all cuts in the graph G. As there are exponentially many constraints, we use an iterative procedure. In Appendix B, we describe the models IP-2, IP-3, IP-4 and IP-5 for a subset C′ ⊆ C of cuts. We begin with C′ = ∅, solve the corresponding formulation, and check whether the resulting solution x is feasible with respect to all cuts in C. To check whether a cut is violated, we solve a separation IP. If a violated cut is found, it is added to C′, and the robust problem is solved again, until convergence is reached. We introduce one experiment in which the number of nodes of the given graph varies. We set the density D of the graph to 0.8.
Unlike the experiments on the selection and job assignment problems, we use only one instance generation approach for the cut-based problem under budgeted interdiction uncertainty. The graph has n nodes and m edges, where m = D · n(n−1)/2. For each e ∈ E, we choose d_e from {1, …, 10} and w_e from {5, …, 10} uniformly at random. Moreover, we set B = 15.

Experiment
In the only experiment for the cut-based problem, we vary the number of nodes, choosing n from {10, 12, …, 30}. Again, for each instance size, we solve 50 instances with a time limit of 600 seconds and show the average solution times. The time performance of this experimental setting is shown in Figure 15.
The results gathered in Figure 15 show that the IPs behave similarly for the cut-based problem as for the job assignment problem. Again, IP-5 is the fastest IP, and its performance profile dominates the other methods. The second best is IP-4, and the worst performance belongs to IP-2.

Conclusions
Tools to model uncertainty sets are of central importance in robust optimization.While classic budgeted uncertainty sets make the assumption that each "attack" (i.e., changing a parameter away from its nominal value) has the same costs, extensions such as knapsack uncertainty sets relax this constraint and allow different attacks to have different costs.However, for a discrete knapsack uncertainty set, this means that even evaluating a solution (i.e., calculating the worst possible attack) requires us to solve an NP-hard knapsack problem.
In this paper, we propose an alternative uncertainty set, where attacks may have different costs, but each attack leads to the same consequence.Such a model is particularly useful to model uncertainty in cardinality-based constraints or objectives.This type of uncertainty has the advantage that calculating a worst-case attack is still possible in polynomial time, even though the corresponding robust problems become hard.We also demonstrated how this approach can be applied to a job assignment problem, and the problem of finding a minimum-cost 2-edge-connected subgraph under failures.
We considered different ways to model the worst-case attack, leading to a total of five compact integer programming formulations, none of which dominates the others. Our computational experiments for the selection problem indicate that in particular models IP-3 (based on rounding down attack budgets with integer variables) and IP-5 (based on forcing a binary variable to become active if an item can be attacked) show promising performance and can solve problems with up to 100 items to proven optimality within seconds. For the job assignment and the minimum-cost 2-edge-connected subgraph problem, however, IP-5 has the best performance, followed by IP-4.
In further research, it will be interesting to explore whether additive approximation algorithms are possible, as multiplicative approximation guarantees are impossible to achieve. Furthermore, we intend to study budgeted interdiction sets in multi-stage environments such as two-stage and recoverable robust optimization.