Designing Tractable Piecewise Affine Policies for Multi-Stage Adjustable Robust Optimization

We study piecewise affine policies for multi-stage adjustable robust optimization (ARO) problems with non-negative right-hand side uncertainty. First, we construct new dominating uncertainty sets and show how a multi-stage ARO problem can be solved efficiently with a linear program when uncertainty is replaced by these new sets. We then demonstrate how solutions for this alternative problem can be transformed into solutions for the original problem. By carefully choosing the dominating sets, we prove strong approximation bounds for our policies and extend many previously best-known bounds for the two-stage problem variant to its multi-stage counterpart. Moreover, the new bounds are, to the best of our knowledge, the first bounds shown for the general multi-stage ARO problem considered. We extensively compare our policies to other policies from the literature and prove relative performance guarantees. In two numerical experiments, we identify beneficial and disadvantageous properties for different policies and present effective adjustments to tackle the most critical disadvantages of our policies. Overall, the experiments show that our piecewise affine policies can be computed orders of magnitude faster than affine policies, while often yielding comparable or even better results.


Introduction
In practice, most decision-making problems have to be solved in view of uncertain parameters. In the operations research domain, two fundamental frameworks exist to inform such decisions. The stochastic optimization framework captures uncertainty by using probability distributions and aims to optimize an expected objective. Initially introduced in the seminal work of Dantzig [26], the framework has been intensively studied and scholars used it to solve a vast variety of problems including production planning [6,55], relief networks [49], expansion planning [62,64], and newsvendor problems [52]. While stochastic optimization performs well on many problem classes, finding tractable formulations is oftentimes challenging. Additionally, the data needed to approximate probability distributions might not always be available, and gathering data is often a costly and time-consuming process. The robust optimization (RO) framework overcomes many of these shortcomings by capturing uncertainty through distribution-free uncertainty sets instead of probability distributions. By choosing well-representable uncertainty sets, RO often offers computationally tractable formulations that scale well on a variety of optimization problems. In recent years, these favorable properties have led to a steep increase in research interest, see, e.g., [8,11,20,36,63]. The flexibility of uncertainty sets and scalability of solution methods also make RO very attractive for practical applications, and it has been widely applied to many operations management problems [45].
In traditional RO, all decisions must be made before the uncertainty realization is revealed. However, in real-world situations some decisions can oftentimes be delayed until after (part of) the uncertainty realization is known. As a consequence, RO may lead to excessively conservative solutions. To remedy this drawback, Ben-Tal et al. [9] introduced the concept of adjustable robust optimization (ARO), where some decisions can be delayed until the uncertainty realization is (partly) known. In general ARO, uncertainty realizations are revealed over multiple stages and decisions can be made after each reveal. A decision made in stage t can thus be modeled as a function of all uncertainties associated with previous stages t′ ≤ t.
While ARO improves decision-making in theory, ARO is equivalent to RO on some special problem instances, where static decision policies yield optimal adjustable solutions [9,22]. Using similar arguments to the ones used for the optimality of static solutions, Marandi and den Hertog [46] identified conditions under which optimal adjustable decisions are independent of some uncertain parameters. In general, static policies do not yield optimal solutions in the ARO setting. Elucidating this, Haddad-Sisakht and Ryan [37] identified a collection of sufficient conditions that imply the suboptimality of static policies and a strict improvement of ARO over RO.
In general, even the task of finding optimal adjustable solutions in the special case of two-stage ARO proves to be computationally intractable [9]. Accordingly, recent works developed many approximation schemes for ARO that often yield good and sometimes even optimal results in practice, see, e.g., [9,19,42,58]. In the special case with only two decision stages, the first-stage decisions are fixed before any uncertain parameters are known and the second-stage decisions use full knowledge of the uncertainty realization. Two-stage ARO already has many applications in practice and has been widely studied in the literature, see, e.g., [14,18,22,16,23,39,40,59].
Still, many real-world problems show inherent multi-stage characteristics and cannot be modeled by two-stage ARO. Examples include variants of inventory management [12], humanitarian relief [13], and facility location [5]. The transition from two-stage to multi-stage ARO introduces two main challenges. First, multi-stage ARO problems entail nonanticipativity restrictions that disallow decisions to utilize future information. Second, many approaches that solve the problem by iteratively splitting the uncertainty space, like scenario trees [41] and adaptive partitioning [16,53], grow exponentially in the number of stages. As a consequence, many results found for two-stage ARO do not readily generalize to multi-stage scenarios. Against this background, we design piecewise affine policies for multi-stage ARO that overcome the previously mentioned challenges and extend, although it is not straightforward, many of the best-known approximation bounds for two-stage problems to a multi-stage setting.
In the remainder of this section, we formally introduce our problem (Section 1.1), discuss closely related work (Section 1.2), and summarize our main contributions (Section 1.3).

Problem Description
In this work we study multi-stage adjustable robust optimization with covering constraints and a non-negative affine uncertain right-hand side. Specifically, we consider the following problem:

Z_AR(U) = min_{x(·)} max_{ξ∈U} c⊺x(ξ)  s.t.  Ax(ξ) ≥ Dξ + d  ∀ξ ∈ U,    (1)

with c ∈ R^n, A ∈ R^{l×n}, D ∈ R_+^{l×m}, and d ∈ R_+^l. Here, m is the number of uncertain parameters, n is the number of decisions, and l is the number of constraints. To model the problem's T stages, we split the uncertainty vector ξ into T sub-vectors ξ = (ξ_1, . . ., ξ_T), with ξ_t being the uncertainty vector realized in stage t. In the following, we denote by ξ^t := (ξ_1, . . ., ξ_t) the vector of all uncertainties with known realization in stage t. Similarly, the adjustable decision vector x(ξ) divides into x(ξ) := (x_1(ξ^1), . . ., x_T(ξ^T)), where the decision x_t made in stage t has to preserve nonanticipativity and may only depend on those uncertainties ξ^t whose realization is known in stage t. We explicitly allow ξ_1 to be zero-dimensional, making the initial decision x_1 non-adjustable. Finally, we denote by x^t(ξ) := (x_1(ξ^1), . . ., x_t(ξ^t)) the vector of all decisions in the first t stages. Figure 1 visualizes the multi-stage decision process with nonanticipativity restrictions.
Unless explicitly stated otherwise, we assume w.l.o.g. that the following assumption holds throughout the paper.

Assumption 1. U ⊆ [0, 1]^m is convex, full-dimensional with e_i ∈ U for all i ∈ {1, . . ., m}, and down-monotone, i.e., ∀ξ ∈ U, 0 ≤ ξ′ ≤ ξ : ξ′ ∈ U.

Down-monotonicity holds because D, d, ξ are all non-negative, and thus constraints become less restrictive for smaller values of ξ. Convexity holds due to the linearity of the problem, and e_i ∈ U ⊆ [0, 1]^m holds as U is compact and D can be re-scaled appropriately. We note that the non-negativity assumption of the right-hand side does restrict the problem space. As Bertsimas and Goyal [18] point out, this assumption prevents the introduction of uncertain or constant upper bounds. However, upper bounds in other decision variables are still possible, as A is not restricted and D, d can be zero. Overall, Problem (1) covers a wide range of different problem classes including network design [50,61], capacity planning [44,48,51], as well as versions of inventory management [12,60] where capacities are unbounded or subject to the decision maker's choice.

Figure 1: Illustration of multi-stage decision making over T stages. In each stage t a fraction ξ_t of the uncertainty is realized and decisions x_t are made. Here, decisions x_t may only depend on those uncertainties ξ^t whose realization is known in stage t.
In the context of multi-stage decision making, some works require stagewise uncertainty, i.e., that the uncertainty set U consists of separate uncertainty sets U_1, . . ., U_T for each stage, see, e.g., [19,20,32]. Like other approaches based on decision rules [43,17], we do not need this restrictive assumption. However, we show how to utilize the existence of such a structure in Section 3.

Related Work
Feige et al. [30] show that already the two-stage version of Problem (1) with D = 1, d = 0, and A being a 0-1 matrix is hard to approximate with a factor better than Ω(log m), even for budgeted uncertainty sets. As it is thereby impossible to efficiently find optimal general solutions for x, a common technique to obtain tractable formulations is to restrict the function space.
In this context, Ben-Tal et al. [9] consider x to be affine in ξ. Specifically, they propose x_t to be of the form x_t(ξ^t) = P_t ξ^t + q_t. Affine policies have been found to deliver good results in practice [1,10] and are even optimal for some special problems [19,42,58]. Further popular decision rules include segregated affine [25,24], piecewise constant [20], piecewise affine [14,31], and polynomial [4,21] policies, as well as combinations of these [54]. For surveys on adjustable policies we refer to Delage and Iancu [28] and Yanıkoglu et al. [63].
A key question that arises when using policies to solve ARO problems is how good the solutions are compared to an optimal unrestricted solution. To answer this, many approximation schemes for a priori and a posteriori bounds have been proposed. In the context of a posteriori bounds, the focus lies on finding tight upper and lower approximation problems. Hadjiyiannis et al. [38] estimate the suboptimality of affine decision rules using sample scenarios from the uncertainty set. Similar sample lower bounds are used by Bertsimas and Georghiou [17] to bound the performance of piecewise affine policies. Kuhn et al. [43] investigate the optimality of affine policies by using the gap between affine solutions on the primal and the dual of the problem. Georghiou et al. [31] generalize this primal-dual approach to affine policies on lifted uncertainty sets. Building on both of the previous approaches, Georghiou et al. [33] propose a convergent hierarchy of policies that combine affine policies with extreme point scenario samples. Daryalal et al. [27] construct lower bounds by relaxing nonanticipativity and stage-connecting constraints in multi-stage ARO. They then use these lower bounds to construct primal solutions in a rolling horizon manner.
In the context of a priori bounds, most approximation schemes have been proposed for the two-stage version of Problem (1). For general uncertainty sets on the two-stage version of (1), Bertsimas and Goyal [18] show that affine policies yield an O(√m) approximation if c and x are non-negative. They further construct a set of instances where this bound is tight, showing that no better general bounds for affine policies exist. Using geometric properties of the uncertainty sets, Bertsimas and Bidkhori [15] improve on these bounds for some commonly used sets including budgeted uncertainty, norm balls, and intersections of norm balls. Ben-Tal et al. [14] propose new piecewise affine decision rules for the two-stage problem that on some sets improve these bounds even further. In addition to strong theoretical bounds, this new approach also yields promising numerical results that can be found orders of magnitude faster than solutions for affine adjustable policies. For budgeted uncertainty sets and some generalizations thereof, Housni and Goyal [40] show that affine policies even yield optimal approximations with an asymptotic bound, i.e., asymptotic behavior of the approximation bound, see, e.g., [56], of O(log m / log log m). This bound was shown to be tight by Feige et al. [30] under reasonable complexity assumptions, namely that 3SAT cannot be solved in 2^{O(√m)} time on instances of size m. We present an overview of known a priori approximation bounds for some commonly used uncertainty sets on two-stage ARO in Table 4 of Appendix A.
To the best of our knowledge, Bertsimas et al. [20] are the only ones that provide a priori bounds for multi-stage ARO so far. They show these bounds for piecewise constant policies using geometric properties of the uncertainty sets. More specifically, they consider multi-stage uncertainty networks, where the uncertainty realization is taken from one of multiple independent uncertainty sets in each stage. While the choice of the set selected in each stage may depend on the sets selected before, the uncertainty sets are otherwise independent. Although this assumption is fairly general, it still leaves many commonly used sets where uncertainty is dependent over multiple stages uncovered. Among others, uncovered sets include widely used hypersphere and budgeted uncertainty.
As can be seen, previous work has predominantly focused on providing tighter approximation bounds for two-stage ARO, inevitably raising the question of whether similar bounds hold for multi-stage ARO as well. This work contributes towards answering this question by extending many of the currently best-known a priori bounds on two-stage ARO to its multi-stage setting. In doing so, we are, to the best of our knowledge, the first to provide a priori approximation bounds for the multi-stage ARO Problem (1), where uncertainty sets can range over multiple stages. Unless explicitly stated otherwise, we will always refer to a priori approximation bounds when we discuss approximation bounds in the remainder of this paper.

Our Contributions
With this work, we extend the existing literature in multiple ways; our main contributions are as follows.
Tractable Piecewise Affine Policies for Multi-Stage ARO: Motivated by piecewise affine policies for two-stage ARO [14], we present a framework to construct policies that can be used to efficiently find good solutions for the multi-stage ARO Problem (1). Instead of solving the problem directly for uncertainty U, we first approximate U by a dominating set Û. To do so, we define the concept of nonanticipative multi-stage domination and show that this new definition of domination fulfills similar properties to two-stage domination. Based on this new definition, we then construct dominating sets Û such that solutions on Û can be found efficiently. More specifically, we choose Û to be a polytope for which worst-case solutions can be computed by a linear program (LP) over its vertices. In order to ensure nonanticipativity, which is the main challenge of this construction, we introduce a new set of constraints on the vertices that guarantee the existence of nonanticipative extensions from the vertex solutions to the full set Û. Finally, we show how to use the solution on the dominating set Û to construct a valid solution for the original uncertainty set U.
Approximation Bounds: To the best of our knowledge, we provide the first approximation bounds for the multi-stage Problem (1) with general uncertainty sets. More specifically, we show that our policies yield O(√m) approximations of fully adjustable policies. While this bound is tight for our type of policies in general, we show that better bounds hold for many commonly used uncertainty sets.
While our main contribution is to extend approximation bounds to multi-stage ARO, Problem (1) is also less restrictive than problems previously discussed in the literature on approximation bounds. In addition to being restricted to two-stage ARO, previous work often assumed c and x to be non-negative [18,15,14]. Ben-Tal et al. [14] additionally restricted parts of A, i.e., they require the parts of A associated with the first-stage decision to be non-negative. Our policies do not need this assumption. However, we show that Problem (1) is unbounded whenever there is a feasible x with c⊺x < 0, due to the non-negativity of the right-hand side. As a consequence, our policies do not readily extend to general maximization problems.
From a theoretical perspective, mainly asymptotic bounds are of interest. In practice, however, the exact factors of the approximation are important as well. Throughout the paper, we thus always give the asymptotic as well as the exact bounds. We compare all our bounds to the previously best-known bounds for the two-stage setting given in Ben-Tal et al. [14] and show that our constructions yield both constant-factor and asymptotic improvements.
Comparison with Affine Policies: Using the newly found bounds, we show that no approximation bound for affine policies on hypersphere uncertainty exists that is better than the bound we show for our policies. For budgeted uncertainty, on the other hand, we show that affine policies strictly dominate our piecewise affine policies. These findings confirm results that have been reported for the two-stage variant, where affine policies do not perform well for hypersphere uncertainty [18], but very well for budgeted uncertainty [40].
Improvement Heuristic: Due to inherent properties of our policy construction, resulting solutions are overly pessimistic on instances where the impact on the objective varies significantly between different uncertainty dimensions. To diminish this effect, we introduce an improvement heuristic that performs at least as well as our policies and that can be integrated into the LP used to construct our policies. While these modifications come at the cost of higher solution times, they allow for significant objective improvements on some instance classes.
Tightening Piecewise Affine Policies via Lifting: We show that in the context of Problem (1) the piecewise affine policies via lifting presented by Georghiou et al. [31] yield solutions equivalent to affine policies. To prevent this from happening, we construct tightened piecewise affine policies via lifting using insights from our piecewise affine policies. These new policies integrate the approximative power of affine policies and our piecewise affine policies, and are guaranteed to perform at least as well as the individual policies they combine.
Numerical Evidence: Finally, we present two sets of numerical experiments showing that our policies solve orders of magnitude faster than the affine adjustable policies presented by Ben-Tal et al. [9], the piecewise affine policies via lifting presented by Georghiou et al. [31], and the near-optimal piecewise affine policies by Bertsimas and Georghiou [17], while often yielding comparable or better results. First, we study a slightly modified version of the tests presented in Ben-Tal et al. [14], allowing us to demonstrate our policies' scalability and the impact of our improvement heuristic. Second, we focus on demand covering instances to demonstrate good performances of our policies for a problem that resembles a practical application. We refer to our git repository (https://github.com/tumBAIS/piecewise-affine-ARO) for all material necessary to reproduce the numerical results outlined in this paper.
Comparison Against Closely Related Work: Compared to the closely related work by Ben-Tal et al. [14], who first introduced the concept of domination in the context of ARO, our contributions are manifold. First, we extend domination-based piecewise affine policies to a wider class of problems by switching from a two-stage to a multi-stage setting and relaxing assumptions. We discuss the structural reasons that make this extension non-trivial at the beginning of Section 2. In addition to showing stronger approximation guarantees for our policies, we conduct comprehensive theoretical and numerical comparisons with other adaptable policies. Based on these comparisons, we construct two new policies that mitigate weaknesses of domination and integrate its strength with the strengths of other policies. More specifically, the first policy integrates finding a good outer approximation of the uncertainty set into the optimization process. The second policy integrates structural results from domination into lifting policies, c.f., [31]. As a result, we get a hierarchy of piecewise affine adjustable policies with provable relative performance guarantees. We give an overview of all policies constructed in our work and their relative performance guarantees compared to other policies in Figure 2.
The rest of this paper is structured as follows. In Section 2, we introduce our policies and elaborate on their construction. In Section 3, we present our approximation bounds for the multi-stage ARO Problem (1). We present an improvement heuristic for our policies in Section 4. By using the results of Sections 2 and 3, we construct tightened piecewise affine policies via lifting in Section 5. Finally, we provide numerical evidence for the performance of our policy compared to other state-of-the-art policies in Section 6. Section 7 concludes this paper with a brief reflection of our work and avenues for future research. To keep the paper concise, we defer proofs that could possibly interrupt the reading flow to Appendices B-O.

Framework For Piecewise Affine Multi-Stage Policies
In this section, we present our piecewise affine framework for the multi-stage ARO Problem (1). The main rationale of our framework is to construct new uncertainty sets Û that dominate the original uncertainty sets U. With this, our framework follows a similar rationale as the two-stage framework from Ben-Tal et al. [14]. For a problem Z_AR(U), we construct Û in such a way that Z_AR(Û) can be efficiently solved, and a solution of Z_AR(Û) can be used to generate solutions for Z_AR(U).

Figure 2: Overview of the considered policies and their relative performance guarantees: piecewise affine policies of Ben-Tal et al. [14] (PAPBT); our piecewise affine policies via domination (PAP), c.f., Sections 2 and 3; affine policies [9] (AFF); our piecewise affine policies with rescaling (SPAP), c.f., Section 4; near-optimal piecewise affine policies [17] (BG); piecewise affine policies via lifting [31] (LIFT); our tightened piecewise affine policies via lifting (TLIFT), c.f., Section 5.

In this context, we note that one cannot straightforwardly apply the construction scheme used by Ben-Tal et al. [14] due to nonanticipativity requirements. More specifically, Ben-Tal et al. [14] construct Û as polytopes, where it is well known that worst-case solutions always occur on extreme points, as any solution can be represented by convex combinations of extreme point solutions. The construction of these convex combinations, however, is not guaranteed to be nonanticipative in the multi-stage setting. To overcome this challenge, we incorporate nonanticipativity in the concept of uncertainty set domination and extend it to a multi-stage setting.
Definition 1 (Domination). Given an uncertainty set U ⊆ R^m_+, we say that U is dominated by Û ⊆ R^m_+ if there is a domination function h : U → Û with h(ξ) ≥ ξ, and h can be expressed as h(ξ) = (h_1(ξ^1), . . ., h_T(ξ^T)), where h_t maps to the uncertainties in stage t and depends only on uncertainties known up to that stage.
Intuitively, an uncertainty set Û dominates another set U if for every point ξ ∈ U there is a point ξ̂ ∈ Û that is at least as large in each component, i.e., ξ̂ ≥ ξ. Later we show that the dominating set Û can be constructed as the convex hull of m + 1 vertices v_0, . . ., v_m. We also show how to construct dominating functions h for these vertex-induced dominating sets. Figure 3 illustrates the hypersphere uncertainty set U = {ξ ∈ R^m_+ : ∥ξ∥_2² ≤ 1} together with our dominating set Û (2) and the dominating function h (4) for m = 2.
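To make this intuition concrete, the following sketch (a toy illustration with an assumed base vertex v_0; not the paper's implementation) samples points from the two-dimensional hypersphere set and checks that the element-wise maximum with v_0 dominates every sample. Because the map acts component-wise, it is nonanticipative by construction.

```python
import numpy as np

# Toy check of Definition 1 for U = {xi >= 0 : ||xi||_2 <= 1}.
# v0 is an assumed base vertex; h(xi) = max(xi, v0) component-wise.
m = 2
v0 = np.full(m, 0.5)  # assumed parameter, for illustration only

def h(xi):
    """Element-wise maximum with v0 -- component-wise, hence nonanticipative."""
    return np.maximum(xi, v0)

# Sample points of U by projecting random non-negative vectors into the ball.
rng = np.random.default_rng(0)
xis = np.abs(rng.normal(size=(1000, m)))
xis = xis / np.maximum(np.linalg.norm(xis, axis=1, keepdims=True), 1.0)

assert np.all(h(xis) >= xis)  # h dominates every sampled point
print("h(xi) >= xi holds on all 1000 sampled points")
```

Note that h(ξ) ≥ ξ holds by construction; the nontrivial part, treated below, is choosing v_0 (and the remaining vertices) so that h actually maps into Û.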
Due to the non-negativity of the problem's right-hand side, domination at most restricts the set of feasible solutions. As a consequence, each feasible solution for a realization ξ̂ ∈ Û is also a feasible solution for all realizations ξ ∈ U that are dominated by ξ̂. Using this property, we can derive piecewise affine policies for Z_AR(U) from solutions of Z_AR(Û). Since U is full-dimensional and down-monotone by Assumption 1, there always exists a factor β ≥ 0 such that scaling U by β contains Û. Theorem 1 shows that with this factor β, solutions of problem Z_AR(Û) are β-approximations for problem Z_AR(U). It also shows that Z_AR(Û) is unbounded exactly when Z_AR(U) is unbounded. Thus, we assume for the remainder of this paper w.l.o.g. that both Z_AR(U) and Z_AR(Û) are bounded.
Theorem 1. Consider an uncertainty set U from Problem (1) and a dominating set Û. Let β ≥ 1 be such that ∀ξ̂ ∈ Û : (1/β) ξ̂ ∈ U. Moreover, let Z_AR(U) and Z_AR(Û) be optimal values of Problem (1). Then, either Z_AR(U) and Z_AR(Û) are unbounded or

Z_AR(U) ≤ Z_AR(Û) ≤ β Z_AR(U).

We present the proof for Theorem 1 in Appendix B.
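As a quick numeric sketch of the scaling factor β (parameter values assumed for illustration), consider vertices of the permutation-invariant form v_0 = µe and v_i = v_0 + ρe_i used later for hypersphere uncertainty. Since the 2-norm is convex, its maximum over the polytope Û is attained at a vertex, so the smallest valid β is the largest vertex norm:

```python
import math

# Smallest beta with (1/beta) * v_i in U for the hypersphere set and
# permutation-invariant vertices v_0 = mu*e, v_i = v_0 + rho*e_i.
# mu and rho are assumed illustrative values, not optimized choices.

def beta_hypersphere(mu, rho, m):
    norm_v0 = math.sqrt(m) * mu                              # ||v_0||_2
    norm_vi = math.sqrt((mu + rho) ** 2 + (m - 1) * mu ** 2)  # ||v_i||_2, i >= 1
    return max(norm_v0, norm_vi)  # norm is convex, so vertices suffice

m, mu, rho = 16, 0.25, 2.0
print(f"beta = {beta_hypersphere(mu, rho, m):.3f}")
```

For these values β = √6 ≈ 2.449, i.e., scaling U by roughly 2.45 contains the dominating polytope.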
Figure 3: Two-dimensional hypersphere uncertainty set U with (dashed) dominating set Û (2) induced by the convex combination of vertices v_0, v_1, v_2 and dominating function h (4) that maps a point ξ ∈ U to a point ξ̂ ∈ Û.

In the remainder of this section, we demonstrate how the results of Theorem 1 can be used to efficiently construct β-approximations for Z_AR(U). To this end, we show in Section 2.1 how to construct dominating polytopes Û and efficiently find solutions Z_AR(Û) that comply with nonanticipativity requirements. Then, in Section 2.2, we construct the dominating function h : U → Û, which allows us to extend these solutions to solutions for Z_AR(U).

Construction of the Dominating Set
In the following, we construct a dominating set in the form of a polytope for which the worst-case solution can be efficiently found by solving a linear program on its vertices. Specifically, for an uncertainty set U, we consider dominating sets Û of the form

Û := conv(v_0, . . ., v_m),    (2)

where for all i ∈ {0, . . ., m} : (1/β) v_i ∈ U and for all i ∈ {1, . . ., m} : v_i = v_0 + ρ_i e_i for some ρ_i ∈ R_+. We postpone the construction of the domination function h, the base vertex v_0, and parameters ρ_1, . . ., ρ_m to Section 2.2 and first focus on the construction of solutions for Z_AR(Û). Here, we extend the notation on x and ξ introduced in Section 1.1 to x_i and v_i. Consequently, x_i^t is the sub-vector of x_i corresponding to decisions made in stage t, and v_i^t is the sub-vector of v_i corresponding to uncertainties up to stage t. Then, the key component for our construction is LP (3):

Z_LP(Û) = min z    (3a)
s.t.  c⊺ x_i ≤ z  ∀i ∈ {0, . . ., m}    (3b)
      A x_i ≥ D v_i + d  ∀i ∈ {0, . . ., m}    (3c)
      x_i^t = x_j^t  ∀t ∈ {1, . . ., T}, ∀i, j ∈ {0, . . ., m} : v_i^t = v_j^t    (3d)

Intuitively, Objective (3a) together with Constraints (3b) minimizes the maximal cost over all vertex solutions x_i. Constraints (3c) ensure that each x_i is a feasible solution for the respective uncertainty vertex v_i of Û. Finally, Constraints (3d) ensure nonanticipativity by forcing vertex solutions to be equal unless different uncertainties were observed. With these constraints, we construct LP (3) such that it is sufficient to find an optimal solution for Z_LP(Û) in order to find an optimal solution for Z_AR(Û).
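The construction above can be sketched on a toy instance (all data assumed for illustration): m = n = 2, T = 2, A = D = I, d = 0, c = (1, 1), with vertices v_0 = (0.5, 0.5), v_1 = (1.5, 0.5), v_2 = (0.5, 1.5), and one uncertain coordinate revealed per stage. Since v_0 and v_2 agree on the stage-1 uncertainty, Constraint (3d) forces their stage-1 decisions to coincide. A minimal sketch with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of LP (3); data assumed for illustration.
v = np.array([[0.5, 0.5], [1.5, 0.5], [0.5, 1.5]])  # vertices v_0, v_1, v_2

# Decision variables: [z, x_0 (2 comps), x_1 (2 comps), x_2 (2 comps)].
c_lp = np.zeros(7)
c_lp[0] = 1.0  # minimize z                                   (3a)

A_ub, b_ub = [], []
for i in range(3):
    row = np.zeros(7)
    row[0] = -1.0
    row[1 + 2 * i: 3 + 2 * i] = 1.0                 # c^T x_i - z <= 0  (3b)
    A_ub.append(row); b_ub.append(0.0)
    for k in range(2):
        row = np.zeros(7)
        row[1 + 2 * i + k] = -1.0                   # -x_ik <= -v_ik    (3c)
        A_ub.append(row); b_ub.append(-v[i, k])

# (3d): v_0 and v_2 share the stage-1 observation, so x_0^1 = x_2^1.
A_eq = np.zeros((1, 7)); A_eq[0, 1] = 1.0; A_eq[0, 5] = -1.0

res = linprog(c_lp, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[0.0], bounds=[(None, None)] * 7)
print(f"Z_LP = {res.fun:.2f}")
```

For this data the optimum is 2.0: each x_i can be set to v_i (the nonanticipativity constraint is satisfied since v_0 and v_2 share their first coordinate), and both x_1 and x_2 cost 2.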
Lemma 2. Let Û be a dominating set as described in (2), Z_LP(Û) be the optimal value of LP (3), and Z_AR(Û) be the optimal value of Problem (1) on Û. Then the LP solution (x_i) on the vertices of Û can be extended to a solution on the full set Û and we find

Z_LP(Û) = Z_AR(Û).

We present the proof for Lemma 2 in Appendix C.

Construction of the Domination Function
In the previous section, we showed how to construct dominating sets Û such that Z_AR(Û) can be solved efficiently. In order for Û to be a valid dominating set for some uncertainty set U, we additionally have to construct a nonanticipative dominating function h : U → Û according to Definition 1. Specifically, we use

h(ξ) := v_0 + (ξ − v_0)_+,    (4)

where (•)_+ is the element-wise maximum with 0 and v_0 is the base vertex from (2). By construction, h maps each uncertainty realization ξ to its element-wise maximum with v_0. It directly follows that h is nonanticipative, as each element in h(ξ) solely depends on the corresponding element in ξ.
Finally, we have to ensure that h(ξ) ∈ Û for all ξ ∈ U. We do so by choosing the base vertex v_0 and parameters ρ_1, . . ., ρ_m during the construction of Û appropriately. Using

λ_i(ξ) := (ξ_i − v_{0,i})_+ / ρ_i  for all i ∈ {1, . . ., m},

with the convention 0/0 = 0, we find

h(ξ) = (1 − Σ_{i=1}^m λ_i(ξ)) v_0 + Σ_{i=1}^m λ_i(ξ) v_i.

By definition, any convex combination of the v_i is contained in Û. Thus, h is a valid domination function if and only if

Σ_{i=1}^m λ_i(ξ) ≤ 1  for all ξ ∈ U.    (5)

Condition (5) gives a compact criterion to check the validity of dominating sets. In doing so, it lays the basis for our optimal selection of the base vertex v_0 and parameters ρ_1, . . ., ρ_m. Checking Condition (5) generally requires solving a convex optimization problem. However, in Section 3 we show that for many commonly used special uncertainty sets, this problem can be significantly simplified, leading to low-dimensional unconstrained minimization problems or even analytical solutions.
Recall that we showed how to extend a solution (x_0, . . ., x_m) of LP (3) to the full set Û in the proof of Lemma 2. Combining this with h and using Theorem 1, we get a piecewise affine solution for Z_AR(U) by

x(ξ) := (1 − Σ_{i=1}^m λ_i(ξ)) x_0 + Σ_{i=1}^m λ_i(ξ) x_i,    (6)

that has an optimality bound of β.
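The resulting policy can be sketched as follows, using the convex weights λ_i(ξ) from Condition (5); all numerical data is assumed for illustration (matching a toy instance with A = D = I, d = 0, where the vertex solutions can simply equal the vertices themselves):

```python
import numpy as np

# Sketch of the final piecewise affine policy (toy data, assumed):
# vertex solutions x_0, ..., x_m of LP (3) are combined with the
# weights lambda_i(xi) = (xi_i - v0_i)_+ / rho_i.
v0 = np.array([0.5, 0.5])
rho = np.array([1.0, 1.0])
X = np.array([[0.5, 0.5],    # x_0, feasible for v_0
              [1.5, 0.5],    # x_1, feasible for v_1 = v_0 + rho_1 e_1
              [0.5, 1.5]])   # x_2, feasible for v_2 = v_0 + rho_2 e_2

def policy(xi):
    """Piecewise affine policy: convex combination of vertex solutions."""
    lam = np.maximum(xi - v0, 0.0) / rho    # weights lambda_i(xi)
    return (1.0 - lam.sum()) * X[0] + lam @ X[1:]

x = policy(np.array([1.0, 0.0]))
print(x)  # a covering decision with x >= (1, 0)
assert np.all(x >= np.array([1.0, 0.0]))
```

Here policy((1, 0)) returns (1.0, 0.5): the weight λ_1 = 0.5 mixes x_0 and x_1, and the result covers the realization ξ = (1, 0) component-wise, illustrating why domination preserves feasibility.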

Limitations
While our policies overcome the nontrivial challenge of nonanticipativity on extreme point solutions, they still rely on the ability to form convex combinations. As integrality is not preserved by convex combinations, there is no natural way to extend our approach to integer or binary recourse decisions x. However, including non-adjustable integer or binary first-stage decisions in our framework is straightforward. Also, it is not straightforwardly possible to incorporate uncertain recourse, i.e., dependence of A on ξ, into our approach, as worst-case realizations for problems with uncertain recourse are not necessarily extreme points of U, see, e.g., [3,33].

Optimality Bounds for Different Uncertainty Sets
In the previous section, we demonstrated how to construct nonanticipative piecewise affine policies for the multi-stage Problem (1). On this basis, proving approximation bounds mainly depends on geometric properties of the uncertainty sets U. We first show approximation bounds for some commonly used permutation invariant uncertainty sets. On these sets, the dominating sets are permutation invariant and we give closed-form constructions. We then give approximation bounds for our piecewise affine policies on general uncertainty sets. Finally, we demonstrate how the bounds of an uncertainty set U generalize to transformations of that set. While in theory mostly asymptotic bounds are of interest, in practice constant factors are important as well. Thus, we always state exact as well as asymptotic bounds. Table 1 gives an overview of all bounds that are explicitly proven in Propositions 5, 7, 9, 10 and 11 of this section. We prove specific bounds for uncertainty sets of the forms: I) hypersphere uncertainty; II) budgeted uncertainty; III) p-norm ball uncertainty, with p ≥ 1; IV) ellipsoid uncertainty, with Σ := (1 − a)1 + aJ, where 1 is the identity matrix and J the matrix of all ones; V) general uncertainty sets. We compare all our results against the results for the two-stage setting in Ben-Tal et al. [14] and show constant-factor as well as asymptotic improvements. For budgeted and hypersphere uncertainty sets, we further compare the theoretical performance of our piecewise affine policies with affine adjustable policies.
For permutation invariant uncertainty sets there exists an optimal choice of Û that is also permutation invariant. More specifically, the vertices simplify to v_0 = µe and v_i = v_0 + ρe_i for some µ, ρ.

Lemma 3. Let U be a permutation invariant uncertainty set. Then there exist µ, ρ such that for the dominating uncertainty set Û spanned by v_0 = µe and v_i = v_0 + ρe_i for i ∈ {1, . . ., m} there is no other dominating set Û′ constructed as in (2) with a smaller approximation factor β.
We present the proof for Lemma 3 in Appendix D.
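To make the structure of Lemma 3 concrete, the following sketch builds a permutation-invariant dominating set with vertices v_0 = µe and v_i = v_0 + ρe_i and verifies numerically that it dominates the hypersphere set {ξ ≥ 0, ∥ξ∥_2 ≤ 1}, i.e., that every sampled ξ is covered componentwise by some point of conv(v_0, . . ., v_m). The values m = 4, µ = 0.5, ρ = 1 are conservative illustrative choices, not the optimal µ, ρ from Lemma 3.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m = 4
mu, rho = 0.5, 1.0  # illustrative, conservative choices (not the optimal ones)

# Vertices of the permutation-invariant dominating set:
# row 0 is v_0 = mu * e, row i is v_i = v_0 + rho * e_i.
V = np.vstack([np.full(m, mu), np.full(m, mu) + rho * np.eye(m)])  # (m+1) x m

def is_dominated(xi):
    """Check whether some xi_hat in conv(V) satisfies xi_hat >= xi
    (componentwise) by solving an LP feasibility problem in the
    convex weights lambda."""
    res = linprog(
        c=np.zeros(m + 1),
        A_ub=-V.T, b_ub=-xi,                   # V^T lambda >= xi
        A_eq=np.ones((1, m + 1)), b_eq=[1.0],  # sum(lambda) = 1
        bounds=(0, None),
    )
    return res.status == 0

# Sample points uniformly from the hypersphere set {xi >= 0, ||xi||_2 <= 1}.
for _ in range(200):
    g = np.abs(rng.standard_normal(m))
    xi = g / np.linalg.norm(g) * rng.random() ** (1 / m)
    assert is_dominated(xi)
```

The LP only checks feasibility; a zero objective suffices because domination is a covering property, not an optimization.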
With these simplifications, Condition (5) becomes Criterion (7). Using the permutation invariance of the problem, Ben-Tal et al. [14] show that for any µ there exists a j ≤ m such that the maximization problem in (7) has a solution that is constant on the first j components and zero on all other components.
Lemma 4 (Lemma 4 in Ben-Tal et al. [14]). Let γ(j) be the maximal average value of the first j components of any ξ ∈ U. Then for each µ there exists an optimal solution ξ* for the maximization problem in Equation (7) that is constant on its first j components and zero elsewhere, for some j ≤ m.
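For the two permutation-invariant sets studied below, γ(j) admits simple closed forms. The following sketch states them with short derivations in the comments; these closed forms follow from Cauchy–Schwarz and the budget cap and are our illustration, not formulas quoted from the paper.

```python
import math

def gamma_hypersphere(j):
    # Max average of the first j components over {xi >= 0, ||xi||_2 <= 1}:
    # by Cauchy-Schwarz the maximizer puts 1/sqrt(j) on each of the first
    # j components, giving an average of 1/sqrt(j).
    return 1 / math.sqrt(j)

def gamma_budget(j, k):
    # Max average of the first j components over {xi in [0,1]^m, sum <= k}:
    # each of the first j components takes the value min(1, k/j).
    return min(1.0, k / j)

m, k = 16, 4
gh = [gamma_hypersphere(j) for j in range(1, m + 1)]
gb = [gamma_budget(j, k) for j in range(1, m + 1)]
assert gh[0] == 1.0 and gb[0] == 1.0           # a single component can reach 1
assert all(a >= b for a, b in zip(gh, gh[1:]))  # gamma is non-increasing in j
assert all(a >= b for a, b in zip(gb, gb[1:]))
```

Both functions are non-increasing in j, reflecting that spreading mass over more components can only lower the achievable average.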
Hypersphere Uncertainty: We first use Lemma 4 to find a new dominating set for hypersphere uncertainty. By doing so, we find a new approximation bound that improves the bound of 4√m provided in Ben-Tal et al. [14] by a constant factor. While this improvement is irrelevant for the asymptotic complexity of the problem, the new formulation of Û does make a difference in practice. In Figure 4 we illustrate the improvement of our dominating set Û over the dominating set Û_BT proposed in Ben-Tal et al. [14] for hypersphere uncertainty sets in two and three uncertainty dimensions. Note that our sets Û are fully contained in the sets Û_BT, and all extreme points of Û_BT are located outside of Û. This implies Z_AR(Û) ≤ Z_AR(Û_BT) for hypersphere uncertainty. The formal proof follows from straightforward convex containment and is omitted for brevity.
We present the proof for Proposition 5 in Appendix E. We can also use this improved performance bound to show that affine adjustable policies cannot yield better bounds than piecewise affine adjustable policies for m ≥ 153. This is because there are instances of Problem (1) with hypersphere uncertainty where affine adjustable policies perform worse than an optimal policy by a factor approaching 4. We formalize these results in Proposition 6. Note that these better bounds do not imply that piecewise affine adjustable policies always yield better results than affine adjustable policies for hypersphere uncertainty.

Proposition 6. Affine adjustable policies cannot achieve better performance bounds than 4 for Problem (1) with hypersphere uncertainty, even for c, x, A being non-negative and A being a 0-1 matrix.
We present the proof for Proposition 6 in Appendix F.
Budgeted Uncertainty: Next, we tighten the bounds for budgeted uncertainty sets. Proposition 7 shows that our new bound is given by β = k(m−1)/(m + k(k−2)). We further show β ≤ min(k, m/k), which matches the bound for the two-stage problem variant in Ben-Tal et al. [14]. The improvement over this bound is largest for intermediate values of k, where it reaches a factor of 1/2.
Proposition 7 (Budget). Consider the budgeted uncertainty set U = {ξ ∈ [0, 1]^m | ∥ξ∥_1 ≤ k} for some k ∈ {1, . . ., m}. Then a solution for Z_AR(Û), where Û is constructed using Criterion (7), yields a β-approximation for problem Z_AR(U) with β = k(m−1)/(m + k(k−2)).

We present the proof for Proposition 7 in Appendix G. Note that there is no result analogous to Proposition 6 for budgeted uncertainty, as Housni and Goyal [40] showed that affine policies give an O(log(m)/log log(m)) approximation for two-stage problems with non-negative c, x, A. Furthermore, our piecewise affine policies are strictly dominated by affine policies for integer budgeted uncertainty.
Proposition 8. Consider Problem (1) with budgeted uncertainty and an integer budget. Let Z_PAP be the optimal value found by our piecewise affine policy and Z_AFF be the optimal value found by an affine policy. Then Z_AFF ≤ Z_PAP.
We present the proof for Proposition 8 in Appendix H.

Norm Ball Uncertainty: In a similar manner as before, we construct new dominating sets for p-norm ball uncertainty and tighten the bound in Ben-Tal et al. [14] by a p-dependent factor for sufficiently large m. This factor is always smaller than one and converges to 1/2 for large p.
Proposition 9 (p-norm ball). Consider the p-norm ball uncertainty set U = {ξ ∈ R^m_+ | ∥ξ∥_p ≤ 1}. Then a solution for Z_AR(Û), where Û is constructed using Criterion (7), yields a β-approximation for problem Z_AR(U).
We present the proof for Proposition 9 in Appendix I.

Ellipsoid Uncertainty: For the permutation invariant ellipsoid uncertainty set U = {ξ ∈ R^m_+ | ξ⊺Σξ ≤ 1} with m > 1, Σ := 1 + a(J − 1), a ∈ [0, 1], 1 being the identity matrix, and J being the matrix of all ones, we construct dominating sets via a case distinction on the size of a. While for large a a scaled simplex already gives a good approximation, we construct the dominating set for small a more carefully. By doing so, we improve the previously best known asymptotic bound for the two-stage problem variant of O(m^(2/5)) [14] to O(m^(1/3)). Note that for a = 0 our bounds converge to the bounds of hypersphere uncertainty in Proposition 5 and for a = 1 towards an exact representation.

Proposition 10 (Ellipsoid). Consider the ellipsoid uncertainty set U = {ξ ∈ R^m_+ | ξ⊺Σξ ≤ 1} with Σ := 1 + a(J − 1) for some a ∈ [0, 1]. Here 1 is the identity matrix and J is the matrix of all ones. Then a solution for Z_AR(Û), where Û is constructed using Criterion (7), yields an O(m^(1/3)) approximation for problem Z_AR(U).
We present the proof for Proposition 10 in Appendix J.

General Uncertainty Sets: After having shown specific bounds for some commonly used permutation invariant uncertainty sets, we now give a general bound that holds for all uncertainty sets that fulfill the assumptions of Problem (1). We show that any uncertainty set can be dominated within an approximation factor of β = 2√m + 1, which roughly halves the bound in Ben-Tal et al. [14]. As shown in Ben-Tal et al. [14], this approximation bound is asymptotically tight when using pure domination techniques. More precisely, for any polynomial number of vertices, the budgeted uncertainty set with k = √m cannot be dominated with a β better than O(√m).

Proposition 11 (General Uncertainty). Consider any uncertainty set U ⊆ [0, 1]^m that is convex, full-dimensional with e_i ∈ U for all i ∈ {1, . . ., m}, and down-monotone. Then, there always exists a dominating uncertainty set Û of the form in (2) that dominates U by at most a factor of β = 2√m + 1.
We present the proof for Proposition 11 in Appendix K.
Stagewise Uncertainty Sets: In general, our policies do not require stagewise uncertainty. However, the existence of such a structure can be utilized in the construction of dominating uncertainty sets, leading to approximation bounds that depend linearly on the stagewise approximation bounds.
Proposition 12 (Stagewise). Let U = U_1 × · · · × U_T be a stagewise independent uncertainty set and, for each U_t, let Û_t be a dominating set constructed as in (2). Let β_t be the approximation factor for Û_t, and let β′_t = min{β′ : (1/β′)e ∈ U_t} be the constant approximation factor for set U_t. Then for any partition T_1 ∪ T_2 = {1, . . ., T}, T_1 ∩ T_2 = ∅ of the stages, there exists a dominating set Û for U whose approximation factor combines the β_t for t ∈ T_1 with the β′_t for t ∈ T_2.

We present the proof for Proposition 12 in Appendix L.
Transformed Uncertainty Sets: We note that through the right-hand side Dξ + d of Problem (1), any positive affine transformation of an uncertainty set U can be dominated by the same affine transformation of the dominating set Û. As the approximation bounds do not depend on D and d, the bounds for the transformed set are the same as for the original set. One well-known uncertainty type covered by these transformations is scaled ellipsoidal uncertainty {ξ ∈ R^m_+ | Σ_{i=1}^m w_i ξ_i² ≤ 1}, which was first proposed by Ben-Tal and Nemirovski [7]. These sets can be constructed via transformations of hypersphere uncertainty sets with a diagonal matrix D with D_ii = 1/√w_i. Scaled ellipsoidal uncertainty has been applied to many robust optimization problems, including portfolio optimization [7], supply chain contracting [10], network design [47], and facility location [5].
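The transformation argument above can be checked numerically: points of the hypersphere set mapped through D = diag(1/√w_i) land exactly in the scaled ellipsoid, so bounds for the hypersphere carry over. The following sketch does this for arbitrary illustrative weights w.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
w = rng.uniform(0.5, 2.0, m)   # positive ellipsoid weights (illustrative)
D = np.diag(1 / np.sqrt(w))    # transformation with D_ii = 1/sqrt(w_i)

# Sample z uniformly from the hypersphere set {z >= 0, ||z||_2 <= 1} ...
for _ in range(500):
    g = np.abs(rng.standard_normal(m))
    z = g / np.linalg.norm(g) * rng.random() ** (1 / m)
    xi = D @ z
    # ... then xi lies in the scaled ellipsoid {xi >= 0, sum_i w_i xi_i^2 <= 1},
    # since sum_i w_i (z_i / sqrt(w_i))^2 = ||z||_2^2 <= 1.
    assert np.all(xi >= 0) and w @ (xi ** 2) <= 1 + 1e-12
```

The same check works in reverse: mapping an ellipsoid point through D^(-1) lands in the hypersphere set, so the two sets are exact images of each other.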
Another widely used class that is partially covered by these positive affine transformations are factor-based uncertainties given by sets U = {Dz + d | z ∈ U_z}. In these sets, uncertainties depend affinely on a set of factors z that are drawn from a factor uncertainty set U_z. Problems that were solved using such uncertainty sets include, among others, portfolio optimization [35,34] and multi-period inventory management [57,2]. In contrast to general factor sets, which have no limitations on D and d, our approach is restricted to positive factor matrices, which allows only for positive correlations between uncertainties. Nevertheless, even this subset of factor-based uncertainties has wide applicational use. As an intuitive example, one could consider component demands where the factors are demands for finished products.
Re-Scaled Piecewise Affine Policies
By design, the construction of dominating sets may overemphasize single uncertainty dimensions by up to a factor of β. On instances where a few critical uncertainties cause almost all the cost, it might thus be beneficial to dominate these uncertainty dimensions more carefully. In this context, we show that it is possible to shrink critical vertices v_i at the cost of slightly shifting all other vertices towards their direction. Figure 5 shows the costs for the vertex solutions x_i with maximal cost z (blue, dashed) compared to the costs for the re-scaled vertex solutions x′_i with maximal cost z′ (green, solid).
We illustrate the re-scaling process following from Lemma 13 in Figure 5. In the depicted example, we shrink the critical vertex v_1 and shift the two remaining vertices v_0, v_2 towards uncertainty dimension ξ_1. As a consequence, the cost of the vertex solution x_1 decreases, while the costs of the solutions x_0, x_2 increase. The cost reduction on the critical vertex v_1 leads to a reduction of the worst-case vertex cost z, which by the construction of LP (3) corresponds to an overall improvement of the objective function. Note that the dominating set Û used in the example is not an optimal choice for the hypersphere uncertainty set depicted. However, the effect would barely be visible in two dimensions without this sub-optimal choice.

Lemma 13. Let Û := conv(v_0, . . ., v_m) be a dominating set for U and let s ∈ [0, 1]^m be a vector of scales. Then the re-scaled set obtained from Û by shifting each vertex component towards one according to s is also a dominating set for U.

We present the proof for Lemma 13 in Appendix M.
The two extreme cases for the modified dominating sets from Lemma 13 are given by s = 0 and s = e. While the dominating set does not change for s = 0, all vertices become the all-ones vector e for s = e. Intuitively, increasing s_i shifts the i-th component of every vertex towards one. In the same way as we constructed our dominating sets in (2), this shift increases the value for all v_j with j ̸= i and decreases it for j = i. Note that this lemma is not limited to uncertainty sets that are constructed as described in (2), but holds for any dominating set created as a convex combination of vertices.
The transformation used in Lemma 13 is linear, which allows us to add s as a further decision variable to the second constraint of LP (3) for a given Û. As s = 0 gives the original dominating set, any optimal solution found with these additional decision variables is at least as good as a non-modified solution. Thus, all performance bounds shown in Section 3 also hold for these re-scaled piecewise affine policies. Note that Proposition 8 also extends to the re-scaled uncertainty set; thus, all re-scaled piecewise affine policies are strictly dominated by affine policies for integer budgeted uncertainty.
Adding s to the LP increases its size, which in practice will often lead to an increase in solution times. To limit the increase in model size, it is possible to add only those s_i where one expects s_i > 0, as not adding a variable s_i is equivalent to fixing s_i = 0. Those s_i with s_i > 0 correspond to the critical uncertainty dimensions, and an experienced decision maker with sufficient knowledge of the problem might be able to identify them a priori.

Piecewise Affine Policies via Liftings
Georghiou et al. [31] propose piecewise affine policies via liftings. In this section, we strengthen these policies by using the insights from the policies constructed in Sections 2 and 3.
To construct piecewise affine policies via liftings in the context of Assumption 1, we first choose r_i − 1 breakpoints 0 < z_{i,1} < · · · < z_{i,r_i−1} < 1 for each uncertainty dimension i ∈ {1, 2, . . ., m}. For ease of notation, let z_{i,0} := 0 and z_{i,r_i} := 1. With these breakpoints, we define the lifting operator L : R^m → R^{m_L}, where m_L := Σ_{i=1}^m r_i, componentwise by L(ξ)_{i,j} := min(max(ξ_i − z_{i,j−1}, 0), z_{i,j} − z_{i,j−1}). Further, we define the linear retraction operator R : R^{m_L} → R^m componentwise by R(ξ^L)_i := Σ_{j=1}^{r_i} ξ^L_{i,j}. Here, ξ^L_{i,j} are the components of the m_L-dimensional vector ξ^L. Note that R ∘ L : U → U is the identity. Finally, Georghiou et al. [31] construct a lifted uncertainty set U^L via linear inequalities on the lifted components for all i ∈ {1, . . ., m}, j ∈ {1, . . ., r_i − 1} (8). Uncertainty set (8) is an outer approximation of the lifting, L(U) ⊆ U^L, and satisfies R(U^L) = U. Replacing the uncertainty in Problem (1) with this lifted uncertainty set yields the lifted adjustable problem. Limiting x to affine policies in the lifted space yields piecewise affine policies in the original space, which give tighter approximations than affine policies in the original space, i.e., Z^L_AFF(U^L) ≤ Z_AFF(U) [31]. However, U^L is not a tight outer approximation of L(U), leading to little or no improvements over affine policies on some instances [31,17]. In fact, we show that in the framework of ARO, the piecewise affine policies induced by lifted uncertainty (8) are equivalent to classical affine policies, in the sense that for any optimal feasible lifted affine policy there is an affine policy with the same objective value and vice versa.
Proposition 14. Let Z_AFF be the optimal objective value for affine policies on Z_AR(U) and let Z_LIFT be the optimal objective value for lifted affine policies on Z^L_AR(U^L). Then Z_LIFT = Z_AFF.

We present the proof for Proposition 14 in Appendix N.
We overcome this shortcoming in the construction of lifted affine policies using our results on dominating uncertainty sets from Section 2. Consider the lifting with one breakpoint v_{0i} per uncertainty dimension, where v_{0i} is the i-th component of the base vector v_0 from Section 2. Then the lifting operator L maps each ξ_i to the pair (min(ξ_i, v_{0i}), max(ξ_i − v_{0i}, 0)). With this construction of L and Condition (5), it is easy to verify that the lifted uncertainty set can be tightened, which yields the new lifted uncertainty set Û^L (10). By construction, Û^L is an outer approximation of L(U) and we have L(U) ⊆ Û^L ⊆ U^L and R(Û^L) = U. Thus, affine policies on the lifted problem with uncertainty set Û^L yield valid piecewise affine policies for the original problem. Furthermore, the construction of Û^L guarantees that the lifted policies yield tighter approximations than our piecewise affine policies via domination and classical lifted policies with the same breakpoints. Consequently, all approximation bounds for piecewise affine policies via domination also hold for the strengthened piecewise affine policies via lifting.
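A minimal sketch of the lifting mechanics with a single breakpoint per dimension, assuming the standard axial split of each ξ_i into a min/max pair as described above (the random vector b stands in for the base vector v_0). It checks that the retraction R undoes the lifting L, i.e., that R ∘ L is the identity.

```python
import numpy as np

def lift(xi, b):
    """Axial lifting with a single breakpoint b_i per dimension: each xi_i is
    split into the pair (min(xi_i, b_i), max(xi_i - b_i, 0))."""
    return np.concatenate([np.minimum(xi, b), np.maximum(xi - b, 0.0)])

def retract(xi_L):
    """The retraction R sums the lifted components of each dimension."""
    m = xi_L.size // 2
    return xi_L[:m] + xi_L[m:]

rng = np.random.default_rng(2)
m = 6
b = rng.uniform(0.2, 0.8, m)   # stand-in for the breakpoints v_0i
for _ in range(100):
    xi = rng.random(m)
    assert np.allclose(retract(lift(xi, b)), xi)   # R o L is the identity
```

Affine decision rules in the lifted coordinates are then piecewise affine in ξ, with a kink exactly at the breakpoint of each dimension.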
Proposition 15. Let Z_LIFT be the optimal objective value found by the lifting policies with breakpoints v_0 and lifted uncertainty set U^L defined in (8), Z_TLIFT be the optimal objective value found by the lifting policies with breakpoints v_0 and tightened lifted uncertainty set Û^L defined in (10), and Z_SPAP be the optimal objective value found by the piecewise affine policies with re-scaling described in Section 4. Then Z_TLIFT ≤ min{Z_SPAP, Z_LIFT}.

We present the proof for Proposition 15 in Appendix O.

Numerical Experiments
In this section, we present two numerical experiments to compare the performance of our piecewise affine policies for the different constructions of Û, and of our tightened piecewise affine policies via lifting, with the performance of other policies from the literature. We compare the performance in terms of both objective value and computational time.
We run both of the following tests with hypersphere uncertainty sets and budgeted uncertainty sets. In the experiments of Ben-Tal et al. [14], piecewise affine policies performed particularly well compared to affine adjustable policies for hypersphere uncertainty and comparatively poorly for budgeted uncertainty. Accordingly, considering these two uncertainty types gives a good impression of the benefits and limitations of piecewise affine policies. Additionally, this experimental design allows us to analyze whether or not the new formulations with the tighter bounds presented in Propositions 5 and 7 have a significant impact in practice.
In our studies, we compare the following policies: the affine policies described in Ben-Tal et al. [9] (AFF), the constant policies resulting from a dominating set Û = {e} with only a single point, which by down-monotonicity corresponds to a box (BOX), the near-optimal piecewise affine policies with two pieces proposed in Bertsimas and Georghiou [17] (BG), our piecewise affine policies constructed as described in Propositions 5 and 7 (PAP), the piecewise affine policies constructed as described in Propositions 1 and 5 in Ben-Tal et al. [14] (PAPBT), our piecewise affine policies with the vertex re-scaling heuristic described in Section 4 (SPAP), and our tightened piecewise affine policies via lifting described in Section 5 (TLIFT). Note that piecewise affine policies via lifting from Georghiou et al. [31] are implicitly included in the comparison by Proposition 14. In Table 2 we give an overview of all policies compared in our experiments.
For all studies, we used Gurobi Version 9.5 on a 6-core 3.70 GHz i7-8700K processor using a single core per instance.

Gaussian Instances
We base our first set of benchmark instances on the experiments of Ben-Tal et al. [14] and Housni and Goyal [40]. Accordingly, we generate instances of Problem (1) by choosing m = l = n, d = 0, D = 1, and generating c and A randomly with c = e + αg and A perturbed by the random matrix G. Here, e is the vector of all ones, 1 is the identity matrix, g and G have independent and identically distributed (i.i.d.) standard gaussian entries, and α is a parameter that increases the asymmetry of the problem. We divide the problem into stages such that the i-th decision always belongs to the same stage as the i-th uncertainty. For the budgeted uncertainty sets, we use a budget of k = √m. We consider values of m = i² for i ∈ {2, . . ., 10} and values of α in {0, 0.1, 0.5, 1, 5}. For each pair of m, α, we consider 30 instances. To make the results more comparable, we scale all objective values by the constant policy results (i.e., Z_•/Z_BOX) and report averages over all solved instances. For each parameter pair m, α, we only consider those policies that found solutions on at least 75% of instances within a hard solution time limit of 4 hours. Additionally, we present all results on a logarithmic scale and artificially lower bound the scale for solution times by 0.01s to make the effects on higher solution times more visible.
Figure 6 shows the performance and solution time results on hypersphere uncertainty sets for the different policies. First, note that BG only finds solutions within the time limit for the smallest instances, yielding objective values comparable or marginally better than TLIFT. For the other policies, we observe that piecewise adjustable policies perform significantly better than affine adjustable policies for small values of α. The improvement increases for larger values of m, reaching almost a factor of 2 for m = 100 for our policies PAP, SPAP, and TLIFT. As expected, the performance of PAP
and SPAP for small values of α is almost indistinguishable due to the construction. Additionally, we find that TLIFT only yields marginal improvements over PAP and SPAP for small values of α. For larger values of α, the improvements of the piecewise affine adjustable policies vanish and TLIFT starts to improve over SPAP. The two policies without re-scaling (PAP and PAPBT) perform even worse than AFF for α = 5. More severely, PAPBT even performs worse than BOX, which is already a worst-case policy. Only SPAP and TLIFT achieve better results than affine adjustable policies for all values of α.
While solution times for all policies except BOX grow exponentially in the instance size, domination-based piecewise affine adjustable policies are by orders of magnitude faster than classical affine adjustable policies (AFF) and piecewise affine policies via lifting (TLIFT). These solution time improvements exceed a factor of 100 for the piecewise affine policies PAP and SPAP and a factor of 1,000 for PAPBT. Also, solution times of domination-based piecewise affine adjustable policies are barely influenced by the value of α. This is not the case for AFF and TLIFT, which take longer to solve for increasing α. While this effect is not easily visible in Figure 6 due to the logarithmic scale, the solution time difference for AFF and TLIFT between α = 0 and α = 5 reaches up to a factor of two on large instances.
Figure 7 shows the performance and solution time results on budgeted uncertainty sets. We observe that for budgeted uncertainty sets, domination-based piecewise affine policies perform slightly worse than affine policies throughout all instances. This observation nicely demonstrates that our theoretical and experimental results are aligned, as the worse performance is perfectly explained by Proposition 8, which shows that affine policies strictly dominate our piecewise affine policies. Again, we observe that for higher values of α, PAP and PAPBT perform even worse than BOX. However, the solution values for SPAP stay within 5% of the affine solution values throughout all instances. We further observe that TLIFT yields the same objective values as AFF throughout all instances. Only BG yields slightly better solutions than AFF on some instances. Solution times behave similarly to those on hypersphere uncertainty, confirming that piecewise affine policies are found by orders of magnitude faster across different instances and uncertainty types. Only for BG do solution times improve significantly, suggesting that BG is highly dependent on the shape of the uncertainty sets. Solution time and performance results for α = 0 align with the results found by Ben-Tal et al. [14] for the two-stage problem variant. This demonstrates that our generalized piecewise affine policies do not only extend all theoretical performance bounds, but also achieve comparable numerical results in a multi-stage setting. However, by breaking the symmetry with increasing α, we show that pure domination-based piecewise adjustable policies perform poorly on highly asymmetric instances, and that re-scaling (SPAP) or tightened lifting (TLIFT) constitute good techniques to overcome this shortcoming.

Demand Covering Instances
For the second set of test instances, we consider the robust demand covering problem with non-consumed resources and uncertain demands. The problem has various applications, among others in the domains of appointment scheduling, production planning, and dispatching, and is especially relevant for the optimization of service levels. Our instances consist of m_l locations, m_p planning periods, and m_e execution periods per planning period. In each execution period t, an uncertain demand ξ_{lt} has to be covered at each location l. To do so, the decision maker can buy a fixed number of resources R at a unit cost of c_R in the first stage and then distribute these R resources among the locations at the beginning of each planning period. If a demand cannot be met with the resources assigned to a location, the decision maker will either delay the demand to the next period or redirect it to another location. In either case, a fraction q^d_{tl} ∈ [0, 1] or q^r_{tll′} ∈ [0, 1] of the demand is lost. Each unit of lost demand causes costs of c_D. Mathematically, the robust demand covering problem with non-consumed resources and uncertain demands is given by the robust LP (11), where parameters and variables are summarized in Table 3.
Here, Objective (11a) minimizes the sum of resource costs and lost demand costs due to delay and relocation. Constraints (11b) ensure that all demands are fulfilled, delayed, or relocated, and Constraints (11c) upper bound the allocated resources in each planning period by the total number of available resources R.
In most real-world applications of the demand covering problem, some of the demand will be revealed before the actual demand occurs, e.g., due to already existing contracts, sign-ups, orders, or due to forecasting. To incorporate the increase in knowledge over time, we assume an uncertainty vector of the form ξ = ξ^c + ξ^p + ξ^e. Here, ξ^c is constant and known before the first-stage decision, ξ^p is revealed before each planning period, and ξ^e accounts for the short-term uncertainties revealed before each execution period. Specifically, we assume that demands are composed of the base demand d_{lt} for location l in execution period t and the scaled uncertainty components, where the uncertainty vector (ξ^p, ξ^e) is taken from one base uncertainty set U_B of dimension m = 2m_l m_p m_e. Note that short-term uncertainties have a less severe effect on demands than uncertainties known in advance. The resulting demand uncertainty set is m/2-dimensional, where in our experiments U_B is either a hypersphere uncertainty set or a budgeted uncertainty set with budget √m.

In each of the m_p ∈ {1, 3, 5, 7} planning periods, we consider m_e = 8 execution periods corresponding to the hours in a working day. We assume that a fraction q^d_{tl} = 0.1 of the demand is lost when deferred to a later execution period and consider a doubled loss rate (q^d_{tl} = 0.2) when demand is deferred to another planning period. Similarly, a fraction of the demand is lost when assigned to another location. We assume this fraction to be correlated with the distance and given by q^r_{ll′} := min(1, 0.02 · dist(l, l′)). We draw the base demands d_{lt} from the normal distribution N(10, 4). Finally, we set c_R = 1 and choose c_D ∈ {0.1, 0.25, 0.5}. For each combination of m_l, m_p, we consider 45 instances, where each possible value of c_D is used in a third of these instances.
To analyze practical expected objectives, we report, in addition to the robust objective value, a simulated average objective that the respective policies achieved on 500 randomly drawn uncertainty realizations ξ. We give a detailed description of how uniform uncertainty realizations can efficiently be sampled from the budgeted and hypersphere uncertainty sets in Appendix P. We again scale the results by those achieved by constant policies and use logarithmic scales. For each instance size, we only consider those policies that found solutions on at least 75% of instances within a hard solution time limit of 2 hours.
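As an illustration of the sampling step, the following sketch draws uniform realizations from both uncertainty sets. The hypersphere sampler folds a uniform ball sample into the non-negative orthant (valid because the ball is symmetric under coordinate sign flips); the budgeted sampler uses naive rejection sampling and is only a simple stand-in for the efficient scheme of Appendix P.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_hypersphere(m):
    """Uniform sample from {xi >= 0, ||xi||_2 <= 1}: take the absolute value
    of a uniform sample from the full unit ball (gaussian direction, radius
    distributed as U^(1/m))."""
    g = rng.standard_normal(m)
    point = g / np.linalg.norm(g) * rng.random() ** (1 / m)
    return np.abs(point)

def sample_budget(m, k, max_tries=100_000):
    """Uniform sample from {xi in [0,1]^m, ||xi||_1 <= k} via rejection
    sampling from the unit box; a naive stand-in for Appendix P's scheme."""
    for _ in range(max_tries):
        xi = rng.random(m)
        if xi.sum() <= k:
            return xi
    raise RuntimeError("acceptance rate too low for rejection sampling")

m, k = 9, 3
xs = [sample_hypersphere(m) for _ in range(200)]
assert all(np.all(x >= 0) and np.linalg.norm(x) <= 1 for x in xs)
xb = [sample_budget(m, k) for _ in range(200)]
assert all(np.all((x >= 0) & (x <= 1)) and x.sum() <= k for x in xb)
```

Rejection sampling degrades quickly for small budgets k relative to m, which is precisely why a dedicated sampling scheme is worthwhile in the experiments.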
First, we observe that BG did not solve any instance within the time limit, which can be attributed to the fact that our demand covering instances are significantly larger than our gaussian instances.
For the remaining policies, Figure 8 shows the performance and solution time results on demand covering instances with hypersphere uncertainty. Compared to our previous experiment (see Section 6.1), we no longer observe the strong objective improvements of piecewise affine policies over affine adjustable policies. Still, our piecewise affine formulations give similar results as affine policies, and we observe small improvements on instances with a larger number of planning periods, with TLIFT yielding strict improvements for m_p ≥ 3. On the simulated realizations, improvements of PAP and SPAP over affine adjustable policies can already be seen for m_p ≥ 3, which might be of interest for a decision maker with practical interest beyond worst-case solutions.
For the solution times, we observe similar improvements to the ones observed on the gaussian instances. All domination-based piecewise affine policies can still be found by orders of magnitude faster than affine policies and TLIFT. Interestingly, PAP is solved similarly fast as PAPBT on these instances, while still achieving up to 15% better objective values. The largest instance that could be solved by affine adjustable policies within two hours consisted of 320 uncertainty variables and 700 decision variables, while the largest instance solved by PAPs was more than three times larger, with 1,120 uncertainty variables and 6,230 decision variables.
Figure 9 shows the results on demand covering instances with budgeted uncertainty sets. For budgeted uncertainty sets, domination-based piecewise affine policies perform worse than affine adjustable policies throughout all instances, which again can be explained by Proposition 8. Notably, PAPBT even performs worse than constant policies on most instances. In this setting, domination-based piecewise affine policies remain better only from a solution time perspective, as they still solve by orders of magnitude faster than affine policies. Also, TLIFT does not yield any improvements over AFF.
While in this set of experiments piecewise affine policies do not show the same improvements in the objective over affine adjustable policies, they still perform slightly better with hypersphere uncertainty on larger instances. Also, they still solve by orders of magnitude faster, which makes them an attractive alternative for large-scale optimization in practice.

Discussion
In the experiments presented in Sections 6.1 and 6.2, some results, e.g., the solution time improvements of piecewise affine policies over affine policies, are consistent throughout all instances. However, other results strongly depend on the set of benchmark instances used. In the following, we give an explanation for the strong solution time improvements, discuss two of the main deviations that we observe between our results on gaussian instances and demand covering instances, and give intuitions of why these differences occur.
Size of Robust Counterparts: Throughout all experiments, we see strong solution time improvements of piecewise affine policies over affine policies, including affine policies on lifted uncertainty (TLIFT). These can be explained by the respective robust counterparts. The robust counterpart for piecewise affine policies is given by LP (3); for the robust counterpart of affine policies, we refer to Ben-Tal et al. [9]. Counterparts of piecewise affine policies have O(nm) variables compared to O(nm + lm) variables for affine policies, and both counterparts have O(lm) constraints. More critically, constraints for piecewise affine policies contain at most O(n) variables each and feature a block structure, which solvers exploit to significantly speed up the solution process. This block structure is connected only by the nonanticipativity constraints. In the robust counterpart of affine policies, on the other hand, O(l) constraints have up to O(nm) variables, resulting in a denser constraint matrix and the lack of a block structure. Moreover, the robust counterpart for affine policies on hypersphere uncertainty sets is no longer linear. Instead, a quadratic program has to be solved, which tends to be computationally more challenging.
Solution Times of PAPBT: In the experiments, we see that PAPBT solves by a factor of 10 to 50 faster than PAP on gaussian instances, but both find solutions similarly fast on demand covering instances. This can be explained by the construction of PAPBT and the structure of the instances' constraints. On the gaussian instances, A and x are non-negative and D is the identity matrix. In the construction of dominating sets Û for PAPBT, most of the vertices are chosen to be scaled unit vectors. As a consequence, most Constraints (3c) in LP (3) have a zero right-hand side, such that they trivially hold. Consequently, these constraints can be eliminated, which reduces the total number of constraints by a factor of O(m). On demand covering instances, however, A contains negative entries. Thus, no constraints trivially hold and none can be removed. For a decision maker who is primarily interested in fast policies, this gives a good criterion for when PAPBT can improve solution times and when no such improvements can be expected.
Performance Differences Between Gaussian and Demand Covering Instances: We observe that the strong performance improvements of piecewise affine policies over affine policies on gaussian instances with hypersphere uncertainty do not transfer to our demand covering instances. This suggests that the relative performance of piecewise affine and affine policies depends significantly on the structure of the problem at hand. An intuitive explanation lies in the policies' construction.
Recall that piecewise affine policies derive solutions by finding vertex solutions x_i that can be extended to a full solution. Thereby, x_0 focuses on finding a good solution for uncertainty realizations where all uncertainties take equal values, and each x_i focuses on finding a good recourse to uncertainty ξ_i. Thus, good results can be expected when there are (a) synergy effects that can be utilized by x_0, and (b) good universal recourse decisions for each uncertainty ξ_i that do not depend on the realization of other uncertainty dimensions and can be exploited by x_i. Affine policies, on the other hand, directly find solutions on the original uncertainty set. In doing so, they do not depend as strongly on good universal recourse decisions as piecewise affine policies do. However, they also lack the ability to use synergy effects in the way the vertex solution x_0 does.
The gaussian instances used fulfill both properties that are favorable for piecewise affine policies. By being based on a unit matrix, A has relatively large values along the diagonal, leading to the existence of good universal recourse decisions. Additionally, the relatively small non-negative off-diagonal entries lead to synergy effects for uncertainty realizations with many small values. Demand covering instances, however, do not fulfill these properties. The question of how to redirect demand optimally depends heavily on the demand observed at other locations. Also, synergistic effects solely emerge when multiple demands occur at the same location in a single planning period.
On general instances in practice, we would thus not expect to see the same performance improvements that could be observed on our gaussian benchmark instances. Still, piecewise affine policies find solutions orders of magnitude faster than affine policies and achieve good results throughout all benchmark instances with hypersphere uncertainty. Additionally, Properties (a) and (b) give intuitive criteria for when strong objective improvements over affine policies can be expected.

Conclusion
In this work, we presented piecewise affine policies for multi-stage adjustable robust optimization. We construct these policies by carefully approximating uncertainty sets with a dominating polytope, which yields a new problem that we solve efficiently with a linear program. By making use of the problem's structure, we then extend solutions for the new problem with approximated uncertainty to solutions for the original problem. We show strong approximation bounds for our policies that extend many previously best-known bounds for two-stage ARO to its multi-stage counterpart. By doing so, we contribute towards closing the gap between the state of the art for two-stage and multi-stage ARO. To the best of our knowledge, the bounds we give are the first bounds shown for the general multi-stage ARO problem. Furthermore, our bounds yield constant-factor as well as asymptotic improvements over the state-of-the-art bounds for the two-stage problem variant.
In two numerical experiments, we find that our policies find solutions a factor of 10 to 1,000 faster than affine adjustable policies, while mostly yielding similar or even better results. Especially for hypersphere uncertainty sets, our new policies perform well and sometimes even outperform affine adjustable policies by up to a factor of two. We observe particularly high improvements on instances that exhibit certain synergistic effects and allow for universal recourse decisions. However, on some instances where few uncertainty dimensions have a high impact on the objective, pure piecewise affine policies perform particularly badly by design, sometimes even worse than constant policies. To mitigate this shortcoming, we present an improvement heuristic that significantly improves the solution quality by re-scaling the critical uncertainty dimension. Furthermore, we construct new tightened piecewise affine policies via lifting that integrate the two frameworks of piecewise affine policies via domination and piecewise affine policies via lifting and combine their approximative power.
While this work extends most of the best-known approximation results for a relatively general class of ARO problems from the two-stage to the multi-stage setting, it remains an open question whether other strong two-stage ARO results can be generalized to multi-stage ARO in a similar manner. Answering this question remains an interesting area for further research. In this context, binary and uncertain recourse decisions remain particularly relevant challenges. Our analysis in Section 2.3 has shown that the extension of our policies to encompass these recourse decision types is not straightforward. Nevertheless, exploring the integration of our methodology into the established approaches of piecewise constant policies and k-adaptability, which have proven to be effective in these cases, appears to be a promising starting point for future work. Another interesting direction for future research is the extension of piecewise affine policies and the concept of domination to adjustable data-driven and distributionally robust optimization. More specifically, we believe that one can obtain tractable data-driven policies by directly fitting the polyhedral uncertainty sets used to construct our policies from data.

Appendix A Comparison of Literature Approximation Bounds
Table 4 summarizes existing approximation bounds for some commonly used uncertainty sets in two-stage ARO. We point out that, in addition to being less restrictive, our bounds for multi-stage ARO presented in Table 1 outperform all these bounds except the ones by Housni and Goyal [40], which require the significantly restrictive assumption that A, x, and c are non-negative.

Appendix B Proof of Theorem 1
Proof. We split the proof into two parts. First, we handle the cases where at least one of Z_AR(U) and Z_AR(Û) is negative and show that this already implies that both Z_AR(U) and Z_AR(Û) are unbounded. Second, we assume Z_AR(U), Z_AR(Û) ≥ 0 are bounded and prove that in this case the desired performance bounds hold.
Part 1: Let Ũ ∈ {U, Û} be such that Z_AR(Ũ) < 0. Then, there exists a solution x with strictly negative worst-case objective value. Now, assume that Z_AR(U) or Z_AR(Û) is bounded and let Ū ∈ {U, Û} be such that Z_AR(Ū) is bounded with an optimal solution x̄. Arbitrarily fix one ξ̃ ∈ Ũ and consider the constant vector x(ξ̃). Then x̄(ξ) + x(ξ̃) is a feasible solution for Z_AR(Ū), as for all ξ ∈ Ū: (a) holds because x̄ and x are feasible solutions, and (b) holds as ξ̃, D, and d are non-negative. For the objective, we then find a strictly smaller value, which contradicts the optimality of x̄. Thus, Z_AR(Ū) cannot be bounded. We have thus shown that if either of Z_AR(U) and Z_AR(Û) is negative, both have to be unbounded.
Part 2: Assume Z_AR(U), Z_AR(Û) ≥ 0 are bounded. Let x̄ be an optimal solution for Z_AR(Û). Furthermore, let h : U → Û be the domination function from Definition 1. We claim that x := x̄ ∘ h is a feasible solution for Z_AR(U). First, we see that by the definition of h the solution x fulfills the nonanticipativity requirements: the decisions in stage t depend only on the uncertainty revealed up to that stage. For the constraints, (a) follows from the feasibility of x̄ and (b) follows from the definition of h and D being non-negative. Thus, x is a well-defined feasible solution for Z_AR(U). For the other direction, let x* be an optimal solution for Z_AR(U). Then, for all ξ̂ ∈ Û we have (1/β)ξ̂ ∈ U by definition of β. We define x̂(ξ̂) := βx*((1/β)ξ̂) and find that (a) follows from the feasibility of x* and (b) from the non-negativity of d. Thus, x̂ is a well-defined feasible solution for Z_AR(Û). Having shown both inequalities, this concludes the proof.
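The transformation in Part 2 can be illustrated numerically. The sketch below uses a toy instance and a hypothetical componentwise domination function h; it only demonstrates why h(ξ) ≥ ξ together with a non-negative D preserves feasibility, not the paper's concrete construction.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 4
A = np.eye(n)                       # toy recourse matrix
D = rng.random((n, m))              # non-negative, as required by the proof
d = np.zeros(n)

def h(xi):
    # hypothetical domination function: rounds each component up onto a grid,
    # so h(xi) >= xi componentwise (the property used in step (b) of the proof)
    return np.ceil(xi * 4) / 4

def x_bar(xi_hat):
    # a policy that is feasible on the dominating set by construction:
    # A x_bar(xi_hat) = D xi_hat + d holds with equality
    return np.linalg.solve(A, D @ xi_hat + d)

# feasibility of x := x_bar o h on samples from the hypersphere set U
for _ in range(100):
    xi = rng.random(m)
    xi /= max(1.0, np.linalg.norm(xi))
    x = x_bar(h(xi))
    assert np.all(A @ x >= D @ xi + d - 1e-9)
print("x = x_bar o h is feasible on all sampled points of U")
```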

Appendix C Proof of Lemma 2
Proof. We begin by showing Z_AR(Û) ≥ Z_LP(Û). Let x̄ be an optimal solution of Z_AR(Û). Then, (x_i, z) is a valid solution for Z_LP(Û), where x_i and z are defined accordingly. The first set of LP constraints (3b) holds by definition of z. The second set of constraints (3c) holds as x̄ is a solution of Z_AR(Û) and v_i ∈ Û, and the last set of constraints (3d) holds by nonanticipativity of x̄ and the definition of v_i. For the other direction, let (x_i, z) be an optimal solution for Z_LP(Û). Define λ_i(ξ̂) for each ξ̂ ∈ Û accordingly. For ease of notation, we drop the explicit dependence on ξ̂ in the following and write λ_i. We directly find the corresponding identity for all ξ̂ ∈ Û, as by definition of Û each ξ̂ is a convex combination of v_0, ..., v_m. Also, note that λ_i depends only on the i-th component of the uncertainty vector ξ̂. Using this, we can define a nonanticipative decision vector x(ξ̂) = (x^1(ξ̂_1), ..., x^T(ξ̂_T)), where I_t := {i ∈ {1, ..., m} | e_i^t ≠ 0} is the index set corresponding to the uncertainties revealed up to stage t. By the nonanticipativity constraints (3d), we have x_i^t = x_0^t for all 1 ≤ t ≤ T and i ∈ {1, ..., m} \ I_t. Thus, x is a convex combination of x_0, ..., x_m. Using this, we find that (a) holds because all x_i are valid solutions for (3), and (b) holds by definition of λ_i, where we use for (a) that x̄ is a valid solution for Z_AR(Û). For (b), we use that x is a convex combination of x_0, ..., x_m with convex coefficients (1 − Σ_{i=1}^m λ_i), λ_1, ..., λ_m. The convex sum is upper bounded by its largest summand c^⊺x_i. We can thus take the maximum over all these c^⊺x_i, which equals the LP solution value Z_LP(Û).
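For a dominating set of construction (2) with vertices of the assumed form v_i = v_0 + ρ_i e_i, the convex coefficients used in the proof can be computed componentwise as λ_i(ξ̂) = (ξ̂_i − v_0i)/ρ_i, which indeed depends only on the i-th component. The sketch below (illustrative data) verifies that these coefficients form a valid convex combination:

```python
import numpy as np

m = 3
v0 = np.array([0.2, 0.2, 0.2])     # base vertex v_0
rho = np.array([0.8, 0.6, 0.7])    # offsets rho_1, ..., rho_m
V = v0 + np.diag(rho)              # rows are the vertices v_1, ..., v_m

xi = np.array([0.4, 0.5, 0.2])     # a point of the dominating set with xi >= v0
lam = (xi - v0) / rho              # lambda_1, ..., lambda_m
lam0 = 1.0 - lam.sum()             # coefficient of v_0

# the coefficients are non-negative and reproduce xi as a convex combination
assert np.all(lam >= 0) and lam0 >= 0
recon = lam0 * v0 + lam @ V
assert np.allclose(recon, xi)
print("lambda:", lam, "lambda_0:", lam0)
```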

Appendix D Proof of Lemma 3
Proof. Let S be the set of all permutations on {1, ..., m} and, for any vector ξ, let σ(ξ) be the vector with components permuted according to σ. Let S_ji ⊆ S be the subset of all permutations mapping component j to component i. Let Û be a dominating uncertainty set for U of the form in (2) such that the approximation factor β is minimal. Let v_0, ρ_1, ..., ρ_m be the parameters defining the vertices of Û. Define μ := (1/m)e^⊺v_0 and ρ̄ := (1/m)Σ_{i=1}^m ρ_i. Then for each i ∈ {1, ..., m}, consider μe + ρ̄e_i, where U = {ξ | ∥ξ∥₂² ≤ 1} ⊂ [0, 1]^m is the hypersphere uncertainty set, α ∈ R is the first-stage decision variable, and x ∈ R^m are the second-stage decisions that may depend on the uncertainty realization ξ.
We begin by finding a lower bound for affine policies on this problem class. By symmetry of the problem, there always exists an optimal affine adjustable solution x(ξ) of the given form for some a, b, c ∈ R. To see this, let x*, α be an optimal affine solution to the above problem and define the symmetrized solution, where S is the set of all permutations on m elements. Then (a) follows from the permutation invariance of αe and (b) follows from the permutation invariance of U and the feasibility of x*. Furthermore, (a) follows from the permutation invariance of e, (b) follows from the subadditivity of the maximum, and (c) follows from the permutation invariance of U. Non-negativity of x follows from 0 ∈ U. Thus, x, α is an optimal feasible solution. Additionally, for any σ ∈ S, (a) follows from (σ^{-1} ∘ ·) being an automorphism on S. Let σ_ij be the permutation switching components i and j. Then x(σ_ij(0)) = σ_ij(x(0)) implies x_i(0) = x_j(0); x(σ_ij(e_j)) = σ_ij(x(e_j)) implies x_i(e_i) = x_j(e_j) and x_k(e_i) = x_k(e_j) for i ≠ k ≠ j; and x(σ_ij(e_k)) = σ_ij(x(e_k)) for i ≠ k ≠ j implies x_i(e_k) = x_j(e_k). This guarantees the claimed structure. Let ξ ∈ U and, for some i ≤ m, define the vector ξ′ := ξ − ξ_i e_i. Then ξ′ is also in U, and by x_i(ξ′) ≥ 0 we find that every feasible solution has to fulfill the corresponding inequality for all ξ ∈ U.
Additionally, by αe + x(e_i) ≥ e_i, every feasible solution also fulfills the corresponding bound. Using this, we obtain a lower bound for the maximum over U in the objective function. Even though fully piecewise affine policies are in general slightly more flexible than affine policies, both are equivalent on Problem (1) under some additional assumptions.

Lemma 17. Consider Problem (1) with budgeted uncertainty and integer budget. Let Z_FPAP be the optimal value found by a fully piecewise affine policy with modified right-hand side uncertainty as in Lemma 16 and let Z_AFF be the optimal value found by an affine policy. Then the two values coincide.

For the proof of Lemma 17, we first need the following helpful result, which allows us to construct a robust counterpart of Problem (1) with fully piecewise affine adjustable policies in a similar manner as robust counterparts for affine policies are constructed. Here, (a) follows from (ξ − ζ)_+ ≤ diag(e − ζ)ξ. Inequality (b) follows as e_{I_k} is an optimal choice for Problem (20). For (c), observe that diag(e − ζ)e_{I_k} = (e_{I_k} − ζ)_+. Finally, (d) follows as e_{I_k} is a feasible choice for Problem (21). As (19) and (21) are the same problem, all inequalities are in fact equalities.
To conclude the proof, note that Problem (20) is a linear optimization problem, and taking its dual yields Problem (18).
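The equality of Lemma 18 can be checked numerically on illustrative data. With non-negative c, the primal optimum has the closed form described in the proof (the sum of the k largest values c_i(1 − ζ_i)); the dual LP (18) is solved with scipy:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0, 2.0, 5.0])      # illustrative data, c >= 0
zeta = np.array([0.0, 0.5, 0.0, 0.2])
k = 2
m = len(c)

# primal optimum via the closed form: sum of the k largest c_i * (1 - zeta_i)
v = c * (1.0 - zeta)
primal = np.sort(v)[-k:].sum()

# dual LP over (alpha, beta_1, ..., beta_m): minimize k*alpha + e^T beta
cost = np.concatenate(([k], np.ones(m)))
# constraints beta_i + alpha >= v_i  <=>  -alpha - beta_i <= -v_i
A_ub = -np.hstack([np.ones((m, 1)), np.eye(m)])
res = linprog(cost, A_ub=A_ub, b_ub=-v, bounds=[(0, None)] * (m + 1))
dual = res.fun

print(primal, dual)  # equal by LP duality
```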
Having this result, we now show Lemma 17.
Proof of Lemma 17. To prove this result, we show that the robust counterparts of the two policies on Problem (1) with integer budget uncertainty are equivalent. Let k ∈ N_+ be the budget.
Then Problem (1) with affine policies becomes the following, where A_i and D_i are the i-th rows of A and D, respectively, and d_i is the i-th entry of d. Dualizing the subproblems, we find this to be equivalent to Z_AFF(U) = min_{P,q,α^0,...,α^l,β^0,...,β^l} e^⊺β^0 + kα^0 + c^⊺q s.t.
Similarly, Problem (1) with fully piecewise affine policies and modified right-hand side becomes the corresponding program. Using Lemma 18 on the subproblems, we find this to be equivalent. We now show that for each solution of Problem (23) there is an equivalent solution with ζ = 0. Let P, q, ζ, α^0, ..., α^l, β^0, ..., β^l be a feasible solution to Problem (23). We replace P by P′ := P diag(e − ζ), β^i by β′^i := β^i + diag(ζ)D_i^⊺ for i > 0, and ζ by 0, and leave all other decision variables untouched. As none of the variables in the objective function was changed, it is sufficient to show that the new solution is again feasible. Feasibility of Constraint (23b) directly follows from P′ diag(e) = P diag(e − ζ). For Constraint (23c), (a) follows from the definition of β′^i, (b) from the feasibility of the original solution, and (c) from the definition of P′. Similarly, Constraint (23d) is fulfilled, so we can w.l.o.g. set ζ = 0 in Problem (23). In doing so, we transform Problem (23) into Problem (22). Thus, the robust counterparts of affine policies and fully piecewise affine policies with modified right-hand side are equivalent.
Having shown these results, Proposition 8 directly follows by combining Lemmas 16 and 17. Note that Lemma 18, which is crucial for the construction of tractable reformulations of fully piecewise affine policies, depends heavily on the linear structure of budgeted uncertainty and the existence of integer optimal solutions. Thus, the results cannot easily be extended to other settings, and in general one cannot hope for tractable reformulations of FPAPs. Furthermore, for (a) we use μ ≥ 0 and a ≤ 1, and (b) follows as the term is maximized for j such that βv_i ∈ U is fulfilled, which is minimized for μ.

Appendix K Proof of Proposition 11
We prove Proposition 11 by explicitly constructing a dominating set fulfilling the desired properties.
To find a good choice of ρ_1, ..., ρ_m, v_0 for the construction of Û in (2), we use the iterative approach described in Algorithm 1. Intuitively, we increase the base vertex v_0, as well as an upper bound for ρ_i, in each iteration. We bound the approximation factor β by bounding the maximal number of iterations of the algorithm. Algorithm 1 is a refinement of Algorithm 1 in Ben-Tal et al. [14] and uses a different updating step. This modified updating step leads to a less aggressive increase of the base vertex v_0, ultimately improving the approximation bound by a factor of 1/2.
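The paper's Algorithm 1 is not reproduced here; the following is a schematic toy version of the underlying idea only. It grows a base vertex by componentwise maxima of not-yet-dominated points until a scaled copy dominates a finite sample of U. The sampling, the target factor beta, and the componentwise domination test are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5
# finite surrogate sample of the hypersphere set U = {xi >= 0 : ||xi||_2 <= 1}
samples = np.abs(rng.normal(size=(500, m)))
samples /= np.maximum(1.0, np.linalg.norm(samples, axis=1, keepdims=True))

beta = 2.0                      # hypothetical target approximation factor
v = np.full(m, 1e-3)            # small initial base vertex
for _ in range(600):
    gaps = samples - beta * v   # componentwise violation of domination
    worst = samples[np.argmax(gaps.max(axis=1))]
    if np.all(worst <= beta * v):
        break                   # termination: every sampled point is dominated
    v = np.maximum(v, worst)    # monotone update of the base vertex

print("beta * v dominates all sampled points")
```

Each non-terminating iteration permanently dominates at least one more sample, which mirrors how the iteration count (and hence beta) is bounded in the proof.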
Proof. In this proof, let J := (β − 1)/2 be the index j at termination of Algorithm 1. First, we show that Û is a valid dominating set for U using Criterion (5). Let ξ ∈ U. Then inequality (a) follows from the termination criterion of Algorithm 1, (b) follows as v^j only increases throughout the algorithm and thus v^{J′} ≥ v^j for all j ≤ J′, (c) follows as by ξ^{J′} ∈ U the maximum over U is an upper bound, (d) follows from the choice of ξ^j in Algorithm 1, (e) follows from the construction of v^j, (f) follows as by induction v^{j+1}_i = max(v^j_i, ξ^j_i) = max_{j′≤j} ξ^{j′}_i, and (g) follows by e_j ∈ U ⊆ [0, 1]^m.
Finally, the third condition holds by L_{i1}(ξ) = z_{i1}ξ_i ≤ z_{i1}, which follows from ξ_i ≤ 1 in Assumption 1. Thus, L(ξ) ∈ U^L. Let x^L be an optimal affine solution for Z^L_AR(U^L). As L is a nonanticipative affine map, the composition x^L ∘ L is also nonanticipative and affine. By L(U) ⊆ U^L, the decision x := x^L ∘ L is a feasible solution for Z_AR(U). Finally, we find the claimed bound.

Appendix O Proof of Proposition 15
Proof. To prove Z_TLIFT ≤ Z_LIFT and Z_TLIFT ≤ Z_SPAP, we show that any affine solution for Z^L_AR(U^L) and any piecewise affine solution with re-scaling for Z_AR(Û) can be transformed into a feasible affine solution for Z^L_AR(Û^L) with at most the same objective value. For Z_TLIFT ≤ Z_LIFT, let x^L : U^L → R^n be an optimal affine solution to Z^L_AR(U^L). Then x^L is also a feasible affine solution for Z^L_AR(Û^L) by Û^L ⊆ U^L. For the objective value, inequality (a) also follows from Û^L ⊆ U^L. For Z_TLIFT ≤ Z_SPAP, let x(ξ̂), given by vertex solutions x_0, ..., x_m and scaling factor s, be an optimal solution to Z_LP(Û) with re-scaling as described in Section 4. We define the map h^L_s : Û^L → Û′ from the tightened lifted uncertainty set Û^L defined in Equation (10) to the re-scaled dominating uncertainty set Û′ from Lemma 13. Here, (a) follows as ξ^L_{i1} + ξ^L_{i2} ≤ 1 by Assumption 1, and (b) follows from ξ^L_{i1} ≤ v_{0i}, as v_{0i} is the break-point. Using the definition of the re-scaled vertices v′_i from Lemma 13 and the tightening constraint in Definition (10), this is a valid convex combination for all ξ^L ∈ Û^L, and thus h^L_s(Û^L) ⊆ Û′ as claimed. Thus, x^L := x ∘ h^L_s is a valid solution for Z^L_AR(Û^L). Using that x is given by the vertex solutions x_0, ..., x_m together with (26), we find the claimed inequality.

Figure 4: Comparison of our dominating set Û (blue, solid frame) and the dominating set Û_BT proposed in Ben-Tal et al. [14] (green, dashed frame) for the hypersphere uncertainty set U in m = 2 (a) and m = 3 (b), (c) uncertainty dimensions

Figure 5: Re-scaling of the expensive vertex v_1 in a dominating set Û for uncertainty set U. (a) shows the change from the dominating set Û (blue, dashed) with its vertices v_0, v_1, v_2 to the new re-scaled dominating set Û′ (green, solid) with vertices v′_0, v′_1, v′_2 for s_1 = 0.5, s_2 = 0. (b) shows the costs for the vertex solutions x_i with maximal cost z (blue, dashed) compared to the costs for the re-scaled vertex solutions x′_i with maximal cost z′ (green, solid)

Figure 6: Relative objective values (a) and solution times (b) for different policies on gaussian instances with hypersphere uncertainty

Figure 7: Relative objective values (a) and solution times (b) for different policies on gaussian instances with budgeted uncertainty

Figure 8: Relative objective values (a) and solution times (b) for different policies on demand covering instances with hypersphere uncertainty

Figure 9: Relative objective values (a) and solution times (b) for different policies on demand covering instances with budgeted uncertainty

Lemma 18. Let c ∈ R^m, ζ ∈ [0, 1]^m, and k ∈ N_+. Then the following two problems yield the same objective value:

max_ξ c^⊺(ξ − ζ)_+ s.t. e^⊺ξ ≤ k, 0 ≤ ξ ≤ e, (17)

and

min_{α,β} e^⊺β + kα s.t. β + αe ≥ diag(e − ζ)c, α ≥ 0, β ≥ 0. (18)

Proof. Let I_k be the indices of the k largest values c_i(1 − ζ_i) and let e_{I_k} be the indicator vector of I_k. Then for Problem (17) we find the corresponding chain of inequalities.

Here, (a) and (b) again follow by the definition of β′^i and the feasibility of the initial solution, and (c) follows by the definition of P′. Finally, β′^i = β^i + diag(ζ)D_i^⊺ ≥ β^i ≥ 0 holds as ζ and D are both non-negative. Thus, we can w.l.o.g. set ζ = 0 in Problem (23).

am + (1 − a)m + am². Finally, we close the proof by showing the desired asymptotic approximation bounds. The case a > m^{−2/3} directly follows from β = 1/√a ≤ 1/√(m^{−2/3}) = O(m^{1/3}), and for a ≤ m^{−2/3}, (a) uses a ≤ m^{−2/3} and (b) uses m ≥ 2.

Table 1: Performance bounds of the piecewise affine policy for different uncertainty sets

Table 2: Overview of policies compared in the experiments

Table 3: Notation for the robust demand covering problem with non-consumed resources and uncertain demands

Table 4: Approximation Bounds for Two-Stage ARO

(5) using Construction (2). This construction corresponds to dominating each U_t with t ∈ T_2 by the unit vector e. Here, {e} dominates U_t by Assumption 1. We dominate the remaining U_t for t ∈ T_1 with a combined polytope. Then the maximal sum of convex factors for the vertices is bounded as follows: (a) follows from the definition of v′_0 and ρ′^t_i, (b) follows from U being stagewise independent, (c) follows from Condition (5) and Û_t being a dominating set for U_t, and (d) follows from the definition of β. Thus, Û fulfills Condition (5) and is a valid dominating set for U. Let β′ := max_{t∈T_2} β′_t. To see that max(β, β′) is indeed an upper bound for the approximation factor, consider the map to the re-scaled dominating uncertainty set Û′ defined in Lemma 13, h^L_{s_i}(ξ^L) := (1 − s_i)(v_{0i} + ξ^L_{i2}) + s_i.