On the Relation Between Affinely Adjustable Robust Linear Complementarity and Mixed-Integer Linear Feasibility Problems

We consider adjustable robust linear complementarity problems and extend the results of Biefel et al. (2022) towards convex and compact uncertainty sets. Moreover, for the case of polyhedral uncertainty sets, we prove that computing an adjustable robust solution of a given linear complementarity problem is equivalent to solving a properly chosen mixed-integer linear feasibility problem.


Introduction
We consider affinely adjustable robust (AAR) linear complementarity problems (LCPs). The classic, i.e., deterministic, LCP is defined as follows. Given a matrix M ∈ R^{n×n} and a vector q ∈ R^n, the LCP(q, M) is the problem of finding a vector z ∈ R^n that satisfies the conditions

z ≥ 0, M z + q ≥ 0, z^⊤(M z + q) = 0 (1)

or of showing that no such vector exists. In the following, we use the standard ⊥-notation and abbreviate (1) as

0 ≤ z ⊥ M z + q ≥ 0. (2)

LCPs are very important both in applications and in mathematical theory itself. For instance, they are used to model market equilibrium problems in many applied studies of gas or electricity markets [11] but also play an important role in mathematical optimization, game theory, and general matrix theory. We refer the interested reader to the seminal book [10] for an overview.
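The three conditions in (1) are easy to verify numerically for a given candidate vector. The following sketch does so with NumPy; the helper name and the small instance (M, q, z) are our own illustrations and do not appear in the text.

```python
# Hypothetical helper (not from the text): check whether a vector z
# solves the LCP(q, M) defined by (1), up to a numerical tolerance.
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check z >= 0, M z + q >= 0, and z^T (M z + q) = 0."""
    z = np.asarray(z, dtype=float)
    w = M @ z + q                      # the complementary slack vector
    return bool((z >= -tol).all() and (w >= -tol).all() and abs(z @ w) <= tol)

# Small instance: M is the 2x2 identity, q = (-1, 2).
M = np.eye(2)
q = np.array([-1.0, 2.0])
z = np.array([1.0, 0.0])              # z_1 complements (M z + q)_1 = 0
print(is_lcp_solution(M, q, z))       # True
```

Here z_1 = 1 is active with (M z + q)_1 = 0, while z_2 = 0 complements (M z + q)_2 = 2 > 0.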
Although there is a very strong connection between LCPs and mathematical optimization, and although the latter has been studied extensively under data uncertainty in recent decades, the field of LCPs under uncertainty is still in its infancy. Stochastic approaches can be found in [7-9, 15] and are mainly based on minimizing the expected residual gap of the uncertain LCP. On the other hand, robust approaches for uncertain LCPs have been considered recently as well. The first rigorous analysis of robust LCPs can be found in [19, 20], where the authors apply the concept of strict robustness [18] to LCPs; this concept has later been used in [16] in the context of Cournot-Bertrand equilibria in power networks. Moreover, in [13, 14], LCPs have been studied using Γ-robustness as introduced in [3, 4, 17]; see [6, 12] for some applications in power markets.
The most recent paper on robust LCPs is, to the best of our knowledge, [5], where robust LCPs are studied using the concept of adjustable robustness [2, 21]. In [5], the authors study adjustable robust LCPs in the simplest setting, namely for affine decision rules and box uncertainties. In this short note, we stay with affine decision rules but generalize the results to general convex and compact uncertainty sets U. In this context, our contribution is twofold. First, we characterize AAR solutions of robust LCPs and, second, we use this characterization to prove that the AAR LCP with a polyhedral uncertainty set is equivalent to a properly chosen mixed-integer linear problem (MILP).
Let us finally note that our study is related to [1], where the authors consider multi-parametric LCPs for sufficient matrices M. However, our robust approach as well as the studied relation to MILPs differ from the concepts and results of [1].
We introduce the problem under consideration in Section 2 and derive our main results in Section 3. Afterward, we comment on some special cases and extensions in Section 4.

Problem Statement
We now define the adjustable robust LCP with affine decision rules. To this end, let M ∈ R^{n×n} and q ∈ R^n be given as before and let T ∈ R^{n×k} be given. We assume that q is perturbed by T u with u ∈ U. In what follows, we assume that U ⊂ R^k is a convex and compact uncertainty set that, w.l.o.g., contains 0 in its relative interior, i.e., 0 ∈ relint(U). Then, the affinely adjustable robust LCP(q, M, T, U) consists of finding an affine decision rule, i.e., we want to determine

D ∈ R^{n×k} and r ∈ R^n such that z(u) = Du + r solves the LCP(q + T u, M) for all u ∈ U. (3)

Equivalently, we can state the problem more explicitly as

find D ∈ R^{n×k} and r ∈ R^n such that 0 ≤ Du + r ⊥ M(Du + r) + q + T u ≥ 0 holds for all u ∈ U. (4)

Without loss of generality, we may assume that T ∈ R^{n×k} has full column rank; see [1]. In many applications, some variables are non-adjustable and thus have to be fixed before the uncertainty realizes. To model these so-called here-and-now variables, we simply require that the first h rows of D are zero for some h < n. For more details, we refer to [5].
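For a box uncertainty set, robust feasibility of the affine parts in (4) can be tested exactly at the box vertices, since an affine function attains its minimum over a polytope at a vertex; complementarity is quadratic in u, so the sketch below additionally samples random points and is a numerical sanity check rather than a proof. The helper name and the toy data are our own assumptions.

```python
import itertools
import numpy as np

def aar_feasible_on_box(M, q, T, D, r, lb, ub, tol=1e-9, n_samples=100):
    """Numerically check z(u) = D u + r for the AAR LCP over the box
    U = [lb, ub]: nonnegativity of the affine parts is exact when tested
    at the box vertices; complementarity (quadratic in u) is additionally
    tested at random points -- a sanity check, not a proof."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rng = np.random.default_rng(0)
    vertices = [np.array(v) for v in itertools.product(*zip(lb, ub))]
    samples = [lb + (ub - lb) * rng.random(lb.size) for _ in range(n_samples)]
    for u in vertices + samples:
        z = D @ u + r
        w = M @ z + q + T @ u
        if (z < -tol).any() or (w < -tol).any() or abs(z @ w) > tol:
            return False
    return True

# Toy instance: nominal solution r = (1, 0); the first component adjusts
# as z_1(u) = 1 - u_1 to keep (M z + q + T u)_1 = 0 for every u.
M = np.eye(2); q = np.array([-1.0, 2.0]); T = np.eye(2)
D = np.array([[-1.0, 0.0], [0.0, 0.0]]); r = np.array([1.0, 0.0])
print(aar_feasible_on_box(M, q, T, D, r, [-0.1, -0.1], [0.1, 0.1]))  # True
```

Replacing D by, e.g., the rule z_1(u) = 1 + u_1 breaks complementarity at the vertices and the check returns False.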
We close this section by briefly introducing the following notation. Let A ∈ R^{m×n}, b ∈ R^m, and index sets I ⊆ [m] := {1, ..., m} as well as J ⊆ [n] be given. Then, A_{I,J} ∈ R^{|I|×|J|} denotes the submatrix of A consisting of the rows indexed by I and the columns indexed by J. Moreover, b_I denotes the subvector with components specified by the entries of I. If I = J, we also write A_I instead of A_{I,I}.

Main Results
In this section, we state and prove our two main results. The first one is a full characterization of AAR solutions of robust LCPs.

Theorem 1. Let B = {v_1, ..., v_ℓ} be a basis of lin(U) and let z(u) = Du + r with D ∈ R^{n×k} and r ∈ R^n satisfy z(u) ≥ 0 as well as M z(u) + q + T u ≥ 0 for all u ∈ U. Moreover, set I := {i ∈ [n] : r_i > 0}. Then, z is an AAR solution if and only if D and r satisfy the conditions

M_{I,•} r + q_I = 0 (5)

and

(M_{I,•} D + T_{I,•}) v_j = 0 for all j ∈ [ℓ]. (6)
Proof. First, let z(u) = Du + r be an AAR solution. Then, r is a nominal solution (as 0 ∈ U) and therefore r satisfies M_{I,•} r + q_I = 0, i.e., (5) is fulfilled. For every v_j, j ∈ [ℓ], there exists a scalar δ_j > 0 such that δ_j v_j ∈ U and δ_j D_{I,•} v_j + r_I > 0 holds. Thus, for every j ∈ [ℓ], complementarity implies that the AAR solution z satisfies

0 = M_{I,•}(δ_j D v_j + r) + q_I + δ_j T_{I,•} v_j = δ_j (M_{I,•} D + T_{I,•}) v_j + (M_{I,•} r + q_I) = δ_j (M_{I,•} D + T_{I,•}) v_j,

where we used (5) for the second equality. Thus, z satisfies (6).
Let now D and r satisfy (5) and (6). From 0 ∈ relint(U), it follows that for all u ∈ U there exists an ε > 0 such that −εu ∈ U. Hence, nonnegativity of z(u) and z(−εu) yields D_{i,•} u ≥ 0 as well as −ε D_{i,•} u ≥ 0, i.e., z_i(u) = D_{i,•} u = 0, for all i ∈ [n] with r_i = 0 and all u ∈ U. On the other hand, every u ∈ U can be written as a linear combination u = Σ_{j=1}^ℓ λ_j v_j with λ_j ∈ R. Hence,

M_{I,•}(Du + r) + q_I + T_{I,•} u = (M_{I,•} D + T_{I,•}) u = Σ_{j=1}^ℓ λ_j (M_{I,•} D + T_{I,•}) v_j = 0

holds, where we used (5) for the first and (6) for the last equality. Therefore, z(u) = Du + r fulfills complementarity and is an AAR solution due to the additional assumptions of the theorem.
The last theorem gives a rather abstract characterization of AAR solutions. For arbitrary convex and compact uncertainty sets, working with this characterization might be difficult. However, it can be put to practical use in more specific settings. This is what we do in our second main result on polyhedral uncertainty sets, where we use the characterization of the last theorem to show that affinely adjustable robust solutions are exactly the solutions of a properly chosen MILP.
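The characterization itself is cheap to verify for a given candidate rule. The sketch below checks the two conditions of Theorem 1, assuming they take the form M_{I,•} r + q_I = 0 and (M_{I,•} D + T_{I,•}) v_j = 0 as used in the proof; the helper name and the toy data are our own illustrations.

```python
import numpy as np

def check_conditions(M, q, T, D, r, basis, tol=1e-9):
    """Check the two conditions of Theorem 1 for z(u) = D u + r, with
    I = {i : r_i > 0}:
    (5)  M_{I,.} r + q_I = 0,
    (6)  (M_{I,.} D + T_{I,.}) v_j = 0 for every basis vector v_j of lin(U)."""
    I = r > tol
    if not np.allclose(M[I] @ r + q[I], 0.0, atol=tol):
        return False
    return all(np.allclose((M[I] @ D + T[I]) @ v, 0.0, atol=tol) for v in basis)

# Toy instance with I = {1}: row 1 of M D + T vanishes, so (6) holds.
M = np.eye(2); q = np.array([-1.0, 2.0]); T = np.eye(2)
D = np.array([[-1.0, 0.0], [0.0, 0.0]]); r = np.array([1.0, 0.0])
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # lin(U) = R^2
print(check_conditions(M, q, T, D, r, basis))          # True
```

Perturbing r so that both components are positive enlarges I and violates (5), and the check returns False.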
Theorem 2. Let U = {u ∈ R^k : Θu ≥ ζ} with Θ ∈ R^{g×k} and ζ ∈ R^g and let B = {v_1, ..., v_ℓ} be a basis of lin(U). Furthermore, let b ∈ R be sufficiently large and consider the mixed-integer linear feasibility problem of finding x ∈ {0, 1}^n, D ∈ R^{n×k}, r ∈ R^n, and A, C ∈ R^{g×n} such that

A ≥ 0, C ≥ 0, (7a)
r_i ≤ b x_i, i ∈ [n], (7b)
M_{i,•} r + q_i ≤ b(1 − x_i), i ∈ [n], (7c)
−b(1 − x_i) ≤ (M_{i,•} D + T_{i,•}) v_j ≤ b(1 − x_i), i ∈ [n], j ∈ [ℓ], (7d)
ζ^⊤ A_{•,i} + r_i ≥ 0, i ∈ [n], (7e)
Θ^⊤ A_{•,i} = (D_{i,•})^⊤, i ∈ [n], (7f)
ζ^⊤ C_{•,i} + M_{i,•} r + q_i ≥ 0, i ∈ [n], (7g)
Θ^⊤ C_{•,i} = (M_{i,•} D + T_{i,•})^⊤, i ∈ [n], (7h)
D_{i,•} = 0, i ∈ [h]. (7i)

If (7) is feasible, it yields an AAR solution of the form z(u) = Du + r of (4). If it is infeasible, no AAR solution exists.
Proof. We show that z(u) = Du + r is an AAR solution if and only if there exist x, A, C such that x, D, r, A, C solve (7). We start by proving complementarity of the solutions. Let z(u) = Du + r be an AAR solution. We define I := {i ∈ [n] : r_i > 0} as well as x_i = 1 for all i ∈ I and x_i = 0 for all i ∈ [n] \ I. Then, Theorem 1 implies that x, D, and r satisfy the constraints (7b)-(7d) for sufficiently large b. On the other hand, if x, D, r, A, C satisfy the conditions (7b)-(7d), then r_i > 0 implies x_i = 1 and thus D and r fulfill the conditions (5) and (6) of Theorem 1.
It remains to consider the nonnegativity constraints of (4). First, we prove nonnegativity of the solution, i.e., Du + r ≥ 0 for all u ∈ U, if and only if there exists a matrix A such that D, r, and A satisfy (7e) and (7f). For all i ∈ [n], we observe that D_{i,•} u + r_i ≥ 0 holds for all u ∈ U if and only if min_{u∈U} {D_{i,•} u + r_i} ≥ 0. We now employ LP duality and obtain that this is equivalent to the existence of a vector a ∈ R^g with a ≥ 0 such that ζ^⊤ a + r_i ≥ 0 and Θ^⊤ a = (D_{i,•})^⊤. The nonnegative matrix A ∈ R^{g×n} then contains the vectors a as columns.
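This duality step can be made concrete with a small numerical experiment: we solve the primal minimum over U with scipy.optimize.linprog and exhibit a dual certificate a for one row. The toy data (a box U and one row D_{i,•}, r_i) are our own assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# One row of the argument: min_{u in U} D_{i,.} u + r_i >= 0 holds iff a
# dual certificate a >= 0 with Theta^T a = (D_{i,.})^T and
# zeta^T a + r_i >= 0 exists. Here U = [-1, 1]^2 written as Theta u >= zeta.
Theta = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
zeta = np.array([-1.0, -1.0, -1.0, -1.0])
D_i = np.array([0.5, -0.5]); r_i = 1.0

# Primal: linprog minimizes and expects A_ub u <= b_ub, so we negate
# Theta u >= zeta; note that u must be declared free.
res = linprog(c=D_i, A_ub=-Theta, b_ub=-zeta, bounds=[(None, None)] * 2)
print(res.fun + r_i)                   # minimum attained at u = (-1, 1): 0.0

# A hand-picked dual certificate for this row:
a = np.array([0.5, 0.0, 0.0, 0.5])
print(np.allclose(Theta.T @ a, D_i))   # True
print(zeta @ a + r_i)                  # 0.0, matching the primal minimum
```

Strong duality makes the dual objective ζ^⊤ a meet the primal minimum, which is exactly why (7e) and (7f) capture robust nonnegativity without any quantification over u.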
Next, we show that M Du + M r + q + T u ≥ 0 holds for all u ∈ U if and only if there exists a matrix C such that D, r, and C satisfy (7g) and (7h). This is analogous to the previous step: for every i ∈ [n], the inequality (M_{i,•} D + T_{i,•}) u + M_{i,•} r + q_i ≥ 0 holds for all u ∈ U if and only if there exists a vector c ∈ R^g with c ≥ 0 such that ζ^⊤ c + M_{i,•} r + q_i ≥ 0 and Θ^⊤ c = (M_{i,•} D + T_{i,•})^⊤. The nonnegative matrix C ∈ R^{g×n} then contains the vectors c as columns.
Finally, the remaining constraint (7i) enforces that the first h variables are non-adjustable.
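To illustrate Theorem 2, the feasibility MILP can be assembled for a tiny instance with scipy.optimize.milp. The exact constraint layout below is our assumption based on the proof (big-M constraints linking the binary x to conditions (5) and (6), and the dual certificates A and C for the two nonnegativity constraints); the instance data are our own.

```python
# Sketch of the MILP of Theorem 2 for n = k = 1, M = [1], q = [1],
# T = [1], and U = [-0.5, 0.5] = {u : Theta u >= zeta} with
# Theta = (1, -1)^T, zeta = (-0.5, -0.5), basis v_1 = 1, and h = 0.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

b = 10.0                               # big-M constant
inf = np.inf
# Variable vector w = (x, D, r, a1, a2, c1, c2).
rows = [
    ([-b, 0, 1, 0, 0, 0, 0],    -inf, 0.0),    # (7b)  r <= b x
    ([b, 0, 1, 0, 0, 0, 0],     -inf, b - 1),  # (7c)  M r + q <= b (1 - x)
    ([b, 1, 0, 0, 0, 0, 0],     -inf, b - 1),  # (7d)  (M D + T) v_1 <= b (1 - x)
    ([b, -1, 0, 0, 0, 0, 0],    -inf, b + 1),  # (7d)  -(M D + T) v_1 <= b (1 - x)
    ([0, 0, 1, -0.5, -0.5, 0, 0], 0.0, inf),   # (7e)  zeta^T a + r >= 0
    ([0, -1, 0, 1, -1, 0, 0],    0.0, 0.0),    # (7f)  Theta^T a = D
    ([0, 0, 1, 0, 0, -0.5, -0.5], -1.0, inf),  # (7g)  zeta^T c + M r + q >= 0
    ([0, -1, 0, 0, 0, 1, -1],    1.0, 1.0),    # (7h)  Theta^T c = M D + T
]
A_mat = np.array([row for row, _, _ in rows], dtype=float)
lo = np.array([l for _, l, _ in rows]); up = np.array([u for _, _, u in rows])
cons = LinearConstraint(A_mat, lo, up)
bounds = Bounds([0, -inf, 0, 0, 0, 0, 0], [1, inf, inf, inf, inf, inf, inf])
res = milp(c=np.zeros(7), constraints=cons,
           integrality=[1, 0, 0, 0, 0, 0, 0], bounds=bounds)
D, r = res.x[1], res.x[2]
# Sanity check: z(u) = D u + r solves the LCP for u in {-0.5, 0, 0.5}.
for u in (-0.5, 0.0, 0.5):
    z = D * u + r
    w = z + 1.0 + u                    # M z + q + T u
    assert z >= -1e-6 and w >= -1e-6 and abs(z * w) <= 1e-6
print(res.success)                     # True
```

For this instance the MILP forces x = 0, D = 0, and r = 0, i.e., the constant rule z(u) = 0, which is robustly feasible since M z + q + T u = 1 + u ≥ 0.5 on U.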
Remark 1. The linear hull lin(U) of the uncertainty set U can be computed in polynomial time if U is a polyhedron, i.e., if U = {u ∈ R^k : Θu ≥ ζ} as in Theorem 2. To this end, we maximize once in every direction Θ_{j,•}, j ∈ [g], over U and check whether the optimal value is larger than ζ_j. If it is equal to ζ_j, we know ζ_j = 0 due to 0 ∈ relint(U) and the corresponding inequality constraint can be replaced by an equality constraint. We obtain the representation

U = {u ∈ R^k : Θ̂ u ≥ ζ̂, Φ u = 0},

where Φ collects the rows of Θ whose inequalities hold with equality on all of U and where Θ̂ and ζ̂ collect the remaining rows of Θ and entries of ζ. A basis of lin(U) is then given by a basis of ker(Φ).
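The procedure of Remark 1 can be sketched directly with g linear programs and a null-space computation; the toy set U (with u_1 fixed to 0 and u_2 ∈ [-1, 1], so that lin(U) is one-dimensional) is our own illustration.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import null_space

# U = {u in R^2 : Theta u >= zeta} with u_1 fixed to 0 and u_2 in [-1, 1],
# so lin(U) should be span{(0, 1)}.
Theta = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
zeta = np.array([0.0, 0.0, -1.0, -1.0])

always_active = []
for j in range(Theta.shape[0]):
    # Maximize Theta_{j,.} u over U (linprog minimizes, so negate).
    res = linprog(c=-Theta[j], A_ub=-Theta, b_ub=-zeta,
                  bounds=[(None, None)] * 2)
    if abs(-res.fun - zeta[j]) <= 1e-9:    # inequality is tight on all of U
        always_active.append(j)

Phi = Theta[always_active]
basis = null_space(Phi)     # columns span lin(U) = ker(Phi)
print(basis.shape[1])       # 1
```

Here the first two inequalities (u_1 ≥ 0 and −u_1 ≥ 0) are detected as always active, so Φ pins u_1 to zero and ker(Φ) is spanned by (0, 1).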
Let us also comment on a difference to the setting considered in [5]. There, the submatrix M_I has to be invertible for an AAR solution to exist if all entries of q are uncertain; cf. Theorem 4.5 in [5]. This is not the case in our setting, as the following example shows.

Example 1. Consider the uncertain LCP given by
Then, 1 is an AAR solution with I = {1, 2}, but the matrix M is not invertible.
Finally, note that if T is the identity matrix and U is a box, the MILP (7) is equivalent to the MILP of Theorem 4.7 in [5].

Remarks and Extensions
In this section, we comment on a special case, namely the one in which M is positive semidefinite, and several possible extensions.

4.1. Positive Semidefinite M. We first consider the case that the matrix M is positive semidefinite. In the following, we show that in this setting an AAR solution can be found in polynomial time. The same result was shown for box uncertainties in [5] with similar arguments. For positive semidefinite M, Theorem 3.1.7(a) in [10] states that

y^⊤(M z + q) = z^⊤(M y + q) = 0 (8)

holds for any y, z ∈ SOL(q, M), where SOL(q, M) denotes the set of solutions of the LCP(q, M). Let

P := {i ∈ [n] : z_i > 0 for some z ∈ SOL(q, M)}.

Due to (8), every nominal solution r ∈ SOL(q, M) satisfies M_{P,•} r + q_P = 0. Therefore, every AAR solution has to satisfy

M_{P,•}(Du + r) + q_P + T_{P,•} u = 0

for all u ∈ U, as otherwise there would exist a u′ ∈ U with M_{i,•}(Du′ + r) + q_i + T_{i,•} u′ < 0 for some i ∈ P. Thus, the set I in Theorem 1 can be replaced by P and the MILP (7) can be simplified to an LP, as we do not need the binary variables anymore. Furthermore, Theorem 3.1.7(c) in [10] states that SOL(q, M) is given by

SOL(q, M) = {y ∈ R^n : y ≥ 0, M y + q ≥ 0, q^⊤(y − z) = 0, (M + M^⊤)(y − z) = 0},

where z ∈ SOL(q, M) is an arbitrary solution. Such a solution z can be found by solving a single convex-quadratic optimization problem. With this polyhedral description of SOL(q, M), the set P can be obtained by solving n linear programs in which z_i, i ∈ [n], is maximized over SOL(q, M), followed by checking whether the optimal value is strictly positive. This implies that P can be computed in polynomial time and, hence, we can find an AAR solution in polynomial time if M is positive semidefinite.

4.2. Discrete Uncertainty Sets. Next, we briefly discuss discrete uncertainty sets. One can construct examples in which, for every uncertainty realization in the discrete set, a solution exists, whereas no solution exists for some realizations in the convex hull of the uncertainty set. This is in contrast to results for classic robust linear optimization, where one can always replace the uncertainty set by its convex hull. The reason for this behavior can be explained with classic LCP theory. In the literature, the cone of vectors q for which the LCP(q, M) with a given matrix M has a solution is usually denoted by K(M), i.e.,

K(M) := {q ∈ R^n : SOL(q, M) ≠ ∅}.

In general, K(M) is not convex, and hence the convex hull of some points that lie in K(M) is not necessarily contained in K(M). However, K(M) is convex if and only if M is a so-called Q_0-matrix, cf. Proposition 3.2.1 in [10], and we obtain the following result.
Corollary 1. Suppose that M is a Q_0-matrix. Then, the uncertain LCP has a solution for all u ∈ conv(U) if it has a solution for all u ∈ U.

4.3. Decision-Dependent Uncertainty Sets. The MILP (7) can be extended to cover simple decision-dependent uncertainty sets. To this end, consider an uncertainty set U(r) that depends on the chosen nominal solution r via a shift Ψr of the right-hand side, i.e., U(r) = {u ∈ R^k : Θu ≥ ζ + Ψr} for some given matrix Ψ ∈ R^{g×n}. If the deviation caused by Ψr is not too large, in some cases the linear hull does not change. Hence, in these cases we only have to replace the constraints (7e) and (7g) by their respective quadratic versions that include the terms (Ψr)^⊤ A_{•,i} and (Ψr)^⊤ C_{•,i}, respectively. We leave the detailed study of such situations for future work.

4.4. Mixed LCPs. Finally, we discuss so-called mixed LCPs. These problems consist in finding z ∈ R^n and y ∈ R^m such that

V z + W y + p = 0, 0 ≤ z ⊥ M z + N y + q ≥ 0. (9)

We refer to [10] for some source problems. We now briefly discuss the necessary adaptions of the MILP (7) for computing an AAR solution of an uncertain version of the mixed LCP (9). As before, we assume that q is affected by uncertainty in the form of q(u) = q + T u, u ∈ U, and that z is affinely adjustable, i.e., z(u) = Du + r. Several parameters might be uncertain in the case of mixed LCPs. In the simplest case, the matrices V, W, M, and N are certain, y is non-adjustable, and only p(u) = p + P u is uncertain for some given P ∈ R^{m×k} and u ∈ U. In this case, the constraints (7c) and (7g) have to be extended by the term N y. Moreover, D, r, and y have to satisfy the resulting slightly adapted version of the MILP (7) as well as the additional constraints

V r + W y + p = 0, (V D + P) v_j = 0, j ∈ [ℓ].

This also includes the special case in which all additional parameters p, V, W, M, and N are certain and y is non-adjustable. In this case, the second of the above constraints reduces to V D v_j = 0 for all j ∈ [ℓ]. In the case of an adjustable y, i.e., y(u) = Eu + s, the MILP has to be adapted accordingly in a similar way. Additionally, z and y have to satisfy the additional constraints

V r + W s + p = 0, (V D + W E + P) v_j = 0, j ∈ [ℓ].
Compared to the classic LCP, on the one hand, we gain additional freedom by being allowed to choose more variables, while, on the other hand, there are additional constraints, some of which might be quite restrictive.