A branch-and-prune algorithm for discrete Nash equilibrium problems

We present a branch-and-prune procedure for discrete Nash equilibrium problems with a convex description of each player’s strategy set. The derived pruning criterion does not require player convexity, but only strict convexity of some player’s objective function in a single variable. If satisfied, it prunes choices for this variable by stating activity of certain constraints. This results in a synchronous branching and pruning method. An algorithmic implementation and numerical tests are presented for randomly generated instances with convex polyhedral strategy sets and convex quadratic as well as non-convex quadratic objective functions.


Introduction
The formulation of the Nash equilibrium for an n-person game by Nash in 1950 and 1951 was a landmark in the economic sciences and is still a key model in game theory [11,12]. In the setting of this game, finitely many players choose their individual strategies independently, but their payoffs depend on the strategies of all players. In the absence of coalitions, each player aims to maximize her payoff given the other players' strategies. A situation in which no player has an incentive to unilaterally deviate from her strategy defines the famous Nash equilibrium, and finding such a situation is the so-called Nash equilibrium problem (NEP).
In recent years, the numerical solution of NEPs has gained research interest, and there are a couple of algorithms tackling this issue. However, although integer optimization is applied in many fields and intensely studied, there are only a few attempts to solve Nash equilibrium problems with integer variables. Sagratella's publication [14] identifies this as an "important gap in the literature". The latter paper proposes a branch-and-prune method to compute all solutions of NEPs with box-constrained discrete strategy sets. Subsequently, the theory was extended to generalized NEPs with linear coupling constraints and mixed-integer variables [15]. More recently, there were several publications on computing Nash equilibria for a special class of mixed-integer NEPs, the so-called integer programming games (IPG), which were first introduced in [9]. In IPGs, the feasible set of each player consists of linear constraints in her private variables, which are partially integrality-constrained, and the payoff functions are only required to be continuous. For these general IPGs, [3] presents an algorithm for the computation of Nash equilibria based on algorithms for strategic games in normal form. Furthermore, [2] introduced another subclass of mixed-integer NEPs, namely reciprocally-bilinear games (RBG), where the closure of the convex hull of each player's feasible set is required to be a polyhedron and the payoff function is bilinear in her own and the rivals' strategies. We, in turn, extend Sagratella's framework beyond the box-constrained case and propose a novel branch-and-prune approach for discrete NEPs which takes convexity of the strategy sets explicitly into account.
We introduce the problem and describe preliminary results in Sect. 2. In Sect. 3, we derive a pruning criterion for NEPs with convex strategy sets. Section 4 provides an algorithmic application of the criterion for convex polyhedral strategy sets with finite upper and lower bounds on each variable. In Sect. 5 we apply these findings numerically to discrete Nash equilibrium problems with convex polyhedral strategy sets and convex quadratic as well as non-convex quadratic objective functions. To the best of our knowledge, this is the first implemented and tested branch-and-prune procedure for this problem class. Finally, we wrap up our insights in Sect. 6.

Problem description and preliminary results
We study discrete Nash games with N players. In this setting, each player ν = 1, …, N aims to solve the optimization problem

Q_ν(x^{−ν}):  min_{x^ν} θ_ν(x^ν, x^{−ν})  s.t.  x^ν ∈ X_ν := {x^ν ∈ Z^{n_ν} : g^ν(x^ν) ≤ 0}.

The vector x^ν lies in R^{n_ν} and represents all variables which are controlled by the ν-th player. The vector of all decision variables x = (x^1, …, x^N) ∈ R^n then is of dimension n = ∑_{ν=1}^N n_ν, and the vector x^{−ν} = (x^1, …, x^{ν−1}, x^{ν+1}, …, x^N) ∈ R^{n−n_ν} contains all decision variables except player ν's. The notation x = (x^ν, x^{−ν}) emphasizes those variables, but does not reorder the entries of x. The objective function θ_ν : X → R has the domain X := X_1 × … × X_N, hence the player's objective function value depends on her own strategy as well as on the other players' strategies. The discrete feasible set X_ν is called the ν-th player's strategy set. It is defined by the function g^ν : R^{n_ν} → R^{m_ν}.
In this context, the Nash equilibrium is the most important and commonly used solution concept. A vector x* is called a Nash equilibrium of this game if, for each ν = 1, …, N, the vector x^{*,ν} is an optimal point of Q_ν(x^{*,−ν}), i.e.,

x* ∈ X and θ_ν(x*) = θ_ν(x^{*,ν}, x^{*,−ν}) ≤ θ_ν(x^ν, x^{*,−ν}) for all x^ν ∈ X_ν

hold. The resulting Nash equilibrium problem may hence be formulated as

NEP: Find x* such that x^{*,ν} is an optimal point of Q_ν(x^{*,−ν}) for all ν = 1, …, N.
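When the strategy sets are small, the defining condition of a Nash equilibrium can be checked by plain enumeration. The following sketch uses a hypothetical two-player game with scalar strategies and loss functions chosen purely for illustration (they are not the ones from this paper); it enumerates all joint strategies and keeps those from which no player can strictly improve by a unilateral deviation.

```python
from itertools import product

def is_equilibrium(x, thetas, strategy_sets):
    # x is an equilibrium iff no player can strictly lower her loss
    # by a unilateral deviation inside her own strategy set
    for nu in range(len(thetas)):
        for y in strategy_sets[nu]:
            x_dev = list(x)
            x_dev[nu] = y
            if thetas[nu](x_dev) < thetas[nu](x):
                return False
    return True

def all_equilibria(thetas, strategy_sets):
    # enumerate the joint strategy space and filter by the equilibrium test
    return [x for x in product(*strategy_sets)
            if is_equilibrium(x, thetas, strategy_sets)]

# hypothetical game: player 1 wants to match player 2, player 2 wants to play 1
thetas = [lambda x: (x[0] - x[1]) ** 2, lambda x: (x[1] - 1) ** 2]
sets_ = [[0, 1, 2], [0, 1, 2]]
eqs = all_equilibria(thetas, sets_)   # the unique equilibrium is (1, 1)
```

Of course, such enumeration is exponential in the number of variables; the point of the branch-and-prune method developed below is precisely to avoid it.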
Our branch-and-prune approach for solving NEP will use its continuous relaxation. There, each player ν solves the continuous problem Q̄_ν(x^{−ν}), in which the integrality condition is dropped from the strategy set:

X̄_ν := {x^ν ∈ R^{n_ν} : g^ν(x^ν) ≤ 0}.

The domain X̄ := X̄_1 × … × X̄_N of the objective functions is defined analogously, and a vector x̄ is a Nash equilibrium of the relaxed problem if x̄^ν solves Q̄_ν(x̄^{−ν}) for all ν = 1, …, N.
From now on, we use the following assumption.
Assumption 2.1 All entries of the function g ν are convex for each player ν = 1, . . ., N .
Note that we will state a stronger assumption on the strategy sets for the algorithmic implementation in Sect. 4. Clearly, under Assumption 2.1, each player's relaxed strategy set X̄_ν is convex. If additionally each player's objective function θ_ν is convex with respect to x^ν, the relaxed problem is called player convex, which is a standard assumption for continuous NEPs. There are several possibilities to characterize and compute solutions of the relaxed problem under player convexity. For example, if each X̄_ν satisfies the Slater condition, a vector x̄ is a Nash equilibrium of the relaxed problem if and only if x̄^ν is a Karush-Kuhn-Tucker (KKT) point of Q̄_ν(x̄^{−ν}) for each player (see [5, Prop. 1]). Other prominent solution techniques for the relaxed problem are the variational inequality (VI) and the Nikaido-Isoda (NI) approaches [6]. Unfortunately, none of these approaches carry over to the discrete problem NEP.
For the KKT and VI approaches this is due to the missing convexity of the discrete strategy sets X_ν. The NI-function, on the other hand, may be defined for NEP, but it turns out to be structurally nonsmooth, nonconvex and discontinuous, and thus hard to treat algorithmically [15]. We mention that the minimization of the NI-function of the NEP's "convexified instance", as introduced in [8], is in some cases algorithmically tractable. However, we will not follow this approach, because we try to impose mild assumptions on the objective functions, which makes the required computation of their convex envelopes rather impractical. Instead, we will formulate an approach motivated by integer optimization techniques, where branch-and-bound algorithms are commonly used. Let us briefly recap the three key aspects of branch-and-bound for integer optimization, namely relaxation, branching and bounding. Firstly, branch-and-bound exploits that it is easier to compute an optimal point of the continuous relaxation and that, if this point is integer, it also solves the integer optimization problem over this set. Secondly, if the obtained solution is not integer, it is removed by branching the feasible set. Thirdly, it is essential that the minimal value over the relaxation of some subset of the feasible set serves as a lower bound for the objective values of the integer feasible points in this subset. Thus one can discard a subset if the minimal value over its relaxation is larger than the objective value of the best known integer solution in the whole feasible set. This feature is called bounding.
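For a single player's integer program, the interplay of relaxation, branching and bounding can be sketched as follows. The one-dimensional setting and the closed-form relaxation solver are simplifying assumptions for illustration only.

```python
import math

def branch_and_bound(lo, hi, f, relax_min):
    # minimize a convex f over the integers in [lo, hi];
    # relax_min(a, b) returns the continuous minimizer of f on [a, b]
    best_x, best_val = None, math.inf
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if a > b:                      # empty subinterval
            continue
        x = relax_min(a, b)            # relaxation
        if f(x) >= best_val:           # bounding: relaxed value is a lower bound
            continue
        if float(x).is_integer():      # integer relaxed solution is optimal here
            best_x, best_val = int(x), f(x)
        else:                          # branching around the fractional point
            stack += [(a, math.floor(x)), (math.ceil(x), b)]
    return best_x, best_val

# toy instance: minimize (x - 2.4)^2 over the integers 0..5
best_x, best_val = branch_and_bound(
    0, 5, lambda x: (x - 2.4) ** 2, lambda a, b: min(max(2.4, a), b))
```

Here the relaxed minimizer 2.4 is fractional, the interval is branched into [0, 2] and [3, 5], and bounding with the incumbent finally yields the integer minimizer 2.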
The following example will help to illustrate how, if at all, relaxation, branching and bounding carry over to the discrete problem NEP.

Example 2.2
For two players, each one controlling a scalar variable, let us consider an NEP with given objective functions and strategy sets as well as their continuous relaxations. Since for any fixed x^{−ν} the loss function of each player ν is convex quadratic, the relaxed problem is player convex. For any x^2 ∈ X̄_2, the unconstrained minimal point of θ_1(·, x^2), given by ∇_{x^1} θ_1(x^1, x^2) = 0, lies in X̄_1 and is thus the best response of player 1 to x^2. For player 2, on the other hand, the unconstrained minimal point is the best response only for part of the strategies x^1. Figure 1 illustrates that, thus, exactly the two points (0, 2) and (8/3, 0) solve the relaxed problem.
In contrast, the discrete problem NEP possesses exactly the three solutions (0, 2), (1, 1) and (3, 0). In particular, although the point (1, 1) lies close to the solution (0, 2) of NEP as well as of the relaxed problem, it is an equilibrium. Moreover, as opposed to the point (3, 0), it may not be obtained by rounding the entries of any of the solutions of the relaxed problem.
Regarding relaxation, in Example 2.2 the continuously relaxed problem is easy to solve, and its single discrete solution also solves NEP. Also in general, the KKT, VI or NI methods can be employed to solve a player convex relaxed problem with differentiable defining functions, and the following result from [14, Prop. 2.1] guarantees that discrete solutions of the continuously relaxed problem solve the original discrete problem.

Proposition 2.3 Any solution x̄ ∈ Z^n of the continuously relaxed problem also solves NEP.
Note that this result also holds without player convexity, but that in this case the relaxed problem may not be easy to solve, even under Assumption 2.1. In Sect. 4 we will explain how we deal with non-convex objective functions. This means that with regard to relaxation we are in a situation analogous to integer optimization. Concerning the branching step, we can likewise branch the strategy sets if the obtained solution is not integer, so that this situation is analogous as well.
In contrast, the bounding step poses some difficulties. Firstly and most obviously, there are multiple objective functions. Equilibrium points are required to be minimal for each player's objective function with respect to the other players' decisions. However, we are interested in a single criterion telling us whether there may exist Nash equilibria on a given subset of the strategy space. More specifically, the bounding idea relies on some function p on the joint strategy set whose minimal points coincide with the solutions of NEP. For the continuous relaxation such functions can be obtained by the VI and NI approaches [4,13] but, as mentioned above, the latter are impossible or hard to apply in the discrete framework.
Under the additional assumption of NEP being a potential game [10] there exists a potential function p : R^n → R with

p(x^ν, x^{−ν}) − p(y^ν, x^{−ν}) = θ_ν(x^ν, x^{−ν}) − θ_ν(y^ν, x^{−ν}) for all x^ν, y^ν ∈ X_ν,

for all ν = 1, …, N and all x^{−ν} ∈ X_{−ν}. It is straightforward to show that then any optimal point of the integer program

P:  min_x p(x)  s.t.  x ∈ X

solves NEP. However, in general not all solutions of NEP are optimal for P, as required for a bounding procedure relying on p. In fact, Example 2.2 provides a potential game, but the potential values p((0, 2)) = −6, p((1, 1)) = −7 and p((3, 0)) = −10.5 of the three solutions of NEP are not identical. In any case, potential games form only a small subclass of NEPs, and their restrictive assumptions cover, e.g., cases where all players unconsciously minimize the same objective function. We, on the other hand, aim to handle non-potential games. Since, if the solution of the relaxed problem does not happen to be integer, we do not seem to be able to draw any conclusions for discarding subsets of X by a bounding procedure, we will instead follow the branch-and-prune approach from [14,15]. There, relations between the equilibria of the discrete and the relaxed problem are exploited algorithmically. Regarding such relations, Example 2.2 illustrates that NEP may possess more solutions than its relaxation and that not every solution of NEP may be obtained by rounding the fractional components of a solution of the relaxation. There are also examples where NEP possesses fewer solutions than its relaxation. In particular, the solvability of the relaxed problem does not entail the solvability of NEP (see [14, Ex. 2]). Additional requirements for the latter are given in [14, Cor. 4.4].
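The defining property of an exact potential, namely that every unilateral deviation changes the deviating player's objective by exactly the change in p, can be verified by enumeration on small discrete games. The game and potential below are hypothetical examples constructed for illustration, not the ones from Example 2.2.

```python
from itertools import product

def is_exact_potential(p, thetas, strategy_sets):
    # p is an exact potential if every unilateral deviation changes the
    # deviating player's objective by exactly the change in p
    for x in product(*strategy_sets):
        for nu in range(len(thetas)):
            for y in strategy_sets[nu]:
                x_dev = list(x)
                x_dev[nu] = y
                d_theta = thetas[nu](x_dev) - thetas[nu](x)
                if abs(d_theta - (p(x_dev) - p(x))) > 1e-9:
                    return False
    return True

# hypothetical two-player game with an exact potential: each theta_nu
# consists of a common bilinear term plus a private quadratic term
theta1 = lambda x: x[0] * x[1] + x[0] ** 2
theta2 = lambda x: x[0] * x[1] + x[1] ** 2
p = lambda x: x[0] * x[1] + x[0] ** 2 + x[1] ** 2
```

Dropping the private terms from p destroys the potential property, which the checker detects.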

Theoretical foundation
The purpose of this section is to define a pruning criterion for discrete NEPs under Assumption 2.1. Moreover, for each player ν = 1, …, N we assume g^ν to be continuously differentiable and θ_ν to be twice continuously differentiable. We use the term pruning criterion to refer to criteria under which we can exclude parts of a player's strategy set because they are shown not to contain any Nash equilibrium. With effective pruning, we can substantially reduce the search region in order to compute Nash equilibria more efficiently.
The theorem we present in this section generalizes Proposition 3.1 from [14] (see "Appendix A"). Instead of boxes as in [14], it treats arbitrary convexly described strategy sets. It provides a set of verifiable conditions under which we are able to prune choices for values of single variables from some player's strategy set. We shall also motivate the underlying geometrical concept.
Our approach uses local approximations of the continuously relaxed problem to infer properties of the discrete problem NEP. As opposed to [14,15] we use arbitrary continuous strategies, rather than only solutions of the relaxed problem, to obtain these approximations. This enables us to deal with non-convexities in the objective functions. For the approximations we employ the concept of the (outer) linearization cone of player ν's continuously relaxed strategy set,

L_≤(x̄^ν, X̄_ν) := {d^ν ∈ R^{n_ν} : ⟨∇g^ν_i(x̄^ν), d^ν⟩ ≤ 0 for all i ∈ I_0(x̄^ν)},

where I_0(x̄^ν) := {i ∈ {1, …, m_ν} : g^ν_i(x̄^ν) = 0} denotes the active index set. Under the convexity property of the strategy constraints from Assumption 2.1 it is straightforward to prove that any linearization cone for player ν provides an outer approximation of her relaxed strategy set in the following sense.
Lemma 3.1 Let g^ν_i be convex for i = 1, …, m_ν and let x̄^ν ∈ X̄_ν. Then we have

X̄_ν ⊆ x̄^ν + L_≤(x̄^ν, X̄_ν).

Theorem 3.2 Let Assumption 2.1 hold, let x̄ ∈ X̄ and, for an arbitrary player ν, let there exist an index i such that θ_ν is strictly convex with respect to x^ν_i. Then the following two statements hold:

(i) Let F^ν_i := ∇_{x^ν_i} θ_ν be convex, let F^ν_i(x̄) ≥ 0 and for each player μ = 1, …, N let

⟨∇_{x^μ} F^ν_i(x̄), d^μ⟩ ≥ 0 for all d^μ ∈ L_≤(x̄^μ, X̄_μ). (1)

Then any strategy x ∈ X for which q_x, defined by (q_x)^μ_j = x^μ_j for all (μ, j) ≠ (ν, i) and (q_x)^ν_i = x^ν_i − 1, is also feasible cannot be a solution of NEP.

(ii) Let F^ν_i := ∇_{x^ν_i} θ_ν be concave, let F^ν_i(x̄) ≤ 0 and for each player μ = 1, …, N let

⟨∇_{x^μ} F^ν_i(x̄), d^μ⟩ ≤ 0 for all d^μ ∈ L_≤(x̄^μ, X̄_μ). (2)

Then any strategy x ∈ X for which x̃, defined by x̃^μ_j = x^μ_j for all (μ, j) ≠ (ν, i) and x̃^ν_i = x^ν_i + 1, is also feasible cannot be a solution of NEP.
Proof In order to show that x is not a Nash equilibrium, we will show that player ν can choose a strictly better strategy.
On the one hand, if (i) holds and if we can show

θ_ν(q_x) < θ_ν(x), (3)

the assertion follows by the feasibility of q_x for the discrete NEP. The strict inequality holds because of the strict convexity of θ_ν in the component x^ν_i, this being the only component in which q_x and x differ. Hence, (3) follows when

F^ν_i(q_x) ≥ F^ν_i(x̄) + ∑_{μ=1}^N ⟨∇_{x^μ} F^ν_i(x̄), (q_x)^μ − x̄^μ⟩ ≥ 0 (4)

holds. Firstly, the decomposition of the inner product into the sum over the players comes from the defined notation and (q_x)^μ = x^μ for μ ≠ ν. Secondly, the left inequality follows from the convexity of F^ν_i. Thirdly, the non-negativity comes from (q_x)^μ − x̄^μ ∈ L_≤(x̄^μ, X̄_μ) by Lemma 3.1, so that every summand is non-negative by (1), and from F^ν_i(x̄) ≥ 0.
On the other hand, if (ii) holds, Eqs. (3) and (4) can be stated for x̃ instead of q_x with all requirements fulfilled. It remains to show that the chain of inequalities

F^ν_i(x̃) ≤ F^ν_i(x̄) + ∑_{μ=1}^N ⟨∇_{x^μ} F^ν_i(x̄), x̃^μ − x̄^μ⟩ ≤ 0

also holds. Firstly, the decomposition into the sum comes again from the defined notation and x̃^μ = x^μ for μ ≠ ν. Secondly, the left inequality is valid due to the concavity of F^ν_i, and the non-positivity comes from x̃^μ − x̄^μ ∈ L_≤(x̄^μ, X̄_μ), so that every summand is non-positive by (2).
To verbalize the statement of Theorem 3.2, we use the point x̄ to construct outer approximations of all players' complete strategy sets. This results in the outer approximation x̄ + L_≤(x̄, X̄) of X̄. If, on this whole set, some player's variable x^ν_i has a favorable impact on the objective function θ_ν when it is increased or decreased without the new point becoming infeasible, then this player can deviate and realize this positive impact, which is impossible in a Nash equilibrium. In other words, under the given assumptions, in a Nash equilibrium x the constructed deviation must result in an infeasible point. Figure 2 shows the two-dimensional strategy set X_ν of a discrete N-player game. Assume that for x̄ and i = 2 all requirements of Theorem 3.2(i) hold. Then there is always a positive impact on the ν-th player's objective function when she sets x^ν_2 to a lower value. As a result, e.g. x cannot be a Nash equilibrium, because q_x^ν is feasible and a better answer for player ν. In X_ν, the set of possible best answers, and thus the candidates for solutions of NEP, shrinks to the pairs of "minimum feasible" x^ν_2-values for any given x^ν_1. Roughly speaking, only integer points for which at least one constraint is "active", in the sense that x^ν_2 cannot be set to a lower value without changing other components of x^ν, can be Nash equilibria. This criterion will enable us to reduce the search space significantly in Sects. 4 and 5.
For the algorithmic exploitation of Theorem 3.2 we define linear optimization problems to check whether (1) and (2) are satisfied. Statement (1) is clearly valid if and only if the optimization problem

F(1):  min_d ∑_{μ=1}^N ⟨∇_{x^μ} F^ν_i(x̄), d^μ⟩  s.t.  d^μ ∈ L_≤(x̄^μ, X̄_μ), μ = 1, …, N,

has a non-negative optimal value v_{F(1)} ≥ 0. In the same way, (2) holds if and only if

F(2):  min_d −∑_{μ=1}^N ⟨∇_{x^μ} F^ν_i(x̄), d^μ⟩  s.t.  d^μ ∈ L_≤(x̄^μ, X̄_μ), μ = 1, …, N,

has a non-negative optimal value v_{F(2)} ≥ 0. By definition of the linearization cone, d^μ = 0 is always feasible for F(1) and F(2), so that the above optimal values are actually zero. If, on the other hand, there exists any direction d^μ with a negative objective value, the problems are unbounded, because the feasible set is a cone. Therefore, we only have to check whether these linear optimization problems are bounded in order to verify the statements. We remark that (1) and (2) could equivalently be expressed by writing the vectors ±∇_{x^μ} F^ν_i(x̄) in terms of cone coefficients with respect to the gradients of the active constraints. In view of possibly non-unique cone coefficients in the absence of an appropriate constraint qualification it may, however, be algorithmically challenging to determine these cone coefficients explicitly, so that we rather work with the above optimization formulation.
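The boundedness check for such conic linear programs can be sketched with a generic LP solver; here we use SciPy's linprog, which reports an unbounded model via a dedicated status code. The dense constraint format and the solver choice are implementation details for illustration, not part of the method above.

```python
import numpy as np
from scipy.optimize import linprog

def cone_lp_bounded(c, A_active):
    # minimize c @ d over the cone {d : A_active @ d <= 0};
    # d = 0 is feasible with value 0, so the LP is either bounded with
    # optimal value 0 or unbounded along some ray with c @ d < 0
    m, n = A_active.shape
    res = linprog(c, A_ub=A_active, b_ub=np.zeros(m),
                  bounds=[(None, None)] * n, method="highs")
    return res.status != 3   # SciPy status 3: problem is unbounded
```

For the one-dimensional cone {d : d ≥ 0}, minimizing d is bounded (with value 0), while minimizing −d is unbounded, so the test distinguishes exactly the two cases relevant for (1) and (2).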
We emphasize that we need the strict convexity of the objective function θ_ν in the component (ν, i) in order to apply Theorem 3.2. (Strict) convexity in single variables does not require convexity in all of player ν's variables x^ν, as required in player convexity. However, the additional assumption of player convexity might be helpful in the sense that it increases the likelihood of finding (ν, i)-components in which θ_ν is strictly convex.

Algorithmic application
In this section, we define a branch-and-prune procedure for discrete NEPs by employing the pruning criterion from the previous section. The branching method is a generalization of [14, Alg. 1]. It is defined in Sect. 4.1 and calls a pruning procedure, which we define in Sect. 4.2.

Branching method
Algorithm 1 shows the high-level approach for discrete NEPs with convexly constrained, bounded strategy sets. In most aspects, it coincides with [14, Alg. 1]. For better readability, we repeat each step of the method. In particular, we describe the adjustments that were necessary to integrate the novel pruning procedure. This procedure computes all equilibria of an instance NEP. Within the procedure, we maintain two lists. In the list E, we save all equilibria which were already detected. The other list L contains all strategy subsets which may contain additional equilibria. It is initialized with the whole joint strategy set X.
In each iteration of the while-loop a joint strategy set Y ⊆ X is taken from the list L. If the continuous relaxation Ȳ is empty, there are clearly no equilibria in this set and it can be discarded. Otherwise, a point x̄ ∈ Ȳ is computed. Here, the feasibility of x̄ for Ȳ is a minimum requirement but, given the target of finding solutions of NEP as quickly as possible, in view of Proposition 2.3 computing a solution x̄ of the continuously relaxed problem may be advantageous, depending on the effort for such a computation.
Afterwards, the first pruning and simultaneous branching can be started in line 7. In this step, Algorithm 1 differs from [14, Alg. 1], in which the pruning procedure only returns one set. Here, the pruning procedure returns a list of sets B = {B_1, …, B_k}. This disjunctive structure arises from the additional treatment of constraints other than bounds. We briefly name the assumptions for such a procedure.
Firstly, property (P1) ensures that we are not pruning any Nash equilibrium. Secondly, properties (P2) and (P3) are crucial in branching techniques in order not to allow additional points and to avoid that a point needs to be processed multiple times. Lastly, property (P4) is more technical. On the one hand, a pruning procedure could exclude x̄. On the other hand, we further process this point, so we need to know which of the subsets contains it. Algorithm 2 in the next subsection presents a procedure which satisfies (P1)-(P4) and is able to handle convex polyhedral strategy sets.
Starting in line 9, the second branching process depends on whether x̄ ∈ Z^n or not. If so, the vector is potentially an equilibrium of the discrete problem NEP and can, after verification, be appended to E, the list of all Nash equilibria. The equilibrium property can be verified by checking whether x̄^ν is a solution of Q_ν(x̄^{−ν}) for all ν = 1, …, N. More specifically, we need to solve these N integer (non-)convex programs and check whether their respective optimal values are attained at the given points x̄^ν (line 10). The appearing integer (non-)convex problems can be solved with techniques from mixed-integer (non-)convex optimization (see e.g. [1]). The efficient implementation of this step of course depends on the state of the art of available solvers. After knowing whether x̄ is a solution, we can release it and search the remaining feasible set for Nash equilibrium points. By Algorithm 3 ([14] and "Appendix B"), we obtain a partition of sets B^+ which covers all other possible equilibria. Any integer point x ∈ B_1 other than x̄ is in one of the sets from B^+. Additionally, these sets are pairwise disjoint subsets of B_1.
If otherwise x̄ ∉ Z^n, the branching step resembles the one common in integer optimization. One fractional component of x̄ is selected and two sets are added to the list L. In the first one, the value of this component is bounded to be greater than or equal to the nearest larger integer. In the second one, it is bounded to be less than or equal to the nearest smaller integer.
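This branching step can be sketched as follows, with bounds represented as simple per-variable intervals (a simplification of the general polyhedral sets used above).

```python
import math

def branch_on_fractional(bounds, x):
    # bounds: one (lower, upper) pair per variable; split the first
    # fractional component of x into a "down" and an "up" subproblem
    for i, xi in enumerate(x):
        if not float(xi).is_integer():
            lo, up = bounds[i]
            down = list(bounds)
            down[i] = (lo, math.floor(xi))       # <= nearest smaller integer
            up_set = list(bounds)
            up_set[i] = (math.ceil(xi), up)      # >= nearest larger integer
            return [down, up_set]
    return None   # x is integer: nothing to branch on
```

Since the two subintervals exclude the open interval between the neighboring integers, no integer point is lost and none is duplicated, in the spirit of properties (P1)-(P3).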
When the strategy sets X_1, …, X_N are bounded, the termination of Algorithm 1 is ensured because there are finitely many integer strategies. This property can be established by setting finite upper and lower bounds for each variable. Note that the efficiency of Algorithm 1 mainly depends on the effectiveness of the pruning procedure. All points which are not pruned will be enumerated and checked in line 10.

Algorithm 1: Branching Method
call Algorithm 3, with B_1 and x̄ as input, and obtain the list B^+

Pruning procedure for convex polyhedral strategy sets
We now define a pruning procedure for discrete Nash games where every player's strategy set X_ν can be characterized by linear inequalities, which is a special case of Assumption 2.1:

X_ν = {x^ν ∈ Z^{n_ν} : B^ν x^ν ≤ b^ν, l^ν ≤ x^ν ≤ u^ν}. (5)

With each player having m_ν inequality constraints, B^ν is an integer-valued (m_ν × n_ν)-matrix and b^ν ∈ Z^{m_ν} a vector. We will refer to the k-th row of B^ν as B^ν_k. The player's decision vector x^ν has explicitly defined lower and upper bounds l^ν ∈ Z^{n_ν} and u^ν ∈ Z^{n_ν}, respectively. We call a discrete strategy set X_ν convex polyhedral if the continuous relaxation X̄_ν of this strategy set is convex and polyhedral.
Previously, we determined conditions under which, in an equilibrium of NEP, it is not possible to increase or decrease the value of a variable to the next integer and remain feasible. Accordingly, at least one inequality (or the box restriction) must be active in the sense that the next integer value in one direction is not feasible anymore. We formalize this kind of activity for a linear constraint.

Definition 4.2 For a feasible point x ∈ X the inequality k is called (ν, i)⁻-active if it is violated by the point q_x, defined as (q_x)^μ_j = x^μ_j for (μ, j) ≠ (ν, i) and (q_x)^ν_i = x^ν_i − 1, and (ν, i)⁺-active if it is violated by the analogously defined point x̃ with x̃^ν_i = x^ν_i + 1.

We now investigate under which conditions inequality k is (ν, i)⁻- or (ν, i)⁺-active for a feasible point x. With these conditions we will be able to perform pruning steps. Firstly, by Definition 4.2, inequality k is (ν, i)⁻-active if and only if

B^ν_k x^ν − B^ν_{ki} > b^ν_k

holds. Because of the integrality assumptions for B^ν and b^ν, this is exactly true when

B^ν_k x^ν ≥ b^ν_k + B^ν_{ki} + 1 (6)

holds. Due to feasibility of x, i.e. B^ν_k x^ν ≤ b^ν_k, this condition can only be satisfied if B^ν_{ki} + 1 ≤ 0 and thus B^ν_{ki} < 0 holds. This argumentation results in Corollary 4.3.
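Assuming condition (6) indeed amounts to B^ν_k x^ν ≥ b^ν_k + B^ν_{ki} + 1, as the derivation above suggests, a (ν, i)⁻-activity test for integer data can be sketched as follows:

```python
import numpy as np

def minus_active_rows(B, b, x, i):
    # rows k for which decreasing x[i] by one violates B[k] @ x <= b[k];
    # with integer data this reads B[k] @ x - B[k, i] >= b[k] + 1, and for
    # a feasible x it can only hold with a negative coefficient B[k, i]
    lhs = B @ x - B[:, i]
    return [k for k in range(B.shape[0])
            if lhs[k] >= b[k] + 1 and B[k, i] < 0]

B = np.array([[-1, 0], [0, 1]])   # encodes x_1 >= 0 and x_2 <= 3
b = np.array([0, 3])
x = np.array([0, 2])              # a feasible integer point
```

At this point, only the first inequality is (1, i)⁻-active for i = 1: decreasing x_1 would violate x_1 ≥ 0, while x_2 can be decreased freely.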

Corollary 4.3
Let x be a solution of a problem NEP with strategy sets as defined in (5). Then the following two statements hold: (i) Suppose all requirements of Theorem 3.2(i) hold for some x̄ ∈ X̄ and an index pair (ν, i). Then x^ν_i = l^ν_i holds or there exists at least one inequality k with (6) and B^ν_{ki} < 0. (ii) Suppose all requirements of Theorem 3.2(ii) hold for some x̄ ∈ X̄ and an index pair (ν, i). Then x^ν_i = u^ν_i holds or there exists at least one inequality k with (7) and B^ν_{ki} > 0.

Now we can state Algorithm 2. This procedure can be applied at an arbitrary point x̄ ∈ Ȳ in order to reduce the search space. Two outer for-loops, starting in lines 2 and 3, iterate through all variables of the game. For each variable, the requirements of Theorem 3.2 are checked. If case (i) or (ii) is applicable, we perform a partition of the set according to Corollary 4.3. Suppose that for (ν, i) the if-statement in line 4 is true. Then the first statement of Corollary 4.3 holds. Consequently, in any Nash equilibrium x, the component x^ν_i is at its lower bound l^ν_i or at least one inequality from the index set J_{ν,i} must be (ν, i)⁻-active. Integer points for which none of this holds are pruned in lines 5-12 by introducing new inequalities and splitting the set(s) up. In line 6, there is an inner for-loop which ensures that the subdivision is done for every set in the list B. At first, B only contains Y, but as soon as the if-statements hold true for more than one index pair, this step is performed for all sets from the previous subdivisions. Thus for every set from the current list B the partition is added to a new list C, which replaces B afterwards. We will now describe in detail how lines 7-11 yield a pairwise disjoint subdivision. In line 7, all points for which x^ν_i is at its lower bound are added to C, and line 8 ensures that the next sets are disjoint. Then in lines 9-11, for all inequalities with B^ν_{ki} < 0, the points which firstly satisfy (6) and secondly are not contained in previous sets are added to C.
The latter is done in every iteration by stating the negation of (6) for all sets which will be added to C successively in this for-loop. We can state the negation of inequality (6) as

B^ν_k x^ν ≤ b^ν_k + B^ν_{ki}. (¬6)

Note that (6) and (¬6) form a split disjunction. In lines 13-21 the analogous approach is implemented for the case that the requirements of Theorem 3.2(ii) hold.

Example 4.4 Figure 3 shows one player's strategy (sub-)set for an arbitrary game with two variables. The gray area is the continuously relaxed strategy set Ȳ_1. In this situation, Algorithm 2 was executed with Y and some x̄ ∈ Ȳ, which we do not have to name explicitly. We just suppose that the if-statement in line 4 holds true for x^1_2 and that B = {Y} still holds. Thus there is only one iteration of the for-loop starting in line 6, which we describe consecutively.
At first, in line 7 the algorithm puts the set C_1 × Y^2 into C. It contains all possible strategies with active lower bound x^1_2 = l^1_2. This happens by posing the inequality labeled "1" in Fig. 3 for this set. For all future sets of this iteration, inequality ¬1, with x^1_2 ≥ l^1_2 + 1, is stated in line 8 without excluding any feasible points. At this point, the innermost for-loop starts in line 9. The index set J_{1,2} contains only two elements, as there are only two linear constraints with B^1_{j2} < 0 which can be (1, 2)⁻-active. We can identify them because they prevent x^1_2 from being decreased at some point in Ȳ_1. Thus there are two iterations of this loop. The first one is performed with the left inequality. In line 10, we put C_2 × Y^2 into C, where inequality 2, coming from (6), holds. Afterwards, we state inequality ¬2 to prevent an overlap with all incoming sets. The second iteration is done with the right inequality. For this restriction, in addition to the inequalities ¬1 and ¬2, inequality 3 must hold in C_3 × Y^2, which is put into C in line 10. In line 11, the inequality ¬3 is stated for future sets, which is not necessary anymore, as we exit the two for-loops and replace B with the three sets in C.
As a result, we have three strategy subsets, and for player one there are overall seven integer points remaining which could be choices in a Nash equilibrium.
In Sect. 5 we will see that Example 4.4 is not an isolated case, but that often a considerable part of the feasible set can be pruned by Algorithm 2.

Numerical results
In this section, we solve discrete Nash equilibrium problems with the branch-and-prune procedure presented in Sect. 4. Our aims are, firstly, to demonstrate the effectiveness of our method with initial experiments. In particular, we would like to show to what extent the pruning criterion facilitates the search for equilibria in random instances by shrinking the search area. Secondly, we want to give an impression of the limitations of this approach and of which parts of the algorithm are the most challenging and computationally intensive, thus providing starting points for further improvements.
In the following experiments, all players' feasible sets are polyhedral, as defined in (5). The objective functions are quadratic, defined by a symmetric, but not necessarily positive semidefinite, (n_ν × n_ν)-matrix Q^ν, an (n_ν × (n − n_ν))-matrix C^ν and a vector d^ν ∈ R^{n_ν} for each player ν = 1, …, N. We consider player convex games as well as games which satisfy Assumption 2.1 but in which the objective functions are only required to be strictly convex with respect to the individual variables x^ν_i for i = 1, …, n_ν. In the following, we firstly give details on the concrete implementation of the algorithms. Secondly, we describe how test instances were generated. Lastly, we evaluate and discuss the results.
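A common algebraic form consistent with this data is θ_ν(x) = ½ (x^ν)ᵀ Q^ν x^ν + (C^ν x^{−ν} + d^ν)ᵀ x^ν; the ½ factor and the exact coupling term are our assumptions, since the displayed formula is not reproduced here. As a sketch:

```python
import numpy as np

def theta_nu(x_nu, x_minus_nu, Q, C, d):
    # quadratic in the player's own variables with a linear coupling to the
    # rivals' variables; the 1/2 factor and the coupling term are assumed,
    # not taken from the paper
    return 0.5 * x_nu @ Q @ x_nu + (C @ x_minus_nu + d) @ x_nu
```

Under this form, the gradient with respect to x^ν is linear in all variables, which matches the observation below that each F^ν_i is linear.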

Implementation
All algorithms are implemented in Matlab R2020a. We solve all occurring optimization problems via the Matlab interface of Gurobi 9.5.0, which enables us to solve non-convex quadratic optimization problems. The script for the numerical tests is executed on an Intel Core i7-9700K CPU @ 3.60GHz with Linux Mint 20 and 32 GB RAM. Some details of the algorithms from Sect. 4 can be implemented in various ways and are specified below. The complete code is available in a Git repository.

Algorithm 1 At first, the feasible strategy in line 4 is computed with a Gauss-Seidel best response scheme (see [7, Algorithm 1]), because we favor x̄ to be already a continuous Nash equilibrium. Within this Gauss-Seidel procedure, we avoid exhaustive calculations in the case of slow or no convergence by executing the while-loop at most 10 times. Note that in this case our approach works as well, as it does not rely on x̄ being a continuous Nash equilibrium, but only a feasible point. Secondly, the if-statement in line 7 is verified by solving the optimization problem Q_ν(x̄^{−ν}) and comparing its optimal value to θ_ν(x̄) for each player. Thirdly, for the considered games, we can use Algorithm 2 as the pruning procedure in line 5.
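A minimal sketch of such a capped Gauss-Seidel sweep, with an abstract best_response callback standing in for the per-player solves (the callback and the list representation of strategies are illustrative assumptions):

```python
def gauss_seidel(x0, best_response, max_sweeps=10):
    # cyclically replace each player's strategy by a best response to the
    # current strategies of the others; capped at max_sweeps, since the
    # returned point only needs to be feasible, not an exact equilibrium
    x = list(x0)
    for _ in range(max_sweeps):
        changed = False
        for nu in range(len(x)):
            y = best_response(nu, x)
            if y != x[nu]:
                x[nu] = y
                changed = True
        if not changed:               # fixed point: a Nash equilibrium
            break
    return x
```

If the sweep stabilizes, the fixed point is a Nash equilibrium of the relaxed game; otherwise the cap simply returns the last, still feasible, iterate.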
Algorithm 2 The description in Sect. 4.2 is tailored to convex polyhedral strategy sets. In the following, we explain how the if-statements are checked. For each variable, the strict convexity of θ_ν in x^ν_i is fulfilled when Q^ν_{ii} > 0. Further, F^ν_i is a linear function and therefore both convex and concave. Naturally, we can calculate its gradients explicitly, and (1) and (2) are validated by checking the boundedness of F(1) and F(2). Lastly, in lines 7-11 and 16-20, we ensure that x̄ is always in the first entry of B (if it is not pruned) by some additional logical queries.

Generation of test instances
We randomly generate instances of Nash games and name them C/N XY_k. The first letter is C if the instance is player convex and N if not. The second and third characters denote the number of players X and the number of variables Y which each player controls, and k is an index to distinguish instances with similar attributes. For example, the instance C32_k consists of three players with two-dimensional strategy sets, and player convexity holds. Table 1 lists all instances and their properties. In the next two paragraphs, we explain these properties and sketch how the instances were generated. For further details we refer to our implementation.

Strategy sets
The aim is to generate an arbitrary convex polytope as strategy set for each player. Our approach is to start with a box which has equal side lengths and its center in the origin. We initiate X^ν from (5) with l^ν_i = −5, u^ν_i = 5, i = 1, …, n_ν. Afterwards we sequentially add m linear inequalities. We perform the following steps to add a constraint B^ν_k x^ν ≤ b^ν_k:

1. To determine the number of nonzero values in B^ν_k, we draw a number from a uniform discrete distribution between two and the number of variables. The indices are selected as a random permutation.
2. We set each nonzero value in B^ν_k as follows: we draw a number from N(5, 1.5), round it, and switch the sign with probability 0.5.

The choice of b^ν_k ∈ Z is based on geometric considerations. To avoid redundancy, we set it so that the distance of the new inequality from the origin is less than half the initial box diameter. By setting it to a positive value, we ensure consistency.
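The constraint-generation steps above can be sketched as follows (Python with NumPy instead of the paper's Matlab; the concrete integer choice of b^ν_k within the stated geometric bounds is our assumption):

```python
import numpy as np

def add_random_constraint(n, rng):
    """Generate one random inequality row B_k x <= b_k as described above.

    n: number of variables of the player. Returns (B_k, b_k).
    b_k is positive (consistency: the origin stays feasible) and chosen so
    that the hyperplane's distance to the origin, b_k / ||B_k||, stays below
    half the diameter of the initial box [-5, 5]^n (to avoid redundancy).
    """
    B_k = np.zeros(n)
    nnz = rng.integers(2, n + 1)                  # nonzeros drawn from {2, ..., n}
    idx = rng.permutation(n)[:nnz]                # random support indices
    vals = np.round(rng.normal(5.0, 1.5, size=nnz))
    signs = rng.choice([-1.0, 1.0], size=nnz)     # flip sign with probability 0.5
    B_k[idx] = vals * signs
    half_diag = 0.5 * np.linalg.norm([10.0] * n)  # half the diameter of [-5, 5]^n
    # Our assumption: place the hyperplane at half the admissible distance.
    b_k = int(max(1, np.floor(0.5 * half_diag * np.linalg.norm(B_k))))
    return B_k, b_k
```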
The mean density of all matrices B^ν of each instance is listed in Table 1. Note that the complexity of Algorithm 2 significantly increases with the number of inequalities. For the purpose of later comparisons, we also compute the cardinality of the common feasible set, which is simply its number of integer points.

Objective functions: For each instance, we fill C^ν, d^ν and Q^ν with random values from the interval (−1, 1) for each player. Each entry in C^ν and d^ν is set to zero with a probability of 0.5 in order to reduce density.
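For the low-dimensional instances considered here, the cardinality of the common feasible set mentioned above can be obtained by brute-force enumeration over the bounding box; a sketch, with a single stacked system B x ≤ b describing the added inequalities:

```python
import itertools
import numpy as np

def count_integer_points(l, u, B, b):
    """Count integer points in {x : l <= x <= u, B x <= b} by enumerating
    all integer points of the bounding box and filtering by the inequalities.

    Only viable in low dimensions: the box contains prod(u_i - l_i + 1) points.
    """
    count = 0
    ranges = [range(int(li), int(ui) + 1) for li, ui in zip(l, u)]
    for point in itertools.product(*ranges):
        if np.all(B @ np.array(point) <= b):
            count += 1
    return count
```

The exponential growth of this count in the number of variables is exactly what makes the large test bed hard.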
For generating a test bed of player convex problems, we update Q^ν such that it is positive semidefinite. For those instances, we list the minimum eigenvalue λ_min of all Q^ν in Table 1. In the non-convex test bed, we set Q^ν := 0.5 · (Q^ν)^T + 0.5 · Q^ν and replace the diagonal entries by their absolute values. By this, we obtain a symmetric matrix, and θ^ν is convex in the single variables x^ν_i, i = 1, …, n_ν. In this procedure, the matrices often turn out to be positive definite by chance. We discard instances where this happens for all players. Column N_NC in Table 1 shows how many players have non-positive-semidefinite matrices Q^ν and thus a non-convex objective function.

Test bed: Besides convexity, we subdivide the instances according to their sizes. The small test bed comprises all instances of type 22. The maximum number of integer points is bounded below 12,000. The medium test bed consists of all 23 and 32 instances. In the instances 23_5 to 23_8, the complexity is increased by adding twice as many constraints. Finally, the large test bed comprises all 24, 33 and 25 problems. The number of integer points drastically increases due to the exponential growth of the strategy sets in the number of variables. Here, we can only expect convergence in reasonable time if the pruning procedure eliminates an enormous part of the feasible sets.
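The symmetrization step for the non-convex test bed, together with a player-convexity check via eigenvalues, might look as follows (a Python sketch of the construction described above; the function names are ours):

```python
import numpy as np

def make_symmetric_abs_diag(Q):
    """Non-convex test bed construction: symmetrize Q and take absolute
    values on the diagonal.

    Q := 0.5 * Q^T + 0.5 * Q guarantees symmetry; taking |Q_ii| makes the
    quadratic strictly convex in each single variable whenever Q_ii != 0.
    """
    Q = 0.5 * (Q.T + Q)
    np.fill_diagonal(Q, np.abs(np.diag(Q)))
    return Q

def is_player_convex(Q, tol=1e-12):
    """A quadratic with symmetric Q is convex in the player's own variables
    iff Q is positive semidefinite, i.e. its minimum eigenvalue is >= 0."""
    return bool(np.linalg.eigvalsh(Q).min() >= -tol)
```

Instances where `is_player_convex` holds for every player by chance would be discarded from the non-convex test bed, as described above.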

Evaluation
Subsequently, we investigate whether Algorithm 1 is able to compute some or even all equilibria of the test instances. Additionally, we examine how much of the feasible set can be pruned by Algorithm 2 and compare the performance on convex and non-convex instances.
Table 2 shows the statistics of the solving process for all player convex instances. Column |E| reports how many solutions of the NEP are found within the time limit of t_max = 3600 seconds. Column t_1 marks the run time in seconds when the first equilibrium was found, t_2 when the last equilibrium was found, and t_3 when the solving process ended. If t_3 = t_max, the process did not finish and there is no guarantee that all solutions were found. The statistic O(t_k) displays how often the if-statement in line 6 of Algorithm 1 held true, i.e., how many integer points were processed up to these timestamps. Note that, if the algorithm finished, the ratio of O(t_3) to the cardinality of the common feasible set tells us the share of integer points that needed to be processed; the rest was pruned by Algorithm 2.
In the small test bed, the algorithm completed and all equilibria were found. The properties of the instances are quite different: while C22_1 and C22_3 have two solutions, C22_4 has none. In the medium test bed, we were able to find provably all solutions within one hour for 11 of 12 instances, C32_3 being the exception. For this instance, we found three equilibria but did not finish. Notably, five instances are certified to have no equilibria. Lastly, in the large test bed there was no instance for which the procedure finished within an hour. Nevertheless, we found a solution for two instances.
If we were able to compute equilibria, the first one was mostly found within the first ten seconds of the run time. We now analyse the 15 instances for which t_3 < t_max holds. In four cases, less than 5% of the integer points in the common feasible set were processed; in seven others, this share is under 8%. We note that often a large proportion of feasible points could be pruned; the arithmetic mean is 92.5% and the standard deviation 3.6%.

For the non-convex test bed, displayed in Table 3, we see similar results. All small and 10 of 12 medium instances were solved completely. Six instances are provably inconsistent. Again, no large instance was solved completely within the time limit, but we found a solution for three instances. For the instances with t_3 < t_max, the mean share of pruned points is 92.7% with a standard deviation of 3.4%. Hence, in our randomly generated test bed, convexity in individual variables is sufficient to prune a large proportion of feasible points.

In contrast, we detect differences in the run time between the convex and the non-convex case. Table 4 reports how much of the total run time is caused by solving optimization problems and checking consistency with Gurobi (GT_tot). Of this time, the left part of the table shows the fractions caused by different tasks. In the non-convex test bed, solving the continuously relaxed problems with the Gauss-Seidel method and checking whether x̂ solves the NEP takes on average a larger proportion of GT_tot (47% and 16% instead of 41% and 11%). For these two tasks, non-convex optimization problems need to be solved. The other two columns report the time fractions needed for consistency checks. Overall, the tables also show which parts of the algorithms require the most run time, indicating where improvements are most beneficial. For example, one could try to determine x̂ with a faster inexact procedure. Furthermore, one may use additional simple logical queries to discard empty sets more efficiently.
All in all, in the considered low-dimensional test instances, the presented algorithm is able to prune a considerable share of feasible points. However, because the cardinality of the joint feasible set grows exponentially in the number of variables, the computation of all equilibria seems prohibitive for higher dimensions.

Conclusion
This paper presents novel theoretical results on pruning for discrete Nash equilibrium problems. The required activity of particular constraints leads to synchronous branching and pruning of the strategy sets. Furthermore, we showed in a numerical study that a noteworthy part of the joint feasible set can be pruned by following this rationale. This was demonstrated for polyhedral strategy sets and (not necessarily convex) quadratic objective functions. It remains to be investigated whether these results can also be applied to broader problem classes such as, for example, generalized Nash equilibrium problems.
Obviously, the gradient ∇_{x^ν} g^ν_j(x^ν) of constraint j equals the unit vector e_j for j ∈ {1, …, n_ν} and −e_j for j ∈ {n_ν + 1, …, 2n_ν}, respectively.

Proposition A.1 ([14, Proposition 3.1]) Suppose that the joint strategy set is defined by box constraints. Let x̂ be an integer solution of the continuously relaxed problem NEP. Consider a generic player ν and suppose that an index i ∈ {1, …, n_ν} exists such that θ^ν is strictly convex with respect to x^ν_i and that one of the two possibilities holds: (i) x̂^ν_i = l^ν_i, F^ν_i is convex, and for each player μ ∈ {1, …, N} and each index j ∈ {1, …, n_μ} such that (μ, j) ≠ (ν, i), the corresponding condition from [14] holds. Then any feasible point x with x^ν_i ≠ x̂^ν_i cannot be a solution of NEP.

More specifically, it will now be proven that, if all prerequisites of Proposition A.1 are fulfilled, Theorem 3.2 is applicable and yields the same result. Lemma A.2 shows that the requirements on F^ν_i(x̂) in Theorem 3.2(i)/(ii) follow from the assumptions in Proposition A.1(i)/(ii). Given that, Corollary A.3 proves that (1) and (2) also follow in the respective cases and that Theorem 3.2 yields the same result.
Lemma A.2 Suppose that the joint strategy set is defined by box constraints and that θ^ν is strictly convex with respect to x^ν_i, as stated in Proposition A.1. From x̂ being a solution of NEP and l^ν_i ≠ u^ν_i, the following two statements follow: (i) If x̂^ν_i = l^ν_i, then F^ν_i(x̂) ≥ 0 must hold. (ii) If x̂^ν_i = u^ν_i, then F^ν_i(x̂) ≤ 0 must hold.
Proof Since x̂ is a solution of NEP, for each ν the point x̂^ν is a minimal point of the problem Q^ν(x̂^{−ν}). Since the gradients of active box restrictions are linearly independent, x̂^ν is a KKT point of Q^ν(x̂^{−ν}), and the stationarity condition holds with multipliers λ_j ∈ R_{≥0} for all j ∈ I_0(x̂^ν, X^ν). In the i-th row of this equation, solely the gradients of the i-th variable's upper and lower bound constraints are non-zero, ∇_{x^ν} g^ν_i = e_i and ∇_{x^ν} g^ν_{n_ν+i} = −e_i. By l^ν_i ≠ u^ν_i, only one of them can be in the active index set.
• If x̂^ν_i = l^ν_i, the constraint g^ν_{n_ν+i} is active and statement (i) follows from F^ν_i(x̂) − λ_{n_ν+i} · 1 = 0.
• If x̂^ν_i = u^ν_i, the constraint g^ν_i is active and statement (ii) follows from F^ν_i(x̂) + λ_i · 1 = 0.

Proof (of Corollary A.3) Since, as stated in Proposition A.1, the joint strategy set is defined by box constraints, the strategy sets are defined by convex (linear) functions. Given Lemma A.2, it remains to show that (1) and (2) follow in the two particular cases and that the consequential statements are equivalent.
Firstly, we start with Proposition A.1(i). For an arbitrary player μ, let J^μ ⊆ {1, …, n_μ} be the set of all indices j such that (μ, j) ≠ (ν, i) and the corresponding partial derivative of F^ν_i does not vanish. If μ = ν, this also applies to the index i: ∇_{x^ν_i} F^ν_i(x̂) > 0 due to strict convexity of θ^ν in this component, g^ν_{n_ν+i} is also active, and ∇_{x^ν} g^ν_{n_ν+i}(x̂^ν) = −e_i. Together, for every player μ ∈ {1, …, N}, the scalar product is non-negative for all d^μ ∈ L_≤(x̂^μ, X^μ) and therefore (1) is fulfilled. This is easy to verify with the help of the statements above, since for each index p ∈ {1, …, n_μ} one of three possibilities holds. For any feasible point x with x^ν_i ≠ x̂^ν_i, it follows that x^ν_i ≥ x̂^ν_i + 1 = l^ν_i + 1, and directly the feasibility of the point constructed as in Theorem 3.2. Therefore, the conclusion of both rationales is equivalent.
Secondly, in the case of Proposition A.1(ii), we obtain that (2) holds by a similar argumentation and yields the same conclusion; we present it for the sake of completeness. With the sets J^μ and K^μ defined as above, for an arbitrary player μ and index p, one of three possibilities holds. If (μ, p) = (ν, i), this also applies: ∇_{x^ν_i} F^ν_i(x̂) > 0 due to strict convexity of θ^ν in this component, g^ν_i is active at x̂^ν, and ∇_{x^ν} g^ν_i(x̂^ν) = e_i; otherwise, ∇_{x^μ_p} F^ν_i(x̂) = 0. Together, for every player, the scalar product is non-negative for all d^μ ∈ L_≤(x̂^μ, X^μ) and therefore (2) is fulfilled.
For any feasible point x with x^ν_i ≠ x̂^ν_i, it follows that x^ν_i ≤ x̂^ν_i − 1 = u^ν_i − 1, and directly the feasibility of the constructed point. Again, the conclusion of both rationales is equivalent.
Remark A.4 The assumption in Proposition A.1 that x̂ is an integer solution of the continuously relaxed NEP is not necessary to obtain the statement; it is sufficient to require that x̂ is any solution of the relaxed NEP. Nevertheless, this does not restrict the applicability of Proposition A.1 very much, because for all variables x^μ_j with ∇_{x^μ_j} F^ν_i(x̂) ≠ 0, it is required that their value coincides with one of the integer-valued bounds anyway.

B Algorithm 3: Remove strategy from search space
Algorithm 3 is employed after x̂ has been processed by Algorithm 1. As we have x̂ ∈ B_1, this set is divided into at most 2 · n sets, excluding x̂ and preserving all other integer points from B_1, i.e., the union of the new subsets contains every integer point of B_1 except x̂.
Note that at least one component of a point x in any new subset of B_1 must differ from x̂ in order to achieve the exclusion. Moreover, in lines 8-9, Algorithm 3 ensures that all sets from B_+ are pairwise disjoint.
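The splitting idea behind Algorithm 3 can be sketched for the special case of a box-shaped subset (the paper's algorithm handles general convex sets; this simplified version is our illustration). The box is partitioned into at most 2 · n pairwise disjoint boxes whose union contains every integer point except x̂:

```python
def remove_point_from_box(l, u, x_hat):
    """Split the integer box [l, u] into at most 2*n pairwise disjoint boxes
    whose union contains every integer point of [l, u] except x_hat.

    For each component k, the components 0..k-1 are pinned to x_hat and
    component k is forced strictly below or strictly above x_hat[k]; any
    point differing from x_hat lands in exactly one of these boxes.
    """
    n = len(l)
    parts = []
    for k in range(n):
        # subset with component k strictly below x_hat[k]
        if l[k] <= x_hat[k] - 1:
            lo = list(x_hat[:k]) + [l[k]] + list(l[k + 1:])
            hi = list(x_hat[:k]) + [x_hat[k] - 1] + list(u[k + 1:])
            parts.append((lo, hi))
        # subset with component k strictly above x_hat[k]
        if x_hat[k] + 1 <= u[k]:
            lo = list(x_hat[:k]) + [x_hat[k] + 1] + list(l[k + 1:])
            hi = list(x_hat[:k]) + [u[k]] + list(u[k + 1:])
            parts.append((lo, hi))
    return parts
```

Disjointness holds because boxes with different pinned prefixes cannot share a point, matching the role of lines 8-9 in Algorithm 3.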

Fig. 2 Pruning of x̂ when Theorem 3.2(i) is fulfilled for i = 2. In particular, ∇_{x^ν} F^ν_2(x̂) needs to lie in the dual cone C of L_≤(x̂^ν, X^ν)

Assumption 4.1 The output of a pruning procedure, called with a subset Y of the joint strategy set and x̂ ∈ Y, is a list B = {B_1, …, B_k} with the following properties: (P1) The set Y \ (∪_{i=1}^k B_i) does not contain any Nash equilibrium

Input: Discrete Nash equilibrium problem NEP with bounded strategy sets defined by convex functions
Output: Solution set E of NEP
initialize list of strategy subsets L := { }
initialize list of equilibria E := {}
while L ≠ {} do
    take a strategy set Y from L
    if Y ≠ ∅ then
        compute a feasible strategy x̂ ∈ Y
        call a pruning procedure, satisfying Assumption 4.1, with Y and x̂ as input and obtain the list

Fig. 3 Pruning in a polyhedral strategy set, where the conditions of Theorem 3.2(ii) hold and Corollary 4.3(ii) can be applied. We illustrate in Example 4.4 how the procedure works in detail

Example 4.4 Figure 3 depicts how the two inner for-loops in Algorithm 2 partition one player's strategy set into three sets C_1, C_2 and C_3. The illustration shows the first

Algorithm 2: Pruning procedure for convex polyhedral NEP
Input: Joint strategy set Y of NEP, strategy x̂ ∈ Y = Y^1 × … × Y^N
Output: Pruned list of joint strategy sets B, which meets Assumption 4.1
1 initialize B := {Y}
2 for ν = 1 to N do
3     for i = 1 to n_ν do
4         if θ^ν is strictly convex in x^ν_i, F^ν_i is convex, F^ν_i(x̂) ≥ 0 and (1) holds for each player then
5             initialize C := {}
6             for B ∈ B do
7                 put

Corollary A.3 Theorem 3.2 is a generalization of Proposition A.1.

Table 1 Properties of generated instances: minimum eigenvalue of all Q^ν (λ_min), number of inequalities added per player (m), mean density of constraint matrices, cardinality of the common feasible set, and number of players with θ^ν non-convex w.r.t. x^ν (N_NC)

Table 2 Results on convex test bed: number of computed equilibria (|E|), timestamps when the first (t_1) and the last (t_2) equilibrium was found, timestamp at the end of computation (t_3), and number of processed integer points at timestamp t_i (O(t_i)). If t_3 = t_max, the last column has no similar interpretation; it only states how many integer points were processed in the given time

Table 3 Results on non-convex test bed: number of computed equilibria (|E|), timestamps when the first (t_1) and the last (t_2) equilibrium was found, timestamp at the end of computation (t_3), and number of processed integer points at timestamp t_i (O(t_i))

Table 4 Run time details on test beds: total run time of Gurobi (GT_tot) and shares of run time for the Gauss-Seidel alg. (GT