Aggregative Variational Inequalities

We enrich the theory of variational inequalities for the case of an aggregative structure by incorporating recent results obtained with the Selten–Szidarovszky technique. We derive existence, semi-uniqueness and uniqueness results for solutions and provide a computational method. As an application we derive powerful practical results for Nash equilibria of sum-aggregative games and illustrate them with Cournot oligopolies.


Introduction
When dealing with optimisation, equilibrium or related problems, a usual programme is to study existence, semi-uniqueness (i.e. there being at most one solution), uniqueness and computation of solutions. For such problems, variational inequalities provide a unifying, natural, simple and quite general setting. The systematic study of this subject began in the early 1960s with the influential work of Hartman and Stampacchia in [9] for the study of (infinite-dimensional) problems from partial differential equations. The present theory of (finite-dimensional) variational inequalities has found applications in mathematical programming, engineering, economics and finance. 1 In particular this theory applies to Nash equilibria of games in strategic form. However, various quite sophisticated recent results for sum-aggregative games with pseudo-concave conditional payoff functions do not follow from this theory. The results we have in mind here concern uniqueness results as in [11], which are derived by what was called 'the Selten–Szidarovszky technique' (SS-technique) in [26].
The origin of the SS-technique can be found in the book [21] of Selten dealing with aggregative games and in the article [22] of Szidarovszky dealing with Cournot oligopolies. 2 The aim of the present article is to go a theoretical step further by integrating an advanced version of the SS-technique into the theory of variational inequalities. For more on the SS-technique, see [4,11].
We consider two types of variational inequalities; both are special cases of the following quite general form VI(X, F): where X is a non-empty subset of R^n and F = (F_1, …, F_n): X → R^n is a function. A solution of VI(X, F) is defined as an x ∈ X that satisfies all inequalities in (1). 3 Both cases relate to the aggregative variational inequality VI(X, T) with X = R^n_+ or X = ×_{l=1}^n [0, m_l], where, with N := {1, …, n} and x_N := Σ_{l∈N} x_l, each T_i depends on x only through x_i and the aggregate x_N. (A precise definition in terms of the t_i is in order.) One may refer to this problem as an 'aggregative variational inequality'. In case X = R^n_+ this variational inequality specialises to a nonlinear complementarity problem, and in the other case to a mixed nonlinear complementarity problem. We shall study the complete set of solutions and do not exclude boundary or degenerate 4 ones.
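For completeness: the standard form of VI(X, F) — presumably the content of the display (1), since it matches the solution test used in "Appendix A" — is:

```latex
% VI(X, F): find a solution x' in X with
F(x') \cdot (x - x') \;\ge\; 0 \qquad \text{for all } x \in X .
```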
In Sect. 2 the results are obtained by applying standard theory to these aggregative variational inequalities. Although the results in this section are not really new, they may contribute to the literature in the sense that the presentation is efficient and self-contained and, in addition, critically reviews and repairs a result in [19]. The new and much more powerful results are obtained by the Selten–Szidarovszky technique in Sects. 3 and 4, assuming X = R^n_+. In Sect. 3, contrary to Sect. 4, there are no differentiability assumptions for the t_i; just continuity is assumed. However, a discontinuity at (0, 0) is always allowed.
Many of the ideas behind the proofs of the results in Sects. 3 and 4 come from [3,11,29], dealing with sum-aggregative games, and [26], dealing with so-called abstract games. In particular Sect. 4.5 provides necessary and sufficient conditions for the variational inequality (2) to have a unique solution. As we shall see, the mathematics used in the SS-technique is quite elementary (although technical): for example, no deep results like Brouwer's fixed point theorem or the Gale–Nikaido theorem and no advanced theories like topological fixed point index theory are needed. The fundamental idea behind the SS-technique is the transformation of the n-dimensional problem for the aggregative variational inequality into a 1-dimensional fixed point problem for the correspondence b := Σ_i b_i with b_i: R_+ ⇉ R; 5 see Theorem 3.2. Various assumptions made on the t_i relate to the so-called At Most Single Crossing From Above property; see Definition 3.1. In the differentiable case checking these assumptions may be straightforward. Theorem 3.2 is also the basis for computational methods, as shown in [1,24] for the Cournot oligopoly context. Section 5 explains how the theory of (aggregative) variational inequalities applies to Nash equilibria of (sum-aggregative) games in strategic form. Especially economic games in strategic form often have an aggregative structure. Among others, this concerns oligopolistic, public good, cost-sharing, common resource, contest and rent-seeking games (e.g. see [3,27]). The most important results concerning Nash equilibria of sum-aggregative games are Theorem 5.1, which provides a very practical uniqueness result, and Theorem 4.3, which is, as illustrated in Sect. 5.4, the basis for games with a possible discontinuity at the origin. The latter is especially important for contest and rent-seeking games and in fact provides a (very abstract) generalisation and improvement of the results in [10,23].
Both theorems do not use explicit pseudo-concavity conditions for the conditional payoff functions (conditions which may not be so easy to verify in various applications); in fact these conditions implicitly hold. In this way the game-theoretic results in [11] are improved upon.
When one looks at the articles on Cournot oligopoly theory, it becomes clear that generalised convexity properties of the price function play an important role in the more sophisticated results; Assumption (c) on the t_i in Theorem 5.1 is also closely related to such properties. 6 In this context it may be interesting to note that minima of various (pre)invex functions (see [17,18]) can be characterised by so-called variational-like inequalities.
There are three appendices: on variational inequalities, on smoothness issues and on various types of matrices.
where X = R^n_+ (unbounded case) or X = ×_{l=1}^n [0, m_l] with m_l > 0 (bounded case), and with t_i: R_+ × R_+ → R (unbounded case) and t_i: [0, m_i] × [0, Σ_{l=1}^n m_l] → R (bounded case). Further we suppose n ≥ 2. As the results in this section are not really new, we shall not use the designation 'theorem' for them.

Assumptions
In this section the following assumptions will occur.
CONT. t_i is continuous.
DIFF. T_i and t_i are continuously differentiable.
EC. (For the unbounded case.) There exists …
For the unbounded case, with K_i as in Assumption EC, let K := ×_{l=1}^n K_l. These assumptions are supposed to hold for every i ∈ N. 8 Below we often consider situations where such an assumption holds just for a specific i; then we add [i] to the assumption; for example, EC[i].
Some comments concerning DIFF are in order. Of course, in DIFF, the properties of T_i and t_i are related; however, it is convenient to present them here as stated. As the domain of T_i (respectively t_i) is not open, we interpret continuous differentiability in DIFF as usual: there exists a continuously differentiable extension of T_i (respectively t_i) to an open set.
If Assumption DIFF holds, then the Jacobi matrix J(x) of T : X → R n is given by

Proposition 2.1 0 is a solution of AVI if and only if N_> = ∅.
Proof 0 is a solution if and only if T(0) · x ≥ 0 (x ∈ X), i.e. if and only if …

Proof (of Lemma 2.1) Suppose x is a solution of VI(B, T). As K ⊆ B, it is sufficient to show that x ∈ K. By contradiction, suppose x_j > x̄_j for some j. …

Proof (of Proposition 2.2) 1. CONT implies that T is continuous. Now apply Lemma A.9 in "Appendix A". 2. By Lemma 2.1 with B = R^n_+, each solution of AVI is a solution of VI(K, T) and thus belongs to K. Next we apply Lemmas A.9 and A.10 with X = R^n_+. Fix an r > 0 such that K ⊆ X_{r/2} ⊂ X_r ⊆ 137K. 10 As 137K is compact, Lemma A.9 guarantees that VI(137K, T) has a solution, say x′. Lemma 2.1 guarantees that x′ ∈ K.
So also x′ ∈ X_{r/2} ⊂ X_r. This implies that x′ is also a solution of VI(X_r, T) and that ‖x′‖ ≤ r/2 < r. By Lemma A.10 in "Appendix A", x′ is a solution of AVI.
In order to prove that the set of solutions of AVI is compact, it is sufficient, as this set is bounded by part 1, to show that it is closed. Well, this is guaranteed by Lemma A.7.

Semi-uniqueness
Suppose Assumption DIFF holds. Thus, by (4), in short notation, … It is important to realise that J(x) may not be symmetric.

Proposition 2.3 Suppose Assumption DIFF holds. Then AVI has at most one solution if one of the following two conditions holds.
(a). The matrix J(x) is positive quasi-definite for every x ∈ R^n_+. 11
(b). The matrix J(x) is a P-matrix for every x ∈ R^n_+.
Proof In order for AVI to have at most one solution, it is, by Lemma A.4 in "Appendix A", sufficient to show that T : X → R n is strictly monotone on X or a P-function on X.
(a). Suppose J(x) is positive quasi-definite for every x ∈ R n + . By Lemma A.5, T is strictly monotone.
(b). Suppose J(x) is for every x ∈ R n + a P-matrix. By Lemma A.6, T is a P-function.
Now results for AVI for the unbounded case are implied by conditions that guarantee that each matrix J(x) is positive quasi-definite or a P-matrix. Such conditions can be found in "Appendix C". The next proposition presents such a result.

Proposition 2.4
Consider the unbounded case. Suppose Assumption DIFF holds. Then sufficient for AVI to have at most one solution is that …

Proof The proof is complete, by Proposition 2.3(b), if J(x) is a P-matrix for every x ∈ X. Well, if J(x) is row diagonally dominant with positive diagonal entries, then it is a P-matrix. By (4), this specialises to the condition that for every x ∈ X and i ∈ N: …

Clearly, as n grows, the inequality in part 2 of this proposition becomes more difficult to satisfy. Note also that, by Proposition 2.1, if we add the assumption N_> ≠ ∅, then we obtain the result that if AVI has a solution, then this solution is unique and nonzero.
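The row-diagonal-dominance test in the proof is easy to automate. A minimal numerical sketch (the helper name and the sample matrices are illustrative, not from the article):

```python
def is_row_diag_dominant(J):
    """Return True when the matrix J (a list of rows) has positive
    diagonal entries and is strictly row diagonally dominant.  Such a
    matrix is a P-matrix, so checking J(x) at every x in X (in practice
    at sample points, or symbolically) yields semi-uniqueness for AVI."""
    n = len(J)
    for i in range(n):
        off = sum(abs(J[i][j]) for j in range(n) if j != i)
        if J[i][i] <= 0.0 or J[i][i] <= off:
            return False
    return True
```

For instance, a Jacobian with diagonal entries 2 and off-diagonal entries 1 passes the test for n = 2 but fails it for n = 3 (2 is not greater than 1 + 1), matching the remark that the inequality becomes harder to satisfy as n grows.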

Uniqueness
Combining Proposition 2.4 (or a variant of it) with Proposition 2.2, we obtain a uniqueness result for the aggregative variational inequality AVI. In Sects. 3 and 4 we shall obtain more interesting results by using the SS-technique.

Application: Cournot Oligopoly
In this subsection we critically reconsider and, with Proposition 2.5 below, repair an equilibrium uniqueness result in [19]. 12 This result is, as far as we know, the first one analysing equilibria of Cournot oligopolies by means of nonlinear complementarity problems. The setting for this result is a Cournot oligopoly game Γ with n ≥ 2 firms without capacity constraints, with a price function p: R_+ → R and with a cost function c_i: R_+ → R for firm i ∈ N. With these notations the profit function f_i: R^n_+ → R for firm i is given by f_i(x) = p(x_N) x_i − c_i(x_i).

12 Equilibrium semi-uniqueness in [19] is based on Proposition 2.3(a) by referring to a false statement in [16] (see footnote 26), while ours is based on Proposition 2.3(b) and so relies on P-matrices instead of positive definite matrices. Also a further article on this topic, [14], refers to this false statement. Equilibrium existence in [19] refers to a result in [13] that in our opinion does not apply here. Furthermore, in [19] the relation between the solutions of the nonlinear complementarity problem and the set of Nash equilibria is not addressed.
This defines a game in strategic form Γ with N as player set, with R_+ as strategy set for each player and with f_i as payoff function of firm i. If p and every c_i are twice continuously differentiable, then the aggregative variational inequality AVI, where t_i: R²_+ → R is given by t_i(x_i, y) = p(y) + x_i p′(y) − c_i′(x_i), is referred to here as the 'oligopolistic variational inequality' and will be denoted by OVI.
In fact this aggregative variational inequality concerns what we call in Definition 5.1 in Sect. 5, for a more general setting, the associated variational inequality VI(Γ ). Proposition 2.5 deals with the solution set of the oligopolistic variational inequality and the Nash equilibrium set of Γ . Concerning the latter we have to refer in the proof of Proposition 2.5 to results which are developed in Sect. 5.

Proposition 2.5
Consider a Cournot oligopoly Γ where p: R_+ → R and every c_i: R_+ → R are twice continuously differentiable and the following two conditions hold.
(a). For every i ∈ N and x ∈ R^n_+: …

The following results hold. … (2) implies that e is a Nash equilibrium, and then with Proposition 5.1(1) it follows that e is the unique Nash equilibrium. Well, as (f …

For more on Cournot oligopolies see, for example, [20,25,27].

Setting
Let us fix again the setting. With VI(X, F) being the general variational inequality (1), the special case that we consider in this section is … Comparing the present AVI with the AVI of Sect. 2, note that we now only consider the unbounded case; the reason is that an analysis of the bounded case with the SS-technique becomes much more technical. Also note that the setting uses a smaller domain for t_i than that in Sect. 2.1. We always assume in this section that every t_i is continuous on Δ_+ and denote the set of solutions of AVI by AVI•.

AMSCFA-Property
The following definition is very important for assumptions on the t i in the following subsection.
Thus, a function with the AMSCFA-property has at most one zero. Sufficient for a function to have the AMSCFA-property is that it is strictly decreasing. Two other simple results, which we freely use throughout the article, are the following: suppose g: I → R is continuous, where I is a proper real interval. Then:
- If g is differentiable with g′(x) < 0 at every x ∈ I with g(x) = 0, then g has the AMSCFA-property.
- If g has the AMSCFA-property, then for all x, x′ ∈ I …
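Under the usual formulation of the property (an assumption here, as Definition 3.1 is not reproduced above: once g is ≤ 0 it stays < 0 further to the right, so g crosses zero at most once and only from above), a grid-based sanity check can be sketched as follows. The helper name and the grids are illustrative; passing on a finite grid is of course only a necessary condition.

```python
import math

def has_amscfa_on_grid(g, grid):
    """Necessary check of At Most Single Crossing From Above on a grid:
    after a grid point with g(x) <= 0, g must be < 0 at all later points."""
    seen_nonpositive = False
    for x in grid:
        v = g(x)
        if seen_nonpositive and v >= 0.0:
            return False
        if v <= 0.0:
            seen_nonpositive = True
    return True

# a strictly decreasing function passes; a sine wave re-crosses zero and fails
decreasing_ok = has_amscfa_on_grid(lambda x: 1.0 - x, [i / 100 for i in range(201)])
sine_fails = has_amscfa_on_grid(math.sin, [i / 100 for i in range(701)])
```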

Assumptions
For i ∈ N and μ ∈ [0, 1], defining the function t_i^{(μ)} by …, the following assumptions appear in the analysis. 13
AMSV. For every y > 0, the function t_i(·, y): [0, y] → R has at most one zero and, if it has a positive zero, then t_i(0, y) > 0.
LFH'. For every y > 0, the function t_i(·, y): [0, y] → R has the AMSCFA-property.
RA. For every μ ∈ ]0, 1], the function t_i^{(μ)} has the AMSCFA-property.
RA1. The function t_i^{(1)} has the AMSCFA-property.
RA0. For every 0 < y < y′: …
These assumptions are supposed to hold for every i ∈ N. Below we very often consider situations where such an assumption holds just for a specific i; then we add [i] to the assumption; for example, RA[i]. Note that the above assumptions do not depend on the value of t_i at (0, 0). In fact this value is not important for results on AVI• \ {0}; see also Lemma A.1.
Of course, … In addition to these assumptions, we use the following terminology. We call i ∈ N of type I+ if t_i^{(1)} …

Proof By contradiction. So suppose LFH'[i] and RA[i] hold and 0 < y < y′ with t_i(0, y) ≤ 0 and t_i(0, y′) > 0. The continuity of t_i(0, ·): … has the AMSCFA-property.

If i is of type I
Proof 1. In the case where t_i^{(1)} has a zero, say m, we have t_i^{(1)} … In the first case i is of type I+ and in the second of type I−.
2. By contradiction. So suppose i is of type I− and t_i^{(1)}(a_i) ≥ 0 for some a_i > 0. As t_i^{(1)}(x_i) < 0 for x_i > 0 small enough, the continuity of t_i^{(1)} implies the existence of an

Proposition 3.2 Suppose Assumption RA1 holds and every i ∈ N is of type I
As RA1[i] holds, Lemma 3.2(2) gives a contradiction.

Computation
And define the correspondences b: R_+ ⇉ R^n and b: … 2. Define the correspondences s_i: R_++ ⇉ R (i ∈ N) and s: R_++ ⇉ R by … The correspondence b_i provides global information on the t_i. Denote by fix(b) the set of fixed points of the correspondence b: R_+ ⇉ R, i.e. the set of y ∈ R_+ for which y ∈ b(y).

Definition 3.3 The aggregative variational inequality AVI is
backward solvable if it is internal and external backward solvable.

Proof Write the statement as
The solution aggregator is defined as the function σ: AVI• → R given by …

14 The sum here is the Minkowski sum.
3. By part 2, we still have to prove '⊇'. So suppose …

The standard Szidarovszky variant of the SS-technique deals with at most single-valued b_i. For such a situation b is also at most single-valued, and Theorem 3.1(3) shows that AVI is backward solvable. So what is a (weak) sufficient condition for the b_i to be at most single-valued? Well, the next lemma provides such a condition.
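When the b_i are single-valued, the 1-dimensional fixed point problem y ∈ b(y) can be attacked numerically. A minimal sketch, assuming each t_i(·, y) is continuous and strictly decreasing (so that backward replies are single-valued); the function names and the linear Cournot specification at the end are illustrative, not from the article:

```python
def backward_reply(t_i, y, tol=1e-10):
    """Backward reply at aggregate y: the x_i in [0, y] with t_i(x_i, y) = 0,
    with the complementarity convention x_i = 0 when t_i(0, y) <= 0 and the
    boundary reply x_i = y when t_i(y, y) >= 0.  Assumes t_i(., y) is
    continuous and strictly decreasing, so bisection applies."""
    if t_i(0.0, y) <= 0.0:
        return 0.0
    if t_i(y, y) >= 0.0:
        return y
    lo, hi = 0.0, y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if t_i(mid, y) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def aggregate_fixed_point(ts, y_hi, tol=1e-10):
    """Bisection on b(y) - y, assuming b(y) > y for small y > 0 and
    b(y_hi) < y_hi; returns the aggregate y with sum_i b_i(y) = y."""
    lo, hi = tol, y_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        b = sum(backward_reply(t, mid) for t in ts)
        lo, hi = (mid, hi) if b > mid else (lo, mid)
    return 0.5 * (lo + hi)

# illustration: linear Cournot with p(y) = 10 - y and c_i(x) = x, so that
# t_i(x_i, y) = p(y) + x_i p'(y) - c_i'(x_i) = 9 - y - x_i (three firms)
ts = [lambda x, y: 9.0 - y - x for _ in range(3)]
y_star = aggregate_fixed_point(ts, y_hi=10.0)
x_star = backward_reply(ts[0], y_star)
```

With these parameters the sketch recovers the textbook Cournot outcome: aggregate output 3·9/4 = 6.75 and individual outputs 9/4 = 2.25.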

Lemma 3.4 If Assumption AMSV[i] holds, then for every y
If Assumption AMSV holds, then by Lemma 3.4 the correspondences b̂_i, ŝ_i, b̂ and ŝ are single-valued and we can and will interpret them as functions. Then in particular b̂(y) = (b̂_1(y), …, b̂_n(y)).

Proposition 3.3 If Assumption AMSV holds, then the solution aggregator σ is injective.
Proof By contradiction. So suppose AMSV holds and x, x′ are distinct solutions with …

Proposition 3.4 Suppose t_1 = · · · = t_n. If Assumption AMSV holds, then each solution x of AVI is symmetric, i.e. x_1 = · · · = x_n.
Proof By contradiction. So suppose x′ is a non-symmetric solution. Fix π ∈ S_n such that … 15 The assumption t_1 = · · · = t_n implies that the aggregative variational inequality AVI is symmetric. 16 By Lemma A.11, P_π(x′) is another solution. As σ(x′) = σ(P_π(x′)), we have a contradiction with Proposition 3.3, i.e. with the injectivity of σ.

Structure of the Sets
For the further analysis it is important to obtain more insight into the structure of W_i. If Assumption AMSV holds, then let … Note that …

Lemma 3.5 Suppose Assumptions AMSV[i] and RA0[i] hold, y ∈ W_i and y′ > y. …

15 See (15) for P_π.
16 Maybe see "Appendix A" for this notion.
, the continuous function t_i^{(1)} has the AMSCFA-property. It follows that t_i(y′, y′) < 0. Next the continuity of t_i(·, y′) implies that there exists an … is an interval. Statement concerning W_i^+: suppose y, y′ ∈ W_i^+ with y < y′ and y″ ∈ ]y, y′[. Now the above proof again applies and shows that y″ ∈ W_i^{++}

Lemma 3.7 Suppose Assumptions AMSV[i] and EC[i] hold. Then b̂ …

Proof This is, as …

If t_i^{(1)}: R_++ → R has a unique zero, then we denote it by x̄_i. (Thus, x̄_i > 0.) Sufficient for x̄_i to be well-defined is that t_i^{(1)} has a zero and that Assumption RA1[i] holds. If in addition Assumption AMSV[i] holds, then we have …

Lemma 3.8 If x i is well-defined and Assumption EC[i] holds, then x i ≤ x i .
Proof By the definitions of x i and x i .

(b). Assumption EC[i] holds. (c). Assumption LFH'[i] holds and i is of type II−.
Proof Having RA1[i], we prove that t_i^{(1)} has a zero. As t_i^{(1)} is continuous, it is sufficient to show that this function assumes both a positive and a negative value.
(a). Fix y ∈ W i . We have

Proposition 3.5 Suppose Assumptions LFH'[i], RA1[i] and RA0[i] hold. Then the function b̂_i: W_i → R is continuous.
Proof We may suppose that W_i ≠ ∅. By Lemma 3.5, W_i is a non-empty interval. It is sufficient to prove that b̂_i is continuous on each non-empty compact interval I with I ⊆ W_i. Fix such an interval and consider the function b̂_i: I → R. As 0 ≤ b̂_i(y) ≤ y (y ∈ I), b̂_i is bounded. As b̂_i is bounded, continuity of b̂_i is equivalent to the closedness of its graph, i.e. of the subset {(y, b̂_i(y)) | y ∈ I} of I × R. As I × R is closed in R², it remains to prove that this graph is closed in R². In order to do this, take a sequence ((y_m, b̂_i(y_m))) in I × R that is convergent in R², with, say, limit (y′, b′), and prove that (y′, b′) ∈ {(y, b̂_i(y)) | y ∈ I}, i.e. that y′ ∈ I and b̂_i(y′) = b′. We have lim y_m = y′ and lim b̂_i(y_m) = b′. As I is closed, y′ ∈ I follows; so y′ > 0. We have 0 …

… (2), W_i^+ is a real interval. We may assume that W_i^+ is not empty. Now Lemma 3.5 implies that W_i is an interval without upper bound. By Proposition 3.6(2), ŝ_i: W_i^+ → R is strictly decreasing or strictly increasing. By contradiction we prove that ŝ_i is strictly decreasing on W_i^+; so suppose ŝ_i is strictly increasing on W_i^+. By Proposition 3.5, ŝ_i: W_i → R is continuous. Case where i is of type I+: by Lemma 3.9(a), x̄_i in (12) is well-defined. We have …

Proof We may suppose that the subset of W where ŝ is positive contains at least two elements. Let y_a, y_b with y_a < y_b be such. So ŝ(y_a) > 0 and ŝ(y_b) > 0. Note that … (i ∈ N). We consider four cases. Case where y_a, … : this case is impossible by Lemma 3.5. Next fix j with ŝ_j(y_a) > 0. If also ŝ_j(y_b) > 0, then y_a, y_b ∈ W_j^+ and, by the above, ŝ_j(y_a) − ŝ_j(y_b) > 0. If ŝ_j(y_b) = 0, then also ŝ_j(y_a) − ŝ_j(y_b) = ŝ_j(y_a) > 0. As desired, we obtain ŝ(y_a) − ŝ(y_b) = Σ_{i∈N} (ŝ_i(y_a) − ŝ_i(y_b)) > 0.
So ŝ(x_N) = 1 and thus x_N > x.

Semi-uniqueness, Existence and Uniqueness
The proof of the following proposition follows a reasoning similar to a result in [2] for sum-aggregative games.

Proposition 3.7 Suppose Assumption LFH' holds and every t i is decreasing in its second variable. Then AVI has at most one solution.
Proof By contradiction. So suppose x, x′ ∈ AVI• with x ≠ x′. We may suppose x_N ≤ x′_N. Note that x′_N > 0. As x ≠ x′, we can fix i with x_i < x′_i. …, the function t_i(·, x′_N) has the AMSCFA-property; so t_i(x_i, x′_N) > 0 follows. As t_i is decreasing in its second variable, 0 < t_i(x_i, x′_N) ≤ t_i(x_i, x_N) holds, which is a contradiction.

Theorem 3.3 Suppose Assumptions LFH' and RA hold and for every i ∈ N: i is of type I+ or of type II− or EC[i] holds. Then AVI has at most one nonzero solution.
Proof By Lemma 3.1, RA0 holds. Lemma 3.13 guarantees that every ŝ_i: W_i^+ → R is strictly decreasing. By Lemma 3.14, ŝ is strictly decreasing on the subset of its domain where it is positive. Theorem 3.2(3) now implies the desired result.
Of course, if we add N_> ≠ ∅ as an assumption to this theorem, then (by Proposition 3.1(1)) AVI has at most one solution, and such a solution is nonzero.

If in addition to (a) and (b) Assumption RA holds, then AVI has a unique nonzero solution.
Proof We prove the first statement about existence; then the second about uniqueness follows from Theorem 3.3.
Let Ñ = {k ∈ N | k is of type I+}. For both cases (a) and (b), Lemma 3.12(2) guarantees that W = [x, +∞[ with x = x̄_p for some p ∈ Ñ. It follows that ŝ(x) = Σ_{i∈N} ŝ_i(x) ≥ ŝ_p(x̄_p) = 1. The solution set of AVI is a non-empty compact subset of R^n if AVI• \ {0} is a non-empty compact subset of R^n_+; we shall prove the latter. By Theorem 3.2(3), AVI• \ {0} equals b̂(Z), where Z is the set of zeros of the function b̄: [x, +∞[ → R. As this function is continuous, Z is a closed subset of [x, +∞[, so also a closed subset of R. Below we show that Z is also a bounded subset of R and therefore a compact subset of R. As Proposition 3.5 also implies that b̂: [x, +∞[ → R^n is continuous, it then follows that AVI• \ {0} = b̂(Z) is a compact subset of R^n. Finally note that, by Lemma 3.2, each i is of type I+ or of type I−.
(a). Having EC, fix y with y ≥ Σ_{i∈N} x̄_i. By

Setting
The setting here is the same as in Sect. 3.1. However, we always assume here not only that each function t_i: Δ_+ → R is continuous, but also that it is partially differentiable. 17 Partial differentiability is needed to define Assumptions LFH, DIR and DIR′ given below.

Assumptions
Besides Assumptions AMSV, LFH', RA, RA1, RA0 and EC from Sect. 3.3, we here also consider four new ones: … Note that Assumptions LFH, DIR and DIR′ concern local conditions. 18

Lemma 4.2 Suppose Assumptions DIFF[i], LFH[i] and DIR[i] hold. Then for every
We have the following identity:

Properties of the Functionsb i andŝ i
In the next lemma we consider the differentiability of b̂_i; note that by Lemma 3.6(1),

Proposition 4.1 Suppose Assumptions DIFF[i] and LFH[i] hold and W
Proof 1. For every y ∈ W_i^{++} we have b̂_i(y) > 0 and therefore, by (9), t_i(b̂_i(y), y) = 0. As DIFF[i] holds, t_i is continuously differentiable on Int(Δ_+). As, by LFH[i],

Semi-uniqueness, Existence and Uniqueness
The following theorems provide variants of Theorems 3.3 and 3.4. Concerning this, note that, by Lemma 4.1(2), DIFF together with DIR implies RA. As a matter of fact, this means that the other assumptions about type I+, type II− and EC in Theorem 3.3 are no longer needed. In addition to the previous theorem, which presupposes that at least one i ∈ N is of type I+, we provide with the next theorem a result that can handle situations where every i ∈ N is of type I−. Remember the definition of Ñ in (11).

Proof Note that by Lemma 4.1(2), RA holds. Now, by Lemma 3.1, RA0 also holds.
is as looked for.
'⇒': suppose Σ_{i∈Ñ} s_i > 1; so Ñ ≠ ∅. By Theorem 4.1 we still have to prove that AVI has a nonzero solution. As RA1 holds, Lemma 3.12(2) guarantees that W = R_++. Consider ŝ: R_++ → R. By part 2, we obtain lim_{y↓0} ŝ(y) = (Σ_{i∈Ñ} + Σ_{i∈N\Ñ}) lim_{y↓0} ŝ_i(y) = Σ_{i∈Ñ} s_i > 1. By virtue of EC, we can fix y with … (i ∈ N). It follows that b̂(y) = Σ_{k∈N} b̂_k(y) ≤ y and therefore ŝ(y) ≤ 1. Proposition 3.5 implies that ŝ is continuous. By the intermediate value theorem, there exists y′ ∈ W with ŝ(y′) = 1.

The fundamental result about the existence of the limit in Theorem 4.3(2) guarantees that this limit can in various cases be computed, as we shall illustrate in Sect. 5.4. Its part 3 then gives a sufficient and necessary condition for AVI to have a unique solution while 0 is not a solution.

Sufficient and Necessary Conditions
For Cournot oligopolies there are powerful results dealing with sufficient and necessary conditions for equilibrium uniqueness. Concerning this, [7] is a milestone. It concerns a variant of a result in [14]. Contrary to the latter result, it considers the whole equilibrium set and in particular does not exclude degenerate equilibria. 19 The proof in [7] also is much more elementary than the proof in [14], which deals with Cournot equilibria as solutions of a complementarity problem to which differential topological fixed point index theory is applied. The simpler nature of this proof was realised by using ideas from the Selten–Szidarovszky technique. A shortcoming of the result in [7] is that a strong variant of a Fisher–Hahn condition (see footnote 13) has to hold. 20 Another is that the price function is not allowed to be everywhere positive (which is an assumption that often is used). In [29] a generalisation of the result in [7] was provided that resolves these shortcomings; in addition, it can deal with sum-aggregative games. Below we go even a step further, by generalising so that the results apply to aggregative variational inequalities. In addition we improve them intrinsically (by using the ŝ_i besides the b̂_i). However, we only do this for the case where every i is of type I+ and t_i(0, y) > 0 (y > 0). 21
This implies x_N = Σ_i b̂_i(y) = b(y) = y. As … x′ ∈ ]x, x_N[ with g(x′) < 0. Also, by Lemma 3.16, g(x) = ŝ(x) − 1 > 0. As g is continuous, g has a zero in ]x, x_N[, which is a contradiction. As, by Theorem 3.2(4), …

Setting
Consider a game in strategic form with player set N := {1, …, n} and, for each player i ∈ N, a strategy set X_i and a payoff function f_i. So every X_i is a non-empty set and every f_i a function X_1 × · · · × X_n → R. We denote the set X_1 × · · · × X_n of strategy profiles also by X. For i ∈ N, define X_ı̂ := X_1 × · · · × X_{i−1} × X_{i+1} × · · · × X_n. Further assume n ≥ 2. We denote such a game by Γ. Given i ∈ N, we sometimes identify X with X_i × X_ı̂ and then write x ∈ X as x = (x_i; x_ı̂). For i ∈ N and z ∈ X_ı̂, the conditional payoff function f_i^{(z)}: X_i → R is defined by f_i^{(z)}(x_i) := f_i(x_i; z).

Associated Variational Inequality
First suppose that each strategy set X_i of Γ is a proper real interval and that each payoff function f_i is partially differentiable with respect to its i-th variable. Now for x = (x_i; z) ∈ X one has …

Definition 5.1 Consider Γ. The associated variational inequality VI[Γ] is the variational inequality VI(X, F), i.e. …
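The display defining F is not reproduced above. With the usual sign convention for payoff maximisation — an assumption on our part, but the one consistent with the computation F(e) · (x − e) ≥ 0 in the proof below — it presumably reads:

```latex
F(x) \;=\; -\bigl( D_1 f_1(x), \ldots, D_n f_n(x) \bigr) \qquad (x \in X),
```

so that e solves VI[Γ] exactly when Σ_{i∈N} D_i f_i(e)(x_i − e_i) ≤ 0 for all x ∈ X, the usual first-order condition at a Nash equilibrium.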
2. We prove that e_i is a maximiser of f_i^{(e_ı̂)}. We have F(e) · (x − e) ≥ 0 (x ∈ X). By taking an x ∈ X with x_j = e_j if j ≠ i, we see that … As f_i^{(e_ı̂)} is pseudo-concave, it follows that e_i is a maximiser of this function. 3. By part 2. 4. By parts 1 and 3.
Next let us consider a more subtle situation dealing with games that we simply refer to as 'almost smooth'. This type of game allows for a possible discontinuity at the origin which is useful for various specific games, like that in Sect. 5.4.
a. X_i = R_+;
b. f_i is partially differentiable with respect to its i-th variable at every x ≠ 0;
c. the partial derivative D_i f_i(0) exists as an element of R ∪ {+∞}.
Note that for an almost smooth Γ, for every i ∈ N: each conditional payoff function f_i^{(z)} with z ≠ 0 is differentiable, the conditional payoff function f_i^{(0)} is differentiable on R_++ and its derivative at 0 exists as an element of R ∪ {+∞}. Also note that the payoff functions f_i are not supposed to be continuous. Finally note that for x = (x_i; z) ∈ X, formula (13) holds.

Definition 5.3
Consider an almost smooth game Γ. The associated variational inequality VI′[Γ] is the variational inequality VI(X, F) …

Verifying pseudo-concavity of the conditional payoff functions in applications may not be so easy. For a broad class of sum-aggregative games we shall derive practical results (i.e. Proposition 5.5) in terms of marginal reductions (see Definition 5.5) guaranteeing pseudo-concavity. Remember the definition of Δ and Δ_+ in (6).

Definition 5.5
Suppose Γ is almost smooth and sum-aggregative. The marginal reductions of Γ are defined as the functions t i : Δ → R (i ∈ N ) given by

Proposition 5.3
Consider an almost smooth sum-aggregative game Γ together with its marginal reductions t i .
i) (0).

Lemma 5.1 Consider an almost smooth sum-aggregative Γ together with a marginal reduction t_i. Suppose t_i: Δ_+ → R is continuous and continuously partially differentiable.

Each conditional payoff function f (z)
Proof 1. First and third statements: let a := Σ_l z_l. By Proposition 5.3(2), … is nothing else than the derivative of the function R_+ → R defined by λ ↦ t_i(λ, λ + a) at λ = x_i. Note that a > 0. As t_i: Δ_+ → R is continuously partially differentiable, it follows that t_i is continuously differentiable on Int(Δ_+). If x_i ≠ 0, then (x_i, x_i + a) ∈ Int(Δ_+) and therefore the chain rule can be applied, implying (f … (0, a). Second and third statements: by Proposition 5.
2. In order to prove the strict pseudo-concavity of f_i^{(z)}, we show (having part 1 and footnote 22) that for every …

3. Consider the function f_i^{(0)}: R_++ → R. In order to prove the strict pseudo-concavity (having part 1 and footnote 22), we show that for every …

Having the above, now consider for an almost smooth sum-aggregative game Γ its associated variational inequality VI′[Γ] (see Definition 5.3). With the t_i the marginal reductions of Γ, Proposition 5.
is nothing else than the aggregative variational inequality given by (7).
Then: …

Proof We apply Lemma 5.1. So in part 1 we have to prove that for every 0 ≤ x_i ≤ y: …

Part 2 of the next theorem provides a full result that applies to many concrete sum-aggregative games in the literature. Case where #{i ∈ N | e_i ≠ 0} ≥ 2: now e_ı̂ ≠ 0 (i ∈ N). By Proposition 5.5(1), every conditional payoff function f_i^{(e_ı̂)} is pseudo-concave. So Proposition 5.2(3) guarantees that e is a Nash equilibrium.

The conditional payoff functions f
Case where #{i ∈ N | e_i ≠ 0} = 1: let k be such that e_i = 0 (i ≠ k) and e_k > 0. By Proposition 3.1(2), k ∈ Ñ; so k ∈ N_>. By Proposition 5.5(1), every f_i^{(e_ı̂)} (i ≠ k) is pseudo-concave. So, by Proposition 5.2(2), e_i ∈ R_i(e_ı̂) (i ≠ k). We now prove that also e_k ∈ R_k(e_k̂), i.e. that e_k is a maximiser of f …

2. We prove that VI′[Γ] has a unique nonzero solution; then we are done by part 1. Well, Theorem 4.2 applies and proves the desired result.

Application to Cournot Equilibria
In this subsection we illustrate the power of our general theory by giving a short proof of an important result in [23]. As therein each firm is of type I − , Theorem 5.1 does not apply; we have to rely on Theorem 4.3.
So consider, as in Sect. 2.6, a Cournot oligopoly game Γ with at least two firms, with price function p and cost functions c_i. Suppose that p(y) = 1/y (y > 0) and that the c_i are twice continuously differentiable with c_i′ > 0 and c_i″ > 0. With formula (5) we see that Γ is an almost smooth sum-aggregative game. Consider the associated aggregative variational inequality VI′[Γ].
we have for the marginal reductions

One very quickly verifies that Assumptions DIFF, LFH, DIR and EC hold. Now let us apply Theorem 4.3. We there have Ñ = N and, as t_i(λ, λ) = −c_i′(λ) < 0, each player is of type I−. Theorem 4.3(2) guarantees that s_i = lim_{y↓0} ξ_i(y)/y exists. Taking this limit in (14) gives
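The one-dimensional computation behind this application can be sketched numerically. The snippet below keeps the price function p(y) = 1/y of this subsection but assumes, purely for illustration, quadratic costs c_i(x) = c_i x + (1/2) d_i x² (so c_i′ > 0 and c_i″ > 0) with hypothetical parameter values; for this payoff the marginal reduction works out to t_i(x, y) = (y − x)/y² − c_i′(x), each backward response ξ_i(y) solves t_i(·, y) = 0 on [0, y], and, following the SS-technique, the equilibrium aggregate is the solution of the one-dimensional equation Σ_i ξ_i(y) = y.

```python
import numpy as np

# Hypothetical cost parameters: c_i(x) = c[i] * x + 0.5 * d[i] * x**2,
# so c_i'(x) = c[i] + d[i] * x > 0 and c_i''(x) = d[i] > 0.
c = np.array([0.5, 0.8, 1.0])
d = np.array([0.2, 0.1, 0.3])

def xi(y):
    """Backward responses: for aggregate y > 0 solve t_i(x, y) = 0, where
    t_i(x, y) = (y - x) / y**2 - (c + d * x) is the marginal reduction for
    p(y) = 1/y; the (here linear) equation is solved and clipped to [0, y]."""
    x = (y - c * y**2) / (1.0 + d * y**2)
    return np.clip(x, 0.0, y)

def aggregate_gap(y):
    # The 1-dimensional equation of the SS-technique: sum_i xi_i(y) - y = 0.
    return xi(y).sum() - y

# Bisection: for small y each xi_i(y) is close to y, so with at least two
# firms the gap is positive; for large y all xi_i(y) vanish, so it is negative.
lo, hi = 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if aggregate_gap(mid) > 0.0:
        lo = mid
    else:
        hi = mid
y_star = 0.5 * (lo + hi)
x_star = xi(y_star)
print("equilibrium aggregate:", y_star)
print("equilibrium outputs:", x_star)
```

Note how the n-dimensional equilibrium problem is reduced to one bisection in the aggregate y, exactly the dimension reduction exploited throughout the article.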

Conclusions
Finite-dimensional variational inequalities over product sets with an aggregative structure are dealt with. New results concerning existence and especially concerning semi-uniqueness, uniqueness and computation of solutions are obtained for the case of R^n_+. This is achieved by generalising the Selten–Szidarovszky technique and by exploiting the At Most Single Crossing From Above property. This technique transforms the original n-dimensional problem into a 1-dimensional fixed point problem. We allow for a possible discontinuity at the origin, as this is important for various applications. An application to Nash equilibria of sum-aggregative games that does not need explicit pseudo-concavity assumptions for the conditional payoff functions follows in a natural way. The mathematics used is relatively elementary (although technical) compared to standard approaches. We corrected various errors in the literature that arose from applying the standard approach to Cournot oligopolies, and illustrated the power of our results with such games. In order to make the article more appealing to a broader audience, a nearly self-contained presentation of the very basic theory of variational inequalities is also included in "Appendix A".

Appendix A

'(b) ⇒ (a)': suppose x* ≥ 0, F(x*) · x* = 0 and F(x*) ≥ 0. Then for every x ≥ 0 we obtain, as desired, F(x*) · (x − x*) = F(x*) · x − F(x*) · x* = F(x*) · x ≥ 0.
'(b) ⇒ (c)': suppose x* ≥ 0, F(x*) · x* = 0 and F(x*) ≥ 0. Concerning (c) we show that x*_i F_i(x*) = 0 (i ∈ N). Well, suppose x*_j F_j(x*) ≠ 0 for some j. Then

For x ∈ X the following two statements are equivalent.

(a).
x is a solution of VI(X , F).
(b). For every i ∈ N exactly one of the following holds:

For S ⊆ X, F is said to be strictly monotone on S if for all x, x′ ∈ S with x ≠ x′, (x − x′) · (F(x) − F(x′)) > 0.
And F is said to be a P-function on S if for all x, x′ ∈ S with x ≠ x′ there exists an index k such that (x_k − x′_k)(F_k(x) − F_k(x′)) > 0. Of course, if F is strictly monotone on S, then it is a P-function on S.
Lemma A.4 Let X = R^n_+. Suppose S ⊆ X.

1. If F is a P-function on S, then VI(X, F) has at most one solution in S.
2. If F is strictly monotone on S, then VI(X, F) has at most one solution in S.

Proof 1. Suppose F is a P-function on S and x, x′ ∈ S are solutions. By Lemma A.2(a,c), for every i we have x_i F_i(x) = 0, F_i(x) ≥ 0, x′_i F_i(x′) = 0 and F_i(x′) ≥ 0, and therefore (x_i − x′_i)(F_i(x) − F_i(x′)) = −x_i F_i(x′) − x′_i F_i(x) ≤ 0. Since F is a P-function on S, this implies x = x′. 2. By part 1.
In the following two lemmas we deal with the situation where S is a proper rectangle in R^n_+, i.e. where S = S_1 × · · · × S_n where each S_i is a proper real interval with S_i ⊆ R_+. 25

Lemma A.5 Let X = R^n_+. Suppose S is a proper rectangle in R^n_+ and every F_i : S → R is continuously differentiable. If for every x ∈ S the Jacobi matrix J(x) of F at x is positive quasi-definite, then F is strictly monotone on S.
Proof Suppose for every x ∈ S the matrix J(x) is positive quasi-definite. Fix x, x′ ∈ S with x ≠ x′. We have to prove that (x − x′) · (F(x) − F(x′)) > 0. Well, let y : [0, 1] → R^n be defined by y(λ) := λx + (1 − λ)x′ and let H := F ∘ y : [0, 1] → R^n. Note that H is continuously differentiable with H′(λ) = J(y(λ))(x − x′). We obtain (x − x′) · (F(x) − F(x′)) = (x − x′) · (H(1) − H(0)) = ∫_0^1 (x − x′) · J(y(λ))(x − x′) dλ > 0.

Lemma A.6 Let X = R^n_+. Suppose S is a proper rectangle in R^n_+, every F_i : S → R is continuously differentiable and for all x ∈ S the Jacobi matrix J(x) of F at x is a P-matrix. Then F is a P-function on S.
Proof This is a quite technical and deep result, due to Gale and Nikaido, which essentially can be found in [6]. Also see [5, Proposition 3.5.9].
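The hypothesis of Lemma A.6 can be checked numerically for small n via the Gale–Nikaido characterisation: a square matrix is a P-matrix if and only if all of its principal minors are positive. A minimal sketch, with hypothetical example matrices:

```python
import itertools
import numpy as np

def is_P_matrix(A, tol=1e-12):
    """Gale-Nikaido characterisation: A is a P-matrix iff every
    principal minor of A is positive."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in itertools.combinations(range(n), k):
            # Determinant of the principal submatrix on the chosen index set.
            if np.linalg.det(A[np.ix_(rows, rows)]) <= tol:
                return False
    return True

# A has principal minors 2, 2 and det 4, all positive;
# B fails because det(B) = 1 - 4 = -3.
A = np.array([[2.0, -1.0],
              [0.0, 2.0]])
B = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(is_P_matrix(A), is_P_matrix(B))
```

Note that A is not symmetric, which is precisely why the P-matrix test (rather than a positive-definiteness test of the symmetric part) is the natural check here; the enumeration of all 2^n − 1 principal minors is of course only practical for small n.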
Lemma A.7 Suppose F : X → R^n is continuous. Then VI•(X, F) is a closed subset of R^n.
Proof Let (x^m) be a sequence in VI•(X, F) which is convergent with limit x*. So we have for every m that F(x^m) · (x − x^m) ≥ 0 (x ∈ X). As F is continuous, we obtain, by taking limits, F(x*) · (x − x*) ≥ 0 (x ∈ X). Thus x* is a solution of VI(X, F), which completes the proof.
Lemma A.8 Suppose X is convex and closed. Denote by P_X : R^n → X the (now well-defined) metric projection of R^n on X, i.e. P_X(y) denotes the unique z ∈ X with ‖y − z‖ ≤ ‖y − x‖ (x ∈ X). Define H : X → X by H(x) := P_X(x − F(x)). Then x ∈ X is a solution of VI(X, F) if and only if x is a fixed point of H.

Lemma A.9 Suppose X is convex and compact and F : X → R^n is continuous. Then VI•(X, F) is a non-empty compact subset of R^n.
Proof As X is compact, X is bounded and therefore also VI•(X, F) is bounded. As VI•(X, F) is closed by Lemma A.7, this set is compact. So we still have to prove that a solution exists. By Lemma A.8, x is a solution of VI(X, F) if and only if x is a fixed point of H. As F and P_X are continuous, also H is continuous. Brouwer's fixed point theorem guarantees the existence of a fixed point of H.

Lemma A.10 Suppose X is convex and F : X → R^n is continuous. For r > 0, let X_r = {x ∈ R^n | ‖x‖ ≤ r} ∩ X. Then for x ∈ X the following two statements are equivalent.

(a).
x is a solution of VI(X , F).
(b). There exists r > ‖x‖ such that x is a solution of VI(X_r, F).

Proof '(a) ⇒ (b)': suppose x* is a solution of VI(X, F). Take r > ‖x*‖ arbitrary. As x* ∈ X_r ⊆ X, x* also is a solution of VI(X_r, F).

'(b) ⇒ (a)': suppose r > ‖x*‖ is such that x* is a solution of VI(X_r, F). Let x ∈ X. For λ > 0 small enough we have, using that X is convex, y := x* + λ(x − x*) ∈ X_r. As x* is a solution of VI(X_r, F), we obtain λF(x*) · (x − x*) = F(x*) · (y − x*) ≥ 0 and therefore, as desired, F(x*) · (x − x*) ≥ 0.
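The fixed-point characterisation of Lemma A.8 also suggests a computational scheme: iterate x ← P_X(x − γ F(x)). For any step size γ > 0 the fixed points of this map coincide with the solutions of VI(X, F), and for a strongly monotone Lipschitz F a small γ makes the iteration a contraction. A minimal sketch on X = R^n_+, with an assumed affine F and a hand-picked γ:

```python
import numpy as np

# Assumed affine map F(x) = A x + b with positive definite A,
# so F is strictly (even strongly) monotone on X = R^2_+.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([-2.0, 1.0])

def F(x):
    return A @ x + b

def P_X(y):
    # The metric projection onto R^n_+ is the componentwise positive part.
    return np.maximum(y, 0.0)

# Damped variant of the map H of Lemma A.8: fixed points of
# x -> P_X(x - gamma * F(x)) are exactly the solutions of VI(R^n_+, F).
x = np.zeros(2)
gamma = 0.2
for _ in range(500):
    x = P_X(x - gamma * F(x))

print("solution:", x)          # close to (2/3, 0)
print("F at solution:", F(x))  # F_1 = 0 where x_1 > 0, F_2 > 0 where x_2 = 0
```

The printed values exhibit the complementarity of Lemma A.2: the positive component of the solution has a vanishing F-component, and the zero component has a nonnegative one.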
We call the general variational inequality VI(X, F) symmetric if for every π ∈ S_n we have P_π(X) = X and, for every x ∈ X and i ∈ N, F_i(x) = F_{π(i)}(P_π(x)).

Lemma A.11
Suppose VI(X, F) is symmetric. If x* is a solution of VI(X, F) and π ∈ S_n, then P_π(x*) is also a solution.
Proof As x* is a solution, we have F(x*) · (x − x*) ≥ 0 (x ∈ X). And from this, as desired, F(P_π(x*)) · (x − P_π(x*)) ≥ 0 (x ∈ X), i.e. P_π(x*) is a solution.
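Lemma A.11 can be illustrated with a small symmetric complementarity problem. In the sketch below a hypothetical map F with identical components is symmetric under every coordinate permutation, and the asymmetric solution (1, 0) permutes to the solution (0, 1); solutions are verified via the complementarity characterisation of solutions of VI(R^n_+, F).

```python
import numpy as np

# A hypothetical symmetric map on X = R^2_+: every component equals x_N - 1,
# so F_i(x) = F_pi(i)(P_pi(x)) holds for every permutation pi.
def F(x):
    s = x.sum() - 1.0
    return np.full_like(x, s)

def solves_ncp(x, tol=1e-12):
    """Complementarity characterisation of solutions of VI(R^n_+, F):
    x >= 0, F(x) >= 0 and F(x) . x = 0."""
    Fx = F(x)
    return bool((x >= -tol).all() and (Fx >= -tol).all() and abs(Fx @ x) < tol)

print(solves_ncp(np.array([1.0, 0.0])))  # a solution
print(solves_ncp(np.array([0.0, 1.0])))  # its permutation: also a solution
print(solves_ncp(np.array([1.0, 1.0])))  # aggregate 2, so not a solution
```

Here every nonnegative point with aggregate x_N = 1 solves the problem, so the solution set is a whole permutation orbit; this also shows that symmetric problems need not have unique (or even only symmetric) solutions.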