1 Introduction and related literature

Since the seminal papers of Schmeidler (1973) and Mas-Colell (1984) on equilibria in games with a continuum of players, as well as their various generalizations, including games with incomplete information in the tradition of Harsanyi (1967) and games with differential information in the tradition of Balder and Rustichini (1994) and Kim and Yannelis (1997), the framework of large games has become of central interest in both game theory and economics. The extensions of the Schmeidler/Mas-Colell frameworks to games with incomplete or differential information raise a number of technical and conceptual issues.Footnote 1 For example, in large games with incomplete information, exact laws of large numbers (ELLN) for a continuum of random variables are typically used (e.g., see Feldman and Gilles 1985; Judd 1985 for an early discussion of this technical issue, as well as Alós-Ferrer 1998).Footnote 2 Alternatively, Balder and Rustichini (1994) and Kim and Yannelis (1997) consider games with differential information, but even there the conditions for the existence of equilibria are quite distinct from those in the case of a finite number of players. In such games with differential information, only a single state of the game is drawn, but it is observed by every player with respect to a private sub \(\sigma \)-field that can differ across agents and, hence, characterizes the private information structure of the game. In such games, the mapping between realizations of this single state and the distribution of information is taken as a primitive of the game.

The particular choice of approach to large games with information frictions depends in part on the economic problem at hand. For example, the former class of games, involving private signals, has proven useful for studying economic problems where agents face random taste or productivity shocks that are payoff relevant. Differential information games have proven appropriate when studying economic problems such as common value auctions, tournaments, riot games, or beauty contests, where, in essence, there is a single true state of the world, but that state is idiosyncratically perceived by different players.

A second (and arguably equally) important strand of the literature in game theory that has found numerous applications in economics over the last two decades concerns games with strategic complementarities (henceforth GSC). In a GSC, the question of the existence and characterization of pure strategy Nash equilibrium does not hinge on conditions relating to convexity and upper hemi-continuity of best reply maps, but rather on an appropriate notion of increasing best responses in a well-defined set-theoretic sense, where actions take place in a complete lattice of strategies. In such a situation, the powerful fixed point theorem of Tarski (1955) and its generalizations (e.g., Veinott 1992) can be brought to bear on the existence question. Moreover, in parameterized versions of these games, one can seek natural sufficient conditions for the existence of monotone equilibrium comparative statics.Footnote 3 One additional interesting question that concerns GSC is what the sufficient conditions are for computable equilibrium comparative statics (i.e., when qualitative and computable comparisons of equilibriaFootnote 4 are possible in a GSC). Let us stress that an important limitation of the existing literature on GSC is that the research has been focused on games with a finite number of players [e.g., see the works of Topkis (1979), Vives (1990), and Milgrom and Roberts (1990)].

In this paper, we provide a unified set of results concerning the existence, comparison, and computation of Bayesian Nash equilibria in a broad class of large games with differential information in the spirit of Balder and Rustichini (1994) and Kim and Yannelis (1997).Footnote 5 As we focus on the subclass of large games with differential information that also possess strategic complementarities, we extend the existing literature on GSC with a finite number of players (e.g., Athey 2001, 2002; Reny 2011; Van Zandt and Vives 2007) to a setting with a continuum of players. In addition, unlike much of the existing literature (including most of the work we have just mentioned), we are also able to obtain many of our results in spaces of strategies that are not monotone with respect to the signal (rather, best responses are only pointwise increasing with respect to the strategies of other players, as in Vives 1990 and Van Zandt 2010).Footnote 6 In the end, this paper is a direct extension of the approach taken in Balbus et al. (2013), where the authors study large GSC with complete information; extending the results of that paper, however, requires many new constructions.Footnote 7

We start by studying distributional equilibrium.Footnote 8 For this notion, we propose an appropriate definition of Bayesian Nash equilibrium and verify the existence of such equilibria in our class of games. What is important about our approach to the existence question is the fact that, in general, we cannot use standard arguments found in the literature on GSC. Similar issues arise for equilibrium comparative statics: as the equilibria in our games do not live in complete lattices, new tools are needed.

To deal with these technical issues when proving the existence of distributional Bayesian Nash equilibrium, we develop a new application of the powerful fixed point machinery for chain complete partially ordered sets found in the seminal work of Markowsky (1976). An important aspect of taking this new approach is that we are able to obtain our existence results under different assumptions than those found in the extensive current literature, where authors typically pursue sufficient conditions related to those studied in Mas-Colell (1984), adapted to large games with differential information, in order to apply an appropriate topological fixed point theorem. Next, after proving existence, we turn to the question of equilibrium comparative statics in the parameters of the class of games. In these results, we not only prove the existence of monotone equilibrium comparative statics on the space of games, but also give sufficient conditions for these equilibrium comparisons to be computable. We are unaware of any results in the existing literature on large differential information games where equilibrium comparisons are computable.

We then turn to equilibrium in the sense of Schmeidler (1973) and, in particular, the question of existence and characterization of Bayesian Nash–Schmeidler equilibrium in our class of large games. Here, what is very interesting is that, in general, the existence constructions for distributional equilibria based upon Markowsky's (1976) theorem no longer apply; rather, to obtain even existence, in addition to having the best reply maps induce monotone fixed point operators, we must also check additional continuity properties of our operators in relevant order topologies. To obtain such results on the order continuity of fixed point operators built from the best reply maps, we must first develop applications of order-theoretic maximum theorems.Footnote 9 This allows us to develop a novel application of the Tarski–Kantorovich fixed point theorem to the question of existence and computation of equilibrium. In particular, to characterize the set of Bayesian Nash–Schmeidler equilibria, we actually prove a new theorem in the paper that verifies the existence of a countably chain complete partially ordered set of Bayesian Nash–Schmeidler equilibria in our large games. Using this construction, we are also able to develop explicit methods for the computation of Nash–Schmeidler equilibria. It is worth mentioning that none of these characterizations of either distributional equilibria or Bayesian Nash–Schmeidler equilibria can be obtained, in general, using the existing topological approaches found in the literature. As before, we are also able to prove theorems on computable monotone comparative statics relative to ordered perturbations of the deep parameters of the space of primitives of a game.

Under either definition of equilibrium in our large games, although the assumptions imposed for GSC are restrictive, they do allow us to obtain new results for large games with differential information not found in the existing literature. The remainder of the paper is organized as follows. In Sect. 2, we introduce some important mathematical definitions we need in the remainder of the paper. In Sect. 3, we prove the existence of distributional Bayesian Nash equilibrium, characterize the equilibrium set, and provide results on equilibrium comparative statics. In Sect. 4, we then prove similar results for Bayesian Nash–Schmeidler equilibrium. Finally, we provide some economic applications of our results in Sect. 5. To keep the paper self-contained, auxiliary results in order-theoretic fixed point theory, as well as proofs that are not included in the main body of the paper, are placed in the “Appendix”.

2 Useful mathematical terminology

We first define a number of important mathematical terms that will be used in the sequel.Footnote 10 A partially ordered set (or poset) is a set \(S\) endowed with an order relation \(\ge \) that is reflexive, transitive, and antisymmetric. If any two elements of \(C\subseteq S\) are comparable, then \(C\) is referred to as a chain. If the chain \(C\) is countable, we refer to \(C\) as a countable chain. If for every chain \(C\subseteq S\), we have \(\inf C=\bigwedge C\in S\) and \(\sup C=\bigvee C\in S\), then \(S\) is referred to as a chain complete poset (or, for short, CPO). If this condition holds only for every countable chain \(C\subseteq S\), then \(S\) is referred to as a countably chain complete poset (or, CCPO). By \([a)=\{x | x\in S,x\ge a\}\), we denote the upperset (or the “up-set”) of \(a\), and by \((b]=\{x | x\in S,x\le b\}\) the lowerset (or the “down-set”) of \(b\).

In many situations, we need to work in posets with additional structure (and, in particular, lattices). A lattice is a poset \(X\) such that any two elements \(x\) and \(x^{\prime }\) in \(X\) have a supremum in \(X\) (i.e., the “join,” denoted \(x\vee x^{\prime }\)) and an infimum in \(X\) (i.e., the “meet,” denoted \(x\wedge x^{\prime }\)), where the infimum and supremum are computed relative to the partial order \(\ge \). We say \(X_{1}\subset X\) is a sublattice of \(X\) if the meet and join of any pair of elements of \(X_{1}\), computed with respect to \(X\), are elements of \(X_{1}\). A lattice is complete if for any subsetFootnote 11 \(X_{1}\subseteq X\), both \(\bigvee X_{1}\in X\) and \( \bigwedge X_{1}\in X\). A subset \(X_{1}\subseteq X\) is a subcomplete lattice if it is complete and also a sublattice relative to the partial order of \(X\).

Increasing mappings play a key role in our work. We consider both increasing functions and correspondences. Let \((X,\ge _{X})\) and \((Y,\ge _{Y})\) be posets, and first consider a function \(f:X\rightarrow Y\). We say \(f\) is increasing (or, equivalently, isotone or order preserving) on \(X\) if \(f(x^{\prime })\ge _{Y}f(x),\) when \(x^{\prime }\ge _{X}x\). If \(f(x^{\prime })>_{Y}f(x)\) when \(x^{\prime }>_{X}x\), we say \( f\) is strictly increasing.Footnote 12 An increasing function \(f:X\rightarrow Y\) is sup-preserving (respectively, inf-preserving) if for any countable chain \(C\), we have \(f(\bigvee C)=\bigvee f(C)\) (respectively, \(f(\bigwedge C)=\bigwedge f(C)\)). If \(f\) is both sup-preserving and inf-preserving for any countable chain \(C,\,f\) will be referred to as \(\sigma \) -order continuous. Moreover, whenever \(f\) is sup-preserving and inf-preserving for any chain \(C,\,f\) will be referred to as an order continuous map.

We can also develop notions of monotonicity for correspondences. We say a correspondence (or multifunction) \(F:X\rightarrow Y^{*}\subseteq 2^{Y}\) is ascending in a binary set relation \(\rhd \) on \(2^{Y}\) if \( F(x^{\prime })\rhd F(x)\) when \(x^{\prime }\ge _{X}x\), where \(Y^{*}\) denotes the range of the correspondence and consists of a subclass of subsets of \(2^{Y}\) endowed with the order relation \(\rhd \) that depends on the nature of the monotonicity being defined. In Smithson (1971), Heikkilä and Reffett (2006), and Veinott (1992), various set relations \(\rhd \) for ascending correspondences have been proposed. For example, for \(Y^{*}=\,2^{Y}\backslash \varnothing \) and \(A,B\in 2^{Y}\backslash \varnothing \), we say \(B\vartriangleright _{\uparrow }\) \(A\) in the weak upward set relation (respectively, weak downward set relation, denoted by \(\vartriangleright _{\downarrow }\)) if for all \(x_{1}\in A\), there exists \(x_{2}\in B\) such that \(x_{1}\le x_{2}\) (respectively, if for all \(x_{2}\in B\), there exists \(x_{1}\in A\) such that \(x_{1}\le x_{2}\)). If for such \(A\) and \(B\) we have both \(B\vartriangleright _{\downarrow } A\) and \(B\vartriangleright _{\uparrow }A\), the sets are weak-induced set ordered. If in addition \(Y\) is a lattice, and we define \({L}(Y)=\{A\subseteq {Y} | A \text { is a non-empty sublattice } \}\subset 2^{Y}\), then for \(A,B\in {L}(Y)\), we say \(B\ge _{v}A\) in Veinott’s strong set order if for all \(x_{2}\in A,\,x_{1}\in B,\) we have \( x_{1}\vee x_{2}\in B\) and \(x_{1}\wedge x_{2}\in A.\) Footnote 13
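To fix ideas, the following minimal Python sketch (our own illustration; the two sets, the lattice \(\mathbb {Z}^{2}\), and all function names are hypothetical choices, not part of the paper's framework) brute-forces the weak upward and weak downward set relations and Veinott's strong set order for finite subsets of \(\mathbb {Z}^{2}\) under the componentwise order.

```python
def leq(x, y):
    """Componentwise order on Z^2: x <= y iff every coordinate is <=."""
    return all(xi <= yi for xi, yi in zip(x, y))

def join(x, y):
    return tuple(max(xi, yi) for xi, yi in zip(x, y))

def meet(x, y):
    return tuple(min(xi, yi) for xi, yi in zip(x, y))

def weak_up(B, A):
    """Weak upward relation: every x1 in A is dominated by some x2 in B."""
    return all(any(leq(x1, x2) for x2 in B) for x1 in A)

def weak_down(B, A):
    """Weak downward relation: every x2 in B dominates some x1 in A."""
    return all(any(leq(x1, x2) for x1 in A) for x2 in B)

def veinott_strong(B, A):
    """Veinott's strong set order B >=_v A: joins land in B, meets land in A."""
    return all(join(x1, x2) in B and meet(x1, x2) in A
               for x2 in A for x1 in B)

A = {(0, 0), (0, 1), (1, 0), (1, 1)}
B = {(1, 1), (1, 2), (2, 1), (2, 2)}
print(weak_up(B, A), weak_down(B, A), veinott_strong(B, A))  # True True True
```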

We need to define a number of different notions of complementarities that prove useful for obtaining sufficient conditions for monotone best replies in the class of games we study. As many of these concepts have been only recently introduced into the literature (e.g., in Quah and Strulovici 2012), at this stage we introduce only the relevant definitions and defer until later the explanation of how the particular forms of complementarities are used in our arguments.

Assume \((X,\ge _{X})\) is a lattice. A function \(g:X\rightarrow \mathbb {R}\) is quasi-supermodular on \(X\) if for any two \(x^{\prime },\,x\in X\), we have

$$\begin{aligned}&g(x)\ge g(x^{\prime }\wedge x) \Rightarrow g(x\vee x^{\prime })\ge \ g(x^{\prime }), \text{ and } \\&\quad g(x)> g(x^{\prime }\wedge x) \Rightarrow g(x\vee x^{\prime })>\ g(x^{\prime }). \end{aligned}$$

Further, two quasi-supermodular functions \(g,\,h:X\rightarrow \mathbb {R}\) obey signed-ratio quasi-supermodularity if for any two unordered \( x^{\prime },x\in X,\) we have:

  1. (i)

    if \(h(x^{\prime })> h(x\wedge x^{\prime })\) and \(g(x^{\prime }) < g(x\wedge x^{\prime })\), then

    $$\begin{aligned} -\frac{g(x^{\prime })-g(x\wedge x^{\prime })}{h(x^{\prime })-h(x\wedge x^{\prime })}\ge -\frac{g(x\vee x^{\prime })-g(x)}{h(x\vee x^{\prime })-h(x) }; \end{aligned}$$
  2. (ii)

    if \(g(x^{\prime })> g(x\wedge x^{\prime })\) and \(h(x^{\prime })< h(x\wedge x^{\prime })\), then

    $$\begin{aligned} -\frac{h(x)-h(x\wedge x^{\prime })}{g(x^{\prime })-g(x\wedge x^{\prime })} \ge -\frac{h(x\vee x^{\prime })-h(x)}{g(x\vee x^{\prime })-g(x)}. \end{aligned}$$

We say a family of functions \(\{f(\cdot ,s)\}_{s\in S}\) satisfies signed-ratio quasi-supermodularity if \(f:X\times S\rightarrow \mathbb {R}\) is quasi-supermodular on \(X\) for all \(s\in S\), and for any \(s\), \(s^{\prime }\in S\), the functions \(f(\cdot ,s)\) and \(f(\cdot ,s^{\prime })\) obey signed-ratio quasi-supermodularity.Footnote 14
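As a quick numerical sanity check of the quasi-supermodularity inequalities above, the sketch below (a toy example of ours; the function \(g(x_{1},x_{2})=x_{1}x_{2}\) and the grid are hypothetical choices) brute-forces both implications over a finite sublattice of \(\mathbb {Z}^{2}\); since \(g\) is supermodular, it is in particular quasi-supermodular.

```python
from itertools import product

def join(x, y):
    return tuple(max(a, b) for a, b in zip(x, y))

def meet(x, y):
    return tuple(min(a, b) for a, b in zip(x, y))

def is_quasi_supermodular(g, X):
    """Brute-force the two implications that define quasi-supermodularity on X."""
    for x, xp in product(X, repeat=2):
        if g(x) >= g(meet(x, xp)) and not (g(join(x, xp)) >= g(xp)):
            return False
        if g(x) > g(meet(x, xp)) and not (g(join(x, xp)) > g(xp)):
            return False
    return True

X = list(product(range(4), repeat=2))   # the finite lattice {0,...,3}^2
g = lambda x: x[0] * x[1]               # supermodular, hence quasi-supermodular
print(is_quasi_supermodular(g, X))      # True
```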

Next, assume \((S,\ge _{S})\) is a poset. We say a function \(g:S\rightarrow \mathbb {R}\) is a single-crossing function if \(g(s)\ge 0\,\Rightarrow \,g(s^{\prime })\ge 0\) and \(g(s)>0\) \(\Rightarrow \,g(s^{\prime })>0\) for any \(s^{\prime }\ge _{S}s\). We say two single-crossing functions \(g,\, h:S\rightarrow \mathbb {R}\) satisfy signed-ratio monotonicity if for any two \(s^{\prime }\ge _{S}s\), we have:

  1. (i)

    if \(g(s)<0\) and \(h(s)>0\), then

    $$\begin{aligned} -\frac{g(s)}{h(s)}\ge -\frac{g(s^{\prime })}{h(s^{\prime })}; \end{aligned}$$
  2. (ii)

    if \(h(s)<0\) and \(g(s)>0\), then

    $$\begin{aligned} -\frac{h(s)}{g(s)}\ge -\frac{h(s^{\prime })}{g(s^{\prime })}. \end{aligned}$$

Finally, we say a family of functions \(\{f(\cdot ,s)\}_{s\in S}\), where \( f:X\times S\rightarrow \mathbb {R}\), satisfies signed-ratio monotonicity if \(f(\cdot ,s)\) is a single-crossing function for all \(s\in S\), and for any two \(s,\,s^{\prime }\in S\), the functions \(f(\cdot ,s)\) and \(f(\cdot ,s^{\prime })\) satisfy signed-ratio monotonicity. Furthermore, a function \(f:X\times S\rightarrow \mathbb {R}\) has single-crossing differences in \((x,s)\) if \(\varDelta (s)\,{:=}\,f(x^{\prime },s)-f(x,s)\) is a single-crossing function for any \(x^{\prime }\ge _{X}x\).
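For illustration only, the following sketch (our own toy example, with hypothetical grids) verifies numerically that \(f(x,s)=sx-x^{2}\) has single-crossing differences in \((x,s)\): every difference \(\varDelta (s)=f(x^{\prime },s)-f(x,s)\) with \(x^{\prime }\ge x\) is increasing in \(s\) and hence single-crossing, which the brute-force check confirms.

```python
import numpy as np

def is_single_crossing(vals):
    """vals are values along an increasing grid: once >= 0 (resp. > 0), stay so."""
    seen_nonneg = seen_pos = False
    for v in vals:
        if (seen_nonneg and v < 0) or (seen_pos and v <= 0):
            return False
        seen_nonneg = seen_nonneg or v >= 0
        seen_pos = seen_pos or v > 0
    return True

f = lambda x, s: s * x - x ** 2
xs = np.linspace(0.0, 2.0, 21)
ss = np.linspace(0.0, 3.0, 31)          # increasing grid for the parameter s

ok = all(is_single_crossing([f(xp, s) - f(x, s) for s in ss])
         for x in xs for xp in xs if xp >= x)
print(ok)                               # True: single-crossing differences in (x, s)
```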

With this investment in terminology, we can now proceed to describe our large games with differential information and consider the question of existence and characterization of distributional Bayesian Nash equilibria.

3 Distributional Bayesian Nash equilibria

In the paper, we study large games with differential information as in Kim and Yannelis (1997), but with strategic complementarities. For our games, we begin by considering the question of existence and characterization of distributional Bayesian Nash equilibria and then turn to Bayesian Nash equilibria in the sense of Schmeidler (1973).

3.1 Game description

Let \(\varLambda \) be a compact and metrizable space of players. Endow \(\varLambda \) with a non-atomic probability measure \(\lambda \) defined on the Borel \( \sigma \)-field \(\mathcal {L}\). Actions of the players are assumed to be contained in \(A\subset \mathbb {R}^{n}\), endowed with the Euclidean topology generating the Borel \(\sigma \)-field \(\mathcal {A}\) on \(A\). We impose the natural coordinate-wise partial order \(\ge \) on \(A\). Then let \(\varLambda \times A\) be the product space endowed with an order \(\ge _{p}\) satisfying the following condition:Footnote 15

$$\begin{aligned} (\alpha ,a)\ge _{p}(\alpha ^{\prime },a^{\prime }) \Rightarrow a\ge a^{\prime }. \end{aligned}$$

Let \(D\) denote the set of probability measures on \(\varLambda \times A\) defined on the product \(\sigma \)-field \(\mathcal {L}\otimes \mathcal {A}\) such that for any \(\nu \in D\), the marginal distribution of \(\nu \) on \(\varLambda \) is \( \lambda \). Endow \(D\) with the weak*-topology and the corresponding Borel \(\sigma \)-field \(\mathcal {D}\). Finally, we order \(D\) with respect to the first-order stochastic dominance (henceforth FOSD), which we denote by \( \succeq _{D}\).Footnote 16
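Because the FOSD order \(\succeq _{D}\) drives all of the order-theoretic constructions below, the following minimal sketch (a hypothetical finite discretization of ours, with one admissible choice of \(\ge _{p}\) that only ranks pairs sharing the same player index) checks dominance by comparing the mass that two measures assign to every upper set; on a finite poset, this is equivalent to \(\int f\,\mathrm{d}\nu ^{\prime }\ge \int f\,\mathrm{d}\nu \) for every increasing \(f\).

```python
from itertools import chain, combinations

# Hypothetical finite grid: two "players" x three actions, ordered within a player
points = [(i, a) for i in range(2) for a in range(3)]
geq_p = lambda x, y: x[0] == y[0] and x[1] >= y[1]   # one admissible choice of >=_p

def upper_sets(pts):
    """All upper sets U: x in U and y >=_p x imply y in U (brute force)."""
    subsets = chain.from_iterable(combinations(pts, k) for k in range(len(pts) + 1))
    for U in map(set, subsets):
        if all((y in U) for x in U for y in pts if geq_p(y, x)):
            yield U

# Two measures with the same (uniform) marginal over players
nu  = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
nup = {(0, 1): .25, (0, 2): .25, (1, 1): .25, (1, 2): .25}   # mass shifted to higher actions
mass = lambda m, U: sum(m.get(x, 0.0) for x in U)

dominates = all(mass(nup, U) >= mass(nu, U) - 1e-12 for U in upper_sets(points))
print(dominates)    # True: nu' first-order stochastically dominates nu
```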

We now turn to describing the information structure of the game. Let the measure space of states/public signals be a completion of the Borel probability space \((S, \mathcal {S},\mu )\), where \(S\) is a complete separable metric space and \(\mathcal {S}\) is its Borel \(\sigma \)-field. We identify \(\mu \) with its completion on \(\mathcal {S}\). By \(\mathcal {S}_{\alpha },\,\alpha \in \varLambda \), we denote a sub \(\sigma \)-field of \(\mathcal {S}\) characterizing the private information of agent \( \alpha \in \varLambda \), and by the mapping \(\pi _{\alpha }:S\rightarrow \mathbb {R}_{+}\) we denote the density (with respect to \(\mu \)) of the prior distribution of agent \(\alpha \in \varLambda \), where \(\pi _{\alpha }\) is such that \(\int _{S}\pi _{\alpha }(s)\mathrm{d}\mu (s)=1\).

Let \(\tilde{A}:\varLambda \times S\rightrightarrows A\) be the feasible action correspondence, where \(\tilde{A}(\alpha ,s)\) is the set of feasible actions of player \(\alpha \) in state \(s\in S\). By \(r:\varLambda \times S\times D \times A\rightarrow \mathbb {R}\), we denote the real-valued ex-post payoff function,Footnote 17 where \(r(\alpha ,s,\phi ,a)\) is the payoff of player \(\alpha \) using action \(a\in A\) in state \(s\in S\), when the distribution of actions of the other players is \(\phi \).

Since agents choose their actions contingent on their observable signal, the distribution of actions will differ depending on the realized state of the world. Let \(\tau :S\rightarrow D\) be a function mapping the space \(S\) to the set of probability distributions on \(\varLambda \times A\). In order to avoid confusion, we shall denote the value of the function \(\tau \) in state \(s\) by \(\tau (\cdot |s)\).Footnote 18 In some cases, we must consider the partially ordered set of equivalence classes of \( \tau \), which we shall denote by

$$\begin{aligned}{}[\tau ]\,{:=}\,\left\{ f:S\rightarrow D | f(\cdot |s)=\tau (\cdot |s), \mu \text {-a.e.}\right\} . \end{aligned}$$

The set of all equivalence classes containing only measurable functions will be denoted by \(\hat{T}\). By completeness of measure \(\mu \), measurability of \(\tau \) implies measurability of \(\tau ^{\prime }\in [\tau ]\). We endow \(\hat{T}\) with the pointwise order \(\succeq _{\hat{T}}\) with respect to the equivalence classes, that is \(\tau ^{\prime }\succeq _{\hat{T}}\tau \) iff \(\tau ^{\prime }(\cdot |s)\succeq _{D}\tau (\cdot |s),\,\mu \)-a.e. In addition, let

$$\begin{aligned} \hat{T}_{\varLambda }\,{:=}\,\left\{ \tau \in \hat{T} \Big | \tau \left( \left\{ (\alpha ,a)\in \varLambda \times A \big | a\in \tilde{A}(\alpha ,s)\right\} \big |s\right) =1, \mu \text {-a.e.}\right\} . \end{aligned}$$

Finally, denoting by \( CM \) the set of real, continuous, and monotone functions defined on \(\varLambda \times A\), we can define

$$\begin{aligned} \hat{T}_{d}\,{:=}\,\left\{ \tau \in \hat{T}_{\varLambda } \Bigg | \forall f\in CM , s\rightarrow \int \limits _{\varLambda \times A}f(\alpha ,a)\tau (\mathrm{d}\alpha \times da|s)\in M(S)\right\} , \end{aligned}$$

where \(M(S)\) denotes the space of \(\mathcal {S}\)-measurable functions mapping \(S\) to \(\mathbb {R}\). Hence, \(\hat{T}_{d}\) is a set of equivalence classes of functions mapping \(S\) to \(D\) with values (i.e., probability distributions) concentrated on the graphFootnote 19 of \(\tilde{A}(\cdot ,s)\) for \( \mu \)-a.e. \(s\in S\). In addition, we require that for any continuous, monotone function \(f\), function \(g(\cdot )\,{:=}\,\int _{\varLambda \times A}f(\alpha ,a)\tau (\mathrm{d}\alpha \times da|\cdot )\) is \(\mathcal {S}\)-measurable.
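As a concrete, fully discretized illustration of an element of \(\hat{T}_{d}\) (all primitives below are hypothetical finite stand-ins for \(\varLambda \), \(A\), \(S\), \(\lambda \), and \(\tilde{A}\)), the following sketch verifies the two defining requirements on a finite grid: the marginal of \(\tau (\cdot |s)\) on \(\varLambda \) equals \(\lambda \), and all mass sits on the graph of \(\tilde{A}(\cdot ,s)\); the measurability requirement is trivial in the finite case.

```python
# Finite stand-ins: 3 players with uniform lambda, actions {0,1,2}, states {0,1}
players, actions, states = range(3), range(3), range(2)
lam = {i: 1.0 / 3 for i in players}

# Hypothetical feasible correspondence: in state s, player i may use actions <= 1 + s
A_tilde = lambda i, s: [a for a in actions if a <= 1 + s]

# A candidate distributional strategy: in state s, player i plays the top feasible action
tau = {s: {(i, max(A_tilde(i, s))): lam[i] for i in players} for s in states}

for s in states:
    marg = {i: sum(p for (j, _), p in tau[s].items() if j == i) for i in players}
    assert all(abs(marg[i] - lam[i]) < 1e-12 for i in players)    # marginal equals lambda
    assert all(a in A_tilde(i, s) for (i, a) in tau[s])           # supported on the graph
print("tau is a valid (discretized) element of T_d")
```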

3.2 Decision problems and equilibrium definition

We are now ready to characterize the decision problem faced by each agent in the game. If, for each player, the information structure \(\mathcal {S}_{\alpha }\) is generated by a countable partition such that every partition element \(U\in \mathcal {S}_{\alpha }\) has \(\mu (U)>0\), then the sequence of the game can be defined as follows. First, each player observes the state of the world \(s\in S\) with respect to her private information \(\mathcal {S}_{\alpha }\). Next, players calculate their interim payoffs, which are defined by the mapping \( v:\varLambda \times S\times \hat{T}\times A\rightarrow \mathbb {R}\), where

$$\begin{aligned} v(\alpha ,s,\tau ,a)\,{:=}\,\int \limits _{\varepsilon _{\alpha }(s)}r(\alpha ,s^{\prime },\tau (\cdot |s^{\prime }),a)\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\mu (\mathrm{d}s^{\prime }), \end{aligned}$$

with

$$\begin{aligned} \pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\,{:=}\,\left\{ \begin{array}{lcl} \frac{\pi _{\alpha }(s^{\prime })}{\int _{\varepsilon _{\alpha }(s)}\pi _{\alpha }(s^{\prime \prime })\mu (\mathrm{d}s^{\prime \prime })} &{}\quad \mathrm{if} &{} s^{\prime }\in \varepsilon _{\alpha }(s), \\ 0 &{}\quad \mathrm{if} &{} s^{\prime }\notin \varepsilon _{\alpha }(s); \end{array} \right. \end{aligned}$$

where \(\varepsilon _{\alpha }(s)\) is the smallest (under set inclusion) set in \(\mathcal {S}_{\alpha }\) that contains \(s\). Our later assumptions guarantee that \(v\) is well defined. Once the players’ strategies are chosen, payoffs are distributed. We summarize this game by the following tuple:

$$\begin{aligned} \varGamma \,{:=}\,\{(\varLambda ,\mathcal {L},\lambda ),(S,\mathcal {S},\mu ),A,\tilde{A} ,r,\{\pi _{\alpha },\mathcal {S}_{\alpha }\}_{\alpha \in \varLambda }\}, \end{aligned}$$

and define the notion of distributional Bayesian Nash equilibrium in this game as follows.

Definition 1

A distributional Bayesian Nash equilibrium of \(\varGamma \) is an equivalence class \(\tau ^{*}\in \hat{T}_{d}\) such that \(\mu \)-a.e.

$$\begin{aligned} \tau ^{*}\left( \left\{ (\alpha ,a) \in \varLambda \times A\ \big | v(\alpha ,s,\tau ^{*},a)\ge v(\alpha ,s,\tau ^{*},a^{\prime }),\forall a^{\prime }\in \tilde{A}(\alpha ,s)\right\} \big | s\right) =1. \end{aligned}$$

Notice that our definition of distributional Bayesian Nash equilibrium generalizes the concept of distributional equilibrium proposed in Mas-Colell (1984) to the case of large differential information games. In particular, we consider a distributional Bayesian Nash equilibrium to be an equivalence class of functions \(\tau \in \hat{T}_{d}\), as opposed to a single function. Therefore, given a function, the above definition need only hold for \(\mu \)-a.e. \(s\in S\). Eventually, we shall define equilibrium in the rather specific class of functions \(\hat{T}_{d}\) (rather than the class \(\hat{T}_{\varLambda }\)). Clearly, there might exist functions in \(\hat{T}_{\varLambda }\backslash \hat{T}_{d}\) satisfying our definition of Bayesian Nash equilibrium. However, as our existence result holds in \(\hat{T} _{d}\), we restrict our definition solely to this space.
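To illustrate the interim payoff construction used above, the following sketch (a hypothetical three-state example of ours, with the opponents' distribution suppressed from \(r\) for brevity) computes the conditional weights \(\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\) for a player whose partition pools two of three equally likely states, and then forms \(v\) as the corresponding conditional expectation of the ex-post payoff.

```python
import numpy as np

S = [0, 1, 2]
mu = np.array([1/3, 1/3, 1/3])            # state probabilities
partition = [{0, 1}, {2}]                 # private partition: cannot tell 0 from 1
pi = np.array([1.0, 1.0, 1.0])            # flat density pi_alpha (integrates to 1 w.r.t. mu)

def eps(s):
    """Smallest partition cell containing s."""
    return next(cell for cell in partition if s in cell)

def cond_weight(sp, s):
    """pi_alpha(s' | eps_alpha(s)) as in the displayed formula."""
    cell = eps(s)
    if sp not in cell:
        return 0.0
    denom = sum(pi[t] * mu[t] for t in cell)
    return pi[sp] / denom

# Hypothetical ex-post payoff of playing a in state s (opponents' play suppressed)
r = lambda s, a: (s - 1.0) * a - 0.5 * a ** 2

def v(s, a):
    """Interim payoff: conditional expectation of r given the observed cell eps(s)."""
    return sum(r(sp, a) * cond_weight(sp, s) * mu[sp] for sp in S)

print(v(0, 1.0), v(2, 1.0))   # pooled cell {0,1}: -1.0 ; singleton cell {2}: 0.5
```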

3.3 Sufficient conditions

To prove the existence of distributional Bayesian Nash equilibrium, we impose the following assumptions on the primitives of the game.

Assumption 1

Whenever \(\tau \in \hat{T}_{d}\), let

  1. (i)

    \(\tilde{A}\) be complete sublattice-valued, with \(\tilde{A}(\cdot ,s)\) having a compact graph for all \(s\in S\). Furthermore, let \(\tilde{A}\) be weakly measurable, and the graph correspondence \(\tilde{Gr}(s)\,{:=}\,\{(\alpha ,a):a\in \tilde{A}(\alpha ,s)\}\) be weakly measurable.Footnote 20 Finally, assume that for \(\mu \)-a.e. \(s,\,\tilde{Gr}(s)\) is an increasing set, i.e., the indicator of this set is an increasing function;

  2. (ii)

    \(r\) be continuous and quasi-supermodular on \(A\), have single-crossing differences in \((a,\tau )\), and \(r(\alpha ,s,\tau (\cdot |s),a)\) be \( \mathcal {L}\otimes \mathcal {S}\)-measurable;

  3. (iii)

    for \(\lambda \)-a.e. player, and any \(\tau \in \hat{T}_{d}\), the family of functions \(\{r(\alpha ,s,\tau (\cdot |s),\cdot )\}_{s\in S}\) satisfies signed-ratio quasi-supermodularity on \(A\), while the functions \( \{\varDelta (\cdot ,s)\}_{s\in S}\), with \(\varDelta (\tau ,s)\,{:=}\,r(\alpha ,s,\tau (\cdot |s),a^{\prime })-r(\alpha ,s,\tau (\cdot |s),a),\) obey signed-ratio monotonicity in the pointwise order, for any \(a^{\prime },\,a\in A,\, a^{\prime }\ge a\);

  4. (iv)

    for all \(\alpha \in \varLambda ,\,\mathcal {S}_{\alpha }\) be generated by a countable partition such that, for all \(s\in S\), \(\pi _{\alpha }(s)\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable, the correspondence \((\alpha ,s)\rightarrow \varepsilon _{\alpha }(s)\) has an \(\mathcal {L} \otimes \mathcal {S}\otimes \mathcal {S}\)-measurable graph, and \(\mu (\varepsilon _{\alpha }(s))>0\).

We make a few remarks on this assumption. First, although Assumptions 1(i), (ii), and (iv) are rather standard, Assumption 1(iii) deserves some comment. In this assumption, we first require sufficient structure such that the quasi-supermodularity of the payoff \(r\), as well as its single-crossing differences, are preserved under aggregation with respect to the space of public signals. This is necessary for our arguments, as ordinal properties (in this case, ordinal complementarities) are generally not preserved under aggregation.Footnote 21 The conditions we impose in Assumption 1(iii) were first proposed in Quah and Strulovici (2012), where the authors referred to them as signed-ratio monotonicity.

Second, it bears mentioning that there is a delicate difference between the related definition of signed-ratio monotonicity in Quah and Strulovici (2012) and the functional version of signed-ratio monotonicity that we use extensively in this paper. In particular, when analyzing large games with differential information and formulating appropriate ordinal complementarity conditions, we are interested in the aggregation of ordinal difference properties for values at different points in their domain. This fact changes the nature of the signed-ratio monotonicity condition that is required to obtain ascending best replies, as compared to the related condition studied in Quah and Strulovici (2012).

We can provide a simple example of the difference between our version of the signed-ratio monotonicity assumption and that of Quah and Strulovici (2012). Consider an optimization problem faced by an agent who is uncertain about two possible states, say elements of the set \(\{H,L\}\), where state \(H\) occurs with probability \(p\) and state \(L\) occurs with probability \((1-p)\). Assume further that the agent maximizes his expected payoff taking into account the strategies of other players, which depend on the realized state. Therefore, for some function \(\tau :\{H,L\}\rightarrow D\), the agent maximizes

$$\begin{aligned} r(H,\tau (H),a)p+r(L,\tau (L),a)(1-p), \end{aligned}$$

where \(a\) denotes the decision variable. As \(\tau (H)\) might differ from \(\tau (L)\), the above function need not have single-crossing differences, even if \(r(s,\phi ,a)\) has single-crossing differences in \((a,\phi ),\,\phi \in D\), and the family of functions \(\{\Delta (\cdot ,s)\}_{s\in \{H,L\}}\) (where \(\Delta (\phi ,s)\,{:=}\,r(s,\phi ,a^{\prime })-r(s,\phi ,a)\)) obeys the signed-ratio monotonicity of Quah and Strulovici (2012).
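The following numerical sketch (a toy example of ours, in which a scalar \(\phi \ge 0\) stands in for a FOSD-ordered summary of the opponents' distribution) makes the point concrete: each state-wise difference \(\Delta (\cdot ,s)\) is single-crossing and the pair satisfies the signed-ratio monotonicity of Quah and Strulovici (2012), yet the aggregate \(p\Delta (\tau (H),H)+(1-p)\Delta (\tau (L),L)\) fails the single-crossing property in \(\tau \) under the pointwise order, precisely because \(\tau (H)\) and \(\tau (L)\) are evaluated at different points.

```python
import numpy as np

# State-wise differences Delta(phi, s) = r(s, phi, a') - r(s, phi, a), with a scalar
# phi >= 0 standing in for (a FOSD-ordered summary of) the opponents' distribution.
dH = lambda phi: 1.0 / (1.0 + phi)      # single-crossing in phi (always positive)
dL = lambda phi: phi - 1.0              # single-crossing in phi (increasing)

# Quah-Strulovici signed-ratio monotonicity for the pair {dH, dL}: since dH is never
# negative, only the case "dL < 0 < dH" binds, so -dL/dH must be decreasing there.
grid = np.linspace(0.0, 5.0, 501)
ratios = [-dL(p) / dH(p) for p in grid if dL(p) < 0.0 < dH(p)]
print(all(r1 >= r2 - 1e-12 for r1, r2 in zip(ratios, ratios[1:])))   # True

# Aggregate difference across states, evaluated at DIFFERENT points tau(H), tau(L).
p = 0.5
F = lambda tH, tL: p * dH(tH) + (1.0 - p) * dL(tL)

tau, taup = (0.0, 0.9), (99.0, 0.9)     # taup >= tau in the pointwise order
print(F(*tau), F(*taup))                # 0.45 > 0, yet -0.045 < 0: single crossing fails
```

Consistent with Lemma 1 below, this example violates both of its conditions: the state-wise differences do not share a fixed sign, and \(\Delta (\cdot ,H)\) is not increasing.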

Finally, the signed-ratio monotonicity needs to be satisfied for any \(\tau \) within the class of equilibrium candidates, i.e., \(\hat{T}_d\). In fact, we require that the family of functions \(\{\Delta (\cdot ,s)\}_{s\in \{H,L\}}\) satisfies signed-ratio monotonicity with respect to the space \((\hat{T}_{d},\succeq _{\hat{T}})\). For this reason, we must require a somewhat stronger version of signed-ratio monotonicity in our games, so that we can guarantee the existence of sufficient complementarities that are preserved under aggregation in the players’ optimization problems. The next result characterizes the strength of the above assumptions.

Lemma 1

The collection of single-crossing functions \(\{v(s,\cdot )\}_{s\in S}\), where \(v(s,f)\,{:=}\,u(s,f(s))\), obeys signed-ratio monotonicity if and only if one of the following conditions holds:

  1. (i)

    \(u\) has a fixed sign, i.e., \(u(s,x)\ge 0\) for all \((s,x)\in S\times X,\) or \(u(s,x)\le 0\) for all \((s,x)\in S\times X,\)

  2. (ii)

    \(u(s,\cdot )\) is increasing for all \(s\in S.\)

Proof

We prove the result by contradiction. Suppose that neither (i) nor (ii) holds. Then, there exist some \(s_0\) and \(x_1<x_2\) such that \(u(s_0,x_1)>u(s_0,x_2)\). Since \(u(s_0,\cdot )\) is a single-crossing function, there are two possible cases: (a) \(u(s_0,x_1)>0\) and \(u(s_0,x_2)>0\); or (b) \(u(s_0,x_1)<0\) and \(u(s_0,x_2)<0\).

If (a) is true, there exist some \(s_1 \in S\) and \(y\in X\) such that \(u(s_1,y)<0\). Define \(f:S\rightarrow X\) and \(g:S\rightarrow X\) such that \(f\le g,\,f(s_0)=x_1,\,g(s_0)=x_2\), and \(f(s_1)=g(s_1)=y\). Then,

$$\begin{aligned} -\frac{u(s_1,f(s_1))}{u(s_0,f(s_0))}=-\frac{u(s_1,y)}{u(s_0,x_1)}<-\frac{u(s_1,y)}{u(s_0,x_2)}=-\frac{u(s_1,g(s_1))}{u(s_0,g(s_0))}, \end{aligned}$$

which contradicts signed-ratio monotonicity. If (b) is true, we choose \(s_1\) and \(y\) such that \(u(s_1,y)>0\). Let the functions \(f\) and \(g\) be such that \(f\le g\), \(f(s_0)=x_1\), \(g(s_0)=x_2\), and \(f(s_1)=g(s_1)=y\). Then, we have

$$\begin{aligned} -\frac{u(s_0,f(s_0))}{u(s_1,f(s_1))}=-\frac{u(s_0,x_1)}{u(s_1,y)}<-\frac{u(s_0,x_2)}{u(s_1,y)}=-\frac{u(s_0,g(s_0))}{u(s_1,g(s_1))}. \end{aligned}$$

This contradicts the signed-ratio property of \(v\).\(\square \)

The above lemma implies that, for the payoff function to satisfy Assumption 1(iii) for any \(\tau \in \hat{T}_d\), the payoff \(r\) needs either to be monotone in \((s,\tau )\) or to have increasing differences in \((a,\tau )\).

3.4 Existence of distributional Bayesian Nash equilibria

We are now ready to present a series of lemmata, as well as one key proposition, that will allow us to prove the main equilibrium existence result of this section of the paper. We should mention that the main tool used in the proofs of this section, as regards the existence of distributional Bayesian Nash equilibria, is Markowsky’s fixed point theorem (see Theorem 4 in the “Appendix”). In our context, we will show that the theorem implies the existence of a fixed point of a \(\succeq _{\hat{T}}\)-increasing operator mapping the poset \(\hat{T}_{d}\) to itself.

Along these lines, we begin by showing that the poset \((\hat{T}_{d},\succeq _{\hat{T}})\) is chain complete, a property required to apply Markowsky’s (1976) theorem.

Proposition 1

\((\hat{T}_{d},\succeq _{\hat{T}})\) is a chain complete poset.

Proof

Take any chain \(T_0\subset \hat{T}_d\). First, we show that \(\bigvee T_0 \in \hat{T}_d\). The case \(\bigwedge T_0\in \hat{T}_d\) follows analogously. As usual, by \([f]\) we denote the equivalence class of functions equal to \(f\) for \(\mu \)-a.e. \(s\in S\). We induce an order \(\le _{\mu }\) between these classes as follows: \([f]\le _{\mu }[g]\) if \(f\le g\) for \(\mu \)-a.e. \(s\in S\). Let \(f\in { CM }\). By Birkhoff (1967, Theorem 3, p. 241), there exists the least (modulo null sets) measurable function \(\varphi _0(f):S\rightarrow \mathbb {R}\) such that:Footnote 22

$$\begin{aligned} \left[ \varphi _{0}(f)(\cdot )\right] =\bigvee \limits _{\tau \in T_0} \left[ \,\int _{\varLambda \times A}f(\alpha ,a)\tau (\mathrm{d}\alpha \times da|\cdot )\right] . \end{aligned}$$

Without loss of generality, suppose \(\varphi _0(f)(s)=f\) whenever \(f\) is a constant function. We show that \(L:C(\varLambda \times A)\rightarrow M(S)\), defined as \(L(f)=\left[ \varphi _0(f)\right] \), is an operator preserving linear combinations of elements of \({ CM }\) with nonnegative coefficients. Take any \(\tau \in T_0\) and \(\tau '\in T_0\) such that \(\tau '\succeq _{\hat{T}}\tau \). Let \(x\,{:=}\,(\alpha ,a)\) and \(X=\varLambda \times A\). Then, for arbitrary \(\beta _1\ge 0,\,\beta _2\ge 0,\,f\in { CM }\), and \(g\in { CM }\), we have

$$\begin{aligned} \left[ \varphi _0(\beta _1 f+\beta _2 g)(\cdot )\right] ~\ge _{\mu }~ \beta _1\left[ \,\int \limits _{X}f(x)\tau (\mathrm{d}x|\cdot )\right] +\beta _2\left[ \,\int \limits _{X}g(x)\tau '(\mathrm{d}x|\cdot )\right] . \end{aligned}$$

Taking the supremum over \(\tau '\in T_0\) and then over \(\tau \in T_0\), we have

$$\begin{aligned} \left[ \varphi _0(\beta _1 f+\beta _2 g)(\cdot )\right] ~\ge _{\mu }~ \beta _1\left[ \varphi _0(f)(\cdot )\right] +\beta _2 \left[ \varphi _0(g)(\cdot )\right] . \end{aligned}$$
(1)

We now show the reverse inequality. Taking an arbitrary \(\tau \in T_0\), we have

$$\begin{aligned} \left[ \,\int \limits _X\left( \beta _1 f(x)+\beta _2 g(x)\right) \tau (\mathrm{d}x|\cdot )\right] \le _{\mu } \beta _1 \left[ \varphi _0(f)(\cdot )\right] +\beta _2 \left[ \varphi _0(g)(\cdot )\right] . \end{aligned}$$

Taking a supremum over \(\tau \in T_0\), we have

$$\begin{aligned} \left[ \varphi _0(\beta _1 f+\beta _2 g)(\cdot )\right] \le _{\mu } \beta _1\left[ \varphi _0(f)(\cdot )\right] +\beta _2 \left[ \varphi _0(g)(\cdot )\right] . \end{aligned}$$
(2)

By the definition of \(L\), combining (1) and (2), we have

$$\begin{aligned} L(\beta _1 f+\beta _2 g)=\beta _1 L(f)+\beta _2 L(g). \end{aligned}$$
(3)

Observe that \(C(X)\) is a Polish space, since \(X\) is compact and metrizable. Define \({ LM }\,{:=}\,{ CM }-{ CM }\), that is, \({ LM }\,{:=}\,\{f-g: f\in { CM },g\in { CM }\}\). Clearly, it is a Riesz sublattice of \(C(X)\) that contains the unit constant function and separates the points of \(X\). Hence, by the Stone–Weierstrass Theorem (see Aliprantis and Border 2006, Theorem 9.12), \({ LM }\) is uniformly dense in \(C(X)\). As a result, there exists a countable subset of \({ LM }\) that is uniformly dense in \(C(X)\). Let \(F_0\subset { CM }\) be a countable subset such that \(F_0-F_0\) is uniformly dense in \(C(X)\). Assume \(F_0\) contains the unit constant. For all \(\phi \in F_0,\psi \in F_0\), \(\beta _1\in \mathbb {Q}_+\), \(\beta _2\in \mathbb {Q}_+\) define

$$\begin{aligned} O_{\phi ,\psi ,\beta _1,\beta _2}\,{:=}\,\{s\in S:\varphi _0(\beta _1 \phi +\beta _2 \psi )(s)=\beta _1\varphi _0( \phi )(s)+\beta _2 \varphi _0(\psi )(s)\}. \end{aligned}$$

By definition of \(L\) and (3), we have \(\mu (O_{\phi ,\psi ,\beta _1,\beta _2})=1\) for all \((\phi ,\psi ,\beta _1,\beta _2)\in F_0^2\times \mathbb {Q}_+^2\), hence for

$$\begin{aligned} S_0\,{:=}\,\bigcap \limits _{(\phi ,\psi ,\beta _1,\beta _2)\in F_0^2\times \mathbb {Q}_+^2}O_{\phi ,\psi ,\beta _1,\beta _2} \end{aligned}$$

we have \(\mu (S_0)=1\). We also define

$$\begin{aligned} P_{\phi ,\psi }\,{:=}\,\{s:\varphi _0(\phi )(s)\le \varphi _0(\psi )(s)\} \end{aligned}$$

if \(\phi \le \psi \), where \(\le \) is the standard pointwise order. Clearly, \(\mu (P_{\phi ,\psi })=1\); hence, \(\mu (S_1)=1\), where

$$\begin{aligned} S_1\,{:=}\,S_0\cap \bigcap \limits _{\phi \le \psi }P_{\phi ,\psi }. \end{aligned}$$

For fixed \(s\in S_1\), we construct a linear functional \(L_s\) on \(C(X)\) which agrees with \(\varphi _0(\cdot )(s)\) on \(F_0\). In other words, define

$$\begin{aligned} L_s(\phi )\,{:=}\,\varphi _0(\phi )(s). \end{aligned}$$
(4)

Next, we construct \(L_s\) for \(f\in { CM }\). Let \(\{\phi _n\}_{n\in \mathbb {N}}\subset F_0\) and \(\{\psi _n\}_{n\in \mathbb {N}}\subset F_0\) be sequences such that \(\phi _n\rightrightarrows f\) and \(\psi _n\rightrightarrows f\). Let \(\epsilon >0\) be an arbitrarily small rational number. Then, for sufficiently large \(n>0\), we have

$$\begin{aligned} -\epsilon +\phi _n(x)\le \psi _n(x)\le \phi _n(x)+\epsilon , \end{aligned}$$

for arbitrary \(x\in X\). By definition of \(S_1\) and \(\varphi _0\), we have

$$\begin{aligned} -\epsilon +\varphi _0(\phi _n)(s)\le \varphi _0(\psi _n)(s)\le \varphi _0(\phi _n)(s)+\epsilon , \end{aligned}$$

As a result,

$$\begin{aligned} |\varphi _0(\phi _n)(s)- \varphi _0(\psi _n)(s)|\rightarrow 0\quad \text{ as } n\rightarrow \infty , \end{aligned}$$
(5)

for arbitrary \(s\in S_1\). We show that \(\{\varphi _0(\phi _n)(s)\}_{n\in \mathbb {N}}\) is a convergent sequence. Since \(\{\phi _n\}\) is uniformly convergent, for arbitrary rational, positive \(\epsilon \) and sufficiently large \((n,m)\in \mathbb {N}^2\), we have

$$\begin{aligned} -\epsilon +\phi _n(x)\le \phi _m(x)\le \phi _n(x)+\epsilon , \end{aligned}$$

for all \(x\in X\). Again by definition of \(S_1\) and \(\varphi _0\), we have

$$\begin{aligned} -\epsilon +\varphi _0(\phi _n)(s)\le \varphi _0(\phi _m)(s)\le \varphi _0(\phi _n)(s)+\epsilon . \end{aligned}$$
(6)

Combining (5) and (6), we conclude that \(\{\varphi _0(\phi _n)(s)\}_{n\in \mathbb {N}}\) and \(\{\varphi _0(\psi _n)(s)\}_{n\in \mathbb {N}}\) are both Cauchy sequences converging to the same limit. Hence, we can define \(L_s(f)\,{:=}\,\lim \limits _{n\rightarrow \infty }\varphi _0(\phi _n)(s)\), where \(\{\phi _n\}_{n\in \mathbb {N}}\) is an arbitrary sequence such that \(\phi _n\rightrightarrows f\). For \(f\in { LM }\), we can define \(L_s(f)\) in a canonical way.

Now, we show that \(L_s\) is a continuous linear functional with unit norm on \({ LM }\). Let \(f=f_1-f_2\), with \(f_i\in { CM }\), \(i=1,2\), and \(||f||_{\infty }\le 1\). Observe that \(L_s\) is increasing on the whole of \({ CM }\), and \(L_s(\mathbf {1})=1\). As a result,

$$\begin{aligned} -1+ L_s(f_2)\le L_s(f_1)\le 1+ L_s(f_2). \end{aligned}$$

Hence, \(|L_s(f)|\le 1\), and \(L_s\) is a continuous linear functional on \({ LM }\) with unit norm. Consequently, noting that \({ LM }\) is a Riesz lattice, by the Hahn–Banach Extension Theorem (see Aliprantis and Border 2006, Theorem 8.31) we can extend \(L_s\) to the whole of \(C(X)\). Moreover, this extension is positive with unit norm. Then, by the Riesz–Markov Theorem (see Aliprantis and Border 2006, Theorem 14.12), there exists a probability measure \(\tau ^0(\cdot |s)\) such that

$$\begin{aligned} L_s(f)=\int \limits _{\varLambda \times A} f(x)\,\tau ^0(\mathrm{d}x|s) \end{aligned}$$
(7)

for any \(f\in C(\varLambda \times A)\). Clearly, \(\tau ^0\in \hat{T}_d\). To finish this proof, we need to show \(\tau ^0\) is the least upper bound of \(T_0\).

First, we show that \(\tau ^0\) is an upper bound of \(T_0\). Take any \(\tau \in T_0\). Then, there exists \(S_2\subset S_1\) such that \(\mu (S_2)=1\) and, for all \(\phi \in F_0\),

$$\begin{aligned} \int \limits _{X}\phi (x)\tau ^0(\mathrm{d}x|s)=\varphi _0(\phi )(s)\ge \int \limits _{X}\phi (x)\tau (\mathrm{d}x|s), \end{aligned}$$
(8)

for \(s\in S_2\), where the equality follows from (4) and (7). If we take \(f\in { CM }\), we just need to take a sequence \(\{\phi _n\}\subset F_0\) with \(\phi _n\rightrightarrows f\), substitute it into (8), and take the limit. Hence, (8) holds for all \(f\in { CM }\) and for \(s\in S_2\), with \(\mu (S_2)=1\). Therefore, \(\tau ^0\) is an upper bound of \(T_0\).

Next, we show that \(\tau ^0\) is the least upper bound of \(T_0\). Let \(\tau '\) be an arbitrary upper bound of \(T_0\), and let \(\tau \in T_0\) be given. Then, for all \(f\in { CM }\),

$$\begin{aligned} \left[ \,\int \limits _X f(x)\tau (\mathrm{d}x|\cdot )\right] \le _{\mu } \left[ \,\int \limits _Xf(x)\tau '(\mathrm{d}x|\cdot )\right] . \end{aligned}$$

As a result, by definition of \(\varphi _0\),

$$\begin{aligned} \left[ \varphi _0(f)(\cdot )\right] \le _{\mu } \left[ \,\int \limits _Xf(x)\tau '(\mathrm{d}x|\cdot )\right] . \end{aligned}$$
(9)

Combining (7) and (9) for \(\phi \in F_0\), we have

$$\begin{aligned} \left[ \,\int \limits _X\phi (x)\tau ^0(\mathrm{d}x|\cdot )\right] \le _{\mu } \left[ \,\int \limits _X\phi (x)\tau '(\mathrm{d}x|\cdot )\right] . \end{aligned}$$

Then, there exists a full measure set \(S_3\) such that for \(s\in S_3\) we have

$$\begin{aligned} \int \limits _X\phi (x)\tau ^0(\mathrm{d}x|s)\le \int \limits _X\phi (x)\tau '(\mathrm{d}x|s). \end{aligned}$$

To finish this part of the proof, we just need to note that \(F_0\) is dense in \({ CM }\). As a result, \(\tau '\succeq _{\hat{T}} \tau ^0\), and hence \(\tau ^0=\bigvee T_0\in \hat{T}_d\). Similarly, we can show \(\bigwedge T_0\in \hat{T}_d\).

Finally, by Assumption 1(i), both \(\bigvee T_0\) and \(\bigwedge T_0\) are concentrated on the graph \(\tilde{Gr}(s)\) for \(\mu \)-a.e. \(s\). \(\square \)

Notice that this proposition extends to the differential information setting the well-known result stating that the set of probability measures on \(\varLambda \times A\) is a chain complete poset (see Hopenhayn and Prescott 1992, Proposition 1). It is also technically very different from the result used in Balbus et al. (2013), where the authors study the case of a large game with strategic complementarities under complete information.

We now define the operator that shall play a central role in our proof of existence. That is, define the best reply correspondence of a player to be

$$\begin{aligned} m(\alpha ,s,\tau )\,{:=}\,\arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s,\tau ,a), \end{aligned}$$

and let

$$\begin{aligned} \overline{m}(\alpha ,s,\tau )\,{:=}\,\bigvee m(\alpha ,s,\tau ) \quad \text { and }\quad \underline{m}(\alpha ,s,\tau )\,{:=}\,\bigwedge m(\alpha ,s,\tau ) \end{aligned}$$

be the extremal elements of \(m(\alpha ,s,\tau )\) with respect to the partial order \(\ge \) on \(A\) (i.e., the greatest and the least best reply, respectively), whenever these elements exist. By the definition of equilibrium, for \(\mu \)-a.e. \(s\), a distributional Bayesian Nash equilibrium \(\tau ^{*}\) satisfies

$$\begin{aligned} \tau ^{*}\left( \{(\alpha ,a)\in \varLambda \times A | a\in m(\alpha ,s,\tau ^{*})\}|s\right) =1. \end{aligned}$$

Consider an operator \(\overline{B}\) that transforms the space \(\hat{T}_{d}\) into itself, such that, given some function \(\tau \in \hat{T}_{d}\), it returns the function \(\overline{B}(\tau )\), where the probability measure \(\overline{B}(\tau )(\cdot |s)\) is concentrated on the set of greatest best responses to \(\tau \). More precisely, we define the operator \(\overline{B}:\hat{T}_{d}\rightarrow \hat{T}_{d}\) by:

$$\begin{aligned} \overline{B}(\tau )=\left\{ \tau ^{\prime }\in \hat{T}_{d} \big |\tau ^{\prime }(Gr(\overline{m}(\cdot ,s,\tau ))|s)=1, \mu \text {-a.e.} \right\} . \end{aligned}$$

We define the least best reply operator \(\underline{B}(\tau )\), concentrated on the least best responses, in an analogous way.

In order to make sure that the above operators possess all the desired properties, we must show that if \(r\) has single-crossing differences in \( (a,\phi ),\,a\in A,\,\phi \in D\), and the family of functions \(\{\varDelta (\cdot ,s)\}_{s\in S}\) obeys the signed-ratio monotonicity (where \(\varDelta : \hat{T}_{d}\times S\rightarrow \mathbb {R},\,\varDelta (\tau ,s)\,{:=}\,r(\alpha ,s,\tau (\cdot |s),a^{\prime })-r(\alpha ,s,\tau (\cdot |s),a)) \), then \(v\) has single-crossing differences in the \(\mu \)-a.e. pointwise order \( \succeq _{\hat{T}}\) on \(\hat{T}_{d}\).

Lemma 2

Let \((X, \ge _X),\,(S, \ge _S)\) be posets, \((S,\mathcal {S},\mu )\) be a non-empty, non-atomic, \(\sigma \)-finite measure space, and \(M(S)\) be a set of \(\mathcal {S}\)-measurable functions \(f:S\rightarrow X\). Let \(u:S\times X\rightarrow \mathbb {R}\) be a function such that \(u(s,f(s))\) is \(\mathcal {S}\)-integrable, whenever \(f\in M(S)\). Assume \(u\) is a single-crossing function in \(x\) for all \(s\in S\). Finally, let \(\{v(s,\cdot )\}_{s\in S}\), where \(v:S \times M(S) \rightarrow \mathbb {R},\,v(s,f)\,{:=}\,u(s,f(s))\), obey the signed-ratio monotonicity with respect to the pointwise ordering on \(M(S)\). Then, \(h:M(S) \rightarrow \mathbb {R}\),

$$\begin{aligned} h(f)\,{:=}\,\int \limits _{S}u(s,f(s))\mu (\mathrm{d}s) \end{aligned}$$

is a single-crossing function with respect to \(\mu \)-a.e. pointwise order.

Proof

Note that \(\forall s\in S,\,u(s,\cdot )\) is a single-crossing function, and family \(\{v(s,\cdot )\}_{s\in S}\) is well defined on \(M(S)\) and obeys the signed-ratio monotonicity with respect to the pointwise order on \(M(S)\). Corollary 9 implies that \(h\) is a single-crossing function with respect to the same ordering.

We will show that \(h\) is also a single-crossing function on \(M(S)\) with respect to the \(\mu \)-a.e. pointwise order. Take any \(f^{\prime },\,f\in M(S)\) such that \(f^{\prime }(s)\ge _{X}f(s)\) for \(\mu \)-a.e. \(s \in S\), and let \(S^{\prime }\) denote the set of points \(s\) where either \(f^{\prime }(s)< f(s)\) or \(f^{\prime }(s)\) and \(f(s)\) are unordered. Clearly, \(\mu (S^{\prime })=0\). Assume \(0 \le (<)\, h(f)\). Then,

$$\begin{aligned} 0\le (<) h(f)=\int \limits _{S}u(s,f(s))\mu (\mathrm{d}s)=\int \limits _{S\backslash S^{\prime }}u(s,f(s))\mu (\mathrm{d}s). \end{aligned}$$

Since \(\{v(s,\cdot )\}_{s\in S}\) is a family obeying the signed-ratio monotonicity on \(M(S)\) with respect to the pointwise order, so is \(\{v(s,\cdot )\}_{s\in S\backslash S^{\prime }}\). Hence,

$$\begin{aligned} 0\le (<) \int \limits _{S\backslash S^{\prime }}u(s,f^{\prime }(s))\mu (\mathrm{d}s)=\int \limits _{S} u(s,f^{\prime }(s))\mu (\mathrm{d}s)=h(f^{\prime }). \end{aligned}$$

The proof is complete. \(\square \)

By Lemma 2, we know that integration preserves the single-crossing property in the \(\mu \)-a.e. pointwise order. This fact explains why we do not work with standard pointwise partial orders in our existence constructions.Footnote 23

We now characterize the monotonicity properties of the pair of operators defined before.

Lemma 3

Let Assumption 1 be satisfied. Then, operators \(\overline{B}\) and \(\underline{B}\) are well defined and \(\succeq _{\hat{T}}\)-isotone.

Proof

We prove the result for \(\overline{B}\). The proof for \(\underline{B}\) is analogous. First, \(v(\alpha ,s,\tau ,\cdot )\) is continuous on \(A\). Moreover, by Lemma 9 in Ely and Pęski (2006), it is also \(\mathcal {L}\otimes \mathcal {S}\)-measurable; hence, \(v\) is Carathéodory. By Assumptions 1(ii) and (iii), as well as Corollaries 6 and 9 and Lemma 2, \(v\) is quasi-supermodular in \(a\) and has single-crossing differences in \((a,\tau )\) with respect to \( \succeq _{\hat{T}}\).

Since \(\tilde{A}(\alpha ,s)\) is compact, by Berge’s Maximum Theorem (see Berge 1997, p. 116), the set \(m(\alpha ,s,\tau )\,{:=}\,\arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s,\tau ,a) \) is non-empty. In addition, by Milgrom and Shannon’s (1994) or Veinott’s (1992) generalization of Topkis’s Monotonicity Theorem (see Topkis 1978), \(m(\alpha ,s,\tau )\) is a complete sublattice of \(\tilde{A}(\alpha ,s)\) with a greatest and a least element. Moreover, it is isotone in Veinott’s strong set order in \(\tau \). From the Measurable Maximum Theorem (see Aliprantis and Border 2006, Theorem 18.19), it follows that \(m\) is \(\mathcal {L} \otimes \mathcal {S}\)-measurable (hence, weakly measurable, as \(A\) is a metrizable space) and admits a measurable selection (see Aliprantis and Border 2006, Lemma 18.2). Therefore, \(\overline{m}(\alpha ,s,\tau )\) exists and is increasing on \(\hat{T}_{d}\).

We now need to prove that \(\overline{m}(\alpha ,s,\tau )\) is a measurable selection of \(m(\alpha ,s,\tau )\). Define \(\overline{m}(\alpha ,s,\tau )=( \overline{m}_{1},\ldots ,\overline{m}_{n})\). Again by the Measurable Maximum Theorem, function \(\overline{m}_{i}(\cdot ,\tau )\,{:=}\,\max _{a_{i}\in m(\cdot ,\tau )}a_{i}\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable for any \( \tau \) and \(i=1,\ldots ,n\). Hence, \(\overline{m}(\cdot ,\tau )\) is also \( \mathcal {L}\otimes \mathcal {S}\)-measurable, and \(\overline{B}(\tau )\) is \(\mathcal {S}\)-measurable.

Next, we show that \(\overline{B}\) is increasing. Fix an arbitrary \(\ge _p\)-increasing function \(f:\varLambda \times A\rightarrow \mathbb {R}\) and an arbitrary \(\alpha \in \varLambda \). We prove that \(\tau \rightarrow \overline{m}(\alpha ,s,\tau )\) is isotone, whenever \(\tau \in \hat{T}_{d}\). Let \(\tau '\succeq _{\hat{T}}\tau \). Fix an arbitrary \(s\in S\) such that \(\tau '(\cdot |s)\succeq _D \tau (\cdot |s)\). We have:

$$\begin{aligned} \int \limits _{\varLambda \times A}f(\alpha ,a)\overline{B}(\tau ')(\mathrm{d}\alpha \times da|s)&= \int \limits _{\varLambda }f(\alpha ,\overline{m}(\alpha ,s,\tau ' ))\lambda (\mathrm{d}\alpha )\\&\ge \int \limits _{\varLambda }f(\alpha ,\overline{m}(\alpha ,s,\tau ))\lambda (\mathrm{d}\alpha ) \\&= \int \limits _{\varLambda \times A}f(\alpha ,a)\overline{B}(\tau )(\mathrm{d}\alpha \times da|s), \end{aligned}$$

where the first and the final equalities follow from the definition of \(\overline{B}\), and the inequality is implied by the monotonicity of \(\overline{m}\) in \(\tau \). Since the set of \(s\in S\) satisfying the above inequality has full measure, \(\overline{B}(\tau ')\succeq _{\hat{T}}\overline{B}(\tau )\). \(\square \)

Having these two lemmas in place, we are able to state our main result of this section.

Theorem 1

(Existence) Let Assumption 1 be satisfied. Then, there exists the greatest and the least distributional Bayesian Nash equilibrium of \(\varGamma \) in \((\hat{T}_{d},\succeq _{\hat{T}})\).

Proof

By Lemma 3, \(\overline{B}\) is isotone. Moreover, by Proposition 1, \(\hat{T}_d\) is a chain complete poset. Hence, by Markowsky’s theorem (see “Appendix”, Theorem 4), \( \overline{B}\) has a chain complete poset of fixed points in an induced order, with the greatest and the least element. Denote the greatest element of the set by \(\overline{ \tau }^{*}\). Then, by definition, \(\overline{\tau }^{*} \) constitutes a distributional Bayesian Nash equilibrium of \(\varGamma \).

Next, we prove that \(\overline{\tau }^{*}\) is the greatest equilibrium of the game. Take any other equilibrium of the game \(\tau \). Fix \(s\in S\), such that

$$\begin{aligned} \tau \left( \left\{ (\alpha ,a) | a\in m(\alpha ,s, \tau )\right\} | s\right) =1. \end{aligned}$$

Let \(f:\varLambda \times A\rightarrow \mathbb {R}\) be \(\ge _p\)-increasing. Then,

$$\begin{aligned} \int \limits _{\varLambda \times A}f(\alpha ,a)\tau (\mathrm{d}\alpha \times da|s)&\le \int \limits _{\varLambda }f(\alpha ,\overline{m}(\alpha ,s,\tau ))\lambda (\mathrm{d}\alpha ) \\&\le \int \limits _{\varLambda \times A}f(\alpha ,a)\overline{B} (\tau )(\mathrm{d}\alpha \times da|s). \end{aligned}$$

Therefore, \(\overline{B}(\tau ) \succeq _{\hat{T}} \tau \). Since \(\overline{B}\) is isotone, by Markowsky’s theorem we have \(\overline{\tau }^* \succeq _{\hat{T}} \tau \). We prove the existence of the least equilibrium analogously, using the operator \(\underline{B}\). \(\square \)

A few remarks concerning this result are in order. First of all, the above theorem not only shows the existence of a distributional Bayesian Nash equilibrium, but also ensures the existence of extremal equilibria, that is, of the greatest and the least equilibrium. We should also remark that, by Markowsky’s theorem, both \(\overline{B}\) and \(\underline{B}\) have a chain complete poset of fixed points, each of them constituting a distributional Bayesian Nash equilibrium of the game.Footnote 24

Second, the sufficient conditions for existence that we impose in our approach differ from those used in Balder and Rustichini (1994) or Kim and Yannelis (1997).Footnote 25 In particular, our class of games relaxes an important payoff continuity assumption.Footnote 26

Finally, and most importantly, Assumptions 1(ii)–(iii) can be relaxed if one is interested in the existence of the greatest (respectively, the least) distributional equilibrium of the game, but not both. Specifically, one can replace condition (a) of quasi-supermodularity (equivalent to lattice superextremality in Li Calzi and Veinott 1992 and Veinott 1992 for real-valued functions) of the payoff \(r\) in actions \(a\in A\) with join- (respectively, meet-) superextremality, condition (b) concerning single-crossing differences with join (respectively, meet) up-crossing differences in \((a,\tau )\), and condition (c) concerning signed-ratio monotonicity with its join- (respectively, meet-) counterpart. This weakening of our conditions allows us to generalize our results to an even broader class of large games (see Li Calzi and Veinott 1992 and Veinott 1992 for the details). This observation becomes particularly useful when one is unable to show that the game in question is quasi-supermodular. In fact, we provide one such example in Sect. 5.3, where we discuss the application of our results to common value auctions, in which the complementarity structure between \((a,\tau )\) exhibits join up-crossing differences (but not meet up-crossing differences).

3.5 Monotone equilibrium comparative statics

We conclude this section of the paper by considering computational issues related to equilibrium existence, as well as the monotone comparative statics of the equilibrium set. We prove two results. The first pertains to computing extremal equilibria at fixed parameters. The second establishes the existence of computable equilibrium comparative statics as a function of the deep parameters of the game. Such a question has not been considered in any of the existing literature of which we are aware. For such computability results, we need to impose one additional condition concerning the order continuity of payoffs, which proves to be critical in our main result, as it preserves order continuity of the extremal selections of the best reply maps.

Assumption 2

For any monotone sequence \(\{\phi _n\}\) in \({D}\), such that \(\phi _n \rightarrow \phi \) and \(\phi \in {D}\), let \(r(\alpha , s, \phi _n, a) \rightarrow r(\alpha , s, \phi , a)\).

If \(r\) satisfies Assumption 1 in addition to Assumption 2, then \(r(\alpha ,s,\tau ,a)\) is jointly \( \sigma \)-order continuous in \((a,\tau )\) for each \((\alpha ,s).\)

Given the additional assumption, we proceed with the following corollary to our main existence result. We should mention that this result is of utmost importance for designing numerical methods aimed at computing equilibrium distributions and for providing a rigorous foundation for their use. First, by \(\overline{B}^{n}(\overline{t})\) we denote the \(n\)-th orbit of the operator \(\overline{B}\) starting from \(\overline{t}\), i.e., \(\overline{B}^{0}(\overline{t})=\overline{t}\) and \(\overline{B}^{n+1}(\overline{t})=\overline{B}(\overline{B}^{n}(\overline{t}))\). Similarly, define \(\underline{B}^n(\underline{t})\).

Corollary 1

Let Assumptions 1 and 2 be satisfied, and let \(\overline{t},\underline{t}\) denote the greatest and the least element of \(\hat{T}_{d}\), respectively. Then, the greatest and the least distributional Bayesian Nash equilibria of \(\varGamma \) satisfy the following successive approximation conditions: for all \(s\in S\), we have \(\overline{\tau }^{*}(s)=\lim _{n\rightarrow \infty } \overline{B}^{n}(\overline{t})(s)\) and \(\underline{\tau }^{*}(s)=\lim _{n\rightarrow \infty }\underline{B}^{n}(\underline{t})(s)\), where the limits are taken with respect to the weak-star topology.

Proof

Lemma 3 implies that both \(\overline{B},\underline{B}:\hat{T}_{d}\rightarrow \hat{T}_{d}\) are well defined. We claim that \(\overline{B}\) is inf-preserving, while \(\underline{B}\) is sup-preserving. Fix \(s\in S\) and take any decreasing sequence \(\{\tau _{n}(s)\}\), \(\tau _{n}(s)\rightarrow \tau (s)\), \(\tau \in \hat{T}_{d}\). Observe that \(\tau (s)=\bigwedge \tau _{n}(s)\). Since \(\overline{B}\) is increasing, \(\bigwedge \overline{B}(\tau _{n})(s)=\lim _{n\rightarrow \infty }\overline{B}(\tau _{n})(s)\). On the other hand, \(\overline{B}(\bigwedge \tau _{n})(s)=\overline{B}(\tau )(s)\). It is therefore sufficient to show that \(\lim _{n\rightarrow \infty }\overline{B}(\tau _{n})(s)=\overline{B}(\tau )(s)\). By Assumption 2 and the Lebesgue Dominated Convergence Theorem,

$$\begin{aligned} \lim _{n\rightarrow \infty }\bigvee \left\{ \arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s ,\tau _{n},a)\right\} =\bigvee \left\{ \arg \max _{a\in \tilde{A}(\alpha ,s )}v(\alpha ,s ,\tau ,a)\right\} , \end{aligned}$$

for all \((\alpha , s) \in \varLambda \times S\). Hence, \(\lim _{n\rightarrow \infty }\overline{B}(\tau _{n})(s)=\lim _{n\rightarrow \infty }\bigvee B(\tau _{n})(s)=\bigvee B(\tau )(s)=\overline{B}(\tau )(s)\), and so \(\overline{B}(\bigwedge \tau _{n})(s)=\bigwedge \overline{B}(\tau _{n})(s)\). Analogously, we can prove that \(\underline{B}\) is sup-preserving.

The rest follows from Theorem 1 and the generalization of the Knaster–Tarski theorem (see Theorem 5 in the “Appendix”). \(\square \)
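
To make the successive approximation in Corollary 1 concrete, the following sketch (a minimal illustration, not the paper's construction: it assumes a discretized state space on which an element of \(\hat{T}_{d}\) is stored as an array, and a user-supplied callable `B_bar` playing the role of \(\overline{B}\)) iterates the extremal operator from the top element until the orbit stabilizes.

```python
import numpy as np

def iterate_extremal(B_bar, t_bar, tol=1e-10, max_iter=10_000):
    """Successive approximation of the greatest fixed point of an isotone,
    inf-preserving operator B_bar, started from the greatest element t_bar
    of the (discretized) domain.  B_bar and t_bar are hypothetical,
    user-supplied objects standing in for the operator of Lemma 3 and the
    top element of the discretized domain."""
    t = np.asarray(t_bar, dtype=float)
    for _ in range(max_iter):
        t_next = B_bar(t)
        if np.max(np.abs(t_next - t)) < tol:
            break
        t = t_next
    return t_next
```

The least equilibrium is approximated analogously, starting the corresponding operator \(\underline{B}\) from the least element \(\underline{t}\).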

Finally, we can consider the question of computing equilibrium comparative statics. To study this question, we introduce a parameterized version of our game. So let \((\varTheta ,\ge _{\theta })\) be a partially ordered set of parameters and define the tuple:

$$\begin{aligned} \varGamma (\theta )\,{:=}\,\{(\varLambda ,\mathcal {L},\lambda ),(S,\mathcal {S},\mu ),A, \tilde{A}(\theta ,\cdot ),r(\theta ,\cdot ),\{\pi _{\alpha },\mathcal {S} _{\alpha }\}_{\alpha \in \varLambda }\}. \end{aligned}$$

That is, for each \(\theta \), the game \(\varGamma (\theta )\) is defined as in the first part of this section. We proceed with the following natural extensions of our original assumptions.

Assumption 3

For each \(\theta \in \varTheta ,\,\varGamma (\theta )\) satisfies Assumption 1. Moreover,

  1. (i)

    \(\tilde{A}(\theta ,\cdot )\) is increasing in the Veinott strong set order on \(\varTheta \);

  2. (ii)

    \(r\) has single-crossing differences in \((a,\theta )\);

  3. (iii)

    the family of functions \(\{\varDelta (\cdot ,s)\}_{s\in S}\), where \(\varDelta (\theta ,s)\,{:=}\,r(\theta ,\alpha ,s,\tau (\cdot |s),a^{\prime })-r(\theta ,\alpha ,s,\tau (\cdot |s),a)\), obeys signed-ratio monotonicity for any \(a^{\prime },\,a\in A,\,a\le a^{\prime }\).

With this assumption in place, our next result follows from Corollary 10. For any \(\theta \in \varTheta \), let \(\overline{\tau }^{*}(\theta )\) (respectively, \(\underline{\tau }^{*}(\theta ))\) be the greatest (respectively, the least) distributional Bayesian Nash equilibrium in \(\varGamma (\theta )\). Then, we have the following monotone equilibrium comparative statics result.

Corollary 2

Let Assumptions 1–3 be satisfied. Then, \(\overline{\tau }^{*}(\cdot )\) and \(\underline{\tau }^{*}(\cdot )\) are increasing on \(\varTheta \).

Proof

By Assumptions 1–3, for any \(\tau \in \hat{T}_d\), \(\overline{B}(\tau )\) (respectively, \(\underline{B}(\tau )\)) is increasing in \(\theta \) and inf-preserving (respectively, sup-preserving) on \(\hat{T}_d\). Therefore, by Corollary 10, \(\overline{\tau }^{*}(\cdot )\) and \(\underline{\tau }^{*}(\cdot )\) are increasing on \(\varTheta \).
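
For intuition on how Corollary 2 operates, consider the following toy computation (purely illustrative: `B_bar` below is a hypothetical isotone, inf-preserving map on a discretized \([0,1]\)-valued domain that is also increasing in the parameter \(\theta \), not the operator of the game). Iterating it from the top element yields its greatest fixed point, which is increasing in \(\theta \), exactly as the corollary predicts for \(\overline{\tau }^{*}(\cdot )\).

```python
import numpy as np

# Toy isotone operator on [0,1]^k, increasing in its argument and in theta.
def B_bar(t, theta):
    return np.clip(0.5 * t + theta, 0.0, 1.0)

def greatest_fixed_point(theta, k=5, tol=1e-12):
    t = np.ones(k)                     # top element of the discretized domain
    while True:
        t_next = B_bar(t, theta)
        if np.max(np.abs(t_next - t)) < tol:
            return t_next
        t = t_next

for theta in (0.1, 0.3, 0.5):
    # greatest fixed point is min(2*theta, 1): approximately 0.2, 0.6, 1.0
    print(theta, greatest_fixed_point(theta)[0])
```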

Apart from the related paper of Balbus et al. (2013) concerning large GSC with complete information, we are aware of only one similar comparative statics result, namely Acemoglu and Jensen (2010). In this latter paper, the authors consider aggregative games with a finite number of player types, but otherwise develop tools similar to those we consider in this paper. Their approach to equilibrium comparative statics is close to ours, as they impose conditions guaranteeing that the joint best response mapping has increasing selections with respect to the parameter (cf. Definition 3 in their paper). Further, as they concentrate only on aggregative games, where players best respond to the average/mean action of other players, the class of games they analyze is more restrictive than ours.

On the other hand, in the case of a single-dimensional action space \(A\), Acemoglu and Jensen manage to show comparative statics of the extremal (aggregative) equilibria using results of Milgrom and Roberts (1994), without the single-crossing property between player actions and aggregates. This is a very important result, and more general than ours in the case where we restrict our attention to large aggregative games. However, for the multi-dimensional case of large aggregative games, Acemoglu and Jensen require increasing differences in the action of each player and the equilibrium aggregate, which is stronger than the (ordinal) single-crossing property we invoke to obtain our result. Finally, Acemoglu and Jensen (2010) use a topological fixed point theorem to show existence of an aggregate equilibrium, which makes the issues of computability of equilibrium comparative statics difficult to address. By contrast, we use exclusively order-theoretic fixed point results, for which sufficient conditions addressing these issues are very direct.

4 Bayesian Nash–Schmeidler equilibria

In this section of the paper, we present corresponding results for Bayesian Nash–Schmeidler equilibrium, which requires an alternative description of our large game with differential information. This notion of equilibrium is defined in terms of functions mapping the space of players to actions, as in Schmeidler (1973). We begin with a slightly modified description of the game.

4.1 Game description

Let \(\varLambda \) again be a compact, metrizable space of players, and endow \(\varLambda \) with a non-atomic probability measure \(\lambda \) defined on the Borel \(\sigma \)-field \(\mathcal {L}\). Denote the measure space of public signals by \((S,\mathcal {S},\mu )\), defined as in the previous section. By \(\mathcal {S}_{\alpha },\,\alpha \in \varLambda \), we denote a sub \(\sigma \)-field of \(\mathcal {S}\) (denoting the private information of agent \(\alpha \in \varLambda \)), and by \(\pi _{\alpha }:S\rightarrow \mathbb {R}_{+}\) the distribution of agent \(\alpha \in \varLambda \), where \(\pi _{\alpha }\) is such that \(\int _{S}\pi _{\alpha }(s)\mathrm{d}\mu (s)=1\). Further, let \(A\subset \mathbb {R}^{n}\) be a set of actions of players, endowed with the Euclidean topology generating the Borel \(\sigma \)-field \(\mathcal {A}\) on \(A\). We endow \(A\) with the coordinate-wise order \(\ge \). Finally, as we introduce a notion of equilibrium that involves joint actions of players (as opposed to distributions), we analyze the set of joint action profiles \(f:\varLambda \times S\rightarrow A\) which are measurable with respect to the product \(\sigma \)-field \(\mathcal {L}\otimes \mathcal {S}\). Denote the space of such functions by \(M(\varLambda \times S)\) and endow it with the product topology and the pointwise order.

We now reconsider the components of the game and define an appropriate alternative notion of equilibrium for the Bayes–Schmeidler case. As before, the correspondence of feasible actions will be \(\tilde{A}:\varLambda \times S\rightrightarrows A\) which assigns a set of feasible actions to player \( \alpha \in \varLambda \), who finds herself in state \(s\in S\). The ex-post payoffs are given by a function \(r:\varLambda \times S\times M(\varLambda \times S)\times A\rightarrow \mathbb {R}\), where \(r(\alpha ,s,f(\cdot ,s),a)\) is the payoff value of player \(\alpha \in \varLambda \), playing action \(a\in A\) at state \(s\in S\), when the joint action of all other players at the state is \( f\in M(\varLambda \times S)\).

We can now describe the sequence of play in the game as follows. First, each player observes the state of the world \(s\in S\) with respect to her private information set \(\mathcal {S}_{\alpha }\). Next, the players calculate their interim payoffs. If we let \(\varepsilon _{\alpha }(s)\) be the smallest set in \(\mathcal {S}_{\alpha }\) containing \(s\) (with respect to set inclusion), then the interim payoff of player \(\alpha \in \varLambda \) in state \(s\in S\), facing a joint action \(f\in M(\varLambda \times S)\), is defined by the value function \(v:\varLambda \times S\times M(\varLambda \times S)\times A\rightarrow \mathbb {R}\):

$$\begin{aligned} v(\alpha ,s,f,a):=\int \limits _{\varepsilon _{\alpha }(s)}r(\alpha ,s^{\prime },f(\cdot ,s^{\prime }),a)\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\mathrm{d}\mu (s^{\prime }), \end{aligned}$$

where \(\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\) is defined as in Sect. 3.1. Our later assumptions guarantee that \(v\) is well defined. Finally, once the strategies are chosen, the actual state is revealed, and payoffs of the game are distributed.
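
For readers who want to experiment numerically, the conditional expectation defining \(v\) can be sketched on a discretized state space as follows (a minimal illustration, not part of the model: `r`, `pi_alpha`, and `cell_of` are hypothetical user-supplied callables encoding the ex-post payoff, the prior weights on the grid, and the partition cell \(\varepsilon _{\alpha }(s)\), respectively).

```python
import numpy as np

def interim_payoff(r, pi_alpha, cell_of, s_grid, alpha, s, f, a):
    """Discretized version of v(alpha, s, f, a): average the ex-post payoff
    over the information cell eps_alpha(s), weighting by the prior
    restricted (and renormalized) to that cell."""
    cell = cell_of(alpha, s)                 # boolean mask over s_grid
    weights = pi_alpha(s_grid) * cell        # prior restricted to the cell
    weights = weights / weights.sum()        # pi_alpha( . | eps_alpha(s))
    payoffs = np.array([r(alpha, sp, f, a) for sp in s_grid])
    return float(weights @ payoffs)
```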

According to the above definition of the game, a feasible pure strategy of player \(\alpha \) is an \(\mathcal {S}\)-measurable selection of \(\tilde{A}(\alpha ,\cdot )\). Let \(M(S)\) denote the set of \(\mathcal {S}\)-measurable functions \(f:S\rightarrow \mathbb {R}^{n}\) and denote the set of all feasible strategies of player \(\alpha \) by \({M}_{\alpha }\), i.e.,

$$\begin{aligned} {M}_{\alpha }\,{:=}\,\{f\in M(S) | f(s)\in \tilde{A}(\alpha ,s)\}. \end{aligned}$$

Therefore, a joint pure strategy of all players is an element of

$$\begin{aligned} {M}_{\varLambda }\,{:=}\,\{f\in M(\varLambda \times S) | f(\alpha ,\cdot )\in {M} _{\alpha },\forall \alpha \in \varLambda \}. \end{aligned}$$

As in Sect. 3.1, we summarize the game by

$$\begin{aligned} \varGamma \,{:=}\,\{(\varLambda ,\mathcal {L},\lambda ),(S,\mathcal {S},\mu ),A,\tilde{A} ,r,\{\pi _{\alpha },\mathcal {S}_{\alpha }\}_{\alpha \in \varLambda }\}. \end{aligned}$$

Following Schmeidler (1973), we define Bayesian Nash–Schmeidler equilibrium as follows.

Definition 2

A Bayesian Nash–Schmeidler equilibrium of \( \varGamma \) is a function \(f^{*}\in {M}(\varLambda \times S)\) such that for all \(\alpha \in \varLambda \) and \(s\in S\) we have

$$\begin{aligned} f^{*}(\alpha ,s)\in \arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s,f^{*},a). \end{aligned}$$

Our definition of Bayesian Nash–Schmeidler equilibrium in strategies is slightly different from the one stated originally in Schmeidler (1973). In his definition, Schmeidler requires that almost every player plays a best response strategy to the equilibrium strategy profile. In contrast, we require every player to act optimally in our notion of equilibrium, as is done, for example, in the papers of Balder and Rustichini (1994) and Kim and Yannelis (1997).

4.2 Equilibrium existence

In order to guarantee the existence of a Bayesian Nash–Schmeidler equilibrium, we impose the following sufficient conditions on the primitives of the model.

Assumption 4

Assume that whenever \(f\in M(\varLambda \times S)\), we have:

  1. (i)

    \(\tilde{A}\) is complete sublattice-valued and weakly measurable;

  2. (ii)

    function \(r\) is continuous and quasi-supermodular on \(A\), has single-crossing differences in \((a,f)\), and \(r(\alpha ,s,f(\cdot ,s),a)\) is \( \mathcal {L}\otimes \mathcal {S}\)-measurable and bounded;

  3. (iii)

    the family of functions \(\{r(\alpha ,s,f(\cdot ,s),\cdot )\}_{s\in S}\) satisfies signed-ratio quasi-supermodularity on \( A\), and the differences \(\{\varDelta (\cdot ,s)\}_{s\in S}\), \(\varDelta (f,s)\,{:=}\,r(\alpha ,s,f(\cdot ,s),a^{\prime })-r(\alpha ,s,f(\cdot ,s),a)\), obey signed-ratio monotonicity in the pointwise order for any \(a^{\prime },\,a\in A,\,a\le a^{\prime }\);

  4. (iv)

    for any monotone sequence \(\{f_n\}\) in \(M(\varLambda \times S)\), such that \(f_n \rightarrow f\) and \(f \in M(\varLambda \times S)\), for all \(\alpha \in \varLambda ,\,s\in S\), and \(a\in A\), we have \(r(\alpha , s, f_n(\cdot ,s), a)\rightarrow r(\alpha , s, f(\cdot ,s),a);\)

  5. (v)

    for all \(\alpha \in \varLambda \), \(\mathcal {S}_{\alpha }\) is generated by a countable partition such that \(\pi _{\alpha }(s)\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable, the correspondence \((\alpha ,s)\mapsto \varepsilon _{\alpha }(s)\) has an \(\mathcal {L}\otimes \mathcal {S}\otimes \mathcal {S}\)-measurable graph, and \(\mu (\varepsilon _{\alpha }(s))>0\) for all \(s\in S\).

Unlike in the previous section, Assumption 4(iv) plays a critical role not only in the computation and approximation of equilibria, but also in the question of existence of equilibria itself. We will remark in more detail on this issue in the remainder of this section.

Before proceeding to the main theorem, we state two important lemmas.

Lemma 4

Under Assumption 4, \({M} _{\alpha } \) and \({M}_{\varLambda }\) are non-empty.

Proof

Since \(A \subset \mathbb {R}^n\), any compact subset of \(A\) is closed. Hence, by Assumption 4(i), \(\tilde{A}\) has non-empty, closed values. Moreover, it maps a measurable space into a Polish space (i.e., a complete, separable metric space). Therefore, by the Kuratowski–Ryll-Nardzewski Selection Theorem (see Aliprantis and Border 2006, Theorem 18.13), \(\tilde{A}(\alpha ,\cdot )\) and \(\tilde{A}\) admit measurable selections, and hence \({M}_{\alpha }\) and \({M}_{\varLambda }\) are non-empty.

Notice that by appealing to strategic complementarities and order-theoretic constructions in a GSC, we are able to relax two important assumptions used by other authors to obtain the non-emptiness and/or convexity of best replies needed to verify existence [e.g., as compared to Balder and Rustichini (1994) and Kim and Yannelis (1997)]. For example, we do not require the feasible action correspondence \(\tilde{A}\) to be convex-valued, nor do we require any form of (quasi-) concavity of \(r\) in \(a\in A\) (so that best reply correspondences are convex-valued). In particular, we do not appeal to any Kakutani/Fan–Glicksberg type theorem to obtain existence. Also, our payoffs no longer need to be continuous with respect to joint strategies of players. In fact, we only require \(r\) to be order continuous on \({M}(\varLambda \times S)\), a continuity condition checked only along monotone sequences (as opposed to weak continuity conditions that must be checked for arbitrary nets).Footnote 27

Before stating the main result, we introduce some additional notation. First, define the best reply correspondence \({ BR }:M_{\varLambda }\rightrightarrows M_{\varLambda }\) by:Footnote 28

$$\begin{aligned} { BR }(f)(\alpha ,s)\,{:=}\,\arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s,f,a). \end{aligned}$$

From the definition of Bayesian Nash–Schmeidler equilibrium, for \(f^{*}\in M(\varLambda \times S)\) to be an equilibrium, we require \(f^{*}\in { BR }(f^{*})\). As in the previous section, we again study the fixed point of extremal selections of best reply maps. That is, let \(\overline{{ BR }} (f)\,{:=}\,\bigvee { BR }(f)\), and \(\underline{{ BR }}(f)\,{:=}\,\bigwedge { BR }(f)\) denote the greatest and the least element of \({ BR }(f)\) (whenever they exist), with respect to the pointwise order. We now state the following result.

Lemma 5

Under Assumption 4, operators \( \overline{{ BR }},\,\underline{{ BR }}:M_{\varLambda }\rightarrow M_{\varLambda }\) are well defined and increasing.Footnote 29

Proof

By Assumption 4(ii), \(r\) is continuous in \(a\). By the Lebesgue Dominated Convergence Theorem, so is \(v\). By Assumption 4(ii), \(r\circ f\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable. By Lemma 9 in Ely and Pęski (2006), so is \(v\). Therefore, \(v\) is Carathéodory in \((a,(\alpha ,s))\). By Assumptions 4(ii) and (iii), as well as Corollaries 6 and 9, \(v\) is quasi-supermodular on \(A\), with single-crossing differences in \((a,f)\).

Recall that \(A\) is a separable metric space, \((S,\mathcal {S})\) is a measurable space, and \(\tilde{A}\) is well defined and weakly measurable, with compact values. Therefore, by the Measurable Maximum Theorem (see Aliprantis and Border 2006, Theorem 18.19), \(\arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s,f,a)\) is well defined with compact values, \(\mathcal {L} \otimes \mathcal {S}\)-measurable, and admits a measurable selection. Hence, \({ BR }\) is well defined. In addition, since it maps a measurable space into a metrizable space, it is also weakly measurable (see Aliprantis and Border 2006, Theorem 18.2).

In addition, by Milgrom and Shannon’s (1994) or Veinott’s (1992) generalization of Topkis’ Monotonicity Theorem, the set \({ BR }(f)(\alpha ,s)\) is a complete lattice with a greatest and a least element, and is isotone in \(f\) in the Veinott strong set order.

Since \({ BR }(f)(\alpha ,s)\) is a complete lattice and isotone in \(f\), both \(\overline{{ BR }}(f)\) and \(\underline{{ BR }}(f)\) have non-empty values and are increasing in \(f\) (pointwise). Now, we prove that they are measurable selections of \({ BR }(f)\). Consider \(\overline{{ BR }}(f)\) and let \(\overline{{ BR }}(f)\,{:=}\,(\bar{f}_{1},\ldots ,\bar{f}_{n})\). The Measurable Maximum Theorem implies that the function \(\bar{f}_{i}(\cdot )\,{:=}\,\max \{a_{i} | a\in { BR }(f)(\cdot )\}\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable for any \(f\) and \(i=1,\ldots ,n\); hence, \(\overline{{ BR }}(f)\) is also \(\mathcal {L}\otimes \mathcal {S}\)-measurable. Analogously, we prove that \(\underline{{ BR }}(f)\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable.

We now state the main result of this section concerning the existence of equilibria in the sense of Definition 2. For this result, one should keep in mind that the space of measurable functions is only a countably chain complete poset under the pointwise partial order. Therefore, in order to prove our new existence theorem and provide the sharpest characterization of the set of Bayes–Schmeidler equilibria, we must apply a generalized version of the Tarski–Kantorovich Theorem (see Theorem 4.2 in Dugundji and Granas 1982, as well as Theorem 5 in the “Appendix”).

Theorem 2

(Existence) Let Assumption 4 be satisfied. Then, there exists the greatest \((\overline{f}^{*})\) and the least \((\underline{f}^{*})\) Bayesian Nash–Schmeidler equilibrium. Moreover, the extremal equilibria can be computed by a successive approximation: i.e., \(\lim _{n\rightarrow \infty }\overline{{ BR }}^{n}( \overline{m})=\overline{f}^{*}\) and \(\lim _{n\rightarrow \infty } \underline{{ BR }}^{n}(\underline{m})=\underline{f}^{*}\), where \(\overline{m} ,\,\underline{m}\) are the greatest and the least elements of \({M}_{\varLambda }\), respectively.

Proof

Lemma 5 implies that both \(\overline{{ BR }},\underline{{ BR }}:{M}_{\varLambda }\rightarrow {M}_{\varLambda }\) are well defined. We claim that \(\overline{{ BR }}\) is inf-preserving, while \(\underline{{ BR }}\) is sup-preserving. To see this, take a decreasing sequence \(\{f_{n}\},\,f_{n}\rightarrow f,\, f \in {M}_{\varLambda }\). Observe that \(f=\bigwedge f_{n}\). Since \(\overline{{ BR }}\) is increasing, \(\bigwedge \overline{{ BR }}(f_{n})=\lim _{n\rightarrow \infty }\overline{{ BR }}(f_{n})\). On the other hand, \(\overline{{ BR }}(\bigwedge f_{n})=\overline{{ BR }}(f)\). It is therefore sufficient to show that \(\lim _{n\rightarrow \infty }\overline{{ BR }}(f_{n})=\overline{{ BR }}(f)\). By Assumption 4(iv) and the Lebesgue Dominated Convergence Theorem,

$$\begin{aligned} \lim _{n\rightarrow \infty }\bigvee \left\{ \arg \max _{a\in \tilde{A}(\alpha ,s)}v(\alpha ,s ,f_{n},a)\right\} =\bigvee \left\{ \arg \max _{a\in \tilde{A}(\alpha ,s )}v(\alpha ,s ,f,a)\right\} , \end{aligned}$$

for all \((\alpha , s) \in \varLambda \times S\). Therefore, \(\lim _{n\rightarrow \infty }\overline{{ BR }}(f_{n})=\lim _{n\rightarrow \infty }\bigvee { BR }(f_{n})=\bigvee { BR }(f)=\overline{{ BR }}(f)\), and so \(\overline{{ BR }}(\bigwedge f_{n})=\bigwedge \overline{{ BR }}(f_{n})\). Analogously, we prove that \(\underline{{ BR }}\) is sup-preserving.

As \(M_{\varLambda }\) is a countably chain complete poset, by the generalization of the Knaster–Tarski Theorem (see Theorem 5 in the “Appendix”), \(\overline{{ BR }}\) (respectively, \(\underline{{ BR }}\)) has the greatest (respectively, the least) fixed point.

Denote the greatest fixed point of \(\overline{{ BR }}\) by \(\overline{f}^{*}\) and the least fixed point of \(\underline{{ BR }}\) by \(\underline{f}^{*}\). Any equilibrium \(f_0\) satisfies \(\underline{{ BR }}(f_{0})\le f_{0}\le \overline{{ BR }}(f_{0})\); hence, by the Knaster–Tarski Theorem, \(\underline{f}^{*}=\bigwedge \{f\in M_{\varLambda } | \underline{{ BR }}(f) \le f\} \le f_0 \le \bigvee \{f \in M_{\varLambda } | f\le \overline{{ BR }}(f)\} = \overline{f}^{*}\), which completes the proof.
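
To illustrate the extremal best-reply iteration behind Theorem 2, the following sketch (a minimal illustration under stated assumptions, not the paper's construction: it discretizes players, states, and a one-dimensional action grid sorted in increasing order, and takes the interim payoff `v` and the feasibility mask `feasible` as hypothetical user-supplied callables) computes one application of \(\overline{{ BR }}\) by selecting, for each \((\alpha ,s)\), the largest maximizer.

```python
import numpy as np

def greatest_best_reply(v, f, alphas, states, actions, feasible):
    """One application of the extremal operator: for each (alpha, s), pick
    the largest maximizer of a |-> v(alpha, s, f, a) over the feasible
    actions.  `f` is an array of shape (len(alphas), len(states)) holding
    the current action profile, `actions` is a 1-D grid sorted in
    increasing order, and `feasible(i, j)` returns a boolean mask over it."""
    f_new = np.empty_like(f)
    for i in range(len(alphas)):
        for j in range(len(states)):
            mask = feasible(i, j)
            payoffs = np.array([v(i, j, f, a) if ok else -np.inf
                                for a, ok in zip(actions, mask)])
            best = np.flatnonzero(payoffs == payoffs.max())
            f_new[i, j] = actions[best[-1]]   # greatest maximizer
    return f_new
```

Iterating this map from the joint profile that assigns every player her largest feasible action mirrors the successive approximation \(\lim _{n\rightarrow \infty }\overline{{ BR }}^{n}(\overline{m})=\overline{f}^{*}\) stated in the theorem.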

A few comments on Theorem 2 are in order. First of all, our existence theorem differs from those existing in the literature with respect to the space of equilibrium objects. That is, we prove existence of Bayesian Nash–Schmeidler equilibria in measurable strategies, which represent a broader class of strategies than those studied in Balder and Rustichini (1994) and Kim and Yannelis (1997), who analyzed Bochner integrable strategies.

Second, papers in the literature prove the existence of Bayesian Nash–Schmeidler equilibrium based on an application of the Fan–Glicksberg fixed point theorem. In contrast, in our argument for existence, we require the equilibrium strategy space to be a countably chain complete poset, which is a fairly weak notion of order completeness. This allows us to obtain results in a larger space of admissible equilibrium functions. Of course, we do not obtain these new results without cost; our approach requires several additional assumptions that are not necessary in any of the aforementioned papers for the question of existence. That is, none of these papers requires a lattice structure for action sets, quasi-supermodularity or single-crossing differences of payoff functions, etc.

Third, Balder and Rustichini (1994) and Kim and Yannelis (1997) also analyze large games without the assumption that the set of players is represented by a measure space (hence, without the measurability assumption on the set of players). When we apply our methods to their alternative notion of equilibrium in the game, our results become even stronger. That is, in the case where the measurability requirement for equilibrium is dropped, the set of Nash equilibria is a non-empty complete lattice under the pointwise partial order, by a simple application of Veinott’s (1992) or Zhou’s (1994) version of Tarski’s theorem. In such a case, one can weaken the payoff continuity assumption to mere upper semi-continuity of \(r\) on \(A\), as well as drop the order continuity Assumption 4(iv). It is the assumption that players are represented by a measure space that requires additional continuity-type assumptions on player payoffs, a feature that is not present in games with strategic complementarities and a finite number of players.

Fourth, the partial order imposed on \(M(\varLambda \times S)\), and used in Assumption 4, is defined “everywhere”, i.e., \(f^{\prime }\ge f\) iff \(\forall (\alpha ,s)\in \varLambda \times S,\,f^{\prime }(\alpha ,s)\ge f(\alpha ,s)\). This point is important to note. Alternatively, we could consider the case where we relax the order to \(\succeq _{a.e.}\), i.e., \(f^{\prime }\succeq _{a.e.}f\) iff \(f^{\prime }(\alpha ,s)\ge f(\alpha ,s)\), \(\lambda \otimes \mu \)-a.e. For this alternative partial order, a few important comments should be noted. First of all, if we let \(\hat{M}(\varLambda \times S)\) denote the set of equivalence classes of functions in \(M(\varLambda \times S)\) with respect to \(\lambda \otimes \mu \), then \((\hat{M}(\varLambda \times S),\succeq _{a.e.})\) is a complete lattice (see Vives 1990, Lemma 6.1), but the greatest and the least elements in \(\hat{M}(\varLambda \times S)\) are unique only up to equivalence classes. Additionally, the assumption concerning single-crossing differences of \(r\) in \((a,f)\) with respect to \(\succeq _{a.e.}\) is significantly stronger in this case than the corresponding monotonicity in the “everywhere” pointwise order. In particular, in such a situation, for \(f\simeq _{a.e.}f^{\prime }\), we have \({ BR }(f)={ BR }(f^{\prime })\). Such an assumption is satisfied, for example, in a class of aggregative games. Further, under this latter partial order, the set of Nash equilibria of \(\varGamma \) is a non-empty, complete lattice in \(\hat{M}(\varLambda \times S)\), and, in this case, we do not require the (order-) continuity assumption in Assumption 4(iv) imposed on payoffs. However, equilibria will also only be characterized relative to equivalence classes. Hence, with a stronger assumption concerning the order in which \(r\) has single-crossing differences, we can work with weaker continuity properties of extremal best reply maps and recover a complete lattice of equilibria (as opposed to a countably chain complete partially ordered set of equilibria). Finally, as argued in the remainder of the paper, analyzing games on equivalence classes of functions is straightforward in many applications.Footnote 30

4.3 Monotone comparative statics

Similarly to the literature concerning complete information quasi-supermodular games with a finite number of players, we may consider a parameterized version of the game defined above and determine how its equilibria vary with respect to the parameter. Along those lines, let \(\varTheta \) denote a partially ordered space of parameters \(\theta \). For any fixed \(\theta \in \varTheta \), define the game

$$\begin{aligned} \varGamma (\theta )\,{:=}\,\{(\varLambda ,\mathcal {L},\lambda ),(S,\mathcal {S},\mu ),A, \tilde{A}(\theta ,\cdot ),r(\theta ,\cdot ),\{\pi _{\alpha },\mathcal {S} _{\alpha }\}_{\alpha \in \varLambda }\}. \end{aligned}$$

Therefore, for each \(\theta \), the game \(\varGamma (\theta )\) is defined as in the first part of this section. In order to determine comparative statics of equilibria of the game, we impose the following assumptions.

Assumption 5

For any \(\theta \in \varTheta \), let \( \varGamma (\theta )\) satisfy Assumption 4. Moreover, let

  1. (i)

    \(\tilde{A}\) be increasing in the Veinott strong set order on \(\varTheta \) and complete sublattice-valued;Footnote 31

  2. (ii)

    \(r\) have single-crossing differences jointly in \((a,\theta )\);

  3. (iii)

    family \(\{\varDelta (\cdot ,s)\}_{s\in S}\), with \(\varDelta (\theta ,s)\,{:=}\,r(\theta ,\alpha ,s,f(\cdot ,s),a^{\prime })-r(\theta ,\alpha ,s,f(\cdot ,s),a)\), obey the signed-ratio monotonicity for any two \( a^{\prime }\), \(a\in A\), \(a^{\prime }\ge a\) and \(f\in M(\varLambda \times S)\).

The next result follows from Corollary 10. For any \( \theta \in \varTheta \), let \(\overline{f}^{*}(\theta )\) (resp. \(\underline{f }^{*}(\theta ))\) be the greatest (respectively, the least) equilibrium of \(\varGamma (\theta )\).

Corollary 3

Let Assumptions 4 and 5 be satisfied. Then, \(\overline{f}^{*}(\cdot )\) and \(\underline{f}^{*}(\cdot )\) are increasing on \(\varTheta \).

Proof

By Assumptions 4, 5, \( \overline{{ BR }}(f)\) (respectively, \(\underline{{ BR }}(f)\)) is increasing in \( \theta \) and inf-preserving (respectively, sup-preserving). By Corollary 10, \(\overline{f}^{*}(\cdot )\) and \( \underline{f}^{*}(\cdot )\) are increasing on \(\varTheta \).

5 Applications and extensions

We now present some economic applications of our results. In particular, we discuss applications to riot games (or binary choice games), beauty contests, and common value auctions. The example on common value auctions is of particular interest, as it shows how we can extend our results to games with weaker forms of complementarities, as used, for example, in the work of Li Calzi and Veinott (1992).

5.1 Riot games

Our first example is a version of the riot game presented in Atkeson (2000), which is a continuum version of a binary choice game in the sense of Brock and Durlauf (2001). These games have also found extensive empirical applications in the recent literature on analyzing the nature of equilibrium social interactions (e.g., see Blume et al. 2010; Scheinkman Undated). The game studies the aggregate behavior of a potentially angry crowd that faces the riot police with the mandate of quelling collective violent actions. In this game, each of the demonstrators decides individually whether to fight the police or not (i.e., riot or not). If enough people join the fight, the riot police is overwhelmed by the rioters, and each rioter gets some loot \(W>0\). Otherwise, if the riot police contains the riot, each rioter gets arrested with payoff \(L<0\). Individuals who choose not to fight get a safe payoff of 0 in either situation.Footnote 32

In our version of the game, the ability of the riot police to control the crowd depends on the state of the world \(s\in S\) and is summarized by a function \(p:S\rightarrow \mathbb {R}\), which indexes the fraction of the crowd that must riot in order for the rioters to (collectively) overwhelm the police. To make this example more general, we assume that \(p\) may take values outside the unit interval. Therefore, if \(p(s)>1\), the police always contains the riot (regardless of the number of people joining the fight), while it always fails to contain the riot when \(p(s)<0\). We should mention that, in the case \(p:S\rightarrow [0,1]\), some trivial equilibria arise, as will be discussed later in this section.

5.1.1 Existence of equilibrium

Let \((S,\mathcal {S},\mu )\) be the measure space of the states of the world, and let the private information of each individual \(\alpha \) be represented by a sub \(\sigma \)-field \(\mathcal {S}_{\alpha }\) generated by a countable partition. Assume that each rioter, regardless of the state, chooses action \(a=1\) when willing to join the fight, and \(a=0\) otherwise. Then, \(\forall (\alpha ,s)\in \varLambda \times S\), we have \(\tilde{A}(\alpha ,s)=\{0,1\}\). Moreover, let \(\tau \in \hat{T}_{d}\) be an equivalence class of functions defined in Sect. 3, mapping the set \(S\) into the set of all distributions on \(\varLambda \times \{0,1\}\), denoted by \({D}\), where \(\tau (\{(\alpha ,a)\in \varLambda \times A\ | a=1\}|s)\) is the measure of all players joining the riot in state \(s\). The ex-post payoff of each individual is \(r:\varLambda \times S\times {D}\times A\rightarrow \mathbb {R}\), given by

$$\begin{aligned} r(\alpha ,s,\tau (\cdot |s),a)\,{:=}\,a\left[ (W-L)\chi _{R(\tau )}(s)+L\right] , \end{aligned}$$

where \(\chi \) is the indicator function,Footnote 33 with

$$\begin{aligned} R(\tau )\,{:=}\,\{s\in S | \tau \left( \left\{ \left( \alpha ,a^{\prime }\right) \in \varLambda \times A | a^{\prime }=1\right\} |s\right) \ge p(s)\}. \end{aligned}$$

It is easy to verify that \(r(\alpha ,s,\tau (\cdot |s),a)\) has single-crossing differences in \((a,\tau )\) for any \(s\in S\), as

$$\begin{aligned} (W-L)\chi _{R(\tau )}(s)+L, \end{aligned}$$

is increasing in \(\tau (\cdot |s)\) (in fact it is piecewise constant).

We next need to show that the family of functions \(\{\varDelta (\cdot ,s)\}_{s\in S}\) with \(\varDelta (\tau ,s)\,{:=}\,r(\alpha ,s,\tau (\cdot |s),a^{\prime })-r(\alpha ,s,\tau (\cdot |s),a)\) satisfies signed-ratio monotonicity for any \(a^{\prime }\), \(a\in A\), \(a^{\prime }\ge a\). In fact, we only need to show this condition holds when an agent changes her strategy from \(a=0\) to \(a=1\) (as, otherwise, the condition holds trivially). Along these lines, observe that we have

$$\begin{aligned} \varDelta (\tau ,s)\,{:=}\,(W-L)\chi _{R(\tau )}(s)+L. \end{aligned}$$

Hence, \(\varDelta (\tau ,s)\) takes only the two values \(L\) and \(W\), and \(\varDelta (\tau ,s)<0\) only if \(\chi _{R(\tau )}(s)=0\). Moreover, for \(\tau ^{\prime }\succeq _{\hat{T}}\tau \) we have \(R(\tau )\subseteq R(\tau ^{\prime })\), so \(\varDelta (\tau ,s^{\prime })>0\) implies \(\varDelta (\tau ^{\prime },s^{\prime })=W\). Then, for any \(s^{\prime },\,s\in S\) such that \(\varDelta (\tau ,s)\le 0\) and \(\varDelta (\tau ,s^{\prime })>0\), and any two measures \(\tau ,\tau ^{\prime }\in \hat{T}_{d},\,\tau ^{\prime }\succeq _{\hat{T}}\tau \), we have

$$\begin{aligned} -\frac{\varDelta (\tau ,s)}{\varDelta (\tau ,s^{\prime })}=-\frac{L}{W}\ge -\frac{ \varDelta (\tau ^{\prime },s)}{W}=-\frac{\varDelta (\tau ^{\prime },s)}{\varDelta (\tau ^{\prime },s^{\prime })}. \end{aligned}$$

Therefore, the signed-ratio monotonicity condition holds for \(\{\varDelta (\cdot ,s)\}_{s\in S}\), and Assumption 1 is satisfied.

Then, the interim payoff of an agent is

$$\begin{aligned} v(\alpha ,s,\tau ,a)\,{:=}\,\int \limits _{\varepsilon _{\alpha }(s)}a\left[ (W-L)\chi _{R(\tau )}(s^{\prime })+L\right] \pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\mathrm{d}\mu (s^{\prime }), \end{aligned}$$

where \(\varepsilon _{\alpha }(s)\) is defined as in Sect. 3. By Theorem 1, there exist the greatest and the least distributional Bayesian Nash equilibria of the game (with respect to \(\succeq _{\hat{T}}\)), as defined in Sect. 3.

Next, we consider conditions for the existence of extremal Bayesian Nash–Schmeidler equilibria in the sense of Sect. 4. Let \({M}(\varLambda \times {S})\) be the set of \(\mathcal {L}\otimes \mathcal {S}\)-measurable functions \(f:\varLambda \times S\rightarrow \{0,1\}\) endowed with the pointwise order. In this setting, the fraction of rioters that join the fight is given by \(\int _{\varLambda }f(\alpha ,s)\lambda (\mathrm{d}\alpha )\), which is pointwise increasing in \(f\). Define

$$\begin{aligned} F(f)\,{:=}\,\left\{ s\in S \bigg | \int \limits _{\varLambda }f(\alpha ,s)\lambda (\mathrm{d}\alpha )\ge p(s)\right\} . \end{aligned}$$

The ex-post payoff is then

$$\begin{aligned} r(\alpha ,s,f,a)\,{:=}\,a[(W-L)\chi _{F(f)}(s)+L]. \end{aligned}$$

Interestingly, this payoff function \(r(\alpha ,s,f,a)\) is not order continuous with respect to \(f\) (hence, Assumption 4 is not satisfied). Therefore, we cannot directly apply Theorem 2 to obtain the existence of equilibrium in \(M(\varLambda \times S)\).
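
To see where order continuity fails, note that if \(f_{n}\uparrow f\) pointwise and at some state \(s\) we have \(\int _{\varLambda }f_{n}(\alpha ,s)\lambda (\mathrm{d}\alpha )<p(s)=\int _{\varLambda }f(\alpha ,s)\lambda (\mathrm{d}\alpha )\) for every \(n\), then \(\chi _{F(f_{n})}(s)=0\) for all \(n\) while \(\chi _{F(f)}(s)=1\), so that \(r(\alpha ,s,f_{n},1)=L\) does not converge to \(r(\alpha ,s,f,1)=W\). (For a concrete instance, take, say, \(\varLambda =[0,1]\) with Lebesgue measure, \(f_{n}(\alpha ,s)=\chi _{[0,1-1/n]}(\alpha )\), and a state with \(p(s)=1\); this is only an illustrative specification, not part of the model above.)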

However, if each player’s payoff is constant on every equivalence class of functions that are equal \(\lambda \)-a.e., there does exist an equilibrium in this game defined on the equivalence classes of \(\mathcal {L}\otimes \mathcal {S}\)-measurable functions. Moreover, the set of such equilibria constitutes a complete lattice. Therefore, aside from highlighting how to check the conditions of our theorems in an important class of examples, this example also shows the importance of distinguishing between partial orders in the context of Bayesian Nash–Schmeidler equilibrium, even for the question of existence.

Finally, we should note that in some cases the least and the greatest equilibria of the game might be trivial. Observe that, once \(p:S\rightarrow [0,1]\), the equivalence class \(\overline{\tau }^{*}\) where \(\overline{\tau }^*(\{(\alpha ,a) \in \varLambda \times A | a=1\}|s)=1\), \(\mu \)-a.e., is the greatest equilibrium, while \(\underline{\tau }^{*}\) such that \(\underline{\tau }^*(\{(\alpha ,a) \in \varLambda \times A | a=0\}|s)=1\), \(\mu \)-a.e., is the least equilibrium.

5.1.2 Difficulties with uniqueness of equilibrium

One important question concerning the riot game is the uniqueness of equilibrium.Footnote 34 In the original paper by Atkeson (2000), at the beginning of the game, a signal \(s\in S\) is drawn from the normal distribution. Then, each player \(\alpha \) observes a distorted value of the signal \(x_{\alpha }=s+\zeta _{\alpha }\), where \(\zeta _{\alpha }\) is drawn from a normal distribution, identically and independently across players. In Atkeson (2000), as well as Morris and Shin (2001), equilibrium is defined by a cutoff signal \(x^{*}\) at which each player is indifferent between joining and withdrawing from the riot. Moreover, the probability of drawing \(x^{*}\), given state \(s\), is \(p(s)\), which implies that the measure of rioters in equilibrium is equal to the strength of the police. Under certain assumptions imposed on the distributions governing \(s\) and \(\zeta _{\alpha }\), Morris and Shin (2001) claim uniqueness of such an equilibrium.

In our framework, the question of uniqueness of equilibrium raises two main issues. First, the proof by Morris and Shin (2001) is based on the ex-ante symmetry of players, whose expectations concerning \(s\) and \(x_{\alpha }\) before the game are identical. In fact, knowing that players are symmetric and that the Law of Large Numbers holds for a continuum of players enables the agents to predict the cutoff value of the observed signal. Second, in our model, the players have incomplete information about the true signal, which cannot be distinguished from other elements of the same cell of the sub \(\sigma \)-field; this makes the Bayesian inference about the true state of the world different from the case in which agents receive a distorted signal. As it turns out, these two issues are crucial for uniqueness of equilibria in the presented game.

For the sake of this discussion, we shall concentrate on uniqueness in the symmetric case of the game, where every player has the same sub \(\sigma \)-field. Since players do not receive a distorted signal in our framework, we have to present an alternative definition of equilibrium to the one analyzed in Morris and Shin (2001) and Atkeson (2000). In our understanding, an equilibrium is an element \(s^{*}\in S\) such that the expected utility of each player is equal to zero, and the measure of rioters is equal to \(p(s^{*})\). We shall assume that the sub \(\sigma \)-algebra of each player is generated by a partition of \(S\) into convex subsets, and that \(s\) is drawn from a normal distribution. We should mention that even though the symmetric case of the game is very simple, it is easy to show that the equilibrium is not unique.Footnote 35 In order to prove this, assume that \(s^{*}\) exists. Then, since players are symmetric and determine their strategies using the same cutoff value,

$$\begin{aligned} (W-L)\text {Prob}(s<s^{*}|\varepsilon (s^{*}))+L=0, \end{aligned}$$

must hold. Thus, we have

$$\begin{aligned} \text {Prob}(s<s^{*}|\varepsilon (s^{*}))=-\frac{L}{W-L}. \end{aligned}$$

Since \(-\frac{L}{W-L}\in (0,1)\) and \(\text {Prob}(s<s^{*}|\varepsilon (s^{*}))\) is continuous and increasing in \(s^{*}\) on \(\varepsilon (s^{*})\), ranging from 0 to 1, for every element \(\varepsilon \) of the partition generating the sub \(\sigma \)-algebra \(\mathcal {S}_{\alpha }\) there exists an element \(s^{*}\in \varepsilon \) at which the interim payoff of players is equal to zero. The agents are therefore indifferent between joining the riot and withdrawing. Hence, as long as \(p(s^{*})\in [0,1]\) for each element of the sub \(\sigma \)-algebra, there exists an equilibrium value \(s^{*}\). The above argument also shows that, in general, this game has multiple equilibria.Footnote 36
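
A small numerical illustration of this multiplicity, under the hypothetical specification that \(s\) is standard normal, \(W=1\), \(L=-1\) (so \(-L/(W-L)=\frac{1}{2}\)), and the common sub \(\sigma \)-field is generated by the partition of the real line into unit intervals, can be obtained by solving the indifference condition cell by cell:

```python
from scipy.stats import norm
from scipy.optimize import brentq

# Illustrative only: solve Prob(s < s* | cell) = -L/(W-L) inside each cell
# of a hypothetical partition of the real line into unit intervals, with the
# signal s standard normal and W = 1, L = -1 (so the target ratio is 1/2).
W, L = 1.0, -1.0
c = -L / (W - L)
cells = [(k, k + 1) for k in range(-3, 3)]

for lo, hi in cells:
    mass = norm.cdf(hi) - norm.cdf(lo)
    g = lambda x: (norm.cdf(x) - norm.cdf(lo)) / mass - c
    s_star = brentq(g, lo, hi)          # one indifference point per cell
    print(f"cell [{lo},{hi}): s* = {s_star:.4f}")
```

Every cell contains its own indifference point, and, as argued above, each such point with \(p(s^{*})\in [0,1]\) gives rise to an equilibrium; this is exactly the source of multiplicity.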

The same problem occurs when analyzing equilibria defined in terms of players’ strategies, as in Sects. 3 and 4. Even in the simplest cases, the game exhibits multiple equilibria, as illustrated in the following example.

Example 1

Consider an example of a riot game where the set of signals is \(S=[0,1]\), and its elements are distributed uniformly. A measure \(\frac{1}{2}\) of players is endowed with the sub \(\sigma \)-field \(\mathcal {S}_{1}=\{\emptyset ,S\}\), while the information structure of the remaining players is \(\mathcal {S}_{2}=\{ \emptyset ,S, [0,\frac{1}{2}), [\frac{1}{2},1]\}\). The strength of the police is determined by the affine function \(p(s)=3s-1\). Finally, let \(-\frac{L}{W-L}=\frac{1}{2}\).

The game has at least two equilibria. The least one:

$$\begin{aligned} \underline{\tau }^{*}(\{(\alpha ,a) \in \varLambda \times A | a =1\}|s)=\left\{ \begin{array}{lcl} \frac{1}{2} &{} \text { for } &{} s\in [0,\frac{1}{2}), \\ 0 &{} \text { for } &{} s\in [\frac{1}{2},1], \end{array} \right. \end{aligned}$$

and the greatest one:

$$\begin{aligned} \overline{\tau }^{*}(\{(\alpha ,a) \in \varLambda \times A | \ a=1\}|s)=\left\{ \begin{array}{lcl} 1 &{} \text { for } &{} s\in [0,\frac{1}{2}), \\ \frac{1}{2} &{} \text { for } &{} s\in [\frac{1}{2},1]. \end{array} \right. \end{aligned}$$

Hence, even when we constrain our attention to very simple games, equilibria are not unique.
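
As a quick numerical sanity check of Example 1 (purely illustrative, taking \(W=1\) and \(L=-1\) so that \(-L/(W-L)=\frac{1}{2}\)), the following script discretizes \(S=[0,1]\) and computes interim payoffs of rioting under each candidate profile; a value of approximately zero signals indifference, so that both joining and abstaining are best responses for the uninformed half.

```python
import numpy as np

W, L = 1.0, -1.0
s_grid = np.linspace(0.0, 1.0, 10_001)          # uniform signal on [0,1]
p = 3.0 * s_grid - 1.0                          # strength of the police

def check(name, informed_low, informed_high, uninformed):
    """informed_low/high: riot decision of the informed half on [0,1/2) and
    [1/2,1]; uninformed: riot decision of the half with trivial information."""
    informed = np.where(s_grid < 0.5, informed_low, informed_high)
    mass = 0.5 * informed + 0.5 * uninformed    # measure of rioters in state s
    success = mass >= p                          # riot overwhelms the police
    ex_post = (W - L) * success + L              # ex-post payoff of rioting
    print(f"{name}: uninformed {np.mean(ex_post):+.3f}, "
          f"informed on [0,1/2) {np.mean(ex_post[s_grid < 0.5]):+.3f}, "
          f"informed on [1/2,1] {np.mean(ex_post[s_grid >= 0.5]):+.3f}")

check("least equilibrium   ", 1.0, 0.0, 0.0)     # informed riot iff s < 1/2
check("greatest equilibrium", 1.0, 0.0, 1.0)     # uninformed also riot
```

In both cases rioting is strictly optimal on \([0,\frac{1}{2})\) and strictly suboptimal on \([\frac{1}{2},1]\) for the informed half, while the uninformed half is indifferent, which is consistent with both profiles being equilibria.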

5.2 Beauty contests

Consider a version of the beauty contest game (e.g., see Acemoglu and Jensen 2010). Suppose that the true value of a firm is unknown, but the players (who constitute the stock market) receive a common signal which has to be interpreted with respect to their private information in order to evaluate the asset of interest. Given a signal \(s\), each player \(\alpha \) makes a public prediction about the true value by announcing \(a\in \tilde{A}(\alpha ,s)\subset \mathbb {R}\), where \(\tilde{A}\) is well defined and convex-valued. Every agent is interested both in being close to his personal understanding of the signal and in being close to the predictions of other players. Hence, the ex-post payoff can be defined as

$$\begin{aligned} r(\alpha ,s,\phi ,a)\,{:=}\,-\left[ h(\alpha ,|a-H(\alpha ,s)|)+g(\alpha ,|a-G(\alpha ,\phi )|)\right] , \end{aligned}$$

where \(h,\,g:\varLambda \times \mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\) are concave and decreasing on \(\mathbb {R}_{+}\), \(H:\varLambda \times S\rightarrow \mathbb {R}\) is \(\mathcal {L}\otimes \mathcal {S}\)-measurable, \(G:\varLambda \times {D}\rightarrow \mathbb {R}\) is \(\mathcal {L}\otimes \mathcal {D}\)-measurable and increasing on \({D}\), while \(\varepsilon _{\alpha }(s)\) and \(\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\) are defined as in Sect. 3.

Let \(\hat{T}_{d}\) denote the set of equivalence classes of functions \(\tau :S \rightarrow D\) endowed with ordering \(\succeq _{\hat{T}}\) defined in Sect. 3.1. The interim payoff of player \(\alpha \) is then

$$\begin{aligned} v(\alpha ,s,\tau ,a)= \int \limits _{\varepsilon _{\alpha }(s)}r(\alpha , s^{\prime }, \tau (\cdot |s^{\prime }), a) \pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\mathrm{d}\mu (s^{\prime }). \end{aligned}$$

To show that under the above assumptions the interim payoff \(v\) has single-crossing differences in \((a,\tau )\), we prove the following lemma.

Lemma 6

Let \(f:\mathbb {R}\rightarrow \mathbb {R}\) be a decreasing, concave function. Then, for a convex \(X\subset \mathbb {R}\) and some \( S\subset \mathbb {R}\), the function \(g:X\times S\rightarrow \mathbb {R}\) with \( g(x,s)\,{:=}\,f(|x-s|)\) has increasing differences in \((x,s)\).

By the above lemma, the ex-post payoff function \(r\) has increasing differences in \((a,\phi )\). Hence, it has single-crossing differences, and the family of functions \(\{\varDelta (\cdot ,s)\}_{s\in S}\), where \(\varDelta (\tau ,s)\,{:=}\,r(\alpha ,s,\tau (\cdot |s),a^{\prime })-r(\alpha ,s,\tau (\cdot |s),a)\), satisfies the signed-ratio monotonicity condition for any \(a^{\prime },\,a\in A\). Therefore, Assumption 1 is satisfied, and, by Theorem 1, the set of distributional Bayesian Nash equilibria admits the greatest and the least element.Footnote 37
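
As a quick brute-force sanity check of Lemma 6 (purely illustrative: the concrete choice \(f(t)=-t^{2}\), a concave, decreasing function on \(\mathbb {R}_{+}\), and the grid below are hypothetical), one can verify increasing differences of \(g(x,s)=f(|x-s|)\) numerically:

```python
import numpy as np
from itertools import product

# Brute-force check of increasing differences of g(x, s) = f(|x - s|) for a
# hypothetical concave, decreasing f on R_+; here f(t) = -t**2.
f = lambda t: -t**2
g = lambda x, s: f(abs(x - s))

grid = np.linspace(-2.0, 2.0, 21)
violations = 0
for x, x2, s, s2 in product(grid, repeat=4):
    if x2 >= x and s2 >= s:
        # increasing differences: g(x2, s2) - g(x, s2) >= g(x2, s) - g(x, s)
        if g(x2, s2) - g(x, s2) < g(x2, s) - g(x, s) - 1e-12:
            violations += 1
print("violations of increasing differences:", violations)   # expect 0
```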

Additionally, equilibrium in the game need not be defined on a space of distributions. That is, once the mapping \(G\) is defined on \(M(\varLambda \times S)\) and order continuous, the game can be generalized as in Sect. 4, and we obtain the greatest and the least equilibrium in the sense of Schmeidler (1973).

Finally, even though in our example the strategies of players are restricted to a subset of \(\mathbb {R}\), the game can be extended to multi-dimensional strategy spaces. Let \(\Vert \cdot \Vert \) denote the taxicab metric defined on \(\mathbb {R}^{n}\). Then, \(f:X\times T\rightarrow \mathbb {R}\), \(f(x,t)\,{:=}\,-\Vert x-t\Vert \), is supermodular on \(X\) and has increasing differences in \((x,t)\) on \(X\times T\) [see Topkis 1998, Example 2.6.2(g)].Footnote 38 Hence, for any \(s\in S\), \(H:\varLambda \times S\rightarrow \mathbb {R}^{n}\), and \(G:\varLambda \times {D}\rightarrow \mathbb {R}^{n}\), increasing on \({D}\),

$$\begin{aligned} r(\alpha ,s,\phi ,a)\,{:=}\,-\left[ \Vert a-H(\alpha ,s)\Vert +\Vert a-G(\alpha ,\phi )\Vert \right] , \end{aligned}$$

is supermodular on \(A\) and has increasing differences in \((a,\phi )\). As a result, the assumptions stated in Sect. 3 for the existence of extremal distributional Bayesian Nash equilibrium in strategies are now satisfied.

5.3 Common value auctions

Assume that a measure space of agents attends a sealed-bid, common-value, multi-unit, discriminatory auction. There is a measure \(G\in \mathbb {R}_{+}\) of homogeneous objects to be auctioned, but each player may buy at most one unit of the good. The value of each object is \(s\in S\subset \mathbb {R}\). Each player is able to perceive it only with respect to his private knowledge.Footnote 39

Let \((S,\mathcal {S},\mu )\) be a measure space of values of the good. Since the auction is discriminatory, each player pays a price equal to his bid. In this case, let \(u(\alpha ,s,a)\) denote the payoff of player \(\alpha \) when the value of the good is \(s\) and his winning bid is \(a\in \tilde{A}(\alpha ,s)\). Assume \(u\) to be strictly decreasing in \(a\) and that every player with a losing bid receives a payoff equal to zero. Finally, \(A\) is the set of all possible bids, and \(\tilde{A}(\alpha ,s)\subset A\) is a compact subset of \(\mathbb {R_{+}}\). The interim payoff of each player is

$$\begin{aligned} v(\alpha ,s,\tau ,a)\,{:=}\,\int \limits _{\varepsilon _{\alpha }(s)}u(\alpha ,s^{\prime },a)\chi _{R(\tau )}(a,s^{\prime })\pi _{\alpha }(s^{\prime }|\varepsilon _{\alpha }(s))\mathrm{d}\mu (s^{\prime }), \end{aligned}$$

where \(\chi \) is the indicator function, and

$$\begin{aligned} R(\tau )\,{:=}\,\{(a,s) \in A \times S | \tau (\{(\alpha ,a^{\prime })\in \varLambda \times A|a^{\prime }\ge a\}|s) \le G\}. \end{aligned}$$

In the following example, we restrict our attention solely to increasing functions \(\tau :S\rightarrow {D}\) in \(\hat{T}_{d}\). Moreover, let \(\varepsilon _{\alpha }(s)\,{:=}\,[\underline{z}_{\alpha }(s),\overline{z}_{\alpha }(s)]\) for some increasing functions \(\underline{z}_{\alpha }\) and \(\overline{z}_{\alpha }\).Footnote 40

In the literature concerning quasi-supermodular specifications of auctions with a finite number of agents, quasi-supermodularity of the interim payoff function is typically obtained through assumptions concerning the log-supermodularity of the density function of players’ types. Importantly, notice that in this example this is not the case. In fact, it is straightforward to verify that \(u(\alpha ,s,a)\chi _{R(\tau )}(a,s)\) does not have single-crossing differences in \((a,\tau (\cdot |s))\), as the strict inequalities that must be checked for the standard single-crossing property are not preserved as \(\tau (\cdot |s)\) \(\succeq _{D}\)-increases; still, the weak inequalities are preserved. This latter implication corresponds to the payoff \(u(\alpha ,s,a)\chi _{R(\tau )}(a,s)\) having join (but not meet) up-crossing differences in \((a,\tau (\cdot |s))\).Footnote 41 Moreover, the class of functions \(\{\varDelta (\cdot ,s)\}_{s\in S}\), where \(\varDelta (\tau ,s)\,{:=}\,u(\alpha ,s,a^{\prime })\chi _{R(\tau )}(a^{\prime },s)-u(\alpha ,s,a)\chi _{R(\tau )}(a,s)\), satisfies only the join signed-ratio monotonicity for any \(a^{\prime },\, a\in A \), \(a^{\prime }\ge a\).

Given these concerns, we state the relevant result under these weaker conditions in the following lemma.

Lemma 7

Let \(u:\varLambda \times S \times A\rightarrow \mathbb {R}\) be decreasing on \(A\) for all \((\alpha ,s)\in \varLambda \times S\). Then

$$\begin{aligned} u(\alpha ,s,a)\chi _{R(\tau )}(a,s) \end{aligned}$$

has join up-crossing differences in \((a,\tau (\cdot |s))\), and the family \(\{\varDelta (\cdot , s)\}_{s \in S}\), where \(\varDelta (\tau ,s)\,{:=}\,u(\alpha ,s,a')\chi _{R(\tau )}(a',s) - u(\alpha ,s,a)\chi _{R(\tau )}(a,s)\), satisfies the join signed-ratio monotonicity for any \(a',\,a \in A,\,a' \ge a\).

By Theorem 3 and Lemma 3, there exists a well defined, isotone operator \(\overline{B}\), defined as in Sect. 3, with the greatest fixed point (say \(\overline{\tau }^{*}\)). By definition, \(\overline{\tau }^{*}\) constitutes the greatest distributional Bayesian Nash equilibrium (in monotone strategies) of the game. Since the operator \(\underline{B}\) might be neither well defined nor isotone, it cannot be determined whether the least equilibrium exists.

Another issue that needs to be addressed is the computation of the greatest distributional Bayesian Nash equilibrium (in monotone strategies) \(\overline{\tau }^{*}\). As the operator \(\overline{B}\) is inf-preserving, by Theorem 1, we have \(\lim _{n\rightarrow \infty }\overline{B}^{n}(\overline{t})=\overline{\tau }^{*}\), where \(\overline{t}\) is the greatest element of \(\hat{T}_{d}\). Hence, the iteration approximates the equilibrium distribution using a conceptually simple monotone procedure. This means that our method not only establishes the existence of the greatest equilibrium, but also presents tools for its direct computation. No similar result is available using purely topological methods.

Finally, one can modify the above game without affecting supermodular properties of the auction in question. For example, if we assume that the final price paid by winning agents is determined by some increasing aggregate \(H:{D}\rightarrow A\) (e.g., the average price of all winning bids or a quantile of the distribution of prices), then the payoffs are given by

$$\begin{aligned} u(\alpha ,s,H(\tau (\cdot |s)))\chi _{R(\tau )}(a,s). \end{aligned}$$

Given the above results, it is easy to verify that the join up-crossing differences in \((a,\tau )\) of the ex-post payoff function are preserved under aggregation. This implies that all the previous results hold.