The context of the game

Research Article · Economic Theory

Abstract

We study games of incomplete information and argue that it is important to correctly specify the “context” within which hierarchies of beliefs lie. We consider a situation where the players understand more than the analyst: It is transparent to the players—but not to the analyst—that certain hierarchies of beliefs are precluded. In particular, the players’ type structure can be viewed as a strict subset of the analyst’s type structure. How does this affect a Bayesian equilibrium analysis? One natural conjecture is that this doesn’t change the analysis—i.e., every equilibrium of the players’ type structure can be associated with an equilibrium of the analyst’s type structure. We show that this conjecture is wrong. Bayesian equilibrium may fail an Extension Property. This can occur even in the case where the game is finite and the analyst uses the so-called universal structure (to analyze the game), and even if the associated Bayesian game has an equilibrium. We go on to explore specific situations in which the Extension Property is satisfied.

Notes

  1. Given this fact, some papers replace Condition 1 with the requirement that each \(s_{i}\) is measurable. This suffices for a positive result, i.e., to establish equilibrium existence or to characterize certain behavior as consistent with equilibrium. To establish a negative result, it is important to rule out more than simply “a sufficient condition cannot be satisfied.” We thank Jeff Ely for pointing us to this important distinction.

  2. We thank Pierpaolo Battigalli for this example.

  3. It is often taken for granted that there exists a Bayesian equilibrium for some \((\varGamma ,{\mathcal {T}}^{*})\) where \(\varGamma \) is finite and \({\mathcal {T}}^{*}\) is countable. Takahashi (2009) has written a proof of this claim, making use of Glicksberg’s (1952) Theorem.

  4. Dekel et al. (2007) show an analogous result, when the parameter and action sets are finite. A proof in the broader setting can be found in the Online Appendix.

  5. Simon (2003) is an earlier paper that shows an analog of part 2. We use Hellman’s (2014) example and not Simon’s (2003) because Simon’s analysis involves a three-player game, i.e., not a simple game.

  6. To the best of our knowledge, Corollary 2 provides the first example of a finite universal Bayesian game that has no equilibrium. Certainly, the result is not a far leap from Hellman’s result. Nonetheless, we note that Hellman does not make this leap. Moreover, making the leap appears to require either a proof distinct from Hellman (2014) or a modification of Hellman’s example: Note that Hellman’s example concerns non-measurability of a strategy mapping—namely \(\overline{s}_{i}\). To obtain a failure of Bayesian equilibrium, we need non-integrability of a payoff mapping—namely \(\overline{\varPi }_{i}[c_{i},\overline{s}_{-i}]\). (After learning of our result, Hellman added a modified version of his original example; the modified Bayesian game has no Bayesian equilibrium, but it is not a universal Bayesian game.)

  7. This interpretation requires Condition 1 in Definition 12. Indeed, for this reason, the type structure must be countable. This is important from the perspective of Lemma 8.

  8. Satoru Takahashi pointed us to the fact that if a game with a countable number of players is (in a sense) “generated” by a compact and continuous game of incomplete information, then the payoff functions are nonetheless continuous. In so doing, Takahashi generalized a result in a previous version of this paper. We are very much indebted to Satoru for this contribution.

  9. There is the question of whether we can instead begin with the weaker requirement that each \(\varPi _{i}[c_{i},s_{-i}]\) is universally measurable. We do not know. In “Appendix 4”, we point out that our proof of continuity breaks down with this weaker assumption.

References

  • Aliprantis, C., Border, K.: Infinite Dimensional Analysis: A Hitchhiker’s Guide. Springer, Berlin (2007)

  • Battigalli, P., Friedenberg, A.: Forward induction reasoning revisited. Theor. Econ. 7, 57–98 (2012a)

  • Battigalli, P., Friedenberg, A.: Forward Induction Reasoning Revisited: Working Paper Version. IGIER working paper 351 (2012b)

  • Battigalli, P., Siniscalchi, M.: Rationalization and incomplete information. Adv. Theor. Econ. 3(1) (2003)

  • Battigalli, P., Brandenburger, A., Friedenberg, A., Siniscalchi, M.: Strategic Uncertainty: An Epistemic Approach to Game Theory (Working Title) (2012)

  • Billingsley, P.: Probability and Measure. Wiley, Hoboken (2008)

  • Bogachev, V.: Measure Theory, vol. 1. Springer, Berlin (2006)

  • Brandenburger, A.: On the existence of a “complete” possibility structure. In: Basili, M., Dimitri, N., Gilboa, I. (eds.) Cognitive Processes and Economic Behavior, pp. 30–34. Routledge (2003)

  • Brandenburger, A., Dekel, E.: Hierarchies of beliefs and common knowledge. J. Econ. Theory 59, 189–198 (1993)

  • Brandenburger, A., Friedenberg, A., Keisler, H.: Admissibility in games. Econometrica 76(2), 307–352 (2008)

  • Dekel, E., Fudenberg, D., Morris, S.: Interim correlated rationalizability. Theor. Econ. 2(1), 15–40 (2007)

  • Dufwenberg, M., Stegeman, M.: Existence and uniqueness of maximal reductions under iterated strict dominance. Econometrica 70(5), 2007–2023 (2002)

  • Ely, J., Peski, M.: Hierarchies of belief and interim rationalizability. Theor. Econ. 1(1), 19–65 (2006)

  • Fremlin, D.: Measure Theory, vol. 4 (2000). ISBN 978-0-9566071-2-6

  • Friedenberg, A.: When do type structures contain all hierarchies of beliefs? Games Econ. Behav. 68(1), 108–129 (2010)

  • Friedenberg, A., Meier, M.: On the relationship between hierarchy and type morphisms. Econ Theory 46, 1–23 (2010)

  • Fristedt, B., Gray, L.: A Modern Approach to Probability Theory. Birkhäuser, Boston (1996)

  • Glicksberg, I.: A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. In: Proceedings of the American Mathematical Society, pp. 170-174 (1952)

  • Heifetz, A., Samet, D.: Topology-free typology of beliefs. J. Econ. Theory 82(2), 324–341 (1998)

  • Hellman, Z.: A game with no approximate Bayesian equilibrium. J. Econ. Theory 153, 138–151 (2014)

  • Liu, Q.: On redundant types and Bayesian formulation of incomplete information. J. Econ. Theory 144(5), 2115–2145 (2009)

  • Meier, M.: An infinitary probability logic for type spaces. Isr. J. Math. 192(1), 1–58 (2012)

  • Mertens, J., Zamir, S.: Formulation of Bayesian analysis for games with incomplete information. Int. J. Game Theory 14(1), 1–29 (1985)

  • Morris, S., Shin, H.: Global games: theory and applications. In: Advances in Economics and Econometrics: Theory and Applications, Eighth World Congress, vol. 1, pp. 56-114 (2003)

  • Peleg, B.: Equilibrium points for games with infinitely many players. J. Lond. Math. Soc. 1(1), 292 (1969)

  • Purves, R.: Bimeasurable functions. Fundam. Math. 58, 149–157 (1966)

  • Sadzik, T.: Beliefs Revealed in Bayesian-Nash Equilibrium. New York University, New York City (2011)

  • Simon, R.: Games of incomplete information, ergodic theory, and the measurability of equilibria. Isr. J. Math. 138(1), 73–92 (2003)

  • Stuart, H.: Common Belief of Rationality in the Finitely Repeated Prisoners’ Dilemma. Games Econ. Behav. 19(1), 133–143 (1997)

  • Takahashi, S.: Private Communication (2009)

Author information

Corresponding author

Correspondence to Martin Meier.

Additional information

We are indebted to David Ahn, Pierpaolo Battigalli, Adam Brandenburger, John Nachbar, Marciano Siniscalchi, and Satoru Takahashi for many helpful conversations. We also thank Adib Bagh, Tilman Börgers, Jeff Ely, Ziv Hellman, George Mailath, Stephen Morris, Antonio Penta, Konrad Podczeck, Kevin Reffett, Pablo Schenone and seminar participants at Arizona State University, Institut für Höhere Studien in Vienna, Rice University, UC Berkeley, UCLA, UC San Diego, University of Pennsylvania, the Third World Congress of the Game Theory Society, the European Econometric Society Conference, and the SAET Conference for important input. Jie Zheng and Diana MacDonald provided excellent research assistance. Parts of this project were completed while Friedenberg was visiting the UC Berkeley Economics Department and while Meier was visiting the Center for Research in Economics and Strategy (CRES) at the Olin Business School. We thank these institutions for their hospitality and CRES for financial support. Friedenberg thanks the Olin Business School and the W.P. Carey School of Business for financial support. Meier was supported by the Spanish Ministerio de Educación y Ciencia via a Ramon y Cajal Fellowship (IAE-CSIC) and Research Grant (SEJ 2006-02079).

Appendices

Appendix 1: Proofs for Sects. 2 and 3

Lemma 9

Fix separable metrizable spaces \(\varOmega _{1}, \varOmega _{2}, \varPhi _{1}\), and \( \varPhi _{2}\). Let \(f_{1} : \varOmega _{1} \rightarrow \varPhi _{1}\) and \(f_{2} : \varOmega _{2} \rightarrow \varPhi _{2}\), and write \(f : \varOmega \rightarrow \varPhi \) for the associated product map, where \(\varOmega = \varOmega _{1} \times \varOmega _{2}\) and \(\varPhi = \varPhi _{1} \times \varPhi _{2}\).

  1. If \(f_{1}\) and \(f_{2}\) are universally measurable, then f is universally measurable.

  2. If f is universally measurable, then \(f_{1}\) and \(f_{2}\) are universally measurable.

Proof

Begin with part 1. Assume \(f_{1}\) and \(f_{2}\) are universally measurable. Since \(\varPhi _{1},\varPhi _{2}\) are separable and metrizable, \({\mathcal {B}}(\varPhi _{1} \times \varPhi _{2})= {\mathcal {B}}(\varPhi _{1}) \times {\mathcal {B}}(\varPhi _{2})\). Thus, to show f is \(({\mathcal {B}}_{\text {UM}}(\varOmega _{1}\times \varOmega _{2}),{\mathcal {B}}(\varPhi _{1} \times \varPhi _{2}))\)-measurable, it suffices to show that, for each \(E_{1} \times E_{2} \in {\mathcal {B}}(\varPhi _{1}) \times {\mathcal {B}}(\varPhi _{2})\), \(f^{-1}(E_{1} \times E_{2}) \in {\mathcal {B}}_{\text {UM}}(\varOmega _{1}\times \varOmega _{2})\). By universal measurability of \(f_{1},f_{2}\), \(f^{-1}(E_{1} \times E_{2}) = f^{-1}_{1}(E_{1} ) \times f^{-1}_{2}(E_{2}) \in {\mathcal {B}}_{\text {UM}}(\varOmega _{1}) \times {\mathcal {B}}_{\text {UM}}(\varOmega _{2})\). Since \({\mathcal {B}}_{\text {UM}}(\varOmega _{1}) \times {\mathcal {B}}_{\text {UM}}(\varOmega _{2}) \subseteq {\mathcal {B}}_{\text {UM}}(\varOmega )\) (see, e.g., Fremlin 2000, p. 202), the conclusion follows.

Now assume that f is universally measurable. Fix some \(\nu _{1} \in \varDelta (\varOmega _{1})\) and some \(E_{1} \in {\mathcal {B}}(\varPhi _{1})\). We will show that there are Borel sets \(F_{1},G_{1} \in {\mathcal {B}}(\varOmega _{1})\) so that \(F_{1} \subseteq f_{1}^{-1}(E_{1}) \subseteq G_{1}\) and \(\nu _{1}(F_{1}) = \nu _{1}(G_{1})\). Thus, \(f_{1}\) is universally measurable (and, analogously, for \(f_{2}\)).

Fix \(\omega _{2}^{*} \in \varOmega _2\) and define \(k: \varOmega _{1} \rightarrow \varOmega _{1} \times \varOmega _{2}\) so that \(k(\omega _1) = (\omega _1,\omega _{2}^{*})\). Certainly, k is measurable. Define \(\mu \) as the image measure of \(\nu _{1}\) under k. Since f is universally measurable, there are Borel sets \(F,G \in {\mathcal {B}}(\varOmega _{1} \times \varOmega _{2})\) so that \(F \subseteq f^{-1}(E_{1} \times \varPhi _{2} ) \subseteq G \) and \(\mu (F)=\mu (G)\). Note that \(f^{-1}(E_{1} \times \varPhi _{2}) = f^{-1}_{1}(E_{1}) \times \varOmega _{2}\).

Since \(\mu (\varOmega _{1} \times \{ \omega _{2}^{*} \}) = 1\), we have that \(\mu (F \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \})) = \mu (F) = \mu (G) = \mu (G \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}))\), and \(F \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}) \subseteq f^{-1}_{1}(E_{1}) \times \{ \omega _{2}^{*} \} \subseteq G \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \})\). Define \(F_{1} = {\mathrm {proj\,}}_{\varOmega _{1}}(F \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}))\) and \(G_{1} = {\mathrm {proj\,}}_{\varOmega _{1}}(G \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}))\). Then, \(F_{1},G_{1} \in {\mathcal {B}}(\varOmega _{1})\) (Aliprantis and Border 2007, Theorem 4.44 and Lemma 4.46) with \(F_1 \subseteq f^{-1}_{1}(E_1) \subseteq G_1\). Moreover, since \(F \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}) = F_1 \times \{\omega _{2}^{*}\}\), \(G \cap (\varOmega _{1} \times \{ \omega _{2}^{*} \}) = G_1 \times \{\omega _{2}^{*}\}\), and \(\mu (\varOmega _{1} \times \{ \omega _{2}^{*} \}) = 1\), we have \(\nu _{1}(F_1) = \mu (F_1 \times \{\omega _{2}^{*}\}) = \mu (G_1 \times \{\omega _{2}^{*}\}) = \nu _{1}(G_1)\). \(\square \)

Lemma 10

Fix metrizable spaces \(\varOmega , \varOmega ^{*},\varPhi , \varPhi ^{*}\), and a Borel measurable mapping \(f:\varOmega \rightarrow \varOmega ^{*}\). Let \(g: \varOmega \rightarrow \varPhi \) and \(g^{*}: \varOmega ^{*} \rightarrow \varPhi ^{*}\) be such that \(g = g^{*} \circ f\). For each \(\mu \in \varDelta (\varOmega )\), if \(g^{*}\) is \(\underline{f}(\mu )\)-measurable, then g is \(\mu \)-measurable.

Proof

Assume \(g^{*}\) is \(\underline{f}(\mu )\)-measurable. Then, for \(E \subseteq \varPhi ^{*}\) Borel, \((g^{*})^{-1}(E) \in {\mathcal {B}}(\varOmega ^{*};\underline{f}(\mu ))\). This says that there are Borel sets \(X^{*},Y^{*} \subseteq \varOmega ^{*}\) with \(X^{*} \subseteq (g^{*})^{-1}(E) \subseteq Y^{*}\) and \(\underline{f}(\mu )(X^{*})=\underline{f}(\mu )(Y^{*})\). Then, \(f^{-1}(X^{*}), f^{-1}(Y^{*})\) are Borel subsets of \(\varOmega \) with \(f^{-1}(X^{*}) \subseteq f^{-1} ((g^{*})^{-1}(E)) \subseteq f^{-1}(Y^{*})\) and \(\mu (f^{-1}(X^{*}))=\mu (f^{-1}(Y^{*}))\). Note \(f^{-1}((g^{*})^{-1}(E))=g^{-1}(E) \) so that \(f^{-1}(X^{*}) \subseteq g^{-1}(E) \subseteq f^{-1}(Y^{*})\). From this \(g^{-1}(E) \in {\mathcal {B}}(\varOmega ;\mu )\), as required. \(\square \)

Corollary 5

Fix metrizable spaces \(\varOmega , \varOmega ^{*}\) and a Borel measurable mapping \(f:\varOmega \rightarrow \varOmega ^{*}\). Let \(g: \varOmega \rightarrow {\mathbb {R}}\) and \(g^{*}: \varOmega ^{*} \rightarrow {\mathbb {R}}\) be bounded functions such that \(g = g^{*} \circ f\). For each \(\mu \in \varDelta (\varOmega )\), if \(g^{*}\) is \(\underline{f}(\mu )\)-integrable, then g is \(\mu \)-integrable.
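A brief sketch of why Corollary 5 follows from Lemma 10: since \(g^{*}\) is bounded, \(\underline{f}(\mu )\)-integrability of \(g^{*}\) amounts to \(\underline{f}(\mu )\)-measurability, so Lemma 10 gives that \(g = g^{*} \circ f\) is \(\mu \)-measurable; a bounded \(\mu \)-measurable function is \(\mu \)-integrable, and the change of variables formula for image measures gives

$$\begin{aligned} \int _{\varOmega } g \, \mathrm{d}\mu = \int _{\varOmega } (g^{*} \circ f) \, \mathrm{d}\mu = \int _{\varOmega ^{*}} g^{*} \, \mathrm{d}\underline{f}(\mu ). \end{aligned}$$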

Proposition 7

(Generalized Pull-Back Property) Suppose \({\mathcal {T}}\) can be mapped to \({\mathcal {T}}^{*}\) via \((h_{1},\ldots , h_{|I|})\). If \((s_{1}^{*}, \ldots , s_{|I|}^{*})\) is a Bayesian equilibrium of \( (\varGamma , {\mathcal {T}}^{*})\), then \((s_{1}^{*}\circ h_{1}, \ldots , s_{|I|}^{*}\circ h_{|I|})\) is a Bayesian equilibrium of \((\varGamma , {\mathcal {T}})\).

Proof

Fix a Bayesian equilibrium, viz. \((s_{1}^{*}, \ldots , s_{|I|}^{*})\), of \((\varGamma , {\mathcal {T}}^{*})\). We will show that \((s_{1},\ldots ,s_{|I|}) = (s_{1}^{* }\circ h_{1},\ldots , s_{|I|}^{* }\circ h_{|I|})\) is a Bayesian equilibrium of \( ( \varGamma , {\mathcal {T}} ) \).

Begin with condition 1 of Definition 3. Fix some \(t_{i} \in T_{i}\) and \(c_{i}\) and note that, by the fact that \((s_{1}^{*}, \ldots , s_{|I|}^{*})\) is an equilibrium, each \(\varPi _{i}^{*}[c_{i},s_{-i}^{*}]\) is \(\beta _{i}^{*}(h_{i}(t_{i}))\)-integrable. Note \(\beta _{i}^{*}(h_{i}(t_{i}))\) is the image measure of \(\beta _{i}(t_{i})\) under \({\mathrm {id\,}}\times h_{-i}\). Moreover, \(\varPi _{i}^{*}[c_{i},s_{-i}^{*}] \circ ({\mathrm {id\,}}\times h_{-i})= \varPi _{i}[c_{i},s_{-i}]\). Thus, it follows from Corollary 5 that \(\varPi _{i}[c_{i},s_{-i}]\) is \(\beta _{i}(t_{i})\)-integrable.

Now turn to Condition 2 of Definition 3. Fix some type \(t_{i}\in T_{i}\) and some choice \(c_{i}\) of the Bayesian game \((\varGamma , {\mathcal {T}}) \). We have that

$$\begin{aligned}&\int _{\varTheta \times T_{-i}} \varPi _{i} [s_{i}^{*}(h_{i}(t_{i})), s_{-i}^{*}\circ h_{-i} ] \mathrm{d}\beta _{i}(t_{i}) \\&\quad = \int _{\varTheta \times T_{-i}} \pi _{i} (\theta , s_{i}^{*}(h_{i}(t_{i})), s_{-i}^{*}(h_{-i}(t_{-i}))) \mathrm{d}\beta _{i}(t_{i}) \\&\quad =\int _{\varTheta \times T_{-i}^{*}} \pi _{i}(\theta , s_{i}^{*}(h_{i}(t_{i})), s_{-i}^{*}(t_{-i}^{*})) \mathrm{d}\beta _{i}^{*}(h_{i}(t_{i})) \\&\quad \ge \int _{\varTheta \times T_{-i}^{*}} \pi _{i}(\theta , c_{i}, s_{-i}^{*}(t_{-i}^{*})) \mathrm{d}\beta _{i}^{*}(h_{i}(t_{i})) \\&\quad =\int _{\varTheta \times T_{-i}} \pi _{i} (\theta , c_{i}, s_{-i}^{*}(h_{-i}(t_{-i}))) \mathrm{d}\beta _{i}(t_{i}) \\&\quad =\int _{\varTheta \times T_{-i}} \varPi _{i} [c_{i}, s_{-i}^{*}\circ h_{-i}] \mathrm{d}\beta _{i}(t_{i}), \end{aligned}$$

where the second and fourth lines use the Change of Variables Theorem (Billingsley 2008, Theorem 16.12) plus the fact that \( (h_{1},\ldots , h_{|I|} ) \) is a type morphism and the third line uses the fact that \( ( s_{1}^{* },\ldots , s_{|I|}^{* } ) \) is a Bayesian equilibrium. This establishes condition 2 of Definition 3. \(\square \)

Remark 4

Call \({\mathcal {T}} = (\varTheta , (T_{i}, \beta _{i})_{i \in I})\) a \({\varvec{\Theta }}\)-based separable metrizable type structure if it satisfies the conditions of Definition 1, with the exception that each \(T_{i}\) is only required to be a separable metrizable space. The definitions in Sects. 2 and 3 apply by replacing a \(\varTheta \)-based type structure with a \(\varTheta \)-based separable metrizable type structure. In particular, we can repeat the proof of Proposition 7 line-for-line and obtain the Generalized Pull-Back Property for separable metrizable type structures.

Appendix 2: Proofs for Sect. 4

Proof of Proposition 2

Fix a \(\overline{\varTheta }\)-based Bayesian game \((\overline{\varGamma },\overline{{\mathcal {T}}})\) that has no Bayesian equilibrium. Let \(\hat{\theta }\) be a parameter not contained in \(\overline{\varTheta }\) and take \(\varTheta = \overline{\varTheta } \cup \{ \hat{\theta } \}\). Construct a \(\varTheta \)-based game \(\varGamma =((C_{i},\pi _{i}): i \in I)\) from \(\overline{\varGamma }=((\overline{C}_{i},\overline{\pi }_{i}): i \in I)\): for each i, 1. take \(C_{i}= \overline{C}_{i}\), 2. take \(\pi _{i}\) restricted to the domain \(\overline{\varTheta } \times C\) to be \(\overline{\pi }_{i}\), and 3. take \(\pi _{i}(\hat{\theta },\cdot )=1\).

Write \(\overline{{\mathcal {T}}}=(\overline{\varTheta },(\overline{T}_{i},\overline{\beta }_{i}): i \in I)\). We next construct two type structures. First, write \({\mathcal {T}}^{*}=(\varTheta ,(T_{i}^{*},\beta _{i}^{*}): i \in I)\), where \(T_{i}^{*}= \{t_{i}^{*} \}\) and \(\beta _{i}^{*}(t_{i}^{*})(\hat{\theta },t_{-i}^{*})=1\) for each \(i \in I\). Second, construct \({\mathcal {T}}^{**}=(\varTheta ,(T_{i}^{**},\beta _{i}^{**}): i \in I)\) as follows: Take \(T_{i}^{**}\) to be the disjoint union of \(T_{i}^{*}\) and \(\overline{T}_{i}\). For each \(\overline{t}_{i} \in \overline{T}_{i}\) and each \(E_{-i} \subseteq \varTheta \times T_{-i}^{**}\), \(\beta _{i}^{**}(\overline{t}_{i})(E_{-i})=\overline{\beta }_{i}(\overline{t}_{i})(E_{-i} \cap (\overline{\varTheta }\times \overline{T}_{-i}))\). For \(t_{i}^{*} \in T_{i}^{*}\), \(\beta _{i}^{**}(t_{i}^{*})=\beta _{i}^{*}(t_{i}^{*})\).

Observe that there exists some Bayesian equilibrium of \((\varGamma ,{\mathcal {T}}^{*})\). Moreover, \({\mathcal {T}}^{*}\) can be embedded into \({\mathcal {T}}^{**}\). But there can be no Bayesian equilibrium of \((\varGamma ,{\mathcal {T}}^{**})\): Suppose there were such an equilibrium, viz. \((s_{1}^{*},\ldots ,s_{|I|}^{*})\), and define \(\overline{s}_{i}^{*}\) to be the restriction of \(s_{i}^{*}\) to \(\overline{T}_{i}\). Then, by the Pull-Back Property, \((\overline{s}_{1}^{*},\ldots ,\overline{s}_{|I|}^{*})\) would also be a Bayesian equilibrium of \((\overline{\varGamma },\overline{{\mathcal {T}}})\), a contradiction. \(\square \)
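To see why every strategy profile of \((\varGamma ,{\mathcal {T}}^{*})\) is a Bayesian equilibrium, note that each \(\beta _{i}^{*}(t_{i}^{*})\) is a point mass on \((\hat{\theta },t_{-i}^{*})\) and \(\pi _{i}(\hat{\theta },\cdot )=1\). So, for every strategy profile \((s_{1},\ldots ,s_{|I|})\) of \((\varGamma ,{\mathcal {T}}^{*})\) and every choice \(c_{i}\),

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \pi _{i}(\theta ,c_{i},s_{-i}(t_{-i}^{*})) \, \mathrm{d}\beta _{i}^{*}(t_{i}^{*}) = \pi _{i}(\hat{\theta },c_{i},s_{-i}(t_{-i}^{*})) = 1, \end{aligned}$$

so that integrability is immediate and no choice improves upon any other.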

Lemma 11

Fix some \(\mu \in \varDelta (\varOmega )\) and some \(\mu \)-measurable \(f: \varOmega \rightarrow {\mathbb {R}}\). Suppose there is a Borel measurable set \(A \subseteq \varOmega \) so that \(\mu (A) \in [0,1)\). Then, f is \((\mu (\cdot |\varOmega \backslash A))\)-measurable.

Proof

Fix some E Borel in \({\mathbb {R}}\). Since f is \(\mu \)-measurable, there are Borel sets \(F,G \subseteq \varOmega \) with \(F \subseteq f^{-1}(E) \subseteq G\) and \(\mu (F)=\mu (G)\). It suffices to show that \(\mu (F|\varOmega \backslash A)=\mu (G|\varOmega \backslash A)\): Observe that \(F \backslash A \subseteq G \backslash A\) and \(F \cap A \subseteq G \cap A\). From this, \(\mu (F \backslash A)\le \mu (G \backslash A)\) and \(\mu (F \cap A ) \le \mu (G \cap A)\). Since

$$\begin{aligned} \mu (F \backslash A) + \mu (F \cap A) = \mu (F)=\mu (G) = \mu (G \backslash A) + \mu (G \cap A), \end{aligned}$$

it follows that \(\mu (F \backslash A) = \mu (G \backslash A)\), as desired. \(\square \)
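Explicitly, since \(\mu (A) < 1\), the last step gives

$$\begin{aligned} \mu (F | \varOmega \backslash A) = \frac{\mu (F \backslash A)}{1-\mu (A)} = \frac{\mu (G \backslash A)}{1-\mu (A)} = \mu (G | \varOmega \backslash A), \end{aligned}$$

so that F and G witness the \((\mu (\cdot |\varOmega \backslash A))\)-measurability of \(f^{-1}(E)\).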

Appendix 3: Proofs for Sect. 5

Proof of Lemma 2

Fix a \(\varTheta \)-based Bayesian game \( ( \varGamma , {\mathcal {T}} ) \) where \({\mathcal {T}}\) can be embedded into \({\mathcal {U}} ( \varTheta )\). First suppose that, for each \(\varTheta \)-based structure \({\mathcal {T}}^{*}\) into which \({\mathcal {T}}\) can be embedded, the pair \( \langle {\mathcal {T}},{\mathcal {T}}^{* } \rangle \) satisfies the Equilibrium Extension Property for \(\varGamma \). Then certainly this is the case when \({\mathcal {T}}^{*}={\mathcal {U}} ( \varTheta ) \). Conversely, we show that if the pair \( \langle {\mathcal {T}},{\mathcal {U}} ( \varTheta ) \rangle \) satisfies the Equilibrium Extension Property for \(\varGamma \), then the pair \( \langle {\mathcal {T}},{\mathcal {T}}^{* } \rangle \) also satisfies the Equilibrium Extension Property for \(\varGamma \), for any \(\varTheta \)-based structure \({\mathcal {T}}^{*}\) into which \({\mathcal {T}}\) can be embedded.

To show this, it will be useful to begin with properties of the mappings between these structures. By assumption, there exists an injective type morphism, viz. \((h_{1},\ldots ,h_{|I|})\), from \({\mathcal {T}}\) to \({\mathcal {T}}^{*}\). Since \({\mathcal {U}} ( \varTheta )\) is terminal, there is also a (not necessarily injective) type morphism \( ( l_{1},\ldots , l_{|I|} ) \) from \({\mathcal {T}}^{* }\) to \({\mathcal {U}} ( \varTheta ) \). Note the map \((l_{1}\circ h_{1},\ldots ,l_{|I|}\circ h_{|I|})\) is a type morphism from \({\mathcal {T}}\) to \({\mathcal {U}} ( \varTheta ) =(\varTheta , (U_{i},\gamma _{i})_{i \in I})\). To see this, fix an event E in \(\varTheta \times U_{-i}\) and note that

$$\begin{aligned} \gamma _{i} ( l_{i} ( h_{i} ( t_{i} ) ) ) ( E )= & {} \beta _{i}^{* } ( h_{i} ( t_{i} ) )(({\mathrm {id\,}}\times l_{-i})^{-1} ( E) ) \\= & {} \beta _{i} ( t_{i} ) (({\mathrm {id\,}}\times h_{-i})^{-1}(({\mathrm {id\,}}\times l_{-i})^{-1} ( E)) ) \text {,} \end{aligned}$$

where the first line uses the fact that \( ( l_{1},\ldots , l_{|I|} )\) is a type morphism from \({\mathcal {T}}^{* }\) to \({\mathcal {U}} ( \varTheta ) \) and the second line uses the fact that \( ( h_{1},\ldots ,h_{|I|} ) \) is a type morphism from \({\mathcal {T}}\) to \({\mathcal {T}}^{* } \). So, \(\gamma _{i} ( l_{i} ( h_{i} ( t_{i} ) ) ) \) is the image measure of \(\beta _{i} ( t_{i} ) \) under \( ( {\mathrm {id\,}}\times l_{-i} ) \circ ( {\mathrm {id\,}}\times h_{-i} ) ={\mathrm {id\,}}\times (l_{-i}\circ h_{-i})\), as required. An implication is that \({\mathcal {T}}\) can be mapped to \({\mathcal {U}} ( \varTheta )\) via \((l_{1}\circ h_{1},\ldots ,l_{|I|}\circ h_{|I|})\).

Now observe that, by assumption, we have an injective type morphism \((k_{1},\ldots ,k_{|I|})\) from \({\mathcal {T}}\) to \({\mathcal {U}} ( \varTheta )\). We also have that \((l_{1}\circ h_{1},\ldots ,l_{|I|}\circ h_{|I|})\) is a type morphism from \({\mathcal {T}}\) to \({\mathcal {U}} ( \varTheta )\). Since \({\mathcal {U}}(\varTheta )\) is non-redundant and type morphisms preserve hierarchies of beliefs, it follows that \((l_{1}\circ h_{1},\ldots ,l_{|I|}\circ h_{|I|})=(k_{1},\ldots ,k_{|I|})\), i.e., \((l_{1}\circ h_{1},\ldots ,l_{|I|}\circ h_{|I|})\) is injective.

Let \( ( s_{1},\ldots , s_{|I|} ) \) be a Bayesian equilibrium of \(( \varGamma , {\mathcal {T}} ) \). Since \( \langle {\mathcal {T}},{\mathcal {U}} ( \varTheta ) \rangle \) satisfies the Extension Property for \(\varGamma \), there exists an equilibrium \( ( r_{1},\ldots ,r_{|I|} ) \) of \( ( \varGamma , {\mathcal {U}} ( \varTheta ) ) \) so that \( ( s_{1},\ldots , s_{|I|} ) = ( r_{1}\circ l_{1} \circ h_{1},\ldots ,r_{|I|}\circ l_{|I|} \circ h_{|I|} ) \). The Pull-Back Property (Proposition 1) gives that \(( s_{1}^{*},\ldots , s_{|I|}^{*} ) = ( r_{1} \circ l_{1}, \ldots , r_{|I|}\circ l_{|I|} )\) is an equilibrium of \(( \varGamma , {\mathcal {T}}^{* } ) \). Thus, \(( s_{1},\ldots , s_{|I|} ) = ( s_{1}^{*} \circ h_{1},\ldots , s_{|I|}^{*} \circ h_{|I|} ) \), as required. \(\square \)

Definition 15

Call a \(\varTheta \)-based game \(\varGamma \) injective if, for each i and each \((\theta ,c_{i}) \in \varTheta \times C_{i}\), \(\hat{\pi }_{i}[c_{i}](\theta ,\cdot ): \prod _{j \in I \backslash \{i\}} \varDelta (C_{j}) \rightarrow {\mathbb {R}}\) is injective.

Note carefully that a \(\varTheta \)-based game may be injective even if some player’s payoff function \(\pi _{i}\) is not injective. Many games of interest fail the injectivity condition. In fact, we will see that there is a close connection between injective games and simple games.
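For concreteness, consider a two-player game in which player j has two choices, \(C_{j} = \{c_{j},d_{j}\}\), and read \(\hat{\pi }_{i}[c_{i}](\theta ,\cdot )\) as the expected payoff extension of \(\pi _{i}\) to mixed profiles of the opponents. Then, for \(\sigma _{j} \in \varDelta (C_{j})\),

$$\begin{aligned} \hat{\pi }_{i}[c_{i}](\theta ,\sigma _{j}) = \sigma _{j}(c_{j})\,\pi _{i}(\theta ,c_{i},c_{j}) + (1-\sigma _{j}(c_{j}))\,\pi _{i}(\theta ,c_{i},d_{j}), \end{aligned}$$

which is injective in \(\sigma _{j}\) if and only if \(\pi _{i}(\theta ,c_{i},c_{j}) \ne \pi _{i}(\theta ,c_{i},d_{j})\). This is, in essence, the calculation behind the first step of the proof of Lemma 12 below.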

Remark 5

Fix distinct players i and j. Fix some \((\theta ,c_{i}) \in \varTheta \times C_{i}\) and distinct actions \(c_{j},d_{j}\) of player j. Also, if there is some player \(k \ne i,j\), fix \(c_{-i-j} \in \prod _{k \ne i,j}C_{k}\). (Otherwise, take \(c_{-i-j}\) to be null notation.) Suppose \(x = \pi _{i}(\theta ,c_{i},\hat{c}_{j},\hat{c}_{-i-j})\), where \((\hat{c}_{j},\hat{c}_{-i-j}) \ne (c_{j},c_{-i-j}),(d_{j},c_{-i-j})\). If the game is injective, then

$$\begin{aligned} x \not \in \left[ \pi _{i}(\theta ,c_{i},c_{j},c_{-i-j}), \pi _{i}(\theta ,c_{i},d_{j},c_{-i-j})\right] . \end{aligned}$$

This fact will be of use.
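One way to see this, again reading \(\hat{\pi }_{i}[c_{i}](\theta ,\cdot )\) as the expected payoff over mixed profiles: suppose, to the contrary, that \(x = \lambda \,\pi _{i}(\theta ,c_{i},c_{j},c_{-i-j}) + (1-\lambda )\,\pi _{i}(\theta ,c_{i},d_{j},c_{-i-j})\) for some \(\lambda \in [0,1]\). Let \(\sigma _{-i}\) be the mixed profile that plays \(\lambda \delta _{c_{j}} + (1-\lambda )\delta _{d_{j}}\) in the j-coordinate and the point mass \(\delta _{c_{-i-j}}\) in the remaining coordinates. Then

$$\begin{aligned} \hat{\pi }_{i}[c_{i}](\theta ,\sigma _{-i}) = x = \hat{\pi }_{i}[c_{i}](\theta ,\delta _{(\hat{c}_{j},\hat{c}_{-i-j})}), \end{aligned}$$

even though \(\sigma _{-i} \ne \delta _{(\hat{c}_{j},\hat{c}_{-i-j})}\), contradicting injectivity of \(\hat{\pi }_{i}[c_{i}](\theta ,\cdot )\).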

Say a \(\varTheta \)-based game is non-trivial if \(|I| \ge 2\) and, for at least two players i, \(|C_{i}| \ge 2\).

Lemma 12

Fix a countable set \(\varTheta \) and some non-trivial \(\varTheta \)-based game \(\varGamma \). Then \(\varGamma \) is simple if and only if \(\varGamma \) is injective.

Proof

First suppose that \(\varGamma \) is simple. Then, for each i, \(C_{-i}\) can be written as \(C_{-i} = \{ c_{-i},d_{-i} \}\). It follows that, for each \((\theta ,c_{i})\), \(\hat{\pi }_{i}[c_{i}](\theta ,c_{-i})\ne \hat{\pi }_{i}[c_{i}](\theta ,d_{-i})\). So the game is injective.

Next suppose that the game has some player i with \(|C_{i}| \ge 3\). Fix some pair \((\theta ,c_{-i}) \in \varTheta \times C_{-i}\) and some player \(j \ne i\). Observe that there must be distinct choices of player i, namely \(c_{i},d_{i},e_{i}\), so that \(\pi _{j}(\theta ,e_{i},c_{-i}) \in [\pi _{j}(\theta ,c_{i},c_{-i}),\pi _{j}(\theta ,d_{i},c_{-i})]\): among any three real numbers, one lies weakly between the other two. By Remark 5, the game is not injective.

Third, suppose that \(|I| \ge 3\). By non-triviality (and the previous case), we can take \(|C_{1}| = |C_{2}| = 2\). Fix some pair \((\theta ,c_{-1-2}) \in \varTheta \times \prod _{k \not \in \{1,2\} } C_{k}\) and some player \(j \not \in \{1,2\}\). Figure 14 depicts \(\pi _{j}(\theta ,\cdot ,c_{-1-2})\). Suppose, contra hypothesis, that the game is injective. This immediately gives that \(x^{1} \ne x^{2}\) and \(x^{3} \ne x^{4}\). Without loss of generality, assume that \(x^{4}>x^{3}\). By Remark 5, \(x^{1},x^{2} \not \in [x^{3},x^{4}]\). It follows that either 1. \(x^{4}>x^{3}>x^{1}>x^{2}\), 2. \(x^{4}>x^{3}>x^{2}>x^{1}\), 3. \(x^{1}>x^{2}>x^{3}>x^{4}\), or 4. \(x^{2}>x^{1}>x^{3}>x^{4}\). In the first two cases, \(x^{3} \in [x^{2},x^{4}]\), and in the latter two cases, \(x^{3} \in [x^{4},x^{2}]\). Again applying Remark 5, this contradicts injectivity. \(\square \)

Lemma 13

Fix a Bayesian Game \((\varGamma ,{\mathcal {T}})\), where \(\varGamma \) is simple. For each \((c_{i},s_{-i}) \in C_{i} \times S_{-i}\) and each \(\mu _{i} \in \varDelta (\varTheta \times T_{-i})\), \(\varPi _{i}[c_{i}, s_{-i}]\) is \(\mu _{i}\)-measurable if and only if \(({\mathrm {id\,}}\times s_{-i})\) is \(\mu _{i}\)-measurable.

Fig. 14: Non-simple game

Proof

Fix a Bayesian game \((\varGamma ,{\mathcal {T}})\), where \(\varGamma \) is simple, and recall that it is therefore injective. Fix also a strategy profile \(s_{-i}:T_{-i} \rightarrow \prod _{j \in I \backslash \{i\}} \varDelta (C_{j})\). We will show that, for each \(c_{i} \in C_{i}\) and \(\mu _{i} \in \varDelta (\varTheta \times T_{-i})\), \(({\mathrm {id\,}}\times s_{-i})\) is \(\mu _{i}\)-measurable if and only if \(\varPi _{i}[c_{i}, s_{-i}]\) is \(\mu _{i}\)-measurable. Observe that if \(({\mathrm {id\,}}\times s_{-i})\) is \(\mu _{i}\)-measurable, then \(\varPi _{i}[c_{i}, s_{-i}]\) is \(\mu _{i}\)-measurable. (Use the fact that \(\varPi _{i}[c_{i}, s_{-i}] = \hat{\pi }_{i}[c_{i}] \circ ({\mathrm {id\,}}\times s_{-i})\) and \(\hat{\pi }_{i}[c_{i}]\) is measurable.) So we will suppose that \(\varPi _{i}[c_{i}, s_{-i}]\) is \(\mu _{i}\)-measurable and show that \(({\mathrm {id\,}}\times s_{-i})\) is \(\mu _{i}\)-measurable.

For this, let \(F \in {\mathcal {B}}(\varTheta \times \prod _{j \in I \backslash \{i\}} \varDelta (C_{j}))\) and, for each \(\theta \in \varTheta \), write \(F_{\theta } = F \bigcap (\{\theta \} \times \prod _{j \in I \backslash \{i\}} \varDelta (C_{j}))\). It suffices to show that each \(({\mathrm {id\,}}\times s_{-i})^{-1}(F_{\theta }) \in {\mathcal {B}}(\varTheta \times T_{-i};\mu _{i})\). If so, then, using the fact that \(\varTheta \) is at most countable, \(({\mathrm {id\,}}\times s_{-i})^{-1}(F) = \bigcup _{\theta \in \varTheta } ({\mathrm {id\,}}\times s_{-i})^{-1}(F_{\theta })\) is also in \({\mathcal {B}}(\varTheta \times T_{-i};\mu _{i})\).

Fix some \(F_{\theta } = F \bigcap (\{\theta \} \times \prod _{j \in I \backslash \{i\}} \varDelta (C_{j}))\) and write \(G_{\theta }=\hat{\pi }_{i}[c_{i}](F_{\theta })\). It follows from the fact that \(\hat{\pi }_{i}[c_{i}](\theta ,\cdot )\) is measurable and injective that \(G_{\theta } \subseteq {\mathbb {R}}\) is measurable (use Purves’ Theorem, Purves 1966). Using the fact that \(\varPi _{i}[c_{i}, s_{-i}]\) is \(\mu _{i}\)-measurable, it then follows that \((\varPi _{i}[c_{i}, s_{-i}])^{-1}(G_{\theta }) \cap (\{\theta \} \times T_{-i}) \in {\mathcal {B}}(\varTheta \times T_{-i};\mu _{i})\).

With this, it suffices to show that

$$\begin{aligned} (\varPi _{i}[c_{i}, s_{-i}])^{-1}(G_{\theta }) \cap (\{\theta \} \times T_{-i}) = ({\mathrm {id\,}}\times s_{-i})^{-1}(F_{\theta }). \end{aligned}$$

Certainly, if \((\theta ,s_{-i}(t_{-i})) \in F_{\theta }\), then \(\hat{\pi }_{i}[c_{i}](\theta ,s_{-i}(t_{-i})) \in G_{\theta }\). From this, \(\varPi _{i}[c_{i},s_{-i}](\theta ,t_{-i}) \in G_{\theta }\), as desired. Conversely, suppose \(\varPi _{i}[c_{i},s_{-i}](\theta ,t_{-i}) \in G_{\theta }\). Then \(\hat{\pi }_{i}[c_{i}](\theta ,s_{-i}(t_{-i})) \in G_{\theta }\). Thus, there exists some \(\sigma _{-i}\) with \((\theta ,\sigma _{-i}) \in F_{\theta }\) such that \(\hat{\pi }_{i}[c_{i}](\theta ,\sigma _{-i})=\hat{\pi }_{i}[c_{i}](\theta ,s_{-i}(t_{-i}))\). By injectivity, \(s_{-i}(t_{-i})=\sigma _{-i}\). Thus, \((\theta ,t_{-i}) \in ({\mathrm {id\,}}\times s_{-i})^{-1}(F_{\theta })\). \(\square \)

Proof of Lemma 3

Immediate from Lemma 13. \(\square \)

Lemma 14

Fix \(\varTheta \)-based type structures, \({\mathcal {T}}\) and \({\mathcal {T}}^{*}\), so that \({\mathcal {T}}\) can be mapped to \({\mathcal {T}}^{*}\) via \((h_{1},\ldots , h_{|I|})\). For every universally measurable Bayesian equilibrium of \((\varGamma ,{\mathcal {T}}^{*})\), viz. \((s_{1}^{*}, \ldots , s_{|I|}^{*})\), \((s_{1}^{*}\circ h_{1}, \ldots , s_{|I|}^{*}\circ h_{|I|})\) is a universally measurable Bayesian equilibrium of \((\varGamma , {\mathcal {T}})\).

Proof

Let \((s_{1}^{*}, \ldots , s_{|I|}^{*})\) be a universally measurable Bayesian equilibrium of \((\varGamma , {\mathcal {T}}^{*})\). By Proposition 7, \((s_{1},\ldots ,s_{|I|}) = (s_{1}^{* }\circ h_{1},\ldots , s_{|I|}^{* }\circ h_{|I|})\) is a Bayesian equilibrium. We will show that each \(s_{i}\) is \(\mu \)-measurable, for each \(\mu \in \varDelta (T_{i})\). Fix \(\mu \) and let \(\mu ^{*} = \underline{h}_{i}(\mu )\). Since \((s_{1}^{*}, \ldots , s_{|I|}^{*})\) is universally measurable, \(s_{i}^{*}\) is \(\mu ^{*}\)-measurable. Then, by Lemma 10, \(s_{i}\) is \(\mu \)-measurable. \(\square \)

Appendix 4: Proofs for Sect. 6

Proof of Lemma 6

Since each \(h_{i}\) is an embedding, each \(h_{i}(T_{i}) \in {\mathcal {B}}(T_{i}^{*})\). From this, for each i, \(\varTheta \times h_{-i}(T_{-i}) \in {\mathcal {B}}(\varTheta \times T_{-i}^{*})\). So, by definition of a type morphism, for any \(h_{i}(t_{i}) \in h_{i}(T_{i})\),

$$\begin{aligned} \beta _{i}^{*}(h_{i}(t_{i}))(\varTheta \times \prod _{j \in I\backslash \{i\}} h_{j}(T_{j})) = \beta _{i}(t_{i})(\varTheta \times T_{-i})=1, \end{aligned}$$

as desired. \(\square \)

Proof of Lemma 7

Suppose that \({\mathcal {T}}\) can be embedded into \({\mathcal {T}}^{*}\) via \((h_{1},\ldots ,h_{|I|})\). Define \(h_{i}^{*}=h_{i}^{-1}\); this is well defined, since \(h_{i}\) is bijective. Moreover, \((h_{1}^{*},\ldots ,h_{|I|}^{*})\) is an injective type morphism and so \({\mathcal {T}}^{*}\) can be embedded into \({\mathcal {T}}\).

Fix a Bayesian equilibrium of \((\varGamma , {\mathcal {T}})\), viz. \((s_{1}, \ldots , s_{|I|})\). By the Pull-Back Property (Proposition 1), \((s_{1}^{*},\ldots ,s_{|I|}^{*})=(s_{1} \circ h_{1}^{*},\ldots ,s_{|I|} \circ h_{|I|}^{*})\) is a Bayesian equilibrium of \((\varGamma ,{\mathcal {T}}^{*})\). Moreover, \(s_{i}^{*} \circ h_{i} = s_{i} \circ h_{i}^{*} \circ h_{i}=s_{i}\), as required. \(\square \)

1.1 Proofs for Sect. 6.1

Proof of Lemma 8

Now, let \(\mu \) be a common prior for \({\mathcal {T}}\). Fix distinct players i and j, types \(t_{i} \in T_{i}\) and \(t_{j} \in T_{j}\), and note that

$$\begin{aligned} \beta _{i} ( t_{i} ) ( \varTheta \times \{ t_{j} \} \times T_{-i-j} ) =\frac{\mu ( \varTheta \times \{ t_{i} \} \times \{ t_{j} \} \times T_{-i-j} ) }{\mu ( \varTheta \times \{ t_{i} \} \times T_{j} \times T_{-i-j} ) }. \end{aligned}$$

So, \(\beta _{i} ( t_{i} ) ( \varTheta \times \{ t_{j} \} \times T_{-i-j} ) >0\) if and only if \(\mu ( \varTheta \times \{ t_{i} \} \times \{ t_{j} \} \times T_{-i-j} ) >0\). But, an analogous argument for j gives that \(\beta _{j} ( t_{j} ) ( \varTheta \times \{ t_{i} \} \times T_{-i-j} ) >0\) if and only if \(\mu ( \varTheta \times \{ t_{i} \} \times \{ t_{j} \} \times T_{-i-j} ) >0\). This establishes the result. \(\square \)

The next result refers to separable metrizable type structures. See Remark 4.

Lemma 15

Fix a \(\varTheta \)-based type structure \({\mathcal {T}} = (\varTheta , (T_{i}, \beta _{i})_{i \in I}) \). Let \(\prod _{i=1}^{|I|} \hat{T}_{i}\) be a belief-closed subset of T. Then, there is a \(\varTheta \)-based separable metrizable type structure

$$\begin{aligned} \hat{{\mathcal {T}}} = (\varTheta , (\hat{T}_{i}, \hat{\beta }_{i})_{i \in I}), \end{aligned}$$

where, for each \(t_{i}\in \hat{T}_{i}\) and each event \(E_{-i} \subseteq \varTheta \times \hat{T}_{-i}\), \(\hat{\beta }_{i}(t_{i})(E_{-i}) = \beta _{i}(t_{i})(E_{-i})\).

Proof

Since we endow each \(\hat{T}_{i}\) with the relative topology, we have that each \(\hat{T}_{i}\) is separable metrizable. Moreover, since \(\varTheta \times \hat{T}_{-i}\) is Borel in \(\varTheta \times T_{-i}\), it follows that any set \(E_{-i} \subseteq \varTheta \times \hat{T}_{-i}\) that is Borel in \(\varTheta \times \hat{T}_{-i}\) is also Borel in \(\varTheta \times T_{-i}\) (see Lemma 4.20 in Aliprantis and Border 2007). So, using the fact that, for each \(t_{i} \in \hat{T}_{i}\), \(\beta _{i}(t_{i})(\varTheta \times \hat{T}_{-i})=1\), \(\hat{\beta }_{i}(t_{i})\) is a probability measure on \(\varTheta \times \hat{T}_{-i}\). Thus, it suffices to show that \(\hat{\beta }_{i}\) is measurable.

Fix some F Borel in \(\varDelta (\varTheta \times \hat{T}_{-i})\). Define \(H \subseteq \varDelta (\varTheta \times T_{-i}) \) so that \(\nu \in H\) if and only if there exists some \(\mu \in F\) so that \(\mu (E_{-i})= \nu (E_{-i})\) for each event \(E_{-i}\) in \(\varTheta \times \hat{T}_{-i}\). It follows from Lemma 14.4 in Aliprantis and Border (2007) that H is Borel in \(\varDelta (\varTheta \times T_{-i})\). It is immediate from the construction that \( (\hat{\beta }_{i})^{-1}(F) = (\beta _{i})^{-1}(H) \cap \hat{T}_{i}\). So, using the fact that \(\beta _{i}\) is measurable, \((\hat{\beta }_{i})^{-1}(F)\) is measurable, as required. \(\square \)

Lemma 16

Fix \(\varTheta \)-based Bayesian games \((\varGamma ,{\mathcal {T}})\) and \((\varGamma ,{\mathcal {T}}^{*})\) where

  1. \({\mathcal {T}} = (\varTheta , (T_{i}, \beta _{i})_{i \in I})\) is a separable metrizable type structure and

  2. \({\mathcal {T}}\) can be embedded in \({\mathcal {T}}^{*}=(\varTheta , (T_{i}^{*}, \beta _{i}^{*})_{i \in I})\) via \((h_{1},\ldots ,h_{|I|})\).

Let \((s_{1},\ldots ,s_{|I|})\) (resp. \((s_{1}^{*},\ldots ,s_{|I|}^{*})\)) be a strategy profile of \((\varGamma ,{\mathcal {T}})\) (resp. \((\varGamma ,{\mathcal {T}}^{*})\)) so that \(s_{i}= s_{i}^{*} \circ h_{i}\) for each i. If \(\varPi _{i}[c_{i},s_{-i}]\) is \(\beta _{i}(t_{i})\)-measurable, then \(\varPi _{i}^{*}[c_{i},s_{-i}^{*}]\) is \(\beta _{i}^{*}(h_{i}(t_{i}))\)-measurable.

Proof

Fix some \(h_{i}(t_{i}) \in h_{i}(T_{i}) \subseteq T_{i}^{*}\). Fix also some \(c_{i} \in C_{i}\) and some Borel \(E \subseteq {\mathbb {R}}\). We will show that \((\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \in {\mathcal {B}}(\varTheta \times T_{-i}^{*}; \beta _{i}^{*}(h_{i}(t_{i})))\).

By assumption, \((\varPi _{i}[c_{i},s_{-i}])^{-1}(E)\) is in \({\mathcal {B}}(\varTheta \times T_{-i}; \beta _{i}(t_{i}))\). That is, there exists \(F_{-i},G_{-i} \in {\mathcal {B}}(\varTheta \times T_{-i})\) so that

$$\begin{aligned} F_{-i} \subseteq (\varPi _{i}[c_{i},s_{-i}])^{-1}(E) \subseteq G_{-i} \end{aligned}$$

and \(\beta _{i}(t_{i})(F_{-i})=\beta _{i}(t_{i})(G_{-i})\). Since \(({\mathrm {id\,}}\times h_{-i})\) is bimeasurable, \(({\mathrm {id\,}}\times h_{-i})(F_{-i}), ({\mathrm {id\,}}\times h_{-i})(G_{-i}) \in {\mathcal {B}}(\varTheta \times T_{-i}^{*})\). Moreover,

$$\begin{aligned} ({\mathrm {id\,}}\times h_{-i})(F_{-i}) \subseteq ({\mathrm {id\,}}\times h_{-i}) ((\varPi _{i}[c_{i},s_{-i}])^{-1}(E)) \subseteq ({\mathrm {id\,}}\times h_{-i})(G_{-i}) \end{aligned}$$

and, using the fact that \(({\mathrm {id\,}}\times h_{-i})\) is injective,

$$\begin{aligned} \beta _{i}^{*}(h_{i}(t_{i}))( ({\mathrm {id\,}}\times h_{-i})(F_{-i}) )= & {} \beta _{i}(t_{i})(F_{-i})=\beta _{i}(t_{i})(G_{-i})\\= & {} \beta _{i}^{*}(h_{i}(t_{i}))( ({\mathrm {id\,}}\times h_{-i})(G_{-i}) ). \end{aligned}$$

Now notice that

$$\begin{aligned} ({\mathrm {id\,}}\times h_{-i}) ((\varPi _{i}[c_{i},s_{-i}])^{-1}(E)) = (\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \cap (\varTheta \times h_{-i}(T_{-i})). \end{aligned}$$

This allows us to conclude that \((\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \cap (\varTheta \times h_{-i}(T_{-i}))\) is \(\beta _{i}^{*}(h_{i}(t_{i}))\)-measurable.

Since \( (\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \cap (\varTheta \times h_{-i}(T_{-i}))\) is \(\beta _{i}^{*}(h_{i}(t_{i}))\)-measurable, there exist \(F_{-i}^{*},G_{-i}^{*}\) in \({\mathcal {B}}(\varTheta \times T_{-i}^{*})\) with

$$\begin{aligned} F_{-i}^{*}\subseteq (\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \cap (\varTheta \times h_{-i}(T_{-i})) \subseteq G_{-i}^{*} \end{aligned}$$

and \(\beta _{i}^{*}(h_{i}(t_{i}))(F_{-i}^{*})=\beta _{i}^{*}(h_{i}(t_{i}))(G_{-i}^{*})\). Take \(H_{-i}^{*}= \varTheta \times (T_{-i}^{*} \backslash h_{-i}(T_{-i}))\). Since \(({\mathrm {id\,}}\times h_{-i})\) is bimeasurable, \(G_{-i}^{*} \cup H_{-i}^{*}\) is Borel. Thus,

$$\begin{aligned} F_{-i}^{*}\subseteq (\varPi _{i}^{*}[c_{i},s_{-i}^{*}])^{-1}(E) \subseteq G_{-i}^{*} \cup H_{-i}^{*}. \end{aligned}$$

Moreover, since \(H_{-i}^{*}\) is \(\beta _{i}^{*}(h_{i}(t_{i}))\)-null, \(\beta _{i}^{*}(h_{i}(t_{i}))(F_{-i}^{*})=\beta _{i}^{*}(h_{i}(t_{i}))(G_{-i}^{*} \cup H_{-i}^{*})\).

\(\square \)

Suppose \({\mathcal {T}}\) induces a decomposition of \({\mathcal {T}}^{*}\) via \((h_{1},\ldots ,h_{|I|})\). Then \(\prod _{i \in I}(T_{i}^{*} \backslash h_{i}(T_{i}))\) is a belief-closed subset of \(T^{*}\). By Lemma 15, this induces a separable metrizable type structure. Write

$$\begin{aligned} ({\mathcal {T}}^{*} \backslash {\mathcal {T}}) = (\varTheta , (T_{i}^{*} \backslash h_{i}(T_{i}), \beta _{i}^{\triangledown } )_{i \in I} ), \end{aligned}$$

for this structure; we call it a difference structure. Here, \(\beta _{i}^{\triangledown }(t_{i}^{*})(E_{-i}^{\triangledown }) = \beta _{i}^{*}(t_{i}^{*})(E_{-i}^{\triangledown })\) for each event \(E_{-i}^{\triangledown }\) in \(\varTheta \times \prod _{j\ne i} (T_{j}^{*} \backslash h_{j}(T_{j}))\).

Lemma 17

Fix a \(\varTheta \)-based Bayesian game \((\varGamma , {\mathcal {T}})\) that has an equilibrium, viz. \((s_{1}, \ldots , s_{|I|})\). If \({\mathcal {T}}\) induces a decomposition of \({\mathcal {T}}^{*}\) via \((h_{1},\ldots ,h_{|I|})\), then the following are equivalent:

  1. There is an equilibrium of the Bayesian game \((\varGamma , {\mathcal {T}}^{*})\), viz. \((s_{1}^{*},\ldots ,s_{|I|}^{*})\), so that \((s_{1}^{*}\circ h_{1}, \ldots , s_{|I|}^{*}\circ h_{|I|} ) = (s_{1}, \ldots , s_{|I|})\).

  2. There is an equilibrium of the difference game \((\varGamma , ({\mathcal {T}}^{*} \backslash {\mathcal {T}}))\).

Proof

Suppose \({\mathcal {T}}\) induces a decomposition of \({\mathcal {T}}^{*}\) via \((h_{1},\ldots , h_{|I|}) \). Observe that the difference structure \(({\mathcal {T}}^{*} \backslash {\mathcal {T}})\) can be embedded into \({\mathcal {T}}^{*}\).

The fact that 1 implies 2 is immediate: Condition 1 delivers an equilibrium of \((\varGamma , {\mathcal {T}}^{*})\). So, since \(({\mathcal {T}}^{*} \backslash {\mathcal {T}})\) can be embedded into \({\mathcal {T}}^{*}\), the Generalized Pull-Back Property (Remark 4) gives an equilibrium of \((\varGamma , ({\mathcal {T}}^{*} \backslash {\mathcal {T}}))\).

We focus on showing that 2 implies 1: Let \((s_{1}, \ldots , s_{|I|})\) be an equilibrium of \((\varGamma , {\mathcal {T}})\) and let \((s_{1}^{\triangledown }, \ldots , s_{|I|}^{\triangledown }) \) be an equilibrium of the difference game \( (\varGamma , ({\mathcal {T}}^{*} \backslash {\mathcal {T}}))\). Construct a strategy, viz. \(s_{i}^{*}\), for \((\varGamma , {\mathcal {T}}^{*})\), as follows. For each \(t_{i} \in T_{i}\), let \(s_{i}^{*}(h_{i}(t_{i})) = s_{i}(t_{i})\). (This is well defined since each \(h_{i}\) is injective.) For each \(t_{i}^{*} \in T_{i}^{*} \backslash h_{i}(T_{i})\), let \(s_{i}^{*}(t_{i}^{*}) = s_{i}^{\triangledown }(t_{i}^{*})\). We now show that the constructed \((s_{1}^{*}, \ldots , s_{|I|}^{*})\) is a Bayesian equilibrium for \((\varGamma , {\mathcal {T}}^{*})\). Condition 1 of Definition 3 follows from Lemma 16. Thus, we focus on Condition 2 of Definition 3.

First, fix a type \(h_{i}(t_{i}) \in h_{i}(T_{i})\). For an action \(c_{i} \in C_{i}\), the Change of Variables Theorem (e.g., Billingsley 2008, Theorem 16.12) gives that

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \pi _{i} ( \theta , c_{i}, s_{-i}^{*} (t_{-i}^{*}) ) \mathrm{d}\beta _{i}^{*} ( h_{i}(t_{i}) ) =\int _{\varTheta \times T_{-i}} \pi _{i} ( \theta , c_{i}, s_{-i} ( t_{-i}) ) \mathrm{d}\beta _{i} ( t_{i} ). \end{aligned}$$

So, using the fact that \( (s_{1}, \ldots , s_{|I|}) \) is a Bayesian Equilibrium of \( ( \varGamma , {\mathcal {T}}) \),

$$\begin{aligned}&\int _{\varTheta \times T_{-i}^{*}} \pi _{i} ( \theta , s_{i}^{*} ( h_{i} ( t_{i} ) ), s_{-i}^{*} ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{*} ( h_{i} ( t_{i}) ) \nonumber \\&\quad \ge \int _{\varTheta \times T_{-i}^{*}} \pi _{i} ( \theta , c_{i}, s_{-i}^{*} ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{*} ( h_{i} ( t_{i} ) ), \end{aligned}$$
(3)

for all \(c_{i}\). Likewise, given a type \(t_{i}^{*} \in T_{i}^{*} \backslash h_{i}(T_{i}) \) and a choice \(c_{i} \in C_{i}\),

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \pi _{i} ( \theta , c_{i}, s_{-i}^{*} ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{*} ( t_{i}^{*} ) =\int _{\varTheta \times \prod _{j\ne i}(T_{j}^{*}\backslash h_{j}(T_{j}))}\pi _{i} ( \theta , c_{i}, s_{-i}^{\triangledown } ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{\triangledown } ( t_{i}^{*} ). \end{aligned}$$

So, using the fact that \( ( s_{1}^{\triangledown },\ldots ,s_{|I|}^{\triangledown } ) \) is a Bayesian equilibrium of \( ( \varGamma ,{\mathcal {T}}^{*}\backslash {\mathcal {T}} ) \),

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}}\pi _{i} ( \theta , s_{i}^{*} ( t_{i}^{*} ), s_{-i}^{*} ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{*} ( t_{i}^{*} ) \ge \int _{\varTheta \times T_{-i}^{*}}\pi _{i} ( \theta , c_{i}, s_{-i}^{*} ( t_{-i}^{*} ) ) \mathrm{d}\beta _{i}^{*} ( t_{i}^{*} ) \text {,} \end{aligned}$$
(4)

for all choices \(c_{i}\). Taking Eqs. 3 and 4 together establishes Condition 2 of Definition 3. \(\square \)

Proof of Proposition 4

Suppose \( \langle {\mathcal {T}},{\mathcal {T}}^{*} \rangle \) satisfies the Extension Property for \(\varGamma \). Then it is immediate that there is an equilibrium for the Bayesian game \( (\varGamma , {\mathcal {T}}^{*})\). Conversely, suppose there is an equilibrium for the Bayesian game \((\varGamma , {\mathcal {T}}^{*}) \). By the Pull-Back Property (Proposition 1), there is an equilibrium for the difference game \( (\varGamma , ( {\mathcal {T}}^{*}\backslash {\mathcal {T}} )) \). Now, using Lemma 17, \( \langle {\mathcal {T}},{\mathcal {T}}^{*} \rangle \) satisfies the Equilibrium Extension Property for \(\varGamma \).

\(\square \)

Proof of Proposition 5

Immediate from Lemma 8 and Proposition 4. \(\square \)

1.2 Proofs for Sect. 6.2

This appendix is devoted to proving Proposition 6. Throughout, we make use of the following notational conventions: Given sets \(\varOmega _{1},\ldots , \varOmega _{|I|}\) and some subset \(K\subseteq \{ 1,\ldots , |I| \} \), write \(\varOmega _{K}=\prod _{k\in K}\varOmega _{k}\) and write \(\omega _{K}\) for a profile in \(\varOmega _{K}\). Likewise, given maps \(f_{1},\ldots , f_{|I|}\), where each \(f_{i}:\varOmega _{i} \rightarrow \varPhi _{i}\), write \(f_{K}:\varOmega _{K} \rightarrow \varPhi _{K}\) for the associated product map.

Suppose \({\mathcal {T}} = (\varTheta , (T_{i}, \beta _{i})_{i \in I}) \) can be embedded into \({\mathcal {T}}^{* }=(\varTheta , (T_{i}^{*}, \beta _{i}^{*})_{i \in I}) \) via \( ( h_{1},\ldots , h_{|I|} ) \). By Lemma 7, it suffices to assume that some \(h_{i}\) is not surjective. So, some \(T_{i}^{*}\backslash h_{i} (T_{i} )\) is non-empty. Moreover, we assume that each \(T_{i}^{*}\backslash h_{i} (T_{i} )\) is (at most) countable (and possibly empty). Order players so that (a) for each \(i=1,\ldots , J\), \(T_{i}^{* }\backslash h_{i} ( T_{i} ) \ne \emptyset \) and (b) for each \(i=J+1,\ldots , |I|\), \(T_{i}^{* }\backslash h_{i} ( T_{i} ) =\emptyset \) (if \(J<|I|\)). For each \(i=1,\ldots , J\), write \(M(i)\) for the cardinality of \(T_{i}^{* }\backslash h_{i} ( T_{i} )\) and \(m(i)\) for some element of \(T_{i}^{* }\backslash h_{i} ( T_{i} ) \). By assumption, \(M(i)\) is (at most) countable.

Consider a \(\varTheta \)-based compact and continuous game \(\varGamma = ( (C_{i}, \pi _{i})_{i \in I})\). Throughout this appendix, we fix a universally measurable equilibrium of the Bayesian game \( (\varGamma , {\mathcal {T}} ) \), viz. \((s_{1},\ldots , s_{|I|} )\). We want to show that there is a universally measurable equilibrium of the Bayesian game \( (\varGamma , {\mathcal {T}}^{* } ) \), viz. \(( s_{1}^{* },\ldots ,s_{|I|}^{* }) \), with \( ( s_{1},\ldots , s_{|I|} ) = (s_{1}^{* }\circ h_{1},\ldots , s_{|I|}^{* }\circ h_{|I|} ) \).

Section 6.2 gives the idea of the proof. In particular, we begin by constructing the game of complete information, namely G. The game has a finite or countable number of players, corresponding to \(\bigcup _{i=1}^{J}T_{i}^{* }\backslash h_{i} ( T_{i} ) \). The choice set for a player \(m(i)\in T_{i}^{*}\backslash h_{i} ( T_{i} ) \) is \(C_{i}\). Write \({\mathcal {C}}_{i}\) for the set \( [ C_{i} ] ^{M ( i ) }\), so that \({\mathcal {C}}=\prod _{i=1}^{J}{\mathcal {C}}_{i}\) is the set of choice profiles in this game. Note we can think of \(\overrightarrow{c}_{i}=(c_{i}^{1},c_{i}^{2},\ldots )\in {\mathcal {C}}_{i}\) as a mapping \(\overrightarrow{c}_{i}:T_{i}^{* } \backslash h_{i} ( T_{i} ) \rightarrow C_{i}\). So, when we write \(\overrightarrow{c}_{i} ( t_{i}^{* } )\) we mean the \(t_{i}^{* }\)-th component of \(\overrightarrow{c}_{i}=(c_{i}^{1},c_{i}^{2},\ldots )\). Likewise, given a subset of players \(K\subseteq \{ 1,\ldots , J \} \), we can think of the mapping \(\overrightarrow{c}_{K}:\prod _{i\in K}(T_{i}^{* }\backslash h_{i} (T_{i} ) ) \rightarrow C_{K}\). Write \(\overrightarrow{c}_{K} ( t_{K}^{*} ) \) for the profile in \(C_{K}\) with \(\overrightarrow{c}_{K} ( t_{K}^{*} ) = ( \overrightarrow{c}_{i} ( t_{i}^{* } ) :i\in K ) \). Note we endow \(T_{i}^{* }\backslash h_{i} ( T_{i} ) \) with the discrete topology and so the mapping \(\overrightarrow{c}_{i}\) is continuous.

We now want to define a payoff function \(u_{m(i)}:{\mathcal {C}} \rightarrow {\mathbb {R}}\) for player m(i) (in the game G). To do so, it will be useful to first define auxiliary (payoff) functions for m(i) that depend on subsets of players. The function \(u_{m(i)}\) will be, effectively, the sum of these auxiliary functions.

Fix some player i and consider a subset K of players not containing i, i.e., some \(K\subseteq \{ 1,\ldots , J \} \backslash \{i\} \). Write \(K^{c}= \{ 1,\ldots , |I| \} \backslash (K\cup \{ i\})\), i.e., all players that are not in \(K\cup \{i\} \). Let us give the loose idea: We will construct a function \(v_{m(i)}[K]\) that takes choice profiles for members of K and a choice for m(i), and maps them into a payoff for player m(i). When we do so, we will assume that players in \(K^{c}\) (if there are any) play according to the equilibrium profile. For instance, if \(|I|=J=3\) and \(i=1\), then we can have K be either \(\emptyset \), \( \{ 2 \} \), \( \{ 3 \} \), or \( \{ 2,3 \} \). Consider the case of \(K = \{ 2\} \). We will have \(v_{m(1)}[\{2\}]:C_{1}\times {\mathcal {C}}_{2} \rightarrow {\mathbb {R}}\), so that we are computing expected payoffs for m(1) when types for player 2 are in \(T_{2}^{* }\backslash h_{2}(T_{2})\) and types for player 3 are in \(h_{3} (T_{3}) \). Because (for this subset K) types for player 2 are in \(T_{2}^{* }\backslash h_{2} ( T_{2} ) \), \(v_{m(1)}[\{2\}]\) maps a choice for player m(1) plus choices for players in \(T_{2}^{* }\backslash h_{2}(T_{2}) \), i.e., \(C_{1}\times {\mathcal {C}}_{2}\), into a payoff. Because (for this subset K) types for player 3 are in \(h_{3}(T_{3}) \), we assume they play according to the given equilibrium.

Once we have the functions \(v_{m(i)}[K]\) for all subsets \(K\subseteq \{ 1,\ldots , J\} \backslash \{i\}\), we can extend these functions to a function \(u_{m(i)}:{\mathcal {C}}\rightarrow {\mathbb {R}}\). Specifically, set \(u_{m(i)} = \sum _{K \subseteq \{ 1,\ldots , J\} \backslash \{i\}}[ v_{m(i)}[K] \circ {\mathrm {proj\,}}_{C_{i} \times {\mathcal {C}}_{K}}]\), where we write \({\mathrm {proj\,}}_{C_{i} \times {\mathcal {C}}_{K}}: {\mathcal {C}}\rightarrow C_{i} \times {\mathcal {C}}_{K}\) for the projection map. The functions \(u_{m(i)}\) are the payoff functions for the game G.
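For concreteness, in the three-player illustration above (\(|I|=J=3\) and \(i=1\)), writing \(c_{1} = \overrightarrow{c}_{1}(m(1))\) for the coordinate of \(\overrightarrow{c}_{1}\) that corresponds to player m(1), the definition reads

$$\begin{aligned} u_{m(1)}(\overrightarrow{c})&= v_{m(1)}[\emptyset ](c_{1}) + v_{m(1)}[\{2\}](c_{1},\overrightarrow{c}_{2}) \\&\quad + v_{m(1)}[\{3\}](c_{1},\overrightarrow{c}_{3}) + v_{m(1)}[\{2,3\}](c_{1},\overrightarrow{c}_{2},\overrightarrow{c}_{3}), \end{aligned}$$

where \(\overrightarrow{c}_{2}\) and \(\overrightarrow{c}_{3}\) collect the choices of the players in \(T_{2}^{*}\backslash h_{2}(T_{2})\) and \(T_{3}^{*}\backslash h_{3}(T_{3})\), respectively.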

Now, let’s specify the functions \(v_{m(i)}[K]\). To do so, it will be useful to recall that, for each \(j=1,\ldots , |I|\), \(h_{j}:T_{j} \rightarrow T_{j}^{* }\) is injective and bimeasurable. As such, we can define a bimeasurable map \(g_{j}: h_{j}(T_{j}) \rightarrow T_{j}\) so that \(g_{j}(h_{j}(t_{j}))=t_{j}\). Now, fix a \(K \subseteq \{1,\ldots ,J\} \backslash \{i\}\). Let \(v_{m(i)}[K]: C_{i} \times {\mathcal {C}}_{K} \rightarrow {\mathbb {R}}\) be such that

$$\begin{aligned}&v_{m(i)}[K] (c_{i},\overrightarrow{c}_{K})\\&\quad =\int _{\varTheta \times \prod _{j \in K}(T_{j}^{*} \backslash h_{j}(T_{j})) \times h_{K^{c}}(T_{K^{c}})} \pi _{i}(\theta ,c_{i}, \overrightarrow{c}_{K}(t_{K}^{*}), s_{K^{c}} (g_{K^{c}}(t_{K^{c}}^{*}))) \mathrm{d}\beta _{i}^{*}(m(i)). \end{aligned}$$

(Note if \(K=\emptyset \), then we take the convention that \(\varTheta \times \prod _{j \in K} (T_{j}^{*} \backslash h_{j}(T_{j})) \times h_{K^{c}}(T_{K^{c}}) =\varTheta \times h_{K^{c}}(T_{K^{c}})\), so that \(v_{m(i)}[K] \) reduces to a mapping from \(C_{i}\) to \({\mathbb {R}}\). If \(K^{c}=\emptyset \), then we take the convention that \(\varTheta \times \prod _{j \in K} (T_{j}^{*} \backslash h_{j}(T_{j})) \times h_{K^{c}}(T_{K^{c}}) = \varTheta \times \prod _{j \in K} (T_{j}^{*} \backslash h_{j}(T_{j}))\), so that the term \(s_{K^{c}}(g_{K^{c}}(t_{K^{c}}^{*}))\) no longer appears in the integrand.)

We begin by showing that each \(v_{m(i)}[K]\) is continuous. For this, we will need a mathematical result.

Lemma 18

Fix metrizable spaces \(\varOmega _{1},\varOmega _{2}\). Let \(\mu \in \varDelta (\varOmega _2)\) and \(f:\varOmega _{1}\times \varOmega _{2} \rightarrow {\mathbb {R}}\) be a bounded function so that each \(f(\omega _{1},\cdot ): \varOmega _{2} \rightarrow {\mathbb {R}}\) is \(\mu \)-measurable and each \(f(\cdot ,\omega _{2}) :\varOmega _{1} \rightarrow {\mathbb {R}}\) is continuous. Define \(F:\varOmega _{1} \rightarrow {\mathbb {R}}\) so that

$$\begin{aligned} F(\omega _{1}) = \int _{E_{2}} f(\omega _{1},\omega _{2}) \mathrm{d}\mu \text {,} \end{aligned}$$

where \(E_{2} \in {\mathcal {B}}(\varOmega _{2})\). Then, F is a bounded continuous function.

Proof

The fact that F is bounded follows directly from the fact that f is bounded and \(\mu (E_{2}) \le 1\). We focus on showing that F is continuous. For this, fix a sequence \((\omega _{1}^{n}:n=1,2,\ldots )\) contained in \(\varOmega _{1}\) and suppose \(\omega _{1}^{n} \rightarrow \omega _{1}^{* }\). To show that F is continuous, it suffices to show that \(F(\omega _{1}^{n}) \rightarrow F(\omega _{1}^{*})\).

Write \(f^{*} (\cdot ) :\varOmega _{2} \rightarrow {\mathbb {R}}\) for the \(\omega _{1}^{* }\)-section of the map f. Also, for each n, write \(f^{n} ( \cdot ) :\varOmega _{2} \rightarrow {\mathbb {R}}\) for the \(\omega _{1}^{n}\)-section of the map f. By assumption, each of \(f^{* },f^{1},f^{2},\ldots \) is \(\mu \)-measurable. Moreover, since f is bounded, \( f^{* }\) is bounded and the sequence \( ( f^{n}:n=1,2,\ldots ) \) is uniformly bounded. Given this, it suffices to show that \(f^{n} \rightarrow f^{* }\) pointwise (that is, \(f^{n}(\omega _2) \rightarrow f^{* }(\omega _2)\) for all \(\omega _2 \in \varOmega _2\)). If so, then, by the dominated convergence theorem (see Aliprantis and Border 2007, p. 407), \(F ( \omega _{1}^{n} ) \rightarrow F ( \omega _{1}^{* } ) \).

To show that \(f^{n} \rightarrow f^{* }\): Note that \(\omega _{1}^{n} \rightarrow \omega _{1}^{* }\). It follows from the fact that each \( f ( \cdot , \omega _{2} ) \) is continuous that \(f^{n} \rightarrow f^{* }\). \(\square \)

Lemma 19

For each \(m(i) \in T_{i}^{* }\backslash h_{i}(T_{i}) \) and each \(K\subseteq \{ 1,\ldots ,J\} \backslash \{i\} \), \(v_{m(i)}[K]: C_{i}\times {\mathcal {C}}_{K} \rightarrow {\mathbb {R}}\) is continuous.

Proof

Define a mapping \(f_{i}[K]:C_{i} \times {\mathcal {C}}_{K} \times \varTheta \times \prod _{j \in K} (T_{j}^{* }\backslash h_{j}(T_{j})) \times h_{K^{c}} (T_{K^{c}} ) \rightarrow {\mathbb {R}}\) so that

$$\begin{aligned} f_{i}[K] (c_{i},\overrightarrow{c}_{K}, \theta , t_{K}^{*}, t_{K^{C}}^{*}) = \pi _{i} (\theta , c_{i}, \overrightarrow{c}_{K}(t_{K}^{*}), s_{K^{c}}(g_{K^{c}}(t_{K^{c}}^{*}))). \end{aligned}$$

Certainly, then, \(f_{i}[K] \) is bounded. We will show that each \(f_{i}[K](c_{i},\overrightarrow{c}_{K},\cdot )\) is universally measurable and each \(f_{i}[K](\cdot , \theta , t_{K}^{* }, t_{K^{C}}^{* } )\) is continuous. Then the result follows from Lemma 18 and the fact that

$$\begin{aligned}&v_{m(i)}[K] (c_{i},\overrightarrow{c}_{K}) \\&\quad =\int _{\varTheta \times (\prod _{j \in K} (T_{j}^{*} \backslash h_{j}(T_{j}))) \times h_{K^{c}}(T_{K^{c}})} f_{i}[K](c_{i},\overrightarrow{c}_{K},\theta , t_{K}^{*},t_{K^{C}}^{*}) \mathrm{d}\beta _{i}^{*}(m(i)). \end{aligned}$$

First we show that, for each \((c_{i},\overrightarrow{c}_{K})\), \(f_{i}[K](c_{i},\overrightarrow{c}_{K},\cdot )\) is universally measurable: Write

$$\begin{aligned} F_{i}[c_{i},\overrightarrow{c}_{K}]: \varTheta \times \prod _{j \in K} (T_{j}^{* }\backslash h_{j}(T_{j})) \times h_{K^{c}} (T_{K^{c}} ) \rightarrow \varTheta \times \{c_{i}\} \times C_{K} \times \prod _{j \in K^{C}} \varDelta (C_j) \end{aligned}$$

for the mapping \((\theta , t_{K}^{*}, t_{K^{c}}^{*}) \mapsto (\theta , c_{i}, \overrightarrow{c}_{K} (t_{K}^{*}), s_{K^{c}}(g_{K^{c}}(t_{K^{c}}^{*}))) \). To show that \(f_{i}[K](c_{i},\overrightarrow{c}_{K},\cdot )\) is universally measurable, it suffices to show that \(F_{i}[c_{i},\overrightarrow{c}_{K}] \) is universally measurable: Then \(f_{i}[K]( c_{i},\overrightarrow{c}_{K},\cdot ) = \pi _{i} \circ F_{i}[c_{i},\overrightarrow{c}_{K}] \) is the composite of universally measurable maps and so universally measurable.

To see that \(F_{i}[c_{i},\overrightarrow{c}_{K}] \) is universally measurable: Applying Lemma 9 and the fact that each \(s_{j}\) is universally measurable, \(s_{K^{c}}\) is universally measurable. So, the restriction of \(s_{K^{c}}\) to the domain \(h_{K^{c}}(T_{K^{c}})\) is universally measurable. Now, note that \(F_{i}[c_{i},\overrightarrow{c}_{K}] \) is the product of universally measurable maps, each of which has a separable metrizable domain. So, again applying Lemma 9, \(F_{i}[c_{i},\overrightarrow{c}_{K}] \) is universally measurable.

Next we show that, for each \( (\theta , t_{K}^{* }, t_{K^{C}}^{*})\), \(f_{i}[K](\cdot , \theta , t_{K}^{*}, t_{K^{C}}^{*})\) is continuous: For this, suppose that \((c_{i}^{n},\overrightarrow{c}_{K}^{n}) \rightarrow (c_{i},\overrightarrow{c}_{K})\). Then, note that \((\theta , c_{i}^{n}, \overrightarrow{c}_{K}^{n}(t_{K}^{*}), s_{K^{c}}(g_{K^{c}}(t_{K^{c}}^{*}))) \rightarrow (\theta , c_{i}, \overrightarrow{c}_{K}(t_{K}^{*}), s_{K^{c}}(g_{K^{c}}(t_{K^{c}}^{*})) )\). So, using the continuity of \(\pi _{i}\), \(f_{i}[ K ](c_{i}^{n},\overrightarrow{c}_{K}^{n},\theta ,t_{K}^{* },t_{K^{C}}^{*}) \rightarrow f_{i}[K](c_{i},\overrightarrow{c}_{K},\theta , t_{K}^{* },t_{K^{C}}^{*})\), as required. \(\square \)

Note that the proof of Lemma 19 explicitly uses the fact that each \(s_{i}\) is universally measurable. We do not know whether the result would still obtain if we instead assumed only that each \(\varPi _{i}[c_{i},s_{-i}]\) is universally measurable.

Lemma 20

The map \(u_{m(i)}\) is continuous.

Proof

Each \({\mathrm {proj\,}}_{C_{i} \times {\mathcal {C}}_{K}}\) is a continuous function. So, by Lemma 19, each \(v_{m(i)}[K] \circ {\mathrm {proj\,}}_{C_{i} \times {\mathcal {C}}_{K}}\) is a continuous function. Thus, \(u_{m(i)}\) is a finite sum of continuous functions and so continuous. \(\square \)
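In other words, the decomposition used here, summing over the (finitely many) subsets \(K \subseteq \{1,\ldots ,J\}\backslash \{i\}\) from the construction of G, is

$$\begin{aligned} u_{m(i)} = \sum _{K \subseteq \{1,\ldots ,J\}\backslash \{i\}} v_{m(i)}[K] \circ {\mathrm {proj\,}}_{C_{i} \times {\mathcal {C}}_{K}}. \end{aligned}$$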

Write \({\mathcal {D}}_{i}\) for the set \([\varDelta (C_{i})]^{M(i)}\) and write \(\overrightarrow{\sigma }_{i}\) for an arbitrary element of \({\mathcal {D}}_{i}\). Take \({\mathcal {D}}=\prod _{i=1}^{J} {\mathcal {D}}_{i}\) and write \(\overrightarrow{\sigma }=(\overrightarrow{\sigma }_{1},\ldots ,\overrightarrow{\sigma }_{J})\) for an arbitrary element of \({\mathcal {D}}\). For a given player m(i), take \({\mathcal {D}}_{-m(i)}\) to be \( [\varDelta (C_{i})]^{(M(i)-1)} \times \prod _{j \ne i} {\mathcal {D}}_{j}\) if M(i) is finite and to be \({\mathcal {D}}\) if M(i) is (countably) infinite. An arbitrary element of \({\mathcal {D}}_{-m(i)}\) will be denoted \(\overrightarrow{\sigma }_{-m(i)}\).

Extend payoff functions to \(u_{m(i)}: {\mathcal {D}}\rightarrow {\mathbb {R}}\) in the usual way. Note the extended functions remain continuous. (Use, e.g., Fristedt and Gray 1996, Theorem 20, Chapter 18 and the definition of weak convergence.)
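Here, "the usual way" amounts to taking the expectation of \(u_{m(i)}\) with respect to the product of the individual mixtures: for \(\overrightarrow{\sigma } \in {\mathcal {D}}\),

$$\begin{aligned} u_{m(i)}(\overrightarrow{\sigma }) = \int u_{m(i)}\, \mathrm{d}\Big (\bigotimes _{j=1}^{J} \bigotimes _{m(j)=1}^{M(j)} \sigma _{m(j)}\Big ). \end{aligned}$$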

Lemma 21

There exists some mixed choice equilibrium for the game G.

Proof

For each player m(i), define a best response correspondence \({\mathrm {BR\,}}_{m(i)}: {\mathcal {D}}_{-m(i)} \twoheadrightarrow \varDelta (C_{i})\) so that

$$\begin{aligned} {\mathrm {BR\,}}_{m(i)}(\overrightarrow{\sigma }_{-m(i)}) = \arg \max _{\sigma _{m(i)} \in \varDelta (C_{i})} u_{m(i)}(\sigma _{m(i)},\overrightarrow{\sigma }_{-m(i)}). \end{aligned}$$

Extend this correspondence to a best response correspondence \({\mathbb {BR\,}}_{m(i)}: {\mathcal {D}}\twoheadrightarrow {\mathcal {D}}\) so that

$$\begin{aligned} {\mathbb {BR\,}}_{m(i)}(\sigma _{m(i)},\overrightarrow{\sigma }_{-m(i)}) = {\mathrm {BR\,}}_{m(i)}(\overrightarrow{\sigma }_{-m(i)}) \times {\mathcal {D}}_{-m(i)}. \end{aligned}$$

Define \({\mathbb {BR\,}}: {\mathcal {D}}\twoheadrightarrow {\mathcal {D}}\) so that \({\mathbb {BR\,}}(\overrightarrow{\sigma }) = \bigcap _{i=1}^{J} \bigcap _{m(i)=1}^{M(i)} {\mathbb {BR\,}}_{m(i)}(\overrightarrow{\sigma })\). To show that there is a mixed choice equilibrium of the game G, it suffices to show that there is a fixed point of \({\mathbb {BR\,}}\).
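Indeed, by construction, \(\overrightarrow{\sigma } \in {\mathbb {BR\,}}(\overrightarrow{\sigma })\) if and only if, for each player m(i),

$$\begin{aligned} \sigma _{m(i)} \in {\mathrm {BR\,}}_{m(i)}(\overrightarrow{\sigma }_{-m(i)}), \quad \text {i.e.,}\quad u_{m(i)}(\sigma _{m(i)},\overrightarrow{\sigma }_{-m(i)}) \ge u_{m(i)}(\sigma _{m(i)}^{\prime },\overrightarrow{\sigma }_{-m(i)}) \text { for all } \sigma _{m(i)}^{\prime } \in \varDelta (C_{i}), \end{aligned}$$

which is precisely the equilibrium condition for G.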

To show that there is a fixed point of \({\mathbb {BR\,}}\), we will apply Glicksberg’s (1952) Theorem. For this, it suffices to show that \({\mathcal {D}}\) is a non-empty, compact, convex subset of a convex Hausdorff linear topological space and that \({\mathbb {BR\,}}\) has a closed graph and is non-empty and convex valued.

Note that each \(\varDelta (C_{i})\) is a non-empty, compact, convex subset of a convex Hausdorff linear topological space. So, \({\mathcal {D}}\) satisfies the desired conditions. We focus on the properties of \({\mathbb {BR\,}}\).

First, we show that \({\mathbb {BR\,}}\) has a closed graph: By Berge’s maximum theorem (see 17.31 in Aliprantis and Border 2007), each \({\mathrm {BR\,}}_{m(i)}\) is compact valued and upper hemicontinuous. It follows that \({\mathbb {BR\,}}_{m(i)}\) is a compact valued and upper hemicontinuous correspondence to a Hausdorff space. So, applying Theorem 17.10 in Aliprantis and Border (2007), it follows that \({\mathbb {BR\,}}_{m(i)}\) has a closed graph. By Theorem 17.25 in Aliprantis and Border (2007), \({\mathbb {BR\,}}\) has a closed graph.

Next we show that \({\mathbb {BR\,}}\) is non-empty and convex valued: By Berge’s maximum theorem (see 17.31 in Aliprantis and Border 2007), for each m(i), \({\mathrm {BR\,}}_{m(i)}\) is non-empty valued. It is standard that \({\mathrm {BR\,}}_{m(i)}\) is convex valued. (This follows from the fact that payoffs are linear in mixtures of probabilities of choices.) It then follows from the construction that \({\mathbb {BR\,}}_{m(i)}\) and \({\mathbb {BR\,}}\) are non-empty and convex valued. \(\square \)
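Explicitly, the linearity invoked above is: for \(\sigma ,\sigma ^{\prime } \in {\mathrm {BR\,}}_{m(i)}(\overrightarrow{\sigma }_{-m(i)})\) and \(\lambda \in [0,1]\),

$$\begin{aligned} u_{m(i)}(\lambda \sigma +(1-\lambda )\sigma ^{\prime },\overrightarrow{\sigma }_{-m(i)}) =\lambda u_{m(i)}(\sigma ,\overrightarrow{\sigma }_{-m(i)}) +(1-\lambda )u_{m(i)}(\sigma ^{\prime },\overrightarrow{\sigma }_{-m(i)}), \end{aligned}$$

so \(\lambda \sigma + (1-\lambda )\sigma ^{\prime }\) attains the same maximal value and therefore also lies in \({\mathrm {BR\,}}_{m(i)}(\overrightarrow{\sigma }_{-m(i)})\).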

In what follows, we fix strategies \(r_{i}^{* }\) of \( (\varGamma , {\mathcal {T}}^{* } ) \) satisfying \(r_{i}^{*} \circ h_{i}=s_{i}\). Note such strategies are well defined since \(h_{i}\) is injective. If \(T_{i}^{*} \backslash h_{i} (T_{i} ) \ne \emptyset \), then given some \(r_{i}^{*}\) we write \(\overrightarrow{r}_{i}^{*}\) for \( (r_{i}^{*}(1), r_{i}^{*}(2), \ldots )\), i.e., the associated element of \({\mathcal {D}}_{i}\) played by types in \(T_{i}^{*} \backslash h_{i} (T_{i}) \) under \(r_{i}^{*}\). A standard argument establishes the next remark.

Remark 6

Fix some \(m(i) \in T_{i}^{*} \backslash h_{i}(T_{i})\). For any \((r_{1}^{* },\ldots , r_{|I|}^{*})\) with \( (r_{1}^{*}\circ h_{1}, \ldots , r_{|I|}^{*}\circ h_{|I|}) = (s_{1},\ldots ,s_{|I|})\),

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \varPi _{i}^{*}[r_{i}^{*}(m(i)), r_{-i}^{*} ]\text {d}\beta _{i}^{*}(m(i)) = u_{m(i)}(\overrightarrow{r}_{1}^{*},\ldots , \overrightarrow{r}_{J}^{*}). \end{aligned}$$

Conversely, given some \((\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J}) \in {\mathcal {D}}\), there is a unique strategy profile \( ( r_{1}^{*},\ldots , r_{|I|}^{* } )\) with \((\overrightarrow{r}_{1}^{*},\ldots , \overrightarrow{r}_{J}^{*})=(\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J})\) and \((r_{1}^{* }\circ h_{1},\ldots , r_{|I|}^{*}\circ h_{|I|} ) = ( s_{1},\ldots , s_{|I|} ) \). In this case,

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \varPi _{i}^{*}[r_{i}^{*}(m(i)), r_{-i}^{*} ]\text {d}\beta _{i}^{*}(m(i)) = u_{m(i)}(\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J}). \end{aligned}$$
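These identities rest on the partition of the domain of integration according to which opponents have types outside the image of h:

$$\begin{aligned} \varTheta \times T_{-i}^{*} = \bigcup _{K \subseteq \{1,\ldots ,J\}\backslash \{i\}} \varTheta \times \prod _{j \in K} (T_{j}^{*} \backslash h_{j}(T_{j})) \times h_{K^{c}}(T_{K^{c}}), \end{aligned}$$

where the union is disjoint up to the ordering of coordinates. On each cell, the integrand coincides with the integrand defining \(v_{m(i)}[K]\), so that summing over K delivers \(u_{m(i)}\).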

Lemma 22

Let \(\varOmega ,\varOmega ^{*}\) be Polish. If \(f:\varOmega \rightarrow \varOmega ^{*}\) is an embedding, then f maps sets in \({\mathcal {B}}_{\text {UM}}(\varOmega )\) to sets in \({\mathcal {B}}_{\text {UM}}(\varOmega ^{*})\).

Proof

Fix some \(E \in {\mathcal {B}}_{\text {UM}}(\varOmega )\) and some \(\mu ^{*} \in \varDelta (\varOmega ^{*})\). We will show that f(E) is \(\mu ^{*}\)-measurable.

Note that \(f(\varOmega ) \in {\mathcal {B}}(\varOmega ^{*})\), since f is an embedding. Thus, if \(\mu ^{*}(f(\varOmega )) = 0\), then \(\emptyset \subseteq f(E) \subseteq f(\varOmega )\) with \(\mu ^{*}(\emptyset )=\mu ^{*}(f(\varOmega ))=0\), i.e., \(f(E) \in {\mathcal {B}}(\varOmega ^{*};\mu ^{*})\). As such, take \(\mu ^{*}(f(\varOmega )) > 0\).

For each \(G \in {\mathcal {B}}(\varOmega )\), set \(\mu (G)= \frac{\mu ^{*}(f(G))}{\mu ^{*}(f(\varOmega ))}\). Given that f is an injective Borel map between Polish spaces (and, so, bimeasurable), this is well defined and defines a probability measure \(\mu \in \varDelta (\varOmega )\). Given that \(E \in {\mathcal {B}}_{\text {UM}}(\varOmega )\), there exist \(X,Y \in {\mathcal {B}}(\varOmega )\) so that \(X \subseteq E \subseteq Y\) and \(\mu (X)=\mu (Y)\). Since f is bimeasurable, \(f(X), f(Y) \in {\mathcal {B}}(\varOmega ^{*})\). By construction, \(f(X) \subseteq f(E) \subseteq f(Y)\) with \(\mu ^{*}(f(X))=\mu ^{*}(f(Y))\), as required. \(\square \)
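To see that the set function \(\mu \) defined in the proof is indeed a probability measure: \(\mu (\varOmega )=1\) by construction, and countable additivity follows because f is injective, so the images of pairwise disjoint Borel sets are pairwise disjoint. For pairwise disjoint \(G_{1},G_{2},\ldots \in {\mathcal {B}}(\varOmega )\),

$$\begin{aligned} \mu \Big (\bigcup _{n}G_{n}\Big ) =\frac{\mu ^{*}\big (\bigcup _{n}f(G_{n})\big )}{\mu ^{*}(f(\varOmega ))} =\sum _{n}\frac{\mu ^{*}(f(G_{n}))}{\mu ^{*}(f(\varOmega ))} =\sum _{n}\mu (G_{n}). \end{aligned}$$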

Proof of Proposition 6

Fix a universally measurable equilibrium \( ( s_{1},\ldots , s_{|I|} ) \) of the Bayesian game \( ( \varGamma , {\mathcal {T}} ) \). As above, construct the game G (based on \( ( s_{1},\ldots , s_{|I|} ) \)). By Lemma 21, there exists a mixed choice profile, viz. \((\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J})\), that is an equilibrium for the game G. Now, by Remark 6, we can find a strategy profile \( ( s_{1}^{* },\ldots , s_{|I|}^{* } ) \) so that \((\overrightarrow{s}_{1}^{* },\ldots , \overrightarrow{s}_{J}^{* })=(\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J})\) and \( ( s_{1}^{* }\circ h_{1},\ldots , s_{|I|}^{* }\circ h_{|I|} ) = ( s_{1},\ldots , s_{|I|} ) \). We will show that \( (s_{1}^{* },\ldots , s_{|I|}^{* } ) \) is a universally measurable equilibrium for the Bayesian game \( (\varGamma , {\mathcal {T}}^{* } ) \).

First we show that each \(s_{i}^{*}\) is universally measurable. Fix a Borel \(E_{i}\) in \(\varDelta (C_{i} ) \) and note that

$$\begin{aligned} (s_{i}^{*})^{-1} (E_{i}) = h_{i}((s_{i})^{-1}(E_{i})) \cup \{t_{i}^{*} \in T_{i}^{*}\backslash h_{i}(T_{i}): s_{i}^{*}(t_{i}^{*}) \in E_{i} \}. \end{aligned}$$

Since \(s_{i}\) is universally measurable, \((s_{i})^{-1}(E_{i}) \) is a universally measurable set. Using the fact that \(h_{i}\) is an embedding and Lemma 22, \(h_{i}((s_{i})^{-1}(E_{i}))\) is a universally measurable set. Next, notice that \(T_{i}^{* } \backslash h_{i}(T_{i})\) is countable (and possibly empty), so \( \{t_{i}^{*} \in T_{i}^{*} \backslash h_{i}(T_{i}): s_{i}^{* } (t_{i}^{*}) \in E_{i} \} \) is Borel: it is a countable union of singletons, each of which is closed in the metrizable space \(T_{i}^{*}\). It follows that \((s_{i}^{*})^{-1} (E_{i})\) is the union of two universally measurable sets and so universally measurable.

Now we show Condition 2 of Definition 3: First, fix some type \(h_{i}(t_{i}) \in h_{i}(T_{i}) \). Notice that, for each \(c_{i} \in C_{i}\),

$$\begin{aligned}&\int _{\varTheta \times T_{-i}^{*}} \pi _{i} (\theta , s_{i}^{*}(h_{i}(t_{i})), s_{-i}^{*}(t_{-i}^{*})) \mathrm{d}\beta _{i}^{*}(h_{i}(t_{i})) \\&\quad = \int _{\varTheta \times T_{-i}} \pi _{i}(\theta , s_{i}^{*}(h_{i}(t_{i})), s_{-i}^{*}(h_{-i}(t_{-i}))) \mathrm{d}\beta _{i}(t_{i}) \\&\quad \ge \int _{\varTheta \times T_{-i}} \pi _{i}(\theta , c_{i}, s_{-i}^{*}(h_{-i}(t_{-i}))) \mathrm{d}\beta _{i}(t_{i})\\&\quad = \int _{\varTheta \times T_{-i}^{*}} \pi _{i}(\theta , c_{i}, s_{-i}^{*}(t_{-i}^{*})) \mathrm{d}\beta _{i}^{*}(h_{i}(t_{i})), \end{aligned}$$

where the first and last equalities use the Change of Variables Theorem (Billingsley 2008, Theorem 16.12) plus the fact that \(h_{-i}\) is injective, and the inequality uses the fact that \((s_{1},\ldots , s_{|I|}) = (s_{1}^{* }\circ h_{1}, \ldots , s_{|I|}^{* }\circ h_{|I|}) \) is an equilibrium for the Bayesian game \(( \varGamma , {\mathcal {T}} ) \).
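Schematically, the change-of-variables step has the following form: if \(\beta _{i}^{*}(h_{i}(t_{i}))\) is the image measure of \(\beta _{i}(t_{i})\) under \(\mathrm {id}_{\varTheta }\times h_{-i}\) (the property of the embedding h underlying this step), then, for every bounded universally measurable \(\varphi :\varTheta \times T_{-i}^{*}\rightarrow {\mathbb {R}}\),

$$\begin{aligned} \int _{\varTheta \times T_{-i}^{*}} \varphi \, \mathrm{d}\beta _{i}^{*}(h_{i}(t_{i})) =\int _{\varTheta \times T_{-i}} \varphi \circ (\mathrm {id}_{\varTheta }\times h_{-i})\, \mathrm{d}\beta _{i}(t_{i}). \end{aligned}$$

The first and last equalities apply this with \(\varphi (\theta ,t_{-i}^{*})=\pi _{i}(\theta ,s_{i}^{*}(h_{i}(t_{i})),s_{-i}^{*}(t_{-i}^{*}))\) and with \(c_{i}\) in place of \(s_{i}^{*}(h_{i}(t_{i}))\), respectively.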

Next, fix some type \(t_{i}^{* }\in T_{i}^{* }\backslash h_{i} ( T_{i} ) \), if one exists. Here, Condition 2 follows from Remark 6 and the fact that \((\overrightarrow{\sigma }_{1},\ldots , \overrightarrow{\sigma }_{J})\) is an equilibrium of the constructed strategic form game G.

Thus, we have that \( ( s_{1}^{* },\ldots , s_{|I|}^{* } ) \) is a universally measurable Bayesian equilibrium of \( ( \varGamma , {\mathcal {T}}^{* } ) \). Moreover, \( ( s_{1}^{* }\circ h_{1},\ldots , s_{|I|}^{* }\circ h_{|I|} ) = ( s_{1},\ldots , s_{|I|} ) \), as required. \(\square \)
