Strict robustness to incomplete information

We study a strict version of the notion of equilibrium robustness by Kajii and Morris (Econometrica 65:1283–1309, 1997) that allows for a larger class of incomplete information perturbations of a given complete information game, where with high probability, players believe that their payoffs are close to (but may be different from) those of the complete information game. We show that a strict monotone potential maximizer of a complete information game is strictly robust if either the game or the associated strict monotone potential is supermodular, and that the converse also holds in all binary-action supermodular games.


Introduction
Equilibrium may be fragile to seemingly small departures from common knowledge (Rubinstein, 1989; Carlsson and van Damme, 1993). How, then, can an analyst predict strategic behavior when he is almost certain about players' payoff functions? Kajii and Morris (1997) formalized this question and proposed the notion of robust equilibrium: an equilibrium of a complete information game such that every "nearby" incomplete information game has at least one equilibrium that induces an action distribution close to the original equilibrium. In this definition, a "nearby" incomplete information game refers to a game where, with high probability, all players know that their own payoff functions are given by those of the complete information game. Kajii and Morris (1997) showed that if a complete information game has a unique correlated equilibrium or a p-dominant equilibrium with ∑ i∈I p i < 1 , then it is robust. Subsequent papers provided sufficient conditions for robustness in terms of potential maximizers (Ui, 2001) and generalized/monotone potential maximizers (Morris and Ui, 2005); Oyama and Takahashi (2020) showed the converse of the latter result in generic binary-action supermodular games.

In this paper, we propose the notion of strict robustness by adopting a larger class of "nearby" incomplete information games than Kajii and Morris (1997), where, with high probability, all players believe that their payoffs are close to, but may be different from, those of the complete information game. This class of incomplete information games arises naturally if the analyst assigns high probability to a given profile of payoff functions but allows for the possibility that players may not know their own payoff functions exactly. We then establish strict counterparts of the existing results on robustness (with appropriate modifications whenever necessary).
More specifically, we show that if the complete information game has a unique correlated equilibrium, it is the unique strictly robust equilibrium; a strict monotone potential maximizer is strictly robust if either the game or the associated strict monotone potential is supermodular; and finally, a strictly robust equilibrium must be a strict monotone potential maximizer in all binary-action supermodular games.
Our results for strict robustness are closely analogous to those for robustness in the sense of Kajii and Morris (1997). We think that strict robustness may be more natural than Kajii-Morris' robustness, but our main reason for pursuing the results in this paper is that the strict robustness notion connects more cleanly with other work. Indeed, the present study stemmed from our previous study (Morris et al., 2022) on (smallest equilibrium or full) implementation by information design in games, which built in part on the literature on higher order beliefs in general and incomplete information robustness in particular. The robustness question has a metaphorical interpretation: behind the scenes, there is an "adversarial" information designer who tries to design an information structure such that all of its equilibrium outcomes are bounded away from the analyst's prediction about the players' behavior; loosely speaking, the prediction is strictly robust only if the information designer is unable to design such an information structure. In the Appendix, we formalize this argument and, more generally, document the implications of the results in Morris et al. (2022) for strict robustness. In the context of information design, it is natural to allow the information designer to construct information structures in which players do not necessarily know their own payoffs; thus, our definitions of "nearby" incomplete information games, and hence of strict robustness, are compatible with information design problems.

Complete information games
A complete information game consists of a finite set I of players, a finite set A i of actions for each i ∈ I , and a payoff function g i ∶ A → ℝ for each i ∈ I , where A = ∏ i∈I A i . For brevity, we suppress I and A, and identify a complete information game with the profile g = (g i ) i∈I of its payoff functions.
An action distribution μ ∈ Δ(A) is a correlated equilibrium of g if it satisfies the obedience condition: ∑ a −i ∈A −i μ(a i , a −i )(g i (a i , a −i ) − g i (a′ i , a −i )) ≥ 0 for all i ∈ I and a i , a′ i ∈ A i .
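To make the obedience condition concrete, the following sketch checks it numerically for a small game. The helper name `is_correlated_equilibrium` and the example payoffs are our own illustration, not notation from the text.

```python
import itertools

def is_correlated_equilibrium(mu, payoffs, actions, tol=1e-9):
    """Check obedience: for each player i and each recommended action a_i,
    the expected gain from deviating to any b_i, weighted by mu conditional
    on the recommendation a_i, must be non-positive."""
    for i in range(len(actions)):
        for a_i in actions[i]:
            for b_i in actions[i]:
                if b_i == a_i:
                    continue
                gain = 0.0
                for a in itertools.product(*actions):
                    if a[i] != a_i:
                        continue
                    dev = a[:i] + (b_i,) + a[i + 1:]
                    gain += mu.get(a, 0.0) * (payoffs[i](dev) - payoffs[i](a))
                if gain > tol:
                    return False
    return True

# 2x2 coordination game: both players get 1 if they match, 0 otherwise.
actions = [(0, 1), (0, 1)]
g = [lambda a: 1.0 if a[0] == a[1] else 0.0] * 2
# Public-coin distribution over the two pure coordination outcomes.
mu = {(0, 0): 0.5, (1, 1): 0.5}
print(is_correlated_equilibrium(mu, g, actions))      # True
mu_bad = {(0, 1): 1.0}  # recommending miscoordination violates obedience
print(is_correlated_equilibrium(mu_bad, g, actions))  # False
```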

Incomplete information games
With the player set I and the action sets (A i ) i∈I fixed, an incomplete information game consists of a countable set Θ of payoff-relevant states, a bounded payoff function u i ∶ A × Θ → ℝ for each i ∈ I , a countable set T i of types for each i ∈ I , and a common prior P ∈ Δ(Θ × T) , where we denote T = ∏ i∈I T i and T −i = ∏ j≠i T j . We refer to an incomplete information game as (Θ, u, T, P) , where u = (u i ) i∈I . For each i ∈ I and t i ∈ T i , we assume without loss of generality that P(Θ × {t i } × T −i ) > 0 . A strategy for player i is a function σ i ∶ T i → Δ(A i ) . We denote by Σ i the set of strategies for i ∈ I , and write Σ = ∏ i∈I Σ i and Σ −i = ∏ j≠i Σ j . A strategy profile σ = (σ i ) i∈I ∈ Σ induces an action distribution μ σ P ∈ Δ(A) by μ σ P (a) = ∑ (θ,t)∈Θ×T P(θ, t) ∏ i∈I σ i (t i )(a i ).

A strategy profile σ ∈ Σ is a Bayes-Nash equilibrium of (Θ, u, T, P) if for all i ∈ I and t i ∈ T i , σ i (t i ) maximizes player i's expected payoff conditional on t i against σ −i = (σ j ) j≠i , where the domain of u i is extended to mixed action profiles in the usual way. By Kakutani's fixed point theorem (with, e.g., the product topology on Σ ), a Bayes-Nash equilibrium always exists.

Strict robustness
Now we introduce our class of incomplete information perturbations. Given a complete information game g and an incomplete information game (Θ, u, T, P) , for i ∈ I and δ ≥ 0 , let T i g i ,δ denote the set of all types of player i for which the expected payoff function differs from g i by at most δ, i.e., T i g i ,δ = {t i ∈ T i | |E[u i (a, θ) | t i ] − g i (a)| ≤ δ for all a ∈ A} , and write T g,δ = ∏ i∈I T i g i ,δ .

Definition 1 For ε ≥ 0 and δ ≥ 0 , an incomplete information game (Θ, u, T, P) is an (ε, δ)-elaboration of g if P(Θ × T g,δ ) ≥ 1 − ε.

Our notion of strict robustness requires an action distribution to be close to some equilibrium behavior in all (ε, δ)-elaborations for sufficiently small ε > 0 and δ > 0.
Definition 2 An action distribution μ ∈ Δ(A) is strictly robust in g if for every η > 0 , there exist ε > 0 and δ > 0 such that every (ε, δ)-elaboration (Θ, u, T, P) of g has a Bayes-Nash equilibrium σ such that max a∈A |μ σ P (a) − μ(a)| ≤ η.
An immediate implication of the definition is that if μ is strictly robust in g , then it must be the action distribution of an essential equilibrium of g in the sense of Wu and Jiang (1962) (hence a Nash equilibrium). Indeed, by the definition of strict robustness, it is necessary that for every η > 0 , there exists δ > 0 such that every (0, δ)-elaboration with |Θ| = 1 and |T i | = 1 for all i ∈ I , which is a complete information game in the δ-neighborhood of g , has an equilibrium α ∈ ∏ i∈I Δ(A i ) such that max a∈A |∏ i∈I α i (a i ) − μ(a)| ≤ η , which is precisely the definition of essential equilibrium.
Strict robustness strengthens robustness of Kajii and Morris (1997) by requiring robustness against a larger class of perturbations. In our language, Kajii and Morris (1997) only considered (ε, 0)-elaborations, where with probability at least 1 − ε , every player knows that his own payoffs are given by g i , whereas this "known own payoffs" constraint is relaxed for strict robustness. Formally, we say that an action distribution μ is KM-robust in g if for every η > 0 , there exists ε > 0 such that every (ε, 0)-elaboration of g has a Bayes-Nash equilibrium σ such that max a∈A |μ σ P (a) − μ(a)| ≤ η . A strictly robust equilibrium is KM-robust, while the converse does not hold.
Example 1 Consider the trivial game g with g i (a) = 0 for all i ∈ I and a ∈ A with |A| ≥ 2 . Then all product action distributions are KM-robust, but none of them is strictly robust: a (0, δ)-elaboration can perturb payoffs so that an arbitrary action is strictly dominant for each player, and different perturbations select different action profiles.

Kajii and Morris (1997, Proposition 3.2) showed that if a complete information game has a unique correlated equilibrium, then it is the unique KM-robust equilibrium in this game. We show that the unique correlated equilibrium is indeed strictly robust.

Games with unique correlated equilibria
Proposition 1 If g has a unique correlated equilibrium, then it is the unique strictly robust equilibrium of g.
Proof For any Bayes-Nash equilibrium σ of any (ε, δ)-elaboration (Θ, u, T, P) of g , the induced action distribution μ σ P satisfies the obedience condition for all i ∈ I and a i , a′ i ∈ A i up to a slack that vanishes as ε, δ → 0 . By passing to a subsequence, μ σ P converges to a correlated equilibrium of g as ε, δ → 0 . Thus, if g has a unique correlated equilibrium, then it must be strictly robust in g . ◻

Strict monotone potential games
Morris and Ui (2005, Proposition 2) provided a sufficient condition for KM-robustness in terms of monotone potential maximizer (MP maximizer). In this section, we show that a strict version of their condition implies strict robustness.
In what follows, we equip each action set A i with a linear order ≤ . For i ∈ I , λ i ∈ Δ(A −i ) , B i ⊂ A i , and a payoff function h ∶ A → ℝ , let br i (h, λ i , B i ) denote the set of player i's best responses against λ i when he is restricted to play within B i and his payoff function is given by h, i.e., br i (h, λ i , B i ) = arg max a i ∈B i ∑ a −i λ i (a −i )h(a i , a −i ) . Given a * ∈ A , write B i + = {a i ∈ A i | a i ≥ a * i } and B i − = {a i ∈ A i | a i ≤ a * i } . The definition of MP maximizer of Morris and Ui (2005) (of a simpler form) is as follows: a * ∈ A is an MP maximizer of g if there exists a function v ∶ A → ℝ with v(a * ) > v(a) for all a ≠ a * such that max br i (v, λ i , B i + ) ≥ min br i (g i , λ i , A i ) and min br i (v, λ i , B i − ) ≤ max br i (g i , λ i , A i ) for all i ∈ I and λ i ∈ Δ(A −i ) . We employ the strict version of MP maximizer due to Oyama et al. (2008): a * ∈ A is a strict MP maximizer of g if there exists v ∶ A → ℝ with v(a * ) > v(a) for all a ≠ a * such that max br i (v, λ i , B i + ) ≥ max br i (g i , λ i , A i ) and min br i (v, λ i , B i − ) ≤ min br i (g i , λ i , A i ) for all i ∈ I and λ i ∈ Δ(A −i ) . Such a function v is called a strict monotone potential for a * in g.
In words, a * is an MP maximizer of g if there exists a common payoff function v strictly maximized at a * such that every player i, if restricted to play above (resp. below) a * i , has at least one best response under v that is larger (resp. smaller) than or equal to some best response under g i . In contrast, strict MP maximizer requires that every player i, if restricted to play above (resp. below) a * i , have at least one best response under v that bounds all best responses under g i from above (resp. below).
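The best-response comparisons above can be illustrated numerically. The sketch below checks, on a grid of opponent beliefs (a finite proxy for all of Δ(A −i )), that a * = (1, 1) together with the game's exact potential as v satisfies the two bounding conditions in a symmetric 2 × 2 stag hunt; the function names and the example game are our own, not the paper's.

```python
def br(h, q, B):
    """Best responses within B ⊂ {0, 1} for a player with payoff table
    h[(own_action, opponent_action)], against a belief putting
    probability q on the opponent playing action 1."""
    vals = {a: q * h[(a, 1)] + (1 - q) * h[(a, 0)] for a in B}
    m = max(vals.values())
    return {a for a in B if vals[a] >= m - 1e-12}

# Symmetric stag hunt for both players, and its exact potential v
# (payoff differences of v coincide with those of g for each player).
g = {(1, 1): 2.0, (1, 0): 0.0, (0, 1): 0.0, (0, 0): 1.0}
v = {(1, 1): 1.0, (1, 0): -1.0, (0, 1): -1.0, (0, 0): 0.0}  # unique max at (1,1)

# Strict MP conditions at a* = (1, 1), checked on a belief grid:
# max br(v, q, B+) >= max br(g, q, A_i) and min br(v, q, B-) <= min br(g, q, A_i),
# where B+ = {1} and B- = {0, 1}.
ok = all(
    max(br(v, q, {1})) >= max(br(g, q, {0, 1}))
    and min(br(v, q, {0, 1})) <= min(br(g, q, {0, 1}))
    for q in [k / 100 for k in range(101)]
)
print(ok)  # True
```

The grid check is only a finite proxy: the definition quantifies over all beliefs, and in this 2 × 2 example the conditions can also be verified by hand at the single indifference belief q = 1/3.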
By definition, a strict MP maximizer is an MP maximizer. The converse holds in generic games, but not in general. For example, in the nongeneric game in Example 1, all action profiles are MP maximizers, but none of them is a strict MP maximizer.
A complete information game g is supermodular if for any i ∈ I , a i ≤ a′ i , and a −i ≤ a′ −i , g i (a′ i , a −i ) − g i (a i , a −i ) ≤ g i (a′ i , a′ −i ) − g i (a i , a′ −i ) . Morris and Ui (2005, Proposition 2) showed that an MP maximizer is KM-robust if either the complete information game or the monotone potential is supermodular. The following theorem shows that a strict MP maximizer is strictly robust under the same supermodularity assumption as in Morris and Ui (2005).
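As an illustration, the increasing-differences inequality can be verified by brute force in small finite games; the function below is our own sketch, with opponents' profiles compared componentwise.

```python
import itertools

def has_increasing_differences(payoff, actions, i):
    """payoff: function of a full action profile (tuple); actions: list of
    sorted tuples, one per player. Checks that for a_i < a_i' the difference
    payoff(a_i', a_-i) - payoff(a_i, a_-i) is weakly increasing in a_-i
    under the componentwise order on opponents' profiles."""
    opp = [actions[j] for j in range(len(actions)) if j != i]
    opp_profiles = list(itertools.product(*opp))

    def full(a_i, a_mi):
        return a_mi[:i] + (a_i,) + a_mi[i:]

    for lo, hi in itertools.combinations(actions[i], 2):  # lo < hi
        for x in opp_profiles:
            for y in opp_profiles:
                if all(xx <= yy for xx, yy in zip(x, y)):
                    d_x = payoff(full(hi, x)) - payoff(full(lo, x))
                    d_y = payoff(full(hi, y)) - payoff(full(lo, y))
                    if d_x > d_y + 1e-9:
                        return False
    return True

# Coordination game on {0,1}^2 is supermodular; matching-pennies-style
# anti-coordination is not (under this order).
actions = [(0, 1), (0, 1)]
g0 = lambda a: 1.0 if a[0] == a[1] else 0.0
print(all(has_increasing_differences(g0, actions, i) for i in range(2)))  # True
```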
Theorem 1 If a * ∈ A is a strict MP maximizer of g with strict monotone potential v, and either g or v is supermodular, then the degenerate distribution on a * is strictly robust in g.
The theorem immediately implies that a unique potential maximizer of a (weighted) potential game (Monderer and Shapley, 1996) is strictly robust if the game is supermodular. In the statement of Theorem 1, "strict MP maximizer" cannot be weakened to "MP maximizer". Indeed, in Example 1, the game is supermodular and all action profiles are MP maximizers, but none of them is strictly robust.
The proof of Theorem 1, which is given in Appendix A.1, proceeds almost in the same way as in Morris and Ui (2005). In one step of the proof, however, we use the property of a strict MP maximizer that the monotone relationship on best responses extends to approximate best responses (Lemma A.1), which allows us to absorb the uncertainty about own payoffs in (ε, δ)-elaborations.
For p = (p i ) i∈I ∈ [0, 1) I , an action profile a * ∈ A is a strictly p-dominant equilibrium in g if ∑ a −i λ i (a −i )(g i (a * i , a −i ) − g i (a i , a −i )) > 0 for all i ∈ I , a i ≠ a * i , and λ i ∈ Δ(A −i ) with λ i (a * −i ) ≥ p i . Strict p-dominance is a natural generalization of the classical condition of risk dominance for 2 × 2 coordination games to many-player, many-action games. If a * is strictly p-dominant in g with ∑ i∈I p i < 1 , then a * admits a strict monotone potential v in g ; moreover, v can be taken to be supermodular if we reorder actions so that a * i = max A i for all i ∈ I . Thus, together with Lemma 5.5 in Kajii and Morris (1997), Theorem 1 implies the following, which parallels the corresponding result for KM-robustness by Kajii and Morris (1997, Corollary 5.6).
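In a 2 × 2 coordination game the p-dominance threshold can be computed in closed form from the expected-gain inequality; the snippet below is an illustrative sketch (the function name and the stag-hunt payoffs are ours). For the symmetric stag hunt it recovers the risk-dominance threshold 1/3, so p 1 + p 2 = 2/3 < 1 and Corollary 2 applies to (1, 1).

```python
def p_threshold(gi):
    """gi[(own_action, opponent_action)]: a player's payoffs in a 2x2 game
    in which (1,1) and (0,0) are the strict Nash profiles. Returns the
    smallest q such that action 1 is a best response whenever the opponent
    plays 1 with probability at least q:
        q*(gi[1,1]-gi[0,1]) + (1-q)*(gi[1,0]-gi[0,0]) >= 0."""
    d1 = gi[(1, 1)] - gi[(0, 1)]  # gain from 1 when opponent plays 1
    d0 = gi[(1, 0)] - gi[(0, 0)]  # gain from 1 when opponent plays 0
    return -d0 / (d1 - d0)

# Symmetric stag hunt: payoff 2 at (1,1), 1 at (0,0), 0 otherwise.
gi = {(1, 1): 2.0, (1, 0): 0.0, (0, 1): 0.0, (0, 0): 1.0}
p = p_threshold(gi)
print(p, 2 * p < 1)  # threshold ~ 1/3, and p1 + p2 = 2/3 < 1
```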
Corollary 2 If a * is strictly p-dominant in g with ∑ i∈I p i < 1 , then the degenerate distribution on a * is the unique strictly robust equilibrium of g.

Binary-action supermodular games
In this subsection, we restrict our attention to binary-action supermodular (BAS) games: all players have binary action sets A i = {0, 1} , and the payoff functions are supermodular, i.e., for each i ∈ I , the payoff difference g i (1, a −i ) − g i (0, a −i ) is weakly increasing in a −i ∈ A −i . Oyama and Takahashi (2020, Theorem 3) showed that a KM-robust equilibrium is a strict MP maximizer in generic BAS games. Here, we show that a strictly robust equilibrium is a strict MP maximizer in all BAS games. Thus, together with Theorem 1, we establish the exact equivalence between strict robustness and strict MP maximization in this class of games.
The main tool in the proof is a certain obedience condition: it was implicitly introduced in Oyama and Takahashi (2020), and extended to an incomplete information environment and termed sequential obedience in Morris et al. (2022). Let Γ denote the set of all sequences of distinct players (including the null sequence), and for each i ∈ I , let Γ i denote the set of all sequences in Γ in which player i is listed. For each γ ∈ Γ and k = 0, 1 , let a k (γ) ∈ A denote the action profile such that player i plays action k if and only if i is listed in γ ; for each i ∈ I , γ ∈ Γ i , and k = 0, 1 , let a k −i (γ) ∈ A −i denote the action profile of player i's opponents such that player j ≠ i plays action k if and only if j is listed in γ before i (therefore, player j plays action 1 − k if and only if either j is not listed in γ or j is listed in γ after i).
Definition 5 Let g be a BAS game. Action distribution μ ∈ Δ(A) satisfies sequential obedience in g if there exists ν ∈ Δ(Γ) such that
(1) ∑ γ∈Γ i ν(γ)(g i (1, a 1 −i (γ)) − g i (0, a 1 −i (γ))) ≥ 0 for all i ∈ I
and μ(a) = ν({γ ∈ Γ | a 1 (γ) = a}) for all a ∈ A ; μ satisfies reverse sequential obedience in g if there exists ν′ ∈ Δ(Γ) such that
(2) ∑ γ∈Γ i ν′(γ)(g i (0, a 0 −i (γ)) − g i (1, a 0 −i (γ))) ≥ 0 for all i ∈ I
and μ(a) = ν′({γ ∈ Γ | a 0 (γ) = a}) for all a ∈ A.
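The sequential obedience inequalities can be checked directly in small games. The sketch below computes, for a given distribution ν over player sequences, each player's ν-weighted gain from switching to action 1 when called (with only earlier-listed players having already switched); sequential obedience requires these gains to be nonnegative. The names and the two-player example are our own illustration.

```python
def seq_obedience_gain(g, n, nu):
    """For each player i, compute the nu-weighted expected gain from
    switching to action 1 when called in the sequence, given that only
    the players listed before i have switched (all others still play 0).
    Sequential obedience requires every entry to be >= 0."""
    gains = [0.0] * n
    for seq, w in nu.items():
        for pos, i in enumerate(seq):
            before = set(seq[:pos])
            a0 = tuple(1 if j in before else 0 for j in range(n))               # i still at 0
            a1 = tuple(1 if (j in before or j == i) else 0 for j in range(n))   # i switches
            gains[i] += w * (g[i](a1) - g[i](a0))
    return gains

# Two-player stag hunt: payoff 2 at (1,1), 1 at (0,0), 0 otherwise.
g = [lambda a: 2.0 if a == (1, 1) else (1.0 if a == (0, 0) else 0.0)] * 2
# nu mixes the two full orderings equally; it induces the degenerate
# distribution on (1,1), since a1(gamma) = (1,1) for both sequences.
nu = {(0, 1): 0.5, (1, 0): 0.5}
gains = seq_obedience_gain(g, 2, nu)
print(gains)  # [0.5, 0.5]: both inequalities hold with slack
```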
In any BAS game, there exists at least one (in fact, degenerate) action distribution that satisfies both sequential obedience and reverse sequential obedience (Morris et al., 2022, Proposition B.2). Uniqueness of such an action distribution characterizes the existence of a strict MP maximizer.
Proposition 3 In any BAS game g , a * is a strict MP maximizer if and only if the degenerate action distribution on a * is the unique action distribution that satisfies sequential obedience and reverse sequential obedience.
With this characterization, we can show the converse of Theorem 1 for all BAS games.
Theorem 2 In any BAS game g , an action distribution is strictly robust if and only if it is degenerate on a strict MP maximizer.
By Proposition 3, proving the "only if" direction of Theorem 2 reduces to proving that if an action distribution μ is strictly robust, then it must be the unique action distribution that satisfies sequential obedience and reverse sequential obedience. In Appendix A.3, we prove the latter statement by showing that for any action distribution μ that satisfies sequential obedience and reverse sequential obedience, there exists an (ε, δ)-elaboration whose unique Bayes-Nash equilibrium induces an action distribution in a neighborhood of μ : the existence of an action distribution that satisfies sequential obedience and reverse sequential obedience is guaranteed (see Appendix A.2), and if there are multiple such action distributions, then no action distribution is strictly robust.
The arguments in Appendix A.3 are an application of those developed by Morris et al. (2022) in the framework of information design. There, a (finite) state space Θ and a profile u = (u i ) i∈I of payoff functions u i ∶ A × Θ → ℝ are exogenously given, and a prior on Θ is fixed. A version of the question studied in Morris et al. (2022) concerns what outcomes can be implemented as a unique Bayes-Nash equilibrium in some information structure (full implementability). Under a dominance state assumption that each action is a dominant action at some state, Morris et al. (2022) showed that an outcome is fully implementable if and almost only if it satisfies strict and incomplete information versions of sequential obedience and reverse sequential obedience. We apply this characterization to the limit case as the prior on Θ becomes degenerate on some state θ * (limit full implementability), noticing that the implementing information structures are indeed (ε, δ)-elaborations of u(⋅, θ * ) in our sense with ε, δ → 0.
Oyama and Takahashi (2020) earlier obtained a generic characterization of KM-robustness in BAS games, where they appealed to a genericity assumption and assumed strict inequalities in (1) or (2) in Definition 5 to guarantee strict incentives in the desired (ε, 0)-elaborations. In contrast, we exploit the payoff uncertainty allowed in (ε, δ)-elaborations to provide strict incentives.

Concluding remarks
In this paper, we introduced the notion of strict robustness, which tests the robustness of an action distribution against all (ε, δ)-elaborations, where, unlike in (ε, 0)-elaborations used for KM-robustness, the players are only required to believe with high probability that their payoff functions are close in expectation to those of the original complete information game g . We showed that a strict MP maximizer of g is strictly robust if either g or the associated strict monotone potential is supermodular, and that the converse also holds if g is a BAS game, thus establishing the equivalence between strict robustness and strict MP maximization in all BAS games. Oyama and Takahashi (2023) refined Oyama and Takahashi's (2020) generic characterization of KM-robustness and showed that the equivalence between KM-robustness and MP maximization holds in all BAS games for the extreme action profiles 0 and 1 . This equivalence, however, fails for nonextreme action profiles, as the following example shows.

Example 2 Consider the (nongeneric) three-player BAS game in Oyama and Takahashi (2019, Section A.6). As shown in Oyama and Takahashi (2019, Proposition A.2), the degenerate distribution on action profile (1, 1, 0) is KM-robust. On the other hand, one can verify that (1, 1, 0) is not an MP maximizer.
It is known that in generic supermodular games, a KM-robust equilibrium, if any, is a noise-independent global game selection (Basteck et al., 2013; Oury and Tercieux, 2007). Given that our perturbations allow for uncertainty about own payoffs as in global games (modulo the apparent difference between the formulations with discrete and continuous states), one may expect that strict robustness implies noise-independent selection in all supermodular games. Oyama and Takahashi (2023) show that in the class of all BAS games, an action profile is a noise-independent global game selection if and only if it is a strict MP maximizer; combined with the result of the present paper, this implies that noise-independent global game selection is equivalent to strict robustness in these games.
Finally, it is left as an open problem to characterize (strict) robustness in many-action games beyond BAS games. It is known that the equivalence between robustness and noise-independent global game selection fails in generic symmetric two-player three-action supermodular games (Basteck and Daniëls, 2011; Oyama and Takahashi, 2011); it has not been proved or disproved whether the equivalence between (strict) robustness and (strict) MP maximization holds even in this class of games.

A.1. Proof of Theorem 1
For i ∈ I , λ i ∈ Δ(A −i ) , η ≥ 0 , and B i ⊂ A i , let br i η (h, λ i , B i ) denote the set of player i's η-best responses against λ i within B i under payoff function h, i.e., br i η (h, λ i , B i ) = {a i ∈ B i | h(a i , λ i ) ≥ h(a′ i , λ i ) − η for all a′ i ∈ B i }.

The following lemma extends the monotone relationship on best responses in the definition of strict MP maximizer to η-best responses.

Lemma A.1 Action profile a * ∈ A is a strict MP maximizer of g with strict monotone potential v if and only if there exist η > 0 and a function ṽ ∶ A → ℝ with ṽ(a * ) > ṽ(a) for all a ≠ a * such that max br i η (ṽ, λ i , B i + ) ≥ max br i η (g i , λ i , A i ) and min br i η (ṽ, λ i , B i − ) ≤ min br i η (g i , λ i , A i ) for all i ∈ I and λ i ∈ Δ(A −i ) . Moreover, if v is supermodular, then ṽ can be taken to be supermodular.

Proof
The "if" direction is obvious. For the "only if" direction, without loss of generality, we assume A i ⊂ ℝ for any i ∈ I . Let ṽ ∶ A → ℝ be a suitable perturbation of v. Then we have ṽ(a * ) > ṽ(a) for all a ≠ a * , and (A.1) and (A.2) hold for all i ∈ I and λ i ∈ Δ(A −i ) . Moreover, if v is supermodular, then ṽ is also supermodular.
For each i ∈ I and λ i ∈ Δ(A −i ) , since A i is finite, there exist a neighborhood O(λ i ) of λ i and η i (λ i ) > 0 such that the two desired chains of inequalities hold, where in the first chain the first inequality follows from (A.3), the second from (A.1), the third from v being a strict monotone potential for a * , and the fourth from (A.4), and in the second chain the first inequality follows from (A.3), the second from (A.2), the third from v being a strict monotone potential for a * , and the fourth from (A.4). ◻

Now we prove Theorem 1. Let ṽ ∶ A → ℝ and η > 0 be as in Lemma A.1. Let v * = ṽ(a * ) , v̄ = max a≠a * ṽ(a) , and v̲ = min a∈A ṽ(a).
For each i ∈ I , let S i = A i T i denote the set of all pure strategies s i ∶ T i → A i , endowed with the product topology, and write S = ∏ i∈I S i ; for s ∈ S , let V(s) denote the expected value of ṽ under P when s is played. Since S + and S − are nonempty and compact subsets of S, and V is continuous, arg max s∈S + V(s) and arg max s∈S − V(s) are nonempty. Let s + and s − denote arbitrary maximizers of V in S + and in S − , respectively. Note that s + and s − are Bayes-Nash equilibria when players have the common payoff function ṽ and are restricted to play strategies in S + and S − , respectively, i.e.,

for all i ∈ I and t i ∈ T i . Let ŝ + denote the strategy profile given by ŝ + i (t i ) = a * i for i ∈ I and t i ∈ T i g i ,η∕2 and ŝ + i (t i ) = max A i otherwise. Since ŝ + ∈ S + and P(Θ × T g,η∕2 ) ≥ 1 − ε , we have a lower bound on V(ŝ + ) , and hence, by (A.5), a corresponding bound on V(s + ) . In what follows, we will show that for any i ∈ I and σ −i ∈ Σ 0 −i , player i's best responses to σ −i in game (Θ, u, T, P) belong to Σ 0 i . Then, it follows from Kakutani's fixed point theorem that (Θ, u, T, P) has a Bayes-Nash equilibrium in Σ 0 , which induces a * with probability at least 1 − ε.
Take any i ∈ I , t i ∈ T i g i ,η∕2 , and σ −i ∈ Σ 0 −i . First, suppose that g is supermodular. Then the desired bounds follow from (A.8), the supermodularity of g i , Lemma A.1, and (A.6) or (A.7). Second, suppose that v is supermodular. Then the desired bounds follow from (A.8), Lemma A.1, the supermodularity of ṽ , and (A.6) or (A.7). Thus type t i 's best responses belong to Δ([s − i (t i ), s + i (t i )]) , as desired. ◻

A.2. Strict monotone potential and (reverse) sequential obedience in BAS games
Fix a BAS game g . The following lemma characterizes a strict MP maximizer by the existence of an inequality form of weighted potential.

Lemma A.2 Action profile a * is a strict MP maximizer of g if and only if there exist a function v ∶ A → ℝ with v(a * ) > v(a) for all a ≠ a * and weights (w i ) i∈I with w i > 0 for all i ∈ I such that the associated weighted-potential inequalities hold for all i ∈ I and a −i ∈ A −i .
Proof The "if" direction is straightforward. For the "only if" direction, suppose that v is a strict monotone potential for a * in g . Pick any i ∈ I , and assume a * i = 0 without loss of generality. Then there exists no (λ i , ζ i ) ∈ ℝ |A −i |+1 + solving the corresponding system of inequalities. By Farkas' lemma, there exists (w i,1 , w i,2 ) ∈ ℝ 2 + satisfying the dual inequality, and hence w i,1 > 0 . Then w i = w i,1 ∕ w i,2 > 0 satisfies the desired property. ◻

For an action profile a * ∈ A , let S = {i ∈ I | a * i = 1} , let 0 + denote the profile of action 0 among the players in I ⧵ S , and let 1 − denote the profile of action 1 among the players in S. We consider two games g + = (g + i ) i∈I⧵S and g − = (g − i ) i∈S , where the former is the restricted game among the players in I ⧵ S given by g + i (⋅) = g i (⋅, 1 − ) , and the latter is the restricted game among the players in S given by g − i (⋅) = g i (⋅, 0 + ) . Denote by X + ⊂ Δ(A + ) the set of action distributions that satisfy sequential obedience in g + and by X − ⊂ Δ(A − ) the set of action distributions that satisfy reverse sequential obedience in g − , where A + = ∏ i∈I⧵S A i and A − = ∏ i∈S A i . Given Lemma A.2 above, the duality result of Oyama and Takahashi (2020, Lemma 2), together with Lemma A.1 in Oyama and Takahashi (2020), establishes a characterization of strict MP maximizer in terms of (reverse) sequential obedience.
Proposition A.5 Action distribution μ ∈ Δ(A) is limit fully implementable in (Θ, u) at θ * if and only if it satisfies sequential obedience and reverse sequential obedience in u(⋅, θ * ).

Proof
The "only if" direction is immediate from Proposition A.4. For the "if" direction, suppose that μ satisfies sequential obedience and reverse sequential obedience in u(⋅, θ * ) . For any β ∈ (0, 1] , define μ β ∈ Δ(A × Θ) as a suitable perturbation of μ . Then we have ∑ a∈A μ β (a, θ * ) → 1 and ∑ θ∈Θ μ β (⋅, θ) → μ as β → 0 . Also, μ β satisfies strict sequential obedience and strict reverse sequential obedience in (Θ, u) . To see this, let ν, ν′ ∈ Δ(Γ) be such that the corresponding conditions hold for all a ∈ A and θ ∈ Θ , as claimed. Thus, by Proposition A.4, μ β is fully implementable in (Θ, u) , and therefore, μ is limit fully implementable in (Θ, u) at θ * . ◻

In particular, as long as the payoff structure (Θ, u) is supermodular and satisfies the two-sided dominance state assumption, limit full implementability at θ * depends only on u(⋅, θ * ) and is independent of the payoff structure that embeds u(⋅, θ * ) ; and due to Proposition A.3, there exists at least one (in fact, degenerate) limit fully implementable action distribution at θ * .
Thus, if μ is strictly robust in g and (Θ, u) embeds g at θ * ∈ Θ , then no action distribution other than μ is limit fully implementable in (Θ, u) at θ * . Indeed, we have:

Proposition A.6 Let g be a BAS game, and let (Θ, u) be a finite-state binary-action supermodular payoff structure that satisfies the two-sided dominance state assumption and u(⋅, θ * ) = g with θ * ∈ Θ . Then an action distribution μ is strictly robust in g if and only if it is the unique action distribution that is limit fully implementable in (Θ, u) at θ *.

Proof
The "only if" direction follows from Propositions A.3 and A.5 and Lemma A.3. The "if" direction follows from Theorem 1 and Propositions 3 and A.5. ◻ Finally, the "only if" direction of Theorem 2 follows from Propositions 3, A.5, and A.6.
Funding Open access funding provided by The University of Tokyo.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.