Abstract
We consider a group of receivers who share a common prior on a finite state space and who observe private correlated messages that are contingent on the true state of the world. Our focus is on the receivers' posterior beliefs induced by the signal chosen by the sender, and we provide a comprehensive analysis of the inducible distributions of posterior beliefs. Classifying signals as minimal, individually minimal, and language-independent, we show that any inducible distribution can be induced by a language-independent signal. We investigate the role of these classes of signals for the amount of higher-order information that is revealed to receivers. The least informative signals that induce a fixed distribution over posterior belief profiles lie in the relative interior of the set of all language-independent signals inducing that distribution.
1 Introduction
In any economic model in which a group of agents make decisions that depend on their posterior beliefs, one of the essential questions is: what distributions over posterior beliefs of agents can be induced? In their seminal paper, Kamenica and Gentzkow (2011) consider communication between a sender and a receiver who share a common prior and show that the only restriction on the set of inducible distributions over posterior beliefs is Bayes plausibility: the expected posterior belief is equal to the prior. It follows from their insight that Bayes plausibility and identical beliefs are necessary and sufficient in the case of multiple receivers and public communication, that is, when messages are perfectly correlated. Yet, in this case the set of inducible distributions over posteriors is very limited, since all receivers have the same ex post belief. In the present paper we are interested in private communication, which, in contrast, enables the sender to achieve a richer belief space. It is straightforward to verify that Bayes plausibility is not sufficient to ensure inducibility in such setups; this observation raises the first question we tackle in the paper: characterizing the set of inducible distributions over posterior beliefs under private communication.
Another aspect that is important for both the sender and the receivers is the informativeness of a signal. In the original information design setup introduced by Aumann et al. (1995), the authors were interested in communication that reveals as little private information as possible. In our paper, a signal realization not only reveals information about the true state of the world: as there are multiple receivers who each obtain a private message, it also induces information partitions that determine what any receiver knows about another receiver's knowledge of the true state and the signal realization. Thus, we compare the informativeness of signals in terms of “knowledge” in the sense of Hintikka (1962). More precisely, we compare information sets induced by a signal, which are similar to elements of information partitions in Aumann (1976). The second main question we answer is: what types of signals are the least informative? We provide a characterization of the least informative signals that induce a given posterior distribution.
We consider a sender who commits to a signal that sends private correlated messages to the receivers. Receivers know the joint distribution of message profiles, but they only observe their own private message from the message profile realization. We first show that there are posterior belief profiles that the sender cannot achieve with positive probability. More precisely, for a given posterior belief profile, there exists a signal that induces a distribution which puts positive weight on it if and only if there exists a state which is deemed possible by all receivers according to this belief profile. As an example, consider an operative who follows Machiavelli's advice divide et impera and thus wants to create political unrest in a foreign country by implementing a very heterogeneous belief profile. Suppose that there are only two states, say X and Y. Then it is impossible for the operative to implement a distribution that puts positive weight on a posterior belief profile in which one receiver believes the state is X with probability 1 and another receiver believes that the state is Y with probability 1. At the same time, a posterior belief profile in which the first receiver's belief that the state is X is equal to 1, and the second receiver's belief is arbitrarily close to 0, can be achieved with positive probability.
We next define particular classes of signals. We first consider minimal signals, under which distinct message profiles lead to distinct posterior belief profiles. While this ensures that no two message profiles implement the same posterior belief profile, there might still be individual receivers for whom different messages lead to the same posterior. If for each receiver every posterior is induced by a unique message, the signal is called individually minimal. If, additionally, the messages sent are themselves posteriors such that each message induces itself, we call the signal language-independent (LIS). Here, the sender simply tells the receivers what belief they should have, and the messages are sent with probabilities such that the receivers will believe the message. We show that restricting attention to language-independent signals is without loss of generality: if a distribution over posteriors can be induced, it can be induced by an LIS.
As mentioned before, in the presence of multiple receivers Bayes plausibility is necessary but not sufficient for a distribution to be inducible. We characterize the set of inducible distributions of posteriors by showing that a Bayes-plausible distribution is inducible if and only if there exists a non-negative matrix p with dimensions equal to the number of states and the number of posterior belief profiles, respectively, which satisfies a particular system of linear equations. In particular, the set of matrices that satisfy these equations is a convex polytope, which implies that the set of language-independent signals that induce a given distribution over posterior belief profiles is a convex polytope as well.
Once we determine whether a distribution of posterior beliefs is inducible, we explore the informativeness of different signals which induce it. Since messages can be correlated, the message a receiver obtains reveals not only information about the true state of the world, but also about the information that other receivers have. Let us return to our operative who wants to create chaos in a foreign country. If one receiver knew that another receiver knew whether the true state is X or Y, he might decide not to engage in an argument at all. Thus, our operative might want to reveal as little information as possible to any receiver about what other receivers know. As an example suppose that before the operative engages, two receivers believe that either state might be true with probability 1/2. Suppose the operative engages in private communication with both and sends message profiles as follows.
\(\pi\) | (v, x) | (v, y) | (w, w) |
---|---|---|---|
X | \(\frac{1}{2}\) | 0 | \(\frac{1}{2}\) |
Y | 0 | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
The first and second rows of the table give the probabilities with which the message profiles are sent conditional on the state being X and Y, respectively. In this case, receiver 2 knows that the true state is X if he observes x, he knows the true state is Y if he observes y, and he learns nothing if he observes w. Receiver 1 never learns anything about the true state. If she observes v, however, she knows that receiver 2 knows the true state. If the sender replaced v by w, she would not learn anything at all.
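The posterior computations in this example can be reproduced mechanically. The following sketch (Python; the dictionary layout and function names are our own illustration, not the paper's) derives each receiver's marginal message probabilities and Bayesian posteriors from the table above:

```python
from fractions import Fraction

F = Fraction

# Signal from the table: probability of each message profile (s1, s2)
# conditional on the state; the prior is uniform on {X, Y}.
pi = {
    ("v", "x"): {"X": F(1, 2), "Y": F(0)},
    ("v", "y"): {"X": F(0), "Y": F(1, 2)},
    ("w", "w"): {"X": F(1, 2), "Y": F(1, 2)},
}
prior = {"X": F(1, 2), "Y": F(1, 2)}

def marginal(i, message):
    """Probability that receiver i (0-indexed) observes `message`, per state."""
    return {w: sum(p[w] for s, p in pi.items() if s[i] == message)
            for w in prior}

def posterior(i, message):
    """Receiver i's posterior over states after observing `message` (Bayes' rule)."""
    m = marginal(i, message)
    total = sum(prior[w] * m[w] for w in prior)
    return {w: prior[w] * m[w] / total for w in prior}
```

Consistent with the discussion above, receiver 2's posterior on X is 1 after x, 0 after y, and 1/2 after w, while receiver 1's posterior stays at the prior after every message.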
This example illustrates that a receiver's knowledge about the true state and the message profile realization can vary across signals, even if the latter induce identical distributions over posterior belief profiles. In particular, a receiver may have different knowledge about another receiver's knowledge about the true state and the message profile realization. It is then natural to ask which signals inducing the same distribution over posteriors restrict this knowledge the most. In the example above, different messages might lead to the same posterior belief but to different higher-order knowledge. By employing individually minimal or even language-independent signals we can avoid this issue. But even then, not all language-independent signals reveal the same higher-order knowledge. To make this precise, we define information correspondences that describe what receivers know about the state and the posterior belief profile (instead of the message profile realization), where we call a tuple of a state and a posterior belief profile a posterior history. A signal is more informative than another if for every receiver, every state, and every message profile that can occur in this state, the set of posterior histories that the receiver deems possible is smaller under the former than under the latter. We prove that for any inducible distribution over posterior belief profiles, the least informative signals that induce it lie in the relative interior of the set of all language-independent signals that induce it.
The rest of the paper is organized as follows. In Sect. 2, we discuss related literature. In Sect. 3, we provide preliminary definitions and results. We then characterize sets of belief profiles that can be a subset of the support of an inducible distribution over posterior belief profiles in Sect. 4. In Sect. 5, we introduce minimal and individually minimal signals, and in Sect. 6, we turn to language-independent signals. In Sect. 7, we characterize inducible distributions of posteriors and provide several implications. Section 8 introduces information and posterior correspondences, and in Sect. 9, we explore the informativeness of signals.
2 Related literature
Closely related to our analysis of inducible distributions of posteriors is Arieli et al. (2021). They consider multiple receivers who share a common prior belief on a binary state space and study joint posterior belief distributions. They first show that for the case of two receivers a quantitative version of the Agreement Theorem of Aumann (1976) holds: beliefs of receivers are approximately equal when they are approximately common knowledge. For more than two receivers, they relate the feasibility condition to the No Trade Theorem of Milgrom and Stokey (1982) and provide a characterization of feasible joint posteriors. These characterizations are then applied to study independent joint posterior belief distributions. While we pose the same question as Arieli et al. (2021), we obtain a completely different characterization while allowing for an arbitrary finite state space. Morris (2020) studies a problem similar to that of Arieli et al. (2021) and provides an alternative proof for one of their main results while delivering a many-state generalization. Another related paper is Ziegler (2020), who also provides a characterization of feasible joint posteriors. Arieli et al. (2021) show that the necessary and sufficient condition provided by Ziegler (2020) becomes insufficient if the support of a player's posterior beliefs contains more than two points. In another related paper, He et al. (2022) consider the feasibility of distributions under private information structures and further analyze optimal communication under the constraint of not revealing any information about a variable correlated to the true state of the world.
Levy et al. (2021) also study the feasibility of joint distributions of posterior belief profiles and provide a necessary condition for a distribution to be feasible. They also show that the convex combination of a symmetric joint distribution and a fully correlated distribution with the same marginal distribution is inducible when the weight on the fully correlated distribution is sufficiently high. Finally, they demonstrate that a joint distribution satisfying their necessary condition becomes feasible when each belief profile in the support is moved sufficiently far in the direction of the prior.
There is literature in mathematics which studies the extent of difference in opinions of agents. Burdzy and Pal (2019) consider two experts who have access to different information and show that they can give radically different estimates of the probability of an event. In a related study, Burdzy and Pitman (2020) show that the opinions of two agents who share the same initial view can substantially differ if they have different sources of information, whereas Cichomski and Osekowski (2021) provide a bound for this difference in opinions. Related to these studies, we consider a sender who aims to drive a wedge between the beliefs of receivers by sending correlated messages: while the receivers have the same source of information, the message realizations they observe differ.
Like Ziegler (2020), Arieli et al. (2021), and Levy et al. (2021), we provide a characterization of inducible distributions over posterior belief profiles. Mathevet et al. (2020) focus instead on inducible distributions over belief hierarchies. Their characterization requires Bayes plausibility at the level of the sender and formulates two equations to obtain the correct belief hierarchies of the receivers. A central concept in their characterization is the sender's belief about the state given the entire profile of belief hierarchies. Our central tool is instead a matrix with dimensions given by the number of states and the number of posterior belief profiles.
While we focus on inducible distributions of posterior belief profiles, Bergemann and Morris (2016) consider a game-theoretic setup, study the distributions of receivers' actions that the sender can induce, and characterize the set of Bayes correlated equilibria of the game. An advantage of their approach is that there is no need to make explicit use of information structures. They also develop an extension of the classic sufficiency condition of Blackwell (1953) for the multi-player setup and show that more information according to that criterion results in a smaller set of Bayes correlated equilibria. A similar setup is studied by Taneva (2019), who derives the sender's optimal information structure.
In the single-receiver case, introducing heterogeneity may render Bayes plausibility insufficient for a distribution to be inducible. Alonso and Câmara (2016) consider a single receiver who does not share a common prior with the sender and show that an additional condition is required on top of Bayes plausibility. Beauchêne et al. (2019) also consider a single receiver, who is ambiguity averse, and a sender who may use an ambiguous communication device. In that case they are able to show that a modified version of Bayes plausibility holds. When there are multiple receivers, if information is perfectly correlated, then Bayes plausibility is still the only condition for inducibility, since in this case all receivers have the same ex post belief. The first part of Wang (2013) and Alonso and Câmara (2016) both consider public communication and are examples of such a situation.
There is a wide literature that focuses on informativeness in the sense of Blackwell (1953). Rick (2013) considers an informed sender and an uninformed receiver and shows that miscommunication expands the set of distributions of beliefs the sender expects to induce. Gentzkow and Kamenica (2016) consider multiple senders and a single receiver and show that the amount of revealed information increases with the number of senders. Ichihashi (2019) considers a model of a single sender and receiver in which a designer can restrict the most informative message profile that the sender can generate, and he characterizes the information restriction that maximizes the receiver's payoff. More recently, Brooks et al. (2022) and Mathevet and Taneva (2022) consider information hierarchies. While Brooks et al. (2022) show that one can always construct a collection of signals that induce Blackwell-ordered distributions such that the signals are refinement-ordered, Mathevet and Taneva (2022) also consider incentive compatibility of information transmission. While these papers compare the informativeness of different information structures by investigating the induced distributions of posteriors (or belief hierarchies), our analysis of informativeness follows the refinement tradition of Aumann (1976), where a source of information is represented as a partition of some extended state space. As is carefully explained in Green and Stokey (2022), both representations of information structures are equivalent, though the refinement tradition is more convenient when comparing information structures.
3 Preliminaries and notation
Let \(N=\left\{ 1,\ldots ,n\right\}\) be the set of receivers and let \(\Omega\) be a finite set of states of the world. For any set X, we denote by \(\Delta (X)\) the set of probability distributions over X with finite support. We assume that the sender and the receivers share a common prior belief \(\lambda ^0\in \Delta (\Omega )\).
Let \(S_i\) be a non-empty set of messages the sender can send to receiver \(i\in N\), and let \(S=\prod _{i\in N}S_i\). The elements of S are called message profiles. A signal is a function \(\pi :\Omega \rightarrow \Delta \left( S\right)\) that maps each \(\omega \in \Omega\) to a finite probability distribution over S. The set of possible message profile realizations is denoted by \(S^{\pi }=\left\{ s\in S|\,\exists \omega \in \Omega :\pi (s|\omega )>0\right\} ,\) where \(\pi (s\vert \omega )\) denotes the probability of message profile s conditional on the state being \(\omega\). Note that \(S^\pi\) is finite as \(\pi \left( \cdot \vert \omega \right)\) has finite support for all \(\omega \in \Omega\). Moreover, receiver \(i \in N\) knows the joint distributions \(\pi \left( \cdot \vert \omega \right)\) for all \(\omega \in \Omega ,\) but only observes his private message \(s_i\) when message profile s realizes. The set of all signals with message profiles in S is denoted by \(\Pi (S).\) For each \(\pi \in \Pi (S)\), \(s_i\in S_i\), and \(\omega \in \Omega\), let
\(\pi _{i}(s_i\vert \omega )=\sum _{s'\in S:\,s'_i=s_i}\pi (s'\vert \omega ),\)
which is the probability that receiver \(i \in N\) observes \(s_i\) given that the true state is \(\omega\). (As \(\pi (\cdot |\omega )\) has finite support, this is well defined.) For each \(i \in N,\) define \(S^{\pi }_i=\left\{ s_i\in S_i|\,\exists \omega \in \Omega :\pi _{i}(s_i|\omega )>0\right\}\), which is the (finite) set of messages receiver i observes with positive probability under \(\pi\).
Given a signal \(\pi \in \Pi (S)\), a message profile \(s\in S^{\pi }\) generates the posterior belief profile \(\lambda ^{\pi ,s}\in \Delta (\Omega )^n\) defined, for all \(i\in N\) and \(\omega \in \Omega\), by
\(\lambda ^{\pi ,s}_i(\omega )=\frac{\pi _i(s_i\vert \omega )\lambda ^0(\omega )}{\sum _{\omega '\in \Omega }\pi _i(s_i\vert \omega ')\lambda ^0(\omega ')}.\)
So, \(\lambda ^{\pi ,s}_i(\omega )\) is i’s posterior belief that the true state is \(\omega\) upon receiving message \(s_i\).
Recall that since \(\Omega\) is finite and the sender employs finitely many messages conditional on the realization of \(\omega \in \Omega\), the support of a signal is finite. A signal \(\pi \in \Pi (S)\) induces the distribution \(\sigma ^\pi\) over posterior belief profiles if for all \(\lambda \in \Delta (\Omega )^n\) it holds that
\(\sigma ^\pi (\lambda )=\sum _{\omega \in \Omega }\lambda ^0(\omega )\sum _{s\in S^{\pi }:\,\lambda ^{\pi ,s}=\lambda }\pi (s\vert \omega ).\)
In words, \(\sigma ^\pi (\lambda )\) is the ex ante probability of posterior belief profile \(\lambda\) given \(\pi \in \Pi (S)\). By our assumptions made so far, the support of any inducible \(\sigma\) is a finite set, i.e., \(\sigma \in \Delta (\Delta (\Omega )^n)\). Given a set of message profiles S, we define the set of inducible distributions over posterior belief profiles by
\(\Sigma (S)=\left\{ \sigma \in \Delta \left( \Delta \left( \Omega \right) ^n\right) \,\big \vert \,\exists \pi \in \Pi (S):\sigma ^{\pi }=\sigma \right\} .\)
Observe that \(\Sigma (S)\) depends on the set S of message profiles that the sender can use: a distribution \(\sigma\) might only be inducible if S is sufficiently rich. This becomes relevant in situations where the sender's message profile space is a priori limited, for example in the case of schools that are bound to reveal information about students' qualities within a grading framework (Boleslavsky & Cotton, 2015), or in the case of a regulator who can reveal information about a bank's financial situation only via a simple pass/fail stress test (Inostroza & Pavan, 2023). Thus, we will provide necessary and sufficient conditions on the size of S whenever appropriate.
We denote the support of \(\sigma \in \Delta \left( \Delta \left( \Omega \right) ^n\right)\) by \(\text {supp}(\sigma ).\) For each \(i \in N\) and \(\lambda _i \in \Delta (\Omega ),\) define
\(\sigma _i\left( \lambda _i\right) =\sum _{\lambda '\in \text {supp}(\sigma ):\,\lambda '_i=\lambda _i}\sigma \left( \lambda '\right) .\)
That is, \(\sigma _i\left( \lambda _i\right)\) is the ex ante probability that receiver i will have posterior belief \(\lambda _i\).
Let \(\sigma , \sigma ^{\prime } \in \Delta (\Delta (\Omega )^{n})\) be two distributions over posterior belief profiles and let \(\alpha \in [0,1].\) The convex combination \(\hat{\sigma } = \alpha \sigma + (1-\alpha ) \sigma ^{\prime }\) is defined by
\(\hat{\sigma }(\lambda )=\alpha \sigma (\lambda )+(1-\alpha )\sigma ^{\prime }(\lambda )\quad \text {for all }\lambda \in \Delta \left( \Omega \right) ^n.\)
Even in the case with a single receiver, \(\Sigma (S)\) need not be convex. For instance, if S consists of two messages, then it is possible to induce \(\sigma , \sigma ^{\prime } \in \Sigma (S)\) with disjoint supports of cardinality 2. If \(\hat{\sigma }\) is a strict convex combination of \(\sigma\) and \(\sigma ^{\prime }\), then \(\left| \text {supp}\left( \hat{\sigma }\right) \right| =4\), so that \(\hat{\sigma }\) cannot be induced with two messages only. The next result shows that \(\Sigma (S)\) is convex when the message profile space is sufficiently rich.
Proposition 3.1
Fix a set of message profiles S and let \(\sigma ,\sigma '\in \Sigma (S)\) and \(\alpha \in \left( 0,1\right)\). Then \(\alpha \sigma + (1-\alpha ) \sigma ^{\prime } \in \Sigma (S)\) if and only if \(\left| S_i\right| \ge \left| \text {supp}(\sigma _i)\cup \text {supp}(\sigma '_i)\right|\) for all \(i\in N\).
Proof
Let \(\hat{\sigma } = \alpha \sigma + (1-\alpha ) \sigma ^{\prime }\).
If there is \(i\in N\) such that \(\left| S_i\right| < \left| \text {supp}(\sigma _i)\cup \text {supp}(\sigma '_i)\right|\), then, since \(\text {supp}(\hat{\sigma }_i)=\text {supp}(\sigma _i)\cup \text {supp}(\sigma '_i)\) for \(\alpha \in (0,1)\), receiver i has fewer messages than posterior beliefs in \(\text {supp}\left( \hat{\sigma }_i\right)\), so not all of these beliefs can be implemented.
For the other direction, let \(\left| S_i\right| \ge \left| \text {supp}(\sigma _i)\cup \text {supp}(\sigma '_i)\right|\) for all \(i\in N\). Let \(\pi , \pi ^{\prime } \in \Pi (S)\) be such that \(\sigma ^{\pi } = \sigma\) and \(\sigma ^{\pi '} = \sigma '\). Since \(\left| S_i\right| \ge \left| \text {supp}(\sigma _i)\cup \text {supp}(\sigma '_i)\right|\), we can relabel messages and assume without loss of generality that a message is used by both signals, \(s_i\in S_i^{\pi }\cap S_i^{\pi '}\), if and only if it induces the same posterior for receiver i under both, i.e., there are \(\lambda ^{\pi ,s}\in \text {supp}(\sigma )\) and \(\lambda ^{\pi ',s}\in \text {supp}\left( \sigma '\right)\) with \(\lambda _i^{\pi ,s}=\lambda _i^{\pi ',s}\).
Let \(\hat{\pi } = \alpha \pi + (1-\alpha ) \pi ^{\prime }\). Let \(s \in S^{\hat{\pi }}\) and \(i \in N\). Without loss of generality let \(s_i \in S^{\pi }_{i}\). Assume first that \(s_i\notin S_i^{\pi '}\). It holds that, for every \(\omega \in \Omega\),
\(\hat{\pi }_i(s_i\vert \omega )=\alpha \pi _i(s_i\vert \omega )+(1-\alpha )\pi '_i(s_i\vert \omega )=\alpha \pi _i(s_i\vert \omega ),\)
so that \(\lambda _i^{\hat{\pi },s}=\lambda _i^{\pi ,s}\).
Assume next that \(s_i\in S_i^{\pi '}\) and observe that in this case, by our labeling convention, \(\lambda _i^{\pi ,s}=\lambda _i^{\pi ',s}\), i.e.,
\(\frac{\pi _i(s_i\vert \omega )\lambda ^0(\omega )}{\sum _{\omega '\in \Omega }\pi _i(s_i\vert \omega ')\lambda ^0(\omega ')}=\frac{\pi '_i(s_i\vert \omega )\lambda ^0(\omega )}{\sum _{\omega '\in \Omega }\pi '_i(s_i\vert \omega ')\lambda ^0(\omega ')}\)
for all \(\omega \in \Omega\). Thus, \(\lambda _i^{\hat{\pi },s}=\lambda _i^{\pi ,s}=\lambda _i^{\pi ',s}\).
We have shown that \(\text {supp}\left( \hat{\sigma }\right) =\text {supp}(\sigma )\cup \text {supp}\left( \sigma '\right)\). We now have, for every \(\lambda \in \Delta \left( \Omega \right) ^n,\)
\(\sigma ^{\hat{\pi }}(\lambda )=\sum _{\omega \in \Omega }\lambda ^0(\omega )\sum _{s\in S^{\hat{\pi }}:\,\lambda ^{\hat{\pi },s}=\lambda }\hat{\pi }(s\vert \omega )=\alpha \sigma ^{\pi }(\lambda )+(1-\alpha )\sigma ^{\pi '}(\lambda )=\hat{\sigma }(\lambda ).\)
Hence, \(\hat{\pi }\) induces \(\hat{\sigma }\). \(\square\)
Most of the literature takes \(S_i\) to be an arbitrary set that contains all necessary messages. The previous proposition implies that in this case the set of inducible distributions over posteriors is convex.
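The mixing construction in the proof of Proposition 3.1 can be checked numerically. Below is a minimal single-receiver sketch (Python; the particular signals and all names are our own illustration, not the paper's) in which one signal fully reveals a binary state, the other is uninformative, and the message labels are disjoint so that the relabeling condition of the proof holds trivially; mixing the signals message-wise induces exactly the corresponding mixture of posterior distributions:

```python
from fractions import Fraction

F = Fraction
prior = {"X": F(1, 2), "Y": F(1, 2)}

# pi1 fully reveals the state; pi2 is uninformative. Their message
# labels are disjoint, so the proof's relabeling condition holds.
pi1 = {"X": {"a": F(1)}, "Y": {"b": F(1)}}
pi2 = {"X": {"c": F(1)}, "Y": {"c": F(1)}}

def mix(p, q, alpha):
    """Message-wise mixture alpha*p + (1-alpha)*q, state by state."""
    out = {}
    for w in prior:
        d = {}
        for sig, weight in ((p, alpha), (q, 1 - alpha)):
            for s, pr in sig[w].items():
                d[s] = d.get(s, F(0)) + weight * pr
        out[w] = d
    return out

def induced(p):
    """Induced ex ante distribution over the posterior on X."""
    sigma = {}
    for s in {s for w in prior for s in p[w]}:
        total = sum(prior[w] * p[w].get(s, F(0)) for w in prior)
        post = prior["X"] * p["X"].get(s, F(0)) / total
        sigma[post] = sigma.get(post, F(0)) + total
    return sigma

hat = induced(mix(pi1, pi2, F(1, 3)))
```

With weight \(\alpha =1/3\) this yields posteriors 1 and 0 with probability 1/6 each and the prior 1/2 with probability 2/3, i.e., the mixture \(\tfrac{1}{3}\sigma ^{\pi _1}+\tfrac{2}{3}\sigma ^{\pi _2}\).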
A distribution over posterior belief profiles \(\sigma \in \Delta \left( \Delta \left( \Omega \right) ^n\right)\) is Bayes-plausible if
\(\sum _{\lambda \in \text {supp}(\sigma )}\sigma (\lambda )\lambda _i=\lambda ^0\quad \text {for all }i\in N.\)
That is, for each receiver the expected posterior belief equals his prior belief. It is a well-known fact since Aumann et al. (1995) that Bayes plausibility (i.e., the martingale condition) is necessary and sufficient for inducibility when there is a single receiver, given that S is sufficiently rich. Since each receiver's marginal distribution over posteriors is induced by his private messages alone, it follows for the multiple-receiver case that every \(\sigma \in \Sigma (S)\) satisfies Bayes plausibility. We therefore obtain the following result, which is stated for later reference and without proof.
Proposition 3.2
Let S be a set of message profiles. Every \(\sigma \in \Sigma (S)\) is Bayes-plausible.
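Bayes plausibility is easy to verify mechanically. The sketch below (Python; the representation and function name are our own, not from the paper) checks, for a binary state space, that each receiver's expected posterior on X equals the prior, using the distribution induced by the signal from the introduction:

```python
from fractions import Fraction

F = Fraction

def bayes_plausible(sigma, prior_X):
    """Each receiver's expected posterior on X must equal the prior.
    `sigma` maps a tuple of per-receiver posteriors on X to its
    ex ante probability."""
    n = len(next(iter(sigma)))
    return all(
        sum(p * profile[i] for profile, p in sigma.items()) == prior_X
        for i in range(n)
    )

# Distribution induced by the introduction's example signal:
# receiver 1 always keeps belief 1/2; receiver 2's belief is 1, 0, or 1/2.
sigma = {
    (F(1, 2), F(1)): F(1, 4),
    (F(1, 2), F(0)): F(1, 4),
    (F(1, 2), F(1, 2)): F(1, 2),
}
```

Here `bayes_plausible(sigma, F(1, 2))` holds, whereas a distribution putting all mass on the profile (1, 1) fails the check under the uniform prior.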
4 Implementing belief profiles
When a sender is interacting with a single receiver who has no private information, Bayes plausibility of a distribution \(\sigma \in \Delta (\Delta (\Omega ))\) is necessary and sufficient for \(\sigma\) to belong to \(\Sigma (S)\) when S is sufficiently rich. In particular, for any \(\lambda \in \Delta (\Omega )\) there is \(\sigma \in \Sigma (S)\) such that \(\sigma (\lambda )>0\). In contrast, in the multiple receiver case it is not true that any single posterior belief profile \(\lambda \in \Delta (\Omega )^n\) can occur with positive probability for a suitably chosen signal. Our first proposition shows that \(\lambda \in \Delta (\Omega )^n\) can belong to the support of some \(\sigma \in \Sigma (S)\) if and only if there is at least one state which, according to \(\lambda\), is deemed possible by all receivers.
Proposition 4.1
For every \(i \in N,\) let \(S_{i}\) contain at least two messages. Let \(\lambda \in \Delta (\Omega )^n\). There exists \(\sigma \in \Sigma (S)\) with \(\sigma (\lambda )>0\) if and only if there is \(\omega \in \Omega\) such that \(\prod _{i\in N}\lambda _i(\omega )>0\).
Proof
Assume \(\pi \in \Pi (S)\) is such that \(\sigma ^{\pi } = \sigma\) with \(\sigma (\lambda ) > 0.\) Suppose that \(\prod _{i\in N}\lambda _i(\omega )=0\) for all \(\omega \in \Omega\), that is, for all \(\omega \in \Omega\) there exists \(i_{\omega } \in N\) such that \(\lambda _{i_{\omega }}(\omega ) = 0.\) Let \(s \in S^{\pi }\) be such that \(\lambda ^{\pi ,s} = \lambda .\) Then it holds that, for all \(\omega \in \Omega\), \(\pi (s\vert \omega )\le \pi _{i_{\omega }}(s_{i_{\omega }} \vert \omega ) = 0.\) We find from the definition of \(\sigma ^{\pi }\) that \(\sigma (\lambda )= 0,\) leading to a contradiction. Consequently, there exists \(\omega \in \Omega\) such that \(\prod _{i\in N}\lambda _i(\omega )>0\).
For the converse, assume there exists \(\omega \in \Omega\) such that \(\prod _{i\in N} \lambda _i(\omega )>0.\) For every \(i\in N\) let \(\beta _i=\max _{\omega \in \Omega }(\lambda _i(\omega )/\lambda ^0(\omega ))\) be the highest ratio across states of posterior belief to prior belief for receiver i. Let \(x_{i},y_{i} \in S_{i}\) be two distinct messages. We define, for every \(\omega \in \Omega ,\)
\(\rho _{i}(x_{i} \vert \omega )=\frac{\lambda _i(\omega )}{\beta _i\lambda ^0(\omega )}\quad \text {and}\quad \rho _{i}(y_{i} \vert \omega )=1-\rho _{i}(x_{i} \vert \omega ).\)
Notice that \(\rho _{i}(x_{i} \vert \omega ) \le 1.\) We define \(\pi : \Omega \rightarrow \Delta \left( S\right)\) by
\(\pi (s\vert \omega )=\prod _{i\in N}\rho _i(s_i\vert \omega )\) if \(s_i\in \left\{ x_i,y_i\right\}\) for all \(i\in N\), and \(\pi (s\vert \omega )=0\) otherwise.
It holds that \(\pi\) is a signal with \(\pi _{i}(s_{i} \vert \omega ) = \rho _{i}(s_{i} \vert \omega )\) for every receiver \(i \in N.\)
Let \(i \in N.\) For every \(s \in S^{\pi }\) with \(s_i=x_{i},\) for every \(\omega \in \Omega ,\) it holds that
\(\lambda _i^{\pi ,s}(\omega )=\frac{\rho _i(x_i\vert \omega )\lambda ^0(\omega )}{\sum _{\omega '\in \Omega }\rho _i(x_i\vert \omega ')\lambda ^0(\omega ')}=\frac{\lambda _i(\omega )/\beta _i}{\sum _{\omega '\in \Omega }\lambda _i(\omega ')/\beta _i}=\lambda _i(\omega ).\)
Thus, \(\lambda ^{\pi ,x} = \lambda ,\) where \(x = (x_{1}, \ldots , x_{n})\).
Let \(\omega \in \Omega\) be such that \(\lambda _i(\omega ) > 0\) for all \(i\in N\). Then
\(\pi (x\vert \omega )=\prod _{i\in N}\rho _i(x_i\vert \omega )=\prod _{i\in N}\frac{\lambda _i(\omega )}{\beta _i\lambda ^0(\omega )}>0,\)
which implies that \(\lambda \in \text {supp}(\sigma ^{\pi })\). \(\square\)
Note that we require \(S_i\) to have at least two elements only for the “if” part of the proof. If \(S_i\) consists of only one message, no information is provided and the posterior belief coincides with the prior.
Let there be two receivers and a binary state space, say \(\Omega =\left\{ X,Y\right\} ,\) as in our example in the introduction. It follows from Proposition 4.1 that a posterior belief profile \(\lambda\) with \(\lambda (X)=(0,1)\) cannot result with positive probability under any signal since \(\lambda _1(X)\lambda _2(X)=0\) and \(\lambda _1(Y)\lambda _2(Y)=0\). At the same time, for each \(\varepsilon >0,\) the posterior belief profile \(\lambda\) with \(\lambda \left( X\right) =\left( \varepsilon ,1\right)\) can be obtained with positive probability.
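The construction from the proof of Proposition 4.1 can be run on this very example. The sketch below (Python; the concrete value \(\varepsilon =1/4\) and all names are our own illustration) builds the per-receiver message probabilities \(\rho _i(x_i\vert \omega )=\lambda _i(\omega )/(\beta _i\lambda ^0(\omega ))\) for the target profile with \(\lambda _1(X)=1/4\) and \(\lambda _2(X)=1\), and checks that the profile \(x=(x_1,x_2)\) both induces the target beliefs and occurs with positive probability:

```python
from fractions import Fraction

F = Fraction
prior = {"X": F(1, 2), "Y": F(1, 2)}

# Target profile: receiver 1 believes X with probability 1/4, receiver 2
# with probability 1. State X is deemed possible by both receivers, so
# Proposition 4.1 says the profile can occur with positive probability.
targets = [{"X": F(1, 4), "Y": F(3, 4)}, {"X": F(1), "Y": F(0)}]

def rho_x(lam):
    """Probability of the 'target' message x_i per state, as in the proof:
    rho_i(x_i|w) = lambda_i(w) / (beta_i * prior(w))."""
    beta = max(lam[w] / prior[w] for w in prior)
    return {w: lam[w] / (beta * prior[w]) for w in prior}

def posterior_after_x(lam):
    """Posterior of a receiver with target belief `lam` upon observing x_i."""
    r = rho_x(lam)
    total = sum(prior[w] * r[w] for w in prior)
    return {w: prior[w] * r[w] / total for w in prior}

# Messages are drawn independently across receivers, so the profile
# x = (x_1, x_2) occurs in state w with probability prod_i rho_i(x_i|w).
prob_x = sum(prior[w] * rho_x(targets[0])[w] * rho_x(targets[1])[w]
             for w in prior)
```

Each receiver's posterior after x equals the target belief, and the profile x has strictly positive ex ante probability, as the proposition asserts.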
Proposition 4.1 leads to some interesting comparisons with the Bayesian learning literature. Allon et al. (2021) consider a model in which agents obtain information about the true state of the world in a dynamic setting and study the evolution of beliefs. The authors show that even when the society’s prior belief distribution is not polarized, the prolonged learning process leads to a wedge between the agents’ beliefs; we show that the sender can create almost arbitrarily large disagreement between receivers by one-shot communication with positive probability, even when they have a common prior. In a related model, Mostagir and Siderius (2022) show that a planner who wishes to manipulate the beliefs of Bayesian agents in a particular way should target the agents who are the most polarized, i.e., whose beliefs are farthest away from the truth. Thus, if agents are initially in agreement, such a planner might benefit from first creating disagreement amongst the agents as described by Proposition 4.1 and then attempt to implement her preferred belief distribution.
We now generalize Proposition 4.1 from a single posterior belief profile to finite sets of posterior belief profiles.
Proposition 4.2
Let \(R\subseteq \Delta \left( \Omega \right) ^n\) be finite. For every \(i \in N,\) let \(S_{i}\) contain at least \(\left| R_i\right| + 1\) messages, where \(R_{i} = \text {proj}_i(R).\) There exists \(\sigma \in \Sigma (S)\) with \(R \subseteq \text {supp}(\sigma )\) if and only if for each \(\lambda \in R\) there exists \(\omega \in \Omega\) such that \(\prod _{i\in N}\lambda _i(\omega )>0\).
Proof
Proposition 4.1 implies necessity. For the other direction, for every \(i\in N\) let \(R_i=\left\{ \lambda _i^1,\ldots ,\lambda _i^{m_i}\right\}\) and let \(\left\{ x_i^1,\ldots , x_i^{m_i}, y_i\right\} \subseteq S_i\) consist of \(m_i+1\) distinct messages. Let \(R=\left\{ \lambda ^1,\ldots ,\lambda ^m\right\}\) and define \(\pi ^1,\ldots ,\pi ^m\) as in the proof of Proposition 4.1, where, for all \(i\in N\) and all \(k = 1,\ldots ,m\), one has \(\lambda ^{k} \in \text {supp}(\sigma ^{\pi ^{k}})\) and \(S_i^{\pi ^k}\subseteq \left\{ x_i^k,y_i\right\}\). Let \(\alpha ^1,\ldots ,\alpha ^m>0\) with \(\sum _{k=1}^m\alpha ^k=1\), and let \(\sigma =\sum _{k=1}^m\alpha ^k\sigma ^{\pi ^{k}}\). Since \(\left| S_i\right| \ge m_i+1 = |\bigcup _{k=1}^m \text {supp}(\sigma _i^{\pi ^{k}})|\), iterative application of Proposition 3.1 implies that \(\sigma \in \Sigma (S)\). Moreover, by construction, \(\sigma \left( \lambda ^k\right) \ge \alpha ^k\sigma ^{\pi ^{k}}\left( \lambda ^{k}\right) >0\). \(\square\)
Observe that Proposition 4.2 sharpens an earlier result in Sobel (2014), where it is shown that collections of strictly positive posterior belief profiles can be implemented. Our proposition characterizes the set of posterior belief profiles that can be implemented: in particular, we allow belief profiles that assign zero probability to some states, as long as the extreme disagreement of Proposition 4.1 does not occur, i.e., as long as for each posterior belief profile there exists at least one state that is deemed possible by all receivers.
At this point we have identified sets that can be subsets of the support of an inducible distribution over posterior belief profiles. In Sect. 7 we characterize all inducible distributions over posterior belief profiles and the sets that can be the support of such distributions.
5 Minimal and individually minimal signals
A large part of the literature is interested in “straightforward” signals (Kamenica & Gentzkow, 2011) that send recommendations to receivers about what action to take. In the present paper, we do not specify sets of feasible actions for receivers, so that sending recommendations has no meaning. Nevertheless, some signals are easier to handle than others and this and the next section will introduce some important classes.
Given a signal \(\pi \in \Pi (S)\) and \(s,s'\in S^{\pi }\) with \(s\ne s'\), it is possible that \(\lambda ^{\pi ,s}=\lambda ^{\pi ,s'}\). That is, two distinct message profiles can generate the same posterior belief profile. This motivates the following definition.
Definition 5.1
Let S be a set of message profiles. A signal \(\pi \in \Pi (S)\) is minimal if \(\vert S^{\pi }\vert =\vert \text {supp}(\sigma ^{\pi })\vert\). The set of minimal signals is denoted by \(\Pi ^{\mathrm m}(S)\).
Under a minimal signal, different message profiles lead to different posterior belief profiles. We give an illustration of a minimal signal in the following example.
Example 5.2
Let \(N=\left\{ 1,2\right\}\), \(\Omega =\left\{ X,Y\right\}\), \(S_1=\left\{ v,w\right\}\), and \(S_2=\left\{ w,x,y\right\}\). Assume that agents have a common prior \(\lambda ^0(X)=1/2\). Recall the \(\pi\) in our illustrative example:
\(\pi\) | (v, x) | (v, y) | (w, w) |
---|---|---|---|
X | \(\frac{1}{2}\) | 0 | \(\frac{1}{2}\) |
Y | 0 | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
We have \(S^{\pi }=\left\{ (v,x), (v,y), (w,w)\right\}\). Irrespective of the message received, receiver 1 gathers no information about the state: he has posterior beliefs \(\lambda ^{\pi ,(v,x)}_1(X) = \lambda ^{\pi ,(v,y)}_1(X) = \lambda ^{\pi ,(w,w)}_1(X) = 1/2.\) For receiver 2, we have \(\lambda ^{\pi ,(v,x)}_2(X)=1\), \(\lambda ^{\pi ,(v,y)}_2(X)=0\), and \(\lambda ^{\pi ,(w,w)}_2(X)=1/2\). It follows that the support of \(\sigma ^{\pi }\) consists of the three distinct posterior belief profiles \(\lambda ^{\pi ,(v,x)}\), \(\lambda ^{\pi ,(v,y)}\), and \(\lambda ^{\pi ,(w,w)}\).
Since \(\vert S^{\pi }\vert =\vert \text {supp}(\sigma ^{\pi })\vert\), \(\pi\) is minimal. \(\triangle\)
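The updating in this example is easy to reproduce mechanically. The following sketch (an illustration, not part of the formal apparatus; all names are ours) recomputes each receiver's posterior from \(\pi\) by Bayesian updating on his own message, using exact rational arithmetic:

```python
from fractions import Fraction as F

prior = {"X": F(1, 2), "Y": F(1, 2)}
# pi[state][message profile], the signal of Example 5.2
pi = {"X": {("v", "x"): F(1, 2), ("v", "y"): F(0), ("w", "w"): F(1, 2)},
      "Y": {("v", "x"): F(0), ("v", "y"): F(1, 2), ("w", "w"): F(1, 2)}}
profiles = [s for s in pi["X"] if any(pi[w][s] > 0 for w in pi)]

def posterior(i, s):
    # receiver i updates on his own message s[i] only
    num = {w: prior[w] * sum(q for t, q in pi[w].items() if t[i] == s[i])
           for w in pi}
    tot = sum(num.values())
    return tuple(sorted((w, q / tot) for w, q in num.items()))

belief_profiles = {s: (posterior(0, s), posterior(1, s)) for s in profiles}
# minimality: the three message profiles induce three distinct belief profiles
assert len(set(belief_profiles.values())) == len(profiles) == 3
# yet receiver 1's messages v and w both induce the belief (1/2, 1/2)
assert posterior(0, ("v", "x")) == posterior(0, ("w", "w"))
```

The last assertion is exactly the observation that motivates the notion of individual minimality introduced next.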
In case of a single receiver, it is sufficient to have a bijection between \(S^{\pi }\) and \(\text {supp}(\sigma ^\pi )\) to ensure that each message leads to a different posterior, that is, to ensure that the signal employs a minimal number of messages. If there are multiple receivers, however, the existence of such a bijection does not guarantee that the number of messages for each receiver is indeed minimal. For instance, the two messages v, w in Example 5.2 both lead to the posterior belief \(\lambda _{1}(X) = 1/2\) for receiver 1.
Definition 5.3
Let S be a set of message profiles. A signal \(\pi \in \Pi (S)\) is i-minimal if it holds that \(\vert S^{\pi }_i\vert =\vert \text {supp}(\sigma ^{\pi }_i)\vert\). If \(\pi\) is i-minimal for all \(i\in N\), then we call \(\pi\) individually minimal. The set of all individually minimal signals is denoted by \(\Pi ^{\mathrm i}(S)\).
Under an individually minimal signal, any two different messages of a given receiver lead to different posterior beliefs for that receiver. Hence, the number of different posterior beliefs receiver i can have equals the cardinality of \(S^{\pi }_i\).
Example 5.4
Recall the minimal signal \(\pi\) in Example 5.2. Receiver 1 has the same posterior belief after observing v and observing w, i.e., \(\lambda ^{\pi ,(v,x)}_1(X)=\lambda ^{\pi ,(w,w)}_1(X)\). Thus, \(\pi\) is not individually minimal. Consider the signal \(\pi ^{\prime }\) defined by:
\(\pi ^{\prime }\) | (w, x) | (w, y) | (w, w) |
---|---|---|---|
X | \(\frac{1}{2}\) | 0 | \(\frac{1}{2}\) |
Y | 0 | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
We have \(S^{\pi ^{\prime }}=\left\{ (w,x), (w,y), (w,w)\right\}\), and the induced posterior belief profiles are exactly those computed in Example 5.2.
Note that \(\text {supp}(\sigma ^{\pi })= \text {supp}(\sigma ^{\pi ^{\prime }})\). Since for all \(s,t\in S^{\pi ^{\prime }}\) and each \(i\in N\) we have \(\lambda ^{\pi ',s}_i=\lambda ^{\pi ',t}_i\) if and only if \(s_i=t_i\), \(\pi ^{\prime }\) is individually minimal. \(\triangle\)
For any signal \(\pi \in \Pi (S)\), \(\vert S^{\pi }_i\vert =\vert \text {supp}(\sigma ^{\pi }_i)\vert\) for all \(i\in N\) guarantees that a minimal number of messages is employed and implies that the number of employed message profiles is minimal as well. Thus, the following lemma does not come as a surprise.
Lemma 5.5
Let S be a set of message profiles. It holds that \(\Pi ^{\rm{i}}(S)\subseteq \Pi ^{\rm{m}}(S)\).
Proof
Let \(\pi \in \Pi ^{\mathrm i}(S)\). For each \(i\in N\) there exists a bijection \(\phi _i:S_i^{\pi }\rightarrow \text {supp}\left( \sigma ^{\pi }_i\right)\) since \(\pi\) is individually minimal. In particular, for every \(s \in S^{\pi },\) we have \(\lambda ^{\pi ,s}=\left( \phi _i\left( s_{i}\right) \right) _{i\in N}\) so that there is a bijection between \(S^{\pi }\) and \(\text {supp}\left( \sigma ^{\pi }\right)\). Hence, \(\vert S^{\pi }\vert =\vert \text {supp}(\sigma ^{\pi })\vert\), that is, \(\pi\) is minimal. \(\square\)
We close this section by stating that any distribution in \(\Sigma (S)\) can be induced by an individually minimal signal. We do not prove Theorem 5.6 here, as it follows easily from later results; the proof can be found after Corollary 7.3.
Theorem 5.6
Let S be a set of message profiles. If \(\sigma \in \Sigma (S),\) then there exists \(\pi \in \Pi ^{\rm{i}}(S)\) such that \(\sigma ^{\pi } = \sigma .\)
6 Language-independent signals
The same distribution over posterior belief profiles can be induced by various signals with potentially disjoint message profile spaces. We now proceed to show that there is a canonical way to describe signals. The principal idea is that the sender sends to each receiver the belief that he should have after observing the message.
Definition 6.1
Let S be a set of message profiles. A signal \(\pi \in \Pi (S)\) is a language-independent signal (LIS) if \(S^{\pi } \subseteq \Delta (\Omega )^n\) and, for all \(s \in S^{\pi },\) \(\lambda ^{\pi ,s} = s.\) The set of language-independent signals is denoted by \(\Pi ^{\ell }(S)\).
Example 6.2
Let \(N = \left\{ 1,2\right\}\), \(\Omega =\left\{ X,Y\right\}\), and \(\lambda ^0(X)=1/3\). The signal \(\pi \in \Pi (S)\) is defined as follows:
\(\pi\) | (x, x) | (x, y) | (y, x) | (y, y) |
---|---|---|---|---|
X | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) |
Y | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{5}{8}\) |
For any \(i\in N\), we have \(\lambda ^{\pi ,(x,x)}_i(X)=1/2\) and \(\lambda ^{\pi ,(y,y)}_i(X)=1/4\). Hence, \(\pi\) is in fact individually minimal. The support of \(\sigma ^{\pi }\) consists of the four distinct posterior belief profiles \(\lambda ^{\pi ,(x,x)}\), \(\lambda ^{\pi ,(x,y)}\), \(\lambda ^{\pi ,(y,x)}\), and \(\lambda ^{\pi ,(y,y)}\).
It holds that \(\sigma ^{\pi }\left( \lambda ^{\pi ,(x,x)}\right) =\sigma ^{\pi }\left( \lambda ^{\pi ,(x,y)}\right) =\sigma ^{\pi }\left( \lambda ^{\pi ,(y,x)}\right) =1/6\) and \(\sigma ^{\pi }\left( \lambda ^{\pi ,(y,y)}\right) =1/2\).
The signal \(\pi ^{\prime } \in \Pi (S)\) is obtained by switching messages x and y, so
\(\pi ^{\prime }\) | (x, x) | (x, y) | (y, x) | (y, y) |
---|---|---|---|---|
X | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) |
Y | \(\frac{5}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) |
It is immediate that \(\sigma ^{\pi } = \sigma ^{\pi ^{\prime }}.\)
Next, consider the signal \(\hat{\pi }\) that corresponds to the convex combination of \(\pi\) and \(\pi ^{\prime }\) with equal weights: \(\hat{\pi } = 1/2 \pi + 1/2 \pi ^{\prime }.\) We have that
\(\hat{\pi }\) | (x, x) | (x, y) | (y, x) | (y, y) |
---|---|---|---|---|
X | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) |
Y | \(\frac{3}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{3}{8}\) |
Perhaps surprisingly, it holds that \(\sigma ^{\hat{\pi }} \ne \sigma ^{\pi } = \sigma ^{\pi ^{\prime }}\).Footnote 9 In fact, as can be verified easily, \(\sigma ^{\hat{\pi }}\) is the distribution that assigns probability 1 to the posterior belief profile \(\left( \lambda ^0,\lambda ^0\right)\). It follows that the set of signals which induce a particular distribution is not convex. Observe that \(\hat{\pi }\) is not individually minimal, which implies that \(\Pi^{\mathrm{i}}(S)\) is also not convex.
The signals \(\pi ^{\ell },\) \(\pi ^{\prime \ell },\) and \(\hat{\pi }^{\ell }\) are obtained by relabeling the message profiles sent by \(\pi ,\) \(\pi ^{\prime },\) and \(\hat{\pi },\) respectively, with the posterior belief profiles they lead to. We have that \(\pi ^{\ell } = \pi ^{\prime \ell }.\) Both are equal to
\(\pi ^{\ell }, \pi ^{\prime \ell }\) | \(\left( (\frac{1}{2},\frac{1}{2}),(\frac{1}{2},\frac{1}{2})\right)\) | \(\left( (\frac{1}{2},\frac{1}{2}),(\frac{1}{4},\frac{3}{4})\right)\) | \(\left( (\frac{1}{4}, \frac{3}{4}),(\frac{1}{2},\frac{1}{2})\right)\) | \(\left( (\frac{1}{4},\frac{3}{4}),(\frac{1}{4},\frac{3}{4})\right)\) |
---|---|---|---|---|
X | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) |
Y | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{5}{8}\) |
Each receiver has posterior belief (1/2, 1/2) upon observing message (1/2, 1/2) and has posterior belief (1/4, 3/4) upon observing message (1/4, 3/4). Thus, \(\pi ^{\ell }\) and \(\pi ^{\prime \ell }\) are language-independent.
Finally, \(\hat{\pi }^{\ell }\) sends \(\lambda ^0\) to both players with probability 1. In particular, \(\hat{\pi }^{\ell }\) is not a convex combination of \(\pi ^{\ell }\) and \(\pi '^{\ell }\). \(\triangle\)
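The failure of convexity in this example can be verified directly. The sketch below (our encoding, not the paper's notation) recomputes the distributions induced by \(\pi\), \(\pi'\), and \(\hat{\pi }\) via Bayesian updating on each receiver's own message:

```python
from fractions import Fraction as F

prior = {"X": F(1, 3), "Y": F(2, 3)}
cols = [("x", "x"), ("x", "y"), ("y", "x"), ("y", "y")]
pi = {"X": dict(zip(cols, [F(1, 4)] * 4)),
      "Y": dict(zip(cols, [F(1, 8), F(1, 8), F(1, 8), F(5, 8)]))}
# pi' swaps the roles of messages x and y for both receivers
swap = {"x": "y", "y": "x"}
pi_p = {w: {(swap[a], swap[b]): q for (a, b), q in pi[w].items()} for w in pi}
# the equal-weight mixture of pi and pi'
pi_hat = {w: {s: (pi[w][s] + pi_p[w][s]) / 2 for s in cols} for w in pi}

def post(sig, i, s):
    # receiver i's posterior after observing his own message s[i]
    num = {w: prior[w] * sum(q for t, q in sig[w].items() if t[i] == s[i])
           for w in sig}
    tot = sum(num.values())
    return tuple(sorted((w, q / tot) for w, q in num.items()))

def dist(sig):
    # induced distribution over posterior belief profiles
    out = {}
    for s in cols:
        lam = (post(sig, 0, s), post(sig, 1, s))
        out[lam] = out.get(lam, F(0)) + sum(prior[w] * sig[w][s] for w in sig)
    return out

assert dist(pi) == dist(pi_p)   # pi and pi' induce the same distribution
assert len(dist(pi_hat)) == 1   # but their midpoint is completely uninformative
```

The single belief profile in the support of \(\sigma ^{\hat{\pi }}\) is the prior profile, confirming that the set of signals inducing a fixed distribution is not convex.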
The next result states that an LIS is individually minimal.
Lemma 6.3
Let S be a set of message profiles. It holds that \(\Pi ^{\mathrm \ell }(S) \subseteq \Pi ^{\mathrm i}(S).\)
Proof
Let \(\pi \in \Pi ^{\ell }(S),\) \(s \in S^{\pi },\) and \(i \in N.\) It holds that \(\lambda ^{\pi ,s}_{i} = s_{i}\) by definition of an LIS. Hence the map \(s_{i} \mapsto \lambda ^{\pi ,s}_{i}\) is the identity on \(S^{\pi }_{i}\), so that \(S^{\pi }_{i} = \text {supp}(\sigma ^{\pi }_{i})\) and, in particular, \(|S^{\pi }_{i}| = |\text {supp}(\sigma ^{\pi }_{i})|.\) \(\square\)
By Lemma 6.3 we know that an LIS is individually minimal and by Lemma 5.5 individual minimality implies minimality. Thus, there is a chain of inclusions between \(\Pi ^{\ell }(S)\), \(\Pi ^\mathrm{{i}}(S)\), and \(\Pi ^\mathrm{{m}}(S)\).
Corollary 6.4
Let S be a set of message profiles. Then \(\Pi ^{\ell }(S)\subseteq \Pi ^\mathrm{{i}}(S)\subseteq \Pi ^\mathrm{{m}}(S)\subseteq \Pi (S)\).
Since we can transform any given individually minimal signal into an LIS by relabeling each message with the posterior belief that message leads to, an immediate consequence of Theorem 5.6 is that any element of \(\Sigma (S)\) can be induced by an LIS if \(\Delta (\Omega )^{n} \subseteq S,\) a result also obtained by Arieli et al. (2021) for a binary state space. We denote the set of all language-independent signals that induce a distribution \(\sigma \in \Sigma (S)\) by \(\Pi ^\ell (\sigma ).\) One advantage of language-independent signals is that for each \(\sigma \in \Sigma (S)\) it holds that \(\Pi ^\ell (\sigma )\) is convex. The proof of this statement, however, is postponed as it follows easily from later results, and can be found after Corollary 7.3.
Proposition 6.5
Let S be a set of message profiles, \(\Delta (\Omega )^n \subseteq S\), and \(\sigma \in \Sigma (S).\) Then \(\Pi ^{\ell }(\sigma )\) is non-empty and convex.
Proposition 6.5 contrasts with Example 6.2, where we showed that both the set of all signals and the set of all individually minimal signals that induce a given \(\sigma\) are typically not convex.Footnote 10 This makes language-independent signals particularly attractive. Note that \(\Pi ^\ell (\sigma )\) is not necessarily a singleton, as there might be different language-independent signals that induce a particular distribution of beliefs, which we illustrate later in Example 7.6. The multiplicity of elements of \(\Pi ^\ell (\sigma )\) stems from the fact that language-independent signals may differ in the higher-order beliefs they generate even when they induce the same first-order beliefs.
Equivalence of signals
Recall that given an individually minimal signal, we can obtain an LIS by simply replacing messages with the posterior beliefs they lead to. More generally, given a signal \(\pi \in \Pi (S)\), one can define \(\pi '\in \Pi (S)\) by a one-to-one change in the names of messages in \(S^{\pi }_i\) for each \(i\in N\). In this case, we typically have \(S^{\pi '}\ne S^{\pi }\), though we intuitively think of both signals as equivalent. More formally, we have the following definition.
Definition 6.6
Let S and \(\hat{S}\) be two sets of message profiles. Two signals \(\pi : \Omega \rightarrow \Delta (S)\) and \(\hat{\pi }: \Omega \rightarrow \Delta (\hat{S})\) are equivalent \((\pi \sim \hat{\pi })\) if for every \(i \in N\) there is a bijection \(\psi _{i}: S^{\pi }_{i} \rightarrow \hat{S}^{\hat{\pi }}_{i}\) such that, for every \(\omega \in \Omega ,\) for every \(s \in S^{\pi },\) \(\hat{\pi }\left( \psi (s)\vert \omega \right) =\pi (s|\omega )\).Footnote 11
We can interpret equivalent signals as providing the same information in different languages. Indeed, let \(s_{i} \in S^{\pi }_{i}\) and \(\hat{s}_{i}\in \hat{S}^{\hat{\pi }}_{i}\) be such that \(\psi _{i}(s_{i}) = \hat{s}_{i}.\) It holds that
Now consider \(s \in S^{\pi }\) and \(\hat{s} \in \hat{S}^{\hat{\pi }}\) such that \(\hat{s} = \psi (s).\) For every \(i \in N,\) we have that
It follows from (5) that sending message profile s under signal \(\pi\) and sending message profile \(\hat{s}\) under signal \(\hat{\pi }\) results in the same posterior belief profile. It is also immediate from Definition 6.6 that \(\hat{S}^{\hat{\pi }} = \psi (S^{\pi }).\)
The next proposition, stating that equivalent signals induce the same distribution over posterior belief profiles, now follows easily.
Proposition 6.7
Fix two sets of message profiles S and \(\hat{S}\), and let \(\pi : \Omega \rightarrow \Delta (S)\) and \(\hat{\pi }: \Omega \rightarrow \Delta (\hat{S})\) be such that \(\pi \sim \hat{\pi }.\) It holds that \(\sigma ^{\pi } = \sigma ^{\hat{\pi }}.\)
Proof
For every \(i \in N\) there is a bijection \(\psi _{i}: S^{\pi }_{i} \rightarrow \hat{S}^{\hat{\pi }}_{i}\) such that, for every \(\omega \in \Omega ,\) for every \(s \in S^{\pi },\) \(\hat{\pi }\left( \psi (s)\vert \omega \right) =\pi (s|\omega )\). Let \(s\in S^{\pi }\) and \(\hat{s} \in \hat{S}^{\hat{\pi }}\) be such that \(\psi (s) = \hat{s}.\) It follows from (5) that \(\lambda ^{\pi ,s} = \lambda ^{\hat{\pi },\hat{s}}.\) Since \(\hat{S}^{\hat{\pi }} = \psi (S^{\pi }),\) we have that \(\text {supp}(\sigma ^{\hat{\pi }})= \text {supp}(\sigma ^{\pi }).\) Moreover, it holds that, for every \(\lambda \in \text {supp}(\sigma ^{\pi }),\)
\(\square\)
Note that the converse of Proposition 6.7 is not true: as we will see in Example 7.6 there are signals that induce the same distribution over posterior belief profiles but that are not equivalent.
The next proposition shows that each set of equivalent signals contains at most one LIS.
Proposition 6.8
Fix a set of message profiles S and let \(\pi ,\pi '\in \Pi ^\ell (S)\) with \(\pi \sim \pi '\). It holds that \(\pi =\pi ^{\prime }\).
Proof
By Proposition 6.7 it holds that \(\sigma ^{\pi } = \sigma ^{\pi ^{\prime }},\) so \(S^{\pi }= \text {supp}\left( \sigma ^{\pi }\right) = \text {supp}\left( \sigma ^{\pi ^{\prime }}\right) = S^{\pi '}.\) As \(\pi \sim \pi ',\) for every \(i \in N\) there is a bijection \(\psi _{i}: S^{\pi }_{i} \rightarrow S^{\pi '}_{i}\) such that, for every \(\omega \in \Omega ,\) for every \(s \in S^{\pi },\) \(\pi '\left( \psi (s)\vert \omega \right) =\pi (s|\omega )\). In particular, since \(\pi , \pi ' \in \Pi ^{\ell }(S),\) we have, for every \(i \in N,\) for every \(\lambda \in S^{\pi },\)
where the first and third equality follow since \(\pi , \pi ^{\prime } \in \Pi ^{\ell }(S)\), and the second equality uses (5). It follows that \(\pi =\pi ^{\prime }\). \(\square\)
Observe that a signal that is not individually minimal cannot be equivalent to an LIS, as the required bijection between message spaces cannot exist. Nevertheless, for every signal there is a canonical way to find an LIS that induces the same distribution over posterior belief profiles. The construction relies heavily on the following lemma, which is straightforward and, therefore, stated without proof.Footnote 12
Lemma 6.9
Fix a set of message profiles S and let \(\pi \in \Pi (S)\) be a signal. It holds that
Lemma 6.9 extends the formula for Bayesian updating, applying it simultaneously to all messages that lead to a particular posterior belief. According to the lemma, distinct messages that lead to the same posterior can be replaced by a single message. Thus, the following result is immediate and we present it without proof.
Corollary 6.10
Fix a set of message profiles S and let \(\Delta (\Omega )^{n} \subseteq S\). For \(\pi \in \Pi (S)\) define \(\pi ^{\ell }: \Omega \rightarrow \Delta (S)\) as
Then \(\sigma ^{\pi ^{\ell }}=\sigma ^{\pi }\). Moreover, if \(\pi \in \Pi ^\mathrm{{i}}(S)\) then \(\pi ^{\ell }\sim \pi\).
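The relabeling behind Corollary 6.10 is mechanical enough to sketch in code. The helper below (an illustration; `to_lis` and its encoding are our own) relabels each receiver's message with the posterior it induces and merges duplicates, applied here to the signal of Example 5.2:

```python
from fractions import Fraction as F

def to_lis(pi, prior):
    # relabel each receiver's message with the posterior it induces and merge
    # messages leading to the same posterior, as in Corollary 6.10
    states = list(prior)
    n = len(next(iter(pi[states[0]])))

    def posterior(i, m):
        num = {w: prior[w] * sum(q for s, q in pi[w].items() if s[i] == m)
               for w in states}
        tot = sum(num.values())
        return tuple(num[w] / tot for w in states)

    lis = {w: {} for w in states}
    for w in states:
        for s, q in pi[w].items():
            lam = tuple(posterior(i, s[i]) for i in range(n))
            lis[w][lam] = lis[w].get(lam, F(0)) + q
    return lis

# the minimal but not individually minimal signal of Example 5.2
prior = {"X": F(1, 2), "Y": F(1, 2)}
pi = {"X": {("v", "x"): F(1, 2), ("v", "y"): F(0), ("w", "w"): F(1, 2)},
      "Y": {("v", "x"): F(0), ("v", "y"): F(1, 2), ("w", "w"): F(1, 2)}}
pi_l = to_lis(pi, prior)
# receiver 1's messages v and w are both relabeled with the belief (1/2, 1/2)
half = (F(1, 2), F(1, 2))
assert all(lam[0] == half for w in pi_l for lam in pi_l[w])
assert sum(pi_l["X"].values()) == sum(pi_l["Y"].values()) == 1
```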
7 Inducible distributions
Unlike in the single-receiver case, when dealing with multiple receivers, Bayes plausibility alone is not sufficient to ensure that a distribution over posterior belief profiles belongs to \(\Sigma (S).\)Footnote 13
Example 7.1
Let \(N=\left\{ 1,2,3\right\}\), \(\Omega =\left\{ X,Y\right\}\), and \(S = \Delta (\Omega )^{3}.\) Assume the agents have common prior \(\lambda ^0(X)=1/6\). Let \(\lambda ^1(X)=(1/2,1/2,0)\), \(\lambda ^2(X)=(1/2,0,1/2)\), \(\lambda ^3(X)=(0,1/2,1/2),\) and \(\lambda ^4(X) = (0,0,0)\) and let \(\sigma \in \Delta \left( \Delta \left( \Omega \right) ^3\right)\) be given by \(\sigma \left( \lambda ^1\right) =\sigma \left( \lambda ^2\right) =\sigma \left( \lambda ^3\right) =1/6\) and \(\sigma \left( \lambda ^4\right) =1/2\). Then, for each \(i\in N,\) we have \(\sigma _i \left( 1/2,1/2\right) = 1/3\) and \(\sigma _i(0,1) = 2/3.\)
First note that \(\sigma\) is Bayes-plausible: for each \(i\in N\), the expected posterior probability of X equals \(\frac{1}{3}\cdot \frac{1}{2}+\frac{2}{3}\cdot 0=\frac{1}{6}=\lambda ^0(X)\).
Suppose that signal \(\pi \in \Pi (S)\) induces \(\sigma .\) By Corollary 6.10 it is without loss of generality to assume that \(\pi \in \Pi ^{\ell }(S).\) In this case, for any receiver, observing (1/2,1/2) leads to posterior belief (1/2,1/2), and observing (0,1) leads to posterior belief (0,1). This implies that no receiver can observe (0,1) in state X, i.e., \(\pi _i((0,1)|X)=0\) for all \(i\in N\). Since every \(\lambda ^k\) assigns the belief (0, 1) to at least one receiver, it follows that \(\pi \left( \lambda ^1|X\right) =\pi \left( \lambda ^2|X\right) =\pi \left( \lambda ^3|X\right) =\pi \left( \lambda ^4|X\right) =0\), a contradiction. \(\triangle\)
To guarantee that a distribution over posterior belief profiles belongs to \(\Sigma (S),\) additional conditions need to be imposed on top of Bayes plausibility. In Theorem 7.2, we provide necessary and sufficient conditions for a distribution over posterior belief profiles to belong to \(\Sigma (S)\).
Theorem 7.2
Let S be a set of message profiles and \(\sigma \in \Delta (\Delta (\Omega )^n)\) be such that, for every \(i \in N,\) \(|S_{i}| \ge |\rm{supp}(\sigma _{i})|.\) Then \(\sigma \in \Sigma (S)\) if and only if \(\sigma\) is Bayes-plausible and there exists \(p \in \mathbb R^{\Omega \times \text {supp}(\sigma )}_+\) such that
If \(\sigma \in \Sigma (S),\) then the signal \(\pi :\Omega \rightarrow \Delta (\Delta (\Omega )^n)\) defined by
is an LIS such that \(\sigma ^{\pi } = \sigma .\)
Condition (i) can be interpreted as “posterior marginality”: it states that the probability of a posterior belief profile \(\lambda\) is the marginal of \(p(\omega ,\lambda )\). The right-hand side of condition (ii) is the probability that \(\omega\) is the true state according to i’s belief \(\lambda _i\), multiplied by the probability that i has belief \(\lambda _i\). Thus, the sum on the left-hand side is the probability that i has belief \(\lambda _i\) and \(\omega\) is the true state. Intuitively, \(p(\omega ,\lambda )\) can be interpreted as the joint probability that the state is \(\omega\) and the induced posterior belief profile is \(\lambda\).
Proof
Assume that \(\sigma\) is Bayes-plausible and there exists \(p \in \mathbb R^{\Omega \times \text {supp}(\sigma )}_+\) such that (i) and (ii) are satisfied. Let \(\pi\) be defined as in (8). We first show that \(\pi\) is a signal.
Let \(\omega \in \Omega .\) Obviously, it holds that, for every \(\lambda \in \Delta (\Omega )^{n},\) \(\pi (\lambda | \omega ) \ge 0.\) In formula (9) that follows next, \(i \in N\) is an arbitrarily chosen receiver. It holds that
where the last equality is true as \(\sigma\) is Bayes-plausible. We find that
which proves that \(\pi\) is a signal.
Next, we show that \(\pi\) is an LIS. Let \(\omega \in \Omega ,\) \(i \in N,\) and \(\lambda _i \in \text {supp}(\sigma _{i}).\) It holds that
As message \(\lambda _i\) leads to posterior \(\lambda _i\), \(\pi\) is an LIS.
We show next that \(\sigma ^{\pi } = \sigma .\) If \(\sigma \left( \lambda \right) =0\), then \(\sigma ^{\pi }\left( \lambda \right) =0\) by construction. So, let \(\lambda \in \text {supp}(\sigma )\). It holds that
At this point we have shown that \(\sigma\) is inducible if \(\text {supp}(\sigma _i)\subseteq S_i\). Recall that \(\left| S_i\right| \ge \left| \text {supp}(\sigma _i)\right|\). For every \(i \in N,\) let \(T_{i}\) be a subset of \(S_{i}\) with cardinality equal to \(\left| \text {supp}(\sigma _{i})\right|\) and take a bijection \(\psi _{i}: \text {supp}(\sigma _{i}) \rightarrow T_{i}.\) Define the signal \(\pi ^{\prime }: \Omega \rightarrow \Delta (S)\) by
Then \(\pi \sim \pi ^{\prime }\), so by Proposition 6.7 we have that \(\sigma ^{\pi ^{\prime }} = \sigma ^{\pi } = \sigma\). It follows that \(\sigma \in \Sigma (S).\)
Now assume that \(\sigma \in \Sigma (S).\) It follows from Proposition 3.2 that \(\sigma\) is Bayes-plausible. Let \(\pi \in \Pi (S)\) be such that \(\sigma ^{\pi } = \sigma .\) For every \(\omega \in \Omega ,\) for every \(\lambda \in \text {supp}(\sigma )\), define
We first show that (i) holds. We have that
Next, we show (ii) holds. Let \(\omega \in \Omega ,\) \(i\in N,\) and \(\lambda _i \in \text {supp}(\sigma _i).\) We have that
where the first equality follows from Lemma 6.9. \(\square\)
Theorem 7.2 explicitly shows what is needed in addition to Bayes plausibility to ensure that a distribution over posterior belief profiles belongs to \(\Sigma (S).\) Observe that any \(p \in \mathbb R^{\Omega \times \text {supp}(\sigma )}_+\) which satisfies Condition (i) is a finite probability distribution, that is, \(p\in \Delta \left( \Omega \times \text {supp}(\sigma )\right)\).Footnote 14
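Conditions (i) and (ii) are straightforward to check mechanically for a candidate p. The sketch below is illustrative only: it assumes a binary state space \(\{X,Y\}\), encodes each belief by the probability it assigns to X, and uses the data of Example 6.2, with \(p(\omega ,\lambda ) = \lambda ^0(\omega )\pi ^{\ell }(\lambda |\omega )\):

```python
from fractions import Fraction as F

def check_conditions(p, sigma):
    # conditions (i) and (ii) of Theorem 7.2, assuming a binary state space
    # {X, Y}; a belief is the probability assigned to X
    profiles = list(sigma)
    n = len(profiles[0])
    # (i) posterior marginality: sigma(lam) is the marginal of p over states
    if any(p["X"][lam] + p["Y"][lam] != sigma[lam] for lam in profiles):
        return False
    # (ii) the p-mass of {omega} x {lam : lam_i = b} equals b(omega)*sigma_i(b)
    for i in range(n):
        for b in {lam[i] for lam in profiles}:
            s_i = sum(sigma[lam] for lam in profiles if lam[i] == b)
            if sum(p["X"][lam] for lam in profiles if lam[i] == b) != b * s_i:
                return False
            if sum(p["Y"][lam] for lam in profiles if lam[i] == b) != (1 - b) * s_i:
                return False
    return True

# data of Example 6.2, with p(omega, lam) = prior(omega) * pi_l(lam | omega)
lams = [(F(1, 2), F(1, 2)), (F(1, 2), F(1, 4)),
        (F(1, 4), F(1, 2)), (F(1, 4), F(1, 4))]
sigma = dict(zip(lams, [F(1, 6), F(1, 6), F(1, 6), F(1, 2)]))
prior = {"X": F(1, 3), "Y": F(2, 3)}
pi_l = {"X": dict(zip(lams, [F(1, 4)] * 4)),
        "Y": dict(zip(lams, [F(1, 8), F(1, 8), F(1, 8), F(5, 8)]))}
p = {w: {lam: prior[w] * pi_l[w][lam] for lam in lams} for w in pi_l}
assert check_conditions(p, sigma)

# shifting mass between states preserves (i) but violates (ii)
bad = {w: dict(p[w]) for w in p}
bad["X"][lams[0]] += F(1, 24)
bad["Y"][lams[0]] -= F(1, 24)
assert not check_conditions(bad, sigma)
```

The perturbation at the end illustrates that condition (i) alone is not enough: the marginals over states are unchanged, yet the perturbed p no longer updates beliefs correctly.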
Note that while we pose a similar question to Arieli et al. (2021) and Ziegler (2020), we obtain a completely different characterization. To obtain a characterization for more than two players and a binary state space, Arieli et al. (2021) utilize the No Trade Theorem of Milgrom and Stokey (1982). After introducing a mediator who trades with the agents, they provide an interval for the mediator’s expected payoff within which a distribution is inducible.Footnote 15 Ziegler (2020) generalizes Kamenica and Gentzkow (2011) to two players and makes use of “belief-dependence bounds”, defined over the CDFs associated with distributions of beliefs, to characterize inducible distributions. In contrast, we allow for both a general finite state space and a finite number of receivers, and we characterize inducible posteriors in terms of a system of equations representing the properties of the agents’ marginal beliefs.
Observe that by Equations (8) and (9) p is a common prior over \(\Omega \times \text {supp}(\sigma )\). Thus, Theorem 7.2 bears some resemblance to Proposition 1 in Mathevet et al. (2020). Yet, while they impose conditions on the common prior over belief hierarchies from which the posterior distribution emerges, our condition is formulated as a system of separate marginality conditions for all players.
While Theorem 7.2 is useful in determining whether a distribution of beliefs is inducible, it also provides an LIS that induces the desired distribution. In Example 7.4, we use Theorem 7.2 to show that a given distribution of beliefs is not inducible. In Example 7.6, we use two solutions to conditions (i) and (ii) to provide two distinct LIS’s that induce the same distribution. Yet, beforehand, we provide the proofs that have been left out in Sects. 5 and 6.
For any \(\sigma \in \Sigma (S),\) define
As \(P(\sigma )\) is defined as the set of non-negative matrix solutions to a system of linear equalities, and this system forces the components of any solution to sum to one, we immediately have the following result.
Corollary 7.3
Let S be a set of message profiles. For every \(\sigma \in \Sigma (S),\) \(P(\sigma )\) is a non-empty, compact, and convex polytope.
We are now ready to provide the remaining proofs of Sects. 5 and 6.
Proof of Theorem 5.6
Let \(\sigma \in \Sigma (S)\). Then it holds that, for every \(i \in N,\) \(|S_{i}| \ge \left| \text {supp}(\sigma _{i})\right|\). Theorem 7.2 implies that there is an LIS \(\pi : \Omega \rightarrow \Delta (\Delta (\Omega )^{n})\) which induces \(\sigma .\) For every \(i \in N,\) let \(T_{i}\) be a subset of \(S_{i}\) with cardinality equal to \(|\text {supp}(\sigma _{i})|\) and take a bijection \(\psi _{i}: \text {supp}(\sigma _{i}) \rightarrow T_{i}\). Let the signal \(\pi ^{\prime }: \Omega \rightarrow \Delta (S)\) be defined by
Then \(\pi \sim \pi ^{\prime },\) so by Proposition 6.7 we have that \(\sigma ^{\pi ^{\prime }} = \sigma ^{\pi } = \sigma\). As the LIS \(\pi\) is individually minimal, it follows that \(\pi ^{\prime } \in \Pi^{\mathrm{i}}(S).\) \(\square\)
Proof of Proposition 6.5
As \(P(\sigma )\) is a non-empty, compact, and convex polytope by Corollary 7.3 and \(\Pi ^\ell \left( \sigma \right)\) is a linear transformation of \(P(\sigma )\) by (8), \(\Pi ^\ell \left( \sigma \right)\) is a non-empty, compact, and convex polytope as well. \(\square\)
In the next example, we use Theorem 7.2 to determine whether a given distribution over posterior belief profiles belongs to \(\Sigma (S).\)
Example 7.4
Recall the distribution over posterior belief profiles \(\sigma\) in Example 7.1 with
and, we have \(\sigma (\lambda ^{1}) = \sigma (\lambda ^{2}) = \sigma (\lambda ^{3}) = 1/6\) and \(\sigma (\lambda ^{4}) = 1/2.\)
Suppose \(\sigma \in \Sigma (S).\) Then, by Theorem 7.2 there exists \(p \in P(\sigma )\) such that
where we make use of Condition (ii) for \(\omega = X.\) From the first line we obtain \(p\left( X,\lambda ^1\right) =p\left( X,\lambda ^2\right) =p\left( X,\lambda ^3\right) =1/12\). Combining this with the second line, we find \(p\left( X,\lambda ^4\right) =-1/12\). Thus, p fails to be non-negative and \(\sigma \notin \Sigma (S).\) \(\triangle\)
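The arithmetic behind this contradiction can be traced through numerically. A small sketch (our encoding: each receiver's belief is written as the probability of X):

```python
from fractions import Fraction as F

# each receiver's posterior probability of X under the four profiles of Example 7.1
lam = {1: (F(1, 2), F(1, 2), F(0)), 2: (F(1, 2), F(0), F(1, 2)),
       3: (F(0), F(1, 2), F(1, 2)), 4: (F(0), F(0), F(0))}
sigma = {1: F(1, 6), 2: F(1, 6), 3: F(1, 6), 4: F(1, 2)}
prior_X = F(1, 6)

# Bayes plausibility holds for every receiver
assert all(sum(sigma[k] * lam[k][i] for k in lam) == prior_X for i in range(3))

# Condition (ii) at omega = X and belief 1/2 yields three pairwise equations
# whose unique solution is p(X, l1) = p(X, l2) = p(X, l3) = 1/12
pX = {1: F(1, 12), 2: F(1, 12), 3: F(1, 12)}
for i in range(3):
    ks = [k for k in lam if lam[k][i] == F(1, 2)]
    assert sum(pX[k] for k in ks) == F(1, 2) * sum(sigma[k] for k in ks)

# summing condition (ii) over all of a receiver's beliefs forces
# sum_k p(X, lk) = prior_X, pinning down the remaining entry:
pX[4] = prior_X - (pX[1] + pX[2] + pX[3])
assert pX[4] == F(-1, 12)  # negative, so no admissible p exists
```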
Proposition 4.2 gives a necessary and sufficient condition for a finite set \(R \subseteq \Delta \left( \Omega \right) ^n\) to be a subset of \(\text {supp}(\sigma )\) for some \(\sigma \in \Sigma (S)\). We will now provide a necessary and sufficient condition for the opposite inclusion, i.e., we characterize those sets \(R \subseteq \Delta \left( \Omega \right) ^n\) such that there is some inducible \(\sigma \in \Sigma (S)\) whose support is restricted to R. We also characterize those sets R such that \(R=\text {supp}(\sigma )\) for some \(\sigma \in \Sigma (S)\).
Proposition 7.5
Fix a set of message profiles S and let the non-empty and finite \(R \subseteq \Delta (\Omega )^n\) be such that, for every \(i \in N,\) \(|S_{i}| \ge |R_{i}|.\) There exists \(\sigma \in \Sigma (S)\) with \(\rm{supp}(\sigma )\) \(\subseteq R\) if and only if there is \(p \in \mathbbm {R}^{\Omega \times R}_{+}\) such that
If such p exists, then the signal \(\pi :\Omega \rightarrow \Delta (R)\) defined by
is an LIS such that \(\rm{supp}(\sigma ^{\pi })\subseteq R\). Moreover, if p is such that, for all \(\lambda \in R\), \(\sum _{\omega \in \Omega }p\left( \omega ,\lambda \right) >0\), then \(\rm{supp}\left( \sigma ^{\pi }\right) =R\).
Proof
Assume that there is \(p\in \mathbbm {R}^{\Omega \times R}_{+}\) such that (i) and (ii) hold. Let \(\pi :\Omega \rightarrow \Delta (R)\) be as defined in (11). We have that
Moreover, for every \(\omega \in \Omega ,\) \(i \in N,\) and \(\lambda _{i} \in S_i^{\pi }\), it holds that
Thus, \(\pi\) is an LIS and \(\text {supp}(\sigma ^{\pi }) = S^{\pi }\subseteq R\).
To account for message sets \(S_i\) that do not allow for language-independent messages, note that, for all \(i\in N\), \(|\text {supp}\left( \sigma ^{\pi }_i\right) |\le |R_i| \le |S_i|\). For every \(i \in N\) let \(T_{i}\) be a subset of \(S_{i}\) with \(|T_i|= |\text {supp}\left( \sigma ^{\pi }_i\right) |\) and take a bijection \(\psi _{i}: \text {supp}\left( \sigma ^{\pi }_i\right) \rightarrow T_{i}\). Let the signal \(\pi ^{\prime }: \Omega \rightarrow \Delta (S)\) be defined by
It holds that \(\pi \sim \pi ^{\prime },\) so by Proposition 6.7 we have that \(\sigma ^{\pi ^{\prime }} = \sigma ^{\pi }\) and \(\text {supp}(\sigma ^{\pi ^{\prime }}) = \text {supp}(\sigma ^{\pi })\subseteq R.\)
Now assume that \(\sigma \in \Sigma (S)\) is such that \(\text {supp}(\sigma ) \subseteq R.\) Then, by Theorem 7.2, there is an LIS \(\pi : \Omega \rightarrow \Delta (R)\) that induces \(\sigma\). Let
By construction, \(S^{\pi }=\text {supp}(\sigma )\subseteq R\) and \(p\left( \omega ,\lambda \right) =0\) for all \(\lambda \in R{\setminus } S^{\pi }\) and all \(\omega \in \Omega\). So, (i) is satisfied since
Further, for every \(\omega \in \Omega ,\) \(i \in N,\) and \(\lambda _{i} \in R_{i},\) it holds that
Hence, (ii) is satisfied.
Lastly, let p be such that, for all \(\lambda \in R\), \(\sum _{\omega \in \Omega }p\left( \omega ,\lambda \right) >0\). Then for each \(\lambda \in R\), there is \(\omega \in \Omega\) such that \(\pi \left( \lambda |\omega \right) >0\). Thus, \(\text {supp}\left( \sigma ^{\pi }\right) =S^{\pi }=R\). \(\square\)
As \(\pi\) is defined by (11), condition (i) ensures that \(\pi \left( \cdot |\omega \right) \in \Delta (R)\) for all \(\omega \in \Omega\), so that \(\pi\) is a signal. Condition (ii) ensures correct belief updating: as before, the left-hand side is the probability that i has belief \(\lambda _i\) and the true state is \(\omega\); the right-hand side is the product of the probability that the state is \(\omega\) conditional on i having belief \(\lambda _i\) and the probability that i has belief \(\lambda _i\).
In our discussion of Proposition 6.7, stating that equivalent signals induce the same distribution, we announced that the converse need not be true. We can now easily provide the required counterexample.
Example 7.6
Let \(N = \{1,2\},\) \(\Omega = \{X,Y\},\) \(\lambda ^{0}(X) = 1/3,\) and \(S = \Delta (\Omega )^{n}.\) Consider the distribution \(\sigma\) whose support consists of the four posterior belief profiles \(\lambda ^{1},\ldots ,\lambda ^{4}\) of Example 6.2, with
\(\sigma (\lambda ^{1}) = \sigma (\lambda ^{2}) = \sigma (\lambda ^{3}) = 1/6\) and \(\sigma (\lambda ^{4}) = 1/2.\) One can easily verify that \(p, p^{\prime } \in \mathbb {R}^{\Omega \times \text {supp}(\sigma )}_{+}\) defined by
\(p(\omega ,\lambda )\) | \(\lambda ^1\) | \(\lambda ^2\) | \(\lambda ^3\) | \(\lambda ^4\) |
---|---|---|---|---|
X | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{1}{12}\) |
Y | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{5}{12}\) |
\(p'(\omega ,\lambda )\) | \(\lambda ^1\) | \(\lambda ^2\) | \(\lambda ^3\) | \(\lambda ^4\) |
---|---|---|---|---|
X | \(\frac{1}{6}\) | 0 | 0 | \(\frac{1}{6}\) |
Y | 0 | \(\frac{1}{6}\) | \(\frac{1}{6}\) | \(\frac{1}{3}\) |
are both solutions to the system of equations in Theorem 7.2. We define \(\pi , \pi ^{\prime } \in \Pi ^{\ell }(S)\) by applying (8) to p and \(p^{\prime },\) respectively, that is,
\(\pi (\lambda | \omega )\) | \(\lambda ^1\) | \(\lambda ^2\) | \(\lambda ^3\) | \(\lambda ^4\) |
---|---|---|---|---|
X | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{4}\) |
Y | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) | \(\frac{5}{8}\) |
\(\pi '(\lambda | \omega )\) | \(\lambda ^{1}\) | \(\lambda ^{2}\) | \(\lambda ^{3}\) | \(\lambda ^{4}\) |
---|---|---|---|---|
X | \(\frac{1}{2}\) | 0 | 0 | \(\frac{1}{2}\) |
Y | 0 | \(\frac{1}{4}\) | \(\frac{1}{4}\) | \(\frac{1}{2}\) |
Both \(\pi\) and \(\pi '\) induce \(\sigma .\) Yet, as \(\pi \ne \pi ^{\prime },\) Proposition 6.8 implies that \(\pi\) and \(\pi ^{\prime }\) are not equivalent. \(\triangle\)
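That both tables define LISs inducing \(\sigma\) can be verified mechanically. The sketch below uses our own encoding, taking \(\lambda ^{1},\ldots ,\lambda ^{4}\) to be the belief profiles of Example 6.2 (written as each receiver's probability of X) and reading formula (8) as \(\pi (\lambda |\omega )=p(\omega ,\lambda )/\lambda ^0(\omega )\), consistently with the proof of Theorem 7.2:

```python
from fractions import Fraction as F

prior = {"X": F(1, 3), "Y": F(2, 3)}
# lambda^1..lambda^4, each receiver's belief encoded by the probability of X
lams = {1: (F(1, 2), F(1, 2)), 2: (F(1, 2), F(1, 4)),
        3: (F(1, 4), F(1, 2)), 4: (F(1, 4), F(1, 4))}
p  = {"X": {1: F(1, 12), 2: F(1, 12), 3: F(1, 12), 4: F(1, 12)},
      "Y": {1: F(1, 12), 2: F(1, 12), 3: F(1, 12), 4: F(5, 12)}}
pp = {"X": {1: F(1, 6), 2: F(0), 3: F(0), 4: F(1, 6)},
      "Y": {1: F(0), 2: F(1, 6), 3: F(1, 6), 4: F(1, 3)}}

def lis(q):
    # formula (8), read as pi(lambda | omega) = p(omega, lambda) / lambda0(omega)
    return {w: {k: q[w][k] / prior[w] for k in lams} for w in q}

def induced(sig):
    # probability of each belief profile under the signal
    return {k: sum(prior[w] * sig[w][k] for w in sig) for k in lams}

pi, pi_p = lis(p), lis(pp)
target = {1: F(1, 6), 2: F(1, 6), 3: F(1, 6), 4: F(1, 2)}
assert induced(pi) == induced(pi_p) == target
assert pi != pi_p  # two distinct LISs inducing the same distribution

# both are LISs: receiver i's posterior of X after message b equals b itself
def posterior_X(sig, i, b):
    num = {w: prior[w] * sum(q for k, q in sig[w].items() if lams[k][i] == b)
           for w in sig}
    return num["X"] / sum(num.values())

assert all(posterior_X(sig, i, b) == b
           for sig in (pi, pi_p) for i in (0, 1) for b in (F(1, 2), F(1, 4)))
```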
8 The information and posterior correspondences
Our objective in this section is to provide a framework in which we can analyze what receivers know about each other’s messages, so that we can later answer the question of how a sender can make sure that receivers know “as little as possible”. We follow the standard approach based on information correspondences, as in, for instance, Osborne and Rubinstein (1994), Chapter 5.
Given a signal \(\pi \in \Pi (S),\) we refer to an element \((\omega ,s) \in \Omega \times S^{\pi }\) such that \(\pi (s\vert \omega ) > 0\) as a history and to an element \((\omega ,\lambda ) \in \Omega \times \text {supp}(\sigma ^{\pi })\) such that there exists \(s \in S^{\pi }\) with \(\pi (s \vert \omega ) > 0\) and \(\lambda ^{\pi ,s} = \lambda\) as a posterior history. We denote the sets of histories and posterior histories, respectively, by

$$H^{\pi } = \left\{ (\omega ,s) \in \Omega \times S^{\pi } \,\vert \, \pi (s \vert \omega ) > 0 \right\} \quad \text {and} \quad \Lambda ^{\pi } = \left\{ \left( \omega ,\lambda ^{\pi ,s}\right) \,\vert \, (\omega ,s) \in H^{\pi } \right\} .$$
Note that if \(\pi \in \Pi ^{\ell }(S),\) then \(H^{\pi } = \Lambda ^{\pi }.\)
Example 8.1
Recall \(\pi\) and \(\pi '\) from Example 7.6. The sets of possible histories are:
As both signals are language-independent, we have \(\Lambda ^{\pi }=H^{\pi }\) and \(\Lambda ^{\pi '}=H^{\pi '}\). \(\triangle\)
We next introduce the standard notion of an information correspondence.
Definition 8.2
Fix a set of message profiles S and let \(\pi \in \Pi (S).\) The information correspondence \(P^{\pi }_i: H^{\pi }\rightrightarrows H^{\pi }\) of \(i \in N\) is defined as

$$P^{\pi }_i(\omega ,s) = \left\{ (\omega ',s') \in H^{\pi } \,\vert \, s'_i = s_i \right\} .$$
That is, \(P^{\pi }_i(\omega ,s)\) is the set of histories receiver i considers possible when the true history is \((\omega ,s)\). As we call \(P^{\pi }_i\) an information correspondence, we briefly verify that this name is deserved, that is, that it is consistent with the common definition of an information correspondence.
Lemma 8.3
Fix a set of message profiles S and let \(\pi \in \Pi (S)\) and \(i \in N.\) The information correspondence \(P^{\pi }_i\) satisfies the following two conditions:
-
C1
For all \((\omega ,s)\in H^{\pi }\), \((\omega ,s)\in P^{\pi }_i(\omega ,s)\).
-
C2
If \((\omega ',s')\in P^{\pi }_i(\omega ,s)\), then \(P^{\pi }_i(\omega ',s')=P^{\pi }_i(\omega ,s)\).
Proof
Let \((\omega ,s) \in H^{\pi }.\) Suppose \((\omega ,s)\notin P^{\pi }_i(\omega ,s)\). Then, \(s_i\ne s_i\), a contradiction. Thus, C1 is satisfied.
Next, let \((\omega ',s')\in P^{\pi }_i(\omega ,s)\) and \((\omega '',s'')\in P^{\pi }_i(\omega ',s')\). Then, \(s''_i=s'_i= s_i\), so \((\omega '',s'')\in P^{\pi }_i(\omega ,s)\), and consequently, \(P^{\pi }_i(\omega ',s')\subseteq P^{\pi }_i(\omega ,s)\). Since \(s_i'=s_i\), it holds that \((\omega ,s)\in P^{\pi }_i(\omega ',s')\) as well, and the same arguments imply \(P^{\pi }_i(\omega ,s)\subseteq P^{\pi }_i(\omega ',s')\). So, C2 is satisfied. \(\square\)
It should be noted that an information correspondence satisfies conditions C1 and C2 if and only if it partitions the set of histories into information sets (see Osborne and Rubinstein, 1994, Lemma 68.3). In our case, we can use \(P_i^{\pi }\) to define a partition of the set \(H^{\pi }\) as

$$\mathcal P_i^{\pi } = \left\{ P_i^{\pi }(\omega ,s) \,\vert \, (\omega ,s) \in H^{\pi } \right\} .$$
This partition reflects i’s knowledge about the true history: whenever the true history is \(\left( \omega ,s\right)\), i knows that the true history lies in \(P^{\pi }_i(\omega ,s)\).
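As an illustration of how an information correspondence and its induced partition are computed, the following sketch uses a small hypothetical two-receiver signal (the messages u, v and probabilities are purely illustrative, not taken from the examples of the paper) and checks conditions C1 and C2 directly:

```python
from itertools import product

# A hypothetical two-receiver signal: conditional probabilities
# pi(s | omega) over message profiles s = (s1, s2).
pi = {
    "X": {("u", "u"): 0.5, ("v", "v"): 0.5},
    "Y": {("u", "u"): 0.5, ("u", "v"): 0.5},
}

# Histories: state-profile pairs realizing with positive probability.
H = [(w, s) for w, row in pi.items() for s, q in row.items() if q > 0]

def P(i, w, s):
    """Information set of receiver i at history (w, s): all histories
    in which i observes the same message s_i."""
    return frozenset((w2, s2) for (w2, s2) in H if s2[i] == s[i])

def partition(i):
    """The collection of information sets of receiver i."""
    return {P(i, w, s) for (w, s) in H}

# C1: every history lies in its own information set.
assert all((w, s) in P(i, w, s) for i in (0, 1) for (w, s) in H)
# C2 (equivalently, the partition property): information sets of one
# receiver are pairwise equal or disjoint.
for i in (0, 1):
    for A, B in product(partition(i), repeat=2):
        assert A == B or A.isdisjoint(B)
```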
Example 8.4
Recall \(\pi\) in Example 5.2. The information correspondence partitions the set of histories as follows:
Now consider \(\pi ^{\prime }\) in Example 5.4. The information correspondence partitions the set of histories as follows:
It is easy to verify that both C1 and C2 are satisfied. In particular, the information partitions of \(\mathcal P_i^\pi\) and, respectively, \(\mathcal P_i^{\pi '}\) are given by
\(\triangle\)
Even though \(\pi\) and \(\pi '\) in Example 8.4 induce the same distribution, it is not possible to compare their information partitions since they employ different messages and thus have distinct sets of histories. Still, we can compare such signals via the sets of possible posterior histories of receivers.
Definition 8.5
Fix a set of message profiles S and let \(\pi \in \Pi (S).\) The posterior correspondence \(Q^{\pi }_i: H^{\pi }\rightrightarrows \Lambda ^{\pi }\) of \(i \in N\) is defined as

$$Q^{\pi }_i(\omega ,s) = \left\{ \left( \omega ',\lambda ^{\pi ,s'}\right) \,\vert \, (\omega ',s') \in P^{\pi }_i(\omega ,s) \right\} .$$
The set \(Q^{\pi }_i(\omega ,s)\) contains all posterior histories i deems possible if the true history is \((\omega ,s)\).Footnote 16
Example 8.6
Recall the information correspondences in Example 8.4. The posterior correspondences related to \(\pi\) are as follows.
The posterior correspondences related to \(\pi ^{\prime }\) are as follows.
One can easily see that there is a bijection between the set of histories and the set of posterior histories for both \(\pi\) and \(\pi '\) that preserves the partition structure. \(\triangle\)
For \(\pi \in \Pi (S)\) and \(i\in N,\) define \(\mathcal Q^{\pi }_i=\left\{ Q^{\pi }_i(\omega ,s)\vert \,(\omega ,s)\in H^{\pi }\right\}\). Note that in Example 8.6 both \(\mathcal Q^{\pi }_i\) and \(\mathcal Q^{\pi '}_i\) are partitions for any \(i \in N.\) However, this is not always true.
Example 8.7
Let \(N=\left\{ 1,2\right\}\), \(\Omega =\left\{ X,Y\right\}\), and \(\lambda ^0(X)=1/3\). Let signal \(\pi \in \Pi (S)\) be given as follows:
\(\pi\) | (x, x) | (x, y) | (y, x) | (y, y) | (a, a) | (a, b) | (b, a) | (b, b) |
---|---|---|---|---|---|---|---|---|
X | \(\frac{1}{6}\) | 0 | 0 | \(\frac{1}{6}\) | \(\frac{1}{6}\) | \(\frac{1}{6}\) | \(\frac{1}{6}\) | \(\frac{1}{6}\) |
Y | 0 | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{1}{6}\) | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{1}{12}\) | \(\frac{5}{12}\) |
For the posterior correspondence we find
Since \(Q^{\pi }_1(X,(x,x))\ne Q^{\pi }_1(X,(a,a))\) and \(\left( X,\left( 1/2,1/2\right) \right) \in Q^{\pi }_1(X,(x,x))\cap Q^{\pi }_1(X,(a,a))\), \(\mathcal Q^{\pi }_1\) is not a partition. \(\triangle\)
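The failure of the partition property can be checked mechanically. The following sketch recomputes receiver 1's posterior-history sets from the table of Example 8.7 and confirms that the two sets are distinct yet overlap:

```python
from fractions import Fraction as F

prior = {"X": F(1, 3), "Y": F(2, 3)}
# The signal pi from Example 8.7: pi(s | omega) over profiles s = (s1, s2).
pi = {
    "X": {("x","x"): F(1,6), ("y","y"): F(1,6), ("a","a"): F(1,6),
          ("a","b"): F(1,6), ("b","a"): F(1,6), ("b","b"): F(1,6)},
    "Y": {("x","y"): F(1,12), ("y","x"): F(1,12), ("y","y"): F(1,6),
          ("a","a"): F(1,12), ("a","b"): F(1,12), ("b","a"): F(1,12),
          ("b","b"): F(5,12)},
}
H = [(w, s) for w in pi for s in pi[w]]

def post(i, m):
    """Receiver i's posterior on X after observing message m."""
    joint = {w: prior[w] * sum(q for s, q in pi[w].items() if s[i] == m) for w in pi}
    return joint["X"] / (joint["X"] + joint["Y"])

def Q1(w, s):
    """Posterior histories receiver 1 deems possible at history (w, s)."""
    return frozenset(
        (w2, (post(0, s2[0]), post(1, s2[1])))
        for (w2, s2) in H if s2[0] == s[0]
    )

# (x,x) and (a,a) give receiver 1 the same posterior 1/2 ...
assert post(0, "x") == post(0, "a") == F(1, 2)
# ... yet generate distinct, overlapping posterior-history sets,
# so the collection Q_1 is not a partition:
A, B = Q1("X", ("x", "x")), Q1("X", ("a", "a"))
assert A != B and ("X", (F(1, 2), F(1, 2))) in A & B
```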
The reason why \(\mathcal Q^{\pi }_1\) in Example 8.7 is not a partition is that message profiles (x, x) and (a, a) lead to the same posterior belief profile, yet \(\left( x,x\right)\) realizes only in state X whereas \(\left( a,a\right)\) realizes in both states. This situation, of course, can happen only if the signal is not minimal. Thus, \(\pi \in \Pi^{\mathrm{m}}(S)\) is sufficient for \(\mathcal Q^{\pi }_i\) to be a partition for all \(i\in N\). To prove this we define, for \(\pi \in \Pi (S),\) the (surjective) map \(\phi : H^{\pi } \rightarrow \Lambda ^{\pi }\) by

$$\phi (\omega ,s) = \left( \omega ,\lambda ^{\pi ,s}\right) . \qquad (13)$$
Proposition 8.8
Fix a set of message profiles S and let \(\pi \in \Pi ^\mathrm{{m}}(S)\). Then \(\phi\) is a bijection and, for every \((\omega ,s),(\omega ',s')\in H^{\pi }\) and every \(i\in N\), it holds that \((\omega ,s)\in P^{\pi }_i(\omega ',s')\) if and only if \(\phi (\omega ,s)\in Q^{\pi }_i(\omega ',s')\). In particular, \(\mathcal Q^{\pi }_i\) is a partition.
Proof
First note that since \(\pi \in \Pi ^\mathrm{{m}}(S)\), for any \((\omega ,s),(\omega ',s')\in H^{\pi }\) with \(s\ne s'\), it holds that \(\lambda ^{\pi ,s} \ne \lambda ^{\pi ,s'}\) and hence \(\left( \omega ,\lambda ^{\pi ,s}\right) \ne \left( \omega ',\lambda ^{\pi ,s'}\right)\); if \(s = s'\) and \(\omega \ne \omega '\), the images differ in the first coordinate. That is, no two distinct histories are mapped to the same posterior history, so \(\phi\) is injective and, being surjective, a bijection.
Let \((\omega ,s),(\omega ',s')\in H^{\pi }\) and \(i \in N.\) If \((\omega ,s)\in P^{\pi }_i(\omega ',s')\), then \(\phi \left( \omega ,s\right) =\left( \omega ,\lambda ^{\pi ,s}\right) \in Q^{\pi }_i\left( \omega ',s'\right)\) by the definition of \(Q^{\pi }_i\left( \omega ',s'\right)\). Conversely, if \(\left( \omega ,\lambda ^{\pi ,s}\right) =\phi (\omega ,s)\in Q^{\pi }_i(\omega ',s')\), then there is \((\omega ,\tilde{s}) \in P^{\pi }_i(\omega ',s')\) with \(\phi (\omega ,\tilde{s}) = \phi (\omega ,s)\); as \(\phi\) is a bijection, \(\tilde{s} = s\), so \((\omega ,s)\in P^{\pi }_i(\omega ',s')\). Therefore, \((\omega ,s)\in P^{\pi }_i(\omega ',s')\) if and only if \(\phi (\omega ,s)\in Q^{\pi }_i(\omega ',s')\).
Suppose \(Q_i^{\pi }\left( \omega ,s\right) \cap Q_i^{\pi }\left( \omega ',s'\right) \ne \emptyset\). It follows that \(P_i^{\pi }\left( \omega ,s\right) \cap P_i^{\pi }\left( \omega ',s'\right) \ne \emptyset\), so \(P_i^{\pi }\left( \omega ,s\right) = P_i^{\pi }\left( \omega ',s'\right)\). Therefore, \(Q_i^{\pi }\left( \omega ,s\right) =\phi \left( P_i^{\pi }\left( \omega ,s\right) \right) = \phi \left( P_i^{\pi }\left( \omega ',s'\right) \right) = Q_i^{\pi }\left( \omega ',s'\right) ,\) so \(\mathcal Q^{\pi }_i\) is a partition. \(\square\)
The converse of Proposition 8.8 is not true. That is, even if the map \(\phi\) in (13) is a bijection with the required properties, it is still possible that \(\pi\) is not minimal.
Example 8.9
Let \(N=\left\{ 1,2\right\}\), \(\Omega =\left\{ X,Y\right\}\), and \(\lambda ^0(X)=1/3\). Let the signal \(\pi \in \Pi (S)\) be defined by
\(\pi\) | (a, a) | (b, b) | (a, c) | (c, a) | (b, d) | (d, b) | (e, e) |
---|---|---|---|---|---|---|---|
X | \(\frac{1}{6}\) | 0 | 0 | 0 | \(\frac{1}{4}\) | \(\frac{1}{6}\) | \(\frac{5}{12}\) |
Y | 0 | \(\frac{1}{4}\) | \(\frac{1}{6}\) | \(\frac{1}{4}\) | 0 | 0 | \(\frac{1}{3}\) |
Then, for receiver 1 we have \(\lambda ^{\pi ,(a,a)}_1(X)=\lambda ^{\pi ,(b,b)}_1(X)=1/3\), \(\lambda ^{\pi ,(c,a)}_1(X)=0\), \(\lambda ^{\pi ,(d,b)}_1(X)=1\), and \(\lambda ^{\pi ,(e,e)}_1(X)=5/13\). For receiver 2 we have \(\lambda ^{\pi ,(a,a)}_2(X)=\lambda ^{\pi ,(b,b)}_2(X)=1/4\), \(\lambda ^{\pi ,(a,c)}_2(X)=0\), \(\lambda ^{\pi ,(b,d)}_2(X)=1\), and \(\lambda ^{\pi ,(e,e)}_2(X)=5/13\). Note that message profiles (a, a) and (b, b) lead to the same posterior belief profile, (1/3, 1/4). Thus, \(\pi\) is not minimal. For the support of the induced distribution \(\sigma\), we find

$$\text {supp}(\sigma ) = \left\{ \left( 1/3,1/4\right) ,\left( 1/3,0\right) ,\left( 0,1/4\right) ,\left( 1/3,1\right) ,\left( 1,1/4\right) ,\left( 5/13,5/13\right) \right\} .$$
The sets \(\mathcal{P}^{\pi }_{1}\) and \(\mathcal{Q}^{\pi }_{1}\) defined by the information and posterior correspondences of receiver 1 are as follows:
Similar calculations can be made for receiver 2. It is easily checked that not only are \(\mathcal Q_1^{\pi }\) and \(\mathcal{Q}^{\pi }_{2}\) partitions, but \(\phi\) is a bijection as well. The reason \(\mathcal Q^{\pi }_1\) and \(\mathcal{Q}^{\pi }_{2}\) are partitions, even though \(\pi \notin \Pi ^\mathrm{{m}}(S),\) is that the message profiles which lead to the same posterior, (a, a) and (b, b), never realize in the same state. \(\triangle\)
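The posterior calculations of Example 8.9 can be reproduced numerically. The following sketch recomputes the posterior profiles from the table and confirms that, although (a, a) and (b, b) lead to the same posterior profile, all eight histories still map to distinct posterior histories:

```python
from fractions import Fraction as F

prior = {"X": F(1, 3), "Y": F(2, 3)}
# The signal pi from Example 8.9.
pi = {
    "X": {("a","a"): F(1,6), ("b","d"): F(1,4), ("d","b"): F(1,6), ("e","e"): F(5,12)},
    "Y": {("b","b"): F(1,4), ("a","c"): F(1,6), ("c","a"): F(1,4), ("e","e"): F(1,3)},
}
H = [(w, s) for w in pi for s in pi[w]]

def post(i, m):
    """Receiver i's posterior on X after observing message m."""
    joint = {w: prior[w] * sum(q for s, q in pi[w].items() if s[i] == m) for w in pi}
    return joint["X"] / (joint["X"] + joint["Y"])

profile = {s: (post(0, s[0]), post(1, s[1])) for w in pi for s in pi[w]}

assert post(0, "e") == post(1, "e") == F(5, 13)
# (a,a) and (b,b) lead to the same posterior profile (1/3, 1/4),
# so pi is not minimal ...
assert profile[("a","a")] == profile[("b","b")] == (F(1, 3), F(1, 4))
# ... but they never realize in the same state, so distinct histories
# are still mapped to distinct posterior histories: phi is a bijection.
post_hist = [(w, profile[s]) for (w, s) in H]
assert len(set(post_hist)) == len(H) == 8
```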
Observe that if \(\pi \in \Pi ^\ell (S),\) then \(\phi\) in (13) is the identity. Hence, Proposition 8.8 implies that the partitions \(\mathcal P_i^{\pi }\) and \(\mathcal Q_i^{\pi }\) are identical. For all \(\pi \in \Pi ^\mathrm{{i}}(S)\), let \(\pi ^{\ell }\in \Pi ^{\ell }(S)\) be defined as in (7), i.e., \(\pi ^{\ell }\) denotes the LIS obtained by replacing the messages of \(\pi\) by the posteriors they lead to. Then the posterior history partition of \(\pi\) is equal to the history partition of \(\pi ^{\ell }\). Thus, we have the following corollary.
Corollary 8.10
Fix a set of message profiles S and let \(\pi \in \Pi ^\mathrm{{i}}(S)\) and \(\pi ^{\ell } \in \Pi ^{\ell }(S)\) be defined as in (7). Then, for all \(i\in N\), \(\mathcal Q^{\pi }_i=\mathcal Q^{\pi ^{\ell }}_i=\mathcal P^{\pi ^{\ell }}_i\).
9 Informativeness of signals
Example 8.6 derives the posterior correspondences of the receivers under \(\pi\) and \(\pi '\) from Examples 5.2 and 5.4. Observe that receiver 1 has more precise information about receiver 2’s knowledge of the true state under \(\pi\): while he only observes w under \(\pi ^{\prime }\) and, thus, never learns what message receiver 2 has observed, under \(\pi\) he knows upon observing v that receiver 2 knows the true state. In this sense \(\pi\) is “more informative”, a notion that depends on the posterior correspondence and that we formalize below. Before doing so, we briefly observe that the posterior correspondence itself is invariant under equivalence. This stems from the fact that equivalence requires not only identical posteriors, but also identical information sets. In particular, under two equivalent signals a receiver has identical higher order knowledge about everybody’s beliefs of any order.
Lemma 9.1
Fix a set of message profiles S and let \(\pi ,\pi '\in \Pi (S)\) with \(\pi \sim \pi '\). Then, for every \(i \in N,\) \(\mathcal Q_i^{\pi }=\mathcal Q_i^{\pi '}.\)
Proof
Since \(\pi \sim \pi '\), for every \(i \in N\) there is a bijection \(\psi _{i}: S^{\pi }_{i} \rightarrow S^{\pi ^{\prime }}_{i}\) such that, writing \(\psi (s) = (\psi _{j}(s_{j}))_{j \in N}\), for every \(\omega \in \Omega\) and every \(s \in S^{\pi },\) \(\pi ^{\prime }(\psi (s) \vert \omega ) = \pi (s \vert \omega ).\)
Let \((\omega ,s) \in H^{\pi }\) and \(i \in N.\)
We have that \((\omega ^{\prime }, s^{\prime }) \in P^{\pi }_{i}(\omega ,s)\) if and only if \((\omega ^{\prime },s^{\prime }) \in H^{\pi }\) and \(s^{\prime }_{i} = s_{i}\), which is equivalent to \((\omega ^{\prime }, \psi (s^{\prime })) \in H^{\pi ^{\prime }}\) and \(\psi _{i}(s^{\prime }_{i}) = \psi _{i}(s_{i})\), which in turn is equivalent to \((\omega ^{\prime }, \psi (s^{\prime })) \in P^{\pi ^{\prime }}_{i}(\omega , \psi (s)).\)
Let \(\left( \omega ',\lambda '\right) \in Q_i^{\pi '}\left( \omega ,\psi \left( s\right) \right)\). Then, by the definition of \(Q_i^{\pi '}\), there is \(\left( \omega ',\psi \left( s'\right) \right) \in P_i^{\pi '}\left( \omega ,\psi \left( s\right) \right)\) with \(\lambda ^{\pi ',\psi \left( s^{\prime }\right) } = \lambda ^{\prime }.\) As shown in the previous paragraph, this implies \(\left( \omega ',s'\right) \in P_i^{\pi }\left( \omega ,s\right)\). Since by construction \(\lambda ^{\pi ,s'}=\lambda ^{\pi ',\psi \left( s'\right) }=\lambda '\), it follows that \(\left( \omega ',\lambda '\right) \in Q^{\pi }_{i}\left( \omega ,s\right)\) and therefore \(Q_i^{\pi '}\left( \omega ,\psi \left( s\right) \right) \subseteq Q^{\pi }_{i}\left( \omega ,s\right)\).
Since \(\sim\) is symmetric, we also have that \(Q^{\pi }_{i}\left( \omega ,s\right) \subseteq Q_i^{\pi '}\left( \omega ,\psi \left( s\right) \right) .\) \(\square\)
We argued at the beginning of this section that the signal \(\pi\) is “more informative” for receiver 1 than signal \(\pi ^{\prime }\); under \(\pi\), receiver 1 knows (with positive probability) that receiver 2 knows the true history. In particular, partition \(\mathcal Q^\pi _1\) is finer than partition \(\mathcal Q^{\pi '}_1\), so that receiver 1 can distinguish better between posterior histories. In other words, for any element of \(\mathcal Q^\pi _1\) we can find an element of \(\mathcal Q^{\pi '}_1\) that includes the former. We now formalize the notion of being more informative.
Definition 9.2
Fix a set of message profiles S. Let \(\sigma \in \Sigma (S)\) and \(\pi , \pi ' \in \Pi (\sigma ).\) The signal \(\pi '\) is at least as informative as \(\pi\) if for all \(i\in N\) it holds that
-
(i)
for all \(Q' \in \mathcal Q^{\pi '}_i\) there exists \(Q \in \mathcal Q^{\pi }_i\) such that \(Q' \subseteq Q\),
-
(ii)
for all \(Q \in \mathcal Q^{\pi }_i,Q' \in \mathcal Q^{\pi '}_i\) with \(Q \cap Q' \ne \emptyset\) it holds that \(Q' \subseteq Q\).
Moreover, \(\pi\) and \(\pi '\) are equally informative if \(\pi\) is at least as informative as \(\pi '\) and vice versa; \(\pi '\) is more informative than \(\pi\) if \(\pi '\) is at least as informative as \(\pi\) and not equally informative.
Observe that to compare \(\pi\) and \(\pi '\) in terms of their informativeness, Definition 9.2 does not require \(\mathcal Q^\pi _i\) and \(\mathcal Q^{\pi '}_i\) to be partitions: condition (ii) ensures that we are able to compare them even if they are not. When they are partitions, which is the case if \(\pi ,\pi '\in \Pi ^\mathrm{{m}}(S)\) by Proposition 8.8, then condition (ii) in Definition 9.2 is redundant.
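Definition 9.2 amounts to a pair of set-inclusion tests and can be implemented directly. In the following sketch, posterior histories are abstracted to arbitrary hashable labels (the integer labels in the toy check below are purely illustrative); each \(\mathcal Q_i\) is a collection of sets, one collection per receiver:

```python
def at_least_as_informative(Q_new, Q_old):
    """Definition 9.2: for each receiver i, (i) every set in Q_new[i] is
    contained in some set of Q_old[i], and (ii) whenever sets from the two
    collections overlap, the Q_new-set is contained in the Q_old-set."""
    for Qn_i, Qo_i in zip(Q_new, Q_old):
        for Qn in Qn_i:
            if not any(Qn <= Qo for Qo in Qo_i):                # condition (i)
                return False
            if any(Qn & Qo and not Qn <= Qo for Qo in Qo_i):    # condition (ii)
                return False
    return True

# Toy check with a single receiver: a refinement of a collection of
# posterior-history sets is at least as informative, but not conversely.
coarse = [[{1, 2, 3, 4}]]
fine   = [[{1, 2}, {3, 4}]]
assert at_least_as_informative(fine, coarse)
assert not at_least_as_informative(coarse, fine)
```

When both collections are partitions, condition (ii) never triggers, in line with the remark above that it is then redundant.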
It is easily verified that the notion of being at least as informative is transitive. Our next observation serves as a sanity check: two signals should be equally informative if and only if they generate the same collections of posterior histories. This is indeed the case.
Lemma 9.3
Fix a set of message profiles S. Let \(\sigma \in \Sigma (S)\) and \(\pi , \pi ' \in \Pi (\sigma ).\) Then \(\pi\) and \(\pi '\) are equally informative if and only if, for every \(i \in N,\) \(\mathcal{Q}^{\pi }_{i} = \mathcal{Q}^{\pi ^{\prime }}_{i}\).
Proof
Clearly, if, for every \(i \in N,\) \(\mathcal Q_i^{\pi }= \mathcal Q_i^{\pi '},\) then \(\pi\) and \(\pi '\) are equally informative. For the other direction, assume that \(\pi\) and \(\pi '\) are equally informative. Let \(i \in N.\) As \(\pi '\) is at least as informative as \(\pi\), for all \(Q'\in \mathcal Q_i^{\pi '}\) there is \(Q\in \mathcal Q_i^{\pi }\) such that \(Q'\subseteq Q\). As \(Q'\cap Q\ne \emptyset\) and as \(\pi\) is at least as informative as \(\pi '\), it must hold that \(Q\subseteq Q'\), i.e., \(Q'=Q\). Thus, \(\mathcal Q_i^{\pi '}\subseteq \mathcal Q_i^{\pi }\). The same arguments yield \(\mathcal Q_i^{\pi }\subseteq \mathcal Q_i^{\pi '}\). \(\square\)
An immediate consequence of Lemmas 9.1 and 9.3 is that equivalent signals are equally informative. This is in line with our interpretation of equivalent signals as using different languages: if the same messages were conveyed in different languages, one would not expect them to become more or less informative.
In Example 8.6, we found the posterior correspondences under signals \(\pi\) and \(\pi '\) but did not consider their informativeness. In the next example, we show that \(\pi\) is more informative than \(\pi '\).
Example 9.4
Recall the signals \(\pi\) and \(\pi '\) from Examples 5.2 and 5.4. The posterior correspondences of \(\pi\) and \(\pi '\) were derived in Example 8.6. Note that \(\Lambda ^\pi =\Lambda ^{\pi '}\) and that \(\pi ,\pi '\in \Pi ^\mathrm{{m}}(S)\). Thus, Proposition 8.8 implies that, for every \(i\in N\), \(\mathcal Q^{\pi }_i\) and \(\mathcal Q^{\pi '}_i\) are partitions of the same set. More precisely, they are given as
It holds that \(\mathcal Q^{\pi }_1\) is a finer partition than \(\mathcal Q^{\pi ^{\prime }}_1\) and that \(\mathcal Q^{\pi }_2=\mathcal Q^{\pi ^{\prime }}_2\). Thus, \(\pi\) is more informative than \(\pi ^{\prime }\). \(\triangle\)
While we do not require \(\mathcal Q^{\pi }_i\) and \(\mathcal Q^{\pi '}_i\) to be partitions to compare \(\pi\) and \(\pi '\), if they are partitions, then \(\pi '\) is more informative than \(\pi\) if for all \(i \in N\) the restriction of \(\mathcal Q_i^{\pi }\) to \(\Lambda ^{\pi '}\) is at least as coarse as \(\mathcal Q_i^{\pi '}\) and for some \(i \in N\) the partitions \(\mathcal Q^{\pi }_i\) and \(\mathcal Q^{\pi '}_i\) are not the same.
Proposition 9.5
Fix a set of message profiles S. Let \(\sigma \in \Sigma (S)\), \(\pi , \pi ' \in \Pi (\sigma )\), and \(\pi \in \Pi ^\mathrm{{i}}(S)\). Then \(\pi '\) is at least as informative as \(\pi\) if and only if \(\Lambda ^{\pi '}\subseteq \Lambda ^{\pi }\).
Proof
Suppose that \(\Lambda ^{\pi '}\setminus \Lambda ^\pi \ne \emptyset\). Then there exist \(i\in N\) and \(Q'\in \mathcal Q^{\pi '}_i\) such that there is no \(Q\in \mathcal Q^\pi _i\) with \(Q'\subseteq Q\): any \(Q'\) containing an element of \(\Lambda ^{\pi '}\setminus \Lambda ^{\pi }\) cannot be a subset of any \(Q \subseteq \Lambda ^{\pi }\). Hence, condition (i) of Definition 9.2 is not satisfied and \(\pi '\) is not at least as informative as \(\pi\).
Now let \(\Lambda ^{\pi '}\subseteq \Lambda ^{\pi }\). By Corollary 8.10 and Lemma 9.1 we can assume without loss of generality that \(\pi \in \Pi ^\ell (S)\), so that \(\mathcal Q_i^{\pi }=\mathcal P_i^{\pi }\) for all \(i\in N\).
We first show Condition (ii) of Definition 9.2. So, let \(i \in N\) and assume \(Q\in \mathcal Q_i^{\pi }\) and \(Q'\in \mathcal Q_i^{\pi '}\) are such that \(Q\cap Q'\ne \emptyset\). We have to show that \(Q'\subseteq Q\). Let \(\left( \omega ^*,\lambda ^*\right) \in Q\cap Q'\). There is \(\left( \omega ,\lambda \right) \in H^{\pi }\) such that \(Q=Q_i^{\pi }\left( \omega ,\lambda \right) =P_i^{\pi }\left( \omega ,\lambda \right)\). Thus, by Lemma 8.3, we have that \(Q = P_i^{\pi }\left( \omega ^*,\lambda ^*\right) .\) Consider \(\left( \bar{\omega },\bar{\lambda }\right) \in Q'.\) There is \(\left( \omega ',s'\right) \in H^{\pi '}\) such that \(Q'=Q_i^{\pi '}\left( \omega ',s'\right)\) and there is \(\left( \omega '',s''\right) \in P_i^{\pi '}\left( \omega ',s'\right)\) with \(\lambda ^{\pi ',s''}=\bar{\lambda }\). In particular, since \(s''_i=s'_i\), we have \(\bar{\lambda }_i=\lambda _i^{\pi ',s''}=\lambda _i^{\pi ',s'}=\lambda _i^*\). Since \(\Lambda ^{\pi '}\subseteq \Lambda ^{\pi }\), we have \(\left( \bar{\omega },\bar{\lambda }\right) \in \Lambda ^{\pi }\), and since \(\bar{\lambda }_i=\lambda _i^*\), we have \(\left( \bar{\omega },\bar{\lambda }\right) \in P_i^{\pi }\left( \omega ^*,\lambda ^*\right) =Q\). We have shown that \(Q' \subseteq Q.\)
To prove Condition (i) of Definition 9.2 it is now sufficient to show that for each \(Q'\in \mathcal Q_i^{\pi '}\) there is \(Q\in \mathcal Q_i^{\pi }\) with \(Q\cap Q'\ne \emptyset\). Let \(\left( \omega ',s'\right) \in H^{\pi '}\) be such that \(Q' = Q_i^{\pi ^{\prime }}\left( \omega ',s'\right) .\) It holds that \((\omega ', \lambda ^{\pi ',s'}) \in Q'\subseteq \Lambda ^{\pi '}\subseteq \Lambda ^{\pi }\). Thus, there is \(Q \in \mathcal Q_i^{\pi }\) with \((\omega ', \lambda ^{\pi ',s'}) \in Q.\) \(\square\)
Proposition 9.5 reveals that among those signals that induce the same distribution over posterior belief profiles, the individually minimal signals whose set of posterior histories is largest are the least informative. We can interpret the condition \(\Lambda ^{\pi '}\subseteq \Lambda ^{\pi }\) as \(\pi '\) providing additional information about which posterior histories are impossible. It is worth mentioning that this condition together with the individual minimality of \(\pi\) implies that \(\mathcal Q_i^{\pi '}\) contains at least as many elements as \(\mathcal Q_i^{\pi }\) and that these elements are smaller in the sense of set inclusion.
In Corollary 6.10 a signal \(\pi\) is transformed into an LIS \(\pi ^{\ell }\) that induces the same distribution over posterior belief profiles. Although the two signals are not equivalent if \(\pi\) is not individually minimal, they have the same set of posterior histories, as the next lemma shows.
Lemma 9.6
Fix a set of message profiles S. Let \(\Delta (\Omega )^{n} \subseteq S\) and \(\pi \in \Pi (S).\) For \(\pi ^{\ell }\) as defined in (7) it holds that \(\Lambda ^{\pi ^\ell }=\Lambda ^{\pi }\).
Proof
Observe that \(\left( \omega ,\lambda \right) \in \Lambda ^{\pi }\) if and only if there is \(s\in S^{\pi }\) such that \(\lambda =\lambda ^{\pi ,s}\) and \(\pi \left( s|\omega \right) >0\). This, however, is equivalent to \(\pi ^{\ell }\left( \lambda |\omega \right) =\sum _{s\in S^{\pi }:\lambda ^{\pi ,s}=\lambda } \pi \left( s|\omega \right) >0,\) which holds if and only if \(\left( \omega ,\lambda \right) \in H^{\pi ^{\ell }}=\Lambda ^{\pi ^{\ell }}\). \(\square\)
Proposition 9.5 and Lemma 9.6 immediately imply the following result.
Corollary 9.7
Fix a set of message profiles S. Let \(\Delta (\Omega )^{n} \subseteq S,\) \(\pi \in \Pi (S),\) and \(\pi ^{\ell } \in \Pi ^\ell (S)\) as defined in (7). Then \(\pi\) is at least as informative as \(\pi ^{\ell }\).
Corollary 9.7 might suggest that language-independent signals reveal as little information as possible. The following example demonstrates that this is, in general, not true.
Example 9.8
Recall \(\pi\) and \(\pi '\) from Example 7.6. Both signals are language-independent and, hence, individually minimal. However, as shown in Example 8.1, \(\Lambda ^{\pi '} = H^{\pi '}\subsetneq H^{\pi } = \Lambda ^{\pi }.\) Thus, by Proposition 9.5, \(\pi '\) is more informative than \(\pi\). Observe that it is not relevant that \(\pi\) is an LIS: translating each message sent under \(\pi\) into two different languages and sending each translation with equal probability yields a signal that is not even minimal, but equally informative as \(\pi .\) \(\triangle\)
Our final result identifies those signals that are least informative. Let \(\sigma \in \Sigma (S)\) and recall that the set \(P(\sigma )\) is convex. The relative interior of \(P(\sigma )\) is defined as

$$\text {relint}\,P(\sigma ) = \left\{ p \in P(\sigma ) \,\vert \, \forall p' \in P(\sigma )\, \exists \gamma > 1: \gamma p + (1-\gamma ) p' \in P(\sigma ) \right\} .$$
Proposition 9.9
Fix a set of message profiles S. Let \(\Delta (\Omega )^{n} \subseteq S\) and \(\sigma \in \Sigma (S)\). For every \(p \in P(\sigma ),\) define the signal \(\pi ^{p} \in \Pi ^{\ell }(S)\) by

$$\pi ^{p}(\lambda \vert \omega ) = \frac{p(\omega ,\lambda )}{\lambda ^{0}(\omega )} \quad \text {for all } \omega \in \Omega \text { and } \lambda \in \text {supp}(\sigma ).$$
If \(p\in \text {relint}P(\sigma )\), then every \(\pi \in \Pi (\sigma )\) is at least as informative as \(\pi ^p.\)
Proof
First observe that for every \(p\in \text {relint}\,P(\sigma )\) it holds that \(p\left( \omega ,\lambda \right) >0\) whenever there is \(p'\in P\left( \sigma \right)\) with \(p'\left( \omega ,\lambda \right) >0\). Thus, for any such \(p,p'\) it holds that

$$\Lambda ^{\pi ^{p'}} \subseteq \Lambda ^{\pi ^{p}}.$$
So, by Proposition 9.5, it holds that \(\pi ^{p'}\) is at least as informative as \(\pi ^p.\)
Let \(\pi \in \Pi (\sigma )\) and define \(\pi ^{\ell } \in \Pi ^{\ell }(S)\) as in (7). Define \(p^{\prime } \in P(\sigma )\) by

$$p'(\omega ,\lambda ) = \lambda ^{0}(\omega )\, \pi ^{\ell }(\lambda \vert \omega ) \quad \text {for all } \omega \in \Omega \text { and } \lambda \in \text {supp}(\sigma ).$$
Then \(\pi ^{\ell }=\pi ^{p'}\). Thus, as seen before, \(\pi ^{\ell }\) is at least as informative as \(\pi ^p\). Moreover, by Corollary 9.7, \(\pi\) is at least as informative as \(\pi ^{\ell }\). Hence, \(\pi\) is at least as informative as \(\pi ^p\). \(\square\)
Given a distribution \(\sigma \in \Sigma (S),\) if p is in the relative interior of \(P(\sigma ),\) then \(\pi ^p\) is a least informative signal that induces \(\sigma\). As \(\Pi ^\ell (\sigma )\) is but a positive linear transformation of \(P\left( \sigma \right)\), Proposition 9.9 implies that any signal \(\pi \in \text {relint}\,\Pi ^\ell (\sigma )\) is a least informative signal.
Recall signals \(\pi\) and \(\pi '\) from Example 7.6. We concluded in Example 9.8 that \(\pi '\) is more informative than \(\pi\). The result also follows from Proposition 9.9 as \(p\in \text {relint}P(\sigma )\) and, hence, \(\pi\) is a least informative signal.
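The mechanism behind Proposition 9.9 can be illustrated numerically with the two solutions p and \(p'\) from Example 7.6 (as before, the hypothetical labels l1,…,l4 stand in for the belief profiles). Since the constraints defining \(P(\sigma )\) are linear, a strict convex combination of two solutions is again a solution, and its support contains the supports of both, which is the support-maximality property that relative-interior points enjoy:

```python
from fractions import Fraction as F

labels = ["l1", "l2", "l3", "l4"]   # stand-ins for the profiles lambda^1,...,lambda^4
sigma = {"l1": F(1, 6), "l2": F(1, 6), "l3": F(1, 6), "l4": F(1, 2)}

p  = {("X","l1"): F(1,12), ("X","l2"): F(1,12), ("X","l3"): F(1,12), ("X","l4"): F(1,12),
      ("Y","l1"): F(1,12), ("Y","l2"): F(1,12), ("Y","l3"): F(1,12), ("Y","l4"): F(5,12)}
pp = {("X","l1"): F(1,6),  ("X","l2"): F(0),    ("X","l3"): F(0),    ("X","l4"): F(1,6),
      ("Y","l1"): F(0),    ("Y","l2"): F(1,6),  ("Y","l3"): F(1,6),  ("Y","l4"): F(1,3)}

q = {k: (p[k] + pp[k]) / 2 for k in p}  # strict convex combination

# q again induces sigma: the marginal constraint is linear ...
assert all(q[("X", l)] + q[("Y", l)] == sigma[l] for l in labels)
# ... and the support of q contains the supports of both p and p':
assert all(q[k] > 0 for k in q if p[k] > 0 or pp[k] > 0)
```

The signal \(\pi ^q\) built from such a q thus has the largest possible set of posterior histories among \(\{\pi ^{p''} : p'' \in P(\sigma )\}\) and is, by Proposition 9.5, least informative.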
10 Conclusion
We consider a group of receivers who share a common prior on a finite state space and investigate (i) the inducible distributions of posterior belief profiles and (ii) the informativeness of signals. The sender can restrict attention to particular classes of signals without loss of generality. In particular, any distribution over posterior belief profiles can be induced by a language-independent signal. Moreover, any individually minimal signal can be transformed into an equivalent LIS.
Extending Kamenica and Gentzkow (2011) by allowing for multiple receivers and private communication imposes further constraints on inducible distributions over posterior belief profiles, so that Bayes plausibility is no longer a sufficient condition. We formulate the additional conditions in the form of a linear system of equations that needs to have a non-negative solution. These conditions, together with Bayes plausibility, are necessary and sufficient.
We define informativeness in terms of knowledge about the true posterior history. For every signal there is a language-independent signal that is not more informative. Any element in the relative interior of the set of all language-independent signals which induce a particular distribution belongs to the set of least informative signals.
One potential extension of the model would be to allow for distributions with countably infinite support. While some of our results would survive this extension, it is not immediately clear, for example, how one would define minimal or individually minimal signals, since their current definitions rely on distributions having finite support. Therefore, this provides an interesting topic for future research.
Notes
This is also known as the martingale property.
This is a well-known example, e.g. Arieli et al. (2021) consider a sender who wishes to maximize the distance between two receivers’ beliefs.
All these papers were developed independently from ours and vice versa and written roughly around the same time.
Countably infinite support might be important in some information design applications. We would like to suggest the extension of our results to distributions with countably infinite support as an interesting direction for future research.
In this paper, we are explicit about the set of messages that are available to sender. Thus, in our results we will provide appropriate conditions on the sets \(S_i\) whenever needed.
Recall that \(\Delta (X)\) is defined as the set of distributions over X with finite support and note that if \(\lambda\) is such that there is no s with \(\lambda =\lambda ^{\pi ,s}\), then the right-hand side of (2) is 0.
Koessler et al. (2022) adopt a similar definition for inducible distributions called splittings and employ it to characterize Bayes–Nash equilibria of an information design game with multiple senders.
Observe that this is no contradiction to the proof of Proposition 3.1: there we used that any fixed message induces the same posterior under every signal that sends it with positive probability. Here, message x induces posterior \(\left( 1/2,1/2\right)\) under \(\pi\) but \(\left( 1/4,3/4\right)\) under \(\pi '\).
Observe that Proposition 6.5 can be reformulated to allow for a finite set S. In particular, given a finite S, for \(\pi \in \Pi (S)\) with \(\sigma ^\pi \in \Sigma (S)\) and \(\text {supp}\left( \sigma ^\pi \right) \subseteq S\) it holds that \(\Pi ^\ell \left( \sigma ^\pi \right)\) is non-empty and convex.
The relation \(\sim\) is reflexive, symmetric, and transitive, and is therefore an equivalence relation.
It is implied by the proof of Lemma A.2 in Kerman et al. (2023).
Other papers have also pointed this out and provided examples, see, e.g. Arieli et al. (2021).
Note that \(0\in \mathbb R_+\).
Morris (2020) provides an alternative proof for the no trade result that also applies to a finite state space.
Note that an alternative definition of \(Q^\pi _i\) is to employ \(\Lambda ^\pi\) also as the domain. However, since the (first-order) posterior does not contain any information about higher order beliefs (which is encoded in the messages), two posterior correspondences that are different according to our definition of \(Q^\pi _i\) could be the same under the alternative one; for example, \(Q^\pi _1\) and \(Q^{\pi '}_1\) in Example 8.6 would be equal.
References
Allon, G., Drakopoulos, K., & Manshadi, V. (2021). Information inundation on platforms and implications. Operations Research, 69, 1784–1804.
Alonso, R., & Camara, O. (2016). Bayesian persuasion with heterogeneous priors. Journal of Economic Theory, 165, 672–706.
Alonso, R., & Câmara, O. (2016). Persuading voters. American Economic Review, 106, 3590–3605.
Arieli, I., Babichenko, Y., Sandomirskiy, F., & Tamuz, O. (2021). Feasible joint posterior beliefs. Journal of Political Economy, 129, 2546–2594.
Aumann, R.J., Maschler, M.B., & Stearns, R.E. (Eds.), (1995). Repeated games with incomplete information. MIT Press.
Aumann, R. J. (1976). Agreeing to disagree. The Annals of Statistics, 4, 1236–1239.
Beauchêne, D., Li, J., & Li, M. (2019). Ambiguous persuasion. Journal of Economic Theory, 179, 312–365.
Bergemann, D., & Morris, S. (2016). Bayes correlated equilibrium and the comparison of information structures in games. Theoretical Economics, 11, 487–522.
Blackwell, D. (1953). Equivalent comparisons of experiments. The Annals of Mathematical Statistics, 24, 265–272.
Boleslavsky, R., & Cotton, C. (2015). Grading standards and education quality. American Economic Journal: Microeconomics, 7, 248–279.
Brooks, B., Frankel, A., & Kamenica, E. (2022). Information hierarchies. Econometrica, 90, 2187–2214.
Burdzy, K., & Pal, S. (2019). Contradictory predictions. ArXiv preprint arXiv:1912.00126.
Burdzy, K., & Pitman, J. (2020). Bounds on the probability of radically different opinions. Electronic Communications in Probability, 25, 1–12.
Cichomski, S., & Osekowski, A. (2021). The maximal difference among expert’s opinions. Electronic Journal of Probability, 26, 1–17.
Ganuza, J. J., & Penalva, J. S. (2010). Signal orderings based on dispersion and the supply of private information in auctions. Econometrica, 78, 1007–1030.
Gentzkow, M., & Kamenica, E. (2016). Competition in persuasion. The Review of Economic Studies, 84, 300–322.
Green, J. R., & Stokey, N. L. (2022). Two representations of information structures and their comparisons. Decisions in Economics and Finance, 45, 541–547.
He, K., Sandomirskiy, F., & Tamuz, O. (2022). Private private information. Working Paper.
Hintikka, J. (1962). Knowledge and belief: an introduction to the logic of the two notions. Cornell University Press.
Ichihashi, S. (2019). Limiting sender's information in Bayesian persuasion. Games and Economic Behavior, 117, 276–288.
Inostroza, N., & Pavan, A. (2023). Adversarial coordination and public information design. Working paper available at SSRN 4531654.
Kamenica, E., & Gentzkow, M. (2011). Bayesian persuasion. American Economic Review, 101, 2590–2615.
Kerman, T. T., Herings, P. J. J., & Karos, D. (2023). Persuading sincere and strategic voters. Journal of Public Economic Theory.
Koessler, F., Laclau, M., & Tomala, T. (2022). Interactive information design. Mathematics of Operations Research, 47, 153–175.
Levy, G., Moreno de Barreda, I., & Razin, R. (2021). Feasible joint distributions of posteriors: a graphical approach. Working Paper.
Li, C. (2017). A model of Bayesian persuasion with transfers. Economics Letters, 161, 93–95.
Mathevet, L., & Taneva, I. (2022). Organized information transmission. CEPR Discussion Paper No. DP16959.
Mathevet, L., Perego, J., & Taneva, I. (2020). On information design in games. Journal of Political Economy, 128, 1370–1404.
Milgrom, P., & Stokey, N. (1982). Information, trade and common knowledge. Journal of Economic Theory, 26, 17–27.
Morris, S. (2020). No trade and feasible joint posterior beliefs. Working Paper.
Mostagir, M., & Siderius, J. (2022). Learning in a post-truth world. Management Science, 68, 2860–2868.
Osborne, M. J., & Rubinstein, A. (1994). A course in game theory (pp. 67–68). MIT Press.
Rick, A. (2013). The benefits of miscommunication. Working Paper.
Sobel, J. (2014). On the relationship between individual and group decisions. Theoretical Economics, 9, 163–185.
Taneva, I. (2019). Information design. American Economic Journal: Microeconomics, 11, 151–185.
Wang, Y. (2013). Bayesian persuasion with multiple receivers. Working Paper.
Ziegler, G. (2020). Adversarial bilateral information design. Working Paper.
Acknowledgements
We thank Andrés Perea, Alex Smolin, Ina Taneva, and Elias Tsakas, and the participants of TARK XVIII, 18th European Meeting on Game Theory (SING16), 32nd Stony Brook International Conference on Game Theory, Corvinus University Game Theory Seminar, EEA-ESEM Congress 2022, 13th Conference on Economic Design, and 31st European Workshop on Economic Theory (EWET2023) for their comments. We would also like to thank the anonymous referees for their suggestions.
Funding
Open access funding provided by Corvinus University of Budapest. Dominik Karos acknowledges funding by the ERC, Project Number 747614. Toygar T. Kerman acknowledges funding by the Hungarian National Research, Development and Innovation Office, Project Number K-143276.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Herings, P. J. J., Karos, D., & Kerman, T. T. Belief inducibility and informativeness. Theory and Decision, 96, 517–553 (2024). https://doi.org/10.1007/s11238-023-09963-7