Abstract
This paper develops a model of belief influence through communication in an exogenous social network. The network is weighted and directed, and it enables individuals to listen to others’ opinions about some exogenous parameter of interest. Agents use Bayesian updating rules. The weight of each link is exogenously given, and it specifies the quality of the corresponding information flow. We explore the effects of the network on the agents’ first-order beliefs about the parameter and investigate the aggregation of private information in large societies. We begin by characterizing an agent’s limiting beliefs in terms of some entropy-based measures of the conditional distributions available to him from the network. Our results on consensus and correctness of limiting beliefs are in consonance with some of the literature on opinion influence under non-Bayesian updating rules. First, we show that the achievement of a consensus in the society is closely related to the presence of prominent agents who are able to crucially change the evolution of other agents’ opinions over time. Second, we show that the correct aggregation of private information is facilitated when the influence of the prominent agents is not very high.
Notes
To fix ideas, consider, e.g., the sort of environments that are commonly modeled as beauty contest games.
The parameter could correspond to some economic, social, or political variable. Examples include the profitability of an investment, the effects of some public policy, the ideology of a certain politician, or whether a certain social movement will spread.
At a more intuitive level, the network describes exogenously given conduits through which the agents listen to others speak about their private learning. As a motivating example, consider a group of investors deciding their investment in a collective fund. Each investor begins with some priors about the potential profitability of the fund and collects over time some further information by studying privately a number of characteristics of the fund. In addition, through communication, each investor has, in each period, some (noisy) access to the private analyses of the fund features made by other investors.
Since de Condorcet’s (1785) seminal essay, the problem of whether a group of agents who have dispersed information will be able to aggregate their pieces of information and reach a correct consensus has been the focus of a large body of mathematical and philosophical work.
Thus, in this paper, we are not interested in the rich strategic interactions present in a sender–receiver game.
In contrast to other measures extensively used in decision theory, such as Blackwell’s (1953) ordering, the power measure induces complete orders over sets of information structures.
Including some exogenous decay in the flow of information across connections has been common in the economic literature on networks since the seminal papers by Jackson and Wolinsky (1996) and by Bala and Goyal (2000). An interesting feature of our model is that the presence of decay can be described very precisely in terms of the power measures of sources and links.
This can be naturally interpreted as agent \(j\) being able, as time evolves, to convince agent \(i\) to share his views about the uncertain parameter.
For applications, this approach seems very useful in those cases where observables could be used to estimate distributions over signals and messages. In these cases, the power measures proposed in this paper can be used as a proxy to describe the strength of connections in networks. Some recent empirical papers (Banerjee et al. 2013, 2014) have obtained estimations of the strength of connections in particular social communication networks.
In the DeGroot model, agents update their beliefs by averaging their neighbors’ beliefs according to some exogenous weights that describe the intensity of the links between the agents. While a major advantage of these models lies in their tractability, common features with the present paper are that the weights of the links are exogenous and constant over time, and that the induced belief-revision processes are stationary. A classical contribution in this literature is DeMarzo et al. (2003), who propose a network-based explanation for the emergence of “unidimensional” opinions.
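To illustrate the contrast with our Bayesian setting, the DeGroot averaging rule can be sketched in a few lines; the listening weights below are purely hypothetical.

```python
import numpy as np

# Minimal sketch of DeGroot updating with hypothetical weights: each agent's
# opinion is replaced every period by a weighted average of all opinions.
W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.1, 0.1, 0.8]])        # row-stochastic listening weights
beliefs = np.array([1.0, 0.0, 0.5])    # initial opinions about the parameter

for _ in range(200):                   # iterate b_{t+1} = W b_t
    beliefs = W @ beliefs

# With W strongly connected and aperiodic, opinions converge to a consensus
# equal to a weighted average of the initial opinions, the weights being the
# left unit eigenvector of W.
print(np.round(beliefs, 4))
```

Note how the stationarity of the weights, a feature shared with our model, turns the belief-revision process into a simple linear iteration.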
In a setting without communication among the agents, Cripps et al. (2008) show that (approximate) common learning of the parameter is attained when signals are sufficiently informative and the sets of signals are finite. They assume that the agents start with common priors and ask whether each agent not only assigns sufficiently high probability to some given parameter value but also to the event that each other agent assigns high probability to such a value, and so on, ad infinitum.
Thus, we do not consider ex-ante probabilistic assessments that the agents could make over the histories underlying their beliefs, as we do not explore their higher-order beliefs. Importantly, the result of common learning attainment by Cripps et al. (2008) mentioned in footnote 12 requires that the sets of signals and messages be finite. This is not surprising since they assume that each agent is able to keep track of the higher-order beliefs of all agents about the signals each of them is receiving at each period. Clearly, this approach is less appealing when one considers a society where the number of its members is large. In fact, the argument given by Rubinstein (1989) in his celebrated e-mail game suggests that common learning of the true parameter is precluded with arbitrarily large societies.
In other words, this strand of the literature uses an ex-post perspective to regard parameter values as being correct while we use an ex-ante viewpoint. Our model can be regarded as an attempt to introduce Bayesian updating rules into the DeGroot framework of opinion influence and evolution of first-order beliefs. Accordingly, as in the approach pursued by DeMarzo et al. (2003), and by Golub and Jackson (2010), our notion of correctness asks whether the network structure allows for the aggregation of the decentralized sources of private information.
As mentioned earlier, the crucial difference is that the amount of information transmitted in our model is not endogenously chosen but is exogenously given by the description of sources and links.
Although the parameter space is assumed to be finite, the extension of our main results to a compact, but not necessarily finite, parameter space would only change sums to integrals in the appropriate formulae.
For example, Acemoglu et al. (2009) show that, under mild assumptions, Bayesian updating from signals does not necessarily lead to agreement about the true parameter value. This result challenges the classical justification for the common priors premise.
Nevertheless, to ease the technical details and notational burden, the result in Lemma 1 and all our examples are presented for the common priors case.
By construction, the information that the agent receives through his source does not include any information that he can receive from other agents in the society.
Nevertheless, our model also includes the possibility that the agents do obtain full information about \(\theta \) using their sources. Following the terminology of sender–receiver games, this is the extreme case described by a completely separating information source.
We assume that \(\left| S\right| =\left| M\right| =\left| \varTheta \right| \) in order to allow both an information source and a directed link for full information disclosure.
In principle, our description of message vectors captures a situation where each agent receives messages from each other agent in the society. Nevertheless, the specification of a link \(\varPsi _{ij}\) will determine the degree of informativeness of the messages \(m_{ijt}\) that flow through it. In some cases, the corresponding degree of informativeness may be null, which is interpreted as if there is actually no directed link from agent \(i\) to agent \(j\) and, therefore, as if \(i\) receives no message whatsoever from \(j\).
For a discussion both of (a) the challenges implied by this type of informational requirements in social situations and of (b) the restrictiveness of the assumption of common priors and common knowledge of the true data-generating processes, see, e.g., Acemoglu and Ozdaglar’s (2011) excellent survey of Bayesian and non-Bayesian models of opinion influence in social networks.
Thus, agents \(i\) and \(j\) commonly know the main features of the link that connects them but such information remains unknown to any other agent.
Recall that, although the conditional distributions associated with \(\varPhi _{j}\) are known by an agent \(i\) who has a link \(\varPsi _{ij}\), the particular signal realizations \(s_{jt}\) remain \(j\)’s private information.
See, e.g., the classical sender–receiver framework introduced by Crawford and Sobel (1982).
Using the terminology of sender–receiver games, this is the extreme case described by a completely separating message protocol \(\varSigma _{ij}\).
Definition 1 follows the convention \(0 \log 0 = 0\), which is justified by continuity.
The following conventions are used: \(0 \log (0/0)=0\) and, based on continuity arguments, \(0 \log (0/a)=0\) and \(a \log (a/0)=\infty \).
In particular, the relative entropy is not symmetric, and it does not satisfy the triangle inequality either.
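A minimal numerical sketch of these conventions and of the asymmetry of the relative entropy, using hypothetical distributions:

```python
import math

# Shannon entropy and relative entropy under the stated conventions:
# 0 log 0 = 0, 0 log(0/a) = 0, and a log(a/0) = infinity.
def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

def relative_entropy(p, q):
    d = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue                   # 0 log(0/q) = 0 by convention
        if qi == 0.0:
            return math.inf            # a log(a/0) = infinity
        d += pi * math.log(pi / qi)
    return d

p = [0.5, 0.5, 0.0]
q = [0.25, 0.25, 0.5]
# D(p||q) is finite while D(q||p) is infinite: the measure is not symmetric.
print(relative_entropy(p, q), relative_entropy(q, p))
```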
The technical arguments behind these claims appear in the proof of Lemma 1.
Note that the information that an agent \(i\) receives from another agent \(j\) through two different paths \(\gamma _{ij}, \gamma ^{\prime }_{ij} \in \varGamma _{ij}[\varPsi ]\) refers only to the information provided by the same source \(\varPhi _{j}\) to agent \(j\). Thus, the information that flows through any of these paths does not include any information attached to the sources of the agents located along any of the two paths. Suppose, e.g., that \({\mathbb {P}}(\gamma _{ij})>{\mathbb {P}}(\gamma ^{\prime }_{ij})\). This corresponds intuitively to a situation where the information about \(j\)’s private learning that flows through \(\gamma ^{\prime }_{ij}\) is relatively more affected by decay (as formally described in Lemma 1) than the information that flows through \(\gamma _{ij}\). Thus, \(\gamma _{ij}\) and \(\gamma ^{\prime }_{ij}\) would provide \(i\) with two different Bayesian updating processes about \(j\)’s private learning from \(\varPhi _{j}\). Yet, since these two processes cannot be combined to obtain a more informative updating process, we choose to focus only on the path \(\gamma _{ij}\).
More formally, \(\big \{q^{h^{t}_{i}}_{i}(\theta )\big \}^{\infty }_{t=1}\) is a bounded martingale with respect to the (conditional) distribution on \(\varTheta \), which is induced by the priors \(p_{i},\,i \in N\), and the conditional distributions \(\phi ^{\theta }_{i},\,\widehat{\psi }^{\theta }_{ij}\), for \(i,j \in N\).
Again, for each value of the parameter \(\theta \), the sequence of random variables \(\big \{q^{h^{t}}_{\mathrm{ob}}(\theta )\big \}^{\infty }_{t=1}\) is a bounded martingale so that the external observer’s posteriors converge almost surely.
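The martingale property invoked here can be checked numerically in a two-parameter, two-signal example (all numbers hypothetical): the expectation of the next-period posterior, taken over signal realizations, returns the current posterior.

```python
# Bayesian posteriors form a martingale: averaging the updated posterior
# over the signal distribution recovers the prior (hypothetical numbers).
prior = {"theta1": 0.4, "theta2": 0.6}
phi = {"theta1": {"s1": 0.7, "s2": 0.3},   # signal dists conditional on theta
       "theta2": {"s1": 0.2, "s2": 0.8}}

def posterior(p, s):
    num = {t: phi[t][s] * p[t] for t in p}
    z = sum(num.values())
    return {t: v / z for t, v in num.items()}

# marginal probability of each signal under the prior
marg = {s: sum(phi[t][s] * prior[t] for t in prior) for s in ("s1", "s2")}
expected = sum(marg[s] * posterior(prior, s)["theta1"] for s in ("s1", "s2"))
assert abs(expected - prior["theta1"]) < 1e-12   # martingale property
```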
For instance, Doob’s (1949) consistency theorem.
There are different formulations of asymptotic learning in social contexts. For the benchmark proposed in this paper, asymptotic learning would require that, as the number of agents tends to infinity, the average of the posteriors converges almost surely to some beliefs that place probability one on the true parameter value. If a consensus occurs, then the average of the limiting beliefs trivially coincides with the consensus beliefs. As a consequence, the sure-convergence criterion used in our notion of correct limiting beliefs would imply the almost-sure convergence required for asymptotic learning.
This corresponds formally to a completely separating \(\varPhi _{j}\) and/or \(\varSigma _{ij}\). In intuitive terms, it describes situations where agent \(j\) learns from his source without any noise and/or transmits directly to agent \(i\) the signals that he observes, instead of the (noisy) messages.
For the heterogeneous priors case, an analogous upper bound on \({\mathbb {P}}(\varPsi _{ij})\) that depends on both entropies \(H(p_{i})\) and \(H(p_{j})\) can be derived. We do not provide the details since doing so only yields a more sophisticated mathematical expression which, however, conveys no further intuition.
Following the terminology of sender–receiver games, this case corresponds to a pooling message protocol \(\varSigma _{ij}\). Notice that this extreme case can be alternatively obtained if we simply exclude the possibility of network connections.
The interpretation is that agents \(j\) are able to influence agents \(i\)’s opinions so that all of them assign positive probability in the long run to the same parameter values that agents \(j\) considered with positive probability based solely on their sources.
The set of networks for which some \(\varTheta ^{*}_{i}\) is not a singleton has Lebesgue measure zero in the set of all possible networks.
It can be easily verified that, for any given priors \(p_{j}\), lower values of \(Q(p_{i},p_{j})\) are associated with agent \(i\)’s priors that are close to the uniform case \(\overline{p}_{i}(\theta )=1/L\) for each \(\theta \in \varTheta \).
Higher values of \(E_{\widehat{\psi }_{ij}}[H(q^{m^{1}_{ij}}_{i}[\widehat{\gamma }_{ij}])]\) are associated with posteriors that put large probabilities on a few parameter values.
Since \(Q(p_{j},p_{k})\) increases with \(H(p_{j})\), the message here is that it is easier for an agent to influence others when he begins with priors that put relatively large probability weights on a small number of parameter values. A natural interpretation, compared to priors closer to the uniform case, is that the agent begins with “strong opinions” about which parameter values are most likely. If, in addition, the network does not allow this agent to hear other agents who have strong beliefs in favor of different parameter values, then we would obtain the interpretation that this is a “stubborn agent,” hardly influenced by others in the social group.
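The entropy comparison behind this remark is easy to verify with hypothetical priors: concentrated (“strong-opinion”) priors have strictly lower entropy than priors close to the uniform case.

```python
import math

# Shannon entropy with the convention 0 log 0 = 0 (hypothetical priors).
def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

uniform = [0.25, 0.25, 0.25, 0.25]   # close to "no opinion": maximal entropy
strong  = [0.85, 0.05, 0.05, 0.05]   # "strong opinions": low entropy
print(entropy(uniform), entropy(strong))
```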
The information centrality measure was introduced by Stephenson and Zelen (1989) with the motivation that information flows through a social network. This measure is specified as the harmonic average of the distance between a given agent and any other agent.
A typical formulation of a decay centrality measure for undirected social networks would only consider, in our benchmark, the positive component \(\sum _{\left\{ i \in N \, : \, \widehat{\gamma }_{ij} \in \varGamma _{ij}[\varPsi ]\right\} } {\mathbb {P}}(\widehat{\gamma }_{ij})\).
Their definition of belief correctness also requires that some external observer aggregates the pieces of information initially held by the agents.
This feature contrasts sharply with the insights from the DeGroot model, wherein extreme opinions are smoothed out as time evolves.
Although Proposition 2 formally provides this insight only for the specific case of a star network, given the logic behind the result, the message that it conveys is robust under more general network structures in which prominent agents enjoy positions with high centrality according to the measure \(C_i\).
References
Acemoglu, D., Chernozhukov, V., Yildiz, M.: Fragility of Asymptotic Agreement Under Bayesian Learning, mimeo (2009)
Acemoglu, D., Como, G., Fagnani, F., Ozdaglar, A.: Opinion fluctuations and disagreement in social networks. Math. Oper. Res. 38(1), 1–27 (2013)
Acemoglu, D., Dahleh, M.A., Lobel, I., Ozdaglar, A.: Bayesian learning in social networks. Rev. Econ. Stud. 78(4), 1201–1236 (2011)
Acemoglu, D., Ozdaglar, A.: Opinion dynamics and learning in social networks. Dyn. Games Appl. 1(1), 3–49 (2011)
Acemoglu, D., Ozdaglar, A., ParandehGheibi, A.: Spread of (mis)information in social networks. Games Econ. Behav. 70, 194–227 (2010)
Azomahou, T.T., Opolot, D.C.: Beliefs Dynamics in Communication Networks. UNU-MERIT Working Paper (2014)
Bala, V., Goyal, S.: A noncooperative model of network formation. Econometrica 68(5), 1181–1229 (2000)
Banerjee, A., Chandrasekhar, A.G., Duflo, E., Jackson, M.O.: The diffusion of microfinance. Science 341, 1–7 (2013)
Banerjee, A., Chandrasekhar, A.G., Duflo, E., Jackson, M.O.: Gossip: Identifying Central Individuals in a Social Network, mimeo (2014)
Billingsley, P.: Probability and Measure, 3rd edn. Wiley, New York (1995)
Blackwell, D.: Equivalent comparisons of experiments. Ann. Math. Stat. 24, 265–272 (1953)
Cabrales, A., Gossner, O., Serrano, R.: Entropy and the value of information for investors. Am. Econ. Rev. 103, 360–377 (2013)
Campbell, A.: Signaling in social network and social capital formation. Econ. Theory 57, 303–337 (2014)
Crawford, V.P., Sobel, J.: Strategic information transmission. Econometrica 50(6), 1431–1451 (1982)
Cripps, M.W., Ely, J.C., Mailath, G.J., Samuelson, L.: Common learning. Econometrica 76(4), 909–933 (2008)
Cripps, M.W., Ely, J.C., Mailath, G.J., Samuelson, L.: Common learning with intertemporal dependence. Int. J. Game Theory 42(1), 55–98 (2013)
de Condorcet, N.C.: Essai sur l’Application de l’Analyse a la Probabilite des Decisions Rendues a la Pluralite des Voix. Imprimerie Royale, Paris (1785)
DeGroot, M.H.: Reaching a consensus. J. Am. Stat. Assoc. 69(345), 118–121 (1974)
DeMarzo, P., Vayanos, D., Zwiebel, J.: Persuasion bias, social influence, and unidimensional opinions. Q. J. Econ. 118(3), 909–968 (2003)
Doob, J.L.: Application of the theory of martingales. Le Calcul des Probabilités et ses Applications. Colloques Internationaux du Centre National de la Recherche Scientifique, vol. 13, pp. 23–27. Centre National de la Recherche Scientifique, Paris (1949)
Golub, B., Jackson, M.O.: Naïve learning in social networks and the wisdom of crowds. Am. Econ. J. Microecon. 2(1), 112–149 (2010)
Heifetz, A.: Comment on consensus without common knowledge. J. Econ. Theory 70, 273–277 (1996)
Jackson, M.O., Rodríguez-Barraquer, T., Tan, X.: Social capital and social quilts: network patterns of favor exchange. Am. Econ. Rev. 102(5), 1857–1897 (2012)
Jackson, M.O., Wolinsky, A.: A strategic model of social and economic networks. J. Econ. Theory 71, 44–74 (1996)
Koessler, F.: Common knowledge and consensus with noisy communication. Math. Soc. Sci. 42, 139–159 (2001)
Parikh, R., Krasucki, P.: Communication, consensus, and knowledge. J. Econ. Theory 52(1), 178–189 (1990)
Rubinstein, A.: The electronic mail game: strategic behavior under ‘almost common knowledge’. Am. Econ. Rev. 79(3), 385–391 (1989)
Savage, L.J.: The Foundations of Statistics. Dover reprint, New York, 1972 (1954)
Sciubba, E.: Asymmetric information and survival in financial markets. Econ. Theory 25, 353–379 (2005)
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948)
Steiner, J., Stewart, C.: Communication, timing, and common learning. J. Econ. Theory 146(1), 230–247 (2011)
Stephenson, K., Zelen, M.: Rethinking centrality: methods and examples. Soc. Netw. 11, 1–37 (1989)
A companion paper appears in Emile Borel and the Notion of Strategy: Ninetieth Anniversary under the title “Evolution of Beliefs in Networks.” This project has benefited greatly from very useful conversations with Rabah Amir, Luciana MoscosoBoedo, Alejandro Manelli, Larry Samuelson, and Myrna Wooders. For helpful comments and suggestions, I am also grateful to two anonymous referees, an editor, and seminar audiences at CIDE and ITAM.
Appendix
Proof of Lemma 1

(a)
Consider a social network \(\varPsi \). Take two different agents \(i,j \in N\) and a directed link \(\varPsi _{ij} \in \varPsi \) from agent \(i\) to agent \(j\). Suppose that agents \(i\) and \(j\) begin with some common priors \(p\). Using the definition of power of a directed link in (3), we have
$$\begin{aligned} {\mathbb {P}}(\varPsi _{ij})&= \sum _{M} \psi _{ij}(m) D \big ( q^{m}_{i} \, \big \Vert \, p \big )\nonumber \\&= \sum _{M} \psi _{ij}(m) \sum _{\varTheta } q^{m}_{i}(\theta ) \log \frac{q^{m}_{i}(\theta )}{p(\theta )}\nonumber \\&= \sum _{M} \psi _{ij}(m) \sum _{\varTheta } \frac{\psi ^{\theta }_{ij}(m) p(\theta )}{\psi _{ij}(m)} \log \frac{\psi ^{\theta }_{ij}(m)}{\psi _{ij}(m)}\nonumber \\&= \sum _{\varTheta } \sum _{M} p(\theta ) \sum _{S} \sigma ^{s}_{ij}(m) \phi ^{\theta }_{j}(s) \log \frac{\sum \nolimits _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi ^{\theta }_{j}(s^{\prime })}{\sum \nolimits _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi _{j}(s^{\prime })}. \end{aligned}$$(11)Now, by applying, for each given \(\theta \in \varTheta \) and each given \(m \in M\), the log-sum inequality to the expression in (11) above, we obtain
$$\begin{aligned} {\mathbb {P}}(\varPsi _{ij}) \le \sum _{\varTheta } \sum _{M} p(\theta ) \sum _{S} \sigma ^{s}_{ij}(m) \phi ^{\theta }_{j}(s) \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}. \end{aligned}$$(12)On the other hand, using the definition of power of a source in (2), we have
$$\begin{aligned} {\mathbb {P}}(\varPhi _{j})&= \sum _{S} \phi _{j}(s) D \big ( q^{s}_{j} \, \big \Vert \, p \big )\nonumber \\&= \sum _{S} \phi _{j}(s) \sum _{\varTheta } q^{s}_{j}(\theta ) \log \frac{q^{s}_{j}(\theta )}{p(\theta )}\nonumber \\&= \sum _{S} \phi _{j}(s) \sum _{\varTheta } \frac{\phi ^{\theta }_{j}(s) p(\theta )}{\phi _{j}(s)} \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}\nonumber \\&= \sum _{\varTheta } \sum _{S} p(\theta ) \phi ^{\theta }_{j}(s) \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}. \end{aligned}$$(13)By combining the inequality in (12) with the expression in (13) above, we obtain
$$\begin{aligned} {\mathbb {P}}(\varPsi _{ij})&\le \sum _{\varTheta } \sum _{M} p(\theta ) \sum _{S} \sigma ^{s}_{ij}(m) \phi ^{\theta }_{j}(s) \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}\\&= \sum _{\varTheta } \sum _{S} p(\theta ) \phi ^{\theta }_{j}(s) \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}\left[ \sum _{M} \sigma ^{s}_{ij}(m) \right] \\&= \sum _{\varTheta } \sum _{S} p(\theta ) \phi ^{\theta }_{j}(s) \log \frac{\phi ^{\theta }_{j}(s)}{\phi _{j}(s)}={\mathbb {P}}(\varPhi _{j}), \end{aligned}$$as stated. Moreover, by combining the expressions in Eqs. (11) and (13), we obtain
$$\begin{aligned} {\mathbb {P}}(\varPsi _{ij}) = {\mathbb {P}}(\varPhi _{j}) +R(\varSigma _{ij}), \end{aligned}$$(14)where
$$\begin{aligned} R(\varSigma _{ij}):= \sum _{\varTheta }p(\theta ) \sum _{M}\sum _{S}\sigma ^{s}_{ij}(m)\phi ^{\theta }_{j}(s) \log \frac{\phi _{j}(s)\sum \nolimits _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi ^{\theta }_{j}(s^{\prime })}{\phi ^{\theta }_{j}(s) \sum \nolimits _{S}\sigma ^{s^{\prime }}_{ij}(m) \phi _{j}(s^{\prime })}. \end{aligned}$$Since \(\sum _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi ^{\theta }_{j}(s^{\prime })\) gives us the probability of agent \(i\) receiving message \(m\) from agent \(j\) conditional on the parameter value being \(\theta \), while \(\sum _{S}\sigma ^{s^{\prime }}_{ij}(m) \phi _{j}(s^{\prime })\) gives the corresponding unconditional probability, it follows that \(R(\varSigma _{ij}) \le 0\) for any message protocol \(\varSigma _{ij}\). Now, note that the message protocol \(\varSigma _{ij}\), associated with the directed link \(\varPsi _{ij}\), allows agent \(i\) to learn fully the signal that agent \(j\) observes if and only if \(\varSigma _{ij}\) completely separates all the signal realizations \(s \in S\) available to agent \(j\). Without loss of generality, \(\varSigma _{ij}\) completely separates all the signal realizations in \(S\) if and only if \(\sigma ^{s_{l}}_{ij}(m_{l})=1\) for each \(l \in \left\{ 1,\ldots ,L\right\} \). In this case, for each \(\theta \in \varTheta \), we obtain
$$\begin{aligned}&\sum _{M} \sum _{S} \sigma ^{s}_{ij}(m) \phi ^{\theta }_{j}(s) \log \frac{\phi _{j}(s)\sum \nolimits _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi ^{\theta }_{j}(s^{\prime })}{\phi ^{\theta }_{j}(s)\sum \nolimits _{S} \sigma ^{s^{\prime }}_{ij}(m) \phi _{j}(s^{\prime })}\\&\qquad \qquad \qquad = \sum ^{L}_{l=1} \phi ^{\theta }_{j}(s_{l})\log \frac{\phi _{j}(s_{l})\phi ^{\theta }_{j}(s_{l})}{\phi ^{\theta }_{j}(s_{l})\phi _{j}(s_{l})}=0 \iff R(\varSigma _{ij})=0. \end{aligned}$$Therefore, from the expression in (14), we obtain that the message protocol \(\varSigma _{ij}\) allows agent \(i\) to fully learn about the signal observed by agent \(j\) if and only if \({\mathbb {P}}(\varPsi _{ij})={\mathbb {P}}(\varPhi _{j})\).
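The inequality \({\mathbb {P}}(\varPsi _{ij}) \le {\mathbb {P}}(\varPhi _{j})\), and the equality under a completely separating protocol, can be checked numerically. The sketch below uses hypothetical two-by-two distributions and computes each power as the expected relative entropy between posteriors and the common prior, which coincides with the mutual information between the parameter and the observed variable.

```python
import numpy as np

# Numerical sanity check of Lemma 1(a) with hypothetical numbers: the power
# of a directed link cannot exceed the power of the underlying source,
# because the message protocol is a garbling of the signals.
p = np.array([0.5, 0.5])              # common priors over Theta
phi = np.array([[0.8, 0.2],           # phi[theta, s]: signal distributions
                [0.3, 0.7]])
sigma = np.array([[0.9, 0.1],         # sigma[s, m]: noisy message protocol
                  [0.2, 0.8]])
psi = phi @ sigma                     # psi[theta, m]: message distributions

def power(prior, cond):
    # Expected relative entropy of posteriors with respect to the prior,
    # i.e., the mutual information between parameter and realization.
    marg = prior @ cond
    terms = prior[:, None] * cond * np.log(cond / marg)
    return float(terms[cond > 0].sum())

power_source = power(p, phi)          # P(Phi_j)
power_link = power(p, psi)            # P(Psi_ij)
assert power_link < power_source      # strict: sigma is not separating

# With a completely separating protocol (the identity), the powers coincide.
assert abs(power(p, phi @ np.eye(2)) - power_source) < 1e-12
```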

(b)
The proof of part (b) uses exactly the same arguments given above for part (a). The only difference is that the role played in (a) by the source \(\varPhi _{j}\) is now played by the directed link \(\varPsi _{kj}\). All the required formal expressions replicate those used in (a), adapted to the appropriate formulae. Therefore, we forego a formal statement. \(\square \)
Proof of Theorem 1
Consider a given social network \(\varPsi \) and take an agent \(i \in N\). For a history \(h^{t}_{i} \in H_{i}\), let \(\alpha (s;h^{t}_{i})\) be the number of periods in which agent \(i\) has observed signal \(s\) up to period \(t\) and let \(\beta _{j}(m;h^{t}_{i})\) be the number of periods in which agent \(i\) has received message \(m\) from agent \(j\) (through the directed path that transmits the highest amount of information from \(j\) to \(i\)) up to period \(t\). Consider a history \(h^{t}_{i} \in H_{i}\) and a given \(\theta \in \varTheta \). From the expression derived in (6), we obtain
Since observed frequencies approximate distributions, i.e., \(\lim \nolimits _{t \rightarrow \infty } \alpha (s;h^{t}_{i})/t= \phi _{i}(s)\) and \(\lim \nolimits _{t \rightarrow \infty } \beta _{j}(m;h^{t}_{i})/t= \widehat{\psi }_{ij}(m)\), we have
Therefore, studying the convergence of \(q^{h^{t}_{i}}_{i}(\theta )\) reduces to studying whether each term, for \(\theta ^{\prime } \ne \theta \),
exceeds one or not. By taking logs, this is equivalent to studying whether, for each \(\theta ^{\prime } \ne \theta \), the expression
exceeds zero or not. Then, using the definitions of \(G_{i}\) and of \(F_{ij}\) in (7) and in (8), respectively, we obtain that

(i)
\(\lim _{t \rightarrow \infty } q^{h^{t}_{i}}_{i}(\theta )=0\) if
$$\begin{aligned} G_{i}(\theta )+ \sum _{j \ne i} F_{ij}(\theta )< G_{i}(\theta ^{\prime })+ \sum _{j \ne i} F_{ij}(\theta ^{\prime }) \ \ \text {for some}\,\, \theta ^{\prime } \ne \theta \ \ \iff \ \ \theta \notin \varTheta ^{*}_{i}; \end{aligned}$$ 
(ii)
\(\lim _{t \rightarrow \infty } q^{h^{t}_{i}}_{i}(\theta )=1\) if
$$\begin{aligned}&G_{i}(\theta )+ \sum _{j \ne i} F_{ij}(\theta )> G_{i}(\theta ^{\prime })+ \sum _{j \ne i} F_{ij}(\theta ^{\prime }) \ \ \text {for each} \\&\quad \quad \theta ^{\prime } \in \varTheta \setminus \{\theta \} \ \ \iff \ \ \varTheta ^{*}_{i}=\{\theta \}; \end{aligned}$$ 
(iii)
$$\begin{aligned} \lim _{t \rightarrow \infty }q^{h^{t}_{i}}_{i}(\theta )= \left[ 1+\sum _{\theta ^{\prime } \in \varTheta ^{*}_{i} \setminus \{\theta \}}\frac{p_{i}(\theta ^{\prime })}{p_{i}(\theta )} \right] ^{-1}=\frac{p_{i}(\theta )}{\sum \nolimits _{\theta ^{\prime } \in \varTheta ^{*}_{i}} p_{i}(\theta ^{\prime })} \end{aligned}$$
if \(\varTheta ^{*}_{i}\) is not a singleton and \(\theta \in \varTheta ^{*}_{i}\).
\(\square \)
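A small numerical sketch of case (iii), with hypothetical priors and maximizing set: when \(\varTheta ^{*}_{i}\) is not a singleton, the limiting beliefs simply renormalize the priors over \(\varTheta ^{*}_{i}\) and vanish elsewhere.

```python
# Limiting beliefs in Theorem 1(iii) with hypothetical numbers:
# renormalize the priors over Theta*_i and put zero weight outside it.
priors = {"theta1": 0.2, "theta2": 0.3, "theta3": 0.5}
theta_star = {"theta1", "theta3"}          # assumed set of maximizers

total = sum(priors[t] for t in theta_star)
limits = {t: (priors[t] / total if t in theta_star else 0.0) for t in priors}
print(limits)
```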
Proof of Theorem 2
Consider a given social network \(\varPsi \), and take two different agents \(i,j \in N\) and the directed path \(\widehat{\gamma }_{ij} \in \varGamma _{ij}[\varPsi ]\) which conveys the highest amount of information from agent \(j\) to agent \(i\). Using the definition of power of a directed path in (5), we have
Using Definition 7, it follows that agent \(j\) influences agent \(i\) if and only if the following two conditions are satisfied:

(i)
\(\varTheta ^{*}_{i}=\varTheta ^{*}_{j}\).
This condition is satisfied if and only if, for any \(\theta \in \varTheta _{j}\),
Since we know that, for each \(\theta \in \varTheta _{i},\,G_{i}(\theta ) \ge G_{i}(\theta ^{\prime })\) for each \(\theta ^{\prime } \in \varTheta \), the above condition is equivalent to requiring that, for any \(\theta _{j} \in \varTheta _{j}\) and any \(\theta _{i} \in \varTheta _{i}\),
By adding the identity obtained in (15) to both sides of the inequality above, we obtain the following necessary and sufficient condition for \(\varTheta ^{*}_{i}=\varTheta ^{*}_{j}\):
which coincides with the condition stated in (7a).

(ii)
\(\varTheta ^{*}_{j}=\varTheta _{j}\).
This condition is satisfied if and only if, for any \(\theta _{j} \in \varTheta _{j}\),
By adding the identity obtained in (15) (upon changing the agents’ subscripts to consider \({\mathbb {P}}(\widehat{\gamma }_{jk})\)), where \(k \in N_{j}\), to both sides of the inequality above, we obtain the condition
for each \(k \in N_{j}\), which, by rearranging terms, coincides with the condition stated. \(\square \)
Proof of Proposition 1
First, note that the application of the result in Theorem 1 to the external observer directly implies that, for each history \(h_{t},\,\lim _{t \rightarrow \infty } q^{h^{t}}_{\mathrm{ob}}(\theta ^{*})\) \(=\!1\) if and only if \(\text {arg} \max _{\theta \in \varTheta } \sum _{i \in N}G_{i}(\theta )\) is a singleton with \(\text {arg} \max _{\theta \in \varTheta } \sum _{i \in N}G_{i}(\theta )\) \(=\left\{ \theta ^{*}\right\} \).
Second, suppose that a consensus is achieved in the society in a way such that, for some \(\theta ^{*} \in \varTheta \), we have \(\lim _{t \rightarrow \infty } q^{h^{t}_{i}}_{i}(\theta ^{*})=1\) for each history \(h^{t}_{i}\), for each agent \(i \in N\). Then, by using the result in Theorem 1, it follows that, for each agent \(i \in N\),
which, by summing over all agents, implies
Therefore, provided that the consensus described above is achieved in the society, if
holds, then the condition in (16) above implies that \(\sum _{i \in N} G_{i}(\theta ^{*})\ge \sum _{i \in N} G_{i}(\theta )\) for each \(\theta \in \varTheta \), with strict inequality if \(\theta \ne \theta ^{*}\). As a consequence, correct limiting beliefs are attained in the society. \(\square \)
Proof of Proposition 2
Consider the center-directed star network \(\varPsi ^{s}=\) \(\left\{ \varPsi _{j1} \, : \, j \in N \setminus \left\{ 1\right\} \right\} \) and suppose that a consensus is achieved in a way such that, from the results of Theorem 1, for each agent \(j \in N \setminus \left\{ 1\right\} \), we have \(\varTheta ^{*}_{j}=\varTheta _{1}=\left\{ \theta ^{*}\right\} \) for some given parameter value \(\theta ^{*} \in \varTheta \). Let us define, for \(\theta \in \varTheta \setminus \left\{ \theta ^{*}\right\} ,\,\overline{\eta }(\theta ):= \max _{j \in N \setminus \left\{ 1\right\} } \left[ G_{j}(\theta )-G_{j}(\theta ^{*}) \right] \) and \(\underline{\eta }(\theta ):= \min _{j \in N \setminus \left\{ 1\right\} } \left[ G_{j}(\theta )-G_{j}(\theta ^{*}) \right] \).
First, note that by applying the log-sum inequality for each given \(m \in M\), we know that
Since we are supposing that the central agent is able to influence each other agent \(j\) so that all agents’ limiting beliefs put probability one on \(\theta ^{*}\) being the true parameter value (i.e., \(\varTheta ^{*}_{j}=\left\{ \theta ^{*}\right\} \) for each \(j \in N \setminus \left\{ 1\right\} \)), it must be the case that \(F_{j1}(\theta ^{*})-F_{j1}(\theta ) > G_{j}(\theta )-G_{j}(\theta ^{*})\) for each parameter value \(\theta \in \varTheta \setminus \left\{ \theta ^{*}\right\} \) and each agent \(j \in N \setminus \left\{ 1\right\} \). It then follows from the inequality in (17) above that \(G_{1}(\theta ^{*})-G_{1}(\theta ) > G_{j}(\theta )-G_{j}(\theta ^{*})\) for each \(\theta \in \varTheta \setminus \left\{ \theta ^{*}\right\} \) and each \(j \in N \setminus \left\{ 1\right\} \). This condition is equivalent to requiring \(G_{1}(\theta ^{*})-G_{1}(\theta ) > \overline{\eta }(\theta )\) for each \(\theta \in \varTheta \setminus \left\{ \theta ^{*}\right\} \).
Secondly, correct limiting beliefs put probability one on some parameter value \(\widehat{\theta } \ne \theta ^{*}\) being the true one if and only if
A straightforward sufficient condition for the requirement above to be satisfied is
Therefore, since a consensus in which the influence of the central agent leads all agents to put probability one on \(\theta ^{*}\) being the truth necessarily requires that \(G_{1}(\theta ^{*})-G_{1}(\theta ) > \overline{\eta }(\theta )\) for any \(\theta \ne \theta ^{*}\), it follows that correct limiting beliefs are not attained if, for some \(\widehat{\theta } \ne \theta ^{*}\), the following condition is satisfied.
The proof is completed by noting that \(\overline{\eta }(\widehat{\theta })>\underline{\eta }(\widehat{\theta })\) is satisfied by construction and that, in addition, we can always find some large enough finite \(n^{*} \ge 1\) such that, for each \(n \ge n^{*}\), the sufficient condition above holds. \(\square \)
Jiménez-Martínez, A. A model of belief influence in large social networks. Econ Theory 59, 21–59 (2015). https://doi.org/10.1007/s00199-015-0861-3
Keywords
 Communication networks
 Opinion influence
 Bayesian updating rules
 Private signals
 Private messages
 Consensus
 Correct limiting beliefs
JEL Classification
 D82
 D83
 D85