Truth and Conformity on Networks

Abstract

Typically, public discussions of questions of social import exhibit two important properties: (1) they are influenced by conformity bias, and (2) the influence of conformity is expressed through social networks. We examine how social learning on networks proceeds under the influence of conformity bias. In our model, heterogeneous agents express public opinions, and those expressions are driven by the competing priorities of accuracy and of conformity to one's peers. Agents learn, by Bayesian conditionalization, from private evidence from nature and from the public declarations of other agents. Our key findings are that networks whose configurations of social relationships sustain a diversity of opinions empower honest communication and the reliable acquisition of true beliefs, and that the networks that do this best turn out to be those which are both less centralized and less connected.


Figures 1–7 (images omitted).

Notes

  1. See Zollman (2007, 2009, 2013), Mayo-Wilson et al. (2013), and Grim et al. (2013).

  2. See Heesen (2017), Kitcher (1990, 1993), and Strevens (2003, 2013).

  3. See O'Connor et al. (2017), and Bruner and O'Connor (2016).

  4. See Asch (1955), Bond and Smith (1996), and Morgan and Laland (2012).

  5. See Rosenstock et al. (2017) for an analysis of the specific conditions under which the effect described in Zollman (2007) obtains.

  6. Note that an agent's truth-seeking payoff for a declaration is based on her expectation that it corresponds to the true state of the world—agents do not know, and do not find out, whether their assessments are accurate.

  7. In past work on social networks, the network has been used to represent both patterns of transmission of social influence and patterns of transmission of information. Here, we focus on the effect of patterns of social influence, and so the network structure captures the former but not the latter; we follow Banerjee (1992), Bikhchandani et al. (1992), and Smith and Sørensen (2000) in assuming that individual actions are observable to all individuals in the community.

  8. Regular networks are those in which all nodes are of the same degree, that is, have the same number of edges. Here, this corresponds to all agents having the same number of neighbors.

  9. An ignorance prior is a probability distribution assigning equal probability to all possibilities. Our proofs will require only non-degenerate priors, and our simulations will employ a range of priors.

  10. In the case of payoff ties, the agent chooses among her best responses at random.

  11. Note that individuals can observe the proportions of a declaring agent's neighbors making each declaration. We take this assumption to be plausible under some, but not all, conditions. In the context of public discourse, one can often observe, at least qualitatively, the social influences acting on other individuals. That is, when someone makes a declaration in favor of Caesar, we typically have a fair idea of whether her social network is predominantly pro-Caesar or pro-Pompey, some mixture of the two, and so on, and we use this information in assessing whether her assertion is more likely to be socially or epistemically motivated. That said, future research exploring the effects of limiting observability of the network will be valuable.

  12. See "Appendix A" for the mathematical details.

  13. All proofs can be found in "Appendix A".

  14. For an excellent exposition of the classic results, see Smith and Sørensen (2000).

  15. In our simulation plots (Fig. 4), we mark the performance of learning in the absence of any conformity bias—that is, of unimpeded Bayesian learning—with a dashed line. We continue to compare our results to this control case in further plots (Figs. 5, 6, 7), marking it each time with a dashed line.

  16. Our observations are computationally verified for the following distributions of types and evidence: the distribution of truth-seeking orientations in the population was varied from Beta(1,5) (corresponding to high conformity), through uniform, to Beta(5,1) (corresponding to high truth-seeking); and the distributions of evidence induced by each state of the world were varied between the linear case described before and Gaussian distributions with means of 1 and -1 and variances of 1, 10, and 100.

  17. Note that we have omitted the normalizing term from the definition of the influence of a declaration.

  18. Given the assumption of symmetry of expected informativeness across \(N_\theta =1/2\), we have that \(\varvec{I}(0)=\varvec{I}(1)\), and, more generally, that \(\varvec{I}(1/2-c)=\varvec{I}(1/2+c)\) for \(c \in [0,1/2]\).
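As a concrete illustration, the type and evidence distributions described in note 16 can be sampled as follows. This is only a sketch: the variable names and seed are ours, and the Gaussian variance of 10 is one of the several settings listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth-seeking orientations alpha_i in [0, 1]:
# Beta(1,5) skews conformist, Beta(5,1) skews truth-seeking.
alphas_conformist = rng.beta(1, 5, size=1000)
alphas_uniform = rng.uniform(0, 1, size=1000)
alphas_truthseeking = rng.beta(5, 1, size=1000)

# Gaussian evidence: state theta induces mean +1, state not-theta mean -1,
# here with variance 10.
evidence_theta = rng.normal(loc=1, scale=np.sqrt(10), size=1000)
evidence_not_theta = rng.normal(loc=-1, scale=np.sqrt(10), size=1000)

print(alphas_conformist.mean(), alphas_truthseeking.mean())
```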

References

  1. Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193(5), 31–35.


  2. Banerjee, A. V. (1992). A simple model of herd behavior. The Quarterly Journal of Economics, 107, 797–817.


  3. Barrett, J., Skyrms, B., & Mohseni, A. (2017). Self-assembling networks. British Journal for the Philosophy of Science.

  4. Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100, 992–1026.


  5. Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies using Asch's (1952b, 1956) line judgment task. Psychological Bulletin, 119(1), 111–137.


  6. Bruner, J., & O’Connor, C. (2016). Power, bargaining, and collaboration. In T. Boyer, C. Mayo-Wilson & M. Weisberg (eds.) Scientific collaboration and collective knowledge (pp. 1–25). Oxford University Press.

  7. Goeree, J. K., Riedl, A., & Ule, A. (2009). In search of stars: Network formation among heterogeneous agents. Games and Economic Behavior, 67(2), 445–466.


  8. Goyal, S. (2007). Connections: An introduction to the economics of networks (Vol. 1). Princeton University Press.

  9. Grim, P., Singer, D. J., Fisher, S., Bramson, A., Berger, W. J., Reade, C., et al. (2013). Scientific networks on data landscapes: Question difficulty, epistemic success, and convergence. Episteme, 10, 441–464.


  10. Heesen, R. (2017). Communism and the incentive to share in science. Philosophy of Science, 84(4), 698–716.


  11. Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5.


  12. Kitcher, P. (1993). The advancement of science: Science without legend, objectivity without illusions. Oxford: Oxford University Press.


  13. Landemore, H. E. (2012). Why the many are smarter than the few and why it matters. Journal of Public Deliberation, 8, 7.


  14. Mayo-Wilson, C., Zollman, K. J. S., & Danks, D. (2013). Wisdom of crowds versus groupthink: Learning in groups and in isolation. International Journal of Game Theory, 42(3), 695–723.


  15. Mercier, H., & Landemore, H. (2012). Reasoning is for arguing: Understanding the successes and failures of deliberation. Political Psychology, 33, 243–258.


  16. Morgan, T. J. H., & Laland, K. N. (2012). The biological bases of conformity. Frontiers in Neuroscience, 6, 87.

  17. O'Connor, C., Bright, L., & Bruner, J. (2017). The evolution of intersectional oppression. Philosophy of Science.

  18. Rosenstock, S., Bruner, J., & O’Connor, C. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.


  19. Smith, L., & Sørensen, P. (2000). Pathological outcomes of observational learning. Econometrica, 68(2), 371–398.


  20. Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100, 55–79.


  21. Strevens, M. (2013). Herding and the quest for credit. Journal of Economic Methodology, 20(1), 19–34.


  22. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442.


  23. Zollman, K. J. S. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.


  24. Zollman, K. J. S. (2009). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.


  25. Zollman, K. J. S. (2010). Social structure and the effects of conformity. Synthese, 172, 317–340.


  26. Zollman, K. J. S. (2013). Network epistemology: Communication in epistemic communities. Philosophy Compass, 8(1), 15–27.



Author information

Corresponding author

Correspondence to Aydin Mohseni.


Appendices

Appendix A: Mathematical Appendix

Learning from Others’ Declarations

When agent i declares \(x=\theta \), we know that it was her best response. As previously mentioned, this implies that the following condition holds:

$$\begin{aligned} \alpha _i(2P_i(\theta )-1)&+ (1-\alpha _i)(2N_i(\theta )-1)>0 . \qquad \qquad \qquad \qquad ({\dagger }) \end{aligned}$$
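In code, the best-response condition \((\dagger)\) amounts to a simple sign check. The following is a minimal sketch (function and variable names are ours, not from the paper's code):

```python
def declares_theta(alpha, p_theta, n_theta):
    """Return True if declaring theta is the agent's best response.

    alpha:   truth-seeking orientation in [0, 1]
    p_theta: the agent's posterior belief that theta is the true state
    n_theta: proportion of her neighbors currently declaring theta
    """
    payoff = alpha * (2 * p_theta - 1) + (1 - alpha) * (2 * n_theta - 1)
    return payoff > 0

# A strongly truth-seeking agent follows her evidence against her neighbors...
print(declares_theta(alpha=0.9, p_theta=0.8, n_theta=0.0))  # True
# ...while a strong conformist with the same evidence does not.
print(declares_theta(alpha=0.1, p_theta=0.8, n_theta=0.0))  # False
```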

We plug agent i’s (publicly unknown) posterior belief \(P(\theta |\sigma )\) into (\(\dagger \)) to get the elaborated condition

$$\begin{aligned} \alpha _i \left( \dfrac{2}{1+\dfrac{1-\bar{P}}{\bar{P}}\dfrac{1-\sigma }{\sigma }}-1 \right) +(1-\alpha _i)(2N_i(\theta )-1)>0 \qquad \qquad \qquad \qquad ({\ddagger }) \end{aligned}$$

where \(\bar{P}\) denotes the (publicly known) prior \(P(\theta |\varvec{h}^t)\). We then compute the likelihood of agent i’s declaration \(\theta \), given our public prior, as follows.

Let \(\phi \) denote the left-hand term of our elaborated condition \((\ddagger )\), under which our agent would have declared \(\theta \), so that \(\mathbb {I}[\phi >0]\) is its indicator function. We then get the likelihood of the declaration given each possible state of the world,

$$\begin{aligned} P(x=\theta |\theta ,\bar{P}) =&\int _{A}\int _{\Sigma }\mathbb {I}[\phi>0] \mathrm {d}F_{\theta }(\sigma ) \mathrm {d}G(\alpha ), \\ P(x=\theta |\lnot \theta ,\bar{P}) =&\int _{A}\int _{\Sigma }\mathbb {I}[\phi >0] \mathrm {d}F_{\lnot \theta }(\sigma ) \mathrm {d}G(\alpha ). \end{aligned}$$

From these we obtain the posterior belief of the other agents in the network in light of agent i’s declaration of \(\theta \) using Bayes’ rule

$$\begin{aligned} P(\theta |x=\theta ,\bar{P})=\left( 1+\dfrac{\int _{A}\int _{\Sigma }\mathbb {I}[\phi>0] \mathrm {d}F_{\lnot \theta }(\sigma ) \mathrm {d}G(\alpha )}{\int _{A}\int _{\Sigma }\mathbb {I}[\phi >0] \mathrm {d}F_{\theta }(\sigma ) \mathrm {d}G(\alpha )}\dfrac{1-\bar{P}}{\bar{P}}\right) ^{-1} \end{aligned}$$

which yields the new public belief.
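The double integrals above can be approximated by Monte Carlo sampling over types \(\alpha \sim G\) and signals \(\sigma \sim F_\theta, F_{\lnot\theta}\). The sketch below uses illustrative Gaussian signal distributions and a uniform type distribution, which are assumptions for this example rather than the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000

p_bar = 0.5        # public prior P(theta | h^t)
n_theta = 0.5      # proportion of the declarer's neighbors declaring theta

alpha = rng.uniform(0, 1, M)         # types alpha ~ G (uniform, illustrative)
sig_theta = rng.normal(1, 1, M)      # signals under theta
sig_not = rng.normal(-1, 1, M)       # signals under not-theta

def phi(sigma):
    # Posterior P(theta | sigma) via the likelihood ratio of the two
    # Gaussians, then the left-hand side of condition (double-dagger).
    lr = np.exp(-0.5 * (sigma - 1) ** 2) / np.exp(-0.5 * (sigma + 1) ** 2)
    post = 1 / (1 + (1 - p_bar) / p_bar / lr)
    return alpha * (2 * post - 1) + (1 - alpha) * (2 * n_theta - 1)

lik_theta = np.mean(phi(sig_theta) > 0)   # P(x=theta | theta, p_bar)
lik_not = np.mean(phi(sig_not) > 0)       # P(x=theta | not-theta, p_bar)

# Public posterior after hearing the declaration x = theta (Bayes' rule):
q_new = 1 / (1 + (lik_not / lik_theta) * (1 - p_bar) / p_bar)
print(q_new)
```

With evenly split neighbors and a flat prior, the declaration is informative exactly insofar as truth-seeking agents follow their evidence, so the public belief moves toward \(\theta\).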

Proof of Proposition 1

There are two states of the world \(\theta \) and \(\lnot \theta \). Without loss of generality, suppose \(\theta \) to be the true state of the world. Let \(q(\varvec{h}^t)=P(\theta |\varvec{h}^t)\) be the public belief and \(\varvec{h}^t\) the history of declarations up to time t. As is well-known, the likelihood ratio

$$\begin{aligned} \ell (\varvec{h}^t)\equiv \frac{1-q(\varvec{h}^t)}{q(\varvec{h}^t)} \end{aligned}$$

is a martingale conditional on \(\theta \). Let X be the finite set of declarations. For any given declaration \(x\in X\),

$$\begin{aligned} \ell (\varvec{h}^t,x)= \ell (\varvec{h}^t)\frac{P(x|\varvec{h}^t, \lnot \theta )}{P(x|\varvec{h}^t, \theta )} \end{aligned}$$

and thus the martingale property follows:

$$\begin{aligned} E\big [\ell (\varvec{h}^{t+1})|\theta \big ]=\sum _{x \in X} \ell (\varvec{h}^t,x)P(x|\varvec{h}^t,\theta ) = \sum _{x \in X} \ell (\varvec{h}^t)P(x|\varvec{h}^t,\lnot \theta )=\ell (\varvec{h}^t). \end{aligned}$$

By Theorem 3(b) of Smith and Sørensen (2000), when evidence is unbounded, individuals almost surely converge in belief to the true state. \(\square \)
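The martingale identity above can be checked numerically: averaging the next-period likelihood ratios against the \(\theta\)-conditional declaration probabilities returns the current ratio. The declaration likelihoods below are illustrative placeholders, not derived from the model:

```python
q = 0.7            # current public belief in theta
ell = (1 - q) / q  # likelihood ratio l(h^t)

# Illustrative declaration likelihoods under each state (each sums to 1).
p_x_given_theta = {"theta": 0.8, "not_theta": 0.2}
p_x_given_not = {"theta": 0.3, "not_theta": 0.7}

# After declaration x, the ratio is multiplied by that declaration's own
# likelihood ratio, as in the display above.
ell_next = {x: ell * p_x_given_not[x] / p_x_given_theta[x]
            for x in p_x_given_theta}

# Expectation of the next-period ratio, conditional on theta:
expected = sum(ell_next[x] * p_x_given_theta[x] for x in p_x_given_theta)
print(expected, ell)  # the two agree: ell is a martingale under theta
```

The cancellation is exactly the one in the proof: the \(\theta\)-conditional weights cancel against the denominators, leaving \(\sum_x \ell\, P(x|\varvec{h}^t,\lnot\theta) = \ell\).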

We show that convergence in beliefs implies convergence in declarations. In particular, we show that convergence in beliefs implies that the community's belief in the true state is bounded from below over time. We then observe, by simple probabilistic arguments, that given sufficient time the community will almost surely arrive at a consensus state in which all individuals declare the true state. Finally, we show that, having arrived at such a consensus with individual beliefs in the true state appropriately bounded from below, the community must remain at this consensus forever.

Proof of Corollary 2

Let q and \(q'\) denote the public belief before and after hearing a declaration, respectively. Consider a focal agent i having received her evidence from Nature on a given turn. Let \(P_i\) denote the focal agent’s posterior belief \(P(\theta |\sigma , \varvec{h}^t)\), and suppose that this agent declared \(x=\lnot \theta \). It is straightforward to show that if the population could observe the focal agent’s posterior, the public belief would be precisely equal to her posterior

$$\begin{aligned} q'(\lnot \theta , q,N_i(\theta ), P_i)=P_i. \qquad \qquad \qquad (1) \end{aligned}$$

Let \(\Pi (\cdot |\lnot \theta , q, N_i(\theta ))\) be the distribution over the focal agent's posterior belief, given her declaration of \(\lnot \theta \), the public belief q at the time she selected her action, and the proportion \(N_i(\theta )\) of her neighbors declaring \(\theta \). By (1) we can write

$$\begin{aligned} q'(\lnot \theta , q, N_i(\theta ))=\int _0^1 P_i\; \mathrm {d} \Pi (P_i|\lnot \theta , q, N_i(\theta )). \end{aligned}$$

We can thus interpret the public belief as the public's expectation of the focal agent's posterior. Since the public belief almost surely converges to certainty on the truth, for almost all trajectories \(\{q_t\}_{t=0}^{+\infty }\) of the public belief and for all \(\epsilon >0\), there exists a time \(T_{\epsilon }\) such that \(q_t>1-\epsilon \) whenever \(t>T_{\epsilon }\). That is, there is a time after which the public belief in \(\theta \) is always at least \(1-\epsilon \). Now choose \(\epsilon = 1/2\).

With probability 1, at some point along the trajectory after \(T_{\epsilon }\), all agents will be declaring \(\theta \). To see this, let \(\lambda \) be the probability that all N agents choose declarations in sequence, that each has an \(\alpha \) sufficiently high that she declares the state she believes to be more likely regardless of her neighbors' declarations, and that each receives evidence such that her posterior assigns higher probability to \(\theta \). However small \(\lambda \) might be, it exceeds 0. Hence, the probability that this event never occurs goes to zero as \(t \rightarrow + \infty \).

Suppose, for the sake of contradiction, that at some point after \(T_\epsilon \) an agent goes against the consensus and declares \(\lnot \theta \). Then her posterior must satisfy

$$\begin{aligned} P_i\le - \frac{1-\alpha _i}{2\alpha _i} +\frac{1}{2}. \end{aligned}$$

But then we get that \(E[P_i|\lnot \theta , \cdot ] \le 1/2\). That is, her belief in \(\theta \) was less than \(1/2\), which contradicts the fact that her belief was bounded from below. Hence, no agent can deviate from the consensus after time \(T_\epsilon \), and convergence in belief implies convergence in declaration. \(\square \)

Lemma 9

(Monotonicity of informativeness in influence) The informativeness of a declaration about a state is monotonically increasing in its influence on the public belief.

Proof

Without loss of generality, let the focal agent declare \(x=\theta \). We show that the informativeness of her declaration, \(H(q\,|\,q(\theta )=1/2)-H(q\,|\,x=\theta )\), is monotonically increasing in its influence, \(q(\theta |x=\theta )-q(\theta )\).

First, we unpack the definition of informativeness, temporarily omitting the assumption of the maximal entropy prior \(q(\theta )=1/2\), to get

$$\begin{aligned} H(q) - H(q(\theta |x=\theta ))&= \text {E}[-\text {ln}(q(\theta |x=\theta ))] - \text {E}[-\text {ln}(q(\theta ))] \\&= \text {E}[\text {ln}(q(\theta ))-\text {ln}(q(\theta |x=\theta ))] \\&=\text {E} \left[ \text {ln}\left( \frac{q(\theta )}{ q(\theta |x=\theta ) } \right) \right] \\&=q(\theta )\cdot \text {ln}\left( \frac{q(\theta )}{ q(\theta |x=\theta ) } \right) +q(\lnot \theta )\cdot \text {ln}\left( \frac{q(\lnot \theta )}{ q(\lnot \theta |x=\theta ) } \right) \end{aligned}$$

Now, let \(A \equiv q(\theta )\) and \(B \equiv q(\theta |x=\theta )\), so that \(C \equiv B-A\) denotes the influence of the declaration \(x=\theta \). Then we can re-write the preceding expression as

$$\begin{aligned} A\cdot \text {ln}\left( \frac{A}{A+C} \right) +(1-A)\cdot \text {ln}\left( \frac{1-A}{1-(A+C)} \right) \end{aligned}$$

Taking the partial derivative with respect to influence C, and solving for when it is positive—i.e., for when informativeness is increasing—yields

$$\begin{aligned} A+C -1> 0 \quad \text {or} \quad B > 1/2. \end{aligned}$$

And when \(q(\theta )=1/2\), we have that \(B=q(\theta |x=\theta )\ge 1/2\), and so informativeness is monotonically increasing in influence, as desired. \(\square \)
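Numerically, with \(A = q(\theta) = 1/2\), the expression above is indeed increasing in the influence C. A small check (the grid of C values is illustrative):

```python
import numpy as np

def informativeness(A, C):
    """Entropy reduction of a declaration with prior A = q(theta) and
    influence C = q(theta | x=theta) - q(theta), per the expression above."""
    B = A + C  # posterior q(theta | x = theta)
    return A * np.log(A / B) + (1 - A) * np.log((1 - A) / (1 - B))

# With the maximal-entropy prior A = 1/2, informativeness rises with influence.
print([round(informativeness(0.5, C), 4) for C in (0.1, 0.2, 0.3)])
```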

We will show that \(q'(\theta ,N_i(\theta )')<q'(\theta ,N_i(\theta ))\) whenever \(N_i(\theta )'>N_i(\theta )\). From this it follows straightforwardly that, given \(N_i(\theta ) \in [0,1]\), the most influential declaration occurs just when \(N_i(\theta )=0\).

To do so, consider a given focal agent i having received evidence \(\sigma \sim f_{\theta }(\sigma )\) from Nature. Let \(r= r(\sigma )\equiv P_i(\lnot \theta |\sigma )\) be one minus her private belief, \(G_{\lnot \theta }(r)\) and \(G_{\theta }(r)\) the conditional cdf’s for r, and \(g(r)\equiv \frac{dG_{\lnot \theta }}{dG_{\theta }}(r)\) the Radon-Nikodym derivative of \(G_{\lnot \theta }\) with respect to \(G_{\theta }\).

Lemma 10

\(g(r)=\frac{r}{1-r}\) almost surely.

Proof

If an agent updates her belief after observing r, it will remain unchanged. Thus from Bayes’ theorem \(r = P_i(\lnot \theta | r)= \frac{g(r)}{g(r)+1}\). \(\square \)

Lemma 11

The ratio \(\frac{G_{\lnot \theta }}{G_{\theta }}(r)\) is strictly increasing for r in the common support of \(G_{\theta }\) and \(G_{\lnot \theta }\).

Proof

Let \(r'>r\). From Lemma 10 we have that g(r) is strictly increasing, hence,

$$\begin{aligned} G_{\lnot \theta }(r)&= \int ^{r}_0 g(x)\, dG_{\theta }(x) <g(r) G_{\theta }(r). \end{aligned}$$

And thus

$$\begin{aligned} G_{\lnot \theta }(r')-G_{\lnot \theta }(r)&= \int ^{r'}_r g(x)dG_{\theta }(x) \\&>[G_{\theta }(r')-G_{\theta }(r)]g(r) \\&>[G_{\theta }(r')-G_{\theta }(r)] \frac{G_{\lnot \theta }(r)}{G_{\theta }(r)}. \end{aligned}$$

It follows that \(\frac{G_{\lnot \theta }(r')}{G_{\theta }(r')}>\frac{G_{\lnot \theta }(r)}{G_{\theta }(r)}\). \(\square \)

Proof of Proposition 3

Now, we proceed to show that \(q'(\theta ,N_i(\theta )')<q'(\theta ,N_i(\theta ))\) whenever \(N_i(\theta )'>N_i(\theta )\). Define \(q'\) to be the posterior public belief, q the prior public belief, \(N_i(\theta )\) the proportion of the focal agent’s neighbors declaring \(\theta \), and \(\Pi (\cdot |x_i,q,N_i(\theta ))\) the posterior belief over the declaring agent’s truth-seeking orientation \(\alpha _i\in [0,1]\). Then

$$\begin{aligned} q'(\theta ,N_i(\theta ))=\int _{0}^1 q'(\theta ,N_i(\theta ),\alpha _i) {\text {d}}\Pi (\alpha _i|\theta ,N_i(\theta ),q). \end{aligned}$$

For a given \(\alpha _i\) in the support of \(\Pi (\cdot |\theta ,N_i(\theta ),q)\), there exists a threshold \(\bar{r}=\bar{r}(\alpha _i,q,N_i(\theta ))\) such that the agent only selects \(x_i=\theta \) if \(r\le \bar{r}\). From Bayes’ theorem,

$$\begin{aligned} q'(\theta ,N_i(\theta ),\alpha _i) = \bigg (1+ \frac{1-q}{q} \frac{G_{\lnot \theta }(\bar{r})}{G_{\theta }(\bar{r})}\bigg )^{-1}. \end{aligned}$$

If \(\bar{r}(\alpha _i,N_i(\theta )',q) \ge \bar{r}(\alpha _i,N_i(\theta ),q)\) holds, and further holds strictly for a subset of \(\alpha _i\) with positive posterior probability, then, by Lemma 11, \(q'(\theta ,N_i(\theta )')<q'(\theta ,N_i(\theta ))\).

It can be shown that the threshold \(\bar{r}(\alpha _i, N_i(\theta ), q)\) is strictly increasing in \(N_i(\theta )\). This gives us that \(q'(\theta ,N_i(\theta )',\alpha _i)\le q'(\theta ,N_i(\theta ),\alpha _i)\). Furthermore, having assumed that \(\alpha _i\) and r have full support on [0, 1], we can find a neighborhood of \(\alpha _i=1\) with positive probability such that \(\bar{r}(\alpha _i,N_i(\theta ),q)>0\) for all \(\alpha _i\) in this neighborhood. Hence, in this neighborhood \(q'(\theta ,N_i(\theta )',\alpha _i)<q'(\theta ,N_i(\theta ),\alpha _i)\). \(\square \)

Proof of Corollary 4

We have, from Proposition 3, that \(q'(\theta ,N_i(\theta )')>q'(\theta ,N_i(\theta ))\) whenever \(N_i(\theta )'<N_i(\theta )\). It follows directly that

$$\begin{aligned} {{\,\mathrm{arg \ max}\,}}_{N_i(\theta ) \in [0,1]}q'(\theta , N_i(\theta ))=0. \end{aligned}$$

Thus, the most influential declaration is made just when \(N_i(\theta )=0\). And we have, from Lemma 9, that this is also the most informative declaration. \(\square \)

Proof of Proposition 5

On a large star network, the proportion of individuals with a single neighbor (the leaves) goes to one. So, for any proportion of the population declaring \(\theta \), almost every individual is in the minimally informative state where either \(N_i(\theta )=0\) or 1. Hence, for all \(N_\theta \in [0,1]\) and symmetric \(\varvec{I}\), \(\varvec{I}(\mathcal {G}_{star}) = \varvec{I}(0) \le \varvec{I}(\mathcal {G})\) for any connected network \(\mathcal {G}\). \(\square \)

Proof of Proposition 6

On a complete network, every individual is neighbors with every other. Hence, the proportion of an individual's neighbors declaring \(\theta \) is the same as the proportion of the population declaring \(\theta \); that is, \(N_{i}(\theta )=N_\theta \) for each i. The expected informativeness is maximized when an individual's neighbors are equally split, \(N_i(\theta )=1/2\). Thus, when exactly half the population is declaring \(\theta \), the declaration of every individual in the population is at maximal expected informativeness. Hence, no other network can be more informative in this state. That is, when \(N_\theta =1/2\), \(\varvec{I}(\mathcal {G}_{complete}) = \varvec{I}(1/2) \ge \varvec{I}(\mathcal {G})\) for any connected network \(\mathcal {G}\). \(\square \)

To show that the circle is maximally informative near consensus, we first show that, for regular networks of degree at least 2, informativeness is decreasing in degree near consensus. This implies that any regular network of degree greater than 2 is less informative than the circle. We combine this with Proposition 5, which implies that networks of degree 1 are also less informative than the circle, to show that the circle is the maximally informative regular network. Finally, using the fact that any network can be formulated as an admixture of individuals of various degrees, we derive that the circle network is maximally informative near consensus.

Lemma 12

For regular networks of degree at least 2, informativeness is decreasing in degree near consensus.

Proof

Take the derivative of the expected informativeness of a regular network \(\mathcal {G}_d\) of degree \(d\ge 2\) with respect to the proportion \(N_\theta \) of the population declaring the true state:

$$\begin{aligned} \frac{d}{dN_\theta } \left[ \varvec{I}(\mathcal {G}_{d}) \right]&= \frac{d}{dN_\theta } \left[ \sum ^d_{k=0} \left( {\begin{array}{c}d\\ k\end{array}}\right) N_\theta ^k(1-N_\theta )^{d-k} \varvec{I}\left( \frac{k}{d}\right) \right] . \end{aligned}$$

Letting \(N_\theta \) go to 0, only the constant terms of the derivative remain, and the expression simplifies to

$$\begin{aligned} \lim _{N_\theta \rightarrow 0^+} \frac{d}{dN_\theta } \left[ \varvec{I}(\mathcal {G}_{d}) \right] = d[\varvec{I}(1/d)-\varvec{I}(0)]. \end{aligned}$$

This term corresponds to the slope of the secant line connecting \(\varvec{I}(0)\) and \(\varvec{I}(1/d)\). Since \(\varvec{I}\) is an increasing function, this term must be decreasing in d. Thus, for networks of degree two and greater, informativeness is decreasing in degree near consensus. \(\square \)
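The binomial expression for \(\varvec{I}(\mathcal{G}_d)\) is easy to evaluate. The sketch below uses an illustrative informativeness function \(I(x) = \min(x, 1-x)^2\), symmetric about 1/2 as the paper assumes; the true \(\varvec{I}\) is derived from the model, so this specific shape is only an assumption for demonstration. Near consensus, the circle (d = 2) comes out more informative than higher-degree regular networks:

```python
from math import comb

def I(x):
    # Illustrative symmetric informativeness function (an assumption, not the
    # paper's derived I): peaked at 1/2, minimal at consensus.
    return min(x, 1 - x) ** 2

def expected_informativeness(d, n_theta):
    """I(G_d): expected informativeness of a degree-d regular network
    when proportion n_theta of the population declares theta."""
    return sum(comb(d, k) * n_theta**k * (1 - n_theta)**(d - k) * I(k / d)
               for k in range(d + 1))

near_consensus = 0.05
print([expected_informativeness(d, near_consensus) for d in (2, 4, 8)])
```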

Lemma 13

The circle is the maximally informative regular network near consensus.

Proof

This follows from Lemma 12 and Proposition 5, which state that a regular network of degree 2 (the circle) is more informative than any network of greater degree near consensus, and that a regular network of degree 1 is less informative than any other at any state. Taken together, they imply that, near consensus, regular networks of degree two are maximally informative among regular networks. \(\square \)

Proof of Proposition 7

Now, recall that any large connected network \(\mathcal {G}_\mu \) can be formulated as an admixture \(\mu =\left<\mu _d\right>\) of proportions of individuals of degree \(d\ge 1\), where \(\sum _d \mu _d=1\) and \(\mu _d\ge 0\). The expected informativeness of any network then is a proportion-weighted sum of the expected informativeness of the individuals of each degree contained in the network. That is, \(\varvec{I}(\mathcal {G}_\mu |N_\theta )=\sum _d \mu _d \cdot E_{N_\theta }[\varvec{I}_d]\). It follows from Lemma 13 that, near consensus, any network not entirely composed of individuals of degree two is strictly less informative than one which is in fact composed entirely of individuals of degree two. Thus, when \(N_\theta =0\) or 1, \(\varvec{I}(\mathcal {G}_{circle}) > \varvec{I}(\mathcal {G}_\mu )\) for any \(\mathcal {G}_\mu \) such that \(\mu _0 = 0\) and \(\mu _2 \ne 1\). \(\square \)

Proof of Proposition 8

It follows directly from Lemma 12 that, near consensus, the maximally and minimally informative regular networks of degree at least two are the circle and the complete network, respectively. We combine this with the fact that any large network \(\mathcal {G}_\mu \) can be formulated as an admixture \(\mu =\left<\mu _d\right>\) of regular networks of degree d, and with the linearity of expected informativeness, to conclude that the informativeness of any network is bounded above by that of the circle network and below by that of the complete network. That is, when \(N_\theta =0\) or 1, \(\varvec{I}(\mathcal {G}_{circle}) \ge \varvec{I}(\mathcal {G}_\mu ) \ge \varvec{I}(\mathcal {G}_{complete})\) for any \(\mathcal {G}_\mu \) such that \(\text {min}\{ d:\mu _d > 0 \} \ge 2\). \(\square \)

Appendix B: Simulation Code

The full R source code for our simulations can be found at: https://github.com/amohseni/Truth-and-Conformity-on-Networks.

Cite this article

Mohseni, A., Williams, C.R. Truth and Conformity on Networks. Erkenn (2019). https://doi.org/10.1007/s10670-019-00167-6
