
The importance of expertise in group decisions

Abstract

Prior to a collective binary choice, members of a group receive binary signals correlated with the better option. A larger group size may produce less accurate decisions, but expertise is everywhere beneficial. If a group accounts for correlation in signals, a relatively expert member puts an upper bound on the probability of a false belief. The bound holds for any group size and signal distribution. Furthermore, a population investing in expertise is better off cultivating a small mass of elites than adopting an egalitarian policy of education.

Notes

  1. Skill refinement is another form of expertise, but decisional ability is the focus here.

  2. A number of psychological pitfalls can also beset group decision making (e.g., Bénabou 2013; Sunstein 2005). This article emphasizes the importance of expertise in a rational setting, but groupthink can devalue its function in practice. On the other hand, Charness and Sutter (2012) cite a body of experimental evidence in which groups are less susceptible to cognitive biases than individuals in games of strategy.

  3. While Condorcet first proposed his vision for collective wisdom in 1785, Laplace offered the first mathematical proof of the hypothesis in 1812 (Ben-Yashar and Paroush 2000).

  4. On a related question, Ahn and Oliveros (2014) show the asymptotic lack of an informational advantage for either joint or separate trials when multiple outcomes are decided. If a sequence of equilibria exists for which the optimal outcome is chosen in the limit for one trial format, such a sequence exists for the other format too.

  5. This assumption is slightly stronger than signal uniqueness because it rules out signals that differ only on a set of measure zero.

  6. The priors play no important role in any result, but their equality does abstract from heterogeneous initial beliefs.

  7. Suppose the group chose one of two actions based on which binary state it believed more likely. In that case, the imprecision in this description would be harmless.

  8. Note that Proposition 3 does require n to be odd. As discussed in Sect. 2, allowing an even n with a coin flip tie-breaking rule only introduces extra cases without altering the main conclusion. In an asymptotically large group, the parity of n is unimportant if the chance of a tie vote converges to zero.

  9. Henry David Thoreau once quipped, “The mass never comes up to the standard of its best member, but on the contrary degrades itself to a level with the lowest” (Thoreau and Shepard 1961, at 4). Proposition 4 shows just the opposite—the best member elevates the lowest.

  10. This result complements Glaeser and Sunstein (2009), who present a similar conclusion with learning over a continuous, normal distribution.

  11. This formulation of the problem abstracts from the incentive to free ride on the participation of others. Some real-world settings largely solve the participation problem, as in jury duty and other organizations with committee service requirements.

  12. A diagnosis is not a binary decision, but for the sake of argument, consider the question of whether or not a patient has a particular disease.

  13. Three is the minimal number of signals required to uncover the true state. To see why, consider the general case for \(n=2\) in Table 1. If \(x=y=0\), then \(P(a_2|A)=0\), a contradiction. If \(y>0\), then \(x=0\), which implies \(y=1-p\), but then \(P(a_2|A)=P(a_2 \cap a_1|A)+P(a_2 \cap b_1|A)=0+1-p<1/2,\) again a contradiction. Lastly, if \(x>0\), then \(y=0\), which implies \(x=p\), but then both signals are identical.

References

  • Ahn DS, Oliveros S (2014) The Condorcet Jur(ies) theorem. J Econ Theory 150:841–851

  • Austen-Smith D, Banks JS (1996) Information aggregation, rationality, and the Condorcet jury theorem. Am Polit Sci Rev 90(1):34–45

  • Banerjee AV (1992) A simple model of herd behavior. Q J Econ 107(3):797–817

  • Bénabou R (2013) Groupthink: collective delusions in organizations and markets. Rev Econ Stud 80(2):429–462

  • Ben-Yashar R (2006) Information is important to Condorcet jurors. Public Choice 127(3/4):313–327

  • Ben-Yashar R (2014) The generalized homogeneity assumption and the Condorcet jury theorem. Theor Decis 77(2):237–241

  • Ben-Yashar R, Paroush J (2000) A nonasymptotic Condorcet Jury theorem. Soc Choice Welf 17(2):189–199

  • Ben-Yashar R, Zahavi M (2011) The Condorcet jury theorem and extension of the franchise with rationally ignorant voters. Public Choice 148(3/4):435–443

  • Berend D, Paroush J (1998) When is Condorcet's Jury Theorem valid? Soc Choice Welf 15(4):481–488

  • Berend D, Sapir L (2007) Monotonicity in Condorcet’s Jury Theorem with dependent voters. Soc Choice Welf 28(3):507–528

  • Berg S (1993) Condorcet’s jury theorem, dependency among jurors. Soc Choice Welf 10(1):71–83

  • Berg S (1994) Evaluation of some weighted majority decision rules under dependent voting. Math Soc Sci 28(2):71–83

  • Bikhchandani S, Hirshleifer D, Welch I (1992) A theory of fads, fashion, custom, and cultural change as informational cascades. J Polit Econ 100(5):992–1026

  • Bloom N, Eifert B, Mahajan A, McKenzie D, Roberts J (2013) Does management matter? Evidence from India. Q J Econ 128(1):1–51

  • Boland PJ (1989) Majority systems and the Condorcet Jury theorem. J R Stat Soc 38(3):181–189

  • Bronnenberg BJ, Dubé J-P, Gentzkow M, Shapiro JM (2015) Do pharmacists buy Bayer? Informed shoppers and the brand premium. Q J Econ 130(4):1669–1726

  • Charness G, Sutter M (2012) Groups make better self-interested decisions. J Econ Perspect 26(3):157–176

  • Condorcet M (1785) Essai sur l’Application de l’Analyse à la Probabilité des Décisions Rendues à la Pluralité des Voix, Paris. See I. McLean and F. Hewitt, trans., 1994

  • Congleton RD (2007) Informational limits to democratic public policy: the jury theorem, yardstick competition, and ignorance. Public Choice 132(3/4):333–352

  • Coughlan PJ (2000) In defense of unanimous jury verdicts: mistrials, communication, and strategic voting. Am Polit Sci Rev 94(2):375–393

  • Crawford VP, Sobel J (1982) Strategic information transmission. Econometrica 50(6):1431–1451

  • DeGroot MH (1974) Reaching a consensus. J Am Stat Assoc 69(345):118–121

  • Dharmapala D, McAdams RH (2003) The Condorcet Jury theorem and the expressive function of the law: a theory of informative law. Am Law Econ Rev 5(1):1–31

  • Dietrich F (2008) The premises of Condorcet’s Jury theorem are not simultaneously justified. Episteme 5(1):56–73

  • Dietrich F, List C (2004) A model of jury decisions where all jurors have the same evidence. Synthese 142(2):175–202

  • Dietrich F, Spiekermann K (2013a) Epistemic democracy with defensible premises. Econ Philos 29(1):87–120

  • Dietrich F, Spiekermann K (2013b) Independent opinions? On the causal foundations of belief formation and jury theorems. Mind 122(487):655–685

  • Dietrich F, Spiekermann K (2019) Jury theorems. In: Fricker M, Graham PJ, Henderson D, Pedersen NJLL (eds) The Routledge companion to social epistemology. http://www.franzdietrich.net/Papers/DietrichSpiekermann-JuryTheorems.pdf

  • Feddersen T, Pesendorfer W (1998) Convicting the innocent: the inferiority of unanimous jury verdicts under strategic voting. Am Polit Sci Rev 92(1):23–35

  • Gerardi D, Yariv L (2008) Costly expertise. Am Econ Rev Pap Proc 98(2):187–193

  • Glaeser EL, Sunstein CR (2009) Extremism and social learning. J Legal Anal 1(1):263–324

  • Hilger NG (2016) Why don't people trust experts? J Law Econ 59(2):293–311

  • Jackson MO, Rogers BW, Zenou Y (2017) The economic consequences of social-network structure. J Econ Lit 55(1):49–95

  • Kaniovski S (2010) Aggregation of correlated votes and Condorcet’s Jury theorem. Theor Decis 69(3):453–468

  • Kaniovski S, Zaigraev A (2011) Optimal jury design for homogeneous juries with correlated votes. Theor Decis 71(4):439–459

  • Katzner DW (1995) Participatory decision-making in the firm. J Econ Behav Organ 26(2):221–236

  • Koriyama Y, Szentes B (2009) A resurrection of the Condorcet Jury Theorem. Theor Econ 4:227–252

  • Krishna V, Morgan J (2001) A model of expertise. Q J Econ 116(2):747–775

  • Ladha KK (1992) The Condorcet Jury Theorem, free speech, and correlated votes. Am J Polit Sci 36(3):617–634

  • Ladha KK (1993) Condorcet's jury theorem in light of de Finetti's theorem: majority-rule voting with correlated votes. Soc Choice Welf 10(1):69–85

  • Ladha KK (1995) Information pooling through majority-rule voting: Condorcet’s jury theorem with correlated votes. J Econ Behav Organ 26(3):353–372

  • Laslier J-F, Weibull JW (2011) An incentive-compatible Condorcet Jury theorem. Scand J Econ 115(1):84–108

  • Levy G, Razin R (2015) Correlation neglect, voting behavior, and information aggregation. Am Econ Rev 105(4):1634–1645

  • Li H, Suen W (2004) Delegating decisions to experts. J Polit Econ 112(1):311–335

  • Lindner I (2008) A generalization of Condorcet’s Jury Theorem to weighted voting games with many small voters. Econ Theor 35(3):607–611

  • McCannon BC, Walker P (2016) Endogenous competence and a limit to the Condorcet Jury Theorem. Public Choice 169(1):1–18

  • McLennan A (1998) Consequences of the Condorcet Jury Theorem for beneficial information aggregation by rational agents. Am Polit Sci Rev 92(2):413–418

  • McMurray JC (2013) Aggregating information by voting: the wisdom of the experts versus the wisdom of the masses. Rev Econ Stud 80(1):277–312

  • Meirowitz A (2002) Informative voting and Condorcet Jury Theorems with a continuum of types. Soc Choice Welf 19(1):219–236

  • Miller NR (1986) Information, electorates, and democracy: Some extensions and interpretations of the Condorcet Jury Theorem. In: Grofman B, Owen G (eds) Information pooling and group decision making. Jai, Greenwich

  • Mukhopadhaya K (2003) Jury size and the free rider problem. J Law Econ Organ 19(1):24–44

  • Myerson RB (1998) Extended Poisson games and the Condorcet Jury Theorem. Games Econ Behav 25(1):111–131

  • Nitzan S, Paroush J (1984) The significance of independent decisions in uncertain dichotomous choice situations. Theor Decis 17(1):47–60

  • Paroush J (1998) Stay away from fair coins: a Condorcet Jury theorem. Soc Choice Welf 15(1):15–20

  • Peleg B, Zamir S (2012) Extending the Condorcet Jury Theorem to a general dependent jury. Soc Choice Welf 39(1):91–125

  • Persico N (2004) Committee design with endogenous information. Rev Econ Stud 71(1):165–191

  • Pivato M (2017) Epistemic democracy with correlated voters. J Math Econ 72:51–69

  • Rabin M, Schrag JL (1999) First impressions matter: a model of confirmatory bias. Q J Econ 114(1):37–82

  • Sah RK, Stiglitz JE (1988) Committees, hierarchies, and polyarchies. Econ J 98(391):451–470

  • Stone P (2015) Introducing difference into the Condorcet Jury Theorem. Theor Decis 78(3):399–409

  • Van Such M, Lohr R, Beckman T, Naessens JM (2017) Extent of diagnostic agreement among medical referrals. J Eval Clin Pract 23(4):870–874

  • Sunstein CR (2005) Group judgments: statistical means, deliberation, and information markets. N Y Univ Law Rev 80(3):962–1049

  • Surowiecki J (2005) The wisdom of crowds. Anchor Books, New York

  • Thoreau HD, Shepard O (1961) The heart of Thoreau’s journals, 1st edn. Dover Publications, New York

  • Wit J (1998) Rational choice and the Condorcet Jury Theorem. Games Econ Behav 22:364–376

  • Zaigraev A, Kaniovski S (2013) A note on the probability of at least \(k\) successes in \(n\) correlated binary trials. Oper Res Lett 41(1):116–120

Author information

Corresponding author

Correspondence to Alexander Lundberg.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Proposition 1

See Boland (1989). \(\square \)

Proof of Lemma 1

Assume (A1)–(A4). Denote the ordered elements of \({\mathbf {g}} \in {\mathbb {G}}_{\mathbf {n}}\) by \((g_1,g_2,\ldots ,g_{|{\mathbf {g}}|})\), where \(|{\mathbf {g}}| > 1\) is the cardinality of \({\mathbf {g}}\). Let \(\alpha _i = a_{g_i}\) and \(\beta _i = b_{g_i}\) for \(i \in \{1,\ldots ,|{\mathbf {g}}|\}\). Define \(\delta _0 = \mathbb {1}(g_1 > 1) \sum _{j=1}^{g_1-1} d_j\), where \(\mathbb {1}(\cdot )\) is the indicator function. Define \(\delta _1 = \sum _{j=1}^{g_1} d_j\). Next, recursively define \(\delta _{i+1}=\sum _{j=1}^{g_{i+1}-1} d_j - \delta _i\) for \(i \in \{1,\ldots ,|{\mathbf {g}}|-1\}\). Then for \(i, m \in \{1,\ldots ,|{\mathbf {g}}|\}\) (with \(i < m\), \(i+m \le |{\mathbf {g}}|\)),

$$\begin{aligned} P(\alpha _{i+m} \cap (\underset{j \le i}{\cap } \alpha _j)|A) = P(a_{g_{i+m}} \cap (\underset{j \le i}{\cap } a_{g_j})|A) = p- \sum _{j=g_i}^{g_{i+m}-1} d_j=p-\sum _{j=i}^{i+m-1} \delta _j. \end{aligned}$$

Similarly,

$$\begin{aligned} P(\beta _{i+m} \cap (\underset{j \le i}{\cup } \beta _j)|A) = P(b_{g_{i+m}} \cap (\underset{j \le i}{\cup } b_{g_j})|A) = \sum _{j=g_i}^{g_{i+m}-1} d_j= \sum _{j=i}^{i+m-1} \delta _j. \end{aligned}$$

Lastly,

$$\begin{aligned} \sum _{j=1}^{|{\mathbf {g}}|}\delta _j=\sum _{j=1}^{g_{|{\mathbf {g}}|}}d_j \le \sum _{j=1}^{n-1} d_j \le \sum _{j=1}^{\infty } d_j \le 1-p \end{aligned}$$

by construction. \(\square \)

Proof of Lemma 2

Assume (A1)–(A4). Note (A3) implies

$$\begin{aligned} P(\mathbf {s}_{{\mathbf{g}}}|A)=P({\bar{\mathbf{s}}}_\mathbf{g}|B) \;\; \forall {\mathbf {g}} \in {\mathbb {G}}_{\mathbf {n}}, \; {{\mathbf{s}}}_{{\mathbf{g}}} \in {\mathbb {S}}_{\mathbf {g}}. \end{aligned}$$

We seek to show

$$\begin{aligned} P({{\mathbf{s}}}_{{\mathbf{g}}}|A)=P({\bar{\mathbf{s}}}_{{\mathbf{g}}}|A) \; \forall {\mathbf {g}} \in {\mathbb {G}}_{\mathbf {n}}, \;{{\mathbf{s}}}_{{\mathbf{g}}} \in {\mathbb {S}}_{\mathbf {g}} \; \text {s.t.} \; \exists \, i,j \in {\mathbf {g}} \; \text {for which} \; s_i \ne s_j.\qquad \qquad (\hbox {L2}^\prime ) \end{aligned}$$

By (A3), (L2\(^\prime \)) implies \(P({{\mathbf{s}}}_{{\mathbf{g}}}|A)=P({{\mathbf{s}}}_{{\mathbf{g}}}|B)\).

The proof uses the following two results. First, suppressing the conditional probability notation,

$$\begin{aligned} P({{\mathbf{s}}}_{{\mathbf{g}}})=0 \;\; \text {if} \;\; \exists \, i<j<k \;\text {for which} \; s_i = s_k \ne s_j. \end{aligned}$$
(L2.A)

Second, defining \({\mathbf {g}}_j^{+}=\{i \in {\mathbf {g}}|i\ge j\}\) and \({\mathbf {g}}_j^{-}=\{i \in {\mathbf {g}}|i < j\}\),

$$\begin{aligned} \begin{aligned} P({{\mathbf{s}}}_{{\mathbf{g}}})=\delta _{j-1} \;\, \text {if} \;\, \exists j \in {\mathbf {g}} \;\; \text {s.t.} \;&(s_i=a_i \, \forall i \in {\mathbf {g}}_j^{-}) \; \text {and} \; (s_i=b_i \, \forall i \in {\mathbf {g}}_j^{+}) \\ \text {or} \;&(s_i=b_i \, \forall i \in {\mathbf {g}}_j^{-}) \; \text {and} \; (s_i=a_i \, \forall i \in {\mathbf {g}}_j^{+}), \end{aligned} \end{aligned}$$
(L2.B)

where \(\delta _{j-1}\) is constructed per Lemma 1. Since (L2\(^\prime \)) implies at least two of the signals are different, (L2.A) and (L2.B) cover all relevant cases. Suppose \(\exists \, i<j<k \;\text {for which} \; s_i = s_k \ne s_j\). Then \(P({{\mathbf{s}}}_{{\mathbf{g}}})=0\) by (L2.A). But \(s_i = s_k \ne s_j \Rightarrow {\bar{s}}_i = {\bar{s}}_k \ne {\bar{s}}_j\). Then \(P({\bar{\mathbf{s}}}_{{\mathbf{g}}})=0\) by (L2.A) again, so \(P({{\mathbf{s}}}_{{\mathbf{g}}})=P({\bar{\mathbf{s}}}_{{\mathbf{g}}})\). For the remaining sequences in the class of (L2.B), the result is immediate. \(\square \)

Proof of L2.A

Without loss of generality, suppose \(s_i=a_i\), \(s_j=b_j\), and \(s_k=a_k\). Assumption (A4) applies to the redefined system with \(i=1\), \(j=2\), and \(k=3\), per Lemma 1. Note that

$$\begin{aligned} P(\alpha _1 \cap \alpha _3) = P(\alpha _1 \cap \alpha _2 \cap \alpha _3) + P (\alpha _1 \cap \beta _2 \cap \alpha _3). \end{aligned}$$

By (A4), \(P(\alpha _1 \cap \alpha _2 \cap \alpha _3) = p - \delta _1 - \delta _2\). By another application of Lemma 1, then, \(P(\alpha _1 \cap \alpha _3)=p - \delta _1 - \delta _2\). Thus, \(P(\alpha _1 \cap \beta _2 \cap \alpha _3)=0\). \(\square \)

Proof of L2.B

The proof is by induction. Apply Lemma 1 to subgroup \(\mathbf {g+1}\) nesting \({\mathbf {g}}\) (with \(|\mathbf {g+1}|=|{\mathbf {g}}|+1\)). For \(|{\mathbf {g}}|=2\), by the application of (A4) to the redefined system, \(P(\alpha _1 \cap \beta _2)=\delta _1\). Then

$$\begin{aligned} P(\beta _1 \cap \alpha _2)=1-P(\overline{\beta _{1} \cap \alpha _{2}}) =1-P(\alpha _1 \cup \beta _2)&=1-[P(\alpha _1)+P(\beta _2)-P(\alpha _1 \cap \beta _2)]\\&=1-[p+1-p - \delta _1]\\&=\delta _1, \end{aligned}$$

where the second equality follows from De Morgan’s Law, so the induction hypothesis holds for \(|{\mathbf {g}}|=2\). For \(|{\mathbf {g}}|+1\), define \({\mathbf {g}}_j^{+}=\{i|j \le i \le |{\mathbf {g}}|\}\) and \({\mathbf {g}}_j^{-}=\{i|1 \le i < j\}\) for \(j \in {\mathbf {g}}\). Assume \(s_i=\alpha _i \; \forall i \in {\mathbf {g}}_j^{-}\) and \(s_i=\beta _i \, \forall i \in {\mathbf {g}}_j^{+}\) (the proof for the opposite case is similar). That is, consider a sequence of the form \((\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j,\beta _{j+1},\ldots , \beta _{|{\mathbf {g}}|})\) for some \(j \in \{2,3,\ldots ,|{\mathbf {g}}|\}\). Then

$$\begin{aligned} P(\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j,\beta _{j+1},\ldots , \beta _{|{\mathbf {g}}|})&=P(\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j, \beta _{j+1},\ldots ,\beta _{|{\mathbf {g}}|},\beta _{|{\mathbf {g}}|+1})\\&\quad +P(\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j,\beta _{j+1},\ldots , \beta _{|{\mathbf {g}}|},\alpha _{|{\mathbf {g}}|+1})\\&=P(\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j,\beta _{j+1},\ldots , \beta _{|{\mathbf {g}}|},\beta _{|{\mathbf {g}}|+1})\\&\quad +0 \end{aligned}$$

by (L2.A), so

$$\begin{aligned} P(\alpha _1,\alpha _2,\ldots ,\alpha _{j-1},\beta _j,\beta _{j+1},\ldots , \beta _{|{\mathbf {g}}|},\beta _{|{\mathbf {g}}|+1})=P(\alpha _1,\alpha _2,\ldots , \alpha _{j-1},\beta _j,\beta _{j+1},\ldots ,\beta _{|{\mathbf {g}}|})=\delta _{j-1}. \end{aligned}$$

\(\square \)
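
The lemma's content is easy to verify numerically. The following is a minimal sketch (not from the paper, with illustrative values of \(p\) and the \(\delta _j\)): it builds the distribution over signal profiles conditional on \(A\) from the support structure established above, generates the conditional-on-\(B\) distribution via the symmetry (A3), and checks that every non-unanimous profile is equally likely under both states.

```python
from itertools import product

p, n = 0.7, 3
delta = [0.05, 0.03]  # illustrative delta_1, delta_2; sum must stay below 1 - p

# Conditional on A: unanimous profiles plus the single-switch profiles of (L2.B).
P_A = {('a',)*n: p - sum(delta), ('b',)*n: (1 - p) - sum(delta)}
for j in range(2, n + 1):                           # switch point j in {2,...,n}
    P_A[('a',)*(j-1) + ('b',)*(n-j+1)] = delta[j-2]
    P_A[('b',)*(j-1) + ('a',)*(n-j+1)] = delta[j-2]
assert abs(sum(P_A.values()) - 1) < 1e-12

# (A3): P(s|B) = P(s-bar|A), where s-bar flips every signal.
flip = {'a': 'b', 'b': 'a'}
P_B = {s: P_A[tuple(flip[x] for x in s)] for s in P_A}

# Lemma 2: any profile containing both signals is equally likely under A and B.
for s in product('ab', repeat=n):
    if len(set(s)) > 1:
        assert abs(P_A.get(s, 0.0) - P_B.get(s, 0.0)) < 1e-12
```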

Proof of Proposition 2

Fix \(\mathbf {g'} \subseteq {\mathbf {g}} \in {\mathbb {G}}_{\mathbf {n}}\), and assume (S1) and (A1)–(A4). Without loss of generality, let \(S=A\), and suppress the conditional notation for the state of the world. Per Lemma 1,

$$\begin{aligned} P(\mu ^+|{\mathbf {g}})=P\left( \underset{i \in {\mathbf {g}}}{\cap } \alpha _i\right) \le P\left( \underset{i \in \mathbf {g'}}{\cap } \alpha _i\right) =P(\mu ^+|\mathbf {g'}), \end{aligned}$$

where the equalities follow from Lemma 2 and the inequality from \(\mathbf {g'} \subseteq {\mathbf {g}}\). Similarly,

$$\begin{aligned} P(\mu ^-|{\mathbf {g}})=P\left( \underset{i \in {\mathbf {g}}}{\cap } \beta _i\right) \le P\left( \underset{i \in \mathbf {g'}}{\cap } \beta _i\right) =P(\mu ^-|\mathbf {g'}), \end{aligned}$$

which establishes (1). For (2), note that (A4) and Lemma 2 imply

$$\begin{aligned} P(\mu ^+|{\mathbf {n}})=P\left( \underset{i \in {\mathbf {n}}}{\cap } \alpha _i\right) =P\left( \underset{i \in {\mathbf {n}}}{\cap } a_i\right) =p-\sum _{j=1}^{n-1} d_j. \end{aligned}$$

Likewise,

$$\begin{aligned} P(\mu ^-|{\mathbf {n}})=P\left( \underset{i \in {\mathbf {n}}}{\cap } b_i\right) =1-P\left( \underset{i \in {\mathbf {n}}}{\cup } a_i\right) =(1-p)-\sum _{j=1}^{n-1} d_j. \end{aligned}$$

To establish the last equality, note \(P(a_1 \cup a_2) = P(a_1) + P(a_2) - P(a_1 \cap a_2) = 2p - (p - d_1)\) by (A4), so induction with Lemma 1 confirms \(P\left( \underset{i \in {\mathbf {n}}}{\cup } a_i\right) = p + \sum _{j=1}^{n-1} d_j\). \(\square \)
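
Both parts of the proposition can be confirmed numerically with the same support structure. A minimal sketch (illustrative values; the list delta plays the role of the \(d_j\) for the full group):

```python
from itertools import combinations

p, n = 0.7, 3
delta = [0.05, 0.03]  # illustrative correlation terms with sum below 1 - p

# Joint distribution conditional on A, per the support structure of Lemma 2.
P_A = {('a',)*n: p - sum(delta), ('b',)*n: (1 - p) - sum(delta)}
for j in range(2, n + 1):
    P_A[('a',)*(j-1) + ('b',)*(n-j+1)] = delta[j-2]
    P_A[('b',)*(j-1) + ('a',)*(n-j+1)] = delta[j-2]

def unanimity(group, sig):
    """P(every member of `group` observes signal `sig` | A)."""
    return sum(pr for s, pr in P_A.items() if all(s[i] == sig for i in group))

full = tuple(range(n))
# Part (2): unanimity probabilities for the full group.
assert abs(unanimity(full, 'a') - (p - sum(delta))) < 1e-12
assert abs(unanimity(full, 'b') - (1 - p - sum(delta))) < 1e-12
# Part (1): every subgroup is weakly more likely to reach unanimity.
for r in range(1, n + 1):
    for sub in combinations(range(n), r):
        assert unanimity(sub, 'a') >= unanimity(full, 'a') - 1e-12
        assert unanimity(sub, 'b') >= unanimity(full, 'b') - 1e-12
```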

Proof of Proposition 3

For \({\mathbf {g}} \in {\mathbb {G}}_{\mathbf {n}}\) s.t. \(|{\mathbf {g}}| \in \{2x-1|x\in {\mathbb {N}}^+\}\), assume (S2), (A1)–(A4), and, without loss of generality, \(S={A}\). Apply Lemma 1, and suppress the conditional notation for the binary state. Let \(\mathbb {1}(\cdot )\) denote the indicator function. Define

$$\begin{aligned} {\mathbb {M}}_\alpha = \{ {\mathbf {s}}_{\mathbf {g}} \in {\mathbb {S}}_{\mathbf {g}} | \sum _{i \in {\mathbf {g}}} \mathbb {1}(s_i =\alpha _i) > \sum _{i \in {\mathbf {g}}} \mathbb {1}(s_i =\beta _i)\}, \end{aligned}$$

with \({\mathbb {M}}_\beta \) its complementary set. Because \(|{\mathbf {g}}|\) is odd, the two sets cover all signal profiles.

Assumption (A4) implies \(P(\cap _{i \in {\mathbf {g}}} \alpha _i)=p-\sum _{j=1}^{|{\mathbf {g}}|-1}\delta _j\) and \(P(\cap _{i \in {\mathbf {g}}} \beta _i)=(1-p)-\sum _{j=1}^{|{\mathbf {g}}|-1}\delta _j\), where the proof of Proposition 2 confirms the latter equality. Define \({\mathbf {g}}_j^{+}=\{i \in {\mathbf {g}}|i\ge j\}\) and \({\mathbf {g}}_j^{-}=\{i \in {\mathbf {g}}|i < j\}\). By Lemma 2, \(P({{\mathbf{s}}}_{{\mathbf{g}}})=0\) if \(\exists i<j<k\) for which \(s_i = s_k \ne s_j\). Also,

$$\begin{aligned} \begin{aligned} P({{\mathbf{s}}}_{{\mathbf{g}}})=\delta _{j-1} \;\, \text {if} \;\, \exists j \in {\mathbf {g}} \;\; \text {s.t.} \;&(s_i=\alpha _i \, \forall i \in {\mathbf {g}}_j^{-}) \cap (s_i=\beta _i \, \forall i \in {\mathbf {g}}_j^{+}) \\ \text {or} \;&(s_i=\beta _i \, \forall i \in {\mathbf {g}}_j^{-}) \cap (s_i=\alpha _i \, \forall i \in {\mathbf {g}}_j^{+}). \end{aligned} \end{aligned}$$

But then

$$\begin{aligned} P(\mu ^+|{\mathbf {g}})&=P({{\mathbf{s}}}_{{\mathbf{g}}} \in {\mathbb {M}}_\alpha )= p-\sum _{j=1}^{|{\mathbf {g}}|-1} \delta _j + \sum _{j=1}^{|{\mathbf {g}}|-1}\delta _j = p, \; \text {and} \\ P(\mu ^-|{\mathbf {g}})&=P({{\mathbf{s}}}_{{\mathbf{g}}} \in {\mathbb {M}}_\beta )= (1-p)-\sum _{j=1}^{|{\mathbf {g}}|-1} \delta _j+\sum _{j=1}^{|{\mathbf {g}}|-1}\delta _j=(1-p).\; \end{aligned}$$

\(\square \)
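
The collapse of majority accuracy to exactly \(p\) can likewise be checked by brute force. A short sketch with arbitrary admissible correlation terms (randomly drawn, illustrative):

```python
import random

random.seed(0)
p, n = 0.7, 5
# Arbitrary nonnegative deltas whose sum stays below 1 - p.
delta = [random.uniform(0, (1 - p) / n) for _ in range(n - 1)]

P_A = {('a',)*n: p - sum(delta), ('b',)*n: (1 - p) - sum(delta)}
for j in range(2, n + 1):
    P_A[('a',)*(j-1) + ('b',)*(n-j+1)] = delta[j-2]
    P_A[('b',)*(j-1) + ('a',)*(n-j+1)] = delta[j-2]

# The majority verdict is correct with probability exactly p, whatever the
# deltas: exactly one profile in each mirror pair carries a correct majority,
# so the delta terms net out.
maj_correct = sum(pr for s, pr in P_A.items() if s.count('a') > n // 2)
assert abs(maj_correct - p) < 1e-12
```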

Proof of Proposition 4

The proof is by induction. Assume (S1), (A1)–(A3), and \(S=A\) (the proof for \(S=B\) is similar). The induction hypothesis clearly holds for \(n=1\). For illustration, consider the general case of \(n=2\) presented in Table 1 of Sect. 4.

By definition, \(x+y=p_2\), and \(x>1-p_1-y\) since \(p_1,p_2>1/2\). If \(p_2>p_1\) (equivalently, \(p_1-x<y\)), then \(P(\mu ^-|{\mathbf {n}})=p_1-x+1-p_1-y=1-p_2=1-\text {max}\{p_1,p_2\}\). If \(p_2<p_1\) (equivalently, \(p_1-x>y\)), then \(P(\mu ^-|{\mathbf {n}})=y+1-p_1-y=1-p_1=1-\text {max}\{p_1,p_2\}\). Lastly, if \(p_2=p_1\), then \(P(\mu ^-|{\mathbf {n}})=1-p_1-y<1-p_1=1-\text {max}\{p_1,p_2\}\). Therefore, \(P(\mu ^-|{\mathbf {n}})\le 1-\text {max}\{p_1,p_2\}\), and the induction hypothesis holds for \(n=2\).

Next consider the general case for \(n>2\). Index the \(2^n\) possible signal vectors by \(j=1,2,\ldots ,2^n\). Without loss of generality, order the signal vectors according to whether they create an “incorrect” or “correct” inference, with the “uninformative” cases at the end:

$$\begin{aligned} P({\mathbf {s}}_j|A) = {\left\{ \begin{array}{ll} \mu ^-_j \quad \text {for} \quad j=1,2,\ldots ,k \\ \mu _j \quad \text {for} \quad j=k+1,\ldots ,2^n/2 \\ \mu ^+_j \quad \text {for} \quad j=2^n/2+1,\ldots ,2^n/2+k \\ \mu _j \quad \text {for}\quad j=2^n/2+k+1,\ldots ,2^n \end{array}\right. } \end{aligned}$$

for some \(k \in \{1,2,\ldots ,2^n/2\}\) (Proposition 3 shows \(k\ge 1\)), where it is understood that if \(k=2^n/2\), no event is “uninformative.” Furthermore, let \({\mathbf {s}}_j=\mathbf {{\bar{s}}}_{j+2^n/2}\) for \(j=1,2,\ldots ,2^n/2\). Define \(\mu ^-_{aj}\), \(\mu _{aj}\), and \(\mu ^+_{aj}\) for the respective probabilities of intersections with \(a_{n+1}\). Likewise, define \(\mu ^-_{bj}\) for the conditional intersection of \(b_{n+1}\) and so on. Let \(\mathbb {1}(\cdot )\) denote the indicator function. Then, for \(\mathbf {n+1}\) nesting \({\mathbf {n}}\) (with \(|\mathbf {n+1}|=n+1\)),

$$\begin{aligned} P(\mu ^-|\mathbf {n+1})&=\sum _{j=1}^{k} \mathbb {1}(\mu ^-_{aj}<\mu ^+_{bj+2^n/2})\mu ^-_{aj}\\&\quad +\mathbb {1} (\mu ^-_{bj}<\mu ^+_{aj+2^n/2})\mu ^-_{bj} \\&\quad + \mathbb {1}(\mu ^+_{aj+2^n/2}<\mu ^-_{bj})\mu ^+_{aj+2^n/2}+\mathbb {1} (\mu ^+_{bj+2^n/2}<\mu ^-_{aj})\mu ^+_{bj+2^n/2}\\&\quad +\sum _{j=k+1}^{2^n/2} \mathbb {1}(\mu _{aj}<\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1}(\mu _{bj}<\mu _{aj+2^n/2})\mu _{bj}\\&\quad + \mathbb {1}(\mu _{aj+2^n/2}<\mu _{bj})\mu _{aj+2^n/2}+\mathbb {1}(\mu _{bj +2^n/2}<\mu _{aj})\mu _{bj+2^n/2}.\\ \end{aligned}$$

Because

$$\begin{aligned}&\mathbb {1}(\mu ^-_{aj}<\mu ^+_{bj+2^n/2})\mu ^-_{aj} +\mathbb {1}(\mu ^-_{bj}<\mu ^+_{aj+2^n/2})\mu ^-_{bj} + \mathbb {1}(\mu ^+_{aj+2^n/2}<\mu ^-_{bj})\mu ^+_{aj+2^n/2}\\&\qquad +\mathbb {1}(\mu ^+_{bj+2^n/2}<\mu ^-_{aj})\mu ^+_{bj+2^n/2} \\&\quad \le \mu ^-_j, \end{aligned}$$

and

$$\begin{aligned}&\mathbb {1}(\mu _{aj}<\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1}(\mu _{bj}<\mu _{aj+2^n/2}) \mu _{bj} +\mathbb {1}(\mu _{aj+2^n/2}<\mu _{bj})\mu _{aj+2^n/2}\\&\qquad +\mathbb {1} (\mu _{bj+2^n/2}<\mu _{aj})\mu _{bj+2^n/2} \\&\quad \le \mu _j, \end{aligned}$$

the induction hypothesis implies (via Lemma 1):

$$\begin{aligned} P(\mu ^-|\mathbf {n+1})\le \sum _{j=1}^{k} \mu ^-_j +\sum _{j=k+1}^{2^n/2} \mu _j\le 1-\max \limits _{i \in \mathbf {n+1}} p_i, \end{aligned}$$

where the last inequality follows from Lemma 5. \(\square \)
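
The bound is easy to confirm by brute force in the special case of independent signals (one admissible signal distribution; the accuracies below are illustrative). With equal priors, a false belief occurs exactly when the observed profile is more likely under the wrong state:

```python
from itertools import product

p = [0.75, 0.75, 0.9, 0.75]  # illustrative accuracies; the expert has p = 0.9

def likelihood(s, state):
    """P(signal profile s | state) under independence."""
    out = 1.0
    for p_i, s_i in zip(p, s):
        correct = (s_i == 'a') if state == 'A' else (s_i == 'b')
        out *= p_i if correct else 1 - p_i
    return out

# P(group believes B | S = A): sum over profiles whose likelihood favors B.
false_belief = sum(likelihood(s, 'A')
                   for s in product('ab', repeat=len(p))
                   if likelihood(s, 'B') > likelihood(s, 'A'))

assert false_belief <= 1 - max(p)  # Proposition 4's bound (here 0.0719 <= 0.1)
```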

Lemma 5

Assume (S1) and (A1)–(A3). Adopting the terminology and ordering of Proposition 4,

$$\begin{aligned} \sum _{j=1}^{k} \mu ^-_j +\sum _{j=k+1}^{2^n/2} \mu _j \le 1-\max \limits _{i \in \mathbf {n+1}} p_i. \end{aligned}$$

Proof of Lemma 5

Assume (S1) and (A1)–(A3). The proof is by induction. The induction hypothesis clearly holds for \(n=1\). Suppose \(n>1\). We seek to show

$$\begin{aligned} \Gamma \equiv \sum _{j=1}^{k} \mu ^-_j +\sum _{j=k+1}^{2^n/2} \mu _j =&\sum _{j=1}^{k} \mathbb {1}(\mu ^-_{aj}<\mu ^+_{bj+2^n/2})\mu ^-_{aj}\\&+\mathbb {1} (\mu ^-_{bj}<\mu ^+_{aj+2^n/2})\mu ^-_{bj}\\&+ \mathbb {1}(\mu ^+_{aj+2^n/2}<\mu ^-_{bj})\mu ^+_{aj+2^n/2}+\mathbb {1} (\mu ^+_{bj+2^n/2}<\mu ^-_{aj})\mu ^+_{bj+2^n/2}\\&+\mathbb {1}(\mu ^-_{aj}=\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}=\mu ^+_{aj+2^n/2})\mu ^-_{bj}\\&+\sum _{j=k+1}^{2^n/2} \mathbb {1}(\mu _{aj}<\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}<\mu _{aj+2^n/2})\mu _{bj}\\&+\mathbb {1}(\mu _{aj+2^n/2}<\mu _{bj})\mu _{aj+2^n/2}+\mathbb {1} (\mu _{bj+2^n/2}<\mu _{aj})\mu _{bj+2^n/2}\\&+\mathbb {1}(\mu _{aj}=\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}=\mu _{aj+2^n/2})\mu _{bj}\\ \le&1-\max \limits _{i \in \mathbf {n+1}} p_i. \end{aligned}$$

Because

$$\begin{aligned}&\mathbb {1}(\mu ^-_{aj}<\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}<\mu ^+_{aj+2^n/2})\mu ^-_{bj}\\&\quad +\mathbb {1}(\mu ^+_{aj+2^n/2}<\mu ^-_{bj})\mu ^+_{aj+2^n/2}+\mathbb {1} (\mu ^+_{bj+2^n/2}<\mu ^-_{aj})\mu ^+_{bj+2^n/2}\\&\quad +\mathbb {1}(\mu ^-_{aj}=\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}=\mu ^+_{aj+2^n/2})\mu ^-_{bj} \le \mu ^-_{bj}+\mu ^+_{bj+2^n/2},\\&\mathbb {1}(\mu _{aj}<\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}<\mu _{aj+2^n/2})\mu _{bj} + \mathbb {1} (\mu _{aj+2^n/2}<\mu _{bj})\mu _{aj+2^n/2}\\&\quad +\mathbb {1} (\mu _{bj+2^n/2}<\mu _{aj})\mu _{bj+2^n/2}\\&\quad +\mathbb {1}(\mu _{aj}=\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}=\mu _{aj+2^n/2})\mu _{bj} \le \mu _{bj}+\mu _{bj+2^n/2}, \end{aligned}$$

and

$$\begin{aligned} 1-p_{n+1}=\sum _{j=1}^{k} (\mu ^-_{bj} +\mu ^+_{bj+2^n/2}) +\sum _{j=k+1}^{2^n/2} (\mu _{bj} +\mu _{bj+2^n/2}) \end{aligned}$$

by construction, then \(\Gamma \le 1-p_{n+1}\).

Next, because

$$\begin{aligned}&\mathbb {1}(\mu ^-_{aj}<\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}<\mu ^+_{aj+2^n/2})\mu ^-_{bj} + \mathbb {1} (\mu ^+_{aj+2^n/2}<\mu ^-_{bj})\mu ^+_{aj+2^n/2}\\&\quad +\mathbb {1} (\mu ^+_{bj+2^n/2}<\mu ^-_{aj})\mu ^+_{bj+2^n/2}\\&\quad +\mathbb {1}(\mu ^-_{aj}=\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}=\mu ^+_{aj+2^n/2})\mu ^-_{bj} \le \mu ^-_j, \end{aligned}$$

and

$$\begin{aligned}&\mathbb {1}(\mu _{aj}<\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}<\mu _{aj+2^n/2})\mu _{bj} + \mathbb {1} (\mu _{aj+2^n/2}<\mu _{bj})\mu _{aj+2^n/2}+\mathbb {1} (\mu _{bj+2^n/2}<\mu _{aj})\mu _{bj+2^n/2}\\&\quad +\mathbb {1}(\mu _{aj}=\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1} (\mu _{bj}=\mu _{aj+2^n/2})\mu _{bj} \le \mu _j, \end{aligned}$$

the induction hypothesis implies

$$\begin{aligned} \Gamma \le \sum _{j=1}^{k} \mu ^-_j +\sum _{j=k+1}^{2^n/2} \mu _j\le 1 -\max \limits _{i \in {\mathbf {n}}} p_i. \end{aligned}$$

Thus, \(\Gamma \le 1-\max \nolimits _{i \in \mathbf {n+1}} p_i\). \(\square \)

Proof of Proposition 5

Assume (S1) and (A1)–(A3). Following the same approach and terminology of Proposition 4, the induction hypothesis clearly holds for \(n=1\). Let \(\mathbf {n+1}\) nest \({\mathbf {n}}\) (with \(|\mathbf {n+1}|=n+1\)). For \(n+1\), then,

$$\begin{aligned} P(\mu |\mathbf {n+1})&=\sum _{j=1}^{k} \mathbb {1}(\mu ^-_{aj}=\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}=\mu ^+_{aj+2^n/2})\mu ^-_{bj}\\&\quad + \mathbb {1}(\mu ^+_{aj+2^n/2}=\mu ^-_{bj})\mu ^+_{aj+2^n/2} +\mathbb {1}(\mu ^+_{bj+2^n/2}=\mu ^-_{aj})\mu ^+_{bj+2^n/2}\\&\quad +\sum _{j=k+1}^{2^n/2} \mathbb {1}(\mu _{aj}=\mu _{bj+2^n/2})\mu _{aj} +\mathbb {1}(\mu _{bj}=\mu _{aj+2^n/2})\mu _{bj}\\&\quad + \mathbb {1}(\mu _{aj+2^n/2}=\mu _{bj})\mu _{aj+2^n/2}+\mathbb {1} (\mu _{bj+2^n/2}=\mu _{aj})\mu _{bj+2^n/2}. \end{aligned}$$

Because

$$\begin{aligned}&\mathbb {1}(\mu ^-_{aj}=\mu ^+_{bj+2^n/2})\mu ^-_{aj}+\mathbb {1} (\mu ^-_{bj}=\mu ^+_{aj+2^n/2})\mu ^-_{bj} + \mathbb {1}(\mu ^+_{aj+2^n/2}= \mu ^-_{bj})\mu ^+_{aj+2^n/2}\\&\qquad +\mathbb {1}(\mu ^+_{bj+2^n/2} =\mu ^-_{aj})\mu ^+_{bj+2^n/2} \\&\quad \le 2\mu ^-_{bj}+2\mu ^+_{bj+2^n/2},\\&\mathbb {1}(\mu _{aj}=\mu _{bj+2^n/2})\mu _{aj}+\mathbb {1}(\mu _{bj} =\mu _{aj+2^n/2})\mu _{bj} + \mathbb {1}(\mu _{aj+2^n/2} =\mu _{bj})\mu _{aj+2^n/2}\\&\qquad +\mathbb {1}(\mu _{bj+2^n/2}=\mu _{aj})\mu _{bj+2^n/2} \\&\quad \le 2\mu _{bj}+2\mu _{bj+2^n/2}, \end{aligned}$$

and

$$\begin{aligned} 1-p_{n+1}=\sum _{j=1}^{k} (\mu ^-_{bj} +\mu ^+_{bj+2^n/2})+\sum _{j=k+1}^{2^n/2} (\mu _{bj} +\mu _{bj+2^n/2}) \end{aligned}$$

by construction, then \(P(\mu |\mathbf {n+1}) \le 2(1-p_{n+1})\). An argument exactly analogous to Lemma 5 then establishes \(P(\mu |\mathbf {n+1}) \le 2(1-\max \nolimits _{i \in {\mathbf {n}}} p_i)\). Thus, \(P(\mu |\mathbf {n+1}) \le 2(1-\max \nolimits _{i \in \mathbf {n+1}} p_i)\). \(\square \)
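
An analogous brute-force check covers the factor-of-two bound on the uninformative event, again in the independent special case. With two equally accurate members (illustrative value below), the two split profiles produce exact likelihood ties:

```python
from itertools import product

p = [0.8, 0.8]  # illustrative: equal accuracies generate likelihood ties

def likelihood(s, state):
    out = 1.0
    for p_i, s_i in zip(p, s):
        correct = (s_i == 'a') if state == 'A' else (s_i == 'b')
        out *= p_i if correct else 1 - p_i
    return out

# P(tied posterior | S = A): the profiles ('a','b') and ('b','a').
tie = sum(likelihood(s, 'A') for s in product('ab', repeat=len(p))
          if abs(likelihood(s, 'A') - likelihood(s, 'B')) < 1e-12)

assert abs(tie - 2 * p[0] * (1 - p[0])) < 1e-12  # here, 0.32
assert tie <= 2 * (1 - max(p))                   # Proposition 5's bound: 0.4
```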

Proof of Proposition 6

First, reversing the order of the summation,

$$\begin{aligned} F(p,n)&={n \atopwithdelims ()n}p^n+{n \atopwithdelims ()n-1}p^{n-1}(1-p)+{n \atopwithdelims ()n-2}p^{n-2}(1-p)^2+\ldots \\&\quad +{n \atopwithdelims ()\frac{n+1}{2}}p^{(n+1)/2}(1-p)^{(n-1)/2}. \end{aligned}$$

Then

$$\begin{aligned} \frac{\partial F(p,n)}{\partial p}&=np^{n-1}+\frac{n!}{(n-1)!}\bigg [(n-1)p^{n-2}(1-p)-p^{n-1}\bigg ]\\&\quad +\frac{n!}{2!(n-2)!}\bigg [(n-2)p^{n-3}(1-p)^2-2p^{n-2}(1-p)\bigg ]+\ldots \\&\quad +\frac{n!}{\big (\frac{n+1}{2}\big )!\big (\frac{n-1}{2}\big )!} \bigg [((n+1)/2)p^{(n+1)/2-1}(1-p)^{(n-1)/2} \\&\quad -((n-1)/2)p^{(n+1)/2}(1-p)^{(n-1)/2-1}\bigg ] \end{aligned}$$

is a telescoping series, leaving

$$\begin{aligned} \frac{\partial F(p,n)}{\partial p}&=\frac{n!}{\big (\frac{n+1}{2}\big )! \big (\frac{n-1}{2}\big )!}\bigg [((n+1)/2)p^{(n+1)/2-1}(1-p)^{(n-1)/2}\bigg ]\\&=\frac{n!}{\big (\big (\frac{n-1}{2}\big )!\big )^2}\big [p(1-p)\big ]^{(n-1)/2}. \end{aligned}$$

Then

$$\begin{aligned} \frac{\partial ^2 F(p,n+2)}{\Delta n \partial p} = \frac{(n+2)!}{\big (\big (\frac{n+1}{2}\big )!\big )^2}\big [p(1-p)\big ]^{(n+1)/2} -\frac{n!}{\big (\big (\frac{n-1}{2}\big )!\big )^2}\big [p(1-p)\big ]^{(n-1)/2}, \end{aligned}$$

which is greater than zero if and only if \(4p(1-p)>(n+1)/(n+2)\). Because \(Q(p) \equiv 4p(1-p)=1\) at its maximizer of \(p=1/2\), the solution to the corresponding quadratic equation yields

$$\begin{aligned} \frac{\partial ^2 F(p,n+2)}{\Delta n \partial p} {\left\{ \begin{array}{ll} \ge 0 \; \text {if} \quad p \in \left[ 1/2,\,1/2+\tfrac{1}{2}\sqrt{1-(n+1)/(n+2)}\right] \\ \le 0 \; \text {if} \quad p \in \left[ 1/2+\tfrac{1}{2}\sqrt{1-(n+1)/(n+2)},\,1\right] \end{array}\right. }. \end{aligned}$$

Because \(EU(p,n)\) is an affine function of \(F(p,n)\), the optimal group size is single-peaked. \(\square \)
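
Both the telescoped derivative and the single-peakedness threshold can be verified numerically. A minimal sketch (the values of \(p\) and \(n\) are illustrative):

```python
from math import comb, factorial, sqrt

def F(p, n):
    """P(a majority of n iid signals with accuracy p is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

def dFdp(p, n):
    """Closed form of the telescoped derivative."""
    m = (n - 1) // 2
    return factorial(n) / factorial(m)**2 * (p * (1 - p))**m

p, n, h = 0.63, 7, 1e-6
# Central finite difference agrees with the closed form.
assert abs((F(p + h, n) - F(p - h, n)) / (2 * h) - dFdp(p, n)) < 1e-4

# The marginal gain rises in n iff 4p(1-p) > (n+1)/(n+2); for p >= 1/2 this
# is equivalent to p < 1/2 + (1/2)sqrt(1 - (n+1)/(n+2)).
threshold = 0.5 + 0.5 * sqrt(1 - (n + 1) / (n + 2))
assert (dFdp(p, n + 2) - dFdp(p, n) > 0) == (p < threshold)
```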

Proof of Lemma 3

Kaniovski and Zaigraev (2011) show \(F(p,n)\) is an increasing function, with at most one inflection point, that lies above a chord between \(p=0\) and \(p=1\). Thus, \(F(p,n)\) is concave. \(\square \)
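
Concavity on the relevant range \(p \in [1/2,1]\) can be spot-checked with second differences; a brief sketch (group size illustrative):

```python
from math import comb

def F(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

n, h = 9, 1e-4
# Nonpositive second differences of F(., n) on [1/2, 1] indicate concavity.
assert all(F(p - h, n) - 2 * F(p, n) + F(p + h, n) <= 1e-12
           for p in [0.5 + i / 1000 for i in range(1, 500)])
```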

Proof of Lemma 4

Writing the optimal investment as a function of the group size, the first-order condition implies

$$\begin{aligned} \frac{\partial F(p^*(n),n)}{\partial p}uN-n\phi '(p^*(n))=0 \end{aligned}$$

for any n. To preserve the monotonicity of \(F\) in group size, suppose the group expands by two members. Then

$$\begin{aligned} \frac{\frac{\displaystyle \partial F(p^*(n+2),n+2)}{\displaystyle \partial p}}{\frac{\displaystyle \partial F(p^*(n),n)}{\displaystyle \partial p}}=\frac{(n+2)\phi '(p^*(n+2))}{n\phi '(p^*(n))}. \end{aligned}$$
(L5)

Now suppose \(p^*(n+2)\ge p^*(n)\). The right-hand side of (L5) is then strictly greater than one by the convexity of \(\phi (p)\). By Proposition 1, \(F(p,n)\) is increasing and approaches one as n grows. Furthermore, the difference \(F(p,n+2)-F(p,n)\) is decreasing in n. From Lemma 3, \(F(p,n)\) is concave and approaches one as p grows. Thus, the left-hand side of (L5) must be strictly less than one if \(p^*(n+2)\ge p^*(n)\), a contradiction. \(\square \)
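
Lemma 4 can be illustrated numerically under assumed primitives that are not from the paper: take \(uN=1\) and the convex cost \(\phi (p)=2(p-1/2)^2\). A grid-search sketch then shows the optimal expertise level falling as the (odd) group size grows:

```python
from math import comb

def F(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

def p_star(n, phi=lambda p: 2 * (p - 0.5)**2):
    """Grid-search maximizer of EU(p, n) = F(p, n) - n * phi(p), with uN = 1."""
    grid = [0.5 + i / 10000 for i in range(1, 5000)]
    return max(grid, key=lambda p: F(p, n) - n * phi(p))

levels = [p_star(n) for n in (1, 3, 5, 7)]
assert all(a > b for a, b in zip(levels, levels[1:]))  # decreasing, per Lemma 4
print([round(x, 3) for x in levels])
```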

About this article

Cite this article

Lundberg, A. The importance of expertise in group decisions. Soc Choice Welf 55, 495–521 (2020). https://doi.org/10.1007/s00355-020-01253-3
