Abstract
We present a minimal pragmatic restriction on the interpretation of the weights in the “Equal Weight View” (and, more generally, in the “Linear Pooling” view) regarding peer disagreement and show that the view cannot respect it. Based on this result we argue against the view. The restriction is the following one: if an agent, \(\hbox {i}\), assigns an equal or higher weight to another agent, \(\hbox {j}\), (i.e. if \(\hbox {i}\) takes \(\hbox {j}\) to be as epistemically competent as him or epistemically superior to him), he must be willing—in exchange for a positive and certain payment—to accept an offer to let a completely rational and sympathetic \(\hbox {j}\) choose for him whether to accept a bet with positive expected utility. If \(\hbox {i}\) assigns a lower weight to \(\hbox {j}\) than to himself, he must not be willing to pay any positive price for letting \(\hbox {j}\) choose for him. Respecting the constraint entails, we show, that the impact of disagreement on one’s degree of belief is not independent of what the disagreement is discovered to be (i.e. not independent of j’s degree of belief).
Notes
 1.
See, e.g. Christensen (2011) for a characterization of this view.
 2.
Christensen (2011, pp. 1–2) locates this as the central issue dividing Conciliatory and Steadfast views.
 3.
We thank an anonymous referee for helping us see the nuances here.
 4.
The “splitting the difference” (averaging over the difference in degrees of belief) interpretation of the EW view is defended by Elga (2007), at least as understood by Christensen (2007, p. 199, n. 15) and others (see footnote 10), and by Christensen (2011) (with some qualifications) and more recently by Cohen (2013): “when peers discover they disagree, each should adopt the simple average of their credences, i.e. they should split the difference between their credences. If I am at .8 and my peer is at .2, then we should each move to .5.”
Since our argument applies to any splitting the difference case, it also applies to the Conciliatory view developed by Christensen in his more recent (2009) and (2011) publications. Though Christensen distances his view from the splitting the difference interpretation of the EW view (2009, pp. 758–759), he advises peers to converge in some cases (and within a limited range of degrees of belief). The argument can apply even after taking into consideration the rather daunting list of obstacles to articulating a principle for when averaging of this sort is required (2009, p. 766, n. 11). Moreover, as we will make clear, our argument does not rely on exact averaging; in fact, it relies merely on “Linear Pooling,” i.e. on a view that attaches a fixed weight (e.g. 1/3) to a peer’s degree of belief within a range. So our argument also seems to apply to Christensen’s (2011) (e.g. p. 3, note 3) view, which takes an even bleaker view regarding the exact impact a peer’s view should have at the present stage of the discussion on rational disagreement (2011, p. 17). His remarks there suggest that he expects the impact of a peer’s degree of belief to be less than splitting the difference but still significant, i.e. fixed. Our argument, however, goes against even these vague general expectations. This is because respecting our constraint will entail, at the very least, different levels of impact for different degrees of belief that a peer may be discovered as having.
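The Linear Pooling rule this footnote describes can be written as \(c_{new}(P) = w\,c_i(P) + (1-w)\,c_j(P)\). As a minimal sketch (the credences are ours, purely for illustration):

```python
def linear_pool(w, my_credence, peer_credence):
    """Post-disagreement credence under Linear Pooling: a fixed weight w
    on one's own prior credence and 1 - w on the peer's."""
    return w * my_credence + (1 - w) * peer_credence

# Splitting the difference is the special case w = 1/2: at 0.8 against
# a peer at 0.2, both parties move to 0.5 (Cohen's example above).
assert abs(linear_pool(0.5, 0.8, 0.2) - 0.5) < 1e-12

# An unequal weight (e.g. 1/3 on the peer) still moves i a fixed
# fraction of the distance, whatever the peer's credence turns out to be.
assert abs(linear_pool(2 / 3, 0.8, 0.2) - 0.6) < 1e-12
```

The paper's point is precisely that this fixed fraction \((1-w)\) cannot, on pain of violating the pragmatic constraint, be independent of the peer's announced credence.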
 5.
We are not committed, however, to the idea that the resulting view will be Conciliatory in Christensen’s sense. Such a view may very well violate his independence constraint. See note 2 above.
 6.
The Total Evidence view seems to be committed to a rejection of the constraint we offer. Kelly (2010) endorses the split the difference view in cases where peers lose access to their original evidence (when the evidence is purely psychological in his sense—see note 20 below). There seems to be, however, no straightforward way for the Total Evidence view to escape our argument in such a case. But perhaps there is an externalist type of interpretation of the view that can do this by marginalizing the role of access to the evidence. This possibility will not be explored here. Thanks here to an anonymous referee.
 7.
Throughout the paper we will assume that \(\hbox {E}\) is shared between \(\hbox {i}\) and \(\hbox {j}\). This will allow us to investigate the role epistemic competence plays in cases of epistemic disagreement in isolation from informational considerations.
 8.
See Steele (2012) and Lasonen-Aarnio (2013) for two statements of the kind of attitude (with some minor differences) to which we subscribe. The demand that a rational agent’s response to discovering a disagreement be consistent with Bayesian updating is also discussed in Jehle and Fitelson (2009) and Shogenji (2007). We return to this issue in Sect. 3.
 9.
At least within a certain range of values x might take. It seems that Christensen (2007), a prominent advocate of the split the difference view at the time (but see footnote 4 above), agrees that when the distance is substantial enough, applying the equal weight principle is not necessarily warranted. Such is the case, for instance, when a peer is presenting an obviously “crazy” view about relatively simple matters. In more recent writing, Christensen specifies another case: if your own credence is high and so is your peer’s, you should perhaps be more confident that the proposition in question is true even if your peer’s degree of belief is slightly lower than yours (Christensen 2009, p. 759). Yet for other cases, like the one he specifies just before giving this counterexample, within a range at least, he advocates splitting the difference (ibid., pp. 758–759).
Since our argument works against views that restrict the LP formula to a range of possible x values, Christensen’s counterexamples—as far as we can see—do not motivate a view that goes far enough away from LP. We see no reason, however, why a view such as Christensen’s could not incorporate the idea that, even if the discovery of a disagreement should not influence the degree to which one takes an advisor to be rational, the impact of this view should not be constant (even within a range of values).
 10.
In addition to the advocates of the EW view mentioned in footnotes 3 and 6, Jehle and Fitelson (2009) and Kelly (2010) formulate the EW view as the split the difference view, as do Enoch (2010, p. 2) and Wilson (2010) as an interpretation of Elga’s (2007) EW view. In contrast, Lasonen-Aarnio (2013) discusses the relation between the EW view—that one assigns the same probability to oneself being correct as to a peer being correct—and splitting the difference. She contends that the relation between the two depends on the transparency of the agent’s second order attitudes.
 11.
While the answer to this question is straightforward in a non-Bayesian framework (“getting things right” with respect to a proposition, \(\hbox {P}\), is to believe \(\hbox {P}\) in case \(\hbox {P}\) is true and believe \(\lnot \hbox {P}\) in case it is false), in a Bayesian framework it is not. Lasonen-Aarnio (2013) suggests that “getting things right” should be understood in terms of identity between one’s subjective probability and the “correct” evidential (or “epistemic”) probability. She then shows that such an understanding leads to some problematic results for the EW view. This is, however, far from being the only possible way to interpret the phrase. Another interesting direction is to understand “peerhood” in terms of getting equal “scores” on some measure of accuracy (see for example Levinstein 2015), and there are also other possible directions. One of the advantages of the approach we take here is that we do not have to commit ourselves to any single interpretation of peerhood. Instead we use our minimal pragmatic constraint, which any advocate of the EW view must accept (regardless of the way she understands the term “peerhood”).
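To make the accuracy-scores reading concrete, here is a minimal sketch (our illustration only; the measure discussed in the cited literature may differ) using the Brier score, on which two agents count as peers when their credences are equally accurate:

```python
def brier_score(credence, truth):
    """Squared distance between a credence in P and P's truth value
    (truth = 1 if P is true, 0 if false); lower is more accurate."""
    return (credence - truth) ** 2

# Symmetrically placed credences earn identical scores under the
# corresponding truth values, so such agents tie on this accuracy reading.
for truth in (0, 1):
    assert abs(brier_score(0.8, truth) - brier_score(0.2, 1 - truth)) < 1e-12
```

Nothing in the paper's argument turns on this choice; as the footnote says, the pragmatic constraint is meant to be neutral between interpretations of peerhood.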
 12.
We thank an anonymous referee for suggesting the name.
 13.
We prove the theorem only for the case in which \(\hbox {i}\) assigns a positive probability to a countable number of hypotheses about j’s degree of belief in \(\hbox {P}\). This is so since it is unclear how the LP view should be defined in the case of uncountably many such hypotheses (due to the fact that in such cases each hypothesis typically gets 0 probability). In any case, if the LP view is correct it surely must be correct also for the countable case, and so proving the theorem only for this case is enough. We discuss this further in the Appendix.
 14.
Wilson (2010) discusses the same mathematical phenomenon from a different philosophical angle.
 15.
See Dietrich and List (Forthcoming) for a good introduction.
 16.
This was proven first by Lehrer and Wagner (1981).
 17.
Thus, after discussing several sets of axioms, List and Dietrich argue that “it should be clear that there is no one-size-fits-all approach to probabilistic opinion pooling” (Dietrich and List Forthcoming, p. 20).
 18.
Thanks to an anonymous referee for the suggestion to justify the EEI by symmetry considerations. This referee also suggested that there are interesting connections between the EEI and some discussions in the literature, e.g., how Keren (2007) understands deference in the context of testimonial knowledge transfer. Unfortunately, discussing possible connections would take us too far afield.
 19.
One way, though, was hinted at by a referee of this journal. Suppose we employ Kelly’s (2010) distinction between the discovery of one’s peer’s reaction to the evidence (psychological evidence) and the original evidence (non-psychological evidence). The EW advocate can claim that without the psychological evidence, a peer’s credence is not to be relied on.
We don’t find this line particularly plausible, but we don’t want to claim that no development in this direction will work. Kelly himself uses the distinction to argue against the EW view. Roughly, his claim is that it is implausible that psychological evidence swamps non-psychological evidence. See note 6 above for the claim that Kelly’s Total Evidence view is susceptible to our argument for some cases, cases for which the possible reply on behalf of the EW view will be equally problematic.
 20.
Here is Christensen’s (2011) formulation: “In evaluating the epistemic credentials of another’s expressed belief about \(\hbox {P}\), in order to determine how (or whether) to modify my own belief about \(\hbox {P}\), I should do so in a way that doesn’t rely on the reasoning behind my initial belief about \(\hbox {P}\).”
 21.
We thank an anonymous referee for suggesting this move.
References
Levinstein, B. A. (2015). With all due respect: The macro-epistemology of disagreement. Philosophers’ Imprint, 15(13), 1–20.
Bradley, R. (2007). Reaching a consensus. Social Choice and Welfare, 29, 609–632.
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116, 187–217.
Christensen, D. (2009). Disagreement as evidence: The epistemology of controversy. Philosophy Compass, 4(5), 756–767.
Christensen, D. (2011). Disagreement, question-begging, and epistemic self-criticism. Philosophers’ Imprint, 11(6), 1–22.
Cohen, S. (2013). A defense of the (almost) equal weight view. In D. Christensen & J. Lackey (Eds.), The epistemology of disagreement (pp. 98–117). Oxford: Oxford University Press.
Dietrich, F., & List, C. (Forthcoming). Probabilistic opinion pooling. In A. Hajek & C. Hitchcock (Eds.), Oxford handbook of philosophy and probability. Oxford: Oxford University Press.
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Enoch, D. (2010). Not just a truthometer: Taking oneself seriously (but not too seriously) in cases of peer disagreement. Mind, 119(476), 953–997.
Jehle, D., & Fitelson, B. (2009). What is the “equal weight view”? Episteme, 6(3), 280–293.
Kelly, T. (2010). Peer disagreement and higher order evidence. In A. I. Goldman & D. Whitcomb (Eds.), Social epistemology: Essential readings (pp. 183–217). Oxford: Oxford University Press.
Keren, A. (2007). Epistemic authority, testimony and the transmission of knowledge. Episteme, 4(3), 368–381.
Lackey, J., & Christensen, D. (Eds.). (2013). The epistemology of disagreement: New essays. Oxford: Oxford University Press.
Lasonen-Aarnio, M. (2013). Disagreement and evidential attenuation. Noûs, 47(4), 767–794.
Lehrer, K., & Wagner, C. (1981). Rational consensus in science and society. Dordrecht: Reidel.
Shogenji, T. (2007). My way or her way: A conundrum in Bayesian epistemology of disagreement. Unpublished manuscript.
Steele, K. S. (2012). Testimony as evidence: More problems for linear pooling. Journal of Philosophical Logic, 41(6), 983–999.
Wilson, A. (2010). Disagreement, equal weight and commutativity. Philosophical Studies, 149(3), 321–326.
Acknowledgements
Earlier versions of this paper were presented at the Haifa workshop on rational belief and normative commitment (2017) and at the Stockholm Higher Seminar in the Department of Philosophy, Stockholm University (2015). We thank the participants of these events for useful discussions. We especially want to thank David Enoch, Zeev Goldschmidt, Noam Nisan, Orri Schneebaum, and two anonymous referees for their very useful comments and suggestions, as well as an anonymous referee from another journal. Levi Spectre’s research was supported by the Israeli Science Foundation (Grant No. 463/12).
Appendix
Theorem 1
For any credence function of \(\hbox {i}\) that assigns a nontrivial probability value to the possibility that j’s degree of belief in \(\hbox {P}\) is different from i’s degree of belief in \(\hbox {P}\), and for any nontrivial weight, \(\hbox {w}\):

1.
Even if \(\hbox {w} > 1/2\) (i.e. even if \(\hbox {i}\) takes himself to be more epistemically competent than \(\hbox {j}\)), there always exists a bet such that \(\hbox {i}\) will be willing to pay a positive amount of utility in exchange for letting (a completely rational and sympathetic) \(\hbox {j}\) choose for him whether to accept this bet.

2.
Even if \(\hbox {w} \le 1/2\) (i.e. even if \(\hbox {i}\) takes himself to be less epistemically competent than \(\hbox {j}\)), there always exists a bet such that \(\hbox {i}\) will be willing to pay a positive amount of utility in exchange for avoiding passing the choice of whether to accept the bet to \(\hbox {j}\).
Proof
Assume \({0}< \hbox {c}(\hbox {P})< 1\) and let us rescale i’s utility function so that the payment \(\hbox {i}\) receives by choosing to accept the bet in case \(\hbox {P}\) is true is 1, and the payment \(\hbox {i}\) receives by choosing to accept the bet in case \(\hbox {P}\) is false is 0. Thus, i’s expected utility from accepting the bet is \(\hbox {c}(\hbox {P})\). If \(\hbox {i}\) chooses to reject the bet he gets a payment of \(0<\hbox {a}<\hbox {c}(\hbox {P})\). Thus, \(\hbox {i}\) prefers accepting the bet to rejecting it.
Now let \(\hbox {Y}\) be the proposition “j’s degree of belief in \(\hbox {P}\) is lower than or equal to \(\hbox {a}\)”. Since \(\hbox {j}\) is rational and sympathetic, in case \(\hbox {Y}\) is true (i.e. in case j’s degree of belief in \(\hbox {P}\) is lower than or equal to \(\hbox {a}\)), \(\hbox {j}\) will choose to reject the bet. Similarly, in case \(\lnot \hbox {Y}\) is true (i.e. in case j’s degree of belief in \(\hbox {P}\) is higher than \(\hbox {a}\)), \(\hbox {j}\) will choose to accept the bet.
Let \(\hbox {L}\) be the act of letting \(\hbox {j}\) choose whether to accept the bet, and let \(\lnot \hbox {L}\) be the act of rejecting the offer to let \(\hbox {j}\) choose whether to accept the bet. We saw that i’s expected utility from \(\lnot \hbox {L}\) is just the expected utility of the bet, which is \(\hbox {c}(\hbox {P})\). We can now rewrite \(\hbox {c}(\hbox {P})\) using the Theorem of Total Probability applied to the partition \(\{\hbox {Y}, \lnot \hbox {Y}\}\):

4.
\(\hbox {EU}(\lnot \hbox {L}) = \hbox {c}(\hbox {P}) = \hbox {c}(\hbox {P}{\vert }\hbox {Y})\hbox {c}(\hbox {Y}) + \hbox {c}(\hbox {P}{\vert }\lnot \hbox {Y})\hbox {c}(\lnot \hbox {Y})\)
What is the expected utility of \(\hbox {L}\)?

5.
\(\hbox {EU}(\hbox {L}) = \hbox {c}(\hbox {P}{\vert }\lnot \hbox {Y})\hbox {c}(\lnot \hbox {Y}) + \hbox {ac}(\hbox {Y})\)
In words: i’s expected utility from passing the choice on to \(\hbox {j}\) equals the probability \(\hbox {i}\) assigns to the possibility that \(\hbox {j}\) will accept the bet multiplied by the expected utility of the bet from the point of view of \(\hbox {i}\) after learning that \(\hbox {j}\) chooses to accept the bet (i.e. \(\hbox {c}(\hbox {P}{\vert }\lnot \hbox {Y})\)), plus the probability \(\hbox {i}\) assigns to the possibility of \(\hbox {j}\) rejecting the bet multiplied by the utility of rejecting the bet (i.e. \(\hbox {a}\)).
\(\hbox {i}\) will choose to pass the choice on to \(\hbox {j}\) iff \(\hbox {EU}(\lnot \hbox {L}) \le \hbox {EU}(\hbox {L})\). Subtracting 4 from 5 gives \(\hbox {EU}(\hbox {L}) - \hbox {EU}(\lnot \hbox {L}) = \hbox {c}(\hbox {Y})(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y}))\). Thus, it immediately follows from 4 and 5 that if \(\hbox {c}(\hbox {Y}) > 0\), \(\hbox {i}\) chooses to pass the choice on to \(\hbox {j}\) iff \(\hbox {a} \ge \hbox {c}(\hbox {P}{\vert }\hbox {Y})\).
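This decision rule can be checked numerically. The following sketch uses a hypothetical discrete prior over j's possible credences (the numbers are ours, purely for illustration). Note that total probability together with LP forces \(\hbox {c}(\hbox {P})\) to equal the prior expectation of j's credence, so the example sets it accordingly.

```python
# Hypothetical prior over j's credence in P: x_i -> c(X_i) (our numbers).
hyps = {0.2: 0.3, 0.5: 0.4, 0.9: 0.3}
w = 0.6                                    # i's weight on himself
cP = sum(x * p for x, p in hyps.items())   # coherence under LP: c(P) = E[x]

def c_P_given(x):
    """c(P | X_i) under Linear Pooling."""
    return w * cP + (1 - w) * x

a = 0.35                                   # sure payoff for rejecting the bet
cY = sum(p for x, p in hyps.items() if x <= a)
cP_given_Y = sum(c_P_given(x) * p for x, p in hyps.items() if x <= a) / cY

EU_not_L = cP                              # i keeps the choice: just the bet's EU
EU_L = sum(c_P_given(x) * p for x, p in hyps.items() if x > a) + a * cY

# The difference is c(Y) * (a - c(P|Y)): i passes the choice iff a >= c(P|Y).
assert abs((EU_L - EU_not_L) - cY * (a - cP_given_Y)) < 1e-9
assert (EU_L >= EU_not_L) == (a >= cP_given_Y)
```

With these particular numbers \(\hbox {a} < \hbox {c}(\hbox {P}{\vert }\hbox {Y})\), so \(\hbox {i}\) keeps the choice; the theorem's point is that both directions are attainable whatever the weight.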
Let \(\hbox {Y}\)* be the set of all propositions of the form “j’s degree of belief in \(\hbox {P}\) is \(\hbox {x}_{\mathrm{i}}\)”, such that \(\hbox {x}_{\mathrm{i}} \le \hbox {a}\), to which \(\hbox {i}\) assigns a positive probability. In other words, \(\hbox {Y}\) is the disjunction of all the propositions in \(\hbox {Y}\)*. We assume that \(\hbox {Y}\)* is countable. We suspect that a theorem similar to the one we present here can be proved for the uncountable case (this becomes clear by considering the rest of the proof). However, our characterization of the LP view is not well-defined in the uncountably infinite case (in which typically each proposition gets a 0 probability), and we believe that the theoretical cost involved in the philosophical interpretation of a characterization which is mathematically well-defined also in the uncountable case outweighs the benefits associated with such a characterization. This is so since a position according to which LP is true only in the case in which \(\hbox {i}\) assigns positive probabilities to uncountably many hypotheses regarding j’s degree of belief in \(\hbox {P}\) seems unmotivated. If LP is true, it must be true also in the countable case (indeed, also in the finite case), it seems to us. Thus, proving the theorem for the countable case is enough for the philosophical goal we have set for ourselves.
Now, since, by definition \(\hbox {Y}\) is the disjunction of all the propositions in \(\hbox {Y}\)* (we assume, of course, that all the propositions in \(\hbox {Y}\)* are disjoint, i.e. that j’s degree of belief in \(\hbox {P}\) is unique):

6.
\(c( P{\vert }Y ) =\frac{\sum _{X_i \in Y^{*}} c( P\wedge X_i ) }{c( Y ) }=\frac{\sum _{X_i \in Y^{*}} c( P{\vert }X_i )\, c( X_i ) }{c( Y ) }\)
However, from LP, for each \(\hbox {X}_{\mathrm{i}}\), \(\hbox {c}(\hbox {P}{\vert }\hbox {X}_{\mathrm{i}}) = \hbox {wc}(\hbox {P}) + (1-\hbox {w})\hbox {x}_{\mathrm{i}}\). Thus:

7.
\(c( P{\vert }Y ) =\frac{\sum _{X_i \in Y^{*}} c( X_i ) ( wc( P ) +( 1-w ) x_i ) }{c( Y ) }=wc( P ) +( 1-w ) \frac{\sum _{X_i \in Y^{*}} c( X_i )\, x_i }{c( Y ) }\)
Let us now define \(Z=\frac{\sum _{X_i \in Y^{*}} c( X_i )\, x_i }{c( Y ) }\) and we get:

8.
\(\hbox {c}(\hbox {P}{\vert }\hbox {Y}) = wc( P ) +( 1-w ) Z\)
Notice that \(\hbox {Z}\) is a weighted average of all the \(\hbox {x}_{\mathrm{i}}\) in \(\hbox {Y}\)* and thus is lower than \(\hbox {a}\) (or equal to it in the case in which \(\hbox {Y}\)* includes only one proposition: “\(\hbox {j}\) believes \(\hbox {P}\) to degree \(\hbox {a}\)”). Similarly, since \(\hbox {Z} \le \hbox {a} < \hbox {c}(\hbox {P})\), \(\hbox {c}(\hbox {P}{\vert }\hbox {Y})\) must be strictly lower than \(\hbox {c}(\hbox {P})\).
We are interested in the expression “\(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y})\)”. We want to show that for some values of \(\hbox {a}\) such that \(\hbox {c}(\hbox {P})> \hbox {a} > 0\) and \(\hbox {c}(\hbox {Y}) > 0\), this expression is positive, and for some values it is negative, independently of \(\hbox {w}\). If this is the case then, independently of the weight, there are always bets such that \(\hbox {EU}(\hbox {L}) > \hbox {EU}(\lnot \hbox {L})\) and bets such that \(\hbox {EU}(\lnot \hbox {L}) > \hbox {EU}(\hbox {L})\).
\(\underline{a - c(P{\vert }Y) < 0}\)
Case 1 If there is an \(\hbox {x}\)* which is the lowest value for j’s degree of belief in \(\hbox {P}\) to which \(\hbox {i}\) assigns a positive probability, set \(\hbox {a}=\hbox {x}\)* (so that \(\hbox {c}(\hbox {Y}) >0\)). From the LP formula we get:

9.
\(\hbox {c}(\hbox {P}{\vert }\hbox {Y}) =\hbox {wc}(\hbox {P}) + (1-\hbox {w})\hbox {a}\)
and since \(\hbox {a} < \hbox {c}(\hbox {P})\), it immediately follows that \(\hbox {c}(\hbox {P}{\vert }\hbox {Y}) > \hbox {a}\), and so \(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y})\) is negative.
Case 2 If there is no \(\hbox {x}\)* which is the lowest value for j’s degree of belief in \(\hbox {P}\) to which \(\hbox {i}\) assigns a positive probability, then pick an \(\hbox {a}\) such that \(\hbox {a} < \hbox {wc}(\hbox {P})\), and then (from equation 8):

10.
\(\hbox {c}(\hbox {P}{\vert }\hbox {Y}) = wc( P ) +( 1-w ) Z > \hbox {a}\)
and trivially \(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y}) < 0\) and \(\hbox {c}(\hbox {Y}) > 0\) (since there is no lowest \(\hbox {x}\)*; see note 21).
\(\underline{a - c(P{\vert }Y) > 0}\):
Let \(\hbox {V}\) be the proposition “j’s degree of belief in \(\hbox {P}\) is lower than \(\hbox {c}(\hbox {P})\)” and let \(\hbox {V}\)* be the set of all propositions of the form “j’s degree of belief in \(\hbox {P}\) is \(\hbox {x}_{\mathrm{i}}\)” such that \(\hbox {x}_{\mathrm{i}}< \hbox {c}(\hbox {P})\) (i.e. \(\hbox {V}\) is the disjunction of all the propositions in \(\hbox {V}\)*). Let us now define \(Z^{*} = \frac{\sum _{X_i \in V^{*}} c( X_i )\, x_i }{c( V ) }\). Notice that \(\hbox {Z}\)* is a weighted average of all the \(\hbox {x}_{\mathrm{i}}\) in \(\hbox {V}\)* and thus is strictly lower than \(\hbox {c}(\hbox {P})\).
Now set \(\hbox {a} = \hbox {wc}(\hbox {P}) + (1-\hbox {w})\hbox {Z}^{*}\). With such an \(\hbox {a}\), it follows from equation 8 that

11.
\(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y}) = ( 1-\hbox {w} ) (\hbox {Z}^{*} - \hbox {Z})\)
Moreover, since \(\hbox {a} < \hbox {c}(\hbox {P})\), every proposition in \(\hbox {Y}\)* is also in \(\hbox {V}\)*, and so \((\hbox {Z}^{*} - \hbox {Z}) \ge 0\).
Case 1 There is an \(\hbox {X}_{\mathrm{i}}\) in \(\hbox {V}^{*}\) such that \(\hbox {x}_{\mathrm{i}} > wc( P ) +( 1-w ) Z^{*}\) (i.e. \(\hbox {x}_{\mathrm{i}} > \hbox {a}\)). In such a case clearly \((\hbox {Z}^{*} - \hbox {Z}) > 0\) and thus \(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y}) > 0\).
Case 2 There is no such \(\hbox {X}_{\mathrm{i}}\). In such a case there exists an \(\hbox {X}_{\mathrm{i}}\) in \(\hbox {V}\)* with the highest \(\hbox {x}_{\mathrm{i}}\) of all the \(\hbox {x}_{\mathrm{i}}\hbox {s}\) in all the \(\hbox {X}_{\mathrm{i}}\hbox {s}\) in \(\hbox {V}\)* (in other words, there exists a hypothesis about j’s degree of belief in \(\hbox {P}\), to which \(\hbox {i}\) assigns a positive probability, that assigns to \(\hbox {j}\) the maximal degree of belief in \(\hbox {P}\) which is strictly lower than \(\hbox {c}(\hbox {P})\)). Let us call this \(\hbox {x}_{\mathrm{i}}\), \(\hbox {x}^{**}\). By the definition of \(\hbox {Y}\), \(\hbox {c}(\hbox {P}{\vert }\hbox {Y})\) is equal for all values of \(\hbox {a}\) as long as \(\hbox {a} > \hbox {x}^{**}\), and so it is always possible to find a high enough \(\hbox {a}\) such that \(\hbox {c}(\hbox {P})> \hbox {a} > \hbox {c}(\hbox {P}{\vert }\hbox {Y})\). In this case \(\hbox {c}(\hbox {Y}) > 0\) is trivially true.
This concludes the proof. \(\square \)
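Both parts of Theorem 1 can be illustrated numerically. The sketch below (with a hypothetical three-point prior of our choosing, not taken from the paper) follows the proof's own constructions: the bet with \(\hbox {a} = \hbox {x}^{*}\) makes \(\hbox {a} - \hbox {c}(\hbox {P}{\vert }\hbox {Y})\) negative, while the bet with \(\hbox {a} = \hbox {wc}(\hbox {P}) + (1-\hbox {w})\hbox {Z}^{*}\) makes it positive, for weights below, at, and above 1/2 alike.

```python
# Hypothetical three-point prior over j's credence in P: x_i -> c(X_i).
hyps = {0.2: 0.3, 0.5: 0.4, 0.9: 0.3}
cP = sum(x * p for x, p in hyps.items())   # coherence under LP forces c(P) = E[x]

def cP_given_Y(a, w):
    """c(P | j's credence <= a), computed from LP as in equation 8."""
    cY = sum(p for x, p in hyps.items() if x <= a)
    return sum((w * cP + (1 - w) * x) * p
               for x, p in hyps.items() if x <= a) / cY

for w in (0.3, 0.5, 0.6):   # i's self-weight below, at, and above 1/2
    # Part 2's direction: with a = x* (the lowest credence i deems possible),
    # a < c(P|Y), so i pays a positive amount to keep the choice himself.
    a_keep = min(hyps)
    assert a_keep < cP_given_Y(a_keep, w)

    # Part 1's direction: with a = w*c(P) + (1-w)*Z* (as set in the proof),
    # c(P) > a > c(P|Y), so i pays a positive amount to pass the choice to j.
    mass = sum(p for x, p in hyps.items() if x < cP)
    Zstar = sum(x * p for x, p in hyps.items() if x < cP) / mass
    a_pass = w * cP + (1 - w) * Zstar
    assert cP > a_pass > cP_given_Y(a_pass, w)
```

The loop exhibits, for every weight, one bet of each kind, which is exactly the weight-independence the theorem asserts.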
Cite this article
Nissan-Rozen, I., Spectre, L. A pragmatic argument against equal weighting. Synthese 196, 4211–4227 (2019). https://doi.org/10.1007/s11229-017-1651-1
Keywords
 Linear pooling
 Peer disagreement
 The equal weight view
 Bayesian conditionalization
 Expected utility maximization