## Abstract

How should an agent revise her epistemic state in the light of doxastic disagreement? The problems associated with answering this question arise under the assumption that an agent’s epistemic state is best represented by her degree of belief function alone. We argue that for modeling cases of doxastic disagreement an agent’s epistemic state is best represented by her confirmation commitments and the evidence available to her. Finally, we argue that given this position it is possible to provide an adequate answer to the question of how to rationally revise one’s epistemic state in the light of disagreement.


## Notes

- 1.
- 2.
The epistemic compromise of the group need not be interpreted as an epistemic state in the sense of a mental state. One might rather interpret it as a kind of disposition to act.

- 3.
The definition of peerhood, and, more generally, of how to assign different weights to the opinions of epistemic agents, is very problematic. In their careful and valuable discussion of the equal weight view, Jehle and Fitelson suggest that “two agents, \(a_1\) and \(a_2\), are epistemic peers regarding a proposition \(A\): that is, [...] \(a_1\) and \(a_2\) are equally competent, equally impartial, and equally able to evaluate and assess the relevant evidence regarding \(A\)” (Jehle and Fitelson 2009: p. 280; notation adapted). Even though this is not a completely satisfying definition and it does not provide exact weights for all agents involved in a doxastic disagreement, this characterization of peerhood is among the best we have. For Bayesians, however, the definition of peerhood is even more problematic. Many Bayesians hold that all a priori credence functions are equally acceptable. According to Hájek (2011), e.g., “Orthodox Bayesians in the style of de Finetti recognize no rational constraints on subjective probabilities beyond conformity to the probability calculus, and a rule for updating probabilities in the face of new evidence, known as conditioning.” Thus, all probabilistic agents should be deemed equally competent and therefore all probabilistic agents should treat each other as peers with respect to every proposition. If this is correct, one wonders, couldn’t we construct a justification of objective Bayesianism *à la* Williamson (2010) on the basis of this assumption: before receiving any evidence we should consider all possible agents as peers, independently of which credence function they choose. Then, if we arrive at a joint credence function with all possible probabilistic agents, we end up with the “objective degree of belief that every agent should have”. (A similar remark was made by Lorenzo Casini in conversation.) Finally, we update these objective credences “in the face of new evidence”.

- 4.
- 5.
Requirement (U) is discussed in Allard et al. (2012). Other authors discuss weaker unanimity requirements. For example, Wagner (2010) restricts this requirement to hold not for all propositions, but only for possible worlds: \(AR[\Pr ^{old}_{C_{a_1}},\ldots , \Pr ^{old}_{C_{a_n}}](w)=r \), if \(\Pr ^{old}_{C_{a_i}}(w)=r\), for all \(a_i \in G\). For the results mentioned and presented in the present paper the difference between (U) and weaker unanimity requirements is irrelevant.
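Wagner’s world-restricted unanimity requirement can be illustrated with a small numerical sketch. The function name `linear_pool` and the toy credence functions below are ours, not the paper’s; the weighted linear pool merely serves as one example of an aggregation rule satisfying the requirement: if every agent assigns credence \(r\) to a world, the pooled credence of that world is \(r\).

```python
# Illustrative sketch (names are ours): a weighted linear opinion pool
# satisfies the world-restricted unanimity requirement -- if every agent
# assigns credence r to a world w, the pooled credence of w is r.

def linear_pool(dists, weights):
    """Weighted arithmetic aggregation of credence functions over shared worlds."""
    return {w: sum(wt * d[w] for d, wt in zip(dists, weights))
            for w in dists[0]}

pr_a1 = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
pr_a2 = {"w1": 0.5, "w2": 0.1, "w3": 0.4}  # agrees with pr_a1 only on w1

pooled = linear_pool([pr_a1, pr_a2], [0.5, 0.5])
assert pooled["w1"] == 0.5  # the unanimous credence is preserved
```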

- 6.
The name of this requirement is inspired by Wagner (2010), who calls it *Commutativity with Conditionalization*. Wagner (2010) actually discusses a stronger requirement, according to which an adequate aggregation rule should commute with Jeffrey Conditionalization. For simplicity, we discuss the weaker requirement here.

- 7.
It is noteworthy that (No-ZP) is not the negation of the weak zero preservation property discussed in Genest and Zidek (1986). This latter requirement states that if all agents have credence 0 in a proposition, then the aggregated credence is 0 too. This requirement is implied by (U). (No-ZP) is the negation of a strong zero preservation property, which requires that if one of the members of the group has credence 0 in a proposition, then all agents should adopt credence 0. This strong zero preservation property is not implied by (U). Allard et al. (2012) discuss and defend a related requirement, the 0/1-forcing property, which is the negation of our (No-ZP).
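The difference between the two zero preservation properties can be seen in a small numerical sketch (ours; `linear_pool` stands in for a rule satisfying (U), and `geometric_pool` for logarithmic pooling): linear pooling yields 0 only when *all* agents assign 0 (weak zero preservation), whereas geometric pooling yields 0 as soon as *one* agent assigns 0, which is exactly the behavior (No-ZP) rules out.

```python
# Weak vs. strong zero preservation, illustrated with scalar credences.

def linear_pool(ps, wts):
    """Weighted arithmetic average of the individual credences."""
    return sum(w * p for p, w in zip(ps, wts))

def geometric_pool(ps, wts):
    """Weighted geometric combination (unnormalized; a 0 stays 0 after normalizing)."""
    prod = 1.0
    for p, w in zip(ps, wts):
        prod *= p ** w
    return prod

# one agent has credence 0, the other 0.8
assert linear_pool([0.0, 0.8], [0.5, 0.5]) == 0.4     # no strong zero preservation
assert geometric_pool([0.0, 0.8], [0.5, 0.5]) == 0.0  # strong zero preservation
```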

- 8.
To be precise, Jehle and Fitelson claim this for specific rules on how to revise one’s credence in the light of peer disagreement, so-called *equal weight rules*. We return to these rules in Sect. 2.2.3.

- 9.
- 10.
*The Logarithmic Opinion Pooling Operator* can be rewritten as follows, using the logarithms of the respective probabilities and using \(\ln \big [\sum _{\omega \in \Omega }\prod _{i=1}^n \Pr ^{old}_{C_{a_i}}(\omega )^{w^G_i}\big ]\) as a normalizing constant:

(GM\(^\prime \)) If a group of agents \(G = \{a_1, \ldots , a_n\}\) aggregates their old credence functions, \(\Pr ^{old}_{C_{a_1}}, \ldots , \Pr ^{old}_{C_{a_n}}\), then their aggregated credence function, \(GM^\prime [\Pr ^{old}_{C_{a_1}}, \ldots , \Pr ^{old}_{C_{a_n}}]\), can be calculated as follows:

$$\begin{aligned}&\textit{For all } \omega \in \Omega : \ln \big [GM^\prime [\Pr ^{old}_{C_{a_1}}, \ldots , \Pr ^{old}_{C_{a_n}}](\omega )\big ]=\sum _{i=1}^n w^G_i\times \ln \big [\Pr ^{old}_{C_{a_i}}(\omega )\big ]\\&\quad -\ln \big [\sum _{\omega \in \Omega }\prod _{i=1}^n \Pr ^{old}_{C_{a_i}}(\omega )^{w^G_i}\big ] \end{aligned}$$

where \(w^G_i\in \mathbb {R}^+\) and \(\sum _{i=1}^n w^G_i=1\).
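The identity behind the log-space form can be verified numerically: computing the geometric pool directly and computing it via logarithms give the same result. The following sketch is ours; function names and toy credence functions are illustrative, not from the paper.

```python
import math

def geometric_pool(dists, weights):
    """Normalized geometric (logarithmic) opinion pool over a finite set of worlds."""
    unnorm = {w: math.prod(d[w] ** wt for d, wt in zip(dists, weights))
              for w in dists[0]}
    z = sum(unnorm.values())  # the normalizing constant
    return {w: v / z for w, v in unnorm.items()}

def geometric_pool_log(dists, weights):
    """The same pool computed via logarithms, in the spirit of the log-space form."""
    log_z = math.log(sum(math.prod(d[w] ** wt for d, wt in zip(dists, weights))
                         for w in dists[0]))
    return {w: math.exp(sum(wt * math.log(d[w]) for d, wt in zip(dists, weights)) - log_z)
            for w in dists[0]}

pr_a1 = {"w1": 0.6, "w2": 0.3, "w3": 0.1}
pr_a2 = {"w1": 0.2, "w2": 0.5, "w3": 0.3}

direct = geometric_pool([pr_a1, pr_a2], [0.5, 0.5])
via_logs = geometric_pool_log([pr_a1, pr_a2], [0.5, 0.5])
assert all(abs(direct[w] - via_logs[w]) < 1e-12 for w in direct)
```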

- 11.
One might even argue that, in the light of this observation, Jehle and Fitelson (2009) do not actually suggest a proper aggregation rule, but rather a further normative requirement on aggregation rules. We just want to flag this issue, without discussing it further.

- 12.
The most intuitive approach to resolving this problem is to redefine doxastic disagreement. In particular, one could circumvent this objection by defining that two agents \(a_1\) and \(a_2\) are in doxastic disagreement with respect to some proposition \(A\) iff \(\Pr _{C_{a_1}}(A)\not \approx \Pr _{C_{a_2}}(A)\).

- 13.
Genest et al. (1986) discuss *(Generalized) Logarithmic Opinion Pooling Operators* for probability density functions over continuous variables. In the present context, we concentrate on probability functions over finite sets of possible worlds. Accordingly, we follow Wagner (2010) and Allard et al. (2012) in our presentation of *(Generalized) Logarithmic Opinion Pooling Operators*.

- 14.
In the “Appendix” we prove that aggregation rules of the form of (GLP) do not satisfy (C).

- 15.
An anonymous referee of this journal encouraged us to discuss in more detail why we suggest that adding \(a_2\)’s evidence to \(a_1\)’s is best understood as taking the intersection of their evidence pools rather than, for example, the union. The reason is that, given our setup, an agent’s evidence is represented by a proposition understood as a set of possible worlds. The intersection of two propositions is a logically stronger proposition, namely their conjunction; the union is a logically weaker proposition, namely their disjunction. Intuitively, adding \(a_2\)’s evidence to \(a_1\)’s evidence should result in a logically stronger proposition, and this is why we suggest taking the intersection instead of the union. Thus, given our assumption that agent \(a_1\) wants to add \(a_2\)’s evidence to her own, the intersection, i.e., the conjunction, is the natural choice. However, suppose agent \(a_2\)’s evidence \(E_2\) is logically inconsistent with \(a_1\)’s evidence \(E_1\); then the straightforward option, i.e., taking the conjunction of both agents’ evidence, is not available. Currently, we are not in the position to give a complete and well-founded reply to this problem. Tentatively, we suggest that if agent \(a_1\) wants to add \(a_2\)’s evidence \(E_2\) to hers even though it is logically inconsistent with her evidence \(E_1\), then \(a_1\) should use the belief merging operator suggested in Konieczny and Pino Pérez (2011) instead of taking the conjunction or disjunction of \(E_1\) and \(E_2\). This would presuppose, however, that we do not represent an agent’s evidence by a single proposition, but by a set of propositions.
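The point about logical strength can be made concrete with sets of possible worlds (the toy worlds below are ours): intersection yields the conjunction, which entails both pieces of evidence, while union yields the disjunction, which both entail.

```python
# Evidence as sets of possible worlds: intersection = conjunction (stronger),
# union = disjunction (weaker). Entailment is the subset relation.

E1 = {"w1", "w2", "w3"}   # a1's evidence
E2 = {"w2", "w3", "w4"}   # a2's evidence

conjunction = E1 & E2      # logically stronger: entails both E1 and E2
disjunction = E1 | E2      # logically weaker: entailed by both E1 and E2

assert conjunction <= E1 and conjunction <= E2
assert E1 <= disjunction and E2 <= disjunction
```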

- 16.
The proof can be found in the “Appendix”.

- 17.
The proof can be found in the “Appendix”.

## References

Abbas, A. (2009). A Kullback-Leibler view of linear and log-linear pools. *Decision Analysis*, *6*, 25–37.

Allard, D., Comunian, A., & Renard, P. (2012). Probability aggregation methods in geoscience. *Mathematical Geosciences*, *44*, 545–581.

Brössel, P. (2012). *Rethinking Bayesian confirmation theory: Steps towards a new theory of confirmation*. PhD dissertation, University of Konstanz.

Christensen, D. (2009). Disagreement as evidence: The epistemology of controversy. *Philosophy Compass*, *4*, 756–767.

Genest, C., McConway, K., & Schervish, M. (1986). Characterization of externally Bayesian pooling operators. *The Annals of Statistics*, *14*, 487–501.

Genest, C., & Wagner, C. (1987). Further evidence against independence preservation in expert judgement synthesis. *Aequationes Mathematicae*, *32*, 74–86.

Genest, C., & Zidek, J. (1986). Combining probability distributions: A critique and annotated bibliography. *Statistical Science*, *1*, 114–135.

Hájek, A. (2011). Interpretations of probability. In E. N. Zalta (Ed.), *Stanford encyclopedia of philosophy*.

Jeffrey, R. (1987). Indefinite probability judgment. *Philosophy of Science*, *54*, 586–591.

Jehle, D., & Fitelson, B. (2009). What is the “equal weight view”? *Episteme*, *6*, 280–293.

Konieczny, S., & Pino Pérez, R. (2011). Logic based merging. *Journal of Philosophical Logic*, *40*, 239–270.

Laddaga, R. (1977). Lehrer and the consensus proposal. *Synthese*, *36*, 473–477.

Lange, M. (1999). Calibration and the epistemological role of Bayesian conditionalization. *The Journal of Philosophy*, *96*, 294–324.

Lehrer, K., & Wagner, C. (1983). Probability amalgamation and the independence issue: A reply to Laddaga. *Synthese*, *55*, 339–346.

Levi, I. (1980). *The enterprise of knowledge*. Cambridge: MIT Press.

McConway, K. (1981). Marginalization and linear opinion pools. *Journal of the American Statistical Association*, *76*, 410–414.

Raiffa, H. (1968). *Decision analysis: Introductory lectures on choices under uncertainty*. Reading: Addison-Wesley.

Schurz, G. (2012). Tweety, or why probabilism and even Bayesianism need objective and evidential probabilities. In D. Dieks et al. (Eds.), *Probabilities, laws and structures* (pp. 57–74). New York: Springer.

Unterhuber, M., & Schurz, G. (2013). The new Tweety puzzle: Arguments against monistic Bayesian approaches in epistemology and cognitive science. *Synthese*, *190*, 1407–1435.

Wagner, C. (1982). Allocation, Lehrer models, and the consensus of probabilities. *Theory and Decision*, *14*, 207–220.

Wagner, C. (2010). Jeffrey conditioning and external Bayesianity. *Logic Journal of the IGPL*, *18*, 336–345.

Williamson, J. (2010). *In defence of objective Bayesianism*. Oxford: Oxford University Press.

## Acknowledgments

Early versions of this paper were presented at conferences and workshops in Bochum (*Recent Debates in Epistemology*), Lund (*CPH LU Workshop on Social Epistemology*), and Salzburg (*SOPhia 2013*), and at the Tilburg Center for Philosophy of Science. We thank the audiences for their insightful comments on various versions of the paper. We are also grateful to the MCMP (Munich Center for Mathematical Philosophy) reading group on social epistemology for fruitful discussion of Jehle and Fitelson’s paper. Special thanks go to Lorenzo Casini, Stephan Hartmann, Albert Newen, Carlo Proietti, Gerhard Schurz, Jan Sprenger, and Frank Zenker. We would also like to thank two anonymous referees for very helpful comments on an earlier version of this paper. Anna-Maria A. Eder’s research on this paper was partly funded by a fellowship (Stipendium nach dem Landesgraduiertenförderungsgesetz) sponsored by the State of Baden-Württemberg (Germany). Peter Brössel’s research was supported by a Visiting Fellowship at the Tilburg Center for Philosophy of Science.


## Appendix

### Proof: (GLP) does not satisfy (C)

Suppose \(\Omega =\{\omega _1,\omega _2,\omega _3\}\) and the following probability distributions \(\Pr _{a_1}\) and \(\Pr _{a_2}\) over \(\Omega \):

|               | \(\omega_1\)    | \(\omega_2\)    | \(\omega_3\)    |
|---------------|-----------------|-----------------|-----------------|
| \(\Pr_{a_1}\) | \(\frac{2}{3}\) | \(\frac{1}{3}\) | \(0\)           |
| \(\Pr_{a_2}\) | \(\frac{1}{3}\) | \(0\)           | \(\frac{2}{3}\) |

According to the above table, \(\Pr _{a_1}(\{\omega _2,\omega _3\})=\frac{1}{3}\) and \(\Pr _{a_2}(\{\omega _2,\omega _3\})=\frac{2}{3}\). Thus, an aggregation rule \(AR\) that satisfies (C) has the following property: \(\Pr _{a_1}(\{\omega _2,\omega _3\})\le AR[\Pr _{a_1}, \Pr _{a_2}](\{\omega _2,\omega _3\}) \le \Pr _{a_2}(\{\omega _2,\omega _3\})\). If the aggregation rule \(AR\) satisfies (ASAMC), then \(\Pr _{a_1}(\{\omega _2,\omega _3\}) < AR[\Pr _{a_1}, \Pr _{a_2}](\{\omega _2,\omega _3\}) < \Pr _{a_2}(\{\omega _2,\omega _3\})\), because of its strict betweenness requirement. However, rules of the form (GLP) cannot be such aggregation rules, except dictatorially by setting the weight of one of the agents, i.e., \(w^G_1\) or \(w^G_2\), to 0. According to the definition:

$$\begin{aligned} GLP[\Pr _{a_1}, \Pr _{a_2}](\omega )=\frac{g(\omega )\times \prod _{i=1}^2 \Pr _{a_i}(\omega )^{w^G_i}}{\sum _{\omega '\in \Omega } g(\omega ')\times \prod _{i=1}^2 \Pr _{a_i}(\omega ')^{w^G_i}} \end{aligned}$$

where \(w^G_i\in \mathbb {R}\), \(\sum _{i=1}^2 w^G_i=1\), and \(g\) is some arbitrary bounded function with \(g(\omega )\in \mathbb {R}\). Thus, if neither \(w^G_1\) nor \(w^G_2\) is set to 0, \({ GLP}[\Pr _{a_1}, \Pr _{a_2}](\omega )=0\) for all \(\omega \in \{\omega _2,\omega _3\}\), since either \(\Pr _{a_1}(\omega )=0\) or \(\Pr _{a_2}(\omega )=0\) for each \(\omega \in \{\omega _2,\omega _3\}\). This implies that \({ GLP}[\Pr _{a_1}, \Pr _{a_2}](\{\omega _2,\omega _3\})=0\) and therefore that it is not the case that \(\Pr _{a_1}(\{\omega _2,\omega _3\})\le GLP[\Pr _{a_1}, \Pr _{a_2}](\{\omega _2,\omega _3\}) \le \Pr _{a_2}(\{\omega _2,\omega _3\})\). Thus, (GLP) satisfies neither (C) nor (ASAMC). Note that even if we set the weight of one of the agents to 0, (GLP) would still not satisfy (ASAMC), though it would then satisfy (C).
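The collapse to 0 can be checked numerically. In the following sketch (ours; the concrete distributions are one pair satisfying the constraints the proof relies on, with \(g \equiv 1\)), equal nonzero weights drive the pooled credence of \(\{\omega_2,\omega_3\}\) to 0, outside the interval \([\frac{1}{3},\frac{2}{3}]\) that (C) requires:

```python
import math

def glp(dists, weights, g=lambda w: 1.0):
    """Generalized logarithmic pool with bounded g (a sketch of (GLP))."""
    unnorm = {w: g(w) * math.prod(d[w] ** wt for d, wt in zip(dists, weights))
              for w in dists[0]}
    z = sum(unnorm.values())
    return {w: v / z for w, v in unnorm.items()}

# Pr_a1({w2, w3}) = 1/3, Pr_a2({w2, w3}) = 2/3, and each of w2, w3
# receives credence 0 from one of the two agents.
p1 = {"w1": 2/3, "w2": 1/3, "w3": 0.0}
p2 = {"w1": 1/3, "w2": 0.0, "w3": 2/3}

pooled = glp([p1, p2], [0.5, 0.5])
agg = pooled["w2"] + pooled["w3"]
assert agg == 0.0                  # GLP collapses {w2, w3} to credence 0
assert not (1/3 <= agg <= 2/3)     # so the betweenness required by (C) fails
```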

### Proof: (AM*) does not satisfy (IA)

In order to show that (AM*) does not satisfy (IA) we have to show that the aggregated credence is not a function of the credences of the individual agents. For this purpose, let \(G\) be a group with agents \(a_1\) and \(a_2\), and suppose that \(\Pr _{C_{a_1}}(A)=\Pr _{conf_{a_1}}(A|E)=\frac{1}{2}\), \(\Pr _{C_{a_2}}(A)=\Pr _{conf_{a_2}}(A|E)=\frac{1}{4}\), and \(w^G_1=w^G_2=\frac{1}{2}\).

In *Scenario I* suppose (\(i\)) \(\Pr _{conf_{a_1}}(E)=\frac{1}{2}\) and \(\Pr _{conf_{a_2}}(E)=\frac{1}{4}\), (\(ii\)) \(\Pr _{conf_{a_1}}(A\cap E)=\frac{1}{4}\) and \(\Pr _{conf_{a_2}}(A\cap E)=\frac{1}{16}\). Therefore, the agents’ new credence equals the conditional aggregated credence \(AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](A|E)=\frac{AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](A\cap E)}{AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](E)}=\frac{5}{12}\).

In *Scenario II* suppose (\(i\)) \(\Pr _{conf_{a_1}}(E)=\frac{1}{2}\) and \(\Pr _{conf_{a_2}}(E)=\frac{1}{2}\), (\(ii\)) \(\Pr _{conf_{a_1}}(A\cap E)=\frac{1}{4}\) and \(\Pr _{conf_{a_2}}(A\cap E)=\frac{1}{8}\). Therefore, the new credence of the agents equals the conditional aggregated credences \(AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](A|E)=\frac{AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](A\cap E)}{AM[\Pr _{conf_{a_1}},\Pr _{conf_{a_2}}](E)}=\frac{3}{8}\).

Thus, even though in both scenarios \(\Pr _{C_{a_1}}(A)=\Pr _{conf_{a_1}}(A|E)=\frac{1}{2}\) and \(\Pr _{C_{a_2}}(A)=\Pr _{conf_{a_2}}(A|E)=\frac{1}{4}\), the aggregated credence differs between the scenarios. This shows that the aggregated credence is not a function of the credences of the individual agents. The reason for this result is that the agents’ confirmation commitments differ in different ways in the two scenarios. After first aggregating their confirmation commitments (in \(E\) and \(A\cap E\)), they therefore arrive at different aggregated credences in the two scenarios.
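The two scenarios can be recomputed exactly with rational arithmetic; this sketch (ours) reproduces the values \(\frac{5}{12}\) and \(\frac{3}{8}\):

```python
from fractions import Fraction as F

def am(values, weights):
    """Weighted arithmetic aggregation, a sketch of the (AM) operator."""
    return sum(w * v for v, w in zip(values, weights))

wts = [F(1, 2), F(1, 2)]

# Scenario I: Pr_conf(E) = (1/2, 1/4), Pr_conf(A ∩ E) = (1/4, 1/16)
s1 = am([F(1, 4), F(1, 16)], wts) / am([F(1, 2), F(1, 4)], wts)

# Scenario II: Pr_conf(E) = (1/2, 1/2), Pr_conf(A ∩ E) = (1/4, 1/8)
s2 = am([F(1, 4), F(1, 8)], wts) / am([F(1, 2), F(1, 2)], wts)

assert s1 == F(5, 12) and s2 == F(3, 8)
assert s1 != s2  # same individual credences in A, different aggregated credence
```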

### Proof: (AM*) satisfies (C)

Let \(G\) be a group with agents \(a_1, \ldots , a_m\), confirmation commitments \(\Pr _{conf_{a_1}}, \ldots , \Pr _{conf_{a_m}}\), and weights \(w^G_{1}, \ldots , w^G_{m}\), where \(\sum _{j=1}^m w^G_j=1\) and \(w^G_i\ge 0\) for all \(i\). We have to show that \(\min \left\{ \frac{\Pr _{conf_{a_i}}(H\cap E)}{\Pr _{conf_{a_i}}(E)}|a_i\in G\right\} \le \frac{AM[\Pr _{conf_{a_1}}, \ldots \Pr _{conf_{a_m}}](H\cap E)}{AM[\Pr _{conf_{a_1}}, \ldots \Pr _{conf_{a_m}}](E)}\le \max \left\{ \frac{\Pr _{conf_{a_i}}(H\cap E)}{\Pr _{conf_{a_i}}(E)}|a_i\in G\right\} \). Without loss of generality let us assume that \(\frac{\Pr _{conf_{a_1}}(H\cap E)}{\Pr _{conf_{a_1}}(E)}=\min \left\{ \frac{\Pr _{conf_{a_i}}(H\cap E)}{\Pr _{conf_{a_i}}(E)}|a_i\in G\right\} \) and that \(\frac{\Pr _{conf_{a_m}}(H\cap E)}{\Pr _{conf_{a_m}}(E)}=\max \left\{ \frac{\Pr _{conf_{a_i}}(H\cap E)}{\Pr _{conf_{a_i}}(E)}|a_i\in G\right\} \).

We know by this assumption that for all \(a_i\in G\): \(\frac{\Pr _{conf_{a_1}}(H\cap E)}{\Pr _{conf_{a_1}}(E)}\le \frac{\Pr _{conf_{a_{i}}}(H\cap E)}{\Pr _{conf_{a_{i}}}(E)}\le \frac{\Pr _{conf_{a_m}}(H\cap E)}{\Pr _{conf_{a_m}}(E)}\) and thus by simple arithmetic that (i) \(\Pr _{conf_{a_1}}(H\cap E)\times \Pr _{conf_{a_{i}}}(E)\le \Pr _{conf_{a_1}}(E)\times \Pr _{conf_{a_{i}}}(H\cap E)\) and (ii) \(\Pr _{conf_{a_m}}(E)\times \Pr _{conf_{a_{i}}}(H\cap E)\le \Pr _{conf_{a_m}}(H\cap E)\times \Pr _{conf_{a_{i}}}( E)\).

In a first step (i) implies that for all \(a_i\in G\): \(\Pr _{conf_{a_1}}(H\cap E)\times [w^G_{i}\Pr _{conf_{a_{i}}}(E)]\le \Pr _{conf_{a_1}}(E)\times [w^G_{i}\Pr _{conf_{a_{i}}}(H\cap E)]\). In a second step we can conclude that \(\sum _{j=1}^m\Pr _{conf_{a_1}}(H\cap E)\times [w^G_{j}\Pr _{conf_{a_{j}}}(E)]\le \sum _{j=1}^m\Pr _{conf_{a_1}}(E)\times [w^G_{j}\Pr _{conf_{a_{j}}}(H\cap E)]\). In a third step this implies that \(\Pr _{conf_{a_1}}(H\cap E)\times \sum _{j=1}^m [w^G_{j}\Pr _{conf_{a_{j}}}(E)]\le \Pr _{conf_{a_1}}(E)\times \sum _{j=1}^m [w^G_{j}\Pr _{conf_{a_{j}}}(H\cap E)]\), which implies in the fourth step as desired that \(\frac{\Pr _{conf_{a_1}}(H\cap E)}{\Pr _{conf_{a_1}}(E)} \le \frac{\sum _{j=1}^{m} \big [w^G_j\times \Pr _{conf_{a_j}}(H\cap E)\big ]}{\sum _{j=1}^{m} \big [w^G_j\times \Pr _{conf_{a_j}}(E)\big ]}\).

Similarly, in a first step (ii) implies that for all \(a_i\in G\): \(\Pr _{conf_{a_m}}(E)\times [w^G_{i}\Pr _{conf_{a_{i}}}(H\cap E)]\le \Pr _{conf_{a_m}}(H\cap E)\times [w^G_{i}\Pr _{conf_{a_{i}}}( E)]\). In a second step we can conclude that \(\sum _{j=1}^m\Pr _{conf_{a_m}}(E)\times [w^G_{j}\Pr _{conf_{a_{j}}}(H\cap E)]\le \sum _{j=1}^m\Pr _{conf_{a_m}}(H\cap E)\times [w^G_{j}\Pr _{conf_{a_{j}}}( E)]\). In the third step we can infer that \(\Pr _{conf_{a_m}}(E)\times \sum _{j=1}^m [w^G_{j}\Pr _{conf_{a_{j}}} (H\cap E)]\le \Pr _{conf_{a_m}}(H\cap E)\times \sum _{j=1}^m[w^G_{j}\Pr _{conf_{a_{j}}}( E)]\). Finally, in the fourth step we can conclude as desired that \(\frac{\sum _{j=1}^{m} \big [w^G_j\times \Pr _{conf_{a_j}}(H\cap E)\big ]}{\sum _{j=1}^{m} \big [w^G_j\times \Pr _{conf_{a_j}}(E)\big ]}\le \frac{\Pr _{conf_{a_m}}(H\cap E)}{\Pr _{conf_{a_m}}(E)}\).
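The betweenness property just proved can also be spot-checked numerically. The following sketch (ours, with randomly generated confirmation commitments) confirms that the aggregated conditional credence always lies between the smallest and the largest individual conditional credence:

```python
import random

def am_star_conditional(num, den, weights):
    """Aggregate Pr_conf(H ∩ E) and Pr_conf(E) linearly, then conditionalize."""
    return (sum(w * p for p, w in zip(num, weights))
            / sum(w * p for p, w in zip(den, weights)))

random.seed(0)
for _ in range(1000):
    m = random.randint(2, 5)
    raw = [random.random() + 1e-6 for _ in range(m)]
    weights = [r / sum(raw) for r in raw]               # positive, sum to 1
    den = [random.uniform(0.1, 1.0) for _ in range(m)]  # Pr_conf_{a_i}(E)
    num = [random.uniform(0.0, d) for d in den]         # Pr_conf_{a_i}(H ∩ E)
    conds = [n / d for n, d in zip(num, den)]           # individual conditionals
    agg = am_star_conditional(num, den, weights)
    assert min(conds) - 1e-9 <= agg <= max(conds) + 1e-9
```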


## About this article

### Cite this article

Brössel, P., & Eder, A.-M. A. (2014). How to resolve doxastic disagreement. *Synthese*, *191*, 2359–2381. https://doi.org/10.1007/s11229-014-0431-4


### Keywords

- Bayesian epistemology
- epistemic disagreement
- probability aggregation
- social epistemology