Abstract
Rédei and Gyenis recently identified strong constraints on Bayesian learning (in the form of Jeffrey conditioning). However, they also presented a positive result for Bayesianism. Despite the limited significance of this positive result, I find it useful to discuss two possible strengthenings of it, in order to present new results and open new questions about the limits of Bayesianism. First, I will show that one cannot strengthen the positive result by restricting the evidence to so-called “certain evidence”. Secondly, strengthening the result by restricting the partitions—as parts of one’s evidence—to Jeffrey-independent partitions requires additional constraints on one’s evidence to preserve its commutativity. So, my results provide additional grounds for caution and support for the limitations of Bayesian learning.
Notes
It may help to say that, to me, the idea of a conservative learner bears a lot of similarity to an ur-conditionaliser with a stable ur-prior; e.g., see Meacham (2016) for a definition and a thorough discussion of ur-conditionalisation. I focus primarily on a bold learner in this reply, so I will not consider this similarity any further, but I wanted to mention it as a point of reference.
There are two caveats. First, assume that an agent learns \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) for which \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(q_{{\mathcal {E}}}(E_{i\ne k})=0\) for all \(E_{i}\in {\mathcal {E}}\). If \(E_{i=k}\) is a singleton, then the ratios are trivially broken: the posterior probability of the single element in \(E_{i=k}\) is 1 and every other element of \(\Omega \) gets the probability of 0. Secondly, if \(E_{i=k}\) is not a singleton, the ratios are also trivially broken for \(\omega \in \Omega \) that have been excluded by learnt evidence since their probability is now 0.
References
Billingsley, P. (1995). Probability and measure (3rd ed.). Wiley.
Diaconis, P., & Zabell, S. L. (1982). Updating subjective probability. Journal of the American Statistical Association, 77(380), 822–830.
Field, H. (1978). A note on Jeffrey conditionalization. Philosophy of Science, 45(3), 361–367.
Lange, M. (2000). Is Jeffrey conditionalization defective by virtue of being non-commutative? Remarks on the sameness of sensory experiences. Synthese, 123(3), 393–403.
Meacham, C. J. G. (2016). Ur-priors, conditionalization, and ur-prior conditionalization. Ergo, 3(17), 444–489.
Rédei, M., & Gyenis, Z. (2017). General properties of Bayesian learning as statistical inference determined by conditional expectations. Review of Symbolic Logic, 10(4), 719–755.
Rédei, M., & Gyenis, Z. (2021). Having a look at the Bayes blind spot. Synthese, 198, 3801–3832.
Rosenthal, J. S. (2006). A first look at rigorous probability theory (2nd ed.). World Scientific Publishing Co.
Tao, T. (2011). An introduction to measure theory. AMS.
Wagner, C. G. (2002). Probability kinematics and commutativity. Philosophy of Science, 69(2), 266–278.
Weisberg, J. (2009). Commutativity or holism? A dilemma for conditionalizers. The British Journal for the Philosophy of Science, 60(4), 393–403.
Williams, D. (1991). Probability with martingales (1st ed.). CUP.
Acknowledgements
I am grateful to my colleagues from the Czech Academy of Sciences, the LoPSE group at the University of Gdańsk, and the University of Bristol for their comments on various stages of my paper. Thanks to Miklós Rédei and Zalán Gyenis for answering my questions about their paper. Thanks to anonymous referees for their very useful comments, to the editors, and to Jonáš Gray for linguistic advice.
Funding
I confirm that the work on this paper was supported by the Formal Epistemology – the Future Synthesis grant, in the framework of the Praemium Academicum programme of the Czech Academy of Sciences. The funding source had no involvement in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
A Proofs for Section 3 (Limitations of the two-step strategy)
Proposition 1
Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), and \(E_{i=k}\in {\mathcal {E}}\) be given. If \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(E_{i=k}\) is not a singleton, then the ratios in Eq. 3, established by the faithful prior p of a bold Bayesian agent, remain constant for any \(\omega _{i},\omega _{j}\in E_{i=k}\).
Proof
Assume that a bold agent learns \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) such that \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(q_{{\mathcal {E}}}(E_{i\ne k})=0\) for all \(E_{i}\in {\mathcal {E}}\). Assume that \(E_{i=k}\) is not a singleton (see Footnote 3) and \(p(E_{i=k})\ne 0\). Consequently, \(\nicefrac {q_{{\mathcal {E}}}(E_{i=k})}{p(E_{i=k})}=\nicefrac {1}{p(E_{i=k})}\) and \(\nicefrac {q_{{\mathcal {E}}}(E_{i\ne k})}{p(E_{i\ne k})}=0\). By additivity, I can discuss elements of any \(E_{i}\). The prior probability of every \(\omega \in E_{i\ne k}\) is multiplied by 0, so I can ignore it. The prior probability of every \(\omega \in E_{i=k}\) is multiplied by the scalar \(\nicefrac {1}{p(E_{i=k})}\). Assume that \(\omega _{i},\omega _{j}\in E_{i=k}\); then the ratio of the priors, \(p(\{\omega _{i}\})\) and \(p(\{\omega _{j}\})\), is the same as the ratio of the posteriors, \(q(\{\omega _{i}\})\) and \(q(\{\omega _{j}\})\):
$$\begin{aligned} \frac{q(\{\omega _{i}\})}{q(\{\omega _{j}\})}=\frac{\nicefrac {1}{p(E_{i=k})}\,p(\{\omega _{i}\})}{\nicefrac {1}{p(E_{i=k})}\,p(\{\omega _{j}\})}=\frac{p(\{\omega _{i}\})}{p(\{\omega _{j}\})}. \end{aligned}$$
Assume that q becomes the agent’s new prior. Further, assume that she learns new certain evidence \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), i.e., \(r_{{\mathcal {F}}}(F_{i=k})=1\), and \(F_{i=k}\) is not a singleton. Further, assume that \(\omega _{i},\omega _{j}\in F_{i=k}\) and let r be a posterior after updating q on \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\). Following the strategy discussed earlier (about updating p on \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\)), I can show that \(\nicefrac {r(\{\omega _{i}\})}{r(\{\omega _{j}\})}=\nicefrac {q(\{\omega _{i}\})}{q(\{\omega _{j}\})}\), which means that \(\nicefrac {r(\{\omega _{i}\})}{r(\{\omega _{j}\})}=\nicefrac {p(\{\omega _{i}\})}{p(\{\omega _{j}\})}\), etc. \(\square \)
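To make the mechanics of Proposition 1 concrete, here is a minimal numerical sketch of Jeffrey conditioning with exact fractions. The prior, partition, and evidence weights below are hypothetical illustrations (not the paper's Example 3), and the helper `jeffrey_update` is my own rendering of the pointwise form of Jeffrey conditioning used in the proof.

```python
from fractions import Fraction as F

def jeffrey_update(p, partition, q):
    """Jeffrey conditioning: posterior(w) = q(E)/p(E) * p(w) for each w in cell E."""
    new = {}
    for cell, q_cell in zip(partition, q):
        p_cell = sum(p[w] for w in cell)
        for w in cell:
            new[w] = q_cell * p[w] / p_cell
    return new

# Hypothetical uniform prior over six worlds.
p = {w: F(1, 6) for w in range(1, 7)}
E = [frozenset({1, 2}), frozenset({3, 4, 5, 6})]  # partition {E_1, E_2}
q = [F(1), F(0)]                                   # certain evidence: q(E_1) = 1

post = jeffrey_update(p, E, q)
assert post[1] / post[2] == p[1] / p[2]  # ratios inside the certain cell survive
assert post[3] == 0                      # worlds outside E_1 are zeroed out
```

As the proof states, within the certain cell every prior is rescaled by the same scalar, so the ratios are frozen, while the excluded worlds receive probability 0.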
Claim 1
Given Definition 2, \({\mathcal {E}}\) and \({\mathcal {F}}\) in Example 3 and Example 4 are not Jeffrey-independent.
Proof
By Definition 2, Jeffrey independence needs to hold for all i and j. So, it is enough to exhibit a single \(F_{j}\) that violates Jeffrey independence to show that \({\mathcal {E}}\) and \({\mathcal {F}}\) are not Jeffrey-independent. Consider \(F_{0}=\{\omega _{1},\omega _{2}\}\in {\mathcal {F}}\). In Example 3, \(p^{{\mathcal {E}}}\) is q. So, for Jeffrey independence to hold, one needs \(p^{{\mathcal {E}}}(F_{0})=q(F_{0})=p(F_{0})\). But, by additivity, one has that \(p(F_{0})=p(\{\omega _{1}\})+p(\{\omega _{2}\})=\nicefrac {3}{4}\) and \(q(F_{0})=q(\{\omega _{1}\})+q(\{\omega _{2}\})=\nicefrac {1}{2}\ne \nicefrac {3}{4}\). So, Jeffrey independence is violated. \(\square \)
Lemma 1
Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), Definition 2, and Theorem 1 be given. If \({\mathcal {E}}\) and \({\mathcal {F}}\) are Jeffrey-independent with respect to p, \(q_{{\mathcal {E}}}\), and \(r_{{\mathcal {F}}}\), i.e., \(p^{{\mathcal {E}}{\mathcal {F}}} = p^{{\mathcal {F}}{\mathcal {E}}}\), then \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})\) and \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})\).
Proof
Consider the measurable space \((\Omega ,{\mathcal {S}})\). Assume that p is one’s faithful prior and that the agent learns uncertain evidence \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\) with Jeffrey-independent \({\mathcal {E}}\) and \({\mathcal {F}}\). Then, by Definition 2, one has that \(p^{{\mathcal {E}}}(F_{j})=p(F_{j})\) and \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) hold for all i and j. By assumption, any \(F_{j},E_{i}\in {\mathcal {S}}\), so I can take any \(F_{j}\) or \(E_{i}\) to be B.

1.
Since \({\mathcal {E}}\) and \({\mathcal {F}}\) come from (4), \(F_{j}\) can be either a singleton \(F_{j}=\{\omega _{j}\}\) (for \(j=1,\dots ,n\)) or \(F_{0}=\{a,b\}\). So, the condition \(p^{{\mathcal {E}}}(F_{j})=p(F_{j})\) gives either \(p^{{\mathcal {E}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\) or \(p^{{\mathcal {E}}}(\{a,b\})=p(\{a,b\})\). If \(F_{j}=\{\omega _{j}\}\), then \(\omega _{j}\in E_{3}\). Assuming that \(p^{{\mathcal {E}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\) and Equation 1 hold, one has that:
$$\begin{aligned} p^{{\mathcal {E}}}(\{\omega _{j}\})=\frac{q_{{\mathcal {E}}}(E_{3})}{p(E_{3})} \,p(\{\omega _{j}\})\;\;\text {so}\;\;p(E_{3})=q_{{\mathcal {E}}}(E_{3}). \end{aligned}$$(5) 
2.
Since \({\mathcal {E}}\) and \({\mathcal {F}}\) come from (4), \(E_{i}\) can be \(E_{1}=\{a\}\), \(E_{2}=\{b\}\), or \(E_{3}\). Assume that \(E_{i}=\{a\}\) or \(E_{i}=\{b\}\). One knows that \(a,b\in F_{0}\) since \(F_{0}=\{a,b\}\). If \(E_{i}=E_{1}=\{a\}\), then, by the assumption that \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) and Equation 1, one has that:
$$\begin{aligned} p^{{\mathcal {F}}}(\{a\})=\frac{r_{{\mathcal {F}}}(F_{0})}{p(F_{0})}\,p(\{a\})\;\;\text {so}\;\; p(F_{0})=r_{{\mathcal {F}}}(F_{0}). \end{aligned}$$(6)If \(E_{i}=E_{2}=\{b\}\), then, by \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) and Equation 1, one has that:
$$\begin{aligned} p^{{\mathcal {F}}}(\{b\})=\frac{r_{{\mathcal {F}}}(F_{0})}{p(F_{0})}\,p(\{b\})\;\;\text {so}\;\; p(F_{0})=r_{{\mathcal {F}}}(F_{0}). \end{aligned}$$(7)
\(\square \)
Claim 2
If Lemma 1 holds, then \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\).
Proof
By additivity, \(p(\{a,b\})=p(\{a\})+p(\{b\})=p(E_{1})+p(E_{2})\). One can now write \(p(E_{3})+p(E_{1})+p(E_{2})=p(E_{3})+p(\{a,b\})=1\). Since \(\{a,b\}=F_{0}\), one has \(p(E_{3})+p(F_{0})=1\). By Lemma 1, \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})\) and \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})\), so \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\). \(\square \)
Claim 3
If Lemma 1 and Claim 2 hold, then \(p(F_{0})=q_{{\mathcal {E}}}(F_{0})=r_{{\mathcal {F}}}(F_{0})\) and \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})\).
Proof
One knows that \(q_{{\mathcal {E}}}(E_{1})+q_{{\mathcal {E}}}(E_{2})+q_{{\mathcal {E}}}(E_{3})=1\). That is, \(q_{{\mathcal {E}}}(\{a\})+q_{{\mathcal {E}}}(\{b\})+q_{{\mathcal {E}}}(E_{3})=1\). By additivity, \(q_{{\mathcal {E}}}(\{a,b\})=q_{{\mathcal {E}}}(\{a\})+q_{{\mathcal {E}}}(\{b\})\). So, I can write that \(q_{{\mathcal {E}}}(\{a,b\})+q_{{\mathcal {E}}}(E_{3})=1\). This means that \(q_{{\mathcal {E}}}(E_{3})=1-q_{{\mathcal {E}}}(\{a,b\})\). Since \(\{a,b\}=F_{0}\), one has \(q_{{\mathcal {E}}}(E_{3})=1-q_{{\mathcal {E}}}(F_{0})\). By Claim 2, \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\). So, one has that \(1-q_{{\mathcal {E}}}(F_{0})+r_{{\mathcal {F}}}(F_{0})=1\). Thus, \(r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})\). By Lemma 1, \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})\).
One knows that \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(F_{1})+\dots +r_{{\mathcal {F}}}(F_{n})=1\). That is, \(r_{{\mathcal {F}}}(\{a,b\})+r_{{\mathcal {F}}}(\{\omega _{1}\})+\dots +r_{{\mathcal {F}}}(\{\omega _{n}\})=1\). Now, by additivity, \(r_{{\mathcal {F}}}(F_{1})+\dots +r_{{\mathcal {F}}}(F_{n})=r_{{\mathcal {F}}}(\{\omega _{1},\dots ,\omega _{n}\})\). So, \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(\{\omega _{1},\dots ,\omega _{n}\})=1\). But one also knows that \(\{\omega _{1},\dots ,\omega _{n}\}=E_{3}\). So, \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(E_{3})=1\). By Claim 2, \(r_{{\mathcal {F}}}(F_{0})=1-q_{{\mathcal {E}}}(E_{3})\). Finally, this gives \(1-q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(E_{3})=1\), and so \(r_{{\mathcal {F}}}(E_{3})=q_{{\mathcal {E}}}(E_{3})\). By Lemma 1, \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})\). \(\square \)
Claim 4
If Lemma 1 and Claim 3 hold, then:
$$\begin{aligned} \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=\frac{q_{{\mathcal {E}}}(F_{0})}{r_{{\mathcal {F}}}(E_{3})}. \end{aligned}$$
Proof
By Lemma 1 and simple arithmetic operations, one has that:
$$\begin{aligned} \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}. \end{aligned}$$
So, by Claim 3, I can write that:
$$\begin{aligned} \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=\frac{q_{{\mathcal {E}}}(F_{0})}{r_{{\mathcal {F}}}(E_{3})}. \end{aligned}$$
\(\square \)
Claim 5
Given Claim 3, if Jeffrey conditioning in Example 3 is commutative with respect to \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), then the posterior r is unreachable from the prior p with the two-step strategy.
Proof
In Example 3, one has \({\mathbf {r}}=[\nicefrac {1}{16},\nicefrac {2}{16},\nicefrac {3}{16},\nicefrac {4}{16},\nicefrac {5}{16},\nicefrac {1}{16}]\) and \({\mathbf {p}}=[\nicefrac {1}{2},\nicefrac {1}{4},\nicefrac {1}{8},\nicefrac {1}{16},\nicefrac {1}{32},\nicefrac {1}{32}]\). Given \({\mathbf {p}}\), by Claim 3, one has that \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})=\nicefrac {1}{4}\). So, by additivity, the credences in \(\{\omega _{3}\}, \{\omega _{4}\}, \{\omega _{5}\}\), and \(\{\omega _{6}\}\) (whose union forms \(E_{3}\)) must sum to \(\nicefrac {1}{4}\). But, as indicated in \({\mathbf {r}}\), the final credences in \(\{\omega _{3}\}, \{\omega _{4}\}, \{\omega _{5}\}\), and \(\{\omega _{6}\}\) should be \(\nicefrac {3}{16},\nicefrac {4}{16},\nicefrac {5}{16}\), and \(\nicefrac {1}{16}\), respectively. Since \(\nicefrac {3}{16}+\nicefrac {4}{16}+\nicefrac {5}{16}+\nicefrac {1}{16}=\nicefrac {13}{16}>\nicefrac {1}{4}\), the posterior r cannot be reached. \(\square \)
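The arithmetic in the proof of Claim 5 can be checked mechanically with exact fractions; the vectors p and r below are taken directly from Example 3 as quoted above.

```python
from fractions import Fraction as F

# Prior p and target posterior r from Example 3 (worlds w_1, ..., w_6).
p = [F(1, 2), F(1, 4), F(1, 8), F(1, 16), F(1, 32), F(1, 32)]
r = [F(1, 16), F(2, 16), F(3, 16), F(4, 16), F(5, 16), F(1, 16)]

# E_3 = {w_3, w_4, w_5, w_6}; by Claim 3 its posterior mass must stay at p(E_3).
p_E3 = sum(p[2:])
r_E3 = sum(r[2:])
assert p_E3 == F(1, 4)    # mass the two-step strategy must preserve
assert r_E3 == F(13, 16)  # mass the target posterior r demands
assert r_E3 != p_E3       # hence r is unreachable
```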
Proposition 2
Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\) be given. If Claim 2, Claim 3, and Claim 4 hold, then the bold Bayes (p, 2)-Blind Spot is infinitely large and has at least continuum cardinality.
Proof
Assume that p is a faithful prior. By Claim 4, \(\nicefrac {p(F_{0})}{p(E_{3})}=\nicefrac {r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}\). Now, given one’s prior p, \(\nicefrac {p(F_{0})}{p(E_{3})}\) will equal a constant c, i.e., \(\nicefrac {p(F_{0})}{p(E_{3})}=c\). So, \(\nicefrac {r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=c\). Then, easily, one has that \(r_{{\mathcal {F}}}(F_{0})=cq_{{\mathcal {E}}}(E_{3})\). By Claim 2, \(r_{{\mathcal {F}}}(F_{0})=1-q_{{\mathcal {E}}}(E_{3})\). So, \(1-q_{{\mathcal {E}}}(E_{3})=cq_{{\mathcal {E}}}(E_{3})\). Thus, \(q_{{\mathcal {E}}}(E_{3})=\nicefrac {1}{(1+c)}\). Since \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\), one has that \(r_{{\mathcal {F}}}(F_{0})=\nicefrac {c}{(1+c)}\). By Claim 3, it follows that \(r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})=\nicefrac {c}{(1+c)}\) and \(q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})=\nicefrac {1}{(1+c)}\).
One knows that \(F_{0}=\{a,b\}\) and the complement of \(F_{0}\) is \(\{\omega _{1},\dots ,\omega _{n}\}\). By additivity, in two rounds of Jeffrey updating, a bold Bayesian agent cannot reach from her prior p any posterior r such that \(r(\{a\})+r(\{b\})\ne \nicefrac {c}{(1+c)}\), or equivalently \(r(\{\omega _{1}\})+\dots +r(\{\omega _{n}\})\ne \nicefrac {1}{(1+c)}\). This amounts to an infinite number of unreachable posteriors. Moreover, the set of unreachable posteriors, i.e., those for which \(r(\{a\})+r(\{b\})\ne \nicefrac {c}{(1+c)}\), has at least continuum cardinality. \(\square \)
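With the concrete prior of Example 3, where p(F_0) = 3/4 and p(E_3) = 1/4, the constant c and the posterior masses it forces can be computed exactly. A short sketch of this arithmetic:

```python
from fractions import Fraction as F

# Prior masses from Example 3: p(F_0) = 3/4, p(E_3) = 1/4.
p_F0, p_E3 = F(3, 4), F(1, 4)
c = p_F0 / p_E3                  # the constant c in the proof

# Commutativity forces every reachable posterior r to satisfy
# r(F_0) = c/(1+c) and r(E_3) = 1/(1+c).
r_F0 = c / (1 + c)
r_E3 = 1 / (1 + c)
assert c == 3
assert r_F0 == p_F0 and r_E3 == p_E3  # the prior masses are frozen
assert r_F0 + r_E3 == 1
```

So, for this prior, any posterior assigning a mass other than 3/4 to F_0 lies in the blind spot.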
B Proofs for Section 4 (Generalisations and discussion)
Lemma 2
Let a prior p, \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) be given such that \(P_{i}=K_{j}=\{\omega _{i}\}\). If \(p^{{\mathcal {P}}{\mathcal {K}}} = p^{{\mathcal {K}}{\mathcal {P}}}\), then \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})\) and \(p^{{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {K}}{\mathcal {P}}}(\{\omega _{i}\})\).
Proof
Assume that \(\{\omega _{i}\}\) is a singleton cell in both \({\mathcal {P}}\) and \({\mathcal {K}}\), i.e., \(\{\omega _{i}\}\in {\mathcal {P}}\) and \(\{\omega _{i}\}\in {\mathcal {K}}\). For example, \({\mathcal {P}}\) and \({\mathcal {K}}\) could both contain the cell \(\{\omega _{i}\}\) while partitioning the remainder of \(\Omega \) differently.
Let then \(P_{i}=K_{j}=\{\omega _{i}\}\) and assume that p is a faithful prior of a bold agent. Since \(\{\omega _{i}\}\in {\mathcal {S}}\), one can take \(\{\omega _{i}\}\) as one’s B (see Eq. 1). Assume the agent updates \(p(\{\omega _{i}\})\) on \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), specifically, on \(P_{i}=\{\omega _{i}\}\); see Observation 2 for updating on singleton sets:
$$\begin{aligned} p^{{\mathcal {P}}}(\{\omega _{i}\})=\frac{l_{{\mathcal {P}}}(P_{i})}{p(P_{i})}\,p(\{\omega _{i}\})=l_{{\mathcal {P}}}(\{\omega _{i}\}). \end{aligned}$$(8)
Now, assume that the agent updates \(p(\{\omega _{i}\})\) on \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\), specifically, on \(K_{j}=\{\omega _{i}\}\):
$$\begin{aligned} p^{{\mathcal {K}}}(\{\omega _{i}\})=\frac{g_{{\mathcal {K}}}(K_{j})}{p(K_{j})}\,p(\{\omega _{i}\})=g_{{\mathcal {K}}}(\{\omega _{i}\}). \end{aligned}$$(9)
If \({\mathcal {K}}\) and \({\mathcal {P}}\) are Jeffreyindependent, then, by Definition 2, \(p^{{\mathcal {P}}}(K_{j})=p(K_{j})\) and \(p^{{\mathcal {K}}}(P_{i})=p(P_{i})\) holds for all i and j. For \(P_{i}=K_{j}=\{\omega _{i}\}\), one has that \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p(\{\omega _{i}\})\) and \(p^{{\mathcal {K}}}(\{\omega _{i}\})=p(\{\omega _{i}\})\). This means that \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\).
By Condition 1, the agent is allowed to first update with \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and then \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), or the other way around. For the sake of argument, assume that the bold agent first updates p on \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\). Her new prior after the first update is \(p^{\mathcal {P}}\). Now, assume that the agent updates \(p^{\mathcal {P}}\) on \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\), specifically, on \(K_{j}=\{\omega _{i}\}\):
$$\begin{aligned} p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=\frac{g_{{\mathcal {K}}}(K_{j})}{p^{{\mathcal {P}}}(K_{j})}\,p^{{\mathcal {P}}}(\{\omega _{i}\})=g_{{\mathcal {K}}}(\{\omega _{i}\}). \end{aligned}$$(10)
By (9) and (10), \(p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\). But since \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\), one has that \(p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {P}}}(\{\omega _{i}\})\). One could switch the order of \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) to get \(p^{{\mathcal {K}}{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\). \(\square \)
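A numerical sanity check of Lemma 2 can be run with exact fractions. The partitions, prior, and weights below are hypothetical (chosen so that the shared singleton is the world 1 and both updates leave its prior mass fixed, as Jeffrey independence requires); the helper `jeffrey` is my own rendering of one round of Jeffrey conditioning.

```python
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    """One round of Jeffrey conditioning over a partition of worlds."""
    new = {}
    for cell, w_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            new[w] = w_cell * p[w] / mass
    return new

# Hypothetical partitions sharing the singleton cell {1} (not the paper's P, K).
P = [frozenset({1}), frozenset({2, 3})]
K = [frozenset({1}), frozenset({2}), frozenset({3})]
p = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}
l_P = [F(1, 3), F(2, 3)]           # leaves p(P_i) fixed for every cell
g_K = [F(1, 3), F(1, 2), F(1, 6)]  # leaves p({1}) fixed, reshuffles the rest

p_PK = jeffrey(jeffrey(p, P, l_P), K, g_K)
p_K = jeffrey(p, K, g_K)
assert p_PK[1] == p_K[1]           # p^{PK}({1}) = p^{K}({1}), as in Lemma 2
```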
Lemma 3
Let a prior p, \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) be given. If \(p^{{\mathcal {P}}{\mathcal {K}}} = p^{{\mathcal {K}}{\mathcal {P}}}\), then \(p(P^*)=l_{{\mathcal {P}}}(P^*)\) and \(p(K^*)=g_{{\mathcal {K}}}(K^*)\).
Proof
Assume that p is one’s faithful prior and the agent learns nontrivial uncertain evidence \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\). By Condition 3, there is \(\omega _{j}\) such that \(\{\omega _{j}\}\in {\mathcal {K}}\) and \(\omega _{j}\in P^*\), where \(P^*\in {\mathcal {P}}\) is not a singleton. Similarly, there is \(\omega _{i}\) such that \(\{\omega _{i}\}\in {\mathcal {P}}\) and \(\omega _{i}\in K^*\), where \(K^*\in {\mathcal {K}}\) is not a singleton. Assume that one proceeds with the singleton \(K_{j}=\{\omega _{j}\}\in {\mathcal {K}}\) such that \(\omega _{j}\in P^*\). By Definition 2, \(p^{{\mathcal {P}}}(K_{j})=p(K_{j})\) gives \(p^{{\mathcal {P}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\). So, using Eq. 1, one has:
$$\begin{aligned} p^{{\mathcal {P}}}(\{\omega _{j}\})=\frac{l_{{\mathcal {P}}}(P^*)}{p(P^*)}\,p(\{\omega _{j}\})=p(\{\omega _{j}\})\;\;\text {so}\;\;p(P^*)=l_{{\mathcal {P}}}(P^*). \end{aligned}$$
With an identical proof strategy, one can prove the analogous result by taking a singleton \(\{\omega _{i}\}\in {\mathcal {P}}\) and the non-singleton \(K^*\in {\mathcal {K}}\). \(\square \)
Lemma 4
Let a prior p, \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) be given. If Lemma 3 holds, then \(p^{{\mathcal {P}}}(P^*)=p(P^*)\) and \(p^{{\mathcal {K}}}(K^*)=p(K^*)\).
Proof
Condition 1 says that every piece of evidence is commutative with every other piece of evidence. In other words, the order of evidence can be permuted such that any evidence can be the first evidence the agent uses in the updating process. From Lemma 3, one knows that \(P^*\in {\mathcal {P}}\) and \(K^*\in {\mathcal {K}}\) are non-singleton sets such that \(p(P^*)=l_{{\mathcal {P}}}(P^*)\) and \(p(K^*)=g_{{\mathcal {K}}}(K^*)\). Let me focus only on \(P^*\). By assumption, \(P^*\in {\mathcal {S}}\), and so let \(B=P^*\). One knows that \(p(P^*\cap P^*)=p(P^*)\) and, by Lemma 3, \(p(P^*)=l_{{\mathcal {P}}}(P^*)\). So, by Equation 1, it holds that:
$$\begin{aligned} p^{{\mathcal {P}}}(P^*)=\frac{l_{{\mathcal {P}}}(P^*)}{p(P^*)}\,p(P^*\cap P^*)=l_{{\mathcal {P}}}(P^*)=p(P^*). \end{aligned}$$
Analogous reasoning holds for \(K^*\), proving that \(p^{{\mathcal {K}}}(K^*)=p(K^*)\). \(\square \)
Proposition 3
If Lemma 4 holds, then the infinite bold Bayes Blind Spot is infinitely large and has at least continuum cardinality.
Proof
By Condition 1, a bold Bayesian agent can use any evidence first in the updating process. So, by Lemma 4, any posterior, e.g., \(p^{{\mathcal {P}}}\), will be bounded by the original prior p. That is, there will be a non-singleton set \(\{\omega _{1},\dots ,\omega _{n}\}=B\in {\mathcal {S}}\) (I have previously called such a set \(P^*\in {\mathcal {P}}\) or \(K^*\in {\mathcal {K}}\)) for which, by additivity, the following equality will hold:
$$\begin{aligned} p^{{\mathcal {P}}}(\{\omega _{1}\})+\dots +p^{{\mathcal {P}}}(\{\omega _{n}\})=p(\{\omega _{1}\})+\dots +p(\{\omega _{n}\}). \end{aligned}$$
So, a part of any posterior \(p^{{\mathcal {P}}}\) will be determined by the values originally given by the prior p. That is, no posterior credence function \(p^{{\mathcal {P}}}\) can be such that \(p^{{\mathcal {P}}}(\{\omega _{1}\})+\dots +p^{{\mathcal {P}}}(\{\omega _{n}\})\ne p(\{\omega _{1}\})+\dots +p(\{\omega _{n}\})\). Consequently, the probability \(p^{{\mathcal {P}}}(B^c)\) that a bold Bayesian agent can assign to the complement \(B^c\) of B must be such that \(p^{{\mathcal {P}}}(B^c)=1-p^{{\mathcal {P}}}(\{\omega _{1},\dots ,\omega _{n}\})\). Any posteriors which do not meet those equalities cannot be reached in the Bayesian updating process under consideration. This, however, amounts to an infinite number of posteriors. Similarly to Proposition 2 and Example 5, this set of unreachable posteriors will have at least continuum cardinality. \(\square \)
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author selfarchiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Janda, P. How much are bold Bayesians favoured? Synthese 200, 336 (2022). https://doi.org/10.1007/s11229-022-03825-5