1 Introduction

The notion of rational belief is central to philosophy and formal epistemology, and provides the conceptual foundation for theories of reasoning and rational decision-making in a number of fields, from economics to artificial intelligence. In the philosophical discussion, virtually all commentators agree on the idea that belief is fallible: even an ideally rational agent may believe some statement without being fully and definitely certain of its truth. Most theorists also agree that probability provides the best formal tool to model fallible belief: the “degrees of belief” of a rational agent in the propositions he accepts should be construed as degrees of probability.

Interestingly, the attempt to develop a model of fallible, rational belief along these lines has proved rather challenging. Well-known results—like the so-called Preface and Lottery paradoxes—seem to undermine the intuitive idea that a rational agent should believe exactly those propositions that he accepts with a sufficiently high degree of belief, an intuition known in the literature as the Lockean thesis. One central problem is the following: there are situations where the Lockean thesis forces an agent into rationally believing each one of n propositions \({a_1},\ldots ,a_n\), but prevents him from rationally believing their conjunction \({a_1}\wedge \dots \wedge a_n\). In reaction to such results, participants in the debate have explored various ways to re-think the link between probability and rational belief, sometimes departing in radical ways from the Lockean thesis (e.g., Jeffrey (1970); see Cevolani and Schurz (2017) for discussion).

In this paper, we address this issue in a somewhat conservative manner. More precisely, we retain two crucial ideas: first, that degrees of belief are governed by the rules of the probability calculus; second, that the connection between plain belief and degrees of belief is governed by the Lockean thesis. We then explore how far the use of probability can be pushed in order to gain useful insights on the notion of rational belief. In particular, we present two results: the former is based on standard probability functions taking values in the usual real unit interval, while the latter makes use of infinitesimal probabilities defined on a non-Archimedean extension of the real field.Footnote 1 Each result provides a way to preserve the notion of rational belief (as consistent and logically closed belief), the probabilistic nature of degrees of belief, and the Lockean thesis connecting them.

The plan of the paper is as follows. In Sect. 2 we recap some classical results from the logico-philosophical discussion on rational belief and probability, which serve as the background for our discussion. In Sect. 3, we show how it is possible to believe long conjunctions of beliefs under the Lockean thesis even in the face of the Preface paradox. Finally, in Sect. 4 we discuss some implications of our approach, and compare it with other proposals in the literature. The "Appendix" contains the proofs and a concise introduction to non-standard probability for the reader interested in the more technical details.

2 Fallibilism, Rational Belief, and Probability

The idea that human knowledge, including science, is fallible—uncertain and incomplete at best—is shared by most philosophers and scientists alike. “We never can be absolutely sure of anything”—wrote Charles Sanders Peirce, the founding father of modern fallibilism—and we “cannot attain absolute certainty concerning questions of fact” (Hartshorne et al., 1931–1958). Modeling this notion of knowledge as rational belief in the face of uncertainty and fallibilism, however, has proven challenging.

In this section, we briefly review the discussion of two different formal models of belief: one qualitative account of “plain” or “full” belief, and one quantitative account of probabilistic “degrees of belief” (the reader already acquainted with this debate in formal epistemology may safely skip it). In the next section, we present two ways of connecting these two models into one unified account of rational fallible belief while avoiding some well-known problems with this connection.

2.1 Plain Belief

A simple and natural way of representing the beliefs of a rational agent is to model them as a set B of propositions in some language. The agent believes each proposition a belonging to B, and does not believe those propositions which are not in B. Rationality is here construed in terms of two conditions on such a “belief set”: B has to be consistent and logically closed. The former condition states that a rational agent should not accept contradictory beliefs; the latter, that he should believe all propositions entailed by what he already believes. In particular, this latter condition implies that if the agent believes both \(a_1\) and \(a_2\) then he should also believe their conjunction \(a_1\wedge a_2\). More generally, logical closure entails the principle commonly known as “conjunctive closure” (Schurz 2019, p. 4):

Conjunctive Closure:

For any belief set B, if \(a_1\in B,\dots ,a_n\in B\) then \(a_1\wedge \dots \wedge a_n\in B\).

This rather simple account, however, seems unable to deliver a proper notion of (rational) fallible belief. The central problem is that, for any proposition a, only three “epistemic attitudes” are possible for a rational agent: either to accept a (if \(a\in B\)), or to reject it (if \(\lnot a\in B\)), or to suspend judgment on a (if neither a nor \(\lnot a\) is an element of B). Thus, no room is left for the idea that the agent may “fallibly” accept a, i.e., believe a even if he is not completely certain that a is indeed true. This, we submit, is the main lesson taught by the famous “paradox of the preface”, introduced by David Makinson in a short, seminal paper in 1965 (Makinson 1965, p. 205):

Suppose that in the course of his book a writer makes a great many assertions, which we shall call \(a_1, a_2,\dots ,a_n\). Given each one of these, he believes that it is true. If he has already written other books, and received corrections from readers and reviewers, he may also believe that not everything he has written in his latest book is true. [...]

However, to say that not everything I assert in this book is true, is to say that at least one statement in this book is false. That is to say that at least one of \(a_1, a_2,\dots ,a_n\) is false, where \(a_1, a_2,\dots ,a_n\) are the statements in the book; that \(a_1 \wedge a_2 \wedge \dots \wedge a_n\) is false; that \(\lnot (a_1 \wedge a_2 \wedge \dots \wedge a_n)\) is true. The author who writes and believes each of \(a_1, a_2,\dots ,a_n\), and yet in a preface asserts and believes \(\lnot (a_1 \wedge a_2 \wedge \dots \wedge a_n)\) is, it appears, behaving very rationally. Yet clearly he is holding logically incompatible beliefs: he believes each of \(a_1, a_2,\dots ,a_n,\lnot (a_1 \wedge a_2 \wedge \dots \wedge a_n)\), which form an inconsistent set. The man is being rational though inconsistent.

In stating in the preface that the book will contain some error, Makinson’s writer is acknowledging his own fallibility. He is saying that, even if he is willing to rationally accept each claim in the book, he cannot fully believe their conjunction. Conjunctive closure, however, makes this kind of doxastic attitude impossible: since the author fully believes each claim, he must also believe their conjunction, and contradict himself in the preface.

What seems needed here, then, is another, more sophisticated account of rational fallible belief: one which makes room for fallibilism by admitting not only full but also “partial” belief in a proposition or, in other words, degrees of belief. Such degrees of belief or “credences” may be formally represented as a function \(\beta\) which assigns to each proposition a a degree of belief \(\beta (a)\) between (say) 0 and 1. The agent could then believe different propositions to different degrees, the maximum (1) being reserved for the propositions he fully believes (such as tautologies and mathematical truths). This account would straightforwardly capture the idea of fallibilism in the following sense (Schurz 2019):

Weak Fallibilism:

For some \(a\in B\), \(\beta (a)<1\); i.e., a rational agent has non-maximum degrees of belief in at least some of the propositions he believes.

This corresponds to a quite minimal conception of fallibilism, which is shared by virtually all commentators. This is for a good reason, since rejecting Weak Fallibilism would require one to embrace the following position:

Dogmatism:

For all \(a\in B\), \(\beta (a)=1\); i.e., a rational agent can only believe propositions of which he is fully certain.

Since Dogmatism is rejected by most of the participants in the debate, introducing degrees of belief appears as a natural move in order to obtain a defensible account of rational fallible belief. As we shall see in a moment, however, this alternative account has problems of its own.

2.2 Degrees of Belief

The idea of degrees of belief can be formalized in many different ways. Here, we will follow a common approach in formal epistemology, which assumes a standard probabilistic reading of degrees of belief coupled with a subjective (i.e., “personalist”, “Bayesian”) interpretation of probability. More precisely, we shall work within Kolmogorov’s theory of finitely additive probability functionsFootnote 2 and follow de Finetti’s approach to interpreting subjective probability as based on so-called Dutch book argumentsFootnote 3. Let us recall here the main features of such an approach.

In a series of seminal contributions, de Finetti provided a foundation of subjective probability theory in terms of rational or “coherent” degrees of belief, quantifying the uncertainty subjectively assigned to a set of propositions as considered by a rational agent (cf. de Finetti (1931) and de Finetti (1974)). More precisely, de Finetti starts from an ideal betting game between two players, a bookmaker and a gambler. The gambler wagers money on the occurrence of a number of events, or, equivalently, on the truth of the corresponding statements about the world, let’s call them \(a_1,\ldots , a_n\).Footnote 4 Each event \(a_i\), with \(1\le i\le n\), can either occur (\(w(a_i)=1\)) or not (\(w(a_i)=0\)) in a possible world w. The bookmaker’s beliefs are represented by an assignment \(\beta :\{a_1,\ldots , a_n\}\rightarrow [0,1]\), specifying the degree of belief \(\beta (a_i)\) in each event. To be coherent, the beliefs of an agent must then obey the following criterion:

De Finetti’s coherence criterion:

A belief assignment \(\beta :\{a_1,\ldots , a_n\}\rightarrow [0,1]\) is coherent if, for each choice of real numbers \(\sigma _1,\ldots , \sigma _n\), there exists a possible world w such that:

$$\begin{aligned} \sum _{i=1}^n \sigma _i(\beta (a_i)- w(a_i))\ge 0. \end{aligned}$$
(1)

In the above statement, the real numbers \(\sigma _i\) represent the gambler’s stakes and formula (1) expresses the bookmaker’s balance.Footnote 5 Thus, an assignment is coherent (and hence the bookmaker’s odds are fair prices) if it prevents the bookmaker from incurring what is known as a sure loss, that is, a choice of stakes \(\sigma _1,\ldots , \sigma _n\) by the gambler forcing the bookmaker’s balance to be strictly negative regardless of which events occur.

The connection between probability theory and coherent belief assignments is established by a famous theorem due to de Finetti, stating that an assignment \(\beta :\{a_1,\dots , a_n\}\rightarrow [0,1]\) is coherent if and only if it extends to a finitely additive probability measure over the Boolean algebra generated by \(a_1,\dots ,a_n\). In other words, a belief assignment is coherent if and only if those beliefs are consistent with Kolmogorov’s axioms of (finitely additive) probability theory (Kolmogorov 1933).
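To make the sure-loss mechanism concrete, here is a minimal Python sketch (a toy example of our own, not part of de Finetti’s original treatment): it checks, by brute force over the admissible possible worlds, whether a given choice of stakes makes the bookmaker’s balance strictly negative everywhere. The belief values, stakes, and function names below are illustrative assumptions.

```python
def balance(beliefs, stakes, world):
    """Bookmaker's balance sum_i sigma_i * (beta(a_i) - w(a_i)) in a given world."""
    return sum(s * (b - w) for b, s, w in zip(beliefs, stakes, world))

def sure_loss(beliefs, stakes, worlds):
    """True if these stakes force a strictly negative balance in every admissible world."""
    return all(balance(beliefs, stakes, w) < 0 for w in worlds)

# Toy example (ours): the two events are a and its negation, so the only
# admissible worlds are w(a)=1, w(not a)=0 and w(a)=0, w(not a)=1.
worlds = [(1, 0), (0, 1)]
incoherent = [0.7, 0.5]   # beta(a) + beta(not a) > 1: violates finite additivity
coherent = [0.7, 0.3]     # extends to a probability measure

stakes = [-1.0, -1.0]     # the gambler "sells back" both bets
print(sure_loss(incoherent, stakes, worlds))  # True: a Dutch book against the bookmaker
print(sure_loss(coherent, stakes, worlds))    # False: no sure loss with these stakes
```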

De Finetti’s theorem provides a subjective justification of the probabilistic account of rational fallible belief. In a nutshell, provided \(\beta\) is coherent, the beliefs of a rational agent (i.e., de Finetti’s bookmaker) can be modeled as a subjective probability distribution; hence, \(\beta (a)\) represents the agent’s credence in a, i.e., the probability the agent assigns to a. In this connection, we will henceforth work only with “Carnap-regular” probability functions, that is to say, probability functions which assign degrees 1 and 0 only to tautologies and contradictions, respectively. This assumption comes with no particular cost from the viewpoint of de Finetti’s coherence criterion. Indeed, several contributions (Shimony 1955; Kemeny 1955; Flaminio et al. 2018) show that a refinement of the latter, nowadays known as the strict-coherence criterion, captures Carnap-regular probabilities in the same way as de Finetti’s coherence captures probability functions. More precisely, Flaminio et al. (2018, Corollary 6.5) show that an assignment \(\beta :\{a_1,\dots , a_n\}\rightarrow [0, 1]\) is strictly-coherent if and only if it extends to a Carnap-regular probability function on the algebra generated by \(\{a_1,\ldots , a_n\}\).

The account of rational partial belief just presented obviously satisfies Weak Fallibilism, since an agent will assign non-extreme credences to all propositions a different from tautologies and contradictions. Interestingly, it also offers a quick way out of Makinson’s paradox of the preface, by rephrasing it in probabilistic terms (Easwaran 2016, sec. 1.2). In this new scenario, the book’s author does not fully believe any of the assertions \(a_1, a_2,\dots ,a_n\) that appear in the book, nor their conjunctionFootnote 6. However, he has a (high) degree of belief \(\beta (a_i)\) in each of those claims, even if lower than 1. What about their conjunction? The monotonicity of probability measures immediately prescribes that the probability of a conjunction of events is lower than (or equal to) the probability of each of the conjuncts:

$$\begin{aligned} \beta (a_1\wedge \ldots \wedge a_n)\le \beta (a_i), \end{aligned}$$

for each \(1\le i\le n\). This means that the author cannot believe in the conjunction of the statements in the book more firmly than any of those statements. Indeed, the probability of such a conjunction may be significantly lower than that of each of the conjuncts. In this case, the author will believe with correspondingly high probability that the conjunction is false, which is another way to state the prefatory remark according to which the book will contain some mistake. In other words, no paradox needs to arise, if we refrain from talking about belief or acceptance at all, and only take into account the degrees of belief, or credences, expressed by the relevant probabilities.

This “dissolution” of the Preface Paradox is not costless, however. In fact, being purely probabilistic, this approach seems unable to account for the very notion of “plain” or “full” rational belief we started with. This is clearly seen by observing that, assuming Carnap-regularity, no proposition, with the exception of plain tautologies and contradictions, is ever fully accepted or rejected by the agent. We can only talk about more or less extreme credences, setting the notion of full belief aside: a conclusion that is, to say the least, less than satisfactory.Footnote 7

3 How to Believe Long Conjunctions of Beliefs

The upshot of the discussion in the previous two subsections is the following. We have two different accounts of rational belief, one based on consistency and logical closure and one based on probabilistic coherence. Within the former, it is easy to model plain belief, but not to accommodate the idea that some beliefs can be less-than-certain (i.e., the idea of Weak Fallibilism). The latter account does full justice to this idea in terms of probabilistic degrees of belief, but completely abandons any talk of plain belief. Is it possible to combine the two approaches just outlined, in order to obtain an account of full belief which still respects Weak Fallibilism and leaves room for probabilistic credences? An attempt to get the best of both worlds is to assume the following condition (Foley 2009):

Lockean Thesis:

For all a, \(a\in B\) iff \(\beta (a)\ge r\), with \(0.5<r<1\).

In words, a rational agent fully believes all propositions which are sufficiently probable, i.e., have probability at least as high as some “Lockean” threshold r, possibly depending on the specific context (Hawthorne and Bovens 1999; Leitgeb 2014; Schurz 2019). Note that, in any case, r needs to be greater than 0.5 (otherwise the agent may accept both a proposition and its negation, thus violating consistency) and smaller than 1 (in order to satisfy Weak Fallibilism). Also note that the Lockean thesis can be split into two conditions: one sufficient (if \(\beta (a)\ge r\) then \(a\in B\)) and one necessary (if \(a\in B\) then \(\beta (a)\ge r\)).

The Lockean thesis is attractive because it provides a way to model plain belief within a probabilistic framework. Unfortunately, the Preface paradox can be turned into an argument against the thesis. To see how, let’s assume that rational belief is governed by the Lockean thesis: an agent assigns a degree of belief to each relevant proposition, and only accepts those which are sufficiently “believable” (i.e., probable). In the original Preface scenario, the author is prepared to believe each of the claims \(a_1, a_2,\dots ,a_n\) in the book and hence, by Conjunctive Closure, also their conjunction. This means (by the necessary condition in the Lockean thesis) that \(\beta (a_i)\ge r\) for each of these claims and also that \(\beta (a_1\wedge \ldots \wedge a_n)\ge r\). However, for any Lockean threshold r, it is possible to find a coherent assignment and a sufficiently large number n of claims for which \(\beta (a_1\wedge \ldots \wedge a_n)\) falls below r.Footnote 8 In turn, this means that either the author believes the conjunction \(a_1\wedge \ldots \wedge a_n\) even if it is improbable (against the Lockean thesis) or he doesn’t believe it even if he believes each of its conjuncts (against the principle of Conjunctive Closure). In sum, it seems that the qualitative and the probabilistic account cannot be combined together after all.Footnote 9
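To see the numbers at work, here is a minimal sketch (illustrative figures of our own, not taken from the paper or its footnotes): assuming probabilistic independence, it finds the smallest number of claims, each believed to degree 0.95, whose conjunction falls below a Lockean threshold of 0.9.

```python
# A minimal illustration (our own numbers): n probabilistically independent claims,
# each believed to degree 0.95, with Lockean threshold r = 0.9.
r, credence = 0.9, 0.95

conj, n = 1.0, 0
while conj >= r:
    n += 1
    conj *= credence   # probability of the conjunction under independence

print(n, conj)  # smallest n for which the conjunction drops below r (here n = 3)
```

Already with three such claims, each individually believable, the conjunction is no longer believable under the Lockean thesis.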

A common reaction to the collapse of the Lockean thesis has been abandoning Conjunctive Closure as a rationality requirement on fallible belief (Christensen 2004; Kyburg 1961): since believing long conjunctions of beliefs (assertions) with high probability seems impossible, a rational agent cannot always be asked to accept the conjunction of what he believes. The debate, however, is still ongoing (Lin and Kelly 2012; Leitgeb 2014; Fitelson and Easwaran 2015; Cevolani 2017; Schurz 2019) and we shall discuss some of these contributions in the next section. In the following, we explore a different question: is believing long conjunctions under the Lockean thesis really impossible? Or: under what conditions can the Lockean thesis be maintained, without falling into the Preface paradox?

3.1 Archimedean Degrees of Belief

Let us rephrase our central question above as follows: is it possible that a long conjunction of n propositions is believable under the Lockean thesis? And: under what conditions does this happen? To answer these questions, we proceed as follows.

Given n propositions \(a_1,\ldots , a_n\), we want to find a threshold value on their probabilities \(\beta (a_1),\ldots , \beta (a_n)\) such that their conjunction is acceptable, i.e., such that

$$\begin{aligned} \beta (a_1\wedge \ldots \wedge a_n) \ge r \end{aligned}$$

for any given Lockean threshold r. To this purpose, we need to know what \(\beta (a_1\wedge \ldots \wedge a_n)\) amounts to in general. Note first that, in the special case where all n relevant events are probabilistically independent of each other, \(\beta (a_1\wedge \ldots \wedge a_n)\) simply amounts to the product of their probabilities: \(\beta (a_1)\cdot \ldots \cdot \beta (a_n)\) (more on this special case in Sect. 4).

As for the general case, we can rely on a classical result in probability theory usually attributed to Maurice René Fréchet and Wassily Hoeffding, which allows us to calculate a lower and upper bound for the probability of an arbitrary conjunction of events:

Theorem 3.1

(Fréchet-Hoeffding Bounds). Let \(\beta\) be a coherent belief assignment on the set of assertions \(\{a_1,\dots ,a_n\}\). Then:

$$\begin{aligned} \beta (a_1)\odot \dots \odot \beta (a_n)\le \beta (a_1\wedge \dots \wedge a_n)\le \min \{\beta (a_1),\dots ,\beta (a_n)\}. \end{aligned}$$
(2)

The \(\odot\) operator appearing in the first inequality above is the so-called Łukasiewicz t-norm, defined as follows: for any two numbers \(x,y\in [0,1]\), \(x\odot y{:}{=} \max \{x+y-1, 0\}\). In words, Theorem 3.1 states that the probability of a conjunction lies between the combination of the probabilities of all the conjuncts under the Łukasiewicz t-norm (lower bound) and the probability of the least probable conjunct (upper bound). Iterating the t-norm gives \(\beta (a_1)\odot \dots \odot \beta (a_n)=\max \{\beta (a_1)+\dots +\beta (a_n)-(n-1), 0\}\); hence this lower bound is zero whenever the sum of the relevant probabilities does not exceed \(n-1\).
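The bounds of Theorem 3.1 are easy to compute. The following sketch (our own illustration; the probability values and function names are arbitrary choices) iterates the Łukasiewicz t-norm to obtain the lower bound and takes the minimum for the upper bound.

```python
from functools import reduce

def luk(x, y):
    """Lukasiewicz t-norm: x (.) y = max(x + y - 1, 0)."""
    return max(x + y - 1.0, 0.0)

def frechet_bounds(probs):
    """Frechet-Hoeffding bounds for the probability of the conjunction."""
    lower = reduce(luk, probs)   # equals max(sum(probs) - (len(probs) - 1), 0)
    upper = min(probs)
    return lower, upper

# Illustrative credences (our own choices)
print(frechet_bounds([0.9, 0.9, 0.9]))   # roughly (0.7, 0.9)
print(frechet_bounds([0.6, 0.6, 0.6]))   # (0.0, 0.6): the lower bound collapses to zero
```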

This well-known result leads to an interesting, if unnoticed, implication for the analysis of rational belief. In fact, it immediately allows one to find a lower bound s for the probabilities of the propositions \({a_1},\ldots , {a_n}\) which guarantees the acceptability of their conjunction under any possible (positive) Lockean threshold r:

Proposition 3.2

Let \(a_1,\ldots , a_n\) be n assertions and let \(0 < r \le 1\) be a real number. If \(\beta (a_i)\ge s=\frac{n+r-1}{n}\) for each \(i\in \{1,\dots , n\}\), then \(\beta (a_1\wedge \dots \wedge a_n)\ge r\).

Proof

The result follows by direct computation. We want to have \(\beta (a_1\wedge \ldots \wedge a_n) \ge r\). We know from Theorem 3.1 that \(\beta (a_1\wedge \ldots \wedge a_n) \ge \beta (a_1)\odot \dots \odot \beta (a_n)\). Let us consider the least probable among the n propositions and call it \(a_j\): thus, \(\beta (a_j)=\min \{\beta (a_1),\ldots , \beta (a_n)\}\). Clearly, we obtain the lowest possible value for \(\beta (a_1)\odot \dots \odot \beta (a_n)\) by assuming that all relevant propositions have probability \(\beta (a_j)\). One can check that \(\beta (a_1)\odot \dots \odot \beta (a_n) \ge \beta (a_j)\odot \ldots \odot \beta (a_j)=\max \{n\beta (a_j)-(n-1),0\}\) by the definition of the Łukasiewicz t-norm. We can now calculate the minimum value s such that, if \(\beta (a_j)\ge s\), then \(\beta (a_1\wedge \ldots \wedge a_n)\ge r\). Indeed, one can check that, assuming \(\beta (a_j)\ge s= \frac{n+r-1}{n}\), then \(n\cdot \beta (a_j) - (n-1)\ge n \cdot \frac{n+r-1}{n} - (n-1) = r > 0\) and thus, by applying Theorem 3.1,

$$\begin{aligned} \beta (a_1\wedge \ldots \wedge a_n) \ge \beta (a_1)\odot \ldots \odot \beta (a_n) \ge \beta (a_{j})\odot \dots \odot \beta (a_{j}) \ge r, \end{aligned}$$

as desired. \(\square\)

The above result provides a sufficient condition for believing arbitrarily long conjunctions of beliefs (assertions) \(a_1\wedge \ldots \wedge a_n\) under the Lockean thesis, when their number n is finite and fixed. More precisely, it provides a threshold \(s=\frac{n+r-1}{n}\) such that if the degree of belief in each \(a_i\) is at least s, then their conjunction is believable, i.e., its probability is at least r. In connection with the Preface paradox, Proposition 3.2 tells us how firmly authors should believe each of the assertions contained in their book, in order to also believe their conjunction. Note that \(s\ge r\) and that s depends both on the Lockean threshold r and on the number n of the conjuncts (more on this in the next section, cf. Table 1).
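As a sanity check on Proposition 3.2, the following sketch (with illustrative numbers of our own) computes the threshold \(s=\frac{n+r-1}{n}\) for a hypothetical book of 100 claims and verifies, using exact rational arithmetic, that even the worst-case (Fréchet) lower bound for the conjunction still reaches r.

```python
from fractions import Fraction

def threshold(n, r):
    """Threshold s = (n + r - 1)/n of Proposition 3.2."""
    return (n + r - 1) / n

def frechet_lower(probs):
    """Lukasiewicz lower bound max(sum - (n - 1), 0) for the conjunction."""
    return max(sum(probs) - (len(probs) - 1), 0)

# Hypothetical Preface scenario (our numbers): a book with 100 claims, r = 9/10.
n, r = 100, Fraction(9, 10)
s = threshold(n, r)
print(s)                            # 999/1000
print(frechet_lower([s] * n) >= r)  # True: in the worst case the conjunction is exactly r
```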

Proposition 3.2 highlights a subtle aspect of the argument against the Lockean thesis based on the Preface paradox which, to our knowledge, has not been discussed so far. It is the following. Consider n beliefs \(a_1,\ldots , a_n\) which are contingent, not pairwise contradictory, and not such that one of them logically entails all of the others. Then, it is always possible to find both a probability assignment \(\beta\) according to which \(\beta (a_1\wedge \ldots \wedge a_n)\) falls below the Lockean threshold r, even though each \(\beta (a_{i}) \ge r\) (this is the lesson taught by the Preface paradox), and also a (different) probability assignment such that \(\beta (a_1\wedge \ldots \wedge a_n)\) stays above r. In other words, the high-probability view of rational belief embodied in the Lockean thesis and the requirement of Conjunctive Closure are not strictly incompatible in general: if one has high enough degrees of belief in the single conjuncts, one can rationally believe also their conjunction, and Proposition 3.2 provides a sufficient condition for this to happen. We shall discuss the implications of this result in Sect. 3.3; in the next subsection, we explore another way out of the Preface paradox based on non-Archimedean probabilistic degrees.

3.2 Non-Archimedean Degrees of Belief

The main result in the previous section, namely the bound established in Proposition 3.2, depends on the number n of assertions for which one aims to believe the conjunction \(a_{1}\wedge \dots \wedge a_{n}\). This means that the number n of relevant assertions must be fixed in advance, in order to calculate the corresponding bound. In this section, we relax that assumption and hence tackle the following question: under the usual hypothesis of (strict) coherence, how much should an agent believe each of a finite, but arbitrarily large, set of propositions in order to also believe their conjunction? As we shall see in a moment, we can provide a precise, general answer to this question by considering “infinitesimal” degrees of probability. Formally, this amounts to allowing probability functions to take values in the unit interval \({^*}[0,1]\) of a non-Archimedean extension of the real field (Benci and Nasso (2003); the construction of infinitesimals is illustrated in detail in "Appendix 1"). We will refer to the elements of the set \({^*}[0,1]\) as hyperreals. In particular, we will prove that believing all the relevant propositions with a coherent belief infinitesimally close to 1 is sufficient in order to also believe their conjunction.

Intuitively, it seems reasonable enough to assume that the author of a carefully prepared book may believe each of the claims in the book with a degree of confidence very close to certainty. Formally, this amounts to assigning the value \(1-\varepsilon\) to the author’s degree of belief in each of the claims of the book, where \(\varepsilon \in {^*}[0,1]\) is an infinitesimal greater than zero. Interestingly, de Finetti’s coherence criterion is sufficiently robust to be consistent with such infinitesimal degrees of belief. In particular, one can prove (Montagna et al. 2013, Theorem 4.1) that, keeping fixed the definition of coherence as given above (see p. 7), a \({^*}[0,1]\)-valued belief assignment \(\beta\) over a set of events \(\{a_1, \dots , a_n\}\) is coherent if and only if there exists a \({^*}[0,1]\)-valued finitely additive and normalized map P on the Boolean algebra generated by \(\{a_1,\ldots , a_n\}\) which extends \(\beta\).

Taking this for granted, we can state the following result (the proof is given in "Appendix 1").

Proposition 3.3

Let \(\{a_1, a_2\dots \}\) be a countable (possibly infinite) set of assertions and let \(\beta\) be a coherent belief assignment such that \(\beta (a_i)>1-\varepsilon\), for each i and for a positive infinitesimal \(\varepsilon\). Then, for every finite subset \(\{a_{j_1},\ldots , a_{j_n}\}\) of \(\{a_1, a_2\dots \}\), \(\beta (a_{j_1}\wedge \ldots \wedge a_{j_n}) >1-n\varepsilon\).

In informal terms, if our author is “nearly certain” of the truth of each of the propositions in the book— having in each of them a degree of belief infinitesimally close to 1—then the author will also believe their conjunction with sufficiently high probability under any Lockean threshold r. More precisely, the author will be nearly certain both of the single assertions made in the book and of their conjunction, assigning, respectively, to each \(a_i\) a probability greater than \(1 - \varepsilon\) and, to their conjunction, a probability greater than \(1 -n\varepsilon\), which is lower than the former but still infinitesimally close to 1 (and greater than any real-valued threshold r). This result provides another way out of the Preface paradox, if infinitesimal degrees of belief are allowed.
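Proposition 3.3 rests on non-Archimedean values, but its arithmetic can be mimicked with a toy model. The sketch below (our own illustration, not the construction used in the Appendix) represents first-order hyperreals as pairs a + b·ε, discards higher-order terms, and checks that the Fréchet lower bound for n credences of 1 − ε is 1 − nε, which still exceeds any real threshold below 1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyper:
    """A toy first-order hyperreal a + b*eps; terms in eps**2 and higher are dropped.
    This is only an illustration, not the non-standard construction of the Appendix."""
    std: float   # standard part a
    eps: float   # coefficient b of the infinitesimal eps

    def __add__(self, other):
        return Hyper(self.std + other.std, self.eps + other.eps)

    def __gt__(self, other):
        # Lexicographic order: the standard part dominates, eps breaks ties.
        if self.std != other.std:
            return self.std > other.std
        return self.eps > other.eps

def frechet_lower(credences):
    """Lukasiewicz lower bound sum - (n - 1), assuming it stays positive."""
    total = Hyper(-(len(credences) - 1), 0.0)
    for c in credences:
        total = total + c
    return total

n = 1000
credence = Hyper(1.0, -1.0)             # 1 - eps
bound = frechet_lower([credence] * n)
print(bound)                            # Hyper(std=1.0, eps=-1000.0), i.e. 1 - n*eps
print(bound > Hyper(0.999999, 0.0))     # True: above any real Lockean threshold r < 1
```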

Interestingly, the introduction of non-Archimedean degrees of belief also allows for further extensions of the basic framework discussed so far, in at least two ways. First, one can relax the assumption that the relevant Lockean threshold r must be a real number; it now seems natural to allow r to be any hyperreal threshold. The following result takes care of this more general case:

Proposition 3.4

Let \(\varepsilon\) be a positive infinitesimal and let \(0<r<1\) be any hyperreal number. Then there exists a hyperreal \(s>0\) such that, for every \(n\in {\mathbb {N}}\), and for every set of assertions \(a_1,\ldots , a_n\), if \(\beta\) is a coherent belief assignment such that \(\beta (a_i)>s\) for all \(i\in \{1,\ldots , n\}\), then \(\beta (a_1 \wedge \dots \wedge a_n) > r\).

The proof of the proposition above is given in the "Appendix". Here, it is interesting to point out that, if we fix a hyperreal threshold \(r<1\) extremely close to 1, say \(r=1-\delta\) for an infinitesimal \(\delta\), then the claim above can be proved by taking \(s=1-\delta ^2\) and hence moving from one order of infinitesimals to a higher one. It is also interesting to remark that, if the threshold \(r<1\) is indeed a real number, then Proposition 3.4 can be slightly strengthened by saying that for every infinitesimal \(\varepsilon\), for every natural number n and for every set of assertions \(a_1,\ldots , a_n\), if \(\beta (a_i)>1-\varepsilon\), then \(\beta (a_1 \wedge \dots \wedge a_n) > r\). Notice that, in this particular case, the choice of \(\varepsilon\) is independent of r, while in the statement of Proposition 3.4 it indeed depends on the hyperreal number r.
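The remark about moving to a higher order of infinitesimals can be checked symbolically. The sketch below (our own sanity check, treating \(\delta\) as a formal positive symbol rather than a genuine infinitesimal) shows that with \(s=1-\delta ^2\) the Fréchet lower bound is \(1-n\delta ^2\), and that it exceeds \(r=1-\delta\) exactly when \(n\delta <1\), which holds for every standard n once \(\delta\) is infinitesimal.

```python
import sympy as sp

n, d = sp.symbols('n delta', positive=True)

# Lukasiewicz lower bound for n conjuncts, each with credence s = 1 - delta**2
lower = n * (1 - d**2) - (n - 1)

# The bound simplifies to 1 - n*delta**2 ...
print(sp.expand(lower - (1 - n * d**2)))    # 0

# ... and it exceeds r = 1 - delta exactly when n*delta < 1: the difference below
# factors as delta*(1 - delta*n), positive whenever n*delta < 1.
print(sp.factor((1 - n * d**2) - (1 - d)))  # a factorization of delta - delta**2*n
```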

A second possible extension concerns the case of a rational agent believing the conjunction of infinitely, yet countably, many propositions \(a_1,a_2, a_3,\ldots\). Of course, this case requires stretching the standard Preface scenario a bit, thinking of a book possibly infinite in length; however, it is interesting to note that our approach can deal also with this rather extreme case. To this purpose, let us rephrase our central question as follows:

(Q) Let \(1/2<r<1\) be any real-valued Lockean threshold and let \(a_1,a_2, a_3,\ldots\) stand for countably many assertions. Are there real numbers \(\beta (a_1), \beta (a_2), \beta (a_3), \ldots\) strictly between r and 1 such that \(\beta (a_1\wedge a_2\wedge a_3\wedge \ldots )\) still remains greater than r?

Although one may reasonably argue that the conjunction of infinitely many assertions is not an assertion itself, we can still construe \(\beta (a_1\wedge a_2\wedge a_3\wedge \ldots )\) as a hyperreal number and identify it with the sequence:

$$\begin{aligned} \beta ^\infty =\langle \beta (a_1), \beta (a_1\wedge a_2), \beta (a_1\wedge a_2\wedge a_3), \ldots \rangle . \end{aligned}$$

Then, answering (Q) amounts to requiring that \(\beta ^\infty \ge r\), which indeed holds when \(\beta (a_1\wedge \ldots \wedge a_i)\ge r\) for (almost) all \(i\in {\mathbb {N}}\). Interestingly, it is possible to define probabilistic degrees of belief that satisfy the above condition. Consider the map \(\beta\) recursively defined on \(a_1,a_2, a_3\ldots\) as follows: \(\beta (a_1)=\frac{r+1}{2}\) and for all \(i>1\), \(\beta (a_i)=\frac{\beta (a_{i-1}) + 1}{2}\). That is to say:

$$\begin{aligned} \beta (a_1)=\frac{r+1}{2}, \beta (a_2)=\frac{\frac{r+1}{2}+1}{2}, \beta (a_3)=\frac{\frac{\frac{r+1}{2}+1}{2}+1}{2},\ldots \end{aligned}$$

Simplifying the previous fractions, we obtain:

$$\begin{aligned} \beta (a_1)=\frac{r+1}{2}, \beta (a_2)=\frac{r+3}{4}, \beta (a_3)=\frac{r+7}{8}, \ldots , \beta (a_i)=\frac{r+2^i-1}{2^i},\ldots \end{aligned}$$

Now we can prove that the above-defined \(\beta\) satisfies our requirements. First, it is easy to see that for all \(i\in {\mathbb {N}}\), \(r<\beta (a_i)<1\). Furthermore, for all \(i\in {\mathbb {N}}\), \(\beta (a_1\wedge \ldots \wedge a_i)\ge \beta (a_1)\odot \ldots \odot \beta (a_i)>r\). One can also prove (see the "Appendix" for a proof) that

$$\begin{aligned} \beta (a_1)\odot \ldots \odot \beta (a_i)=\frac{(2^i-1) r +1}{2^i}. \end{aligned}$$
(3)

Thus, since \(r<1\), one obtains that, for all \(i\in {\mathbb {N}}\)

$$\begin{aligned} \beta (a_1)\odot \ldots \odot \beta (a_i)=\frac{(2^i-1) r +1}{2^i}>\frac{(2^i-1) r +r}{2^i}=\frac{2^i r - r + r}{2^i}=r. \end{aligned}$$

This gives a positive answer to our question (Q): under any Lockean threshold r, a rational agent can believe the conjunction of infinitely, yet countably, many believable propositions.
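The computation behind this answer to (Q) can be replicated numerically. The sketch below (our own check, with the illustrative choice r = 9/10) generates the recursively defined credences, iterates the Łukasiewicz t-norm, and verifies formula (3) and the inequality above with exact rational arithmetic for the first twenty conjuncts.

```python
from fractions import Fraction

def luk(x, y):
    """Lukasiewicz t-norm."""
    return max(x + y - 1, Fraction(0))

r = Fraction(9, 10)          # an illustrative Lockean threshold (our choice)

b = (r + 1) / 2              # beta(a_1) = (r + 1)/2
running = None
for i in range(1, 21):       # check the first twenty conjuncts
    running = b if running is None else luk(running, b)
    closed_form = ((2**i - 1) * r + 1) / 2**i      # formula (3)
    assert running == closed_form and running > r
    b = (b + 1) / 2          # beta(a_{i+1}) = (beta(a_i) + 1)/2

print((r + 1) / 2, (r + 3) / 4, (r + 7) / 8)   # 19/20, 39/40, 79/80
print(running)               # iterated t-norm after 20 conjuncts: still strictly above 9/10
```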

Along with that of Sect. 3.1, the above results provide another way out of the Preface paradox, if infinitesimal degrees of belief are allowed. Of course, one may well be skeptical about applying infinitesimal probabilities to the modeling of the beliefs of real agents, and the philosophical relevance of this move can well be criticized (see, e.g., Williamson (2007); Easwaran (2014)). In this connection, we just note that proposals for a non-Archimedean treatment of interesting issues in epistemology and philosophy of science have been successfully advanced in the literature: two relevant examples being the discussion of the Lottery paradox by Wenmackers (2013) and that of inter-theory reduction and approximate explanation by Pearce and Rantala (1985), who both apply non-standard analytic methods. Moreover, the philosophical debate on such methods is open, with, e.g., Benci et al. (2016) defending infinitesimal probabilities as useful models of uncertainty quantification.Footnote 10

In the next subsection, we discuss the implications of our results for the analysis of rational belief, before surveying some interesting connections with other recent proposals in the literature in Sect. 4.

3.3 Rational Belief, Quasi-Dogmatism, and Contextualism

Let us summarize the results obtained so far. In Sect. 3.1, we considered a rational agent assigning standard (Archimedean) probabilistic degrees of belief to each of n propositions (with n finite and fixed in advance). We then proved (Proposition 3.2) under which conditions the agent can rationally believe the conjunction \(a_1\wedge \dots \wedge a_n\) of such propositions under the Lockean thesis, given that he believes each of them: this happens if the probability assigned to each \(a_i\) is sufficiently high, i.e., greater than a threshold s depending both on the Lockean threshold r and on the number n of propositions.

In Sect. 3.2, we considered instead a rational agent assigning non-standard (non-Archimedean) probabilistic degrees of belief to the relevant propositions. In this case, our results (Propositions 3.3 and 3.4) read as follows: independently of the number of propositions and of the given Lockean threshold, if the agent is nearly certain of each proposition (assigns to it a degree of belief infinitesimally close to 1) then he will also believe the conjunction with near certainty.

The main message of our paper is thus that conjunctive closure and probabilistic degrees of belief are compatible after all, even in the face of the Preface paradox. More precisely, we showed that, under the Lockean thesis, it is possible to believe a long conjunction of propositions with sufficiently high probability, provided that the probability assigned to each proposition is even higherFootnote 11.

Let us now briefly discuss some interesting implications of our two results. The first is that both offer a precise way to compute the minimum degree of belief one should assign to each of a number of propositions in order to be rationally justified in also believing their conjunction under the Lockean thesis. We suggest that this is a kind of problem which may arise, if only implicitly, in many circumstances. For instance, a judge may demand that each single piece of evidence collected by the prosecutor meets a very high threshold s of credibility before admitting it to the trial; such a threshold may be exactly that which guarantees that the whole body of evidence is credible enough (more probable than r) to withstand the debate to follow. Or, a jury or judge may require such a high standard to convict a suspect “beyond reasonable doubt”. Or, a given scientific community may implicitly set threshold s as the standard a scholar must meet when presenting a set of claims which have to be jointly accepted. And so on. In all such circumstances, our results provide a precise way of quantifying the “burden of proof” put, so to speak, on a would-be believer: in order to jointly believe these n assertions to such-and-such degree, you need first to accept each of them to such other, higher degree of belief.

A second relevant aspect of our analysis is that, in order to rationally believe long conjunctions of propositions, an agent has to be “quasi-dogmatic” in the following sense (compare the discussion in Sect. 2.1):

Quasi-Dogmatism:

For all \(a\in B\), \(\beta (a)\approx 1\); i.e., a rational agent can only believe propositions of which he is nearly certain.

This is particularly clear in the non-Archimedean case, where agents assign degrees of belief infinitesimally close to 1 (greater than \(1-\varepsilon\)) to each proposition in their belief set. But also in the standard, Archimedean case, the probability threshold s required to believe a conjunction of believable propositions quickly approaches 1 as soon as the number n of propositions grows larger or the Lockean threshold r increases. Table 1 shows how, even for low values of r, s quickly increases already for relatively low values of n. This leads to another way of phrasing our main result: in order to escape the Preface paradox, and preserve together conjunctive closure and probabilistic degrees of belief, quasi-dogmatism is the price to pay. Note that Quasi-Dogmatism entails Weak Fallibilism (but not vice versa, of course) while avoiding the consequences of full Dogmatism; how defensible it is as a general epistemological position is an issue we are not going to discuss here.

Table 1 Values for the threshold s required to believe the conjunction of n propositions, depending on three different values of the Lockean threshold r
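Entries of this kind can be reproduced directly from the formula of Proposition 3.2. The sketch below does so for a few illustrative choices of r and n (our own choices, which need not coincide with those of the published table).

```python
def s(n, r):
    """Threshold of Proposition 3.2: minimal credence in each of n conjuncts
    guaranteeing that their conjunction has probability at least r."""
    return (n + r - 1) / n

# Illustrative choices of r and n (ours; they need not match the published table).
for r in (0.51, 0.75, 0.9):
    print(r, [round(s(n, r), 4) for n in (2, 10, 100, 1000)])
# e.g. for r = 0.9 the row reads [0.95, 0.99, 0.999, 0.9999]
```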

A third implication of our analysis, which has to do with the issue of “contextualism” (Schurz 2019), is also worth noting. Many accounts of rational belief are contextual in the sense that at least some of their tenets depend on specific aspects of what we may call the doxastic situation of the agent. For instance, as already noted above, the fixation of a precise value for the Lockean threshold r is often left to the context (provided that \(0.5<r<1\)): some situations may require very high bounds for rational belief, others may be more permissive. This is an instance of a quite weak form of contextualism. Other, stronger forms make rational belief depend more heavily on specific features of the context, for instance on the way the relevant propositions are identified, as in the accounts of Isaac Levi (1980) or of Hannes Leitgeb (2014) (see Schurz (2019) for discussion). In this connection, one of our results (i.e., Proposition 3.2) is also contextual, but in a quite weak form. Indeed, once the Lockean threshold r has been fixed, the probability required to rationally believe a (long) conjunction only depends on the number n of its conjuncts (cf. Table 1). However, it does not depend on the logical form of the propositions involved, on their (logical or probabilistic) relationships, or on any other feature of the doxastic context.

This has, in turn, two interesting implications for a rational author in the Preface paradox scenario. First, the “longer” the book, i.e., the higher the number n of propositions that the author wants to jointly believe, the higher the threshold s that the probability of each of them has to pass. A second, subtler implication is the following. Suppose that the author believes with probability greater than s each of the n propositions in the book, and hence accepts their conjunction with probability greater than r (by Proposition 3.2). Moreover, let \(a_{n+1}\) be a “new” proposition not already contained in the book and with probability also greater than s. Still, if the book is expanded by adding \(a_{n+1}\) to it, the author might not be confident enough to believe all the assertions anymore, i.e., the probability of the new conjunction \(a_1\wedge \dots \wedge a_n\wedge a_{n+1}\) may fall below r. This is because, given that s depends on n, some of the “old” propositions in the book (or possibly also \(a_{n+1}\) itself) may fail to pass the new threshold \(s'=\frac{(n+1)+r-1}{n+1}\).

Finally, as pointed out by an anonymous reviewer, our approach in this paper invites a more general reflection on possible ways out of the Preface paradox and of similar challenges to the analysis of rational belief. Indeed, in our analysis three different parameters play a crucial role: first, the number n of assertions contained in the book (or, more generally, in a belief set); second, the Lockean threshold r relevant for believing each single claim; and, third, the threshold s relevant for believing the conjunction of such claims (the one that we computed in Proposition 3.2). Now, for each of these parameters, one can ask what bounds guarantee that rational belief is closed under conjunction. More precisely, one can ask: (1) How high can the Lockean threshold r be without risking failure of the conjunctive closure condition? (2) How low can the value of s be without risking failure of the same condition? (3) How high can the number n of claims be without risking the same? Different choices of the most relevant parameter(s) and different answers to the above questions lead to different accounts of the paradox. (In most cases, one parameter will result as a function of one or two of the others.)

As for this paper, we can note that Proposition 3.2 addresses question (2) above, while Proposition 3.3 answers both questions (1) and (2) within the non-Archimedean domain. Other contributions in the literature address different sets of questions, and might be classified according to the answers they provide (an interesting task that we have to leave for the future). In the next section, we focus on three of them, highlighting their relations with our proposal.

4 Discussion and Comparison with Other Proposals

Before concluding, it is worth pointing out the connections of our results with other contributions in the literature. Three of them are particularly important. The first is a recent paper by Schurz (2019), who proves a number of “impossibility results for rational belief”, including a generalized version of the Preface paradox, which are directly relevant to our own results; in particular, Schurz’s discussion directly addresses question (3) as presented at the end of the foregoing section. The second is the work by Makinson (2012), who connects the work on the logic of uncertain inference in the tradition of Adams’ probability logic with the discussion of the Lottery and Preface paradoxes in formal epistemology and anticipates some of the basic ideas of our paper; his approach follows the lead of question (1) above. The third is a paper by Douven and Uffink (2003) on the Preface paradox which shows some interesting similarities and differences with our approach; indeed, they also essentially address question (2), but using a formulation involving conditional probabilities. In this section, we discuss each of these contributions in turn.

4.1 Comparison with Schurz’s Impossibility Results

In a recent paper, Schurz (2019) provides a thorough analysis of the logical and conceptual connections among a number of rationality requirements for the notion of (qualitative and quantitative) rational belief, including different versions of the Lockean thesis and the requirement of conjunctive closure as discussed in the present paper. His analysis, presented in the form of two impossibility results, leads him to conclude that the conflict among these requirements “is too deep to allow for cheap solutions: it requires a drastic departure from beloved rationality standards” (Schurz 2019).

As already mentioned, the analysis provided here, based as it is on the idea of quasi-dogmatism, aims at showing that conjunctive closure and probabilistic degrees of belief are compatible after all, even in the face of the Preface paradox. At first glance, it may appear that our results violate Schurz’s impossibility theorems. This is not, however, the case: to anticipate, Schurz proves that, if the Lockean threshold is sufficiently high, then one can escape his limiting results, and this is exactly what we do in the present paper. In any case, a comparison with Schurz’s approach is both interesting and instructive.

Let us first briefly survey some results from Schurz’s paper. His first impossibility result is a generalization of the Preface paradox and concerns the following requirement:

Rich Fallibilism:

For all \(a_i\in B=\{a_{1},\dots ,a_{n}\}\), \(\beta (a_{i}|a_{1}\wedge \dots \wedge a_{i-1})\le \tau\), where \(r\le \tau < 1\) and r is a Lockean threshold; in words, the mutual conditional probability of a rational agent’s beliefs is upper-bounded by a threshold \(\tau\) smaller than 1 and greater than or equal to r.

Schurz proves that the Lockean thesis, conjunctive closure and rich fallibilism cannot stand together (“are inconsistent”, in his terminology) when the agent’s belief set is sufficiently large, i.e. for \(n > |\log r|/| \log \tau |\) (cf. his Theorem 1).

This impossibility result has interesting relations to our Proposition 3.2. Indeed, in a sense the two results are two faces of the same coin, since they look at the same problem from two different angles. Schurz addresses the question of how large (or small) the set of propositions believed by the agent needs to be in order to keep together the Lockean thesis, conjunctive closure and rich fallibilism (his result indeed relates n with r and \(\tau\)). Instead, we ask (see Proposition 3.2) how high the probability of each belief \(a_{i}\) must be, in order to guarantee conjunctive closure under the Lockean thesis. As we shall see in a moment, these two perspectives are indeed fully compatible.

First, let us recall that, since \(\beta\) is strictly coherent (see Sect. 2.2), one can express the probability of a conjunction of propositions as the product of the conditional probabilities of each proposition given the preceding ones:

$$\begin{aligned} \beta (a_{1}\wedge \dots \wedge a_{n}) = \prod _{i=1}^{n} \beta (a_{i}|a_{1}\wedge \dots \wedge a_{i-1}). \end{aligned}$$
(4)

If one now requires (according to Rich Fallibilism) that \(\beta (a_{i}|a_{1}\wedge \dots \wedge a_{i-1})\le \tau\), it follows that:

$$\begin{aligned} r \le \beta (a_{1}\wedge \dots \wedge a_{n}) = \prod _{i=1}^{n} \beta (a_{i}|a_{1}\wedge \dots \wedge a_{i-1}) \le \tau ^{n}, \end{aligned}$$

where the left-hand inequality follows from Proposition 3.2, while the right-hand holds given Rich Fallibilism. Now, one can check (by simple calculation, as reported also in the proof of Theorem 1.1 in (Schurz 2019, sec. 3)) that \(r\le \tau ^{n}\) is equivalent to \(n\le |\log r|/|\log \tau |\), which is precisely the condition under which the impossibility result is not triggered. In other words, our result does not fall under Schurz’s impossibility result: the threshold that we calculate is sufficiently high to guarantee that his theorem does not apply. This also tells us that there are at least two ways to keep together conjunctive closure and the Lockean thesis in the face of the Preface paradox: either limiting the number of believed propositions (Schurz’s result) or fixing a sufficiently high threshold for the probability of each proposition.
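The arithmetic of this comparison is easy to check. The sketch below (illustrative numbers of our own) computes the largest n compatible with \(r\le \tau ^{n}\), i.e. Schurz’s bound \(n\le |\log r|/|\log \tau |\), and verifies it numerically.

```python
import math

def max_belief_set_size(r, tau):
    """Largest n compatible with r <= tau**n, i.e. n <= |log r| / |log tau|."""
    return math.floor(abs(math.log(r)) / abs(math.log(tau)))

# Illustrative numbers (ours): Lockean threshold r = 0.9, conditional cap tau = 0.99.
r, tau = 0.9, 0.99
n = max_belief_set_size(r, tau)
print(n)                                  # 10
print(tau**n >= r, tau**(n + 1) >= r)     # True False: one more belief breaks the bound
```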

A similar remark can be made for the other main impossibility result from Schurz (2019), i.e., his Theorem 2. Roughly (we refer the reader directly to (Schurz 2019, sec. 4) for relevant definitions and discussion), this says that the Lockean thesis is inconsistent with an apparently innocuous property called there “open-mindedness”: i.e., that the number m of doxastic possibilities (possible worlds) compatible with the beliefs of a rational agent may be quite large. However, Schurz also proves (see his Corollary 3 to Theorem 4) that, if the Lockean threshold r is set sufficiently close to 1, one can escape the mentioned impossibility result: this is basically the idea behind quasi-dogmatism, and hence confirms that our analysis does not violate Schurz’s second impossibility result either.

This leads to another interesting issue, which we can only mention in passing. After proving the above-mentioned results, Schurz (2019, Proposition 4) also notes that the incompatibility between the Lockean thesis and open-mindedness reappears if one allows the number m of doxastic possibilities to grow arbitrarily large, a property that Schurz calls “unlimited” open-mindedness. As pointed out by an anonymous reviewer, this further result has interesting connections with our talk, introduced in Sect. 3.2, of infinitesimal degrees of belief. In fact, one can conjecture that the incompatibility between the Lockean thesis and unlimited open-mindedness no longer holds if one allows for non-Archimedean degrees of belief: in that case, even unlimited open-mindedness (for arbitrary m) could be consistent with a version of the Lockean thesis. We leave the exploration of this interesting conjecture for another occasion.

4.2 Comparison with Makinson’s Logic of Uncertain Inference

As noted by two anonymous reviewers, the results of the present paper can be connected to some more or less classical results in so-called probability logic (Adams 1975, 1996), as shown in particular by Makinson (2012). It is thus useful to briefly illustrate some of these connections.

The basic idea behind probability logic is that the premises of a deductive argument can be more or less uncertain; however, deductive logic by itself has to do with truth-preservation, and does not tell much about how the uncertainty of the conclusion depends on that of the premises. This motivates the study of how the probability of an argument’s premises is preserved under inference and transmitted to the conclusion. In practice, one aims at calculating lower and/or upper bounds for the probability of the conclusion, given the probabilities of the premises.

To this purpose, results like Theorem 3.1 on Fréchet-Hoeffding bounds are obviously relevant. In fact, this theorem appears, even if in a different form, in some classical discussions of probability logic, as follows. Let us first note that such discussions usually proceed in terms of “improbability propagation” (rather than “probability preservation”), where the improbability of a proposition a is defined as \(U(a)=1 - P(a)\), where P is a probability measure over a Boolean algebra \({\mathbf {A}}\). Now, in the case of a conjunction of n propositions, a result going back at least to Suppes (1966) is the following “uncertainty rule”:

$$\begin{aligned} U(a_1\wedge \dots \wedge a_{n})\le U(a_{1})+\dots +U(a_{n}), \end{aligned}$$

which provides an upper bound for the improbability of a conjunction given the improbabilities of the conjuncts. Interestingly, one can easily check that the above formula immediately translates into the lower bound in Theorem 3.1 above (see also the recent review by Makinson and Hawthorne (2015), who calculated these same bounds in their discussion of admissible rules for probability logic).

As noted by Makinson (2012), such results in probability logic help clarify some important logical aspects of the Preface and Lottery paradoxes. In fact, both paradoxes may be taken as illustrating “the unreliability of the rule of conjunction of conclusions” in the context of the logic of uncertain inference. This rule reads as follows: if some set of premises A entails \(a_1\), and it also entails \(a_2\), then A entails \(a_1\wedge a_2\). If “entailment” here is interpreted as classical deduction, the rule holds; however, it may fail for non-deductive consequence relations, such as those associated with uncertain reasoning, as vividly shown by the two paradoxes (Makinson 2012, 513-14). Interestingly, in his discussion of how to deal with the failure of the rule of conjunction of conclusions, Makinson (2012, sec. 5.2, p. 518) points, even if only to discard it, to the solution explored in the present paper: fixing a high threshold for the probability of an argument’s premises, in order to bring the probability of the conclusion as close to one as desired. As Makinson notes, this route was already explored by Adams (1966) and others; from our perspective, this suggests an interesting way of applying the bounds computed in our Proposition 3.2 to the logic of uncertain inference as studied in probability logic.

4.3 Comparison with Douven and Uffink’s Analysis

In an interesting discussion of the Preface paradox, Douven and Uffink (2003) also defend the compatibility between the principle of conjunctive closure and the probabilistic nature of degrees of belief by giving appropriate bounds for the relevant thresholds. A quick comparison with our results is then in order.

Douven and Uffink share many of our assumptions and work within a similar framework. They accept the Lockean thesis and the corresponding threshold r, with \(0.5< r < 1\). Moreover, they also work with regular probabilities (meaning, to recall, that values 0 and 1 are only assigned to contradictions and tautologies, respectively). Given all this, their most relevant result for our purposes is the following. First, Douven and Uffink note that, under the regularity assumption, for any finite set of contingent propositions \(a_1,\ldots , a_n\), the probability \(P(a_1\wedge \ldots \wedge a_n)\) of the conjunction can be expressed as the product of the probabilities \(P(a_i\mid a_1\wedge \ldots \wedge a_{i-1})\), for each i (we have already recalled this fact in Equation 4). Then, they prove (Douven and Uffink 2003, Proposition 4.1) that if, for every i ranging over \(2,\dots , n\), \(P(a_i\mid a_1\wedge \ldots \wedge a_{i-1})\) is at least \(\root n \of {r}\), then the conjunction \(a_1\wedge \ldots \wedge a_n\) has probability greater than r and hence is believable.

Interestingly, the first impossibility result proved by Schurz, that we already discussed in Sect. 4.1, turns out to be a generalization of the above result by Douven and Uffink (Schurz 2019, see the discussion of Corollary 3). Moreover, this should be compared with our Proposition 3.2, that establishes \(s=\frac{n+r-1}{n}\) as the probability threshold for each of n propositions which guarantees that their conjunction is believable to degree at least r. In this connection, some remarks are in order.

First, since our result allows one to compute the minimum probability required for each of the propositions \(a_1,\dots , a_n\), it also provides the relevant bounds for the conditional probabilities \(P(a_i\mid a_1\wedge \ldots \wedge a_{i-1})\), for each \(2\le i \le n\), appearing in formula (4), and hence allows one to determine the constraints under which the conjunction \(a_1\wedge \ldots \wedge a_n\) can be rationally believed. This is easily done by direct computation, as the following example shows. Suppose that r has been fixed and consider only two propositions, such that \(P(a_1)=P(a_2)=\frac{2+r-1}{2}\). Then, applying Proposition 3.2 (we refer directly to a probability measure P extending a coherent belief assignment), we have:

$$\begin{aligned} P(a_2\mid a_1)=\frac{P(a_1\wedge a_2)}{P(a_1)}\ge \frac{r}{\frac{2+r-1}{2}}=\frac{2r}{2+r-1} \end{aligned}$$

Second, it is interesting to note that our approach provides exactly the same bounds computed in Douven and Uffink (2003) under the assumption that all the statements \(a_1,\dots ,a_n\) are mutually independent. Indeed, in this case, \(P(a_i\mid a_1\wedge \dots \wedge a_{i-1})=P(a_i)\) and \(P(a_1\wedge \ldots \wedge a_k)=P(a_1)\cdot \ldots \cdot P(a_k)\). More precisely, we can prove the following.

Proposition 4.1

Let \(\{a_1,\dots ,a_n\}\) be a set of mutually independent events, \(\beta\) a coherent belief assignment on them, and \(r\in [0,1]\) a real number. If \(\beta (a_i)\ge \root n \of {r}\) for each \(i\in \{1,\dots ,n\}\), then \(\beta (a_1\wedge \ldots \wedge a_n)\ge r\).

Proof

The proof proceeds modulo de Finetti’s theorem, considering any probability measure P which extends \(\beta\) and whose existence is ensured by the hypothesis. The independence of the events \(a_i\) gives us that

$$\begin{aligned} P(a_1\wedge \ldots \wedge a_n) =P(a_1)\cdot \ldots \cdot P(a_n). \end{aligned}$$
(5)

Therefore, if \(\beta (a_i)=P(a_i)\ge \root n \of {r}\) for all \(i=1,\ldots , n\), then in particular \(P(a_j)=\min \{P(a_1),\dots ,P(a_n)\}\ge \root n \of {r}\). By (5), we have that

$$\begin{aligned} \beta (a_1\wedge \ldots \wedge a_n)=P(a_1\wedge \ldots \wedge a_n)\ge (P(a_j))^{n}\ge (\root n \of {r})^n=r. \end{aligned}$$

which proves our claim. \(\square\)

The above Proposition states that our analysis agrees with the one developed by Douven and Uffink, in the special case of independent propositions.

Third, a remarkable difference between the two approaches is worth noting. In our account, in order to check whether a conjunction \(a_{1}\wedge \dots \wedge a_{n}\) is acceptable under the Lockean thesis, it is sufficient to check whether the probability of each claim \(a_i\) meets the relevant threshold calculated according to Proposition 3.2. In Douven and Uffink’s approach, on the contrary, to obtain the same result one needs to check whether the probability of each claim, conditional on the conjunction of the preceding ones, exceeds the relevant threshold. This method thus artificially extends the relevant set of events from the original claims \(a_{1},\dots ,a_{n}\) to a new set including all conditional events of the form \((a_i\mid a_1\wedge \ldots \wedge a_{i-1})\), for each \(i\in \{1,\dots , n\}\). By avoiding this detour, our approach appears both simpler and, in a sense, more practical.
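The difference between the two thresholds can also be seen numerically. The sketch below (with illustrative values of our own) compares the bound \(s=\frac{n+r-1}{n}\) of Proposition 3.2 with the bound \(\root n \of {r}\) of Proposition 4.1.

```python
def s_frechet(n, r):
    """Proposition 3.2: bound on the unconditional probability of each conjunct."""
    return (n + r - 1) / n

def s_independent(n, r):
    """Proposition 4.1 / Douven and Uffink: n-th root bound (conditional probabilities,
    or unconditional ones under independence)."""
    return r ** (1 / n)

r = 0.9   # illustrative threshold (ours)
for n in (2, 10, 100):
    print(n, round(s_frechet(n, r), 5), round(s_independent(n, r), 5))
# The Frechet-based threshold is always at least as demanding, since it must
# cover arbitrary, not necessarily independent, propositions.
```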

5 Conclusion

The Preface paradox is often meant to show that the requirement of conjunctive closure of beliefs is incompatible with an account based on probabilistic degrees of belief. We proved that this is strictly speaking false, and that one can keep together the spirit, if not the letter, of both probabilism and conjunctive closure under the Lockean thesis, while also respecting Weak Fallibilism. Moreover, we showed how to compute, for both Archimedean and non-Archimedean probabilities, the value of the thresholds to be met in order to believe long conjunctions of propositions, thus making precise the notion of “nearly certain” belief. The price for this way out of the paradox is Quasi-dogmatism—i.e., the fact that a rational agent should be nearly certain of what he believes—and a weak form of contextualism of rational belief. Admittedly, this price may appear too high, since if one rejects Dogmatism, one may well be skeptical about Quasi-dogmatism for essentially the same reasons. In any case, the purpose of this paper has been to explore the possibility of fallibilism in the face of the Preface paradox, and not to defend Quasi-dogmatism as a viable epistemological position. Further discussion is needed in order to assess whether, and to what extent, this is indeed defensible from a philosophical point of view.