1 Introduction

One of the most important issues in formal epistemology is how to combine full beliefs and beliefs held to lower degrees in one and the same formal model. In our everyday lives, we have both types of beliefs, and we have no difficulty in shifting our epistemic attitude to a particular proposition between the two types. The full beliefs are those which we currently do not doubt. For instance, when walking in the morning from my home to the nearest metro station, I have no doubt whatsoever which is the shortest way to it. But one morning I encountered large roadwork barriers, which made me uncertain whether I could at all reach the metro via the usual route. Such shifts from certainty (full belief) to uncertainty (belief to a lower degree) are common, and we do not conceive them as problematic.

The standard formal representation of beliefs coming in degrees makes use of probability functions. However, although probability-based models provide highly useful accounts of beliefs coming in different degrees, these models are not adapted to also represent full beliefs. The most obvious mode of representation is to assign probability 1 to full beliefs. This has the devastating disadvantage that full beliefs cannot be given up. In standard probability theory, there are no means for reducing the probability of a proposition from 1 to a lower number. Other demarcations of full beliefs, such as identifying them as represented by sentences with a degree of belief above some real-valued limit lower than 1, have other difficulties, as shown in the lottery and preface paradoxes (Kyburg 1961, p. 197; Makinson 1965; Hansson 2018a; Schurz 2019).

This does not mean that we lack formal models of full beliefs. A whole area of formal epistemology, the theory of (dichotomous) belief change, is devoted to the representation of full beliefs (Fermé and Hansson 2018). The problem is that we need different types of models, probability theory and belief change models, to represent the two types of belief. The fact that we combine the two types without problem in our everyday lives, and shift between them without effort, attests to the urgency of finding ways to combine them in one and the same formal model. We need to do this both for theoretical reasons and in order to develop forms of knowledge representation that mirror human reasoning.

Recent work has shown that fairly small changes in standard probability theory are sufficient to allow for a more realistic representation of full beliefs (Hansson 2020; 2022b). The two crucial changes are as follows: First, the codomain of probability functions has to be extended from the real-valued interval [0, 1] to a hyperreal-valued interval with the same limits. Instead of assigning probability 1 to empirical full beliefs, they are then assigned a probability that is infinitesimally smaller than 1. Secondly, revision by a sentence a is performed as a Jeffrey conditionalization that assigns probability \(1-\delta \) to a and probability \(\delta \) to \(\lnot a\), for some infinitesimal \(\delta \). This makes it possible to give up full beliefs, and it allows the probability function to contain the information needed to construct its successor after revision by the negation of a full belief.

In this article, the hyperreal model of probability revision will be applied to one of the major topics in studies of dichotomous belief change, namely how to perform a series of revisions by different sentences (“iterated revision”). We will investigate the changes in full beliefs that result from a series of revisions of a hyperreal probability function. After some formal preliminaries have been dealt with in Sect. 2, our model for probability revision will be presented in Sect. 3, with an emphasis on the changes in full beliefs that it gives rise to. Section 4 reports our main results, namely strong connections between the hyperreal model of probability revision and standard constructions and axioms from the literature on iterated dichotomous belief change. All formal proofs are deferred to an Appendix.

2 Formal Preliminaries

\({\mathfrak {L}}\) is the object language. Its elements are sentences used to express beliefs. They are represented by lowercase letters (\(a, b, \ldots \)), and sets of sentences by capital letters (\(A, B, \ldots \)). The object language is formed from atomic sentences with the usual truth-functional connectives: negation (\( \lnot \)), conjunction (\( \& \)), disjunction (\(\vee \)), implication (\(\rightarrow \)), and equivalence (\(\leftrightarrow \)). \({\scriptstyle \top }\) is a tautology and \({\scriptstyle \perp }\) a logically contradictory sentence.

A Tarskian consequence operation \(\text {Cn}\) expresses the logic. It satisfies the standard conditions: inclusion (\(A \subseteq \text {Cn}(A)\)), monotony (If \(A \subseteq B\), then \( \text {Cn}(A) \subseteq \text {Cn}(B) \)) and iteration (\( \text {Cn}(A) = \text {Cn}( \text {Cn}(A))\)). Furthermore, \( \text {Cn}\) is supraclassical (if a follows from A by classical truth-functional logic, then \(a \in \text {Cn}(A))\) and compact (if \(a \in \text {Cn}(A)\), then there is a finite subset \(A'\) of A such that \(a \in \text {Cn}(A')\)), and it satisfies the deduction property (\(b \in \text {Cn}(A \cup \{ a \}\)) if and only if \(a \rightarrow b\in \text {Cn}(A)\)). \( \text {Cn}( \varnothing )\) is the set of tautologies. \(X \vdash a\) is an alternative notation for \(a \in \text {Cn}(X)\) and \(\vdash a\) for \(a \in \text {Cn}(\varnothing )\).

A set A of sentences is a (consistent) belief set if and only if it is consistent and logically closed, i.e. \(A = \text {Cn}(A)\ne \text {Cn}(\{{\scriptstyle \perp }\})\). K denotes a belief set. The conjunction of all elements of a finite set A of sentences is denoted \(\textit{\&}A\). For all sets A of sentences and all sentences a, the remainder set \(A\perp a\) is the set of inclusion-maximal subsets of A that do not imply a. The maximal consistent sets of the language, i.e. the elements of \({\mathfrak {L}}\perp {\scriptstyle \perp }\), will be called maxisets (thus avoiding the common but misleading term “possible worlds”). For all sentences a, \(|a |\) is the set of a-containing maxisets, i.e. \(|a |= \{X\mid a\in X\in {\mathfrak {L}}\perp {\scriptstyle \perp }\}\).

We will use a system of hyperreal numbers for modelling purposes. A hyperreal number system is an extension of the set R of real numbers. It consists of both finite and infinite numbers, but our focus will be on the finite numbers. The finite hyperreal numbers consist of (1) the real numbers and (2) other numbers on an extended number line, interspersed among the real numbers. The positive hyperreal numbers that are larger than 0 but smaller than all positive real numbers are the positive infinitesimals. The negative infinitesimals are the numbers that are smaller than 0 but larger than all negative real numbers. The construction of the hyperreal number system to be used here is presented in Definition 2 and Postulate 1.

The letters s, t, u, v, x, y, and z represent hyperreal numbers (which may be real). The letters \(\delta \) and \(\varepsilon \) represent numbers that are either 0 or infinitesimal.

Each finite hyperreal number is infinitely close to exactly one real number, which is called its standard part. The standard part of the number t is denoted \(\textsf {st}(t)\). Standard parts satisfy the following rules.

  • \(\textsf {st}(-s)=-\textsf {st}(s)\)

  • \(\textsf {st}(s+t)=\textsf {st}(s)+\textsf {st}(t)\)

  • \(\textsf {st}(s-t)=\textsf {st}(s)-\textsf {st}(t)\)

  • \(\textsf {st}(s\times t)=\textsf {st}(s)\times \textsf {st}(t)\)

  • If \(\textsf {st}(t)\ne 0\) then \(\textsf {st}(s/t)=\textsf {st}(s)/\textsf {st}(t)\)

The symbols \(\approx \) and \(\ll \) are used as follows:

  • \(s\approx t\) if and only if \(\textsf {st}(s) =\textsf {st}(t)\)

  • \(s\ll t\) if and only if \(\textsf {st}(s)<\textsf {st}(t)\)

The set of hyperreal numbers satisfies the same algebraic laws as the real numbers. Let \(\delta \) and \(\varepsilon \) be infinitesimals, and let s and t be finite numbers that are not infinitesimals. Then:

  • \(\delta +\varepsilon \) and \(\delta \varepsilon \) are infinitesimals. If \(s\ne 0\), then \(s\delta \) and \(\delta /s\) are infinitesimals.

  • \(s+\varepsilon \), \(st\) and \(s/t\) are finite and not infinitesimal.

  • \(s+t\) is finite, and it may or may not be infinitesimal.

  • \(\delta /\varepsilon \) may be infinite or finite, and in the latter case it may or may not be infinitesimal.
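These rules for standard parts and orders of magnitude can be illustrated computationally. The following sketch (the class name `Hyper` and the variable `eps` are illustrative, not part of the formal apparatus) represents a finite hyperreal as a truncated polynomial in a fixed infinitesimal; the standard part is then simply the constant coefficient:

```python
from fractions import Fraction

class Hyper:
    """A finite hyperreal sketched as a polynomial sum_k c_k * eps^k
    in a fixed infinitesimal eps (a simplified fragment of the
    number system constructed in Definition 2)."""
    def __init__(self, coeffs):                 # {power: coefficient}
        self.c = {k: Fraction(v) for k, v in coeffs.items() if v != 0}
    def st(self):                               # standard part: the eps^0 term
        return self.c.get(0, Fraction(0))
    def __add__(self, other):
        out = dict(self.c)
        for k, v in other.c.items():
            out[k] = out.get(k, 0) + v
        return Hyper(out)
    def __mul__(self, other):
        out = {}
        for i, a in self.c.items():
            for j, b in other.c.items():
                out[i + j] = out.get(i + j, 0) + a * b
        return Hyper(out)

eps = Hyper({1: 1})            # a first-order infinitesimal
s = Hyper({0: 2})              # the real number 2
t = Hyper({0: 3, 1: 5})        # 3 + 5*eps, with standard part 3

assert (s + t).st() == s.st() + t.st()    # st(s+t) = st(s) + st(t)
assert (s * t).st() == s.st() * t.st()    # st(s*t) = st(s) * st(t)
assert (eps * eps).st() == 0              # a product of infinitesimals
assert (s * eps).st() == 0                # s*delta is infinitesimal
```

The assertions mirror the bulleted rules above: sums and products of standard parts behave as expected, and multiplying a non-infinitesimal by an infinitesimal yields an infinitesimal.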

There is a considerable literature on infinitesimal probabilities. Most of it has employed infinitesimals to attack the problems arising when classical probability theory is applied to infinite domains (event spaces). For instance, a fair lottery with an infinite number of tickets cannot be modelled with real-valued probabilities, but it can be modelled by assigning the same infinitesimal probability to all tickets. In this article, infinitesimals will instead be used for two other purposes, both of which are known from previous literature: (1) we will identify the set of full beliefs with the set of propositions whose probabilities are infinitesimally close to 1, and (2) we will use propositions with infinitesimal probabilities as “memory tracks” of beliefs that have been given up.

For a more extensive introduction to the arithmetic of hyperreals and infinitesimals, readers are referred to Keisler (1986 or 2022). Key references to the literature on infinitesimal probabilities can be found in Hansson (2020, p. 1009).

3 Probability Revision

By a probability revision is meant an operation that takes us from a probability function and an input to a new probability function. Standard probability theory contains no specifically defined operation of probability revision, but the role is filled by conditionalization, with the protasis (a in \({\mathfrak {p}}(d\mid a)\)) serving as input. When we conditionalize a probability function \({\mathfrak {p}}\) by a sentence a such that \({\mathfrak {p}}(a)\ne 0\), we construct a new probability function \({\mathfrak {p}}'\) such that for all sentences d: \( {\mathfrak {p}}'(d)={\mathfrak {p}}(a \& d)/{\mathfrak {p}}(a)\).
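Conditionalization as an operation of change can be sketched concretely. Assuming a probability function represented as a map from (labels of) maxisets to exact rationals, a hypothetical helper `conditionalize` might look as follows:

```python
from fractions import Fraction

def conditionalize(p, a):
    """Standard Bayesian conditionalization.  p maps each maxiset
    (here just a label) to its probability; a is the set of maxisets
    in which the input sentence holds."""
    pa = sum(prob for w, prob in p.items() if w in a)
    if pa == 0:
        raise ValueError("cannot conditionalize on probability 0")
    return {w: (prob / pa if w in a else Fraction(0))
            for w, prob in p.items()}

p = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
q = conditionalize(p, {"w1", "w2"})
# q["w1"] = (1/2)/(3/4) = 2/3, q["w2"] = 1/3, q["w3"] = 0
```

Note that once a maxiset receives probability 0, no further conditionalization can raise it again, which is exactly the difficulty for full beliefs discussed below.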

The standard notation \({\mathfrak {p}}(\;\,\mid a)\) for conditionalization (revision) of \({\mathfrak {p}}\) by a sentence a does not show clearly that it refers to an operation. This notation is impractical for repeated operations of change, such as revising first by \(a_1\) and then by \(a_2\). We will therefore use the notation for operations of change that is used in the literature on dichotomous belief change, where the revision of a belief set K by a proposition \(a_1\) is denoted \(K*a_1\), its revision by first \(a_1\) and then \(a_2\) is denoted \(K*a_1*a_2\), etc. Generalizing this to other types of inputs, we can define probability revision as follows:

Definition 1

An operation of probability revision is an operation \(\circ \) such that for some set \({\mathfrak {I}}\) of inputs, for all probability functions \({\mathfrak {p}}\) and inputs \(i\in {\mathfrak {I}}\), \({\mathfrak {p}}\circ i\) is a probability function.

Thus, \({\mathfrak {p}}\circ i_1\dots \circ i_n\) is the outcome of revising \({\mathfrak {p}}\) in turn by each of \(i_1,\dots , i_n\) in the given order. The input set \({\mathfrak {I}}\) can be a set of sentences, for instance (as in dichotomous belief change) the whole object language. However, there are other options, such as Jeffrey conditionalization inputs, which are pairs consisting of a sentence and its new probability.

To further clarify the notation, we use boldface brackets around compositely notated probability functions; thus we write \(\pmb {(}{\mathfrak {p}}\circ i_1\dots \circ i_n\pmb {)}(d)\) instead of \({\mathfrak {p}}\circ i_1\dots \circ i_n(d)\).

As mentioned in the Introduction, all probability functions are assumed to have a hyperreal interval [0, 1] as their codomain.Footnote 1 Infinitesimal probabilities represent possibilities that are currently treated as negligible, but kept in memory for possible future use. For instance, let \(a_1\) denote that Shakespeare wrote Hamlet and \(a_2\) that Francis Bacon did so, and let \({\mathfrak {p}}(\lnot a_1)=\delta \) and \({\mathfrak {p}}(a_2)=\delta /2\). Then we have \( {\mathfrak {p}}(\lnot a_1 \& a_2)/{\mathfrak {p}}(\lnot a_1)=0.5\), which at least for Bayesians can be a reason to conclude that after revising by \(\lnot a_1\), the epistemic agent would assign a probability around 0.5 to \(a_2\). If we instead had \({\mathfrak {p}}(\lnot a_1)=0\), then the probability function would not carry any information useful for constructing a new probability function after revising by \(\lnot a_1\).

We will have use for infinitesimals of different magnitudes, but we will not need a transfinitely extended hierarchy of infinitesimals. The following definitions and postulate limit the hierarchy of infinitesimals to what is needed:

Definition 2

(Robinson 1973, pp. 88–89; Hammond 1994, pp. 46–47) Let \(\bar{\varepsilon }\) be a hyperreal number such that \(0<n\bar{\varepsilon }<1\) for all positive integers n. \({\mathfrak {F}}\) is the set of fractions of the form

$$\begin{aligned} \dfrac{s_0\times \bar{\varepsilon }^0+s_1\times \bar{\varepsilon }^1+s_2\times \bar{\varepsilon }^2+\dots +s_k\times \bar{\varepsilon }^k}{t_0\times \bar{\varepsilon }^0+t_1\times \bar{\varepsilon }^1+t_2\times \bar{\varepsilon }^2+\dots +t_n\times \bar{\varepsilon }^n} \end{aligned}$$

within the closed hyperreal interval [0, 1], such that \(s_0,\dots ,s_k\) and \(t_0,\dots ,t_n\) are finite series of real numbers and at least one of \(t_0,\dots ,t_n\) is non-zero.

Definition 3

(Hansson 2022a) A hyperreal number \(y\in {\mathfrak {F}}\) is an infinitesimal of the first order (in \({\mathfrak {F}}\)) if and only if \(0\ne y\approx 0\) but there is no \(z \in {\mathfrak {F}}\) such that \(0\ne z\approx 0\) and \(y/z\approx 0\).

An infinitesimal \(y \in {\mathfrak {F}}\) is an infinitesimal of the nth order, for some \(n>1\), if and only if:

  1. (1)

    There is a series \(z_1,\dots ,z_{n-1}\) of non-zero elements of \({\mathfrak {F}}\), such that \(z_1\approx 0\), \(z_k/z_{k-1}\approx 0\) whenever \(1<k\le n-1\) and \(y/z_{n-1}\approx 0\), and

  2. (2)

    There is no series \(z'_1,\dots ,z'_{n}\) of non-zero elements of \({\mathfrak {F}}\), such that \(z'_1\approx 0\), \(z'_k/z'_{k-1}\approx 0\) whenever \(1<k\le n\) and \(y/z'_n\approx 0\).

An infinitesimal is finite-ordered if and only if it is of the \(n^{\text {th}}\) order for some positive integer n.
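For the fractions of Definition 2, the order of an infinitesimal can be read off from the leading powers of \(\bar{\varepsilon }\) in numerator and denominator. The following sketch assumes that representation (the function names are illustrative; coefficient lists `[c0, c1, ...]` stand for \(c_0+c_1\bar{\varepsilon }+\dots \)):

```python
def valuation(coeffs):
    """Index of the first non-zero coefficient of a polynomial in eps."""
    for k, c in enumerate(coeffs):
        if c != 0:
            return k
    return None  # the zero polynomial

def infinitesimal_order(num, den):
    """Order, in the sense of Definition 3, of the fraction
    num(eps)/den(eps), computed as the difference of leading
    eps-powers.  Assumes den is not the zero polynomial (as
    Definition 2 requires).  Returns 0 for a finite non-infinitesimal
    and a negative number for an infinite quotient."""
    vn, vd = valuation(num), valuation(den)
    if vn is None:
        raise ValueError("zero is neither finite-ordered nor infinitesimal")
    return vn - vd

assert infinitesimal_order([0, 0, 1], [1]) == 2      # eps^2 is second-order
assert infinitesimal_order([0, 0, 0, 1], [0, 1]) == 2  # eps^3/eps likewise
assert infinitesimal_order([3, 1], [1]) == 0         # 3 + eps is not infinitesimal
```

In particular, every non-zero infinitesimal in this representation receives a positive integer order, which is the content of Observation 2 below.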

Observation 1

\(\bar{\varepsilon }\) is a first-order infinitesimal.

Observation 2

If \(x\in {\mathfrak {F}}\) and \(0\ne x\approx 0\), then x is finite-ordered.

Postulate 1

Probability functions have the codomain \({\mathfrak {F}}\).

This postulate puts rather strict restrictions on the set of hyperreal numbers that we use in our model. For instance, square roots of first order infinitesimals are left undefined. This may be unsatisfactory from a more general mathematical point of view. However, it is necessary to obtain the hierarchical structure required for our purpose, and it does not hamper the calculus of probabilities.

As a first attempt, one might use standard Bayesian conditionalization for probability revision of hyperreal probabilities. However, this does not work satisfactorily, since after revision by a sentence \(a_1\), the probability of \(\lnot a_1\) will be 0, and it will remain so after any subsequent series of revisions. (Let \(\circ _{\mathrm {B}}\) denote Bayesian revision. Then it holds for any hyperreal probability function \({\mathfrak {p}}\) and any series of sentences \(a_1,\dots ,a_n\) with non-zero probabilities that \(\pmb {(}{\mathfrak {p}}\circ _{\mathrm {B}}a_1\circ _{\mathrm {B}}a_2\dots \circ _{\mathrm {B}}a_n\pmb {)}(\lnot a_1)=0\).) To avoid this problem we can use Jeffrey conditionalization, and always leave an infinitesimal probability that the input does not hold. In this way, the information needed to remove the input sentence in some later revision is retained in the new probability function:

Definition 4

(Hansson 2022b) The hyperreal Bayesian probability revision in a language \({\mathfrak {L}}\) is the operation \(\circ _\delta \) such that for all probability functions \({\mathfrak {p}}\) with \({\mathfrak {L}}\) as domain, all sentences a with \(0<{\mathfrak {p}}(a)<1\), all sentences d, and all \(\delta \) with \(0< \delta \approx 0\):

$$\begin{aligned} \pmb {(}{\mathfrak {p}}\circ _\delta a\pmb {)}(d)=(1-\delta )\times \dfrac{{\mathfrak {p}}(a \,\& \, d)}{{\mathfrak {p}}(a)}+\delta \times \dfrac{{\mathfrak {p}}(\lnot a \,\& \, d)}{{\mathfrak {p}}(\lnot a)} \end{aligned}$$

The index notation \(\circ _\delta \) has been chosen for convenience. In the terminology of Definition 1, \(\circ _\delta \) is an operation with inputs of the form \(\langle \delta ,a\rangle \), with \(0<\delta \approx 0\) and \(a\in {\mathfrak {L}}\). The requirement that \(0<\delta \) has the effect of disallowing the formation of unremovable contingent full beliefs. This limitation can be removed by replacing the condition \(0<\delta \approx 0\) by \(0\le \delta \approx 0\).
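The hyperreal Jeffrey revision just defined can be sketched for a finite set of maxisets. The following fragment is a minimal illustration, not the paper's formal construction: each probability is represented as a power series in a fixed infinitesimal, truncated at order `N`, and all function and variable names are assumptions of the sketch. It assumes \(0<{\mathfrak {p}}(a)<1\) for every input sentence.

```python
from fractions import Fraction

N = 6  # truncation order for eps-expansions (enough for a few revisions)

def padd(p, q):
    return [a + b for a, b in zip(p, q)]

def pscale(c, p):
    return [c * a for a in p]

def pmul(p, q):
    out = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                out[i + j] += a * b
    return out

def pdiv(p, q):
    # Formal power-series division; assumes the valuation of p is at
    # least that of q (true for p(a&d)/p(a), since a&d implies a).
    s = next(k for k, c in enumerate(q) if c != 0)
    p = p[s:] + [Fraction(0)] * s
    q = q[s:] + [Fraction(0)] * s
    out = [Fraction(0)] * N
    for k in range(N):
        out[k] = (p[k] - sum(out[i] * q[k - i] for i in range(k))) / q[0]
    return out

ZERO = [Fraction(0)] * N
ONE = [Fraction(1)] + [Fraction(0)] * (N - 1)
EPS = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)

def prob(p, sentence):
    # Probability of a sentence, given as the set of maxisets where it holds.
    total = ZERO
    for w in sentence:
        total = padd(total, p[w])
    return total

def jeffrey_revise(p, a, delta):
    # Hyperreal Jeffrey revision: a receives probability 1-delta and
    # not-a receives delta, conditionalizing inside each cell.
    not_a = set(p) - a
    pa, pna = prob(p, a), prob(p, not_a)
    one_minus_delta = padd(ONE, pscale(Fraction(-1), delta))
    return {w: pmul(one_minus_delta if w in a else delta,
                    pdiv(p[w], pa if w in a else pna))
            for w in p}

# Three maxisets; w3 initially carries only infinitesimal probability,
# so the sentence false exactly at w3 is a full belief.
p = {"w1": [Fraction(1, 2), Fraction(-1, 2)] + [Fraction(0)] * (N - 2),
     "w2": [Fraction(1, 2), Fraction(-1, 2)] + [Fraction(0)] * (N - 2),
     "w3": EPS}

q = jeffrey_revise(p, {"w3"}, EPS)
# The old full belief is given up: w3 now has standard part 1, while
# w1 and w2 keep non-zero (infinitesimal) probability, as contingency
# retainment requires.
```

Running the sketch, `q["w3"]` is \(1-\delta \) and `q["w1"]`, `q["w2"]` are each \(\delta /2\): the infinitesimal "memory track" of the abandoned beliefs makes a later reversal possible.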

The following definition and observation confirm that the use of Jeffrey conditionalization instead of standard conditionalization has the desired effect.

Definition 5

A probability revision \(\circ \) satisfies contingency retainment if and only if it holds for all probability functions \({\mathfrak {p}}\), all inputs \(i\in {\mathfrak {I}}\), and all sentences d that if \(0\ne {\mathfrak {p}}(d)\ne 1\), then \(0\ne \pmb {(}{\mathfrak {p}}\circ i\pmb {)}(d)\ne 1\).

This definition makes no assumptions about the formal structure of the set \({\mathfrak {I}}\) of potential inputs. Sentences in the object language may be elements of \({\mathfrak {I}}\) (as in conventional probability theory), they may be components of the inputs (as in Jeffrey conditionalization), or they may be totally unrelated to the set of inputs (Hansson 2017, pp. 45–52).

Observation 3

Hyperreal Bayesian probability revision (\(\circ _\delta \)) satisfies contingency retainment.

The following definition and observation introduce the crucial property of hyperreal probability functions that makes them useful for combining probabilistic and full beliefs in one and the same formal framework, namely that the sentences whose probability is at most infinitesimally smaller than 1 form a logically closed belief set.

Definition 6

(Hansson 2020) Let \({\mathfrak {p}}\) be a hyperreal probability function. Then \(\llbracket {\mathfrak {p}}\rrbracket = \{a \mid \textsf {st}({\mathfrak {p}}(a))=1\}\).

Observation 4

(Hansson 2020) Let \({\mathfrak {p}}\) be a hyperreal probability function. Then: \(\llbracket {\mathfrak {p}}\rrbracket = \text {Cn}(\llbracket {\mathfrak {p}}\rrbracket )\) (top closure).

It follows from Observation 4 that our operation \(\circ _\delta \) of probability revision gives rise to a sentential revision (revision of a belief set with sentences as inputs) that only records the effects of \(\circ _\delta \) on the belief sets associated with probability functions. This can be called the “top revision” associated with \(\circ _\delta \), since it only revises the top-ranked (full) beliefs.Footnote 2

Definition 7

(Hansson 2022b) Let \(*\) be a sentential revision on a belief set K in a language \({\mathfrak {L}}\). Then \(*\) is a local top revision on K if and only if there is a probability function \({\mathfrak {p}}\) on \({\mathfrak {L}}\) and some \(\delta \) with \(0< \delta \approx 0\), such that \(\llbracket {\mathfrak {p}}\rrbracket =K\) and \(K*a=\llbracket {\mathfrak {p}}\circ _\delta a\rrbracket \) for all \(a\in {\mathfrak {L}}\).

It was shown in Hansson (2022b) that for finite languages, the top revisions of local (single-step) hyperreal Bayesian probability revisions almost coincide with AGM revision, which is the standard operation of revision in the belief change literature (Alchourrón et al. 1985; Fermé and Hansson 2018). More precisely, they coincide with AGM revision except in the limiting case of revision by an inconsistent sentence.

Definition 8

(Hansson 2022b) Let \(*\) be a sentential operation on a consistent belief set K. Then \(*\) is an AGM\(^\text {C}\) revision (consistent AGM revision) on K if and only if there is an AGM operation \(*'\) on K such that:

  1. (a)

    \(K*a=K*'a\) if \(a\nvdash {\scriptstyle \perp }\), and

  2. (b)

    \(K*a=K\) if \(a\vdash {\scriptstyle \perp }\).

Theorem 1

(Hansson 2022b) Let K be a consistent belief set in a finite language \({\mathfrak {L}}\), and let \(*\) be a sentential operation on K. The following two conditions are equivalent:

  1. (1)

    \(*\) is a local top revision on K, based on a (hyperreal) probability function \({\mathfrak {p}}\) such that \({\mathfrak {p}}(a)=1\) if and only if \(\vdash a\).

  2. (2)

    \(*\) is an AGM\(^C\) revision on K.

In the limiting case that constitutes the difference between AGM and AGM\(^\text {C}\), AGM revision follows the maxim “if instructed to believe in an inconsistency, believe in everything”, whereas AGM\(^\text {C}\) revision adheres to the arguably more plausible maxim “if instructed to believe in an inconsistency, ignore that instruction”.

4 Iterative Revision

An operation on belief states is local if it is only applicable to a specific belief state. It is iterative, or with a more common but less adequate term, “iterated”, if it can be applied to all outcomes that it gives rise to. It is global if it can be applied to all belief states available within its framework. In dichotomous belief change with belief sets serving as belief states, these definitions can be further specified as follows:

Definition 9

  1. (1)

    \(\circ \) is a local sentential operation on a belief set K in \({\mathfrak {L}}\) if and only if it holds for all sentences \(a\in {\mathfrak {L}}\) that \(K\circ a\) is a belief set in \({\mathfrak {L}}\).

  2. (2)

    \(\circ \) is an iterative sentential operation in \({\mathfrak {L}}\) if and only if for all belief sets K in \({\mathfrak {L}}\) and sentences \(a_1, a_2 \in {\mathfrak {L}}\): If \(K\circ a_1\) is a belief set in \({\mathfrak {L}}\), then so is \(K\circ a_1\circ a_2\).

  3. (3)

    \(\circ \) is a global sentential operation on belief sets in \({\mathfrak {L}}\) if and only if it holds for all belief sets K in \({\mathfrak {L}}\) and all sentences \(a\in {\mathfrak {L}}\) that \(K\circ a\) is a belief set in \({\mathfrak {L}}\).

In probability theory, only global operations of revision appear to have been discussed. For instance, both standard conditionalization and Jeffrey conditionalization—including our variant in Definition 4—are global operations. In contrast, the literature on dichotomous belief change usually treats local operations as the basic case. A standard AGM operation of revision applies only to one specific belief set. It can be extended to iterative and global revision with the help of additional constructions, such as a function that assigns an operation of revision to each belief set. The focus on local operations as the basic case seems to depend largely on the history of this tradition; the operations introduced in the original AGM framework were all local (Alchourrón et al. 1985).Footnote 3 For overviews of the major approaches to iterative and global dichotomous belief revision, see Rott (2009), Peppas (2014) and Fermé and Hansson (2018, pp. 59–64).

We now turn to our main issue, namely whether the close connection between our operation of hyperreal probability revision and local AGM revision of belief sets that was reported in Theorem 1 can be extended to global (and thus also iterative) AGM revision. For that purpose, we are going to use the semantic framework for AGM that has been most used in studies of iterative revision, namely ordered partitionings of the maxisets (“possible worlds”) of the object language. The most common variant of this approach is the spheres model, in which the partitioning is represented by a series of concentric sets (“spheres”), such that the intersection of the elements of the smallest sphere is equal to the current belief set, and the largest sphere contains all the maxisets (Lewis 1973; Grove 1988). For formal developments it is convenient to focus on the rings of sphere systems. A ring consists of those elements of some sphere that are not elements of any of the spheres that are its proper subsets (Hansson 2017, p. 201). This approach was pioneered by Katsuno and Mendelzon (1992), whose system of “faithful assignments” lacks the geometrical connotations of sphere systems. The following definition dispenses both with the geometrical connotations of spheres and the metaphysical connotations of possible worlds:

Definition 10

An ordered maxiset partitioning (OMP) for the finite language \({\mathfrak {L}}\) is a vector \(\langle R_0,\dots ,R_n\rangle \) of sets of maxisets such that:

  1. (1)

    \(R_0\cup \dots \cup R_n={\mathfrak {L}}\perp {\scriptstyle \perp }\), and

  2. (2)

    \(R_k\cap R_m=\varnothing \) whenever \(0\le k<m\le n\).

The belief set associated with an OMP is the intersection of its top-ranked maxisets:

Definition 11

Let \(\langle R_0,\dots , R_n\rangle \) be an ordered maxiset partitioning. Then

\(\Vert \langle R_0,\dots , R_n\rangle \Vert =\bigcap R_0\)

is the belief set associated with \(\langle R_0,\dots , R_n\rangle \).
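Definitions 10 and 11 can be made concrete in a small sketch. Here each maxiset is modelled as the frozenset of atomic sentences it contains, and the belief set is restricted to atoms (taking the role of \(\bigcap R_0\)); all names are illustrative:

```python
def omp_belief_set(omp):
    """Belief set associated with an OMP (in the spirit of Definition
    11), restricted to atomic sentences: the atoms shared by all
    maxisets in the top-ranked ring R0."""
    return set.intersection(*[set(m) for m in omp[0]])

# Two atoms p and q; four maxisets, identified by the atoms they
# contain, arranged in three rings:
omp = [
    [frozenset({"p", "q"}), frozenset({"p"})],  # R0: most plausible
    [frozenset({"q"})],                          # R1
    [frozenset()],                               # R2: least plausible
]
assert omp_belief_set(omp) == {"p"}
```

Since both top-ranked maxisets contain the atom p but only one contains q, the associated belief set (at the atomic level) is {p}.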

Particularly interesting operations of revision on OMPs are those that, when revising by a sentence a, move up a-containing maxisets and/or move down \(\lnot a\)-containing maxisets, but keep the internal order and distance within each of those two groups of maxisets unchanged. In order for the revision to be successful, it must be the case that if a is consistent, then a is an element of all top-positioned maxisets in the new OMP that results from the revision.Footnote 4 This brings us to the following operation of iterative revision for OMPs:

Definition 12

The operation of translational revision on ordered maxiset partitionings is the operation \(\circledast _v\) such that for any ordered maxiset partitioning \(\langle R_0,\dots , R_n\rangle \), positive integer v and consistent sentence a:

$$\begin{aligned} \langle R_0,\dots , R_n\rangle \circledast _v a=\langle R'_0,\dots , R'_{n'}\rangle , \end{aligned}$$

such that for all \(X\in {\mathfrak {L}}\perp {\scriptstyle \perp }\):

  1. (1)

    If \(a\in X\in R_k\), then \(X\in R'_{k-m}\) where m is the lowest number such that \(|a|\cap R_m\ne \varnothing \), and

  2. (2)

    If \(\lnot a\in X\in R_k\), then \(X\in R'_{k-{\overline{m}}+v}\) where \({\overline{m}}\) is the lowest number such that \(|\lnot a|\cap R_{\overline{m}}\ne \varnothing \).

Just as for \(\circ _\delta \), the index notation for \(\circledast _v\) was chosen for convenience. Its inputs have the form \(\langle v,a\rangle \), where v is a positive integer and a is a consistent sentence. By specifying v in different ways, various operations of revision for OMPs can be constructed that have the consistent sentences in \({\mathfrak {L}}\) as their input set.Footnote 5

Each OMP corresponds to a sphere system \(\langle S_0,\dots , S_n\rangle \), such that \(S_k=\bigcup \{R_0,\dots ,R_k\}\) for all k with \(0\le k\le n\). It follows directly from Definition 12 that

  • \(\Vert \langle R_0,\dots , R_n\rangle \circledast _v a\Vert =\bigcap (|a|\cap R_m)=\bigcap (|a|\cap S_m)\),

where \(R_m\) is the lowest-numbered element of \(\langle R_0,\dots , R_n\rangle \) whose intersection does not contain \(\lnot a\), and \(S_m\) is the lowest-numbered element of \(\langle S_0,\dots , S_n\rangle \) whose intersection does not contain \(\lnot a\). Since \(\bigcap (|a|\cap S_m)\) is the outcome of revision by a in a sphere system, it follows that the local restriction (one-step usage) of translational revision coincides with both AGM and AGM\(^\text {C}\) for consistent inputs.
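Translational revision itself admits a compact sketch in the same atomic-maxiset style as above. This is an illustration under stated assumptions, not the formal definition: the input sentence is given extensionally as the set of maxisets satisfying it, and `translational_revise` is an assumed helper name:

```python
def translational_revise(omp, a, v):
    """Translational revision, sketched: maxisets in which the input
    holds move up so that the best such ring becomes ring 0; the
    others move so that the best not-a ring becomes ring v.  Here a
    is the set of maxisets satisfying the input sentence."""
    # m: lowest ring containing an a-maxiset (a is consistent, so it exists)
    m = next(k for k, ring in enumerate(omp) if any(x in a for x in ring))
    # mbar: lowest ring containing a not-a maxiset (default irrelevant
    # when a holds in every maxiset)
    mbar = next((k for k, ring in enumerate(omp)
                 if any(x not in a for x in ring)), 0)
    new = {}
    for k, ring in enumerate(omp):
        for x in ring:
            idx = k - m if x in a else k - mbar + v
            new.setdefault(idx, []).append(x)
    return [new.get(k, []) for k in range(max(new) + 1)]

omp = [
    [frozenset({"p", "q"})],                    # R0
    [frozenset({"p"}), frozenset({"q"})],       # R1
    [frozenset()],                              # R2
]
# Revise by "not p", i.e. by the maxisets that lack the atom p:
not_p = {x for ring in omp for x in ring if "p" not in x}
new_omp = translational_revise(omp, not_p, 1)
# The new top ring holds the best not-p maxiset, so q is believed
# while p has been given up.
```

On the example, the top ring of `new_omp` is `[frozenset({"q"})]`: revising by \(\lnot p\) retains the belief in q, in line with the sphere-system outcome \(\bigcap (|a|\cap S_m)\).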

The following theorem connects our (global) hyperreal probability revision \(\circ _\delta \) with (global) translational revision \(\circledast _v\) in essentially the same way as Theorem 1 connects the local restriction of \(\circ _\delta \) with the local operation AGM\(^\text {C}\):

Theorem 2

Let \(\circ _\delta \) be the hyperreal Bayesian probability revision and \(\circledast _v\) the translational revision on OMPs, both in the finite language \({\mathfrak {L}}\).

  1. (1)

    Let \({\mathfrak {p}}\) be a (hyperreal) probability function with \({\mathfrak {L}}\) as domain. Then there is an OMP \(\langle R_0,\dots ,R_m\rangle \) in \({\mathfrak {L}}\), such that for all consistent sentences \(a_1,\dots ,a_n\) in \({\mathfrak {L}}\) and finite-order infinitesimals \(\delta _1,\dots ,\delta _n\) there are positive integers \(v_1,\dots ,v_n\) with:

$$\begin{aligned} \llbracket {\mathfrak {p}}\circ _{\delta _1}a_1\dots \circ _{\delta _n}a_n\rrbracket =\Vert \langle R_0,\dots ,R_m\rangle \circledast _{v_1}a_1\dots \circledast _{v_n}a_n\Vert \end{aligned}$$

  2. (2)

    Let \(\langle R_0,\dots ,R_m\rangle \) be an OMP in \({\mathfrak {L}}\). Then there is a (hyperreal) probability function \({\mathfrak {p}}\) with \({\mathfrak {L}}\) as domain, such that for all consistent sentences \(a_1,\dots ,a_n\) in \({\mathfrak {L}}\) and positive integers \(v_1,\dots ,v_n\) there are finite-order infinitesimals \(\delta _1,\dots ,\delta _n\) with:

$$\begin{aligned} \llbracket {\mathfrak {p}}\circ _{\delta _1}a_1\dots \circ _{\delta _n}a_n\rrbracket =\Vert \langle R_0,\dots ,R_m\rangle \circledast _{v_1}a_1\dots \circledast _{v_n}a_n\Vert \end{aligned}$$

The four postulates introduced by Darwiche and Pearl (1997, p. 11) have a central role in the discussion on iterated belief changeFootnote 6:

  • If \(a_1\vdash a_2\), then \(K* a_2*a_1= K*a_1\) (DP1)

  • If \(a_1\vdash \lnot a_2\), then \(K*a_2*a_1= K*a_1\) (DP2)

  • If \(K*a_1\vdash a_2\), then \(K*a_2*a_1\vdash a_2\) (DP3)

  • If \(K*a_1\nvdash \lnot a_2\), then \(K*a_2*a_1\nvdash \lnot a_2\) (DP4)

The following observation shows that close analogues of these four postulates hold for the top revision obtained from our probability revision \(\circ _\delta \).

Observation 5

Let \({\mathfrak {p}}\) be a probability function, \(a_1\) and \(a_2\) consistent sentences and \(\delta \), \(\delta '\) and \(\varepsilon \) finite-order infinitesimals. Then:

  • If \(a_1\vdash a_2\), then \(\llbracket {\mathfrak {p}}\circ _{\delta }a_2\circ _{\delta '}a_1\rrbracket =\llbracket {\mathfrak {p}}\circ _{\varepsilon }a_1\rrbracket \) (DP1)

  • If \(a_1\vdash \lnot a_2\), then \(\llbracket {\mathfrak {p}}\circ _{\delta }a_2\circ _{\delta '}a_1\rrbracket =\llbracket {\mathfrak {p}}\circ _{\varepsilon }a_1\rrbracket \) (DP2)

  • If \(\llbracket {\mathfrak {p}}\circ _{\varepsilon }a_1\rrbracket \vdash a_2\), then \(\llbracket {\mathfrak {p}}\circ _{\delta }a_2\circ _{\delta '}a_1\rrbracket \vdash a_2\) (DP3)

  • If \(\llbracket {\mathfrak {p}}\circ _{\varepsilon }a_1\rrbracket \nvdash \lnot a_2\), then \(\llbracket {\mathfrak {p}}\circ _{\delta }a_2\circ _{\delta '}a_1\rrbracket \nvdash \lnot a_2\) (DP4)

Observation 5 serves to strengthen the connections between probability revision and the dominant approach to (iterative) changes in full beliefs. However, this connection does not prove that the models thus connected with each other adequately reflect the properties of either actual or ideally rational patterns of belief change. The Darwiche–Pearl postulates have the status of a gold standard for iterated revision, and they have been shown to be compatible with requirements of syntax splitting that limit the effects of a revision on beliefs that are irrelevant for the input sentence (Kern-Isberner and Brewka 2017). However, there is also much criticism of these postulates, largely based on counter-examples (Konieczny and Pino Pérez 2000, p. 352; Jin and Thielscher 2007, p. 6 and p. 14; Stalnaker 2009, pp. 205–206; Hansson 2017, pp. 39–42; Aravanis et al. 2019). The search for plausible models of epistemic change should continue. The results presented here indicate that a stronger focus should be put on the construction of models of iterative change that account for changes both in full beliefs and in beliefs that come in lower degrees.