Belief Revision and Verisimilitude Based on Preference and Truth Orderings
G.R. Renardel de Lavalette & S.D. Zwart
Erkenntnis (2011) 75: 237. doi:10.1007/s10670-011-9293-z
Abstract
In this rather technical paper we establish a useful combination of belief revision and verisimilitude according to which better theories provide better predictions, and revising with more verisimilar data results in theories that are closer to the truth. Moreover, this paper presents two alternative definitions of refined verisimilitude, which are more perspicuous than the algebraic version used in previous publications.
1 Introduction
In this paper we develop a formal framework that unifies Darwiche and Pearl’s iterated belief revision (Darwiche and Pearl 1997) and refined verisimilitude as defined by Zwart (2001). Both belief revision and verisimilitude profit from this unification. On the one hand, the unification provides an answer to the epistemic problem of refined verisimilitude; on the other hand, it shows that belief revision behaves properly under the addition of truth, in the following sense: revising knowledge bases that contain false information with true information leads to more verisimilar knowledge bases. It even turns out that revising with better information leads to more verisimilar theories.
As Laudan famously put it:

None of the proponents of realism has yet articulated a coherent account of approximate truth which entails that approximately true theories will, across the range where we can test them, be successful predictors.
In this paper we show that a formal framework in terms of preference and similarity relations on possible worlds helps to establish a coherent answer to Laudan’s challenge. According to our framework, more verisimilar theories provide more successful, i.e. more verisimilar, predictions and explanations. This ‘downward path’ from more general theories towards concrete predictions had already been established by the definition of refined verisimilitude given in Zwart (2001). Moreover, in the present framework, revising with possibly false but more verisimilar observations results in more verisimilar theories, provided that the preference relation regarding other possible empirical evidence is similar to the real verisimilitude order of this evidence. This version of the ‘upward path’, going from concrete evidence towards the more general theory, does not provide a guaranteed method to come closer to the truth, as we are not familiar with the verisimilitude order of possible empirical evidence. It does, however, formulate and satisfy a welcome condition for a fruitful combination of verisimilitude and belief revision.
The formal framework used to establish our results is built on finite Boolean algebras (i.e. Lindenbaum algebras of propositional logic), the atoms of which correspond to models (possible worlds). We consider linear preorders on the collection of these worlds. They play the role of preference relations in the definition of (iterated) belief revision in the AGM style (Sect. 5). In the definition of (refined) verisimilitude in Sect. 6, we shall use these linear preorders as similarity functions.
The rest of the paper is structured as follows. In the next section we first introduce the main ideas behind research into belief revision and verisimilitude. Moreover, we describe an example of Sven Ove Hansson which nicely illustrates the intentions of our exercise; we return to this example in Sect. 7 to illustrate the mechanisms of our framework. Section 3 is mainly dedicated to the introduction of propositional logic in the form of Boolean algebra, the basic framework of our work. In Sect. 4 we introduce the main formal apparatus of the paper: preferences and preference orders (a dual of epistemic entrenchment). In the subsequent Sects. 5 and 6 we show how (iterated) belief revision and (refined) verisimilitude are defined in our framework. In particular, the new definitions of refined verisimilitude, previously defined algebraically in Zwart (2001), turn out to be useful. In the penultimate Sect. 7, we show how neatly existing belief revision and refined verisimilitude fit together and successfully meet Laudan’s challenge. We end in Sect. 8 with the conclusions and prospects for future research.
2 Informal Exposition of Verisimilitude and Belief Revision
Investigations into verisimilitude started within the school of scientific realism in philosophy of science. In 1963, Popper proposed a formal definition of the idea that a given (possibly false) theory can be more similar to “the true theory” than another, competing (possibly false) theory. The research project of verisimilitude really got off the ground only when it was discovered that Popper’s definition in fact failed to compare any two nonequivalent false theories—which had precisely been the main aim of the definition.
The main subject of the verisimilitude project is formulated in relation to a formal language that is assumed to include a (usually) complete empirical truth τ. If τ is complete, any synthetic sentence is either a consequence of τ, or implies \(\lnot \tau. \) Within the context of classical, non-modal logic, empirical incompleteness of τ implies that the underlying language comprises non-referring propositional variables. Given a language, verisimilitude investigations concern two questions. The first question reads: how are we to define the similarity between an arbitrary theory in the language and the true theory τ? In answering this question about the definition of verisimilitude, we may assume familiarity with τ. The second question reads: when confronted with two different (scientific) theories, how are we to find out which of the two is more verisimilar, i.e. closer to the truth τ? This epistemic question of verisimilitude is the more practical one. Obviously, when we formulate the answer to the epistemic question, we may not assume that we know the true theory τ. When we have to decide whether the mechanics of Descartes or that of Newton is more verisimilar, we do not know the full truth, although we may know some elements of it, e.g. some very reliable observations made by different researchers. To date, no generally accepted solution to the epistemic problem of verisimilitude exists. In the present paper we will show that belief revision provides an appropriate answer to the epistemic question for refined verisimilitude.
The problem with the AGM postulates is that they only provide one-step revisions. After a revision of K by \(\varphi\) has taken place, the AGM postulates fail to indicate how one should arrive at a new entrenchment relation \(\leqslant_{K\ast\varphi}.\) They therefore fail to indicate how to execute a next revision of \(K\ast\varphi\) by ψ. This drawback of the AGM approach was readily remarked upon, and in 1997 Darwiche and Pearl added four postulates for iterated revisions. The AGM postulates

… exhaust what can be said about revisions and contraction in logical and set-theoretical terms only. This means that we must seek further information about the epistemic status of the elements of a knowledge state to solve the uniqueness problem.
For our purposes, iterated revisions are unavoidable. The gist of the combination of belief revision and verisimilitude is the answer to the question how theories behave under revision in the long run, and for that, one-step revisions are insufficient. What we want to show is that iterated truthful revisions of theories far from the truth eventually end up with theories that are much closer to the truth.
For the purposes mentioned, we prefer to use the formulation of belief revision in terms of preferences as presented by Grove (1988). This approach is dual to entrenchment: \(\varphi\) is preferred to \(\psi (\varphi < \psi)\) iff \(\neg\psi\) is better entrenched than \(\neg\varphi (\neg\varphi <^{\mathsf{e}} \neg\psi)\).
Let us consider a simple example, due to Sven Ove Hansson (private conversation), to illustrate how the result of belief revision depends on the preference.
Example 1
(Hansson) Let the present body of knowledge κ be \(P_0 \wedge \neg P_1.\) How are we to revise κ when confronted with new evidence \(\varphi = P_0\leftrightarrow P_1\)? Here the idea of preference (or its dual, epistemic entrenchment) comes in. If (i) \(\neg P_0 \wedge \neg P_1\) is preferred to \(P_0 \wedge P_1\), i.e. \(\neg P_0 \wedge \neg P_1 < P_0 \wedge P_1, \) then \(\kappa * \varphi = \neg P_0 \wedge \neg P_1; \) but if (ii) \(P_0 \wedge P_1 < \neg P_0 \wedge \neg P_1\), then \(\kappa * \varphi = P_0 \wedge P_1. \) And if (iii) \(P_0 \wedge P_1\) and \(\neg P_0 \wedge \neg P_1\) are equally preferred, then \(\kappa * \varphi = P_0 \leftrightarrow P_1. \) Now if the truth is supposed to be \(P_0 \wedge P_1\), then revision (ii) is better than (i), since it leads us closer to the truth.
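The three revisions can be reproduced in a small computational sketch. The encoding of worlds as boolean pairs, the function name `revise`, and the concrete rank values are our own illustrative choices (lower rank = more preferred), not the paper’s notation:

```python
# Illustrative sketch of Example 1 (encoding ours, not the paper's):
# worlds over P_0, P_1 are boolean pairs; a preference function maps each
# world to a rank (lower = more preferred); revising by evidence phi
# keeps the most preferred models of phi.
def revise(p, phi):
    """Models of phi with minimal rank under preference function p."""
    best = min(p[w] for w in phi)
    return {w for w in phi if p[w] == best}

TT, TF, FT, FF = (True, True), (True, False), (False, True), (False, False)
phi = {TT, FF}                      # evidence  P_0 <-> P_1

# (i)   ~P_0 & ~P_1 preferred to P_0 & P_1:
assert revise({TT: 2, TF: 0, FT: 3, FF: 1}, phi) == {FF}
# (ii)  P_0 & P_1 preferred to ~P_0 & ~P_1:
assert revise({TT: 1, TF: 0, FT: 3, FF: 2}, phi) == {TT}
# (iii) both equally preferred: the revision is P_0 <-> P_1 itself.
assert revise({TT: 1, TF: 0, FT: 3, FF: 1}, phi) == {TT, FF}
```

In case (iii) the two models of \(\varphi\) tie, so the revision retains both, which corresponds to the formula \(P_0 \leftrightarrow P_1\).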
The example concerns only a one-step revision, but is readily expanded to iterated revisions. To handle iterated revisions, one should indicate how existing preference relations are to be updated after a revision. It turned out that the most practical way to formulate the combination of iterated revisions and verisimilitude was in terms of preference relations. The reason being, of course, that iterated revisions and verisimilitude all come down to the comparison of orders and the revision of orders into new ones.
3 Formal Preliminaries
As our logical basis, we take classical (two-valued) propositional logic over a finite collection of propositional variables \(\mathsf{PVAR} = \{P_0,\ldots,P_{n-1}\}\). On the collection \(\mathsf{FORM}\) of formulae, we have the usual entailment relation \(\vdash\) and the logical equivalence relation ≡. In the sequel, language L refers to the triple \( \langle \mathsf{FORM}, \vdash, \equiv\rangle.\)
As is well known, \(\mathsf{FORM}\) is (modulo logical equivalence) isomorphic to the Boolean algebra \({\mathsf{BA}}(n)\) over n generators. Recall that \({\mathsf{BA}}(n)\) is finite but large, having \(2^{2^{n}}\) elements. Since \(\mathsf{FORM}\) is finite (modulo logical equivalence), every knowledge base or theory, i.e. every collection of formulae, is equivalent to a single formula: the conjunction of its (finitely many non-equivalent) elements. We will use this property throughout this paper, representing theories and knowledge bases by single formulae.
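Because everything here is finite, the setup is directly executable. The following sketch (representation and names ours, not the paper’s) treats a formula as its set of models, so that entailment becomes set inclusion:

```python
# Minimal finite-propositional-logic sketch: atoms of BA(n) are truth
# assignments over n variables, represented as n-tuples of booleans; a
# theory (one formula, modulo equivalence) is the set of atoms that
# satisfy it, and entailment is set inclusion of model sets.
from itertools import product

def atoms(n):
    """All 2**n atoms (possible worlds) over P_0 .. P_{n-1}."""
    return [bits for bits in product((False, True), repeat=n)]

def entails(phi, psi):
    """phi |- psi iff every model of phi is a model of psi."""
    return phi <= psi          # theories represented as frozensets of atoms

n = 2
ATOMS = atoms(n)
assert len(ATOMS) == 2 ** n                              # BA(n) has 2**n atoms
assert entails(frozenset({ATOMS[0]}), frozenset(ATOMS))  # an atom entails top
```

The \(2^{2^{n}}\) elements of \({\mathsf{BA}}(n)\) then correspond exactly to the subsets of the \(2^n\) atoms.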
4 Preferences
Some situations are preferred to others: this simple consideration is the basis for preferences. We identify situations with models, hence with atoms, so preference can be modeled as an order relation \(\leqslant \) on atoms.^{1} Lifted to formulae, a preference order is a relation \(\leqslant \) on \(\mathsf{FORM}\) satisfying:
\(\leqslant \) is a total preorder;
\(\leqslant \) subsumes \(\dashv\) (the inverse of \(\vdash\)): \(\varphi \vdash \psi\) implies \(\psi \,\leqslant\, \varphi; \)
\(\leqslant \) is disjunctive: \(\varphi \,\leqslant\, \varphi \vee \psi\) or \(\psi \,\leqslant\, \varphi \vee \psi; \)
\(\leqslant \)-maximal formulae are inconsistent: if \(\psi \,\leqslant\, \varphi\) for all ψ, then \(\varphi \vdash \bot. \)
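Under the assumption (ours, for illustration) that the order is generated by a preference function p on atoms, lifted to a formula by taking the minimum of p over its models (with \(p(\bot) = \infty\)), the listed properties can be checked exhaustively for n = 2:

```python
# Sketch (encoding ours): a preference function p on atoms induces a
# preference order on formulae via the minimum over models; the listed
# properties are then verified by brute force over all 16 formulae.
from itertools import chain, combinations, product
import math

ATOMS = list(product((False, True), repeat=2))   # worlds over P_0, P_1
p = {a: i for i, a in enumerate(ATOMS)}          # an arbitrary preference

def rank(phi):
    """p lifted to a formula phi, given as its set of models."""
    return min((p[a] for a in phi), default=math.inf)

def leq(phi, psi):                               # phi <= psi
    return rank(phi) <= rank(psi)

# every formula over P_0, P_1, represented by its set of models:
formulas = [frozenset(s) for s in chain.from_iterable(
    combinations(ATOMS, k) for k in range(len(ATOMS) + 1))]

for phi in formulas:
    for psi in formulas:
        if phi <= psi:                   # phi |- psi (set inclusion)
            assert leq(psi, phi)         # <= subsumes the inverse of |-
        assert leq(phi, phi | psi) or leq(psi, phi | psi)  # disjunctivity
    if all(leq(psi, phi) for psi in formulas):
        assert phi == frozenset()        # <=-maximal formulae are inconsistent
```

Totality of the preorder is immediate, since ranks are numbers and any two are comparable.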
5 Belief Revision
Traditionally (see Alchourrón et al. (1985)), belief revision is defined in terms of an epistemic entrenchment order \(\leqslant^{\mathsf{e}}. \) The idea is that the degree of epistemic entrenchment determines which of two beliefs \(\varphi\) and ψ has to be given up when their combination has become untenable: in a situation where \(\varphi \wedge\psi\) is inconsistent and \(\varphi <^{\mathsf{e}} \psi\) (i.e. ψ is more entrenched than \(\varphi\)), it is preferred to give up \(\varphi\) and retain ψ. The knowledge set K associated with \(\leqslant^{\mathsf{e}}\) is defined as the collection of the non-\(\leqslant^{\mathsf{e}}\)-minimal formulae.
In the case of a finite logic, \(\leqslant^{\mathsf{e}}\) can be defined straightforwardly in terms of the dual atoms (i.e. negations of atoms). This approach has been dualized by Grove (1988), replacing dual atoms by atoms, and \(\leqslant^{\mathsf{e}}\) by its dual: a preference order \(\leqslant \) as described in Sect. 4 that satisfies \(\varphi \,\leqslant^{\mathsf{e}} \psi\) iff \(\neg\varphi \, \leqslant\, \neg\psi. \) Since the use of atoms is (in our eyes) more intuitive than dual atoms, we adopt Grove’s representation of belief revision based on preference order and preference functions. Since our logic is finite, the knowledge set K can be replaced by its conjunction \(\kappa = \bigwedge K. \) When \(\leqslant \) equals \(\leqslant \)_{p}, the preference order generated by preference function p, then the conjunction κ = κ_{p} of the knowledge set associated with \(\leqslant \)_{p} equals \(\mathsf{form}(p), \) the disjunction of the most preferred atoms (i.e. with minimal p-value).
Observe that, in (1), the revision operator * is in fact an operation on a preference p and a formula \(\varphi, \) although the notation \(\kappa_p * \varphi\) suggests that it is a binary operation on formulae. As a consequence, belief revision in this form cannot be iterated: for the proper definition of \((\kappa_p * \varphi) * \psi, \) the preference function associated with \(\kappa_p * \varphi\) is required.
Traditional belief revision as defined in (1) can be defined in terms of preference revision by \(\kappa_p * \varphi = \mathsf{form}(p) * \varphi = \mathsf{form}(p *_i \mathsf{pref}(\varphi))\,(i = 1,2,3,4)\), where the result does not depend on i: we have in all cases \(\kappa_p * \varphi = \bigvee \{ \alpha \mid \alpha \vdash \varphi \;\&\; p(\alpha)=p(\varphi) \}.\)
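The displayed formula is easy to implement; to iterate, one also needs a revised preference function. The sketch below uses Boutilier-style ‘natural’ revision as one concrete update rule, purely as an illustration on our part — the paper’s four rules \(*_1,\ldots,*_4\) are not reproduced here:

```python
# Sketch of kappa_p * phi (most p-preferred models of phi), plus one
# standard preference-update rule (Boutilier's "natural" revision, an
# assumption of ours) that makes a second revision well defined.
def revise_formula(p, phi):
    """form(p * phi): the most p-preferred models of the evidence phi."""
    best = min(p[a] for a in phi)
    return {a for a in phi if p[a] == best}

def natural_revise(p, phi):
    """New preference: the best phi-worlds drop to rank 0, the rest shift up."""
    best = revise_formula(p, phi)
    return {a: (0 if a in best else p[a] + 1) for a in p}

TT, TF, FT, FF = (True, True), (True, False), (False, True), (False, False)
p = {TT: 2, TF: 0, FT: 3, FF: 1}
phi = {TT, FF}                               # evidence P_0 <-> P_1

assert revise_formula(p, phi) == {FF}        # one-step revision
p1 = natural_revise(p, phi)                  # updated preference function
assert revise_formula(p1, phi) == {FF}       # a second revision is now defined
```

The point of the second call is exactly the observation above: without an updated preference function, \((\kappa_p * \varphi) * \psi\) would be undefined.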
6 Verisimilitude
The starting point for verisimilitude is the strongest empirically true theory τ expressible in L. Normally, τ is assumed to be complete, in the sense that any contingent sentence is either a consequence of τ, when it is empirically true, or implies \( \lnot \tau\) when it is false. To stay as general as possible, however, our formal framework does allow for the degenerate case in which the truth is incomplete and \(\tau\not\in {\mathsf{ATOM}}.\) Note that in such cases, language L is inadequate in the sense that it comprises propositional variables that nature neither verifies nor falsifies. After fixing L and its strongest empirical truth τ, we set out to obtain a relation \(\leqslant^{\mathsf{v}}\) (and its strict version \(<^{\mathsf{v}}\)) of verisimilitude, where \(\varphi <^{\mathsf{v}} \psi\) expresses that \(\varphi\) is more verisimilar (‘closer to the truth’) than ψ.
As for similarity between atoms, we shall obtain a verisimilitude order on \(\mathsf{FORM}\) from a total preorder on atoms. So let some similarity function \({ t : {\mathsf{ATOM}} \rightarrow \mathbb{N}}\) be given. We assume that the t-minimal atoms are precisely the atoms of τ, i.e. \({\mathsf{atom}}(\tau) = \{ \alpha \mid t(\alpha) = 0 \}\). A natural example of a similarity based on τ is t = λ α. d_{H}(α, τ) where d_{H} is the Hamming distance between atoms, introduced at the end of Sect. 4. So t(α) = 0 if \(\alpha \in {\mathsf{atom}}(\tau); \) in general, t(α) is the minimal number of changes in α (i.e. adding or removing a negation sign) required to obtain an atom in τ.
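The Hamming-distance similarity can be sketched as follows (encoding as boolean tuples ours); with \(\tau = P_0 \wedge P_1\) it yields exactly the t-row of the table in Example 2:

```python
# Sketch of t = lambda alpha. d_H(alpha, tau): the minimal Hamming
# distance from a world alpha to any atom of tau, i.e. the minimal
# number of literal flips taking alpha into atom(tau).
def hamming(a, b):
    """Number of positions where the boolean tuples a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def t(alpha, tau_atoms):
    """Similarity t(alpha): minimal Hamming distance to an atom of tau."""
    return min(hamming(alpha, beta) for beta in tau_atoms)

tau_atoms = {(True, True)}                   # tau = P_0 & P_1
assert t((True, True), tau_atoms) == 0       # atoms of tau have t = 0
assert t((True, False), tau_atoms) == 1      # one literal flip
assert t((False, False), tau_atoms) == 2     # both literals flipped
```

Note that the minimum over \({\mathsf{atom}}(\tau)\) also covers the incomplete case, where τ has more than one atom.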
Requirement (2) is a rather natural requirement for an ordering relation. The content condition (3) and the likeness condition (4) have been discussed extensively in Zwart (2001), Zwart and Franssen (2007). Niiniluoto (1987, p. 233) introduces the truth content criterion M7, which comes down to the content condition (3) restricted to false sentences ψ. In addition, Niiniluoto introduces the similarity condition M6, which is very similar to (4): if the similarity between atom α and the truth is larger than that between β and the truth, the former is more verisimilar. This natural condition lies at the heart of likeness approaches, such as Niiniluoto’s minsum measure in Niiniluoto (1987).
The consequence condition (5) reads: if \(T \leqslant^{\mathsf{v}} T^{\prime}, \) then (i) for every consequence \(\varphi^{\prime}\) of T′ there is a consequence \(\varphi\) of T with \(\varphi \,\leqslant^{\mathsf{v}}\, \varphi^{\prime}, \) and (ii) for every consequence \(\varphi\) of T there is a consequence \(\varphi^{\prime}\) of T′ with \(\varphi \,\leqslant^{\mathsf{v}}\, \varphi^{\prime}. \)

In slogan: the better theory has the better consequences. As (5) is a necessary condition for successfully meeting Laudan’s challenge, it is a desirable property for verisimilitude. In our framework of a Boolean algebra, empirical predictions may be represented by the weakest nontrivial consequences of a theory, viz. its dual atoms \(\delta = (\neg)P_0 \vee \dots \vee (\neg)P_{n-1}. \) This enables us to show that the better/worse theory simply has the better/worse consequences, even in the sense that the better of two false theories has the better dual atoms, i.e. empirical consequences. Something similar is out of reach for other content definitions of verisimilitude, such as Miller’s (1978) and Kuipers’ symmetric difference definition (Kuipers 2000, p. 151): in their definitions, all false atoms (i.e. atoms not in \({\mathsf{atom}}(\tau)\)) are incomparable regarding their similarity to the truth.
Finally, let us consider contraposition (6). In Zwart and Franssen (2007), it turned out that contraposition defines an important watershed in the verisimilitude literature. It forces \(\lnot {\tau}\) to be the worst sentence if τ is the truth, such that τ and \( \lnot \tau \) become the lower and upper limit of all sentences in the language. By doing so, contraposition demarcates the difference between likeness and content definitions of verisimilitude, since according to likeness definitions this upper limit is τ*, the atom of the language with maximal distance from τ. Consequently, contraposition (6) is a desirable property for content definitions of verisimilitude. Here, it turns out to be one of the reasons that refined verisimilitude fits the framework of belief revision so closely.
In a first attempt to satisfy these properties, we define two order relations. The first is an attempt to realize the content condition (3), the second aims to satisfy the likeness condition (4).
Definition 1
Here t-increasing means: if \(\alpha \in {\mathsf{dom}}(f)\) then \(t(\alpha)\,\leqslant\, t(f(\alpha))\). We call the f in (7) a witness for \(\varphi \preccurlyeq \psi. \)
Remark
Lemma 1
(properties of \(\sqsubseteq\) and \(\preccurlyeq\)) \(\sqsubseteq\) and \(\preccurlyeq\) are preorders, satisfy contraposition, and commute with \(\vdash\) and \(\dashv\). Moreover, \(\sqsubseteq\) satisfies the content condition and \(\preccurlyeq\) the likeness condition, but not the other way round.
Proof
See the Appendix. \(\square\)
So neither \(\sqsubseteq\) nor \(\preccurlyeq\) satisfies all requirements (2–6). Let us try to combine both candidates. The next lemma says that the order of composition is irrelevant.
Lemma 2
(commuting relations) \(\sqsubseteq\) and \(\preccurlyeq\) commute, i.e. \((\sqsubseteq \cdot \preccurlyeq) = (\preccurlyeq \cdot \sqsubseteq).\)
Proof
See the Appendix. \(\square\)
Lemma 2 enables us to combine \(\sqsubseteq\) and \(\preccurlyeq\) into a stronger overarching verisimilitude concept.
Definition 2
Since both \(\sqsubseteq\) and \(\preccurlyeq\) are reflexive, they are both subsumed in \(\leqslant^{\mathsf{rv}}. \) Moreover, this notion of verisimilitude has all desired properties:
Theorem 1
(properties of refined verisimilitude) \(\leqslant^{\mathsf{rv}}\) is a preorder that satisfies the content and the likeness condition, contraposition, and commutes with \(\vdash\) and \(\dashv\).
Proof
\(\sqsubseteq\) and \(\preccurlyeq\) are reflexive, so we have \((\sqsubseteq) \cup (\preccurlyeq) \subseteq (\leqslant^{\mathsf{rv}}). \) As a consequence, \(\leqslant^{\mathsf{rv}}\) inherits reflexivity and the content condition from \(\sqsubseteq, \) and the likeness condition from \(\preccurlyeq. \) Transitivity, contraposition and commuting with \(\vdash\) and \(\dashv\) follow from Lemma 2 and the corresponding properties of \(\sqsubseteq\) and \(\preccurlyeq.\) \(\square\)
We give two alternative definitions of \(\leqslant^{\mathsf{rv}}.\)
Lemma 3
- 1.
\(\varphi \,\leqslant^{\mathsf{rv}}_t\, \psi\) iff there is a t-increasing injection \(f : {\mathsf{atom}}(\varphi \wedge \neg\psi \wedge \neg\tau) \rightarrow {\mathsf{atom}}(\psi \wedge \neg\varphi \wedge \neg\tau). \)
- 2. \(\leqslant^{\mathsf{rv}}_t\) is the least relation \(\leqslant^{\prime}\) satisfying
$$ \varphi \vdash \tau \;\Rightarrow\; \varphi \hbox{ is } \leqslant^{\prime}\hbox{-minimal (i.e. } \varphi \,\leqslant^{\prime} \psi \hbox{ for all } \psi) \quad (9) $$
$$ \hbox{for all atoms } \alpha,\beta:\; t(\alpha) \,\leqslant\, t(\beta) \;\Rightarrow\; \alpha \,\leqslant^{\prime} \beta \quad (10) $$
$$ \hbox{if } \varphi \,\leqslant^{\prime}\, \psi,\; \varphi^{\prime} \,\leqslant^{\prime}\, \psi^{\prime} \hbox{ and } \psi \wedge \psi^{\prime} \vdash \bot, \hbox{ then } \varphi \vee \varphi^{\prime} \,\leqslant^{\prime}\, \psi \vee \psi^{\prime} \quad (11) $$
Proof
See the Appendix. \(\square\)
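Lemma 3.1 reduces the comparison to the existence of a t-increasing injection between two finite atom sets, and this is decidable by a simple sorting argument: matching the k-th largest source t-value to the k-th largest target t-value is optimal, so such an injection exists iff the sorted source list is dominated pointwise by the sorted target list. A sketch (names ours):

```python
# Decision procedure (ours) for the injection test of Lemma 3.1: an
# injection f with t(alpha) <= t(f(alpha)) exists iff, after sorting
# both t-value lists in descending order, each source value is bounded
# by the corresponding target value.
def t_increasing_injection_exists(src_ts, tgt_ts):
    """src_ts, tgt_ts: lists of t-values of the source/target atom sets."""
    src = sorted(src_ts, reverse=True)
    tgt = sorted(tgt_ts, reverse=True)
    return len(src) <= len(tgt) and all(a <= b for a, b in zip(src, tgt))

assert t_increasing_injection_exists([], [1])          # the empty map is a witness
assert t_increasing_injection_exists([1, 2], [2, 3])
assert not t_increasing_injection_exists([2], [1])     # t may not decrease
assert not t_increasing_injection_exists([1, 1], [2])  # too few targets
```

Necessity follows because if the k-th largest source value exceeded the k-th largest target value, at most k − 1 targets could serve the k largest sources.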
7 Combining Belief Revision and Verisimilitude
- 1.
If theory \(\mathsf{form}(p)\) and evidence \(\varphi\) are false (i.e. \(\tau \vdash \neg\mathsf{form}(p)\) and \(\tau \vdash \neg\varphi\)), then according to Lemma 3 the similarity condition of (12) simply reduces to the existence of a t-increasing injection \(f : {\mathsf{atom}}(\mathsf{form}(p*\varphi)) \rightarrow {\mathsf{atom}}(\mathsf{form}(p)). \)
- 2.
If theory \(\mathsf{form}(p)\) is false but evidence \(\varphi\) is true (i.e. \(\tau \vdash \neg\mathsf{form}(p)\) and \(\tau \vdash\varphi\)), then according to Lemma 3 the similarity condition of (12) reduces to the existence of a t-increasing injection \(f : {\mathsf{atom}}(\mathsf{form}(p*\varphi) \wedge \neg\tau) \rightarrow {\mathsf{atom}}(\mathsf{form}(p)). \)
- 3.
If, on the contrary, theory \(\mathsf{form}(p)\) is true and evidence \(\varphi\) is false (i.e. \(\tau \vdash \mathsf{form}(p)\) and \(\tau \vdash \neg\varphi\)), then according to Lemma 3 the similarity condition of (12) reduces to the existence of a t-increasing injection \(f : {\mathsf{atom}}(\mathsf{form}(p*\varphi)) \rightarrow {\mathsf{atom}}(\mathsf{form}(p) \wedge \neg\tau). \)
The previous cases spell out the exact sense in which one’s preference order should be similar, in the sense of having resemblance or likeness, to the verisimilitude order in order to guarantee improvement of verisimilitude after revision. Perhaps the second case best fits the situation of actual scientific research, where the theory used is probably false but the observations may be (approximately) true due to careful measurements. To illustrate this case, let us reconsider Hansson’s example of Sect. 2.
Example 2
| | \(P_0 \wedge P_1\) | \(P_0 \wedge \neg P_1\) | \(\neg P_0 \wedge P_1\) | \(\neg P_0 \wedge \neg P_1\) |
|---|---|---|---|---|
| t | 0 | 1 | 1 | 2 |
| p_1 | 1 | 0 | 2 or 3 | 2 |
| p_2 | 2 | 0 | 2 or 3 | 1 |
| p_3 | 1 | 0 | 2 or 3 | 1 |
| \(\varphi\) | 0 | 1 | 1 | 0 |
It follows that \(\mathsf{form}(p_i) = P_0 \wedge \neg P_1\) for i = 1, 2, 3, and \( \mathsf{form}(p_1*\varphi ) = P_0 \wedge P_1, \mathsf{form}(p_2*\varphi ) = \neg P_0 \wedge \neg P_1\) and finally \( \mathsf{form}(p_3*\varphi ) = P_0 \leftrightarrow P_1. \) The injection \(f : {\mathsf{atom}}(\mathsf{form}(p_1*\varphi) \wedge \neg\tau) \rightarrow {\mathsf{atom}}(\mathsf{form}(p_1))\) is an empty function and hence t-increasing, so \(\mathsf{form}(p_1 * \varphi) \,\leqslant^{\mathsf{rv}}\, \mathsf{form}(p_1), \) i.e. the revision is at least as verisimilar as the original theory. For i = 2, 3, there is no t-increasing injection \(f : {\mathsf{atom}}(\mathsf{form}(p_i*\varphi) \wedge \neg\tau) \rightarrow {\mathsf{atom}}(\mathsf{form}(p_i)), \) since \(t(\neg P_0 \wedge \neg P_1) = 2 > 1 = t(P_0 \wedge \neg P_1) \).
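The computations of this example can be replayed mechanically (encoding ours; for the ambiguous table entry “2 or 3” we pick 2, which affects none of the results, since \(\neg P_0 \wedge P_1\) is never minimal and is not a model of \(\varphi\)):

```python
# Replaying Example 2 (encoding ours): tau = P_0 & P_1, evidence
# phi = P_0 <-> P_1, similarity t and preferences p_1..p_3 as tabled.
TT, TF, FT, FF = (1, 1), (1, 0), (0, 1), (0, 0)
t = {TT: 0, TF: 1, FT: 1, FF: 2}
prefs = {1: {TT: 1, TF: 0, FT: 2, FF: 2},
         2: {TT: 2, TF: 0, FT: 2, FF: 1},
         3: {TT: 1, TF: 0, FT: 2, FF: 1}}
phi = {TT, FF}                         # evidence P_0 <-> P_1
tau = {TT}                             # atom(tau)

def form(p, worlds):
    """Most p-preferred worlds among the given set."""
    best = min(p[w] for w in worlds)
    return {w for w in worlds if p[w] == best}

def t_injection_exists(src, tgt):
    """Is there a t-increasing injection from src into tgt?"""
    s = sorted((t[w] for w in src), reverse=True)
    g = sorted((t[w] for w in tgt), reverse=True)
    return len(s) <= len(g) and all(a <= b for a, b in zip(s, g))

for i, p in prefs.items():
    assert form(p, set(p)) == {TF}     # form(p_i) = P_0 & ~P_1 (false theory)
# only the first revision is at least as verisimilar as the original:
assert t_injection_exists(form(prefs[1], phi) - tau, {TF})
assert not t_injection_exists(form(prefs[2], phi) - tau, {TF})
assert not t_injection_exists(form(prefs[3], phi) - tau, {TF})
```

For \(p_1\) the source set \({\mathsf{atom}}(\mathsf{form}(p_1*\varphi) \wedge \neg\tau)\) is empty, so the empty injection witnesses the improvement, exactly as in the text.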
We observe that the conclusion holds trivially when \(\varphi\) has no models of \( \neg \tau, \) i.e. when \(\varphi \vdash \tau. \) But when \(\varphi \not\vdash \tau, \) (13) requires that there are only few \( \neg \tau\)-models for \(\varphi\) and that they are preferred to the \( \neg \tau\)-models of \(\mathsf{form}(p). \)

The revision \(p \mapsto p * \varphi\) is a step in the good direction (\(\mathsf{form}(p * \varphi) \leqslant^{\mathsf{rv}}_t \mathsf{form}(p)\)) when the number of \( \neg \tau\)-models of \(\varphi\) is at most the number of \( \neg \tau\)-models of \(\mathsf{form}(p), \) and they are t-preferred to them.
The revision \(p \mapsto p * \varphi\) is a step in the wrong direction (\(\mathsf{form}(p) <^{\mathsf{rv}}_t \mathsf{form}(p * \varphi)\)) when the number of models of \(\mathsf{form}(p)\) is smaller than the number of \( \neg \tau\)-models of \(\varphi, \) and they are t-preferred to them.
Finally, we look at the situation in which t = p, so \(\mathsf{form}(p) = \tau. \) In that case \(\mathsf{form}(t * \varphi)\,\leqslant^{\mathsf{rv}}_t \tau\) iff the revision is conservative, i.e. \(\varphi \wedge \tau \not\vdash \bot\) and hence \(\mathsf{form}(t * \varphi) = \tau \wedge \varphi. \)
8 Discussion and Conclusions
In the foregoing we have seen that (iterated) AGM belief revision and refined verisimilitude fit together reasonably well. What does this mean philosophically? From the viewpoint of belief revision it means that the epistemic entrenchment approach is in a good position to be extended with considerations of truth and even verisimilitude. For refined verisimilitude, our results imply that it has found an established and well-studied answer to its accompanying epistemic question. Our approach even puts forward a formal underpinning of the intuitively plausible idea that the better theory has the better consequences, and in doing so it successfully meets Laudan’s challenge. What is the reason that the two approaches fit so well, in contrast to other attempts to combine verisimilitude and belief revision? The answer is connected to the way refined verisimilitude and AGM belief revision are constructed. Content definitions of verisimilitude are defined in terms of logical content and truth-value, and the same holds true for AGM belief revision. In both approaches strength prevails over the ordering of worlds. The described match between belief revision and refined verisimilitude provides an important external argument in favor of the refined content approach.^{2}
Future research in this direction has to address at least the following three issues. Firstly, we would like to explore more extensively the differences between the four rules of iterated belief revision (see Sect. 5), and the different ways in which they approach the truth. Secondly, we would like to investigate whether our framework can be extended to languages with infinitely many formulae. Thirdly, we would like to address the question of the completeness of the empirical truth (in other words: whether τ is an atom or not). We observe that our results do not require the truth to be complete. If the truth is incomplete, some sentences do not acquire a definite truth-value. We would like to find out whether this lack of truth-value of some sentences, together with the application of iterated belief revision to refined verisimilitude, could assist in deciding on the appropriateness of languages for some area of research.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.