
Probabilistic Belief Contraction


Abstract

Probabilistic belief contraction has been a much-neglected topic in the field of probabilistic reasoning, owing to the difficulty of establishing a reasonable reversal of the effect of Bayesian conditionalization on a probability distribution. We show that indifferent contraction, a solution to this problem proposed by Ramer through a judicious use of the principle of maximum entropy, is a probabilistic version of full meet contraction. We then propose variations of indifferent contraction, using both the Shannon and the Hartley entropy measures, with the aim of avoiding the excessive loss of beliefs that full meet contraction entails.


Notes

  1. Ratio rule: \(prob(h|e) = \frac{prob(h \wedge e)}{prob(e)}\) when prob(e) > 0 and is not defined otherwise.

  2. This is known as the Levi identity: \(\mathcal{K} * \alpha = (\mathcal{K} - \neg\alpha) + \alpha\) where \(\mathcal{K}\) is a belief set, α is a sentence and +,  − and ∗ represent expansion, contraction and revision, respectively (Gärdenfors 1988).

  3. It is argued that a transitively relational partial meet contraction does not strictly adhere to these principles, and that only a severe withdrawal operation does (Rott and Pagnucco 1999).

  4. A consequence operator Cn is ‘classical’ if it satisfies Inclusion (\(A\subseteq Cn(A)\)), Closure (\(Cn(Cn(A))\subseteq Cn(A)\)) and Monotony (if \(A\subseteq B\) then \(Cn(A)\subseteq Cn(B)\)).

  5. Such a probability function is termed a unit function. A more detailed discussion is given in (Makinson 2011).

  6. Let \(\mathcal{K}\) be a belief set and α the belief being contracted. Then \(\mathcal{K}\bot\alpha\) denotes the set of all maximal subsets of \(\mathcal{K}\) that do not imply α. A partial meet contraction is defined as the intersection of the elements of \(\sigma(\mathcal{K}\bot\alpha)\), where σ is a selection function that picks out a subset of \(\mathcal{K}\bot\alpha\). When σ is such that \(\sigma(\mathcal{K}\bot\alpha) = \{B\in\mathcal{K}\bot\alpha : B'\leq B\;\text{for all}\; B'\in\mathcal{K}\bot\alpha\}\), where ≤ is a transitive relation over \(2^\mathcal{K}\), the resulting partial meet contraction is a transitively relational partial meet contraction (Alchourrón et al. 1985).

  7. In this work, we use logarithms to the base 2.

  8. This principle is also referred to as the principle of insufficient reason.

  9. The α-worlds contribute a value of \(dH(P) - d\log d\) towards the entropy of the resultant probability distribution. Since \(\log d\) is negative (as 0 < d < 1), \(dH(P) - d\log d\) is greater than \(dH(P)\); a numerical check is sketched after these notes.

  10. It must be noted that this selection function is different from the selection function referred to earlier in this paper, in the "Classification of Contraction Functions" section.

  11. Minimal contraction can be shown to satisfy C6 when the following additional constraint is placed on the selection function: For any set \(\mathcal{A}\subseteq\Upomega\), if \(\mathcal{S}(\mathcal{A})\in\mathcal{A}'\), where \(\mathcal{A}'\subseteq\mathcal{A}\), then \(\mathcal{S}(\mathcal{A}') = \mathcal{S}(\mathcal{A})\). This constraint presents a simple notion of the agent’s preferences, i.e., if ω is the preferred element in \(\mathcal{A}\), then it is also the preferred element in any subset of \(\mathcal{A}\) to which it belongs.

  12. A total preorder relation is a connected, reflexive and transitive relation. A total preorder relation over \(\Upomega\) can be pictorially represented as a system of concentric spheres, where the innermost sphere houses the minimal worlds according to the total preorder relation and the enclosing spheres are built up hierarchically based on the relation (Grove 1988).
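The following Python sketch (our illustration, not part of the original paper) makes the arithmetic in notes 7 and 9 concrete: it computes base-2 Shannon entropy and checks numerically that the α-worlds' contribution \(dH(P) - d\log d\) exceeds \(dH(P)\) whenever 0 < d < 1. The particular distribution and the value of m are arbitrary choices made for the example.

```python
import math

def shannon_entropy(probs):
    """H(P) = -sum_i p_i * log2(p_i), ignoring zero-probability worlds (note 7)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical probabilities of the alpha-worlds; they sum to 1 since P(alpha) = 1.
P_alpha = [0.5, 0.25, 0.25]
H = shannon_entropy(P_alpha)

m = 3                  # assumed cardinality of [not-alpha], chosen for illustration
h = 2 ** H             # h = 2^{H(P)}
d = h / (h + m)        # scaling factor used by indifferent contraction, so 0 < d < 1

# Entropy contribution of the alpha-worlds after each p_i is scaled by d:
contribution = -sum(d * p * math.log2(d * p) for p in P_alpha if p > 0)
# Algebraic identity from note 9: the contribution equals d*H(P) - d*log2(d) ...
assert abs(contribution - (d * H - d * math.log2(d))) < 1e-12
# ... and it exceeds d*H(P), since -d*log2(d) > 0 when 0 < d < 1.
assert contribution > d * H
```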

References

  • Aczél, J., Forte, B., & Ng, C. T. (1974). Why the Shannon and Hartley entropies are ‘natural’. Advances in Applied Probability, 6 (1), 131–146.


  • Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50, 510–530.


  • Boutilier, C. (1995). On the revision of probabilistic belief states. Notre Dame Journal of Formal Logic, 36.

  • Chan, H., & Darwiche, A. (2005). On the revision of probabilistic beliefs using uncertain evidence. Artificial Intelligence, 163(1), 67–90.


  • Csiszár, I. (2008). Axiomatic characterizations of information measures. Entropy, 10(3), 261–273.


  • de Finetti, B. (1974). Theory of probability (Vols. I & II). Wiley (reprinted 1990).

  • Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics, 38(2), 325–339.

  • Diaconis, P., & Zabell, S. L. (1982). Updating subjective probability. Journal of the American Statistical Association, 77(380), 822–830.


  • Friedman, N., & Halpern, J. Y. (1999). Belief revision: A critique. Journal of Logic, Language and Information, 8(4), 401–420.


  • Gärdenfors, P. (1986). The dynamics of belief: Contractions and revisions of probability functions. Topoi, 5(1), 29–37.


  • Gärdenfors, P. (1988). Knowledge in flux: Modeling the dynamics of epistemic states. Cambridge, Massachusetts: Bradford Books, MIT Press.


  • Gärdenfors, P., & Sahlin, N. -E. (1982). Unreliable probabilities, risk taking, and decision making. Synthese, 53, 361–386.


  • Gray, R. (1990). Entropy and information theory. Berlin: Springer.


  • Grove, A. (1988). Two modellings for theory change. Journal of Philosophical Logic, 17, 157–170.


  • Grove, A., & Halpern, J. (1998) . Updating sets of probabilities. In Proceedings of the fourteenth annual conference on uncertainty in artificial intelligence (uai-98), (pp. 173–180). San Francisco, CA: Morgan Kaufmann.

  • Grove, A., Halpern, J., & Koller, D. (1994). Random worlds and maximum entropy. Journal of Artificial Intelligence Research, 2, 33–88.


  • Hailperin, T. (1988). The development of probability logic from Leibniz to MacColl. History and Philosophy of Logic, 9(2), 131–191.

  • Hájek, A. (2001). Probability, logic, and probability logic. In The Blackwell guide to philosophical logic (pp. 362–384). Blackwell.

  • Harper, W. L. (1976). Rational conceptual change. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1976, pp. 462–494.


  • Hartley, R. V. L. (1928). Transmission of information. Bell System Technical Journal, 7, 535–563.

  • Hosiasson-Lindenbaum, J. (1940). On confirmation. The Journal of Symbolic Logic, 5(4), 133–148.


  • Jaynes, E. T. (1983). Where do we stand on maximum entropy? In R. D. Rosenkrantz (Ed.), Papers on probability, statistics and statistical physics (pp. 210–314). Dordrecht, Holland: D. Reidel Publishing Company.


  • Jeffrey, R. (1983). Bayesianism with a human face. In J. Earman (Ed.), Testing scientific theories (pp. 133–156). Minneapolis: University of Minnesota Press.


  • Kern-Isberner, G. (1998). Characterizing the principle of minimum crossentropy within a conditional-logical framework. Artificial Intelligence, 98(1–2), 169–208.


  • Kern-Isberner, G., & Lukasiewicz, T. (2004). Combining probabilistic logic programming with the power of maximum entropy. Artificial Intelligence, 157(1–2), 139–202.


  • Kern-Isberner, G., & Rödder, W. (2004). Belief revision and information fusion on optimum entropy. International Journal of Intelligent Systems, 19(9), 837–857.


  • Koopman, B. O. (1940). The axioms and algebra of intuitive probability. The Annals of Mathematics, 41(2), 269–292.


  • Kyburg, H. E. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.


  • Levi, I. (1967). Gambling with truth. New York: Knopf.

  • Levi, I. (1974). On indeterminate probabilities. The Journal of Philosophy, 71(13), 391–418.


  • Levi, I. (1977). Subjunctives, dispositions and chances. Synthese, 34, 423–455.


  • Levi, I. (1999). Are full beliefs abstractions from credal states? In Spinning ideas: Electronic essays dedicated to Peter Gärdenfors on his fiftieth birthday. http://www.lucs.lu.se/spinning/index.html.

  • Lindström, S., & Rabinowicz, W. (1989). On probabilistic representation of non-probabilistic belief revision. Journal of Philosophical Logic, 18, 69–101.


  • Makinson, D. (2011). Conditional Probability in the Light of Qualitative Belief Change. Journal of Philosophical Logic, 40, 121–153.


  • Olsson, E. J. (1995). Can information theory help us remove outdated beliefs? In Lund university cognitive studies, vol. 36.

  • Paris, J. B. (1994). The uncertain reasoner’s companion: A mathematical perspective. Cambridge: Cambridge University Press.


  • Popper, K. (1959). The logic of scientific discovery, 2nd edn. New York: Basic Books.


  • Ramer, A. (2003). Belief revision as combinatorial optimization. In B. Bouchon-Meunier, L. Foulloy, & R. R. Yager (Eds.), Intelligent systems for information processing (pp. 253–264). Elsevier.

  • Rényi, A. (1955). On a new axiomatic theory of probability. Acta Mathematica Hungarica, 6, 285–335.


  • Rott, H., & Pagnucco, M. (1999). Severe withdrawal (and recovery). Journal of Philosophical Logic, 28, 501–547.


  • Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Champaign: University of Illinois Press.


  • Shore, J. E., & Johnson, R. W. (1980). Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. Information Theory, IEEE Transactions on, 26(1), 26–37.


  • van Fraassen, B. C. (1976). Representation of conditional probabilities. Journal of Philosophical Logic, 5(3), 417–430.

  • van Fraassen, B. C. (1980). Rational belief and probability kinematics. Philosophy of Science, 47(2), 165–187.


  • van Fraassen, B. C. (1995). Fine-grained opinion, probability, and the logic of full belief. Journal of Philosophical Logic, 24(4).

  • Walley, P. (1991). Statistical reasoning with imprecise probabilities. London: Chapman and Hall.



Acknowledgement

We thank the reviewer for the comments and suggestions that have helped to improve this paper.

Author information

Corresponding author: Raghav Ramachandran.

Appendix: Proofs

Theorem 1

Indifferent contraction satisfies all the postulates of contraction C1 to C6 and CO.

Proof

Let P be the probability distribution that represents the agent’s initial belief state. Let α be a belief of the agent, that is, P(α) = 1. We further assume that α is a non-trivial belief, that is, \(\nvdash\alpha\). The result of indifferent contraction of P by α is denoted by \(P^{-}_{\alpha}\).

Let the cardinalities of \(\Upomega\), [α], \([\neg\alpha]\) and the support of P be n, l, m, and l′, respectively. Suppose the initial distribution is as follows:

$$ P=\langle p_{1},\ldots,p_{l},\underbrace{0,\ldots,0}_{m}\rangle. $$

The result of indifferent contraction is then given by the following:

$$ P^{-}_{\alpha}= \langle dp_{1},\ldots, dp_{l}, \underbrace{\frac{1}{h+m},\ldots,\frac{1}{h+m}}_{m}\rangle, $$

where \(h = 2^{H(P)}\) and \(d=\frac{h}{h+m}\). From Definition 1 it follows that indifferent contraction satisfies the postulate C1. Clearly 0 < d < 1. For every \(\omega\in[\alpha]\), we have \(P^{-}_{\alpha}(\omega) = d\,P(\omega)\). The resultant probability of α is given by \(P^{-}_{\alpha}(\alpha) = \sum_{\omega\in[\alpha]} d\,P(\omega) = d\). Thus \(P^{-}_{\alpha}(\alpha) < 1\), satisfying postulate C2.

When \(\vdash\alpha\leftrightarrow\beta\), we have [α] = [β] and \([\neg\alpha]=[\neg\beta]\). Indifferent contraction of P by α is such that \(P^{-}_{\alpha}(\omega) = d\,P(\omega)\) when \(\omega\in[\alpha]\), and similarly indifferent contraction of P by β is such that \(P^{-}_{\beta}(\omega) = d\,P(\omega)\) when \(\omega\in[\beta]\), where \(d = \frac{h}{h+m}\) and m is the cardinality of \([\neg\alpha]\) as well as that of \([\neg\beta]\). Thus we have \(P^{-}_{\alpha}(\omega) = P^{-}_{\beta}(\omega)\) when \(\omega\in[\alpha]\) (or when \(\omega\in[\beta]\)). Also, \(P^{-}_{\alpha}(\omega) = \frac{1}{h+m} = P^{-}_{\beta}(\omega)\) for \(\omega\in[\neg\alpha]\) (or \(\omega\in[\neg\beta]\)). Thus \(P^{-}_{\alpha}(\omega) = P^{-}_{\beta}(\omega)\) for every \(\omega\in\Upomega\), and hence \(P^{-}_{\alpha} = P^{-}_{\beta}\), satisfying the postulate C3.

From Definition 1, it is clear that indifferent contraction satisfies the postulate C4. Now let us consider C5. Suppose P(α) = 1. By Definition 1, conditionalizing \(P^{-}_{\alpha}\) by α gives P. Since expansion is modelled by Bayesian conditionalization, we have \((P^{-}_{\alpha})^{+}_{\alpha} = P\), so indifferent contraction satisfies the postulate C5.

As a result of contraction by \(\alpha\wedge\beta\), we have \([\neg\alpha\vee\neg\beta]\subseteq\|P^{-}_{\alpha\wedge\beta}\|\). Hence \(P^{-}_{\alpha\wedge\beta}(\neg\alpha)>0\). Conditionalizing \(P^{-}_{\alpha\wedge\beta}\) by \(\neg\alpha\) gives a uniform distribution over \([\neg\alpha]\), which is also the result of conditionalizing \(P^{-}_{\alpha}\) by \(\neg\alpha\). Hence, indifferent contraction satisfies C6.

The entropy of \(P^{-}_{\alpha}\), the result of indifferent contraction, is \(\log(2^{H(P)} + m)\). Since log is a monotonic function, \(\log(2^{H(P)}) < \log(2^{H(P)} + m)\), where m, the cardinality of \([\neg\alpha]\), is a positive integer (since we have assumed that α is not a theorem). Thus \(H(P^{-}_{\alpha}) \geq H(P)\), satisfying the postulate CO. \(\square\)
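To make the construction used in this proof concrete, here is a minimal Python sketch (our illustration, not the authors' implementation) of indifferent contraction over a small set of worlds, together with a check of the recovery behaviour behind postulate C5. The encoding of worlds as list indices and the particular distribution are assumptions made for the example.

```python
import math

def shannon_entropy(P):
    # H(P) = -sum_i p_i * log2(p_i), with base-2 logarithms (note 7).
    return -sum(p * math.log2(p) for p in P if p > 0)

def indifferent_contraction(P, not_alpha):
    """Scale the alpha-worlds by d = h/(h+m) and give each world in [not-alpha]
    probability 1/(h+m), where h = 2^{H(P)} and m = |[not-alpha]|."""
    h = 2 ** shannon_entropy(P)
    m = len(not_alpha)
    d = h / (h + m)
    return [1.0 / (h + m) if i in not_alpha else d * p
            for i, p in enumerate(P)]

# Example: four worlds, [alpha] = {0, 1}, [not-alpha] = {2, 3}.
P = [0.75, 0.25, 0.0, 0.0]
P_minus = indifferent_contraction(P, {2, 3})
assert abs(sum(P_minus) - 1.0) < 1e-9

# Postulate C5: conditionalizing the contracted state on alpha recovers P.
mass_alpha = sum(p for i, p in enumerate(P_minus) if i not in {2, 3})
recovered = [0.0 if i in {2, 3} else p / mass_alpha
             for i, p in enumerate(P_minus)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, P))
```

Conditionalizing on α simply rescales the α-worlds by 1/d, which is exactly the recovery argument used for C5 above.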

Theorem 2

An indifferent contraction is a full meet contraction.

Proof

Let P be the initial distribution representing the agent’s belief state and \(\mathcal{K}\) be the corresponding set of beliefs, given by Eq. 1. Let α be a belief of the agent, that is, P(α) = 1. The result of indifferent contraction of P by α is denoted by \(P^{-}_{\alpha}\) and the corresponding belief set is denoted by \(\mathcal{K}_{\alpha}^{-}\).

A probabilistic contraction function is said to be a full meet contraction (Gärdenfors 1988) when it satisfies the postulates C1 to C5 and the associated contracted belief set \(\mathcal{K}^{-}_{\alpha}\) is such that \([\mathcal{K}^{-}_{\alpha}] = [\mathcal{K}]\cup[\neg\alpha]\).

Theorem 1 shows that indifferent contraction satisfies postulates C1 to C5. From Eq. 1, we get that \(\mathcal{K}^{-}_{\alpha} = \{\beta\in\mathcal{L} : P^{-}_{\alpha}(\beta) = 1\}\), that is, \(\mathcal{K}^{-}_{\alpha} = \{\beta\in\mathcal{L} : \|P^{-}_{\alpha}\|\subseteq[\beta]\}\). Since, as a result of indifferent contraction, every \(\neg\alpha\)-world is assigned positive probability, we have \([\neg\alpha]\subseteq\|P^{-}_{\alpha}\|\). Since \(\|P^{-}_{\alpha}\| = \|P\| \cup [\neg\alpha]\), it follows that \([\mathcal{K}^{-}_{\alpha}] = [\mathcal{K}]\cup[\neg\alpha]\). Hence indifferent contraction is a full meet contraction. \(\square\)

Theorem 3

A submaximal entropy contraction is a full meet contraction.

Proof

Let P be the initial distribution representing the belief state of the agent and α be a belief, that is, P(α) = 1. Submaximal entropy contraction proportionally reduces the probability assigned to the α-worlds and assigns equal non-zero probability to every \(\neg\alpha\)-world.

A probabilistic contraction function is said to be a full meet contraction (Gärdenfors 1988) when it satisfies the postulates C1 to C5 and the associated contracted belief set \(\mathcal{K}^{-}_{\alpha}\) is such that \([\mathcal{K}^{-}_{\alpha}] = [\mathcal{K}]\cup[\neg\alpha]\).

Submaximal entropy contraction satisfies postulates C1 and C5 by definition (Definition 2). It is also evident from Definition 2 that submaximal entropy contraction satisfies postulate C4. The contracted probability distribution \(P^{-}_{\alpha}\) is uniform over the set \([\neg\alpha]\), assigning non-zero probability to every \(\neg\alpha\)-world. Thus \(P^{-}_{\alpha}(\alpha) < 1\), satisfying postulate C2.

When \(\vdash\alpha\leftrightarrow\beta\), then [α] = [β] and \([\neg\alpha] = [\neg\beta]\). As a result of contraction by α, the probability associated with every α-world is scaled by a non-zero value d, determined by the minimization problem, and the mass 1 − d is spread uniformly over \([\neg\alpha]\). The same happens when contracting by β. Hence submaximal entropy contraction satisfies the postulate C3.

The support of the contracted probability distribution \(P^{-}_{\alpha}\) is such that \(\|P^{-}_{\alpha}\| = \|P\|\cup[\neg\alpha]\). This is the same as in the case of indifferent contraction. Hence, along the same lines as Theorem 2, submaximal entropy contraction is a full meet contraction. \(\square\)

Theorem 4

A minimal contraction based on a selection function \(\mathcal{S}\) is a maxichoice contraction.

Proof

Let P be the initial distribution representing the agent’s belief state and \(\mathcal{K}\) be the corresponding set of beliefs, given by Eq. 1. Let α be a belief of the agent, that is, P(α) = 1. The result of minimal contraction of P by α is denoted by \(P^{-}_{\alpha}\) and the corresponding belief set is denoted by \(\mathcal{K}_{\alpha}^{-}\).

A probabilistic contraction function is said to be a maxichoice contraction when it satisfies the postulates C1 to C5 and the associated contracted belief set \(\mathcal{K}_{\alpha}^{-}\) is such that \([\mathcal{K}_{\alpha}^{-}] = [\mathcal{K}]\cup \{\omega\}\) for some \(\omega\in[\neg\alpha]\).

Minimal contraction results in a probability distribution where the probabilities associated with the α-worlds are proportionally reduced to very small values. Minimal contraction, we assume, is equipped with some mechanism to choose a world from the set \([\neg\alpha]\). The chosen world is assigned a non-zero probability close to 1. From Definition 4 it follows that minimal contraction satisfies postulates C1 and C4. For every \(\omega\in[\alpha]\), we have \(P^{-}_{\alpha}(\omega) = d\,P(\omega)\), where d is very small. The resultant probability of α is given by \(P^{-}_{\alpha}(\alpha) = \sum_{\omega\in[\alpha]} d\,P(\omega) = d\). Thus \(P^{-}_{\alpha}(\alpha) < 1\), satisfying postulate C2.

Suppose α and β are two beliefs such that \(\vdash \alpha\leftrightarrow\beta\); then \([\neg\alpha]=[\neg\beta]\). The selection function \(\mathcal{S}\) is such that \(\mathcal{S}([\neg\alpha]) = \mathcal{S}([\neg\beta])\). That is, the same possible world is chosen from the set \([\neg\alpha]\) (\(=[\neg\beta]\)) to be assigned positive probability, and hence the result of contraction by α is the same as the result of contraction by β. Hence, minimal contraction satisfies C3. By Definition 4, minimal contraction satisfies the postulate C5.

As a result of minimal contraction, we have \(\|P^{-}_{\alpha}\| = \|P\|\cup\{\omega\}\), where \(\omega\in[\neg\alpha]\). From Eq. 1, it is clear that \([\mathcal{K}_{\alpha}^{-}] = [\mathcal{K}]\cup \{\omega\}\) for some \(\omega\in[\neg\alpha]\), as required. \(\square\)
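The following sketch (ours) illustrates the construction described in this proof: the α-worlds are scaled by a small value d and the remaining mass 1 − d is given to a single world chosen from \([\neg\alpha]\). Both `d` and the index-based `select` function are hypothetical stand-ins for the quantities fixed by Definition 4 and the selection function \(\mathcal{S}\).

```python
def minimal_contraction(P, not_alpha, d=1e-3, select=min):
    # select plays the role of S: it picks one world from [not-alpha].
    chosen = select(not_alpha)
    return [(1.0 - d) if i == chosen else
            (0.0 if i in not_alpha else d * p)
            for i, p in enumerate(P)]

# Example: [alpha] = {0, 1}, [not-alpha] = {2, 3}; world 2 gets selected.
P = [0.6, 0.4, 0.0, 0.0]
P_minus = minimal_contraction(P, {2, 3})
# The support grows by exactly one not-alpha world, the maxichoice property.
assert [i for i, p in enumerate(P_minus) if p > 0] == [0, 1, 2]
```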

Theorem 5

A preferential contraction function is a transitively relational partial meet contraction function.

Proof

Let P be the initial distribution representing the agent’s belief state and \(\mathcal{K}\) be the corresponding set of beliefs, given by Eq. 1. Let α be a belief of the agent, that is, P(α) = 1. The result of preferential contraction of P by α is denoted by \(P^{-}_{\alpha}\) and the corresponding belief set is denoted by \(\mathcal{K}_{\alpha}^{-}\). Let \(\sqsubseteq\) be a total preorder relation on \(\Upomega\). We need to show that preferential contraction satisfies the postulates C1 to C6. Let the cardinalities of \(\Upomega\), [α], \([\neg\alpha]\) and the support of P be n, l, m, and l′, respectively. Suppose the initial distribution is as follows:

$$ P=\langle p_{1},\ldots,p_{l},\underbrace{0,\ldots,0}_{m}\rangle. $$

Let the cardinality of \([\neg\alpha]\) and \(min_{\sqsubseteq}[\neg\alpha]\) be m and m′ respectively. The probability distribution resulting from preferential contraction is as follows:

$$ P^{-}_{\alpha} = \langle d p_{1},\ldots, d p_{l}, \underbrace{\underbrace{\frac{1}{h+m'},\ldots,\frac{1}{h+m'}}_{m'}, 0,\ldots, 0}_{m}\rangle, $$

where \(h = 2^{H(P)}\) and \(d=\frac{h}{h+m'}\). From Definition 5, it is clear that preferential contraction satisfies C1. The resultant probability of α is \(P^{-}_{\alpha}(\alpha) = d\,P(\alpha) = d\), which is less than 1. Hence preferential contraction satisfies C2. When \(\vdash\alpha\leftrightarrow\beta\), then [α] = [β] and \([\neg\alpha] = [\neg\beta]\). Therefore we have \(min_{\sqsubseteq}[\neg\alpha] = min_{\sqsubseteq}[\neg\beta]\). We have

$$ P^{-}_{\alpha}(\omega) = P^{-}_{\beta}(\omega) = \left\{ \begin{array}{ll} d P(\omega), & {\text{if}} \; \omega\in[\alpha] = [\beta] \\ \frac{1}{h+m'}, & {\text{if}} \; \omega\in min_{\sqsubseteq}[\neg\alpha] = min_{\sqsubseteq}[\neg\beta] \\ 0, & {\text{otherwise}}. \end{array}\right. $$

Thus preferential contraction satisfies C3. From Definition 5, it is evident that preferential contraction satisfies postulates C4 and C5. When contracting by α∧β, all the worlds in \(min_{\sqsubseteq}[\neg\alpha\vee\neg\beta]\) have uniform probability, and similarly, when contracting by α, all the worlds in \(min_{\sqsubseteq}[\neg\alpha]\) have uniform probability. Suppose that α and β are two beliefs such that \(P^{-}_{\alpha\wedge\beta}(\neg\alpha)>0\). Then the total preorder relation \(\sqsubseteq\) is such that \(min_{\sqsubseteq}[\neg\alpha\vee\neg\beta] \cap [\neg\alpha]\neq\emptyset\). We have \(min_{\sqsubseteq}[\neg\alpha] = min_{\sqsubseteq}[\neg\alpha\vee\neg\beta] \cap [\neg\alpha]\). Thus, conditionalizing \(P^{-}_{\alpha\wedge\beta}\) by \(\neg\alpha\) and conditionalizing \(P^{-}_{\alpha}\) by \(\neg\alpha\) both result in a uniform distribution over \(min_{\sqsubseteq}[\neg\alpha\vee\neg\beta] \cap [\neg\alpha]\). Hence preferential contraction satisfies C6. Thus, preferential contraction is a transitively relational partial meet contraction. \(\square\)
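For concreteness, here is a minimal Python sketch (ours, not the authors' code) of the preferential-contraction construction used in this proof: only the \(\sqsubseteq\)-minimal \(\neg\alpha\)-worlds receive the mass 1 − d, uniformly, with \(d = h/(h+m')\) and \(h = 2^{H(P)}\). The total preorder is encoded by a hypothetical rank function (lower rank means more preferred), an assumption made for the example.

```python
import math

def shannon_entropy(P):
    return -sum(p * math.log2(p) for p in P if p > 0)

def preferential_contraction(P, not_alpha, rank):
    h = 2 ** shannon_entropy(P)
    best = min(rank[i] for i in not_alpha)
    minimal = {i for i in not_alpha if rank[i] == best}   # min under the preorder
    m_prime = len(minimal)
    d = h / (h + m_prime)
    return [1.0 / (h + m_prime) if i in minimal else
            (0.0 if i in not_alpha else d * p)
            for i, p in enumerate(P)]

# Example: [not-alpha] = {2, 3, 4}, with world 3 strictly preferred.
P = [0.5, 0.5, 0.0, 0.0, 0.0]
P_minus = preferential_contraction(P, {2, 3, 4}, rank={2: 1, 3: 0, 4: 1})
assert P_minus[3] > 0 and P_minus[2] == 0.0 and P_minus[4] == 0.0
```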

Cite this article

Ramachandran, R., Ramer, A. & Nayak, A.C. Probabilistic Belief Contraction. Minds & Machines 22, 325–351 (2012). https://doi.org/10.1007/s11023-012-9284-0
