Conditional probability and probability updating

The conditional probability formula is supposed to reflect the correct updating of probability assignments when new information is incorporated. Starting from a non-atomic probability measure, it is proved that the conditional probability formula provides the only transformed probability measure satisfying a “minimum requirement” relational assumption. This result applies to the standard Bayesian parametric model.


Introduction
Conditional probability is, on the one hand, an intuitive concept, which captures the change in the original probability assignment when new information is known. On the other hand, the axiomatic definition of conditional probability is given by a formula that determines it from the original probability. Often both concepts are identified, and it is postulated that the incorporation of new information alters the original probability assignment according to this formula.
As always when an axiomatic definition is applied, it is worth discussing its applicability in each case. Indeed, when considering the frequentist interpretation of probability, there are plausible reasons for such applicability. In the case of the subjective interpretation of probability, as a degree of belief, typical of Bayesian statistical inference, arguments have been constructed to justify that the change in the assignment of probabilities when new information is incorporated must follow the conditional probability formula. These arguments start from a qualitative relation of the form A|B ≿ C|D, meaning "A given B is qualitatively at least as probable as C given D", satisfying certain elaborated assumptions (see [8]). Then it is proved that there is one and only one probability P such that A|B ≿ C|D if and only if P(A|B) ≥ P(C|D). This result is to be understood within measurement theory, where the representation by probabilities of qualitative probability orderings of events is discussed; usually finitely additive probabilities have been considered, although completely additive probabilities have also been studied (see [11]).
We consider in this paper a different starting point to justify the applicability of the axiomatic definition of conditional probability (i.e. the conditional probability formula). The original probability measure is taken as given, and an assumption on the relation between this original probability and a possible updated conditional probability is imposed (the Aristotelian Assumption, (A.A) for short). Provided that the original probability is non-atomic, it is proved that there is one and only one transformed probability measure satisfying the assumption (Theorem 7).
This result applies to Bayesian statistics. We recall that Bayesian inference relies on the use of the conditional probability formula to update probability assignments when new information is incorporated. For simplicity, we take momentarily all probability distributions to be representable in terms of densities. Suppose that Y = (Y_1, ..., Y_n) is a random vector of n observations taking values on a sample space S. The parameter θ = (θ_1, ..., θ_k) with values in a parameter space Θ ⊆ R^k indexes the various possible density functions p(y|θ) for Y; so p(y|θ) denotes the distribution of Y when θ is known. Bayesian statistics postulates that p(y|θ) represents a conditional distribution following the conditional probability formula. Thus (Y, θ) has a probability distribution (say with joint density p(y, θ); p(y) and p(θ) stand for the marginal densities) and

p(y|θ) p(θ) = p(y, θ). (1)

On the other hand, given the observed data y = (y_1, ..., y_n), let p(θ|y) denote the distribution of the parameter θ when y is known. Bayesian statistics now postulates that p(θ|y) represents a conditional distribution following the conditional probability formula. Thus

p(θ|y) p(y) = p(y, θ). (2)

Equating (1) and (2), Bayes' formula for the posterior distribution follows:

p(θ|y) = p(y|θ) p(θ) / p(y). (3)

In general, Bayes' formula for the posterior distribution is certainly the basis of Bayesian statistics. Two hypotheses are underlying this formula:

(H1) There is a joint probability measure P on S × Θ.

(H2) If P(A|C) is given the interpretation "probability of event A when event C is known", then the conditional probability formula applies:

P(A|C) = P(A ∩ C) / P(C),

for P-measurable A and C, with P(C) > 0.
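As an illustration, Bayes' formula p(θ|y) = p(y|θ) p(θ) / p(y) can be checked numerically by discretizing the parameter space. The Bernoulli data, the uniform prior and the grid below are illustrative assumptions of the sketch, not part of the paper's argument.

```python
import numpy as np

# Numerical sketch of Bayes' formula on a parameter grid
# (Bernoulli likelihood, uniform prior; all choices illustrative).
theta = np.linspace(0.0005, 0.9995, 1000)      # grid over the parameter space
dtheta = theta[1] - theta[0]
prior = np.ones_like(theta)                    # p(theta): uniform on (0, 1)

y = np.array([1, 1, 0, 1, 0, 1, 1, 1])         # observed Bernoulli sample
k, n = int(y.sum()), y.size
likelihood = theta**k * (1 - theta)**(n - k)   # p(y|theta)

joint = likelihood * prior                     # p(y|theta) p(theta) = p(y, theta)
evidence = joint.sum() * dtheta                # p(y): integrate out theta
posterior = joint / evidence                   # Bayes' formula (3), discretized

post_mean = (theta * posterior).sum() * dtheta
# Under a uniform (Beta(1,1)) prior the exact posterior is Beta(k+1, n-k+1),
# with mean (k+1)/(n+2); the grid approximation should agree closely.
print(post_mean, (k + 1) / (n + 2))
```

The grid posterior reproduces the closed-form Beta posterior mean up to discretization error, which is the content of hypotheses (H1) and (H2) in this toy setting.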
In the Bayesian parametric model, the joint probability P is shown to be non-atomic (Proposition 9). Taking (A.A) for granted, it follows from Theorem 7 that, at least in the parametric case, condition (H2) is redundant, and only (H1) is necessary for Bayes' formula for the posterior distribution.

The formula of conditional probability
In this section (Ω, A, P) is a probability space, where Ω is a set, A is a σ-algebra in Ω and P is a (σ-additive) probability measure. Let C ∈ A, with P(C) > 0.
Definition 1 Let (Ω, A, P) be a probability space and let C ∈ A with P(C) > 0. The probability space (Ω, A, P') is called a pre-conditional probability space given C iff P'(C) = 1 and the following assumption holds:

(A.A) If A, B ∈ A and A, B ⊆ C, then P(A) = P(B) implies P'(A) = P'(B).

This definition arguably captures obvious requirements for any re-assignment of probabilities when we have the added information that the outcome is one of the elements of the event C. The requirement P'(C) = 1 says simply that "the outcome is one of the elements of the event C". Besides, the original assignment of probabilities has to have an influence on the new assignment, and not merely be thrown away. It has to be re-worked in an even-handed way, and (A.A) is in this sense a minimum requirement, expressing some sort of Aristotelian "treat like cases alike" principle. Assumption (A.A) may even be unconstraining.

The set function P(·|C) on A defined by

P(A|C) := P(A ∩ C) / P(C), A ∈ A, (4)

makes (Ω, A, P(·|C)) into a pre-conditional probability space given C. We are interested in the question of its uniqueness as a pre-conditional probability space given C.
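On a finite space the conditional probability formula can be checked directly against the two requirements of Definition 1. The sketch below assumes the "treat like cases alike" reading of (A.A), namely that events inside C with equal original probability must receive equal updated probability; the uniform measure and the choice of C are illustrative.

```python
from itertools import chain, combinations
from fractions import Fraction

# Finite sanity check that P'(.) := P(. ∩ C) / P(C) satisfies the two
# defining requirements of a pre-conditional probability space given C:
# P'(C) = 1, and (A.A): events inside C with equal P get equal P'.
omega = range(6)
p = {w: Fraction(1, 6) for w in omega}           # uniform P on {0,...,5}
C = {0, 1, 2, 3}

def P(event):
    return sum(p[w] for w in event)

def P_cond(event):                               # the conditional formula (4)
    return P(set(event) & C) / P(C)

assert P_cond(C) == 1                            # "the outcome lies in C"

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for A in subsets(C):
    for B in subsets(C):
        if P(set(A)) == P(set(B)):               # like cases...
            assert P_cond(A) == P_cond(B)        # ...are treated alike
print("P(.|C) satisfies P'(C) = 1 and (A.A)")
```

Exact rational arithmetic (`Fraction`) avoids spurious floating-point ties when comparing probabilities.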

Remark 3
It is immediate that, if a probability space (Ω, A, P') satisfies P'(C) = 1, then the following three conditions are equivalent:

(i) P' = P(·|C);
(ii) P'(A) = P(A)/P(C) for every A ∈ A with A ⊆ C;
(iii) If A, B ∈ A such that A, B ⊆ C and P(B) > 0, then P'(B) > 0 and P'(A)/P'(B) = P(A)/P(B).

Recall the following definition.
Definition 4 A ∈ A is an atom for P iff: (a) P(A) > 0 and (b) for every B ∈ A with B ⊆ A, either P(B) = 0 or P(B) = P(A). A probability measure P which has no atoms is called non-atomic.
A probability measure P is called atomic iff every E ∈ A such that P(E) > 0 contains an atom. If P is a probability measure, then there exist unique probability measures P_1 and P_2 and α ∈ [0, 1] such that P = α P_1 + (1 − α) P_2 and such that P_1 is atomic and P_2 is non-atomic (see [7] for further discussion in the general context of measures).
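A minimal sketch of this decomposition, for an illustrative mixture measure on [0, 1] (the weight α and the atom location are assumptions of the example, not from the text):

```python
# Sketch of P = a*P1 + (1-a)*P2 on [0, 1]: P1 a point mass at 0.5 (purely
# atomic) and P2 the uniform distribution (non-atomic). The weight a and
# the atom location 0.5 are illustrative choices.
a = 0.3

def P(lo, hi):
    """Measure of the closed interval [lo, hi] under the mixture."""
    atomic = 1.0 if lo <= 0.5 <= hi else 0.0              # P1: delta at 0.5
    nonatomic = max(0.0, min(hi, 1.0) - max(lo, 0.0))     # P2: uniform on [0,1]
    return a * atomic + (1 - a) * nonatomic

# The singleton {0.5} is an atom: it has measure a > 0, and a singleton's
# measurable subsets can only have measure 0 or a.
print(P(0.5, 0.5), P(0.0, 1.0))
```

Removing the point-mass component (a = 0) leaves a non-atomic measure, which is the situation required by Theorem 7 below.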
The following result is a particular case of a theorem of Sierpinski [10].
Theorem 5 Let (Ω, A, P) be a probability space with P non-atomic. If E ∈ A and P(E) > 0, then for every α ∈ [0, P(E)] there is an element F ∈ A with F ⊆ E and P(F) = α.
Induction on k gives directly the next corollary of Theorem 5 (see [9]).
Corollary 6 Let P be non-atomic, and suppose E ∈ A such that P(E) > 0. Let α_i for i = 1, ..., k be real numbers with α_i > 0 and α_1 + · · · + α_k = P(E). Then E can be decomposed as a union of disjoint sets E_i ∈ A with P(E_i) = α_i for i = 1, ..., k.
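For the uniform (Lebesgue) measure on [0, 1], a non-atomic measure, the decomposition of Corollary 6 can be realized explicitly by cutting an interval E into consecutive subintervals. Intervals are a convenient special case of the measurable sets E_i in the corollary; the endpoints and weights below are illustrative.

```python
# Explicit decomposition of an interval E under the uniform measure on [0, 1]
# into disjoint pieces E_i with prescribed measures alpha_i (Corollary 6,
# special case). Inputs below are illustrative.
def decompose(E, alphas):
    a, b = E
    assert abs(sum(alphas) - (b - a)) < 1e-12 and all(x > 0 for x in alphas)
    pieces, left = [], a
    for alpha in alphas:
        pieces.append((left, left + alpha))      # P(E_i) = length = alpha_i
        left += alpha
    return pieces

E = (0.1, 0.7)                                   # P(E) = 0.6 > 0
parts = decompose(E, [0.1, 0.2, 0.3])
print(parts)   # disjoint consecutive pieces covering E
```

For a general non-atomic P the pieces are produced abstractly by repeated application of Theorem 5 rather than by cutting at explicit points.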
Provided that a probability measure is non-atomic, we are going to see that any pre-conditional probability is determined by the conditional probability formula.

Theorem 7 Let (Ω, A, P) be a probability space and let C ∈ A with P(C) > 0. Suppose that (Ω, A, P') is a pre-conditional probability space given C. If P is non-atomic, then P' = P(·|C) as defined in (4).
Proof Let A ∈ A such that A ⊆ C. In order to prove (5), it can be assumed, without loss of generality, that P(A) > 0. The proof will be divided into three steps.
Obviously (Example 2) the condition of P being non-atomic cannot be dropped in Theorem 7.

Bayesian parametric inference
In standard Bayesian parametric inference we consider a probability space (S × Θ, B_{n+k}, P), where S is a Borel set in R^n, Θ is a (generalized) interval in R^k, B_{n+k} is the Borel σ-algebra on S × Θ and P is a (σ-additive) probability measure. Here S is interpreted as the sample space where the response vector Y takes values and Θ as the parameter space, each parameter θ determining a probability distribution for Y. Recall that the marginal distributions P_Y and P_θ are defined by P_Y(A) := P(A × Θ), P_θ(B) := P(S × B) for the corresponding Borel sets A in S and B in Θ. In accordance with practice (see [4] and [3]; note that improper prior distributions are not being considered) we assume that in the parametric case P_θ is non-atomic. We shall refer to (S × Θ, B_{n+k}, P) as the Bayesian parametric model.
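The marginal definitions P_Y(A) = P(A × Θ) and P_θ(B) = P(S × B) can be sketched for a discretized joint distribution; the grid sizes and the joint table below are illustrative assumptions, not part of the model.

```python
import numpy as np

# Discretized sketch of the marginals of a joint P on S x Theta:
# rows index points of the sample space S, columns points of Theta.
# The joint table is an illustrative assumption.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.20, 0.10, 0.10],
                  [0.15, 0.15, 0.10]])
assert np.isclose(joint.sum(), 1.0)            # P is a probability measure

P_Y = joint.sum(axis=1)      # P_Y(A) = P(A x Theta): sum out the parameter
P_theta = joint.sum(axis=0)  # P_theta(B) = P(S x B): sum out the sample
print(P_Y, P_theta)
```

In the continuous model the sums become integrals of the joint density, and the non-atomicity assumed for P_θ rules out point masses in the parameter marginal.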
For proofs of the following proposition see [1] or [5].
Proposition 8 Any atom of a Borel measure on a second countable Hausdorff space includes a singleton of positive measure.
Our last result is now immediate.
Proposition 9 Let (S × Θ, B_{n+k}, P) be the Bayesian parametric model. Then P is non-atomic.
Proof By Proposition 8, if P had an atom, then the atom would include a singleton {(y, θ)} of positive measure; but then P_θ({θ}) = P(S × {θ}) ≥ P({(y, θ)}) > 0, so the singleton {θ} would be an atom of P_θ, contradicting that P_θ is non-atomic.
If the Bayesian parametric model is considered as a valid formulation of a statistical problem (essentially, if S × Θ can be given a joint probability distribution), we conclude (taking (A.A) for granted) from Theorem 7 and Proposition 9 that (H2) follows, and thus Bayes' formula for the posterior distribution can be applied (provided that the measure-theoretic hypotheses for the suitable representation of the probability distributions hold; see for instance [6]).
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.