Abstract
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
Notes
 1.
Other notable pioneers of imprecise probability include Koopman (1940), Horn and Tarski (1948), Halmos (1950), Smith (1961), Ellsberg (1961), Kyburg Jr. (1961) and Isaac Levi. Notable contemporary advocates include Isaac Levi, Peter Walley, Teddy Seidenfeld, James Joyce, Cozman (2000), and de Cooman and Miranda (2007, 2009).
 2.
The first systematic study of dilation is Seidenfeld and Wasserman (1993), which includes historical remarks that identify Levi and Seidenfeld’s reaction to Good (1967) as the earliest observation of dilation and Good’s reply in (1974) as the first published record. Seidenfeld and Wasserman’s study is further developed in Herron et al. (1994) and Herron et al. (1997). See our note 10, below, which discusses a variety of weaker dilation concepts that can be articulated and studied.
 3.
Although evidential probability avoids strict dilation, there are cases where adding new information from conflicting but seemingly ad hoc reference classes yields a less precise estimate. See, for example, Seidenfeld’s hollow cube example in Seidenfeld (2007) and Kyburg’s reply in the same volume (Kyburg 2007).
 4.
 5.
 6.
 7.
\(E^{c}\) is the complement \(\Upomega\backslash E\) of E.
 8.
The measure S has been given a variety of interpretations in philosophy of science and formal epistemology, including as a measure of coherence (Shogenji 1999) and a measure of similarity (Wayne 1995), and is a variation of ideas due to Yule (1911, Ch. 3). See Wheeler (2009a) for discussion and both Schlosshauer and Wheeler (2011) and Wheeler and Scheines (2013) for a study of the systematic relationships between covariance, confirmation, and causal structure.
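In the paper's notation, the measure S is standardly the ratio of the joint probability to the product of the marginals, so that S = 1 marks stochastic independence and values above (below) 1 mark positive (negative) relevance. Assuming that definition, here is a minimal Python sketch; the dictionary encoding of worlds and events is our own device:

```python
def S(p, E, F):
    """Association ratio S_p(E, F) = p(E & F) / (p(E) * p(F)).
    Equals 1 exactly when E and F are stochastically independent under p."""
    pE = sum(pr for w, pr in p.items() if w in E)
    pF = sum(pr for w, pr in p.items() if w in F)
    pEF = sum(pr for w, pr in p.items() if w in E and w in F)
    return pEF / (pE * pF)

# Two fair, independent coin tosses: S = 1.
fair = {('H', 'H'): 0.25, ('H', 'T'): 0.25, ('T', 'H'): 0.25, ('T', 'T'): 0.25}
heads1 = {('H', 'H'), ('H', 'T')}   # first coin lands heads
heads2 = {('H', 'H'), ('T', 'H')}   # second coin lands heads
print(S(fair, heads1, heads2))

# A positively correlated pair: S > 1.
skewed = {('H', 'H'): 0.4, ('H', 'T'): 0.1, ('T', 'H'): 0.1, ('T', 'T'): 0.4}
print(S(skewed, heads1, heads2))
```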
 9.
 10.
We mention that while our terminology agrees with that of Herron et al. (1994, p. 252), it differs from that of Seidenfeld and Wasserman (1993, p. 1141) and Herron et al. (1997, p. 412), who call dilation in our sense strict dilation.
Indeed, weaker notions of dilation can be articulated and investigated. Say that a positive measurable partition \({\fancyscript{B}}\) weakly dilates E if \(\underline{\hbox{P}}(E \mid H) \leq \underline{\hbox{P}}(E) \leq \overline{\hbox{P}}(E) \leq \overline{\hbox{P}}(E \mid H)\) for each \({H\in\fancyscript{B}}\). If \({\fancyscript{B}}\) weakly dilates E, say that (i) \({\fancyscript{B}}\) pseudodilates E if in addition there is \({H\in \fancyscript{B}}\) such that either \(\underline{\hbox{P}}(E \mid H)<\underline{\hbox{P}}(E)\) or \(\overline{\hbox{P}}(E)<\overline{\hbox{P}}(E \mid H)\), and that (ii) \({\fancyscript{B}}\) nearly dilates E if in addition for each \({H\in \fancyscript{B}}\), either \(\underline{\hbox{P}}(E \mid H)<\underline{\hbox{P}}(E)\) or \(\overline{\hbox{P}}(E)<\overline{\hbox{P}}(E \mid H)\). Thus, \({\fancyscript{B}}\) pseudodilates E just in case the closed interval \(\left[\underline{\hbox{P}}(E), \overline{\hbox{P}}(E) \right]\) is contained in the closed interval \(\left[\underline{\hbox{P}}(E \mid H), \overline{\hbox{P}}(E \mid H) \right]\) for each \({H\in\fancyscript{B}}\), with proper inclusion obtaining for some partition cell from \({\fancyscript{B}}\), while \({\fancyscript{B}}\) nearly dilates E just in case the closed interval \(\left[\underline{\hbox{P}}(E), \overline{\hbox{P}}(E)\right]\) is properly contained in the closed interval \(\left[\underline{\hbox{P}}(E \mid H), \overline{\hbox{P}}(E \mid H)\right]\) for each \({H\in\fancyscript{B}}\). Seidenfeld and Wasserman (1993) and Herron et al. (1994, 1997) also investigate near dilation and pseudodilation.
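With finitely many probability functions, the grades of dilation just defined can be checked mechanically, since for a finite set the lower and upper (conditional) probabilities are simply minima and maxima over its members. A Python sketch along these lines; the encoding of worlds, events, and the credal set is our own device, and the two-coin setup below is our rendering of the standard fair-coin/mystery-coin example:

```python
def bounds(ps, E, H=None):
    """Lower and upper probability of E over a finite set ps of
    probability functions (conditional on H when H is given)."""
    vals = []
    for p in ps:
        if H is None:
            vals.append(sum(p[w] for w in E))
        else:
            pH = sum(p[w] for w in H)
            vals.append(sum(p[w] for w in E & H) / pH)
    return min(vals), max(vals)

def classify_dilation(ps, E, partition):
    """Grade the dilation of E by a partition: strict dilation,
    weak dilation, pseudodilation, and near dilation."""
    lo, hi = bounds(ps, E)
    cells = [bounds(ps, E, H) for H in partition]
    weak = all(l <= lo <= hi <= u for l, u in cells)
    strict_side = [l < lo or hi < u for l, u in cells]
    return {
        "dilates": all(l < lo and hi < u for l, u in cells),
        "weakly_dilates": weak,
        "pseudodilates": weak and any(strict_side),
        "nearly_dilates": weak and all(strict_side),
    }

# Fair coin C1, second coin C2 of unknown bias t; E = "C1 lands heads";
# partition by whether the two coins match. Three candidate biases:
def dist(t):
    return {('H', 'H'): t / 2, ('H', 'T'): (1 - t) / 2,
            ('T', 'H'): t / 2, ('T', 'T'): (1 - t) / 2}

ps = [dist(t) for t in (0.25, 0.5, 0.75)]
E = {('H', 'H'), ('H', 'T')}
match = {('H', 'H'), ('T', 'T')}
differ = {('H', 'T'), ('T', 'H')}
print(classify_dilation(ps, E, [match, differ]))
```

Here the unconditional estimate of E is the sharp point 1/2 for every candidate bias, while each cell's conditional interval is [0.25, 0.75], so the classifier reports all four grades at once.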
 11.
This is Walley’s canonical dilation example (Walley 1991, pp. 298–299), except that here we are using lower probabilities instead of lower previsions.
 12.
Seidenfeld and Wasserman’s results are about dependence of particular events, not about dependence of variables. Independence of variables implies independence of all their respective values, but not conversely.
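A toy joint distribution makes the failure of the converse vivid (the numbers here are ours): the particular events X = 0 and Y = 0 can be stochastically independent even though the variables X and Y are not.

```python
# Joint distribution over X in {0, 1} and Y in {0, 1, 2} (illustrative numbers).
joint = {(0, 0): 0.20, (0, 1): 0.20, (0, 2): 0.10,
         (1, 0): 0.20, (1, 1): 0.10, (1, 2): 0.20}

def pX(x):
    """Marginal probability of X = x."""
    return sum(pr for (a, b), pr in joint.items() if a == x)

def pY(y):
    """Marginal probability of Y = y."""
    return sum(pr for (a, b), pr in joint.items() if b == y)

# The particular events X = 0 and Y = 0 are stochastically independent:
print(joint[(0, 0)], pX(0) * pY(0))   # both equal 0.2
# ...but the variables X and Y are not, since factorization fails at (0, 1):
print(joint[(0, 1)], pX(0) * pY(1))   # 0.2 versus 0.15
```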
 13.
Specifically, \({\mathbb{P}}\) is assumed to be closed with respect to the total variation norm (Seidenfeld and Wasserman 1993, p. 1141).
 14.
 15.
An extreme point of a set of probabilities is a probability function from the set that cannot be written as a nontrivial convex combination of elements from the set.
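In the finite case this observation yields a simple computational recipe: every member of the convex hull is a mixture of the extreme points, so the lower probability of an event over the hull is attained at one of the generators, and mixing can never dip below the minimum over the extremes. A hedged sketch (encoding ours):

```python
def lower_prob(extremes, E):
    """Lower probability of E over the convex hull of finitely many
    probability functions; the minimum is attained at an extreme point,
    so scanning the generators suffices."""
    return min(sum(p[w] for w in E) for p in extremes)

def mix(ps, weights):
    """Convex combination of probability functions (weights sum to 1)."""
    return {w: sum(l * p[w] for l, p in zip(weights, ps)) for w in ps[0]}

extremes = [{'a': 0.2, 'b': 0.8}, {'a': 0.7, 'b': 0.3}]
E = {'a'}
print(lower_prob(extremes, E))            # 0.2, attained at the first extreme
blend = mix(extremes, [0.5, 0.5])
print(sum(blend[w] for w in E))           # the even mixture sits strictly above
```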
 16.
We note that in their article, Seidenfeld and Wasserman (1993) assume that the set of probabilities under consideration is convex and closed with respect to the total variation norm. In the special case they consider, Seidenfeld and Wasserman (1993, p. 1142) correctly point out that their Theorem 2.1 goes through without the assumption of closure and that their Theorem 2.2, Theorem 2.3, and Theorem 2.4 (below) go through without the assumption of convexity (p. 1143). However, we have shown that their Theorem 2.2 and Theorem 2.3 trivially go through even without the assumptions of convexity and closure.
 17.
They also discuss total variation neighborhoods and \(\varepsilon\)-contamination neighborhoods, but these neighborhoods are distinct from the neighborhoods we discuss.
 18.
 19.
This strategy is outlined in Haenni et al. (2011, §9.3).
 20.
 21.
The open interval (0,1) includes all real numbers in the unit interval except for 0 and 1. This means that we are excluding the possibility that the second coin is either double-headed or double-tailed. Conveniently, this also allows us to avoid complications arising from conditioning on measure-zero events, although readers interested in how to condition on zero-measure events within an imprecise probability setting should see Walley (1991, §6.10) for details.
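The role of the open interval can be checked directly; the following is our reconstruction of the computation (C1 fair, C2 of bias t, tosses independent, conditioning on whether the coins match). The unconditional estimate for heads on C1 is exactly 1/2 for every t, while the conditional estimate equals t itself, so as t ranges over (0,1) the conditional estimate sweeps out (0,1); the excluded endpoints 0 and 1 correspond exactly to the degenerate coins, at which the conditional estimate would collapse to certainty.

```python
def conditional_estimate(t):
    """P(C1 = heads | coins match) when C1 is fair, C2 has bias t,
    and the two tosses are stochastically independent."""
    p_match = t / 2 + (1 - t) / 2          # equals 1/2, whatever t is
    p_heads_and_match = t / 2              # C1 = heads and C2 = heads
    return p_heads_and_match / p_match     # equals t

# Unconditionally P(C1 = heads) = 1/2; conditionally the estimate tracks t.
for t in (0.01, 0.5, 0.99):
    print(t, conditional_estimate(t))
```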
 22.
If \({\mathbb{P}}\) is closed and convex, then every point in the tetrahedron is admissible if the constraint is the closed unit interval [0,1].
 23.
Recall that the random variables \(C_{1}\) and \(C_{2}\) were introduced in Eq. 1.
 24.
 25.
Thanks to Clark Glymour for putting this argument to us.
References
Couso, I., Moral, S., & Walley, P. (1999). Examples of independence for imprecise probabilities. In G. de Cooman (Ed.), Proceedings of the first symposium on imprecise probabilities and their applications (ISIPTA), Ghent, Belgium.
Cozman, F. (2000). Credal networks. Artificial Intelligence, 120(2), 199–233.
Cozman, F. (2012). Sets of probability distributions, independence, and convexity. Synthese, 186(2), 577–600.
de Cooman, G., & Miranda, E. (2007). Symmetry of models versus models of symmetry. In W. Harper, & G. Wheeler (Eds.), Probability and inference: Essays in honor of Henry E. Kyburg, Jr. (pp. 67–149). London: King’s College Publications.
de Cooman, G., & Miranda, E. (2009). Forward irrelevance. Journal of Statistical Planning and Inference, 139, 256–276.
de Cooman, G., Miranda, E., & Zaffalon, M. (2011). Independent natural extension. Artificial Intelligence, 175, 1911–1950.
de Finetti, B. (1974a). Theory of probability (vol. I). Wiley, 1990 edition.
de Finetti, B. (1974b). Theory of probability (vol. II). Wiley, 1990 edition.
Elga, A. (2010). Subjective probabilities should be sharp. Philosophers' Imprint, 10(5).
Ellsberg, D. (1961). Risk, ambiguity and the Savage axioms. Quarterly Journal of Economics, 75, 643–669.
Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society. Series B, 14(1), 107–114.
Good, I. J. (1967). On the principle of total evidence. The British Journal for the Philosophy of Science, 17(4), 319–321.
Good, I. J. (1974). A little learning can be dangerous. The British Journal for the Philosophy of Science, 25(4), 340–342.
Grünwald, P., & Halpern, J. Y. (2004). When ignorance is bliss. In J. Y. Halpern (Ed.), Proceedings of the 20th conference on uncertainty in artificial intelligence (UAI ’04) (pp. 226–234). Arlington, VA: AUAI Press.
Haenni, R., Romeijn, J.W., Wheeler, G., & Williamson, J. (2011). Probabilistic logics and probabilistic networks. Synthese library. Dordrecht: Springer.
Halmos, P. R. (1950). Measure theory. New York: Van Nostrand Reinhold Company.
Harper, W. L. (1982). Kyburg on direct inference. In R. Bogdan (Ed.), Henry E. Kyburg and Isaac Levi (pp. 97–128). Dordrecht: Kluwer.
Herron, T., Seidenfeld, T., & Wasserman, L. (1994). The extent of dilation of sets of probabilities and the asymptotics of robust Bayesian inference. In PSA 1994: Proceedings of the biennial meeting of the Philosophy of Science Association (vol. 1, pp. 250–259).
Herron, T., Seidenfeld, T., & Wasserman, L. (1997). Divisive conditioning: Further results on dilation. Philosophy of Science, 64, 411–444.
Horn, A., & Tarski, A. (1948). Measures in Boolean algebras. Transactions of the AMS, 64(1), 467–497.
Joyce, J. (2011). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.
Koopman, B. O. (1940). The axioms and algebra of intuitive probability. Annals of Mathematics, 41(2), 269–292.
Kyburg, H. E., Jr. (1961). Probability and the logic of rational belief. Middletown, CT: Wesleyan University Press.
Kyburg, H. E., Jr. (1974). The logical foundations of statistical inference. Dordrecht: D. Reidel.
Kyburg, H. E., Jr. (2007). Bayesian inference with evidential probability. In W. Harper, & G. Wheeler (Eds.), Probability and inference: Essays in honor of Henry E. Kyburg, Jr. (pp. 281–296). London: King’s College.
Kyburg, H. E., Jr., & Teng, C. M. (2001). Uncertain inference. Cambridge: Cambridge University Press.
Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71, 391–418.
Levi, I. (1977). Direct inference. Journal of Philosophy, 74, 5–29.
Levi, I. (1980). The enterprise of knowledge. Cambridge, MA: MIT Press.
Rao, K. B., & Rao, M. B. (1983). Theory of charges: A study of finitely additive measures. London: Academic Press.
Romeijn, J.W. (2006). Analogical predictions for explicit similarity. Erkenntnis, 64, 253–280.
Savage, L. J. (1972). Foundations of statistics. New York: Dover.
Schlosshauer, M., & Wheeler, G. (2011). Focused correlation, confirmation, and the Jigsaw puzzle of variable evidence. Philosophy of Science, 78(3), 276–292.
Seidenfeld, T. (1994). When normal and extensive form decisions differ. In D. Prawitz, B. Skyrms, & D. Westerstahl (Eds.), Logic, methodology and philosophy of science. Amsterdam: Elsevier.
Seidenfeld, T. (2007). Forbidden fruit: When epistemic probability may not take a bite of the Bayesian apple. In W. Harper, & G. Wheeler (Eds.), Probability and inference: Essays in honor of Henry E. Kyburg, Jr. London: King’s College Publications.
Seidenfeld, T., Schervish, M. J., & Kadane, J. B. (2010). Coherent choice functions under uncertainty. Synthese, 172(1), 157–176.
Seidenfeld, T., & Wasserman, L. (1993). Dilation for sets of probabilities. The Annals of Statistics, 21, 1139–1154.
Shogenji, T. (1999). Is coherence truth conducive? Analysis, 59, 338–345.
Smith, C. A. B. (1961). Consistency in statistical inference (with discussion). Journal of the Royal Statistical Society, 23, 1–37.
Sturgeon, S. (2008). Reason and the grain of belief. Noûs, 42(1), 139–165.
Sturgeon, S. (2010). Confidence and coarsegrain attitudes. In T. S. Gendler, & J. Hawthorne (Eds.), Oxford studies in epistemology (vol. 3, pp. 126–149). Oxford: Oxford University Press.
Walley, P. (1991). Statistical reasoning with imprecise probabilities. London: Chapman and Hall.
Wayne, A. (1995). Bayesianism and diverse evidence. Philosophy of Science, 62(1), 111–121.
Wheeler, G. (2006). Rational acceptance and conjunctive/disjunctive absorption. Journal of Logic, Language and Information, 15(1–2), 49–63.
Wheeler, G. (2009a). Focused correlation and confirmation. The British Journal for the Philosophy of Science, 60(1), 79–100.
Wheeler, G. (2009b). A good year for imprecise probability. In V. F. Hendricks (Ed.), PHIBOOK. New York: VIP/Automatic Press.
Wheeler, G. (2012). Objective Bayesianism and the problem of nonconvex evidence. The British Journal for the Philosophy of Science, 63(3), 841–850.
Wheeler, G. (2013). Character matching and the envelope of belief. In F. Lihoreau, & M. Rebuschi (Eds.), Epistemology, context, and formalism, synthese library (pp. 185–194). Berlin: Springer. Presented at the 2010 APA Pacific division meeting.
Wheeler, G., & Scheines, R. (2013). Coherence, confirmation, and causation. Mind, 122(435), 135–170.
White, R. (2010). Evidential symmetry and mushy credence. In T. S. Gendler, & J. Hawthorne (Eds.), Oxford studies in epistemology (vol. 3, pp. 161–186). Oxford: Oxford University Press.
Williamson, J. (2007). Motivating objective Bayesianism: From empirical constraints to objective probabilities. In W. Harper, & G. Wheeler (Eds.), Probability and inference: Essays in honor of Henry E. Kyburg, Jr. London: College Publications.
Williamson, J. (2010). In defence of objective Bayesianism. Oxford: Oxford University Press.
Yule, G. U. (1911). An introduction to the theory of statistics. London: Griffin.
Acknowledgments
Thanks to Horacio Arló-Costa, Jim Joyce, Isaac Levi, Teddy Seidenfeld, and Jon Williamson for their comments on early drafts, and to Clark Glymour and Choh Man Teng for a long discussion on dilation that began one sunny afternoon in a Lisbon café. This research was supported in part by award (LogICCC/0001/2007) from the European Science Foundation.
Appendix
We now return to our discussion of the technical machinery for the general case from Sect. 5. In the general setting, where infinitely many events inhabit the algebra \({\fancyscript{A}}\), closure with respect to the total variation norm topology (i.e., the strong topology) or with respect to the weak topology (distinct from the weak* topology) introduces more closed sets than the weak* topology does. The former topologies are stronger, and indeed too strong, for the purposes of imprecise probabilities, which demand a very weak topology, namely the weak* topology. Hence, the discussion in Seidenfeld and Wasserman (1993, pp. 1141–1143), which employs the total variation norm, suits sets of probabilities over an algebra consisting of finitely many events.
A set of probabilities in this general setting is always norm-bounded (with respect to the dual norm), so a weak*-closed set of probabilities is weak*-compact. Accordingly, every weak*-continuous functional on the dual space achieves its minimum at an extreme point of a closed convex (i.e., weak*-closed and convex) set of probabilities, and since all and only evaluation functionals are weak*-continuous, the lower probability of an event is the minimum number assigned to the event by the probability functions in the set, where, as in the finite setting, a probability function from the collection of extreme points of the set witnesses the lower probability of the event.
Much as in the finite case, any compact convex set of probabilities is the closed convex hull of its extreme points (i.e., any convex weak*-compact set of probabilities is the weak*-closed convex hull of its extreme points). However, while in the finite setting a compact convex set of probabilities is identified with the convex hull of its extreme points, in the general setting a compact convex set of probabilities is identified with the closed convex hull of its extreme points. In both the finite and the general setting, the convex hull of the set of extreme points is dense in the compact convex set in question, but in the general setting a compact convex set may properly contain the convex hull of its extreme points, and so the hull must in addition be closed.
Proof of Proposition 4.3
Let \({\mathbb{P}({\fancyscript{A}})}\) be the set of all probability functions on \({\fancyscript{A}}\), and let \({\mathbb{D}=_{df}\{p\in\mathbb{P}({\fancyscript{A}}):p(E)\geq\underline{\hbox{P}}(E)}\) for all \({E\in\fancyscript{A}\}}\). Observe that \({\mathbb{P}\subseteq\mathbb{D}}\) and that \({\mathbb{D}}\) is convex and weak*-closed and so weak*-compact. In addition, since \({\hbox{co}(\mathbb{P})\subseteq \mathbb{D}}\), it follows that \({\overline{\hbox{co}}({\mathbb{P}})\subseteq\mathbb{D}}\), a weak*-compact set, whence \({\underline{\hbox{P}}(E) =\inf\{p(E): p \in \mathbb{P}\}=\min\{p(E): p \in\overline{\hbox{co}}(\mathbb{P})\}}\), the minimum being achieved at an extreme point of \({\overline{\hbox{co}}(\mathbb{P})}\).
Now given \(\underline{\hbox{P}}(H)>0\), let \({\mathbb{D}[H]}\) be defined by:
As before, observe that \({\mathbb{P}\subseteq\mathbb{D}[H]}\) and that \({\mathbb{D}[H]}\) is convex and weak*-closed and so weak*-compact. Hence, \({\overline{\hbox{co}}(\mathbb{P})\subseteq\mathbb{D}[H]}\). Since \(\underline{\hbox{P}}(H)>0\), by the first part it follows that \({\overline{\hbox{co}}(\mathbb{P})\subseteq\{p\in\mathbb{P}({\fancyscript{A}}):p(H)>0\}}\). Since with respect to the weak* topology on the dual space every evaluation functional \(f^{*}\) is a real-valued continuous linear functional on the dual space with \(f^{*}(p) = p(f)\) for each p from the dual space, it follows that \(\frac{(E\cap H)^*}{H^*}\) is a continuous explicitly quasi-concave function of p on \({\overline{\hbox{co}}(\mathbb{P})}\), so it attains a minimum at an extreme point of \({\overline{\hbox{co}}(\mathbb{P})}\) (thus, \({\min\{\frac{(E\cap H)^*(p)}{H^*(p)}:p\in\overline{\hbox{co}}(\mathbb{P})\}=\min\{\frac{p(E\cap H)}{p(H)}:p\in\overline{\hbox{co}}(\mathbb{P})\}}\) exists and is attained at an extreme point of \({\overline{\hbox{co}}(\mathbb{P})}\)), whence \({\underline{\hbox{P}}(E\mid H)= \inf\{p(E\mid H): p \in \mathbb{P}\}=\min\{p(E\mid H):p\in\overline{\hbox{co}}(\mathbb{P})\}}\), as desired. □
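The quasi-concavity step can be illustrated numerically: the conditional probability is a ratio of two linear functionals of p, so its value at any mixture of two probability functions lies between its values at the components, and minima over a hull of finitely many generators occur at the generators. A small sketch with illustrative numbers of our own:

```python
def cond(p, E, H):
    """Conditional probability p(E | H) for a dict-encoded probability p."""
    return sum(p[w] for w in E & H) / sum(p[w] for w in H)

def mix(p0, p1, lam):
    """Convex combination lam * p0 + (1 - lam) * p1."""
    return {w: lam * p0[w] + (1 - lam) * p1[w] for w in p0}

# Four atoms: 'eh' lies in E and H, 'e' only in E, 'h' only in H, 'o' in neither.
p0 = {'eh': 0.10, 'e': 0.20, 'h': 0.30, 'o': 0.40}
p1 = {'eh': 0.30, 'e': 0.20, 'h': 0.05, 'o': 0.45}
E, H = {'eh', 'e'}, {'eh', 'h'}

vals = [cond(mix(p0, p1, k / 10), E, H) for k in range(11)]
print(min(vals), max(vals))   # extremes occur at the endpoints (k = 10 and k = 0)
```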
Proof of Proposition 5.1
We first show that (i) \(\Longleftrightarrow\) (iii), and we then show that (i) \(\Longleftrightarrow\) (ii).
 (i)⇒(iii):

Suppose that \({\fancyscript{B}}\) dilates E. Then for each \(i\in I\), \(\underline{\hbox{P}}(E\mid H_{i})<\underline{\hbox{P}}(E)\leq\overline{\hbox{P}}(E)<\overline{\hbox{P}}(E\mid H_{i})\). For each \(i\in I\), consider the real-valued function \(\underline{\varepsilon}_{i}(p)=_{df}p(E\mid H_{i})-\underline{\hbox{P}}(E\mid H_{i})\) and the real-valued function \(\hbox{S}_{p}(E, H_{i})\). We recall that the dual space with the weak* topology is a locally convex topological vector space with respect to which every evaluation functional \(f^{*}\) is a real-valued continuous linear functional. It follows that \(\underline{\varepsilon}_{i}(p)\) and \(\hbox{S}_{p}(E, H_{i})\) are continuous functions of p on \({\mathbb{P}}\) for each \(i\in I\).
Now let \(i\in I\). By hypothesis, there is \({p_{1}\in \mathbb{P}}\) such that \(\hbox{S}_{p_{1}}(E, H_{i}) > 1\), so, importantly, \(C^{+}_{i}\) is nonempty. Then since \({C^{+}_{i}=_{df}\{p\in\mathbb{P}:\hbox{S}_p(E,H_{i})\geq 1\}}\) is a weak*-closed and so weak*-compact set, it follows that \(\underline{\varepsilon}_{i}\) achieves a minimum value on \(C^{+}_{i}\) (and the set of minimizers of \(\underline{\varepsilon}_{i}\) is also compact). Choosing a minimizer \({p_{i}\in \mathbb{P}}\) of \(\underline{\varepsilon}_{i}\), we see that for every \({p\in\mathbb{P}}\), if \(p(E\mid H_{i}) - \underline{\hbox{P}}(E\mid H_{i}) <\underline{\varepsilon}_{i}(p_{i})= p_{i}(E\mid H_{i}) - \underline{\hbox{P}}(E\mid H_{i})\), then \(\hbox{S}_{p}(E, H_{i}) < 1\). We have accordingly shown that \({\underline{\mathbb{P}}(E\mid H_{i},\underline{\varepsilon}_{i}(p_{i}))\subseteq \hbox{S}_{\mathbb{P}}^{-}(E,H_{i})}\). Of course, we may suppress reference to the minimizer \(p_{i}\) in \(\underline{\varepsilon}_{i}(p_{i})\). The other inclusion \({\overline{\mathbb{P}}(E\mid H_{i},\overline{\varepsilon}_{i})\subseteq\hbox{S}_{\mathbb{P}}^{+}(E,H_{i})}\) is established by a similar argument.
 (iii)⇒(i):

Suppose that \({\fancyscript{B}}\) does not dilate E. Then there is \(i\in I\) such that \(\underline{\hbox{P}}(E\mid H_{i})\geq \underline{\hbox{P}}(E)\) or \(\overline{\hbox{P}}(E)\geq \overline{\hbox{P}}(E\mid H_{i})\). We may assume without loss of generality that \(\underline{\hbox{P}}(E\mid H_{i})\geq \underline{\hbox{P}}(E)\) for some \(i\in I\). First, if \(\underline{\hbox{P}}(E)\leq\overline{\hbox{P}}(E)\leq \underline{\hbox{P}}(E\mid H_{i})\), then choosing \({p\in\mathbb{P}}\) witnessing \(\underline{\hbox{P}}(E\mid H_{i})\), we see that \(\hbox{S}_{p}(E, H_{i}) \geq 1\). Second, if \(\underline{\hbox{P}}(E)< \underline{\hbox{P}}(E\mid H_{i})<\overline{\hbox{P}}(E)\), then for every \(\varepsilon>0\) we can find a convex combination \({p\in \mathbb{P}}\) of \({p_{0},p_{1}\in\mathbb{P}}\) assigning a probability to E within \(\varepsilon\)-distance below \(\underline{\hbox{P}}(E\mid H_{i})\), where \(\underline{\hbox{P}}(E)\leq p_{0}(E)< \underline{\hbox{P}}(E\mid H_{i})< p_{1}(E)\leq\overline{\hbox{P}}(E)\), so \(\hbox{S}_{p}(E, H_{i}) > 1\). Third, if \(\underline{\hbox{P}}(E)= \underline{\hbox{P}}(E\mid H_{i})<\overline{\hbox{P}}(E)\), then choosing \({p\in \mathbb{P}}\) witnessing \(\underline{\hbox{P}}(E)\), we see that \(\hbox{S}_{p}(E, H_{i}) \geq 1\). Evidently, then, the inclusions of condition (iii) cannot be jointly satisfied.
 (i) ⇔ (ii):

On the one hand, suppose that (i) obtains. Then since (iii) accordingly obtains, define \((\varepsilon_{i})_{i\in I}\) by setting \(\varepsilon_{i}=_{df}\min(\underline{\varepsilon}_{i},\overline{\varepsilon}_{i})\) for each \(i\in I\). Clearly the inclusions still obtain for the \(\varepsilon_{i}\). On the other hand, if (ii) obtains, obviously by setting \(\underline{\varepsilon}_{i}=_{df}\varepsilon_{i}\) and \(\overline{\varepsilon}_{i}=_{df}\varepsilon_{i}\) for each \(i\in I\), condition (iii) obtains and so (i) obtains.
Proof of Corollary 5.2
Only (i)\(\Longleftrightarrow\)(iii) requires proof. On the one hand, suppose that \({\fancyscript{B}}\) dilates E. Then for each \(i\in I\), \(\underline{\hbox{P}}(E\mid H_{i})<\underline{\hbox{P}}(E)\leq\overline{\hbox{P}}(E)<\overline{\hbox{P}}(E\mid H_{i})\). By Proposition 4.3 we have \({\underline{\hbox{P}}(A) = \min\{p(A): p \in\mathbb{P}_{*}\}}\), \({\underline{\hbox{P}}(A\mid B) = \min\{p(A\mid B): p \in\mathbb{P}_{*}\}}\), \({\overline{\hbox{P}}(A) = \max\{p(A): p \in\mathbb{P}_{*}\}}\), and \({\overline{\hbox{P}}(A\mid B) = \max\{p(A\mid B): p \in\mathbb{P}_{*}\}}\) for every \({A,B\in\fancyscript{A}}\) with \(\underline{\hbox{P}}(B)>0\), so \({\fancyscript{B}}\) dilates E with respect to \({\mathbb{P}_{*}}\). It follows from Proposition 5.1 that there are positive \((\underline{\varepsilon}_{i},\overline{\varepsilon}_{i})_{i\in I}\) in \({\mathbb{R}}\) such that \({\underline{\mathbb{P}}_{*}(E\mid H_{i},\underline{\varepsilon}_{i})\subseteq\hbox{S}_{*}^{-}(E,H_{i})}\) and \({\overline{\mathbb{P}}_{\ast}(E\mid H_{i},\overline{\varepsilon}_{i})\subseteq\hbox{S}_{\ast}^{+}(E,H_{i})}\) for every \(i\in I\).
On the other hand, suppose that there are positive \((\underline{\varepsilon}_{i},\overline{\varepsilon}_{i})_{i\in I}\) in \({\mathbb{R}}\) such that for every \(i\in I\), \({\underline{\mathbb{P}}_{*}(E\mid H_{i},\underline{\varepsilon}_{i})\subseteq\hbox{S}_{*}^{-}(E,H_{i})}\) and \({\overline{\mathbb{P}}_{*}(E\mid H_{i},\overline{\varepsilon}_{i})\subseteq\hbox{S}_{*}^{+}(E,H_{i})}\). Then by Proposition 5.1, \({\fancyscript{B}}\) dilates E with respect to \({\mathbb{P}_{*}}\), so for every \(i\in I\), \(\underline{\hbox{P}}(E\mid H_{i})<\underline{\hbox{P}}(E)\leq\overline{\hbox{P}}(E)<\overline{\hbox{P}}(E\mid H_{i})\), whence again by Proposition 4.3 it follows that \({\fancyscript{B}}\) dilates E with respect to \({\mathbb{P}}\), as desired.
Clearly, that the radii \(\underline{\varepsilon}_{i}\) and \(\overline{\varepsilon}_{i}\) may be chosen in the way described follows from Proposition 5.1. The other implications are trivial consequences of what we have just shown.□
Proof of Proposition 5.3
On the one hand, if \({\fancyscript{B}}\) strictly dilates E, then by Corollary 5.2 there are \({(\underline{\varepsilon}_{i})_{i\in I}\in{\mathbb{R}}_{+}^{I}}\) and \({(\overline{\varepsilon}_{i})_{i\in I}\in{\mathbb{R}}_{+}^{I}}\) such that for every \(i\in I\), \(\underline{\mathbb{P}}_{*}(E\mid H_{i},\underline{\varepsilon}_{i})\subseteq\hbox{S}_{*}^{-}(E,H_{i})\) and \(\overline{\mathbb{P}}_{*}(E\mid H_{i},\overline{\varepsilon}_{i})\subseteq\hbox{S}_{*}^{+}(E,H_{i})\). Let
Then \(\varepsilon>0\), and clearly the inclusions still obtain. On the other hand, if (ii) obtains, then part (ii) of Corollary 5.2 obtains, so \({\fancyscript{B}}\) strictly dilates E. □
Pedersen, A.P., Wheeler, G. Demystifying Dilation. Erkenn 79, 1305–1342 (2014). https://doi.org/10.1007/s1067001395317