Foundations: Indifference, Independence & MaxEnt
By completing an underspecified probability model, Maximum Entropy (MaxEnt) supports non-monotonic inferences. Major aspects of how MaxEnt does this can be understood against the background of two principles of rational decision: the principle of Indifference and the principle of Independence. In a formal specification, MaxEnt can be viewed as a (conservative) extension of these principles, so they shed light on the seemingly "magical" decisions of MaxEnt. The converse holds as well: since MaxEnt is a "correct" representation of the set of models (Concentration Theorem), it elucidates these two principles (for example, it can be shown that knowledge of independences can be of very different information-theoretic value). These principles and their calculi are not just arbitrary ideas: when extended to work with qualitative constraints modelled by probability intervals, each calculus can be successfully applied to V. Lifschitz's benchmarks of non-monotonic reasoning and is able to infer some of their instances ([Lifschitz88]). Since MaxEnt is strictly stronger than the combination of the two principles, it yields a powerful tool for decisions under incomplete knowledge. As an illustration, a well-known problem of statistical inference (Simpson's Paradox) serves as a running example throughout the paper.
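The idea of completing an underspecified model can be made concrete with a small sketch (not taken from the paper): given only the marginals of two binary propositions, iterative proportional fitting (IPF) started from the uniform distribution converges to the MaxEnt joint distribution, which, matching the Independence principle, renders the two propositions independent. The marginal values 0.3 and 0.6 are arbitrary illustrative choices.

```python
def ipf(p_a, p_b, iters=100):
    """MaxEnt completion of a 2x2 joint given only the marginals
    P(A)=p_a and P(B)=p_b, via iterative proportional fitting."""
    # joint[i][j] = P(A=i, B=j); start from the uniform distribution
    joint = [[0.25, 0.25], [0.25, 0.25]]
    target_a = [1 - p_a, p_a]
    target_b = [1 - p_b, p_b]
    for _ in range(iters):
        # scale rows so the marginal of A matches its target
        row = [joint[i][0] + joint[i][1] for i in range(2)]
        for i in range(2):
            for j in range(2):
                joint[i][j] *= target_a[i] / row[i]
        # scale columns so the marginal of B matches its target
        col = [joint[0][j] + joint[1][j] for j in range(2)]
        for i in range(2):
            for j in range(2):
                joint[i][j] *= target_b[j] / col[j]
    return joint

joint = ipf(0.3, 0.6)
print(round(joint[1][1], 4))  # P(A and B) = P(A) * P(B) = 0.18
```

With no constraint linking A and B, the MaxEnt completion factorizes, which is exactly what the Indifference and Independence principles would dictate here; constraints that do couple the propositions (as in Simpson's Paradox) would yield a non-product joint.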
Keywords: Maximum Entropy, Undirected Graph, Linear Constraint, Propositional Logic, Elementary Event
- [Bacchus90] F. Bacchus, "Lp — A Logic for Statistical Information", Uncertainty in Artificial Intelligence 5, pp. 3–14, Elsevier Science, ed.: M. Henrion, R.D. Shachter, L.N. Kanal, J.F. Lemmer, 1990.
- [Bacchus94] F. Bacchus, A.J. Grove, J.Y. Halpern, D. Koller, "From Statistical Knowledge Bases to Degrees of Belief", Technical Report (available via ftp at logos.uwaterloo.ca:/pub/bacchus), 1994.
- [Cox79] R.T. Cox, "Of Inference and Inquiry — An Essay in Inductive Logic", in: The Maximum Entropy Formalism, MIT Press, ed.: Levine & Tribus, pp. 119–167, 1979.
- [Howson93] C. Howson, P. Urbach, "Scientific Reasoning: The Bayesian Approach", 2nd Edition, Open Court, 1993.
- [Jaynes78] E.T. Jaynes, "Where do we stand on Maximum Entropy?", 1978, in: E.T. Jaynes: Papers on Probability, Statistics and Statistical Physics, pp. 210–314, Kluwer Academic Publishers, ed.: R.D. Rosenkrantz, 1989.
- [Lifschitz88] V. Lifschitz, "Benchmark Problems for Formal Nonmonotonic Reasoning", Lecture Notes in Artificial Intelligence: Non-Monotonic Reasoning, Vol. 346, pp. 202–219, ed.: Reinfrank et al., 1988.
- [Neapolitan90] R.E. Neapolitan, "Probabilistic Reasoning in Expert Systems: Theory and Algorithms", John Wiley & Sons, 1990.
- [Pearl88] J. Pearl, "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", Morgan Kaufmann, San Mateo, CA, 1988.
- [Skilling88] J. Skilling, "The Axioms of Maximum Entropy", in: Maximum-Entropy and Bayesian Methods in Science and Engineering, Vol. 1 — Foundations, Kluwer Academic, ed.: G.J. Erickson, C.R. Smith, Seattle Univ. Washington, 1988.