Maximum Entropy and Bayesian Methods, pp. 31–39

# Objective Bayesianism and Geometry


## Abstract

We suggest in this paper that the concepts of utility, prior probability and entropy are not independent but must be related through the following formula: “*The expected utility of a theory is an increasing function of its entropy*.” It follows that associated with each regular class of theories (i.e. parametric statistical model) there is a unique one-parameter family of densities able to act as prior distributions. These entropic priors form the exponential family generated by the invariant measure in the class, with the entropy of each theory as sufficient statistic.
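The family described in the abstract can be sketched in symbols. This is a reading of the abstract's claim, with notation ($\mu$, $S$, $\alpha$, $Z$) assumed here for illustration rather than taken from the text: writing $\mu(\theta)$ for the invariant measure on the class of theories and $S(\theta)$ for the entropy of the theory indexed by $\theta$, the entropic priors would take the form

```latex
% Sketch of the one-parameter family of entropic priors described
% in the abstract. The symbols mu (invariant measure), S (entropy),
% alpha (family parameter) and Z (normalizer) are assumed notation.
\[
  \pi_{\alpha}(\theta)
    \;=\; \frac{1}{Z(\alpha)}\, e^{\alpha S(\theta)}\, \mu(\theta),
  \qquad
  Z(\alpha) \;=\; \int e^{\alpha S(\theta)}\, \mu(\theta)\, d\theta .
\]
```

In this form the claim is visible term by term: the family is exponential, it is generated by the invariant measure $\mu$, the entropy $S(\theta)$ appears as the sufficient statistic, and $\alpha$ is the single natural parameter.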

## Keywords

Invariant Measure, Prior Distribution, Prior Probability, Maximum Entropy, Exponential Family



## Copyright information

© Kluwer Academic Publishers 1990