Measuring Informativeness of Data by Entropy and Variance

  • Nader Ebrahimi
  • Esfandiar Maasoumi
  • Ehsan S. Soofi

Abstract

Measuring the informativeness of data or news is particularly important because it quantifies the amount of “learning”. This is central to scientific progress, as well as to assessing the direction and value of “information” and of technologies. As with all “indices”, the desirability of any measure of information depends on at least two considerations. The first is the inference or investigative technique that will utilize the information. The second is the distributional characteristics of the data to be summarized. For example, least squares techniques are, by design, incapable of utilizing any information other than the “variation” in a distribution or data set; and Gaussian distributions are entirely characterized by their first two moments, so for Gaussian data any index is necessarily a function of those same moments. Both considerations must be borne in mind when contrasting entropy and variance as indices of informativeness or uncertainty.
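
The contrast is easy to make concrete. The following sketch (not from the chapter; a minimal illustration assuming Python with NumPy and SciPy) compares a standard normal with a uniform distribution calibrated to the same variance: variance cannot tell the two apart, while their differential entropies differ, the normal attaining the maximum entropy achievable under a variance constraint.

```python
import numpy as np
from scipy import stats

# Two distributions with the same variance (sigma^2 = 1): a standard
# normal, and a uniform on [-w/2, w/2] with width w = sqrt(12), since
# Var(Uniform(a, b)) = (b - a)^2 / 12.
w = np.sqrt(12.0)
normal = stats.norm(loc=0.0, scale=1.0)
uniform = stats.uniform(loc=-w / 2.0, scale=w)

print("variances:", normal.var(), uniform.var())          # 1.0 and 1.0
print("entropies:", normal.entropy(), uniform.entropy())  # ~1.4189 vs ~1.2425

# Closed forms (in nats):
#   h(Normal)  = 0.5 * ln(2 * pi * e * sigma^2) ~= 1.4189
#   h(Uniform) = ln(b - a) = ln(sqrt(12))       ~= 1.2425
# An index built on variance cannot separate these two distributions;
# entropy can, and it ranks the normal highest, as the maximum-entropy
# distribution under a fixed-variance constraint.
```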

Keywords

Posterior Distribution, Prior Distribution, Maximum Entropy, Posterior Density, Lorenz Curve

Copyright information

© Physica-Verlag Heidelberg 1999

Authors and Affiliations

  • Nader Ebrahimi, Division of Statistics, Northern Illinois University, DeKalb, USA
  • Esfandiar Maasoumi, Department of Economics, Southern Methodist University, Dallas, USA
  • Ehsan S. Soofi, School of Business Administration, University of Wisconsin-Milwaukee, USA
