
Synthese, Volume 168, Issue 1, pp 119–149

Counting distinctions: on the conceptual foundations of Shannon’s information theory

  • David Ellerman
Article

Abstract

Categorical logic has shown that modern logic is essentially the logic of subsets (or “subobjects”). In “subset logic,” predicates are modeled as subsets of a universe, and a predicate applies to an individual if the individual is in the subset. Partitions are dual to subsets, so there is a dual logic of partitions where a “distinction” [an ordered pair of distinct elements (u, u′) from the universe U] is dual to an “element.” A predicate modeled by a partition π on U applies to a distinction if the pair of elements is distinguished by the partition π, i.e., if u and u′ are in different blocks of π. Subset logic leads to finite probability theory by taking the (Laplacian) probability of a subset-event to be its normalized size in a finite universe. The analogous step in the logic of partitions is to assign to a partition the number of distinctions it makes, normalized by the total number |U|² of ordered pairs from the finite universe. That yields a notion of “logical entropy” for partitions and a “logical information theory.” The logical theory directly counts the (normalized) number of distinctions in a partition, while Shannon’s theory gives the average number of binary partitions needed to make those same distinctions. Thus the logical theory provides a conceptual underpinning for Shannon’s theory based on the logical notion of “distinctions.”
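The two entropy notions contrasted in the abstract can be sketched in a few lines of Python (a minimal illustration, not code from the paper; the function names and the toy partition are my own). Logical entropy h(π) counts the ordered pairs (u, u′) distinguished by the partition, normalized by |U|²; Shannon entropy H(π) is computed from the block-size distribution.

```python
from math import log2

def logical_entropy(partition, universe_size):
    """h(pi): fraction of ordered pairs (u, u') that the partition
    distinguishes, i.e., places in different blocks."""
    distinctions = sum(
        len(b1) * len(b2)
        for i, b1 in enumerate(partition)
        for j, b2 in enumerate(partition)
        if i != j  # ordered pairs drawn from two different blocks
    )
    return distinctions / universe_size ** 2

def shannon_entropy(partition, universe_size):
    """H(pi): Shannon entropy of the block-size distribution, in bits."""
    return -sum(
        (len(b) / universe_size) * log2(len(b) / universe_size)
        for b in partition
    )

# A universe of 4 elements split into two equal blocks.
pi = [{0, 1}, {2, 3}]
print(logical_entropy(pi, 4))  # 0.5  (= 1 - 2*(1/2)^2 distinctions made)
print(shannon_entropy(pi, 4))  # 1.0  (one binary partition suffices)
```

On the equal-blocks example the two measures agree with the abstract's reading: half of all ordered pairs are distinctions, and on average one binary partition is needed to make them.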

Keywords

Information theory · Logic of partitions · Logical entropy · Shannon entropy



Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  1. Department of Philosophy, University of California Riverside, Riverside, USA
