
Estimation of Entropy from Subword Complexity

  • Łukasz Dębowski
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 605)

Abstract

Subword complexity is the function that counts how many distinct substrings of a given length a given string contains. In this paper, two estimators of block entropy based on the subword complexity profile are proposed. The first estimator works well only for IID processes with uniform probabilities. The second estimator provides a lower bound on block entropy for any strictly stationary process whose block distributions are skewed towards less probable values. Using the second estimator, estimates of block entropy for natural language texts are obtained, confirming earlier hypotheses.
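The chapter's estimators are not reproduced on this page. As a minimal sketch of the quantity they are built on, and assuming Python with illustrative names (subword_complexity, naive_entropy_rate) that are not the chapter's notation, the snippet below computes the subword complexity profile f(k), i.e. the number of distinct length-k substrings of a string, together with the crude per-symbol quantity log2(f(k)) / k. That ratio tracks the entropy rate only for IID processes with uniform probabilities, and only while f(k) has not saturated at the number of available positions, n - k + 1.

    import math

    def subword_complexity(text: str, max_k: int) -> dict[int, int]:
        """Subword complexity profile: f(k) = number of distinct substrings of length k."""
        return {
            k: len({text[i:i + k] for i in range(len(text) - k + 1)})
            for k in range(1, max_k + 1)
        }

    def naive_entropy_rate(text: str, max_k: int) -> dict[int, float]:
        """Illustrative only: log2(f(k)) / k as a crude per-symbol entropy proxy.

        For an IID source with uniform probabilities over an alphabet A,
        f(k) is close to min(|A|**k, n - k + 1), so the ratio tracks
        log2 |A| as long as f(k) has not yet saturated.
        """
        f = subword_complexity(text, max_k)
        return {k: math.log2(f[k]) / k for k in f}

    # Example: "abracadabra" has 5 distinct letters, 7 distinct bigrams and
    # 7 distinct trigrams, so subword_complexity(...) == {1: 5, 2: 7, 3: 7}.
    print(subword_complexity("abracadabra", 3))
    print(naive_entropy_rate("abracadabra", 3))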

Keywords

Subword complexity · Block entropy · IID processes · Natural language · Large number of rare events


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Institute of Computer Science, Polish Academy of Sciences, Warszawa, Poland
