
Different Closed-Form Expressions for Generalized Entropy Rates of Markov Chains

Methodology and Computing in Applied Probability

Abstract

Closed-form expressions for the generalized entropy rates of Markov chains are obtained through pertinent averaging. First, the rates are expressed in terms of the Perron–Frobenius eigenvalues of perturbations of the transition matrices; this leads to a classification of generalized entropy functionals into five mutually exclusive types. Then, a weighted expression is obtained in which the associated Perron–Frobenius eigenvectors play the same role as the stationary distribution does in the well-known weighted expression for the Shannon entropy rate. Finally, every term is shown to have a dynamical meaning for an auxiliary absorbing Markov chain, through the notion of quasi-limit distribution. Important properties of the spectral elements involved are illustrated by an application to binary Markov chains.
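As a concrete instance of the eigenvalue expression mentioned above, recall the classical result of Rached, Alajaji and Campbell (2001): the Rényi entropy rate of order α of an ergodic finite Markov chain with transition matrix P is (1/(1−α)) log λ_α, where λ_α is the Perron–Frobenius eigenvalue of the entrywise-perturbed matrix with entries p_ij^α. A minimal NumPy sketch for a binary chain follows; the particular matrix P is an arbitrary illustrative choice, not one taken from the paper.

```python
import numpy as np

def renyi_entropy_rate(P, alpha):
    """Rényi entropy rate of order alpha != 1 for an ergodic chain P,
    via the Perron-Frobenius eigenvalue of the entrywise power P^(alpha)."""
    lam = np.max(np.real(np.linalg.eigvals(P ** alpha)))
    return np.log(lam) / (1.0 - alpha)

def shannon_entropy_rate(P):
    """Shannon entropy rate: stationary-distribution-weighted average of
    the row entropies of P (the classical weighted expression)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])  # left PF eigenvector of P
    pi /= pi.sum()
    terms = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi * terms.sum(axis=1))

# Binary chain with an arbitrary illustrative transition matrix.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# As alpha -> 1, the Rényi rate recovers the Shannon rate.
print(renyi_entropy_rate(P, 0.999), shannon_entropy_rate(P))
```

The entrywise power `P ** alpha` is the perturbation of the transition matrix whose dominant eigenvalue drives the rate; taking α close to 1 numerically recovers the Shannon entropy rate as a sanity check.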



Author information

Correspondence to Valérie Girardin.


Cite this article

Girardin, V., Lhote, L. & Regnault, P. Different Closed-Form Expressions for Generalized Entropy Rates of Markov Chains. Methodol Comput Appl Probab 21, 1431–1452 (2019). https://doi.org/10.1007/s11009-018-9679-3

