
Information Quantities and Parameter Estimation in Classical Systems

Chapter in: Quantum Information Theory

Part of the book series: Graduate Texts in Physics (GTP)


Abstract

The study of quantum information theory builds on information theory, mathematical statistics, and information geometry, which are mainly examined in a nonquantum context. This chapter briefly summarizes the fundamentals of these topics from a unified viewpoint. Since these topics are usually treated individually, this chapter will be useful even for nonquantum applications.


Notes

  1. Here, we adopt the convention \( 0 \log 0 = 0 \).
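As a concrete illustration of this convention (a minimal sketch; the function name and the example distribution are mine, not from the text), the Shannon entropy \( H(p) = -\sum _x p(x) \log p(x) \) can be computed by simply dropping the zero-probability outcomes:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum_x p(x) log p(x), with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0                       # zero-probability outcomes contribute 0
    return -np.sum(p[nz] * np.log(p[nz]))

# The zero entry is ignored, so this equals log 2 ~ 0.693 (natural logarithm).
print(shannon_entropy([0.5, 0.5, 0.0]))
```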

  2. The term relative entropy is commonly used in statistical physics. In information theory, it is generally known as the Kullback–Leibler divergence, while in statistics it is known as the Kullback–Leibler information.
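For reference, a sketch of the quantity itself, \( D(p \Vert q) = \sum _x p(x) \log \frac{p(x)}{q(x)} \), in Python (the function name and the infinity convention are my choices; the divergence is set to \( +\infty \) when p puts mass where q does not):

```python
import numpy as np

def relative_entropy(p, q):
    """D(p||q) = sum_x p(x) log(p(x)/q(x)), a.k.a. the Kullback-Leibler
    divergence; +inf if p is not absolutely continuous w.r.t. q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.any((p > 0) & (q == 0)):
        return np.inf
    nz = p > 0                       # 0 log(0/q(x)) = 0 by the convention above
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

print(relative_entropy([0.5, 0.5], [0.9, 0.1]))  # log(5/3) ~ 0.511
```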

  3. In this book, monotonicity refers only to monotonicity under a change of the probability distributions or density matrices.

  4. This quantity is more commonly used in information theory, where it is called the f-divergence [1]. In this text, we prefer the term “relative entropy” for all relative-entropy-like quantities.
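A sketch of the general quantity (names are mine; for brevity it assumes \( q(x) > 0 \) for every outcome): the f-divergence is \( D_f(p \Vert q) = \sum _x q(x) f\big( \frac{p(x)}{q(x)} \big) \) for a convex function f with \( f(1) = 0 \), and choosing \( f(t) = t \log t \) recovers the relative entropy:

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p||q) = sum_x q(x) f(p(x)/q(x)) for convex f with f(1) = 0.
    Illustration only: assumes q(x) > 0 for every outcome x."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(q * f(p / q))

p, q = [0.5, 0.5], [0.9, 0.1]
print(f_divergence(p, q, lambda t: t * np.log(t)))  # relative entropy ~ 0.511
print(f_divergence(p, q, lambda t: (t - 1) ** 2))   # chi-squared divergence
```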

  5. If L is not an integer, we take the largest integer that does not exceed L (the floor of L).

  6. \( \mathop {\mathrm{argmin}}_{0\le s\le 1} f(s) \) denotes the value of s that attains \( \min _{0\le s\le 1} f(s)\). \( \mathop {\mathrm{argmax}}\) is defined similarly.
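A small numerical illustration of this notation (the helper and the quadratic test function are hypothetical, chosen only for the example):

```python
import numpy as np

def argmin_unit_interval(f, num=100001):
    """Approximate argmin_{0 <= s <= 1} f(s) by a grid search."""
    s = np.linspace(0.0, 1.0, num)
    return s[np.argmin(f(s))]

# f(s) = (s - 0.3)^2 attains its minimum on [0, 1] at s = 0.3.
print(argmin_unit_interval(lambda s: (s - 0.3) ** 2))
```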

  7. The superscript (e) means “exponential.” This is because A corresponds to the exponential representation, as discussed later.

  8. This inequality still holds even if the asymptotic unbiasedness condition is replaced by another weak condition. Indeed, choosing a suitable condition to assume for inequality (2.142) is itself a delicate problem. For details, see van der Vaart [7].
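Reading inequality (2.142) as the asymptotic Cramér–Rao bound \( \lim _n n \mathrm{V}_\theta [\hat{\theta }_n] \ge J_\theta ^{-1} \) (an assumption on my part, since the inequality is not reproduced here), a Monte Carlo sketch for the Bernoulli family, whose Fisher information is \( J_\theta = 1/(\theta (1-\theta )) \) and whose maximum-likelihood estimator attains the bound:

```python
import numpy as np

# Bernoulli(theta): Fisher information J = 1/(theta(1-theta)); the MLE is
# the sample mean, which is unbiased here and attains the Cramer-Rao bound.
rng = np.random.default_rng(0)
theta, n, trials = 0.3, 1000, 20000
mle = rng.binomial(n, theta, size=trials) / n    # one estimate per trial
print(n * mle.var())            # ~ 0.21, the empirical n * variance
print(theta * (1 - theta))      # 0.21 = 1/J(theta), the lower bound
```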

  9. This is generally true for all probability distribution families, although some regularity conditions must be imposed. For example, consider the case in which \(\varOmega \) consists of finitely many elements; the regularity conditions are then satisfied when the first and second derivatives with respect to \(\theta \) are continuous. Generally, the central limit theorem is used in the proof [7].

  10. The interior of a set X consists of the elements of X that do not lie on its boundary. For example, for a one-dimensional set, the interior of \([0,0.5]\cup \{0.7\}\) is (0, 0.5), and the closure of this interior is [0, 0.5]. Therefore, the condition is not satisfied in this case.

References

  1. I. Csiszár, Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar. 2, 299–318 (1967)

  2. S. Amari, H. Nagaoka, Methods of Information Geometry (AMS & Oxford University Press, Oxford, 2000)

  3. A. Rényi, On measures of information and entropy, in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley, 1961), pp. 547–561

  4. R.M. Fano, Transmission of Information: A Statistical Theory of Communication (Wiley, New York, 1961)

  5. M. Hayashi, Security analysis of \(\varepsilon \)-almost dual universal\(_2\) hash functions: smoothing of min entropy vs. smoothing of Rényi entropy of order 2 (2013). arXiv:1309.1596

  6. S. Amari, \(\alpha \)-divergence is unique, belonging to both \(f\)-divergence and Bregman divergence classes. IEEE Trans. Inform. Theory 55(11), 4925–4931 (2009)

  7. A.W. van der Vaart, Asymptotic Statistics (Cambridge University Press, Cambridge, 1998)

  8. I. Csiszár, J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems (Academic, New York, 1981)

  9. I.N. Sanov, On the probability of large deviations of random variables. Mat. Sbornik 42, 11–44 (1957) (in Russian). English translation: Selected Translat. Math. Stat. 1, 213–244 (1961)

  10. M. Keyl, R.F. Werner, Estimating the spectrum of a density operator. Phys. Rev. A 64, 052311 (2001)

  11. K. Matsumoto, Seminar notes (1999)

  12. M. Hayashi, Optimal sequence of POVMs in the sense of Stein’s lemma in quantum hypothesis testing. J. Phys. A Math. Gen. 35, 10759–10773 (2002)

  13. M. Hayashi, Exponents of quantum fixed-length pure state source coding. Phys. Rev. A 66, 032321 (2002)

  14. M. Hayashi, K. Matsumoto, Variable length universal entanglement concentration by local operations and its application to teleportation and dense coding, quant-ph/0109028 (2001); K. Matsumoto, M. Hayashi, Universal entanglement concentration. Phys. Rev. A 75, 062338 (2007)

  15. M. Hayashi, K. Matsumoto, Quantum universal variable-length source coding. Phys. Rev. A 66, 022311 (2002)

  16. M. Hayashi, K. Matsumoto, Simple construction of quantum universal variable-length source coding. Quant. Inf. Comput. 2, Special Issue, 519–529 (2002)

  17. M. Hayashi, Asymptotics of quantum relative entropy from a representation theoretical viewpoint. J. Phys. A Math. Gen. 34, 3413–3419 (2001)

  18. H. Cramér, Sur un nouveau théorème-limite de la théorie des probabilités, in Actualités Scientifiques et Industrielles, no. 736, Colloque consacré à la théorie des probabilités (Hermann, Paris, 1938), pp. 5–23

  19. J. Gärtner, On large deviations from the invariant measure. Theory Prob. Appl. 22, 24–39 (1977)

  20. R. Ellis, Large deviations for a general class of random vectors. Ann. Probab. 12(1), 1–12 (1984); Entropy, Large Deviations and Statistical Mechanics (Springer, Berlin, 1985)

  21. R.R. Bahadur, On the asymptotic efficiency of tests and estimates. Sankhyā 22, 229 (1960)

  22. R.R. Bahadur, Rates of convergence of estimates and test statistics. Ann. Math. Stat. 38, 303 (1967)

  23. R.R. Bahadur, Some limit theorems in statistics, in Regional Conference Series in Applied Mathematics, no. 4 (SIAM, Philadelphia, 1971)

  24. J.C. Fu, On a theorem of Bahadur on the rate of convergence of point estimators. Ann. Stat. 1, 745 (1973)

  25. A.I. Khinchin, Mathematical Foundations of Information Theory (Dover, New York, 1957)

  26. T.S. Han, Information-Spectrum Methods in Information Theory (Springer, Berlin, 2002) (originally published in Japanese in 1998)

  27. V.D. Milman, G. Schechtman, Asymptotic Theory of Finite-Dimensional Normed Spaces, Lecture Notes in Mathematics, vol. 1200 (Springer, Berlin, 1986)

  28. T. Cover, J. Thomas, Elements of Information Theory (Wiley, New York, 1991)

  29. M. Hayashi, Exponential decreasing rate of leaked information in universal random privacy amplification. IEEE Trans. Inf. Theory 57, 3989–4001 (2011)

  30. M. Iwamoto, J. Shikata, Information theoretic security for encryption based on conditional Rényi entropies, in Information Theoretic Security, Lecture Notes in Computer Science, vol. 8317 (Springer, 2014), pp. 103–121

  31. M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr, M. Tomamichel, On quantum Rényi entropies: a new generalization and some properties. J. Math. Phys. 54, 122203 (2013)

  32. E.L. Lehmann, G. Casella, Theory of Point Estimation (Springer, New York, 1998)

  33. A. Dembo, O. Zeitouni, Large Deviation Techniques and Applications (Springer, Berlin, 1997)

  34. J.A. Bucklew, Large Deviation Techniques in Decision, Simulation, and Estimation (Wiley, New York, 1990)


Author information

Correspondence to Masahito Hayashi.


Copyright information

© 2017 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hayashi, M. (2017). Information Quantities and Parameter Estimation in Classical Systems. In: Quantum Information Theory. Graduate Texts in Physics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49725-8_2

  • DOI: https://doi.org/10.1007/978-3-662-49725-8_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-49723-4

  • Online ISBN: 978-3-662-49725-8
