Shannon Entropy Versus Rényi Entropy from a Cryptographic Viewpoint

  • Maciej Skórski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9496)


We provide a new inequality linking two important entropy notions: Shannon entropy \(H_1\) and collision entropy \(H_2\). Our formula gives the worst possible amount of collision entropy in a probability distribution when its Shannon entropy is fixed. While in practice Shannon entropy is easier to evaluate than other entropy notions, it is folklore that it does not provide a good estimate of randomness quality from a cryptographic viewpoint, except in very special settings. Our results and techniques put this folklore into quantitative form, allowing us to precisely answer the following questions:
  (a) How accurately does Shannon entropy estimate uniformity? Concretely, if the Shannon entropy of an n-bit source X is \(n-\epsilon \), where \(\epsilon \) is a small number, can we conclude that X is close to uniform? This question is motivated by uniformity tests based on entropy estimators, such as Maurer’s Universal Test.

  (b) How much randomness can we extract from a source with high Shannon entropy? That is, if the Shannon entropy of an n-bit source X is \(n-O(1)\), at least how many almost-uniform bits can we retrieve? This question is motivated by the folklore upper bound \(O(\log (n))\).

  (c) Can we use high Shannon entropy for key derivation? More precisely, if we have an n-bit source X of Shannon entropy \(n-O(1)\), can we use it as a secure key for some applications, such as square-secure applications? This is motivated by recent improvements in key derivation obtained by Barak et al. (CRYPTO’11) and Dodis et al. (TCC’14), which consider keys with some entropy deficiency.
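To make question (a) concrete, here is a small numerical sketch in plain Python (the distribution and its parameters are illustrative choices, not the paper’s extremal construction): a 10-bit source whose Shannon entropy is within about half a bit of full, yet which sits at statistical distance roughly 0.1 from uniform and whose collision entropy is far below 10 bits.

```python
import math

def shannon_entropy(p):
    # H_1(X) = -sum_i p_i * log2(p_i)
    return -sum(x * math.log2(x) for x in p if x > 0)

def collision_entropy(p):
    # H_2(X) = -log2(sum_i p_i^2)
    return -math.log2(sum(x * x for x in p))

def statistical_distance(p, q):
    # Total variation distance between two distributions on the same support
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

n = 10
N = 2 ** n
uniform = [1.0 / N] * N
# A source with one "heavy" atom of mass 0.1; the remaining mass is uniform.
spiky = [0.1] + [0.9 / (N - 1)] * (N - 1)

print(f"H1 = {shannon_entropy(spiky):.3f}")   # ~9.47 bits: only ~0.53 below full
print(f"H2 = {collision_entropy(spiky):.3f}")  # ~6.53 bits: far below n = 10
print(f"TV = {statistical_distance(spiky, uniform):.3f}")  # ~0.099: not close to uniform
```

So a Shannon-entropy estimate of \(n-\epsilon\) with \(\epsilon \approx 0.5\) is compatible with a source that any statistical test would distinguish from uniform with constant advantage, which is exactly the gap the paper quantifies.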

Our approach involves convex optimization techniques, which yield the shape of the “worst” distribution, and the use of the Lambert W function, by which we resolve equations coming from Shannon entropy constraints. We believe these techniques may be of independent interest elsewhere, particularly for studying Shannon entropy under constraints.
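As a minimal illustration of the Lambert W ingredient (not the paper’s actual derivation): W is the inverse of \(w \mapsto w e^w\), and equations of the form \(w e^w = x\), which arise when inverting entropy-style expressions such as \(p\log (1/p)\), can be solved numerically. The `lambert_w` helper below is a hypothetical Newton-iteration sketch for the principal branch on \(x \ge 0\); in practice one would call a library routine such as `scipy.special.lambertw`.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W_0: solve w * exp(w) = x for x >= 0 by Newton's method."""
    w = math.log1p(x)  # rough initial guess, adequate for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))  # Newton step for f(w) = w*e^w - x
        w -= step
        if abs(step) < tol:
            break
    return w

# Sanity check: W(x) * e^{W(x)} should recover x.
for x in (0.5, 1.0, math.e, 10.0):
    w = lambert_w(x)
    assert abs(w * math.exp(w) - x) < 1e-9

print(round(lambert_w(math.e), 6))  # → 1.0, since 1 * e^1 = e
```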


Keywords: Shannon entropy · Rényi entropy · Smooth Rényi entropy · Min-entropy · Lambert W function



The author thanks anonymous reviewers for their valuable comments.


References

  1. [AIS11] A proposal for: functionality classes for random number generators. Technical report AIS 30, Bonn, Germany, September 2011
  2. [AOST14] Acharya, J., Orlitsky, A., Suresh, A.T., Tyagi, H.: The complexity of estimating Rényi entropy. CoRR abs/1408.1000 (2014)
  3. [BDK+11] Barak, B., Dodis, Y., Krawczyk, H., Pereira, O., Pietrzak, K., Standaert, F.-X., Yu, Y.: Leftover hash lemma, revisited. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 1–20. Springer, Heidelberg (2011)
  4. [BK12] Barker, E.B., Kelsey, J.M.: SP 800-90A: Recommendation for random number generation using deterministic random bit generators. Technical report, Gaithersburg, MD, United States (2012)
  5. [BKMS09] Bouda, J., Krhovjak, J., Matyas, V., Svenda, P.: Towards true random number generation in mobile environments. In: Jøsang, A., Maseng, T., Knapskog, S.J. (eds.) NordSec 2009. LNCS, vol. 5838, pp. 179–189. Springer, Heidelberg (2009)
  6. [BL05] Bucci, M., Luzzi, R.: Design of testable random bit generators. In: Rao, J.R., Sunar, B. (eds.) CHES 2005. LNCS, vol. 3659, pp. 147–156. Springer, Heidelberg (2005)
  7. [BST03] Barak, B., Shaltiel, R., Tromer, E.: True random number generators secure in a changing environment. In: Walter, C.D., Koç, Ç.K., Paar, C. (eds.) CHES 2003. LNCS, vol. 2779, pp. 166–180. Springer, Heidelberg (2003)
  8. [Cac97] Cachin, C.: Smooth entropy and Rényi entropy. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 193–208. Springer, Heidelberg (1997)
  9. [Cor99] Coron, J.-S.: On the security of random sources. In: Imai, H., Zheng, Y. (eds.) PKC 1999. LNCS, vol. 1560, p. 29. Springer, Heidelberg (1999)
  10. [CW79] Carter, J.L., Wegman, M.N.: Universal classes of hash functions. J. Comput. Syst. Sci. 18(2), 143–154 (1979)
  11. [DPR+13] Dodis, Y., Pointcheval, D., Ruhault, S., Vergnaud, D., Wichs, D.: Security analysis of pseudo-random number generators with input: /dev/random is not robust. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, CCS 2013, pp. 647–658. ACM, New York (2013)
  12. [DY13] Dodis, Y., Yu, Y.: Overcoming weak expectations. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 1–22. Springer, Heidelberg (2013)
  13. [HH08] Hoorfar, A., Hassani, M.: Inequalities on the Lambert W function and hyperpower function. J. Inequal. Pure Appl. Math. 9(2) (2008)
  14. [HILL88] Håstad, J., Impagliazzo, R., Levin, L.A., Luby, M.: Pseudo-random generation from one-way functions. In: Proceedings of the 20th STOC, pp. 12–24 (1988)
  15. [HILL99] Håstad, J., Impagliazzo, R., Levin, L.A., Luby, M.: A pseudorandom generator from any one-way function. SIAM J. Comput. 28(4), 1364–1396 (1999)
  16. [Hol06] Holenstein, T.: Pseudorandom generators from one-way functions: a simple construction for any hardness. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 443–461. Springer, Heidelberg (2006)
  17. [Hol11] Holenstein, T.: On the randomness of repeated experiment
  18. [LPR11] Lauradoux, C., Ponge, J., Röck, A.: Online entropy estimation for non-binary sources and applications on iPhone. Rapport de recherche, Inria (2011)
  19. [Mau92] Maurer, U.: A universal statistical test for random bit generators. J. Cryptology 5, 89–105 (1992)
  20. [NZ96] Nisan, N., Zuckerman, D.: Randomness is linear in space. J. Comput. Syst. Sci. 52(1), 43–52 (1996)
  21. [RTS00] Radhakrishnan, J., Ta-Shma, A.: Bounds for dispersers, extractors, and depth-two superconcentrators. SIAM J. Discrete Math. 13 (2000)
  22. [RW04] Renner, R., Wolf, S.: Smooth Rényi entropy and applications. In: Proceedings of the International Symposium on Information Theory, ISIT 2004, p. 232. IEEE (2004)
  23. [RW05] Renner, R.S., Wolf, S.: Simple and tight bounds for information reconciliation and privacy amplification. In: Roy, B. (ed.) ASIACRYPT 2005. LNCS, vol. 3788, pp. 199–216. Springer, Heidelberg (2005)
  24. [Sha48] Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948)
  25. [Sha11] Shaltiel, R.: An introduction to randomness extractors. In: Aceto, L., Henzinger, M., Sgall, J. (eds.) ICALP 2011, Part II. LNCS, vol. 6756, pp. 21–41. Springer, Heidelberg (2011)
  26. [Shi15] Shikata, J.: Design and analysis of information-theoretically secure authentication codes with non-uniformly random keys. IACR Cryptology ePrint Arch. 2015, 250 (2015)
  27. [VSH11] Voris, J., Saxena, N., Halevi, T.: Accelerometers and randomness: perfect together. In: Proceedings of the Fourth ACM Conference on Wireless Network Security, WiSec 2011, pp. 115–126. ACM, New York (2011)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Cryptology and Data Security Group, University of Warsaw, Warsaw, Poland
