
Modulus Computational Entropy

  • Conference paper

Information Theoretic Security (ICITS 2013)

Part of the book series: Lecture Notes in Computer Science, vol. 8317

Abstract

The so-called leakage chain rule is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable \(X\) when an adversary who has already learned some random variables \(Z_{1},\ldots ,Z_{\ell }\) correlated with \(X\) obtains some further information \(Z_{\ell +1}\) about \(X\). Analogously to the information-theoretic case, one might expect that also for the computational variants of entropy the loss depends only on the actual leakage, i.e. on \(Z_{\ell +1}\). Surprisingly, Krenn et al. have recently shown that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in \(|(Z_{1},\ldots ,Z_{\ell })|\). This means that the current standard definitions of computational entropy do not allow us to fully capture leakage that occurred “in the past”, which severely limits the applicability of this notion.
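
For comparison, the information-theoretic chain rule for average min-entropy (a standard fact; see, e.g., Dodis et al. [DORS08]) states that for any additional leakage \(Z_{\ell +1}\):

$$\begin{aligned} \widetilde{\mathbf {H}}_{\infty }\left( X|Z_{1},\ldots ,Z_{\ell +1} \right) \geqslant \widetilde{\mathbf {H}}_{\infty }\left( X|Z_{1},\ldots ,Z_{\ell } \right) - |Z_{\ell +1}|, \end{aligned}$$

so there the loss is bounded by the bit-length \(|Z_{\ell +1}|\) of the new leakage alone, regardless of the history \(Z_{1},\ldots ,Z_{\ell }\).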

As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call the modulus computational entropy, and use it as a technical tool that allows us to prove a chain rule that depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, so far, the weakest known restriction that guarantees that the chain rule for computational entropy holds. As an example application we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.

This work was partly supported by the WELCOME/2010-4/2 grant funded within the framework of the EU Innovative Economy Operational Programme.


Notes

  1.

    We stress that this is a non-trivial result, as the computational entropy of \(X\) given \(Z\) is calculated by distinguishers on \(\{0,1\}^{n+m}\); thus it might happen that even circuits of size \(2^{n}\) are not able to break it.

  2.

    We use only min-entropy in this work. See, however, [VZ12] for a similar definition based on Shannon entropy.

  3.

    The question of whether this can happen was raised in [FR12].

  4.

    Recall that for HILL entropy all kinds of circuits (deterministic boolean, deterministic real-valued, and randomized boolean) are equivalent [FR12]; thus we can abbreviate the notation, writing just \(\mathbf {H}^{{\mathrm {HILL}},s',\epsilon '}\left( X|Z \right) \).

  5.

    Throughout the proofs, we will make use of the following simple Markov-style principle: let \(X\) be a non-negative random variable bounded by \(M\). Then \(X > \frac{1}{2}\mathbf {E}X\) with probability at least \(\frac{1}{2M}\mathbf {E}X\).
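
    For completeness, this principle follows from a simple reverse-Markov argument: splitting \(\mathbf {E}X\) according to whether \(X\) exceeds \(\frac{1}{2}\mathbf {E}X\) gives

    $$\begin{aligned} \mathbf {E}X \leqslant \frac{1}{2}\mathbf {E}X + M\cdot \mathbf {P}\left[ X > \frac{1}{2}\mathbf {E}X \right] , \end{aligned}$$

    and rearranging yields \(\mathbf {P}\left[ X > \frac{1}{2}\mathbf {E}X \right] \geqslant \frac{1}{2M}\mathbf {E}X\).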

  6.

    We use the following version: let \(X_i\) be independent random variables satisfying \(\left| X_i-\mathbf {E}X_i\right| \leqslant 1\) and let \(X=\sum _{i}X_i\). Then \(\mathbf {P}\left[ \left| X-\mathbf {E}X\right| \geqslant \lambda \sigma \right] \leqslant 2\max \left( e^{-\frac{\lambda ^2}{4}},\,e^{-\frac{\lambda \sigma }{2}} \right) \), where \(\sigma ^{2} = {\mathrm {Var}}(X)\).
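
    As a quick sanity check of this inequality, the following Monte Carlo sketch (our own illustration, not part of the paper's proofs) compares the empirical tail of a sum of independent \(\pm 1\) variables with the stated bound:

```python
# Monte Carlo sanity check (illustrative only) of the stated bound
# P[|X - E X| >= lam*sigma] <= 2*max(exp(-lam^2/4), exp(-lam*sigma/2))
# for X a sum of independent variables with |X_i - E X_i| <= 1.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 20_000
# X_i uniform on {-1, +1}: E X_i = 0, |X_i - E X_i| <= 1, Var(X_i) = 1.
samples = rng.choice([-1, 1], size=(trials, n)).sum(axis=1)
sigma = np.sqrt(n)  # standard deviation of X, since Var(X) = n

for lam in (1.0, 2.0, 3.0):
    empirical = np.mean(np.abs(samples) >= lam * sigma)
    bound = 2 * max(np.exp(-lam**2 / 4), np.exp(-lam * sigma / 2))
    print(f"lambda={lam}: empirical tail {empirical:.4f}, bound {bound:.4f}")
```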

References

  1. Barak, B., Shaltiel, R., Wigderson, A.: Computational analogues of entropy. In: Arora, S., Jansen, K., Rolim, J.D.P., Sahai, A. (eds.) RANDOM 2003 and APPROX 2003. LNCS, vol. 2764, pp. 200–215. Springer, Heidelberg (2003)

  2. Chung, K.-M., Kalai, T.Y., Liu, F.-H., Raz, R.: Memory delegation. Cryptol. ePrint Arch. 2011, 273 (2011). http://eprint.iacr.org/

  3. Dodis, Y., Ostrovsky, R., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. SIAM J. Comput. 38(1), 97–139 (2008)

  4. Dziembowski, S., Pietrzak, K.: Leakage-resilient cryptography in the standard model. IACR Cryptol. ePrint Arch. 2008, 240 (2008)

  5. Dodis, Y., Yu, Y.: Overcoming weak expectations. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 1–22. Springer, Heidelberg (2013)

  6. Fuller, B., O’Neill, A., Reyzin, L.: A unified approach to deterministic encryption: new constructions and a connection to computational entropy. Cryptol. ePrint Arch. 2012, 005 (2012). http://eprint.iacr.org/

  7. Fuller, B., Reyzin, L.: Computational entropy and information leakage. Cryptol. ePrint Arch. 2012, 466 (2012). http://eprint.iacr.org/

  8. Gentry, C., Wichs, D.: Separating succinct non-interactive arguments from all falsifiable assumptions. Cryptol. ePrint Arch. 2010, 610 (2010). http://eprint.iacr.org/

  9. Håstad, J., Impagliazzo, R., Levin, L.A., Luby, M.: A pseudorandom generator from any one-way function. SIAM J. Comput. 28(4), 1364–1396 (1999)

  10. Hsiao, C.-Y., Lu, C.-J., Reyzin, L.: Conditional computational entropy, or toward separating pseudoentropy from compressibility. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS, vol. 4515, pp. 169–186. Springer, Heidelberg (2007)

  11. Krenn, S., Pietrzak, K., Wadia, A.: A counterexample to the chain rule for conditional HILL entropy. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 23–39. Springer, Heidelberg (2013)

  12. O’Donnell, R., Guruswami, V.: An intensive introduction to computational complexity theory, University Lecture. http://www.cs.cmu.edu/~odonnell/complexity/ (2009)

  13. Reyzin, L.: Some notions of entropy for cryptography. In: Fehr, S. (ed.) ICITS 2011. LNCS, vol. 6673, pp. 138–142. Springer, Heidelberg (2011)

  14. Reingold, O., Trevisan, L., Tulsiani, M., Vadhan, S.: Dense subsets of pseudorandom sets. In: Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’08, pp. 76–85. IEEE Computer Society, Washington (2008)

  15. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948)

  16. Vadhan, S., Zheng, C.J.: Characterizing pseudoentropy and simplifying pseudorandom generator constructions. In: Proceedings of the 44th Symposium on Theory of Computing, STOC ’12, pp. 817–836. ACM, New York (2012)

  17. Yao, A.C.: Theory and application of trapdoor functions. In: Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, SFCS ’82, pp. 80–91. IEEE Computer Society, Washington (1982)


Acknowledgments

I would like to express special thanks to Stefan Dziembowski and Krzysztof Pietrzak for their helpful suggestions and discussions.

Author information


Correspondence to Maciej Skórski.


Appendices

A Tightness of the Leakage Lemma

Lemma 10

Let \(X\in \{0,1\}^{m}\) be a random variable, let \(f:\,\{0,1\}^{m}\rightarrow \{0,1\}^{n}\) be a deterministic circuit of size \(s\), and let \( \epsilon < \frac{1}{12}\). Then \(\widetilde{\mathbf {H}}^{{\mathrm {Metric}},{\mathrm {det}}\{0,1\},s,\epsilon }_{}\left( f(X)|X\right) < 3 \).

Proof

Consider the following distinguisher \(D\): on input \((y,x)\), where \(x\in \{0,1\}^{m}\) and \(y\in \{0,1\}^{n}\), run \(f(x)\) and return \(1\) iff \(f(x)=y\). Then for every \(x\) we get \(D(f(x),x) = 1\). Let \(Y\) be any random variable over \(\{0,1\}^{n}\) such that \(\widetilde{\mathbf {H}}_{\infty }(Y|X) \geqslant 3\). Then by Lemma 1, with probability \(\frac{2}{3}\) over \(x\leftarrow X\) we have \( \mathbf {H}_{\infty }(Y|X=x) \geqslant 3-\log _{2}(3)\). Since \(D(y,x) = 0\) if \(y\not =f(x)\), for any such \(x\) we have \(\mathbf {E}_{y \leftarrow Y|X=x} D\left( y, x \right) \leqslant 2^{-(3 - \log _2(3))} = \frac{3}{8}\), and thus, with probability \(\frac{2}{3}\) over \(x\leftarrow X\), we get \(\mathbf {E}_{y \leftarrow f(X)|X=x} D\left( y, x \right) - \mathbf {E}_{y \leftarrow Y|X=x} D\left( y, x \right) \geqslant \frac{5}{8}\). Taking the expectation over \(x\leftarrow X\), and bounding the difference from below by \(-1\) on the remaining \(x\), we finally obtain \(\mathbf {E}D( f(X), X )- \mathbf {E}D ( Y,X ) \geqslant \frac{2}{3}\cdot \frac{5}{8} - \frac{1}{3} \cdot 1 = \frac{1}{12} > \epsilon \). Hence \(D\) distinguishes \((f(X),X)\) from \((Y,X)\) for every such \(Y\), which proves the claim.
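
To make the construction concrete, here is a toy instantiation of the distinguisher \(D\) (our own sketch; the hash-based \(f\) and all sizes are hypothetical stand-ins for the size-\(s\) circuit of the lemma):

```python
# Toy version of the distinguisher D from the proof of Lemma 10:
# D(y, x) = 1 iff f(x) = y, so D always accepts the real pair (f(x), x)
# but rarely accepts (y, x) when y has >= 3 bits of min-entropy given x.
import hashlib
import os
import random

OUT_BYTES = 2  # outputs live in {0,1}^16; sizes are illustrative only

def f(x: bytes) -> bytes:
    # hypothetical stand-in for the deterministic size-s circuit
    return hashlib.sha256(x).digest()[:OUT_BYTES]

def D(y: bytes, x: bytes) -> int:
    return int(f(x) == y)

trials = 10_000
xs = [os.urandom(4) for _ in range(trials)]
adv_real = sum(D(f(x), x) for x in xs) / trials  # always 1

# Y uniform over 8 fixed strings, so H_inf(Y | X = x) = 3 for every x.
pool = [os.urandom(OUT_BYTES) for _ in range(8)]
adv_fake = sum(D(random.choice(pool), x) for x in xs) / trials

print(adv_real - adv_fake)  # advantage close to 1, far above eps < 1/12
```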

We use this lemma to show that the estimate in Lemma 3 cannot be improved:

Theorem 10

(Tightness of the estimate in Lemma 3) Suppose that there exists an exponentially secure pseudorandom generator \(f\). Then for every \(m\) and \(C>0\) we have \( \mathbf {H}^{{\mathrm {HILL}},{\mathrm {rand}}\{0,1\}, 2^{\mathcal {O}\left( m \right) },\frac{1}{2^{\mathcal {O}\left( m \right) }}}_{}\left( f\left( U_m\right) \right) \geqslant m+C\) and simultaneously \( \widetilde{\mathbf {H}}^{{\mathrm {Metric}},{\mathrm {det}}\{0,1\}, {\mathrm {poly}}(m),\frac{1}{{\mathrm {poly}}(m)}}_{}\left( \left. f\left( U_m\right) \right| U_m\right) \leqslant 3\).

Proof

The first inequality follows from the definition of an exponentially secure pseudorandom generator. The second inequality is implied by Lemma 10.

B Metric Entropy vs Different Kinds of Distinguishers

Below we prove the equivalence between boolean and real-valued distinguishers.

Theorem 11

For any random variables \(X,Z\) over \(\{0,1\}^{n},\{0,1\}^{m}\) we have \( \mathbf {H}_{}^{{\mathrm {Metric}},\mathrm {det}[0,1], s',\epsilon }(X|Z) = \mathbf {H}_{}^{{\mathrm {Metric}},\mathrm {det}\{0,1\}, s,\epsilon }(X|Z) \), where \(s'\approx s\).

Proof

We only need to prove \(\mathbf {H}^{{\mathrm {Metric}},{\mathrm {det}}[0,1], s',\epsilon }_{}\left( X|Z\right) \geqslant \mathbf {H}^{{\mathrm {Metric}},\mathrm {det}\{0,1\}, s,\epsilon }_{}\left( X|Z\right) \), as the other direction is trivial (because the class \(({\mathrm {det}}[0,1],s)\) is larger than \(({\mathrm {det}}\{0,1\},s)\)). Suppose that \(\mathbf {H}^{{\mathrm {Metric}},{\mathrm {det}}[0,1],s,\epsilon }_{}\left( X|Z\right) <k\). Then there is some \(D\) such that for all \(Y\) satisfying \(\mathbf {H}_{\infty }\left( Y|Z\right) \geqslant k\) we have \( \left| \mathbf {E}_{(x,z)\leftarrow (X,Z)}D(x,z) - \mathbf {E}_{(x,z)\leftarrow (Y,Z)}D(x,z) \right| \geqslant \epsilon \). Applying the same reasoning as in Theorem 6 we can replace \(D\) with \(D'\), which is equal either to \(D\) or to \(D^{c}\), obtaining, for all distributions \(Y\) with \(\mathbf {H}_{\infty }\left( Y|Z\right) \geqslant k\), the following:

$$\begin{aligned} \mathbf {E}D'(X,Z) - \mathbf {E}D'(Y,Z) \geqslant \epsilon . \end{aligned}$$

Consider the distribution \(\left( Y^{+},Z\right) \) minimizing the left side of the above inequality. Equivalently, it maximizes the expected value of \(D'\) under the condition \(\mathbf {H}_{\infty }\left( Y|Z\right) \geqslant k\). Since this condition means that \(\mathbf {H}_{\infty }\left( \left. Y^{+}\right| Z=z\right) \geqslant k\) for all \(z\), we conclude that \(\left. Y^{+}\right| Z=z\), for fixed \(z\), is distributed uniformly over the \(2^k\) values of \(x\) giving the greatest values of \(D'(x,z)\). Calculating the expected values in the last inequality via integration of the tail (the identity \(\mathbf {E}W = \int _{0}^{1}\mathbf {P}\left[ W > t\right] \text{ d }t\), valid for any \([0,1]\)-valued \(W\)) yields

$$\begin{aligned} \int \limits _{t\in [0,1]}\mathbf {P}_{(x,z)\leftarrow (X,Z)}\left[ D'(x,z) > t\right] \text{ d }t - \int \limits _{t\in [0,1]}\mathbf {P}_{(x,z)\leftarrow \left( Y^{+},Z\right) } \left[ D'(x,z) > t\right] \text{ d }t \geqslant \epsilon \end{aligned}$$

therefore for some number \(t\in (0,1)\), the following holds:

$$\begin{aligned} \mathbf {P}_{(x,z)\leftarrow (X,Z)}\left[ D'(x,z) > t\right] \geqslant \mathbf {P}_{(x,z)\leftarrow \left( Y^{+},Z\right) } \left[ D'(x,z) > t\right] + \epsilon . \end{aligned}$$

Let \(D''\) be a \(\{0,1\}\)-distinguisher that for every \((x,z)\) outputs \(1\) iff \(D'(x,z) > t\). Clearly \(D''\) is of size \(s+\mathcal {O}(1)\) and satisfies

$$\begin{aligned} \mathbf {E}_{(x,z)\leftarrow (X,Z)}D''(x,z) \geqslant \mathbf {E}_{(x,z)\leftarrow \left( Y^{+},Z\right) }D''(x,z) + \epsilon . \end{aligned}$$

We assumed that \(\left( Y^{+},Z\right) \) maximizes \(\mathbf {E}D'(Y,Z)\). Now we argue that \(\left( Y^{+},Z\right) \) is also maximal for \(D''\). We know that for every \(z\) the distribution \(\left. Y^{+}\right| Z=z\) is flat over the set \(\mathrm {Max}_{D'(\cdot ,z)}^{k}\) of the \(2^k\) values of \(x\) corresponding to the largest values of \(D'(x,z)\). Since \(D''\) is a monotone threshold of \(D'\), it is easy to see that \(\mathrm {Max}_{D'(\cdot ,z)}^{k} = \mathrm {Max}_{D''(\cdot ,z)}^{k}\). Therefore, we have in fact shown that

$$\begin{aligned} \mathbf {E}_{(x,z)\leftarrow (X,Z)}D''(x,z) - \max \limits _{(Y,Z):\,\mathbf {H}_{\infty }(Y|Z)\geqslant k}\mathbf {E}_{(x,z)\leftarrow (Y,Z)}D''(x,z) \geqslant \epsilon , \end{aligned}$$

which means exactly that \(\mathbf {H}^{{\mathrm {Metric}},{\mathrm {det}}\{0,1\},s',\epsilon }_{}\left( X|Z\right) <k\).
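
The heart of the above proof is the averaging step that turns a real-valued distinguisher into a threshold (boolean) one. A minimal numerical sketch of that step (our own illustration, with hypothetical score distributions standing in for \(D'\) on the two pairs) is:

```python
# Sketch of the real-to-boolean conversion in Theorem 11: average the
# tail probabilities P[D' > t] over t (the layer-cake identity), then
# fix one threshold t whose boolean test keeps the advantage eps.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical [0,1]-valued scores standing in for D' on (X,Z) and (Y+,Z).
scores_real = rng.beta(5, 2, size=100_000)
scores_fake = rng.beta(2, 5, size=100_000)

eps = scores_real.mean() - scores_fake.mean()  # advantage of D'
ts = np.linspace(0, 1, 1001)
gaps = [(scores_real > t).mean() - (scores_fake > t).mean() for t in ts]
best = int(np.argmax(gaps))
# Since the average of the gaps over t equals eps, the best gap is >= eps.
print(f"eps={eps:.3f}, t={ts[best]:.3f}, boolean gap={gaps[best]:.3f}")
```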


Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Skórski, M. (2014). Modulus Computational Entropy. In: Padró, C. (eds) Information Theoretic Security. ICITS 2013. Lecture Notes in Computer Science, vol. 8317. Springer, Cham. https://doi.org/10.1007/978-3-319-04268-8_11

  • DOI: https://doi.org/10.1007/978-3-319-04268-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-04267-1

  • Online ISBN: 978-3-319-04268-8

  • eBook Packages: Computer Science (R0)
