Secure key generation from biased PUFs: extended version

Abstract

When the applied PUF in a PUF-based key generator does not produce full entropy responses, information about the derived key material is leaked by code-offset helper data. If the PUF’s entropy level is too low, the PUF-derived key is even fully disclosed by the helper data. In this work we analyze this entropy leakage, and provide several solutions for preventing leakage for PUFs suffering from i.i.d. biased bits. Our methods pose no limit on the amount of PUF bias that can be tolerated for achieving secure key generation, with only a moderate increase in the required PUF size. This solves an important open problem in this field. In addition, we also consider the reusability of PUF-based key generators and present a variant of our solution which retains the reusability property. In an exemplary application of these methods, we are able to derive a secure 128-bit key from a 15 %-noisy and 25 %-biased PUF requiring only 4890 PUF bits for the non-reusable variant, or 7392 PUF bits for the reusable variant.



Notes

  1.

    Relaxed, in the sense that practical constructions often consider the entropy of the PUF responses instead of the more pessimistic min-entropy, and use cryptographic key derivation functions to generate secure keys instead of strong randomness extractors.

  2.

    The construction in Fig. 1 is technically a variant of a fuzzy commitment scheme as proposed in [9], rather than a fuzzy extractor as classically described in [5]. However, for the treatment in this work, they can be considered equivalent.

  3.

    In this work, unpredictability of random variables is expressed by Shannon entropy, as is done in much earlier work on this subject, e.g., [10]. Note that Shannon entropy serves as a lower bound for average guesswork [23]. For a stronger (but less practical) provable security notion, the more pessimistic min-entropy measure should be used. We express entropy and mutual information in bits.

  4.

    Note that \(H(X|W) = H(S|W)\), see Appendix 1, Corollary 2. This shows the equivalence in security (in terms of entropy) for a key generated from S or X.

  5.

    E.g., a variant thereof appeared before in an early version of [25].

  6.

    This has led to some confusion and occasional misinterpretations, i.e., under- or overestimations of the leakage. A discussion of this can be found, e.g., in [3].

  7.

    Note that in particular for strong bias this entropy bound even becomes negative, making it absolutely clear that this is a pessimistic lower bound.

  8.

    A similar result for min-entropy is given in [3].

  9.

    Only \(p \le 0.5\) is shown; all shown entropy-vs-bias graphs are symmetrical around \(p=0.5\).

  10.

    Efficient in terms of PUF size, while following the design of Fig. 1 and using only a single enrollment measurement per derived key.

  11.

    [14] aims for a seed of 171 bits, but this is rounded up to 180 for practicality. The need for having 171-bit seeds originated in [7], but the reasoning there is not fully clear.

  12.

    Since bits of X are assumed i.i.d., which particular bits from X are considered for the entropy calculation is of no importance.

  13.

    \(X_{1:n_1}\mathbf {H_{rep}}^\mathbf {\top }\) and \(X_{1:n_2}\mathbf {H_2}^\mathbf {\top }\) are not necessarily independent.

  14.

    Note that this does not directly imply that the key becomes predictable, just that it is potentially less unpredictable than it should be according to its length.

  15.

    Note that we cannot increase beyond \(r=31\) without also increasing the length of the repetition code; otherwise, the failure rate becomes too large.

  16.

    Von Neumann(-like) extractors have a small effect on bit error rate, as explained in Sect. 4.1.

  17.

    We denote the probability mass and cumulative distribution function of the binomial distribution with parameters n and p, respectively, as \(f_{\text {bino}}\left( x;n,p\right) \) and \(F_{\text {bino}}\left( x;n,p\right) \).

  18.

    This is just one possible exemplary representation of the information contained in \((W,D)\).

  19.

    The graphs of Fig. 9b are generic and do not depend on the selected parameters. They are hence generically usable; the only assumption is that the considered PUF is compatible with the stochastic model used in [18].

  20.

    Failure rates differ slightly from the results in Table 1, which were extrapolated from [14]. For an objective comparison, the results of Table 2 are based on new simulations, with the Hackett Golay decoder from [14] implemented in Matlab. The single Golay decoding failure rate \(p_{\text {Golay-fail}}\) is estimated as the 95 %-confidence upper bound from the simulations; the actual values for \(p_{\text {Golay-fail}}\) are hence likely smaller. The total reconstruction failure rate is computed as \(1-(1-p_{\text {Golay-fail}})^{r}\).

  21.

    The same can also be derived from the observation in [5] that secure sketches based on the code-offset construction and the syndrome construction are information-theoretically equivalent.

  22.

    It was demonstrated that these distributions are very realistic, fitting experimental PUF data from different PUF constructions with high accuracy. We only consider the fixed-temperature case from [18]. A similar derivation can be done for varying temperatures.

References

  1.

    Bösch, C., Guajardo, J., Sadeghi, A.R., Shokrollahi, J., Tuyls, P.: Efficient helper data key extractor on FPGAs. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 181–197 (2008)

  2.

    Boyen, X.: Reusable cryptographic fuzzy extractors. In: ACM Conference on Computer and Communications Security—CCS 2004, pp. 82–91. ACM Press, New York (2004)

  3.

    Delvaux, J., Gu, D., Schellekens, D., Verbauwhede, I.: Helper data algorithms for PUF-based key generation: overview and analysis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) (2014)

  4.

    Delvaux, J., Verbauwhede, I.: Attacking PUF-based pattern matching key generators via helper data manipulation. In: RSA Conference Cryptographers’ Track (CT-RSA), pp. 106–131 (2014)

  5.

    Dodis, Y., Ostrovsky, R., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. SIAM J. Comput. 38(1), 97–139 (2008)


  6.

    Dodis, Y., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. In: Advances in Cryptology-EUROCRYPT 2004. Lecture Notes in Computer Science, vol. 3027, pp. 523–540. Springer, Berlin Heidelberg (2004)

  7.

    Guajardo, J., Kumar, S.S., Schrijen, G.J., Tuyls, P.: FPGA intrinsic PUFs and their use for IP protection. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 63–80 (2007)

  8.

    Ignatenko, T., Willems, F.: Information leakage in fuzzy commitment schemes. IEEE Trans. Inf. Forensics Secur. 5(2), 337–348 (2010)


  9.

    Juels, A., Wattenberg, M.: A fuzzy commitment scheme. In: Proceedings of the 6th ACM Conference on Computer and Communications Security. CCS ’99, pp. 28–36. ACM, New York (1999)

  10.

    Katzenbeisser, S., Kocabas, U., Rozic, V., Sadeghi, A.R., Verbauwhede, I., Wachsmann, C.: PUFs: myth, fact or busted? A security evaluation of physically unclonable functions (PUFs) cast in silicon. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 283–301 (2012)

  11.

    Koeberl, P., Li, J., Maes, R., Rajan, A., Vishik, C., Wójcik, M.: Evaluation of a PUF device authentication scheme on a discrete 0.13 μm SRAM. In: Trusted Systems-INTRUST. Lecture Notes in Computer Science, vol. 7222, pp. 271–288. Springer, Berlin Heidelberg (2011)

  12.

    Koeberl, P., Li, J., Rajan, A., Wu, W.: Entropy loss in PUF-based key generation schemes: The repetition code pitfall. In: IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), pp. 44–49 (2014)

  13.

    Krawczyk, H.: Cryptographic extraction and key derivation: the HKDF scheme. In: Advances in Cryptology CRYPTO 2010. Lecture Notes in Computer Science, vol. 6223, pp. 631–648. Springer, Berlin Heidelberg (2010)

  14.

    van der Leest, V., Preneel, B., van der Sluis, E.: Soft decision error correction for compact memory-based PUFs using a single enrollment. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 268–282 (2012)

  15.

    Chen, L.: NIST Special Publication 800-108: Recommendation for Key Derivation Using Pseudorandom Functions (Revised) (2009)

  16.

    Chen, L.: NIST Special Publication 800-56C: Recommendation for Key Derivation through Extraction-then-Expansion (2011)

  17.

    Lim, D., Lee, J., Gassend, B., Suh, G., van Dijk, M., Devadas, S.: Extracting secret keys from integrated circuits. IEEE Trans. Very Large Scale Integr. VLSI Syst. 13(10), 1200–1205 (2005)


  18.

    Maes, R.: An accurate probabilistic reliability model for silicon PUFs. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 73–89 (2013)

  19.

    Maes, R.: Physically Unclonable Functions: Constructions, Properties and Applications. Springer, New York (2013)


  20.

    Maes, R., Van der Leest, V., Van der Sluis, E., Willems, F.: Secure key generation from biased PUFs. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 517–534 (2015)

  21.

    Maes, R., Tuyls, P., Verbauwhede, I.: Low-overhead implementation of a soft decision helper data algorithm for SRAM PUFs. In: Workshop on Cryptographic Hardware and Embedded Systems (CHES), pp. 332–347 (2009)

  22.

    Maes, R., Van Herrewege, A., Verbauwhede, I.: PUFKY: a fully functional PUF-based cryptographic key generator. In: Cryptographic Hardware and Embedded Systems CHES 2012. Lecture Notes in Computer Science, vol. 7428, pp. 302–319. Springer, Berlin Heidelberg (2012)

  23.

    Massey, J.L.: Guessing and entropy. In: IEEE International Symposium on Information Theory (ISIT), p. 204 (1994)

  24.

    von Neumann, J.: Various techniques used in connection with random digits. In: Applied Math Series 12. National Bureau of Standards, USA (1951)

  25.

    Skoric, B., de Vreede, N.: The Spammed Code Offset Method. Cryptology ePrint Archive, Report 2013/527 (2013). http://eprint.iacr.org/

  26.

    Yu, M.D., M'Raihi, D., Sowell, R., Devadas, S.: Lightweight and secure PUF key storage using limits of machine learning. In: Cryptographic Hardware and Embedded Systems CHES 2011. Lecture Notes in Computer Science, vol. 6917, pp. 358–373. Springer, Berlin Heidelberg (2011)


Author information


Corresponding author

Correspondence to Roel Maes.

Additional information

This is an extended journal version of a paper [20] which previously appeared in the proceedings of CHES 2015.

This research was done in the PATRIOT project which is funded by the Eurostars-2 joint programme with co-funding from the EU Horizon 2020 research and innovation programme (http://www.patriot-project.eu).

Appendices

Appendix 1: Information leakage of code-offset key generators

Setting  Let X be a binary PUF response and let C be a random code word from a binary linear block code, i.e., \(C = S\mathbf {G}\) with S a uniformly random seed (\(H(S) = |S| = k\)) and \(\mathbf {G}\) the generator matrix of the code. Let \(W = X \oplus C\) and let V be the syndrome of X under mentioned linear block code, i.e., \(V = X\mathbf {H}^\mathbf {\top }\) with \(\mathbf {H}\) the parity-check matrix of the code.

Corollary 1

In the given setting, it holds that:

$$\begin{aligned} p(x|w)= & {} p(x|v=w\mathbf {H}^\mathbf {\top }), \end{aligned}$$
(10)
$$\begin{aligned} p(v)= & {} \sum _{w:w\mathbf {H}^\mathbf {\top }=v} p(w). \end{aligned}$$
(11)

Proof

Note that the following holds since \(\mathbf {G}\) and \(\mathbf {H}\) determine a linear block code:

$$\begin{aligned} x\mathbf {H}^\mathbf {\top }= w\mathbf {H}^\mathbf {\top }\Longleftrightarrow \exists s:x=w \oplus s\mathbf {G}. \end{aligned}$$

Based on this equivalence, (10) is shown as follows:

$$\begin{aligned} p(x|w)= & {} \left\{ \begin{array}{l@{\quad }l} \frac{2^{-k}p(x)}{\sum _{t}2^{-k}p(w \oplus t\mathbf {G})},&{} \text { if } \exists s:x=w \oplus s\mathbf {G},\\ 0,&{} \text {for all other } x. \end{array} \right. \\= & {} \left\{ \begin{array}{l@{\quad }l} \frac{p(x)}{\sum _{u:u\mathbf {H}^\mathbf {\top }=w\mathbf {H}^\mathbf {\top }}p(u)}, &{} \text {if } x\mathbf {H}^\mathbf {\top }= w\mathbf {H}^\mathbf {\top }, \\ 0, &{} \text {for all other } x. \end{array} \right. \\= & {} p(x|w\mathbf {H}^\mathbf {\top }) = p(x|x\mathbf {H}^\mathbf {\top }) = p(x|v). \end{aligned}$$

Since \(V = X\mathbf {H}^\mathbf {\top }= W\mathbf {H}^\mathbf {\top }\) is a deterministic function of W, it follows that \(p(v|w) = 1\) if \(v = w\mathbf {H}^\mathbf {\top }\) and zero for all other v. From this, (11) follows since \(p(v) = \sum _{w}p(v,w) = \sum _{w}p(w)p(v|w) = \sum _{w:w\mathbf {H}^\mathbf {\top }= v} p(w)\). \(\square \)

Corollary 2

In the given setting, it holds that:

$$\begin{aligned} H(X|W)= & {} H(X|V), \end{aligned}$$
(12)
$$\begin{aligned} I\left( X;W\right)= & {} I\left( X;V\right) , \end{aligned}$$
(13)
$$\begin{aligned} H(X|W)= & {} H(S|W), \end{aligned}$$
(14)
$$\begin{aligned} H(X|V)= & {} H(X) - H(V). \end{aligned}$$
(15)

Proof

Equation (12) follows from the application of (10) and (11) on the definition of \(H(X|W)\), and from the observation that \(\sum _{w}\) can be expanded as \(\sum _{v} \sum _{w:w\mathbf {H}^\mathbf {\top }= v}\), see below:

$$\begin{aligned} H(X|W)&{\mathop {=}\limits ^{\text {def}}} -\sum _{w}p(w)\sum _{x}p(x|w)\log _2p(x|w) \\= & {} -\sum _{v}\sum _{w:w\mathbf {H}^\mathbf {\top }=v}p(w)\sum _{x}p(x|w)\log _2p(x|w) \\= & {} -\sum _{v}p(v)\sum _{x}p(x|v)\log _2p(x|v) \\= & {} H(X|V). \end{aligned}$$

Equation (13) then follows straightforwardly from (12). Equation (14) follows from the fact that, given W, X is fully determined by S and vice versa, or \(H(X|W) = H(X \oplus W|W) = H(C|W) = H(S|W)\). Finally, (15) follows from the fact that V is fully determined by X and hence \(H(V|X) = 0\). Consequently, \(H(X,V) = H(X)\) and \(H(X|V) = H(X) - H(V)\). \(\square \)

Informally, (13) shows that W and V disclose exactly the same amount of information about X, despite the fact that W is longer than V for typical error-correcting codes (see Note 21).
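This equivalence is easy to verify by brute force on a toy instance. The following sketch is our own illustration, not from the paper; it uses the \([3,1]\) repetition code and a 30 %-biased X, and confirms numerically that \(H(X|W) = H(X|V)\):

```python
from itertools import product
from collections import defaultdict
from math import log2

# Toy instance: the [n, k] = [3, 1] repetition code.
n, k, p = 3, 1, 0.3  # p is the bias of the i.i.d. PUF response bits

def encode(s):                     # C = S G with G = [1 1 1]
    return (s[0], s[0], s[0])

def syndrome(x):                   # V = X H^T with H = [1 1 0; 1 0 1]
    return (x[0] ^ x[1], x[0] ^ x[2])

def px(x):                         # p(x) = p^HW(x) (1-p)^(n-HW(x))
    hw = sum(x)
    return p**hw * (1 - p)**(n - hw)

# Joint distributions p(x, w) (with S uniform on {0,1}^k) and p(x, v).
pxw, pxv = defaultdict(float), defaultdict(float)
for x in product((0, 1), repeat=n):
    for s in product((0, 1), repeat=k):
        w = tuple(xi ^ ci for xi, ci in zip(x, encode(s)))
        pxw[(x, w)] += px(x) * 2**(-k)
    pxv[(x, syndrome(x))] += px(x)

def cond_entropy(joint):
    """H(X|Y) = -sum_{x,y} p(x,y) log2 p(x|y)."""
    marg = defaultdict(float)
    for (x, y), pr in joint.items():
        marg[y] += pr
    return -sum(pr * log2(pr / marg[y]) for (x, y), pr in joint.items())

H_X_given_W = cond_entropy(pxw)
H_X_given_V = cond_entropy(pxv)
assert abs(H_X_given_W - H_X_given_V) < 1e-9   # H(X|W) = H(X|V)
```

The same check carries over to any linear block code by replacing `encode` and `syndrome` with the corresponding \(\mathbf {G}\) and \(\mathbf {H}\) maps.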

Proof (Theorem 1)

The derivation of (2) from Theorem 1 now follows straightforwardly by writing \(I\left( S;W\right) = H(S) - H(S|W)\), considering that \(H(S)=k\) and subsequently applying the equalities (14), (12) and (15) from Corollary 2. \(\square \)
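Written out in full, the chain of equalities used in this proof is:

$$\begin{aligned} I\left( S;W\right)&= H(S) - H(S|W) = k - H(S|W) \\&\mathop {=}\limits ^{(14)} k - H(X|W) \mathop {=}\limits ^{(12)} k - H(X|V) \\&\mathop {=}\limits ^{(15)} k - H(X) + H(V). \end{aligned}$$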

Appendix 2: Repetition code syndrome entropy

Let \(Z = X\mathbf {H_{rep}}^\mathbf {\top }\). We need to derive an expression for the distribution of Z given the known distribution of X (i.i.d. but p-biased bits). Note that the distribution of X is given by:

$$\begin{aligned} p(x) = p^{\text {HW}(x)}(1-p)^{n-\text {HW}(x)}, \end{aligned}$$

with \(\text {HW}(x)\) the Hamming weight of x.

For repetition codes, a parity-check matrix takes the following form:

$$\begin{aligned} \mathbf {H_{rep}}= \left[ \mathbf {1_{(n-1) \times 1}} \, | \, \mathbf {I_{(n-1)}} \right] , \end{aligned}$$

hence for each value of z there are two values of x meeting the equation \(z = x\mathbf {H_{rep}}^\mathbf {\top }\), being \(x_1 = [0, z]\) and \(x_2 = [1, \overline{z}]\). We can now write:

$$\begin{aligned} p(z)&= p(x = [0, z]) + p(x = [1, \overline{z}]), \\&= p^{\text {HW}(z)}(1-p)^{n-\text {HW}(z)} + (1-p)^{\text {HW}(z)}p^{n-\text {HW}(z)}, \\&\mathop {=}\limits ^{\text {def}}f(\text {HW}(z);n,p). \end{aligned}$$

From the distribution of Z, entropy can now be written as:

$$\begin{aligned} H(Z)&\mathop {=}\limits ^{\text {def}}-\sum _{z} p(z) \log _2 p(z), \\&= -\sum _{t=0}^{n-1} \sum _{z:\text {HW}(z)=t} f(t;n,p) \log _2 f(t;n,p) , \\&= -\sum _{t=0}^{n-1} \binom{n-1}{t} f(t;n,p) \log _2 f(t;n,p). \end{aligned}$$
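This closed-form expression is straightforward to evaluate numerically. The following sketch is our own illustration (function names are not from the paper):

```python
from math import comb, log2

def f_rep(t, n, p):
    """f(HW(z); n, p): probability of a syndrome z with Hamming weight t
    of the [n, 1] repetition code, for i.i.d. p-biased response bits."""
    return p**t * (1 - p)**(n - t) + (1 - p)**t * p**(n - t)

def syndrome_entropy(n, p):
    """H(Z) = -sum_{t=0}^{n-1} C(n-1, t) f(t; n, p) log2 f(t; n, p)."""
    return -sum(comb(n - 1, t) * f_rep(t, n, p) * log2(f_rep(t, n, p))
                for t in range(n))

# Sanity check: for an unbiased PUF (p = 0.5), Z is uniform on n-1 bits.
print(syndrome_entropy(3, 0.5))   # 2.0
print(syndrome_entropy(3, 0.3))   # < 2: bias reduces the syndrome entropy
```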

Appendix 3: Information leakage of code-offset key generators with CVN debiasing

Theorem 2

Let X be an n-bit p-biased (i.i.d.) PUF response, let Y be the output of a classic von Neumann extractor applied on X and let D be the corresponding selection bits (as shown in Fig. 4). Let \(C = S\mathbf {G}\) be a random \(\ell \)-bit code word from a binary linear block code, and let \(W = C \oplus Y_{1:\ell }\) if \(|Y| \ge \ell \) or \(W = C_{1:|Y|} \oplus Y\) if \(|Y| < \ell \), then it holds that \(I\left( S;(W, D)\right) = 0\).

Proof

Note that the debiasing data sequence D is an i.i.d. sequence as well, with \(p(d_i = 1) = 2p(1-p)\) and \(p(d_i=0) = p^2+(1-p)^2\). Let Q be the length \(\ell =\tfrac{n}{2}\) sequence of consecutive odd bits from X, i.e., \(Q_i = X_{2i-1}\), then it holds that:

$$\begin{aligned} p(q|d) = \prod _{i=1}^{\ell } p(q_i|d_i), \end{aligned}$$

with:

$$\begin{aligned} p(q_i|d_i=1)&= \tfrac{1}{2}, \\ p(q_i|d_i=0)&= \left\{ \begin{array}{l l} \tfrac{p^2}{p^2+(1-p)^2} &{} , \quad \text {if }q_i = 1, \\ \tfrac{(1-p)^2}{p^2+(1-p)^2} &{} ,\quad \text {if }q_i = 0. \end{array} \right. \end{aligned}$$

Now let \(\mathcal {U} = \{i:d_i = 1\}\), i.e., \(\mathcal {U}\) is the set of bit pair positions for which the first bit is retained by the von Neumann extractor. The complete retained bit string (Y) can hence also be written as \(Y = Q_{\mathcal {U}}\). From the previous, it follows that \(Y=Q_{\mathcal {U}}\) is an unbiased and i.i.d. bit sequence. We now consider the leakage \(I\left( S;(W,D)\right) \):

$$\begin{aligned} I\left( S;(W,D)\right)&= I\left( S;D\right) + I\left( S;W|D\right) \\&= I\left( S;W|D\right) \\&= I\left( S;W|D,\mathcal {U}\right) \\&= H(W|D,\mathcal {U}) - H(W|D,\mathcal {U},S), \end{aligned}$$

since S and D are independent and \(\mathcal {U}\) is a function of D. Assume first that \(\#\mathcal {U} = |Y| \ge \ell \) and hence \(W = C \oplus Y_{1:\ell } = S\mathbf {G}\oplus Y_{1:\ell }\), then it holds that:

$$\begin{aligned} H(W|D,\mathcal {U})&\le \ell , \text { and,} \\ H(W|D,\mathcal {U},S)&= H(S\mathbf {G}\oplus Y_{1:\ell }|D,\mathcal {U},S), \\&= H(Y_{1:\ell }|D,\mathcal {U}) = \ell , \end{aligned}$$

since Y is i.i.d. and unbiased. For \(|Y| < \ell \) a similar result holds. By consequence,

$$\begin{aligned} I\left( S;(W,D)\right) = I\left( S;W|D,\mathcal {U}\right) = 0. \end{aligned}$$

\(\square \)
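The per-pair distributions used at the start of this proof can be checked exhaustively on a single bit pair. The following sketch is our own illustration (the bias value is arbitrary):

```python
from itertools import product

p = 0.3   # bias of each i.i.d. PUF response bit (illustrative value)

# Classic von Neumann extraction on one bit pair (x1, x2):
# the pair is retained (d = 1) iff x1 != x2, and the output is then x1.
pr = {(a, b): (p if a else 1 - p) * (p if b else 1 - p)
      for a, b in product((0, 1), repeat=2)}

p_d1 = pr[(0, 1)] + pr[(1, 0)]              # p(d_i = 1) = 2p(1-p)
p_d0 = pr[(0, 0)] + pr[(1, 1)]              # p(d_i = 0) = p^2 + (1-p)^2
p_out1_given_d1 = pr[(1, 0)] / p_d1         # p(retained bit = 1 | d_i = 1)

assert abs(p_d1 - 2 * p * (1 - p)) < 1e-12
assert abs(p_d0 - (p**2 + (1 - p)**2)) < 1e-12
assert abs(p_out1_given_d1 - 0.5) < 1e-12   # retained bits are unbiased
```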

Appendix 4: Information leakage of code-offset key generators with \(\epsilon \)-2O-VN debiasing

Theorem 3

Let X be an n-bit p-biased (i.i.d.) PUF response, let Y be the \(\ell =\tfrac{n}{2}\)-symbol output of a classic von Neumann extractor applied on X with erasure symbols \(\epsilon \) inserted at the locations of discarded bit pairs. Let \(C=S\mathbf {G}\) be a random \(\ell \)-bit code word from a binary linear block code, and let \(W = C \oplus Y\) (with the \(\oplus \) operation on erasure symbols as defined before), then it holds that \(I\left( S;W\right) = 0\).

Proof

Note that as a result of the debiasing method, \(Y_i\) is distributed as follows: \(p(0) = p(1) = p(1-p)\) and \(p(\epsilon ) = p^2 + (1-p)^2\). From this, it follows that:

$$\begin{aligned} H(Y_i)&= -\left( p(0) \log _2 p(0) + p(1) \log _2 p(1) + p(\epsilon ) \log _2 p(\epsilon ) \right) \!,\\&= -\left( (1-p(\epsilon )) \log _2 \tfrac{1-p(\epsilon )}{2} + p(\epsilon ) \log _2 p(\epsilon ) \right) ,\\&= 1 - p(\epsilon ) + h(p(\epsilon )),\\&= h(2p(1-p)) + 2p(1-p). \end{aligned}$$

Similarly, one can show that:

$$\begin{aligned} H(W_i)&\le 1 - p(w_i=\epsilon ) + h(p(w_i = \epsilon )), \\&= h(2p(1-p)) + 2p(1-p). \end{aligned}$$

By consequence, \(I\left( S;W\right) = 0\). \(\square \)
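The closed-form symbol entropy derived in this proof can be cross-checked against a direct computation from the distribution of \(Y_i\). The following sketch is our own illustration (function names are not from the paper):

```python
from math import log2

def h(x):
    """Binary entropy function, in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def symbol_entropy(p):
    """Direct H(Y_i) for Y_i over {0, 1, eps} with
    p(0) = p(1) = p(1-p) and p(eps) = p^2 + (1-p)^2."""
    probs = (p * (1 - p), p * (1 - p), p**2 + (1 - p)**2)
    return -sum(q * log2(q) for q in probs if q > 0)

# The closed form h(2p(1-p)) + 2p(1-p) matches the direct computation:
for p in (0.1, 0.25, 0.4):
    assert abs(symbol_entropy(p) - (h(2*p*(1-p)) + 2*p*(1-p))) < 1e-9
```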

Appendix 5: Reusability of code-offset key generators with \(\epsilon \)-2O-VN Debiasing

Theorem 4

Let \(X^{(1)}, X^{(2)}, \ldots \) be multiple evaluations of the same n-bit p-biased (i.i.d.) PUF response, differing potentially due to bit errors. Let \(Y^{(1)}, Y^{(2)}, \ldots \) be the corresponding \(\ell =\tfrac{n}{2}\)-symbol outputs of a classic von Neumann extractor applied on \(X^{(1)}, X^{(2)}, \ldots \) with erasure symbols \(\epsilon \) inserted at the locations of discarded bit pairs. Let \(C^{(1)} = S^{(1)}\mathbf {G}, C^{(2)} = S^{(2)}\mathbf {G}, \ldots \) be multiple independently and uniformly random \(\ell \)-bit code words from a binary linear block code, and let \(W^{(j)} = C^{(j)} \oplus Y^{(j)}\) (with the \(\oplus \) operation as defined before), then it holds that \(I\left( S^{(i)};W^{(1)},W^{(2)},\ldots \right) = 0\).

Proof

Without loss of generality, we prove reusability for three enrollments. Hence, we will show that

$$\begin{aligned} I\left( S^{(3)};W^{(1)},W^{(2)},W^{(3)}\right) = 0, \end{aligned}$$

or equivalently \(I\left( C^{(3)};W^{(1)},W^{(2)},W^{(3)}\right) = 0\), which comes down to showing that \(\forall c^{(3)}: p(w^{(1)},w^{(2)},w^{(3)}|c^{(3)}) = p(w^{(1)},w^{(2)},w^{(3)})\). Let \(p_{yyy}()\) be the joint distribution of \((Y^{(1)}_i,Y^{(2)}_i,Y^{(3)}_i)\). Now observe that

$$\begin{aligned} p_{yyy}(y_1,y_2,y_3) = p_{yyy}(y_1 \oplus b,y_2 \oplus b,y_3 \oplus b), \quad \forall b \in \{0,1\}, \end{aligned}$$

(with \(\epsilon \oplus b = \epsilon \)) since \(Y^{(j)}\) is unbiased. From this, it also follows that:

$$\begin{aligned}&p(w^{(1)},w^{(2)},w^{(3)}|c^{(1)},c^{(2)},c^{(3)}) \\&\quad = \prod _{i} p_{yyy}\left( w^{(1)}_i \oplus c^{(1)}_i \oplus c^{(3)}_i, w^{(2)}_i \oplus c^{(2)}_i \oplus c^{(3)}_i, w^{(3)}_i\right) . \end{aligned}$$

This leads to:

$$\begin{aligned}&p(w^{(1)},w^{(2)},w^{(3)}|c^{(3)}) \\&\quad = \frac{1}{\mathcal {C}^2}\sum _{c^{(1)},c^{(2)}} \prod _{i} p_{yyy}\left( w^{(1)}_i \oplus c^{(1)}_i \oplus c^{(3)}_i, w^{(2)}_i \oplus c^{(2)}_i \oplus c^{(3)}_i, w^{(3)}_i\right) , \end{aligned}$$

with \(\mathcal {C}\) the total number of different code words of the considered code. Equivalently, we can write:

$$\begin{aligned}&p(w^{(1)},w^{(2)},w^{(3)}|c^{(3)}) \\&\quad = \frac{1}{\mathcal {C}^2}\sum _{c^{(1)},c^{(2)}} \prod _{i} p_{yyy}\left( w^{(1)}_i \oplus (c^{(1)} \oplus c^{(3)})_i, w^{(2)}_i \oplus (c^{(2)} \oplus c^{(3)})_i, w^{(3)}_i\right) . \end{aligned}$$

Now observe that \((c^{(1)} \oplus c^{(3)})\) and \((c^{(2)} \oplus c^{(3)})\) run over all the code words of the linear code, uniformly, for each value of \(c^{(3)}\). Hence, it follows that \(p(w^{(1)},w^{(2)},w^{(3)}) = p(w^{(1)},w^{(2)},w^{(3)}|c^{(3)})\) and consequently that \(I\left( S^{(3)};W^{(1)},W^{(2)},W^{(3)}\right) = 0\). \(\square \)

The conclusion of Theorem 4 holds for classic VN debiasing with erasures. It can be extended to pair-output VN debiasing in the same way as 2O-VN.

Appendix 6: Derivation of relation: \(p_e = f_{\text {bias}}(p;p_{e@50\%})\)

In [18], the distributions of bias probabilities (P) and error probabilities (\(P_e\)) over the PUF response bits were derived (see Note 22):

$$\begin{aligned} {\mathbf {cdf}}_{P}(x;\lambda _1,\lambda _2) =\,&\Phi \left( \lambda _1\Phi ^{-1}(x)+\lambda _2\right) ,\\ {\mathbf {cdf}}_{P_{e}}(x;\lambda _1,\lambda _2) =\,&\lambda _1 \int _{-\infty }^{\Phi ^{-1}(x)} \Phi (-u) \\&\times \left( \varphi (\lambda _1u+\lambda _2)+\varphi (\lambda _1u-\lambda _2)\right) du, \end{aligned}$$

with \(\varphi (x)\) and \(\Phi (x)\), respectively, the probability density function and the cumulative distribution function of the standard normal distribution. These distributions of P and \(P_e\) have two parameters: \(\lambda _1\) and \(\lambda _2\). For an unbiased PUF, the average bias probability is \(50~\%\) and it follows that \(\lambda _2 = 0\). For a given average bit error rate \(p_{e@50\%}\) of an unbiased PUF, we can hence write:

$$\begin{aligned} p_{e@50\%}= \mathbb {E}\left( P_{e@50~\%}\right) = \int _{0}^{1} 1-{\mathbf {cdf}}_{P_e}(x;\lambda _1,0)dx, \end{aligned}$$

which can be inverted (numerically) to find the corresponding value for \(\lambda _1\) which fully determines the noisiness of the considered PUF. Next, we look at what happens when a PUF with this value for \(\lambda _1\) becomes biased, i.e., \(\lambda _2\) is no longer zero. For a globally p-biased PUF, we can write:

$$\begin{aligned} p = \mathbb {E}\left( P\right) = \int _{0}^{1} 1-{\mathbf {cdf}}_{P}(x;\lambda _1,\lambda _2)dx, \end{aligned}$$

which can again be inverted (numerically) to find the corresponding value for \(\lambda _2\). For a p-biased PUF which would have a (hypothetical) bit error rate of \(p_{e@50\%}\) when it would not be biased, we can hence derive the parameters \(\lambda _1\) and \(\lambda _2\) from p and \(p_{e@50\%}\). Finally, we can write:

$$\begin{aligned} p_e= & {} \mathbb {E}\left( P_e\right) = \int _{0}^{1} 1-{\mathbf {cdf}}_{P_e}(x;\lambda _1,\lambda _2)dx\\= & {} f_{\text {bias}}(p;p_{e@50\%}). \end{aligned}$$
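The two-step numerical inversion described above can be sketched with SciPy. This is our own illustrative implementation: the function names and the root-finding brackets (`l1_bracket`, `l2_bracket`) are assumptions, not from [18], and the brackets may need widening for extreme parameter values:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def cdf_P(x, l1, l2):
    """cdf of the bias probability P: Phi(l1 * Phi^{-1}(x) + l2)."""
    return norm.cdf(l1 * norm.ppf(x) + l2)

def cdf_Pe(x, l1, l2):
    """cdf of the error probability P_e (fixed-temperature model of [18])."""
    integrand = lambda u: norm.cdf(-u) * (norm.pdf(l1*u + l2) + norm.pdf(l1*u - l2))
    return l1 * quad(integrand, -np.inf, norm.ppf(x))[0]

def mean_from_cdf(cdf):
    """E(P) = int_0^1 (1 - cdf(x)) dx for a distribution on [0, 1]."""
    return quad(lambda x: 1.0 - cdf(x), 0.0, 1.0)[0]

def f_bias(p, pe_at_50, l1_bracket=(0.05, 2.0), l2_bracket=(-5.0, 5.0)):
    """p_e = f_bias(p; p_e@50%): invert the two expectations numerically
    to recover (lambda1, lambda2), then evaluate E(P_e)."""
    l1 = brentq(lambda l: mean_from_cdf(lambda x: cdf_Pe(x, l, 0.0)) - pe_at_50,
                *l1_bracket)
    l2 = brentq(lambda l: mean_from_cdf(lambda x: cdf_P(x, l1, l)) - p,
                *l2_bracket)
    return mean_from_cdf(lambda x: cdf_Pe(x, l1, l2))
```

A quick sanity check: for \(\lambda _2 = 0\) the distribution of P is symmetric around \(0.5\), so `mean_from_cdf(lambda x: cdf_P(x, l1, 0.0))` returns \(0.5\) for any \(\lambda _1\), matching the unbiased-PUF case in the text.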


About this article


Cite this article

Maes, R., van der Leest, V., van der Sluis, E. et al. Secure key generation from biased PUFs: extended version. J Cryptogr Eng 6, 121–137 (2016). https://doi.org/10.1007/s13389-016-0125-6


Keywords

  • PUFs
  • Key generation
  • Code-offset method
  • Bias
  • Helper data leakage