Abstract
Everlasting security models the setting where hardness assumptions hold during the execution of a protocol but may be broken in the future. Due to the strength of this adversarial model, achieving any meaningful security guarantees for composable protocols is impossible without relying on hardware assumptions (Müller-Quade and Unruh, JoC’10). For this reason, a rich line of research has tried to leverage physical assumptions to construct well-known everlasting cryptographic primitives, such as commitment schemes. The only known everlastingly UC-secure commitment scheme, due to Müller-Quade and Unruh (JoC’10), assumes honestly generated hardware tokens. The authors leave the possibility of constructing everlastingly UC-secure commitments from malicious hardware tokens as an open problem. Goyal et al. (Crypto’10) construct unconditionally UC-secure commitments and secure computation from malicious hardware tokens, with the caveat that the honest tokens must encapsulate other tokens. This extra restriction rules out interesting classes of hardware tokens, such as physically uncloneable functions (PUFs). In this work, we present the first construction of an everlastingly UC-secure commitment scheme in the fully malicious token model without requiring honest token encapsulation. Our scheme assumes the existence of PUFs and is secure in the common reference string model. We also show that our results are tight by giving an impossibility proof for everlasting UC-secure computation from non-erasable tokens (such as PUFs), even with trusted setup.
Introduction
The security of almost all cryptographic schemes relies on certain hardness assumptions. These assumptions are believed to hold right now, and researchers are even fairly certain that they will not be broken in the near future. It is widely believed, for example, that the computational Diffie–Hellman and the RSA assumptions hold in certain groups. But what about the security of these assumptions in 10, 20, or 100 years? Can we give any formal security guarantees for current constructions that remain valid in the distant future? This is certainly possible for information-theoretic schemes and properties. However, given that many interesting functionalities are impossible to realize in an information-theoretic sense, this leaves us in a very unsatisfactory situation.
To overcome this problem, Müller-Quade and Unruh suggested a novel security notion widely known as everlasting universal composability security [26] (building on the work of Rabin on virtual satellites [34]). The basic idea of this security notion is to bound the running time of the attacker only during the protocol execution. After the protocol run is over, the attacker may run in superpolynomial time. This models the intuition that computational assumptions are believed to hold right now, and therefore during the protocol run; at some point in the future, however, current computational assumptions may no longer hold. Everlasting UC security refers to a composable protocol that remains secure in this setting. The everlasting UC security model has also been considered for quantum protocols [35]. Everlasting UC security is clearly a very desirable security notion, and since it is strictly weaker than statistical UC security, one may hope that it is easier to achieve. However, Müller-Quade and Unruh showed that everlasting UC commitments cannot be realized, not even in the common reference string (CRS) model or with a public-key infrastructure (PKI) [27].
Everlasting UC Security From Hardware Assumptions The stark impossibility result of Müller-Quade and Unruh raises the question of whether the notion is achievable at all. The authors answered this question affirmatively by presenting two constructions based on hardware assumptions. The first construction is based on a tailor-made hardware token that embeds a random oracle. The second construction relies on signature cards [27]. However, both constructions assume that the hardware token is honestly generated. The authors left open the question of whether it is possible to achieve everlasting security in the setting of maliciously generated hardware tokens. Goyal et al. [18] construct unconditionally UC-secure (as opposed to everlastingly secure) commitments and secure computation from malicious hardware tokens. However, the construction of [18] requires honest tokens to encapsulate other tokens, ruling out some classes of hardware tokens such as physically uncloneable functions (PUFs).
Physically Uncloneable Functions (PUFs) In this work, we present an everlastingly UC-secure commitment scheme assuming the existence of PUFs. Loosely speaking, PUFs are physical objects that can be queried by applying a stimulus (the input) and measuring the resulting physical behaviour (the output). The crucial properties of a PUF are (i) that it should be hard (if not impossible) to clone and (ii) that it should be hard to predict the output on any input without first querying the PUF on a sufficiently close input.
Our Contributions
We initiate the study of everlasting UC security in the setting of maliciously generated hardware tokens, such as PUFs. Our model extends the frameworks of [4, 8] by introducing fully malicious hardware tokens: the state of a token is not a priori bounded, its generator can install arbitrary code inside it, and a token can encapsulate (and decapsulate) other (possibly fully malicious) tokens within itself. Our contributions can be summarized as follows:

Aiming at bridging the gap between hardware tokens and PUFs, we propose a unified ideal functionality for fully malicious tokens that is general enough to capture hardware devices with arbitrary functionalities such as PUFs and signature cards.

We put forward a novel definition for unpredictability of PUFs. We argue that the formalization from prior works [3, 24, 30] is not sufficient for our setting because it does not exclude adversaries that may indeed predict the PUF responses for values never queried to the PUF. We demonstrate this fact in Sect. 4.1.1 by giving a concrete counterexample.

We show with an impossibility result that one cannot hope to achieve everlastingly secure oblivious transfer (OT) (and therefore secure computation) in the malicious token setting by using non-erasable (honestly generated) tokens; non-erasable tokens can keep a state but are not allowed to erase previous states.

Finally, we present an everlastingly UC secure commitment scheme in the fully malicious token model. Our protocol assumes the existence of PUFs and allows for the PUF to be reused for polynomially many runs of the protocol. Our cryptographic building blocks can be instantiated from standard computational assumptions, such as the learning with errors (LWE) problem.
Related Work
Everlasting and Memory-Bounded Adversaries Everlasting security was first considered in the setting of memory-bounded adversaries [6, 10], and later extended to the UC setting by Müller-Quade and Unruh [27]. Rabin [34] suggested a construction using distributed servers of randomness, called virtual satellites, to achieve everlasting security. The resulting scheme remains secure if the attacker that accesses the communication between the parties and the distributed servers is polynomially bounded during the key exchange. Dziembowski and Maurer [15] showed that protocols in the bounded storage model do not necessarily stay secure when composed with other protocols.
Damgård [11] presented a statistical zero-knowledge protocol secure under concurrent composition. Although counterintuitive, statistical zero-knowledge may lose its everlasting property under composition. This was illustrated in [27] for statistically hiding UC commitments [16], which were shown to leak secrets under (even sequential) composition: they are composable and statistically hiding, but not at the same time (i.e. intuitively, the composability only holds for the computational hiding property). Technically, the reason for this is that the common reference string used by the simulator is not statistically indistinguishable from an honestly generated one. For the same reason, the protocol of Damgård [11] does not directly translate into an everlasting commitment scheme: in this specific case, the gap consists in extracting the witness from adversarial proofs using a common reference string that is statistically close to the honestly sampled one.
(Malicious) Hardware Tokens A model proposed in [22] allows parties to build hardware tokens that compute functions of their choice, such that an adversary, given a hardware token T for a function F, can only observe the input/output behaviour of T. The motivation is that the existence of tamper-proof hardware can be viewed as a physical assumption, rather than a trust assumption. The authors show how to implement UC-secure two-party computation using stateful tokens, under the DDH assumption. Shortly after, Moran and Segev [28] showed that in the hardware token model of [22] even unconditionally secure UC commitments are possible using stateful tokens. This result was later extended by [19] to unconditionally UC-secure computation, also using stateful tokens.
One limitation of the model of [22] is the assumption that all parties (including the adversary) know the code running inside the hardware tokens they produce; this assumption gives extra power to the simulator, allowing it to rewind the hardware token in the proofs of [19, 22, 28]. However, this assumption rules out realistic scenarios in which the adversary creates a new hardware token that simply “encapsulates” a token it receives from some party, without knowing the code running inside it.
In this direction, Chandran et al. [8] extended the model of [22] to allow the hardware tokens produced by the adversary to be stateful, to encapsulate other tokens, and to be passed on to other parties. They constructed a computationally secure UC commitment protocol without setup, assuming the existence of stateless hardware tokens (signature cards). Unfortunately, the construction of [8] cannot fulfil the notion of unconditional (or everlasting) security since it requires perfectly binding, and therefore only computationally hiding, commitments as a building block.
Goyal et al. [18], following the model of [8], prove that statistically secure OT from stateless tokens is possible if (honest) tokens can encapsulate other tokens. However, honest token encapsulation is highly undesirable in practice, and in particular not even compatible with PUFs, as they are physical objects. Interestingly, the authors also show that statistically secure OT (and therefore secure computation) is impossible to achieve when one considers only stateless tokens that cannot be encapsulated. To circumvent this impossibility result, Döttling et al. [13, 14] studied the feasibility of secure computation in the stateful token model, where the adversary is not allowed to rewind the token arbitrarily. Although this model has practical significance, it does not cover certain classes of hardware tokens, such as PUFs. Later, a rich line of research investigated the round complexity of secure computation using stateless hardware tokens [20, 25] in the computational setting. Unfortunately, it seems that the security guarantees of these protocols cannot be lifted to the everlasting model: in Sect. 5, we present an impossibility result against everlastingly UC-secure computation from stateful but non-erasable honest tokens. The result holds even in the presence of an honestly sampled CRS.
PUFs Brzuska et al. [3] introduced PUFs in the UC framework and proposed UC constructions of several interesting cryptographic primitives such as oblivious transfer, bit commitment, and key agreement. Ostrovsky, Scafuro, Visconti, and Wadia [30] pointed out that the previous results implicitly assume that all PUFs, including those created by the attacker, are honestly generated. To address this limitation, they defined a model in which an attacker can create malicious PUFs with arbitrary behaviour. Many of the previous protocols can be easily attacked in this new adversarial setting, but Ostrovsky, Scafuro, Visconti, and Wadia showed that it is possible to construct universally composable protocols for secure computation in the malicious PUF model under additional, number-theoretic assumptions. They leave open the question of whether unconditional security is possible in the malicious PUF model. Damgård and Scafuro [17] made partial progress on this question by presenting a commitment scheme with unconditional security based on PUFs. However, as shown by [4] in the form of an attack, the construction of [17] completely breaks when the adversary is allowed to generate encapsulated PUFs. Dachman-Soled, Fleischhacker, Katz, Lysyanskaya, and Schröder [12] investigated the possibility of secure two-party computation based on malicious PUFs. Badrinarayanan, Khurana, Ostrovsky, and Visconti [4] introduced a model where the adversary is allowed to generate malicious PUFs that encapsulate other PUFs; the outer PUF has oracle access to all its inner PUFs. The security of their scheme assumes a bound on the memory of adversarially generated PUFs.
In Table 1, we show a comparison of UC schemes based on malicious hardware tokens (including PUFs).
Technical Overview
In the following, we give an informal overview of our everlasting UC commitment scheme construction, and we introduce the main ideas behind our proof strategy. Besides PUFs, our protocol assumes the existence of the following cryptographic building blocks:

A non-interactive statistically hiding (NISH) UC-secure commitment scheme (\(\mathsf {Com}\)).

A 2-round statistically receiver-private UC-secure oblivious transfer (OT).

A statistical witness-indistinguishable argument of knowledge (SWIAoK).

A strong randomness extractor H.
The message flow of our protocol is shown in Fig. 1. The protocol is executed by a committer (Alice) and a recipient (Bob). We assume that both parties have access to a uniformly sampled common reference string that contains an image \(y = f(x)\) of a random point x under a one-way permutation f.
Protocol Overview At the beginning of a commitment execution, Bob prepares a series of random string pairs \((p_i^0, p_i^1)\) and queries them to the PUF to obtain the corresponding response pairs \((q_i^0, q_i^1)\); the PUF is then transferred to Alice. Here, we make the simplifying assumption that a PUF is used only for a single run of the commitment. Note, however, that one can reuse the same PUF by having Bob compute as many tuples \((p_i^0, p_i^1)\) as needed, and query the PUF on all of these values before passing it to Alice.
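As an illustration of this preparation phase, the following Python sketch models the PUF as a keyed deterministic function; this is a simplifying assumption for illustration only, since a real PUF is a noisy physical object rather than code:

```python
import hashlib
import os

class ToyPUF:
    """Noiseless stand-in for a PUF: a keyed deterministic function.
    This is a modeling assumption; real PUF responses are noisy and the
    device's internal state cannot be copied as code can."""
    def __init__(self, physical_state: bytes):
        self._state = physical_state  # stands in for the uncloneable state

    def query(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._state + challenge).digest()

def bob_prepare(puf: ToyPUF, ell: int, n_bytes: int = 16):
    """Bob samples random pairs (p_i^0, p_i^1) and queries the PUF on all
    of them to learn the response pairs (q_i^0, q_i^1) before handing the
    PUF over to Alice."""
    p_pairs, q_pairs = [], []
    for _ in range(ell):
        p0, p1 = os.urandom(n_bytes), os.urandom(n_bytes)
        p_pairs.append((p0, p1))
        q_pairs.append((puf.query(p0), puf.query(p1)))
    return p_pairs, q_pairs
```

After this phase, Bob transfers the physical PUF, but not the challenge pairs, to Alice.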
Alice samples a random string \(k \in \{0,1\}^{\ell (\uplambda )}\) and engages Bob in many parallel OT instances, where Alice receives \(p_i^{k_i}\), and where \(k_i\) denotes the ith bit of k. Alice queries the strings \(p_i^{k_i}\) to the PUF and sends to Bob:

a set of NISH commitments \((\mathsf {com}_1, \ldots , \mathsf {com}_{\ell (\uplambda )})\) to the outputs of the PUF,

a NISH commitment \(\mathsf {com}\) to the message m, and

the string \(\omega := H(\mathsf {seed}, k) \oplus m \Vert \mathsf {decom}\).
Alice then produces a SWIAoK that certifies that either (i) all of her messages were honestly generated, or (ii) she knows a preimage x such that \(f(x) = y\).
The idea here is that, if an algorithm recovers k, then it can also recompute \(H(\mathsf {seed}, k)\) and extract the message m. Note that the value of k is “encoded” in the OT choice bits of Alice for the \(p_i^{k_i}\), and those values are queried by Alice to the PUF. Therefore, an extractor that sees the queries of Alice can easily recover the message m. What is not clear at this point is how to force Alice to query the PUF on the correct \(p_i^{k_i}\) and not on some other random string. For this reason, we introduce an additional authentication step where Bob publishes all the pairs \((q_i^0, q_i^1)\). In the opening phase, Alice proves (with a SWIAoK) to Bob that the vector of commitments sent in the previous interaction indeed opens to \(q_1^{k_1}, \ldots , q_{\ell (\uplambda )}^{k_{\ell (\uplambda )}}\), up to small errors (or that she knows the preimage of y). Intuitively, Alice cannot convince Bob without querying all the \(p_i^{k_i}\), since she would need to guess some \(q_i^{{\overline{k}}_i}\) without knowing the preimage \(p_i^{{\overline{k}}_i}\) (due to the security of the OT). In the proof, the extractor can recover k by just looking at the queries Alice made to the PUF.
To see why the commitment is hiding, it is sufficient to observe that k hides the message in an information-theoretic sense, under the assumption that the OT and SWIAoK protocols are secure. One subtlety that we need to address is that some bits of k might be revealed by Alice's aborts. For this reason, we one-time-pad the message m with \(H(\mathsf {seed}, k)\): the strong randomness extractor guarantees that the value \(H(\mathsf {seed}, k)\) is still uniformly distributed even if some bits of k are leaked.
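The masking step can be illustrated with a short Python sketch, where SHA-256 stands in for the strong randomness extractor \(H\) and the payload \(m \Vert \mathsf {decom}\) is assumed to fit into a single digest; both are illustrative assumptions, not the paper's instantiation:

```python
import hashlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mask(seed: bytes, k: bytes, payload: bytes) -> bytes:
    """Computes omega = H(seed, k) XOR payload for payload = m || decom.
    SHA-256 is a stand-in for the strong extractor H (assumption), and
    the payload is assumed to fit in one 32-byte digest."""
    pad = hashlib.sha256(seed + k).digest()[:len(payload)]
    return xor_bytes(pad, payload)
```

Since XOR is an involution, applying `mask` again with the same seed and k unmasks \(\omega \): any algorithm that recovers k can recover the payload.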
Proof Sketch (Hiding) We show that our commitment scheme is hiding through a series of hybrids, where in the last step Alice can equivocate the commitment to any message of her choice. Note that every step is information-theoretic.
\(\mathcal {H}_1\): Alice uses x, the preimage of y, as a witness to compute the SWIAoK. Since the AoK is statistically witness indistinguishable, this hybrid is statistically close to the original protocol.
\(\mathcal {H}_2\): Alice uses the simulator for the OT protocols and extracts both values \((p_i^0, p_i^1)\). Since the OT is statistically receiverprivate, this hybrid is statistically close to the previous. In the full proof, this is shown via a hybrid argument.
\(\mathcal {H}_3\): Alice computes \(\mathsf {com}_i\) as commitment to a random string. A hybrid argument can be used to bound the distance of this hybrid with the previous by the statistically hiding property of the commitment scheme.
\(\mathcal {H}_4\): Alice chooses the value of k for all sessions upfront. Here, the change is only syntactical.
\(\mathcal {H}_5\): Alice no longer queries the PUF token but instead checks that the output pairs \((q_i^0, q_i^1)\) sent by Bob correspond to the correct outputs of the PUF on input \((p_i^0, p_i^1)\). Note that the state of the PUF is fixed once the PUF is sent to Alice, and therefore the consistency of all pairs \((q_i^0, q_i^1)\) is well defined. Note that the relation is not efficiently computable by Alice, but for information-theoretic security the fact that it is well defined is enough. Since Alice retains the ownership of the PUF, this hybrid is identical to the previous one.
\(\mathcal {H}_6\): Alice samples \(\omega \) uniformly at random. Note that in \(\mathcal {H}_5\) the leakage about k is bounded by whether Alice aborts or not. Since Alice aborts at most once and since there are at most polynomially many sessions, we can bound the leakage of k to \(O(\log \uplambda )\) bits. Leveraging the randomness extractor \(H\), we can argue that \(\mathcal {H}_5\) and \(\mathcal {H}_6\) are statistically indistinguishable.
\(\mathcal {H}_7\): Alice opens the commitment to a message of her choice. Note that in \(\mathcal {H}_6\) the original message m is information-theoretically hidden.
Proof Sketch (Binding) To argue that the scheme is binding, we define the following extractor: the algorithm examines the list of queries made by Alice to the PUF and, for each i, checks whether some query \({\mathfrak {q}}\) is equal to \(p_i^b\) (for \(b\in \{0,1\}\)); if this is the case, it sets \(k_i = b\). Once the full k is reconstructed, the extractor computes \(\omega \oplus H(\mathsf {seed},k) = m \Vert \mathsf {decom}\) and outputs m. To show that the extractor always succeeds, we need to argue that:

1.
The value of k is always well defined: if some \({\mathfrak {q}} = p_i^0\) and some other query \({\mathfrak {q}}' = p_i^1\), then the bit \(k_i\) is not well defined. However, this means that Alice learned both \(p_i^0\) and \(p_i^1\) from the OT protocol, which is computationally infeasible.

2.
The string k is always fully reconstructed: if no query \({\mathfrak {q}}\) is equal to \(p_i^0\) or \(p_i^1\), then the ith bit of k is not defined. This implies that Alice never queried \(p_i^0\) or \(p_i^1\) to the PUF. However, note that Alice must produce a commitment \(\mathsf {com}_i\) to either PUF(\(p_i^0\)) or PUF(\(p_i^1\)) and prove consistency. This is clearly not possible without querying the PUF, unless Alice breaks the binding of the commitment or proves a false statement in the SWIAoK. To establish the latter, we also need to rule out the case where Alice computes the SWIAoK using the knowledge of x, the preimage of y. In the full proof, we show this via a reduction against the one-wayness of the one-way permutation f.
We are now in a position to show that the extracted message m is identical to the one that Alice decommits to. Recall that Alice proves that she committed to the values PUF(\(p_i^{k_i}\)) such that \(\omega \oplus H(\mathsf {seed},k) = m \Vert \mathsf {decom}\). It follows that, if k is uniquely defined, then the extractor always returns the correct m, unless Alice can break the soundness of the SWIAoK (or invert the one-way permutation). By the above conditions, this happens with all but negligible probability.
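The reconstruction of k performed by the extractor can be sketched as follows (an illustrative Python fragment; queries and challenge pairs are modeled as byte strings):

```python
def extract_k(queries, p_pairs):
    """Binding extractor step: reconstruct Alice's choice string k from
    the list of her PUF queries. Bit k_i is set to b if some query equals
    p_i^b; it is None if neither or both values of a pair were queried
    (precisely the two failure cases the proof rules out)."""
    qset = set(queries)
    k = []
    for p0, p1 in p_pairs:
        hit0, hit1 = p0 in qset, p1 in qset
        if hit0 == hit1:
            k.append(None)  # bit undefined: neither or both were queried
        else:
            k.append(1 if hit1 else 0)
    return k
```

A `None` entry corresponds exactly to conditions 1 and 2 above: either Alice learned both \(p_i^0\) and \(p_i^1\), or she queried neither.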
On the Common Reference String Our protocol needs to assume the existence of a common reference string in order to equivocate commitments in the security proof: having access to the generation of the \(\mathsf {crs} \), the simulator can craft proofs for false statements, simulate the OT, and extract the commitments. Note that the simulation has to be “straight-line”, since we cannot rewind the adversary in the UC framework. A previous work [29] circumvented this issue by leveraging some computationally hard problem. Unfortunately, this class of techniques does not seem to apply to the everlasting setting, since the environment can distinguish a simulated trace once it becomes unbounded. The work of [17] builds unconditionally secure commitments from PUFs without a CRS, but, as shown by [4], the construction breaks down in our model, where the adversary is allowed to generate encapsulated PUFs. It is not clear if the techniques of [17] can be adapted to our setting. We leave the question of removing the need for a common reference string from our protocol as a fascinating open problem.
Preliminaries
In the following, we introduce the notation and the building blocks necessary for our results.
Notations
An algorithm \(\mathcal {A} \) is probabilistic polynomial time (PPT) if \(\mathcal {A} \) is randomized and for any input \(x,r\in \{0,1\}^*\) the computation of \(\mathcal {A} (x;r)\) terminates in at most \(\mathsf {poly}(|x|)\) steps. We denote by \(\uplambda \in {\mathbb {N}}\) the security parameter. A function \(\mathsf {negl}\) is negligible if for any positive polynomial p and all sufficiently large k, \(\mathsf {negl}(k) < 1/p(k)\). A relation \(R \subseteq \{0,1\}^{*}\times \{0,1\}^{*}\) is an \({\mathcal {N}}{\mathcal {P}}\) relation if there is a polynomial-time algorithm that decides \((x,w)\in R\). If \((x,w)\in R\), then we call x the statement and w a witness for x. We denote by \({\mathsf {hd}}(x,x')\) the Hamming distance between two bitstrings x and \(x'\). Given two ensembles \(\mathbf {X} = \{X_\uplambda \}_{\uplambda \in {\mathbb {N}}}\) and \(\mathbf {Y} = \{Y_\uplambda \}_{\uplambda \in {\mathbb {N}}}\), we write \(\mathbf {X} \approx \mathbf {Y}\) to denote that the two ensembles are statistically indistinguishable, and \(\mathbf {X} \approx _c \mathbf {Y}\) to denote that they are computationally indistinguishable. We denote the set \(\{1, \ldots , n\}\) by [n]. We recall the definition of statistical distance.
Definition 1
(Statistical Distance) Let X and Y be two random variables over a finite set \(\mathcal {U}\). The statistical distance between X and Y is defined as \(\mathsf {SD}(X,Y) := \frac{1}{2}\sum _{u \in \mathcal {U}}\left| \Pr \left[ X = u\right] - \Pr \left[ Y = u\right] \right| \).
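For concreteness, the statistical distance between two distributions over a finite set, given as probability dictionaries, can be computed as follows:

```python
def statistical_distance(pX, pY):
    """SD(X, Y) = 1/2 * sum over u in U of |Pr[X = u] - Pr[Y = u]|,
    for distributions given as {outcome: probability} dictionaries."""
    support = set(pX) | set(pY)
    return 0.5 * sum(abs(pX.get(u, 0.0) - pY.get(u, 0.0)) for u in support)
```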
Cryptographic Building Blocks
One-Way Function A one-way function is a function that is easy to compute but hard to invert. It is the building block of almost all known cryptographic primitives.
Definition 2
A function \(f : \{0,1\}^* \rightarrow \{0,1\}^*\) is one-way if and only if it can be computed in polynomial time, but for all PPT algorithms \(\mathcal {A} \), there exists a negligible function \(\mathsf {negl}\) such that \(\Pr \left[ f(\mathcal {A} (1^{\uplambda }, f(x))) = f(x)\right] \le \mathsf {negl}(\uplambda ),\)
where the probability is taken over the random choice of \(x \leftarrow \{0,1\}^{\uplambda }\) and the random coins of \(\mathcal {A} \). Moreover, we say that f is a one-way permutation whenever f is, in addition, a length-preserving bijection.
Non-interactive Commitment Scheme A commitment scheme (in the CRS model) consists of a pair of efficient algorithms \(\mathcal {C}= (\mathsf {Com},\mathsf {Open})\) where: \(\mathsf {Com}\) takes as input \(m \in \{0,1\}^{\uplambda }\) and outputs \((\mathsf {decom},\mathsf {com}) \leftarrow \mathsf {Com}(m)\), where \(\mathsf {decom}\) and \(\mathsf {com}\) are both of length \(\uplambda \); the algorithm \(\mathsf {Open}(\mathsf {decom},\mathsf {com})\) outputs a message m, or \(\bot \) if \(\mathsf {com}\) is not a valid commitment to any message. It is assumed that the commitment scheme is complete, i.e. for any message \(m \in \{0,1\}^{\uplambda }\) and \((\mathsf {decom},\mathsf {com})\leftarrow \mathsf {Com}(m)\), we have \(\mathsf {Open}(\mathsf {decom},\mathsf {com})=m\) with overwhelming probability in \(\uplambda \in {\mathbb {N}}\). For convenience, we assume that the verification is deterministic and canonical (i.e. it takes as input the random coins used in the commitment phase and checks whether the commitment was correctly computed).
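The syntax and the canonical verification can be illustrated by the following hash-based sketch. Note that this is an interface illustration only: a hash-based commitment is at best computationally hiding (under idealized assumptions on the hash), whereas our protocol requires a statistically hiding UC-secure scheme.

```python
import hashlib
import os

def Com(m: bytes):
    """Returns (decom, com). The opening information decom consists of the
    random coins together with the message (canonical commitment)."""
    r = os.urandom(32)
    com = hashlib.sha256(r + m).digest()
    return (r, m), com

def Open(decom, com):
    """Canonical verification: recompute the commitment from the coins and
    compare; returns the message on success and None otherwise."""
    r, m = decom
    return m if hashlib.sha256(r + m).digest() == com else None
```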
We require commitments to be (standalone) statistically hiding. Let \(\mathcal {A} \) be a nonuniform adversary against \(\mathcal {C}\) that outputs a pair of messages \((m_0, m_1)\), and define its \(\mathsf {hiding}\)-advantage as \(\mathbf {Adv}_{\mathcal {C},\mathcal {A}}^{\text {hid}} := \left| \Pr \left[ \mathcal {A} (\mathsf {com}) = 1 : (\mathsf {decom},\mathsf {com}) \leftarrow \mathsf {Com}(m_0)\right] - \Pr \left[ \mathcal {A} (\mathsf {com}) = 1 : (\mathsf {decom},\mathsf {com}) \leftarrow \mathsf {Com}(m_1)\right] \right| .\)
Definition 3
(Statistically Hiding) \(\mathcal {C}\) is statistically hiding if the advantage function \(\mathbf {Adv}_{\mathcal {C},\mathcal {A}}^{\text {hid}}\) is a negligible function for all unbounded adversaries \(\mathcal {A} \).
Furthermore, we require the commitments to be UCsecure: roughly speaking, an equivocator (with the help of a trapdoor in the CRS) can open the commitments arbitrarily. On the other hand, we require the existence of a computationally indistinguishable CRS (in extraction mode) where commitments are statistically binding and can be efficiently extracted via the knowledge of a trapdoor. Such commitments can be constructed in the CRS model from a variety of assumptions [32], including the learning with errors (LWE) problem. For a precise functionality, we refer the reader to Sect. 3.3.
Oblivious Transfer A \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \)-oblivious transfer (OT) is a protocol executed between two parties, called the sender \(\mathcal {S} \) (i.e. Alice) with input bits \((s_0 , s_1)\) and the receiver \(\mathcal {R} \) (i.e. Bob) with input bit b. Bob wishes to retrieve \(s_b\) from Alice in such a way that Alice does not learn anything about Bob’s choice b and Bob learns nothing about Alice’s remaining input \(s_{1-b}\). In this work, we require a 2-round protocol \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) secure in the CRS model, which satisfies (standalone) statistical receiver privacy. We define the advantage of a (malicious) sender \(\mathcal {S} \) in breaking the privacy of the receiver to be \(\mathbf {Adv}_\mathcal {S} ^{\mathsf {OT}} := \left| \Pr \left[ \mathcal {S} (\mathsf {ot}_1(0)) = 1\right] - \Pr \left[ \mathcal {S} (\mathsf {ot}_1(1)) = 1\right] \right| ,\) where \(\mathsf {ot}_1(b)\) denotes the first message of \(\mathsf {Receiver}_{{\mathsf {OT}}}\) on choice bit b.
Definition 4
(Statistical Receiver Privacy) \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) is statistically receiver-private if the advantage function \(\mathbf {Adv}_\mathcal {S} ^{\mathsf {OT}}\) is a negligible function for all unbounded (malicious) senders \(\mathcal {S} \).
In addition, we require our OT to be UCsecure: for a wellformed CRS, there exists an efficient equivocator that can (noninteractively) recover both messages of the sender. Furthermore, there exists an alternative CRS distribution (which is computationally indistinguishable from the original one) and an efficient noninteractive extractor that is able to uniquely recover the message of the receiver using the knowledge of a trapdoor. Such 2round OT can be constructed from a variety of assumptions [31], including LWE [33]. For a precise description of the ideal functionality, we refer the reader to Sect. 3.3.
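The input/output behaviour of the ideal \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \)-OT functionality, and its parallel repetition as used in our commitment protocol, can be sketched as follows (an illustrative Python fragment, not the 2-round realization):

```python
def ideal_OT(sender_inputs, choice_bit):
    """Ideal (2 choose 1)-OT: the receiver obtains s_b and learns nothing
    about s_{1-b}; the sender learns nothing about the choice bit b."""
    s0, s1 = sender_inputs
    return s1 if choice_bit else s0

def parallel_OT(pairs, choice_bits):
    """Parallel instances, as in our protocol: for each i the receiver
    obtains p_i^{k_i} from the pair (p_i^0, p_i^1)."""
    return [ideal_OT(pair, b) for pair, b in zip(pairs, choice_bits)]
```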
Statistical Witness-Indistinguishable Argument of Knowledge (SWIAoK) A witness-indistinguishable argument is an interactive protocol for languages in \({\mathcal {N}}{\mathcal {P}}\) that does not leak any information about which witness the prover used, not even to a malicious verifier. If soundness holds against PPT provers, then we call such a system an argument system, and if it holds against unbounded provers, we call it a proof system. For witness-indistinguishable arguments of knowledge, we formally introduce the following notation to represent interactive executions between algorithms \(\mathcal {P}\) and \(\mathcal {V}\). By \(\left\langle \mathcal {P}(y),\mathcal {V}(z)\right\rangle (x)\), we denote the view (i.e. inputs, internal coin tosses, incoming messages) of \(\mathcal {V}\) when interacting with \(\mathcal {P}\) on common input x, when \(\mathcal {P}\) has auxiliary input y and \(\mathcal {V}\) has auxiliary input z. Some of the following definitions are based on [29].
Definition 5
(Witness Relation) A witness relation for an \({\mathcal {N}}{\mathcal {P}}\) language L is a binary relation R that is polynomially bounded, polynomial-time recognizable, and characterizes L by \(L = \{x :\exists w \text { s.t. }(x, w) \in R\}\). We say that w is a witness for \(x \in L\) if \((x, w) \in R\).
Definition 6
(Interactive Argument System) A two-party game \(\left\langle \mathcal {P},\mathcal {V}\right\rangle \) is called an interactive argument system for a language L if \(\mathcal {P}\), \(\mathcal {V}\) are PPT algorithms and the following two conditions hold:

Completeness: For every \(x \in L\) with witness w, \(\Pr \left[ \left\langle \mathcal {P}(w),\mathcal {V}\right\rangle (x) = 1\right] = 1\).

Soundness: For every \(x \notin L\) and every PPT algorithm \(\mathcal {P}^*\), there exists a negligible function \(\mathsf {negl}(\cdot )\) such that \(\Pr \left[ \left\langle \mathcal {P}^*,\mathcal {V}\right\rangle (x) = 1\right] \le \mathsf {negl}(|x|)\).
Definition 7
(Witness Indistinguishability) Let \(L \in {\mathcal {N}}{\mathcal {P}}\) and \((\mathcal {P},\mathcal {V})\) be an interactive argument system for L with perfect completeness. The system \((\mathcal {P},\mathcal {V})\) is witness indistinguishable (WI) if for every PPT algorithm \(\mathcal {V}^*\), and every two sequences \(\{ w^1_x\}_{x\in L}\) and \(\{ w^2_x\}_{x\in L}\) such that \((x, w^1_x), (x, w^2_x) \in R\), the following two ensembles are computationally indistinguishable:

1.
\(\{ \left\langle \mathcal {P}(w^1_x),\mathcal {V}(z)\right\rangle (x)\}_{x \in L,z\in \{0,1\}^{*}}\)

2.
\(\{ \left\langle \mathcal {P}(w^2_x),\mathcal {V}(z)\right\rangle (x)\}_{x \in L,z\in \{0,1\}^{*}}\)
Next, we define the notion of extractability for SWIAoKs.
Definition 8
(Argument of Knowledge) Let \(L \in {\mathcal {N}}{\mathcal {P}}\) and \((\mathcal {P},\mathcal {V})\) be an interactive argument system for L with perfect completeness. The system \((\mathcal {P},\mathcal {V})\) is an argument of knowledge (AoK) if there exists a PPT algorithm \(\mathsf {Ext}\), called the extractor, a polynomial p, and a constant c such that, for every PPT machine \(\mathcal {P}^*\), every \(x \in L\), auxiliary input z, and random coins r, there exists a negligible function \(\mathsf {negl}\) such that \(\Pr \left[ (x, \mathsf {Ext}^{\mathcal {P}^*(x,z;r)}(x)) \in R\right] \ge \frac{1}{p(|x|)}\cdot \Pr \left[ \left\langle \mathcal {P}^*(z;r),\mathcal {V}\right\rangle (x) = 1\right] ^c - \mathsf {negl}(|x|).\)
Strong Randomness Extractor A strong randomness extractor is a function that, applied to an input with sufficiently high min-entropy, returns an almost uniformly distributed element of its range.
Definition 9
(Strong Randomness Extractor) A function \(H: \{0,1\}^d \times \{0,1\}^\ell \rightarrow \{0,1\}^c\) is called a \((t,\varepsilon )\)-strong randomness extractor if for all random variables X over \(\{0,1\}^\ell \) such that \({\mathsf {H}}_{\infty }(X) \ge t\), we have that \(\mathsf {SD}\left( (\mathsf {seed}, H(\mathsf {seed}, X)), (\mathsf {seed}, U_c)\right) \le \varepsilon ,\)
where \(\mathsf {seed}\) is uniform over \(\{0,1\}^d\), \(U_c\) is uniform over \(\{0,1\}^c\), and \(L = t - c\) is called the entropy loss of \(H\).
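A standard way to instantiate such an extractor is via a 2-universal hash family and the leftover hash lemma. The following Python sketch uses the family \(h_{a,b}(x) = ((ax + b) \bmod P) \bmod 2^c\) with illustrative parameters; this is an assumption for concreteness, not the paper's choice of \(H\):

```python
P = (1 << 127) - 1  # a Mersenne prime; the hash family works modulo P

def extract(seed: bytes, x: bytes, out_bits: int = 64) -> int:
    """2-universal hash h_{a,b}(x) = ((a*x + b) mod P) mod 2^out_bits, with
    (a, b) derived from a 32-byte seed. By the leftover hash lemma, such a
    family is a strong extractor for sources with enough min-entropy; the
    concrete parameters here are illustrative assumptions."""
    a = int.from_bytes(seed[:16], "big") % P or 1  # ensure a != 0
    b = int.from_bytes(seed[16:32], "big") % P
    xi = int.from_bytes(x, "big") % P
    return ((a * xi + b) % P) % (1 << out_bits)
```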
Universal Composability Framework
In this section, we recall the basics of the Universal Composability (UC) framework of Canetti [5], and later we discuss the Everlasting Universal Composability framework^{Footnote 2} following [27] closely. We refer the reader to [5, 27] for a more comprehensive description.
Basics of the UC Framework
Our description of the UC framework follows [27] closely. Composing two individually secure protocols does not necessarily yield a secure protocol: security is not, in general, preserved under composition. The Universal Composability (UC) framework of Canetti [5] analyses the security of composed protocols and provides security guarantees that survive composition.
The main idea of this security notion is to compare a real protocol \(\pi \) with some ideal protocol \(\rho \). In most cases, this ideal protocol \(\rho \) will consist of a single machine, a so-called ideal functionality. Such a functionality can be seen as a trusted machine that implements the intended behaviour of the protocol. For example, a functionality \({\mathcal {F}}\) for commitment would expect a value m from a party C. Upon receipt of that value, the recipient R would be notified by \({\mathcal {F}}\) that C has committed to some value (but \({\mathcal {F}}\) would not reveal that value). When C sends an unveil request to \({\mathcal {F}}\), the value m will be sent to R (but \({\mathcal {F}}\) will not allow C to unveil a different value).
Given a real protocol \(\pi \) and an ideal protocol \(\rho \), we say that \(\pi \) realizes \(\rho \) (also called “implements”, “emulates”, or “is as secure as”) if for any adversary \({\mathcal {A}}\) attacking the protocol \(\pi \) there is a simulator \({\mathcal {S}}\) performing an attack on the ideal protocol \(\rho \) such that no environment \({\mathcal {Z}}\) can distinguish between \(\pi \) running with \({\mathcal {A}}\) and \(\rho \) running with \({\mathcal {S}}\). Here, \({\mathcal {Z}}\) may choose the protocol inputs and read the protocol outputs and may communicate with the adversary or simulator (but \({\mathcal {Z}}\) is, of course, not informed whether it communicates with the adversary or the simulator). Two features distinguish this notion from weaker, stand-alone definitions: first, the environment may communicate with the adversary during the protocol execution, and second, the environment does not need to choose the inputs at the beginning of the protocol execution; it may adaptively send inputs to the protocol parties at any time, and it may choose these inputs depending upon the outputs and the communication with the adversary. These features are the reason for the very strong composability properties of the UC model.
Network Execution In the UC framework, all protocol machines and functionalities, as well as the adversary, the simulator and the environment are modelled as interactive Turing machines (ITM). Throughout a protocol execution, an integer k called the security parameter is accessible to all parties. At the beginning of the execution of a network consisting of \(\pi \), \({\mathcal {A}}\), and \({\mathcal {Z}}\), the environment \({\mathcal {Z}}\) is invoked with an initial input z. From then on, every machine M that is activated can send a message m to a single other machine \(M'\). Then that machine \(M'\) is activated and given the message m and the id of the originator M. If in some activation a machine does not send a message, the environment \({\mathcal {Z}}\) is activated again. Additionally, the environment may issue corruption requests for some party P. From then on, the machines corresponding to the party P are controlled by the adversary (i.e. it can send and receive messages in the name of that machine, and it can read the internal state of that machine). Finally, at some point the environment \({\mathcal {Z}}\) gives some output m which can be an arbitrary string. By \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\) we denote the distribution of that output m on security parameter k and initial input z. Analogously, we define \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\) for an execution involving the protocol \(\rho \), the simulator \({\mathcal {S}}\), and the environment \({\mathcal {Z}}\).
We distinguish two different flavours of corruption. We speak of static corruption if the environment \({\mathcal {Z}}\) may only send corruption requests before the start of the protocol, and of adaptive corruption if \({\mathcal {Z}}\) may send corruption requests at any time during the protocol, even depending on messages learned during the execution. In this paper, we restrict our attention to the less strict security model using static corruption, and we leave the case of adaptive corruption as an interesting open problem.
UC Definitions If the ideal protocol \(\rho \) consists of an ideal functionality \({\mathcal {F}}\), for technical reasons we assume the presence of so-called dummy parties that forward messages between the environment \({\mathcal {Z}}\) and the functionality \({\mathcal {F}}\). For example, assume that \({\mathcal {F}}\) is a commitment functionality. In an ideal execution, \({\mathcal {Z}}\) would send a value m to the party C (since it does not know of \({\mathcal {F}}\) and therefore will not send to \({\mathcal {F}}\) directly). Then, C would forward m to \({\mathcal {F}}\). Then, \({\mathcal {F}}\) notifies R that a commitment has been performed. This notification is then forwarded to \({\mathcal {Z}}\). First, with these dummy parties we have, at least syntactically, the same messages as in the real execution: \({\mathcal {Z}}\) sends m to C and receives a commit notification from R. Second, the dummy parties allow a meaningful corruption in the ideal model. If \({\mathcal {Z}}\) corrupts some party P, in the ideal model the effect would be that the simulator controls the corresponding dummy party P and thus can read and modify messages to and from the functionality \({\mathcal {F}}\) in the name of P. Thus, if we write \(\mathrm {EXC}_{{\mathcal {F}},{\mathcal {S}},{\mathcal {Z}}}\), this is essentially an abbreviation for \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) where the ideal protocol \(\rho \) consists of the functionality \({\mathcal {F}}\) and the dummy parties. Having defined the families of random variables \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\) and \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\) we can now define security via indistinguishability.
Definition 10
(Universal Composability [5]) A protocol \(\pi \) UC realizes a protocol \(\rho \) if, for any polynomial-time adversary \({\mathcal {A}}\), there exists a polynomial-time simulator \({\mathcal {S}}\), such that for any polynomial-time environment \({\mathcal {Z}}\), the families of random variables \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\) and \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\) are computationally indistinguishable.
Note that in this definition, it is also possible to only consider environments \({\mathcal {Z}}\) that give a single bit of output. As demonstrated in [5], this gives rise to an equivalent definition. However, in the case of everlasting UC below, this will not be the case, so we stress that we allow \({\mathcal {Z}}\) to output arbitrary strings. In particular, an environment machine can output its complete view.
Natural variants of this definition are statistical UC, where all machines (environment, adversary, simulator) are computationally unbounded and the families of random variables are required to be statistically indistinguishable, and perfect UC, where all machines are computationally unbounded and the families of random variables are required to have the same distribution. In these cases, one often additionally requires that if the adversary is polynomial time, so is the simulator.
Composition For some protocol \(\sigma \), and some protocol \(\pi \), by \(\sigma ^\pi \) we denote the protocol where \(\sigma \) invokes (up to polynomially many) instances of \(\pi \).^{Footnote 3} That is, in \(\sigma ^\pi \) the machines from \(\sigma \) and from \(\pi \) run together in one network, and the machines from \(\sigma \) access the inputs and outputs of \(\pi \). (In particular, \({\mathcal {Z}}\) then talks only to \(\sigma \) and not to the subprotocol \(\pi \) directly.) A typical situation would be that \(\sigma ^{\mathcal {F}}\) is some protocol that makes use of some ideal functionality \({\mathcal {F}}\) (say, a commitment) and then \(\sigma ^\pi \) would be the protocol resulting from implementing that functionality by some protocol \(\pi \) (say, a commitment protocol). One would hope that such an implementation results in a secure protocol \(\sigma ^\pi \). That is, if \(\pi \) realizes \({\mathcal {F}}\) and \(\sigma ^{\mathcal {F}}\) realizes \({\mathcal {G}}\), then \(\sigma ^\pi \) realizes \({\mathcal {G}}\). Fortunately, this is the case:
Theorem 11
(Universal Composition Theorem [5]) Let \(\pi \), \(\rho \), and \(\sigma \) be polynomialtime protocols. Assume that \(\pi \) UC realizes \(\rho \). Then, \(\sigma ^\pi \) UC realizes \(\sigma ^\rho \).
The intuitive reason for this theorem is that \(\sigma \) can be considered as an environment for \(\pi \) or \(\rho \), respectively. Since Definition 10 guarantees that \(\pi \) and \(\rho \) are indistinguishable by any environment, security follows. In a typical application of this theorem, one would first show that \(\pi \) realizes \({\mathcal {F}}\) and that \(\sigma ^{\mathcal {F}}\) realizes \({\mathcal {G}}\). Then using the composition theorem, one gets that \(\sigma ^\pi \) realizes \(\sigma ^{\mathcal {F}}\) which in turn realizes \({\mathcal {G}}\). Since the realizes relation is transitive (as can be easily seen from Definition 10), it follows that \(\sigma ^\pi \) realizes \({\mathcal {G}}\).
This composition theorem is the main feature of the UC framework. It allows us to build up protocols from elementary building blocks. This greatly increases the manageability of security proofs for large protocols. Furthermore, it guarantees that the protocol can be used in arbitrary contexts. Analogous theorems also hold for statistical and perfect UC.
Dummy adversary When proving the security of a given protocol in the UC setting, a useful tool is the so-called dummy adversary. The dummy adversary \(\tilde{\mathcal {A}}\) is the adversary that simply forwards messages between the environment \({\mathcal {Z}}\) and the protocol (i.e. it is a puppet of the environment that does whatever \({\mathcal {Z}}\) instructs it to do). In [5], it is shown that UC security with respect to the dummy adversary implies UC security. The intuitive reason is that since \(\tilde{\mathcal {A}}\) does whatever \({\mathcal {Z}}\) instructs it to do, it can perform arbitrary attacks and is therefore the worst-case adversary given the right environment (remember that we quantify over all environments).
We very roughly sketch the proof idea. Let protocols \(\pi \) and \(\rho \) and some adversary \({\mathcal {A}}\) be given. Assume that \(\pi \) UC realizes \(\rho \) with respect to the dummy adversary \(\tilde{\mathcal {A}}\). We want to show that \(\pi \) UC realizes \(\rho \) with respect to \({\mathcal {A}}\). Given an environment \({\mathcal {Z}}\), we construct an environment \({\mathcal {Z}}_{\mathcal {A}}\) which simulates \({\mathcal {Z}}\) and \({\mathcal {A}}\). Note that an execution of \(\mathrm {EXC}_{\pi ,\tilde{\mathcal {A}},{\mathcal {Z}}_{\mathcal {A}}}\) is essentially the same as \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}\) (up to a regrouping of machines). Then there is a simulator \(\tilde{\mathcal {S}}\) such that \(\mathrm {EXC}_{\pi ,\tilde{\mathcal {A}},{\mathcal {Z}}_{\mathcal {A}}}\) and \(\mathrm {EXC}_{\rho ,\tilde{\mathcal {S}},{\mathcal {Z}}_{\mathcal {A}}}\) are indistinguishable. Let \({\mathcal {S}}\) be the simulator that internally simulates the machines \({\mathcal {A}}\) and \(\tilde{\mathcal {S}}\) and forwards all actions performed by \({\mathcal {A}}\) as instructions to \(\tilde{\mathcal {S}}\) (remember that \(\tilde{\mathcal {S}}\) simulates \(\tilde{\mathcal {A}}\), so it expects such instructions). Then, \(\mathrm {EXC}_{\rho ,\tilde{\mathcal {S}},{\mathcal {Z}}_{\mathcal {A}}}\) is again the same as \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) up to a regrouping of machines. Summarizing, we have that \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}\) and \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) are indistinguishable.
A nice property of this technique is that it is quite robust with respect to changes in the definition of UC security. For example, it also holds with respect to statistical and perfect UC security, as well as with respect to the notion of Everlasting UC from [27].
Everlasting UC Security
In this section, we present our definitions of everlasting UC security. Our formalization builds on Canetti’s Universal Composability framework [5] and extends the notion of everlasting/long-term security due to Müller-Quade and Unruh [27]. Loosely speaking, everlasting security guarantees the “standard” notion of UC security during the execution of the protocol. This means that the security is guaranteed against polynomially bounded adversaries. Therefore, standard computational assumptions, such as the hardness of the decisional Diffie–Hellman problem and the existence of one-way functions, can be used as hardness assumptions. However, after the execution of the protocol, we no longer assume that these assumptions hold, because they may be broken in the future. Müller-Quade and Unruh model this by letting the distinguisher become unbounded after the execution of the protocol. Everlasting security guarantees security and confidentiality in this setting.
They showed in [27] that everlasting UC commitments cannot be realized, not even in the common reference string (CRS) or the public-key infrastructure (PKI) model.^{Footnote 4} The fact that everlasting UC commitments cannot be constructed in the CRS model shows a strong separation between the everlasting UC and the computational UC security notion, because commitment schemes do exist (under standard assumptions) in the computational UC security model [7]. The stark impossibility result of Müller-Quade and Unruh motivated the use of other trust assumptions, such as trusted pseudorandom functions and signature cards [27]. It is not hard to see that everlasting UC security is strictly stronger than computational UC security, since the adversary is allowed to become unbounded after the execution of the protocol, and it is strictly weaker than statistical UC security, since the adversary is polynomially bounded during the run of the protocol.
Defining Everlasting UC Security.
The formalization of [27] is surprisingly simple and only extends the original UC definition by the requirement that the execution of the real protocol and of the functionality cannot be distinguished by an unbounded entity after the execution of the protocol is over (the protocol itself being run by efficient adversaries and environments). Formally, this means that the output of the environment in the real and ideal worlds is statistically close. A comprehensive discussion is given in [27], and we briefly recall the definitions.
Definition 12
(Everlasting UC) A protocol \(\pi \) everlastingly UC-realizes an ideal protocol \(\rho \) if, for any PPT adversary \(\mathcal {A} \), there exists a PPT simulator \(\mathcal {S}\) such that, for any PPT environment \(\mathcal {Z}\), the families \(\mathrm {EXC}_{\pi ,\mathcal {A} ,\mathcal {Z}}(k,z)\) and \(\mathrm {EXC}_{\rho ,\mathcal {S},\mathcal {Z}}(k,z)\) are statistically indistinguishable.
In [27], the authors show that the composition theorem from [5] also holds with respect to Definition 12. A shortcoming of Definition 12, when applied to the token model, is that the distinguisher has no access to the hardware token after it becomes unbounded. Another issue is that Definition 12 does not model the case where the hardware assumption may be broken in the long term.
Everlasting UC Security with Hardware Assumptions
We define a notion of everlasting security which allows the participants in a protocol to leak information in the long term.
With the exception of the environment \(\mathcal {Z}\) and the adversary \(\mathcal {A} \), we give each instance of a Turing machine (ITI for short) in the protocol an additional output tape that we call the long-term output tape. We modify the execution model to handle the long-term tapes as follows. At the end of the execution of the protocol (i.e. when the environment \(\mathcal {Z}\) produces its output m), the adversary \(\mathcal {A} \) is invoked once again, this time with all long-term tapes, and produces an output a. We define the output of the new execution model to be \({\text { EXC }}' := (m,a)\). A formal definition follows.
Definition 13
(Everlasting UC with Long-term Tapes) A protocol \(\pi \) everlastingly UC realizes an ideal protocol \(\rho \) if, for any PPT adversary \(\mathcal {A} \), there exists a PPT simulator \(\mathcal {S}\) such that, for any PPT environment \(\mathcal {Z}\), the families \({\text {EXC}}'_{\pi ,\mathcal {A} ,\mathcal {Z}}(k,z)\) and \({\text {EXC}}'_{\rho ,\mathcal {S},\mathcal {Z}}(k,z)\) are statistically indistinguishable.
In Definition 13, the distinguisher does not get the long-term tapes directly; instead, the tapes go through the adversary. The real adversary \(\mathcal {A} \) can, w.l.o.g., pass the tapes unchanged to the distinguisher (i.e. the dummy adversary), whereas the simulator \(\mathcal {S}\) can replace the long-term tapes by any simulated a of its choice. We point out that Definition 13 is equivalent to Definition 12 when none of the ITIs in \(\pi \) or \(\rho \) have long-term output tapes.
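To make the modified execution model concrete, the following toy Python sketch (all class and function names are illustrative, not part of the formal framework) runs a protocol, collects the environment's output m, then invokes the adversary once more on the long-term tapes to produce a, yielding EXC′ := (m, a):

```python
class ITI:
    """A protocol machine with an additional long-term output tape."""
    def __init__(self, name):
        self.name = name
        self.long_term_tape = []

def everlasting_execution(itis, run_protocol, environment_output, adversary):
    # Ordinary execution: the environment produces its output m.
    run_protocol(itis)
    m = environment_output()
    # After the execution, the adversary is invoked once again,
    # this time with all long-term tapes, and produces an output a.
    tapes = {iti.name: list(iti.long_term_tape) for iti in itis}
    a = adversary(tapes)
    return (m, a)  # EXC' := (m, a)

# Toy run: one party writes a transcript line to its long-term tape.
p = ITI("P")
exc = everlasting_execution(
    itis=[p],
    run_protocol=lambda itis: itis[0].long_term_tape.append("session transcript"),
    environment_output=lambda: "env-output",
    adversary=lambda tapes: tapes,  # dummy adversary forwards tapes unchanged
)
assert exc == ("env-output", {"P": ["session transcript"]})
```

In an ideal execution, the simulator would be free to substitute the dictionary of tapes with simulated contents of its choice before they reach the distinguisher.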
It is easy to show that the composition theorem from [27] carries over to our setting: the long-term tapes of the honest parties are also given to the adversary/simulator at the end of the protocol execution; however, the simulator (when communicating with the environment) can replace them with values of its choice. Formally, this means that the long-term tapes are just a message sent from the protocol to the adversary (in the same way as, e.g. the state is sent in the case of adaptive corruption), and consequently, when proving the composition theorem, these messages are handled in exactly the same way as the messages resulting from adaptive corruption.
Functionalities
In this section, we define some commonly used functionalities that we will need for our results.
CRS The first functionality is the common reference string (CRS). Intuitively, the CRS is a string sampled by some trusted party from a given distribution \(\mathcal {D}\) and known to all parties prior to the start of the protocol.
Definition 14
(Common reference string (CRS)) Let \(\mathcal {D}_\uplambda \) (\(\uplambda \in {\mathbb {N}}\)) be an efficiently samplable distribution on \(\{0,1\}^*\). At the beginning of the protocol, the functionality \({\mathcal {F}}_{\mathrm {CRS}}^\mathcal {D}\) chooses a value r according to the distribution \(\mathcal {D}_\uplambda \) (where \(\uplambda \) is the security parameter) and sends r to the adversary and all parties \(P_i\).
Multiple commitment Here, we recall the functionality for a commitment scheme. Throughout the following description, we implicitly assume that the attacker is informed about each invocation and that the attacker controls the output of the functionality. We omit those messages from the description of the functionalities for readability. Note that to securely realize this functionality, a protocol must guarantee independence among different executions of the commitment protocol.
Definition 15
(Multiple Commitment) Let S and R be two parties, where we call S the sender and R the receiver. The functionality \(\mathcal {F}_{\textsf {MCOM}}^{S \rightarrow R,\ell }\) behaves as follows: upon the command \((\mathtt {commit},\mathsf {sid}, x)\), where \(x \in \{0,1\}^{\ell (\uplambda )}\), from S, send the message (\(\mathtt {committed},\mathsf {sid}\)) to R. Upon command (\(\mathtt {unveil},\mathsf {sid}\)) from S, send \((\mathtt {unveiled},\mathsf {sid},x)\) to R (with the matching \(\mathsf {sid}\)). Several commands \((\mathtt {commit})\) or \((\mathtt {unveil})\) with the same \(\mathsf {sid}\) are ignored.
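The behaviour of \(\mathcal {F}_{\textsf {MCOM}}\) can be sketched as a simple state machine. The Python below is an illustrative toy model (not part of the formal framework, and it omits the messages to the attacker mentioned above); it enforces that repeated commit/unveil commands with the same sid are ignored and that the unveiled value equals the committed one:

```python
class FMcom:
    """Toy model of the multiple-commitment functionality F_MCOM."""
    def __init__(self):
        self.committed = {}   # sid -> committed value x
        self.unveiled = set()

    def commit(self, sid, x):
        if sid in self.committed:
            return None  # repeated commit with the same sid is ignored
        self.committed[sid] = x
        return ("committed", sid)  # sent to the receiver R (x stays hidden)

    def unveil(self, sid):
        if sid not in self.committed or sid in self.unveiled:
            return None  # unknown sid, or repeated unveil: ignored
        self.unveiled.add(sid)
        # R receives the originally committed value: binding by construction
        return ("unveiled", sid, self.committed[sid])

f = FMcom()
assert f.commit("sid1", "m") == ("committed", "sid1")
assert f.commit("sid1", "m'") is None          # second commit ignored
assert f.unveil("sid1") == ("unveiled", "sid1", "m")
assert f.unveil("sid1") is None                # second unveil ignored
```

Because each sid indexes an independent entry of the state, distinct commitment executions are independent by construction, which is exactly what a realizing protocol must guarantee.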
Oblivious Transfer Functionality The oblivious transfer functionality allows the receiver party to select a bit b and the sender party to send two messages \(m_0\) and \(m_1\) to the receiver in such a way that the sender never learns the bit b the receiver chose, and the receiver learns only the message \(m_b\), and nothing about \(m_{1-b}\).
Definition 16
(Oblivious Transfer (OT)) Let R and S be two parties. The functionality \(\mathcal {F}_{\textsf {OT}}^{S \rightarrow R,\ell }\) behaves as follows: upon receiving the command \(( transfer ,{{\mathsf {id}}},m_0,m_1)\) from S, with \(m_0,m_1 \in \{0,1\}^{\ell (\uplambda )}\), send the message \(( received ,{{\mathsf {id}}})\) to R; party R replies with \(( choice ,{{\mathsf {id}}},b)\), for \(b \in \{0,1\}\). Upon receiving \(( choice ,{{\mathsf {id}}},b)\) from R, send \(( deliver ,{{\mathsf {id}}},m_b)\) to R. We call S the sender, and R the receiver.
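The message flow of \(\mathcal {F}_{\textsf {OT}}\) can likewise be sketched as a toy state machine (an illustrative model of our own, not part of the formal framework): the sender deposits both messages, the receiver is notified, and only \(m_b\) is delivered.

```python
class FOt:
    """Toy model of the oblivious-transfer functionality F_OT."""
    def __init__(self):
        self.pending = {}  # id -> (m0, m1) deposited by the sender S

    def transfer(self, oid, m0, m1):
        # From S: store the messages and notify R; S learns nothing further,
        # in particular not the choice bit b.
        self.pending[oid] = (m0, m1)
        return ("received", oid)  # sent to R

    def choice(self, oid, b):
        # From R: deliver m_b only; m_{1-b} is discarded and never reaches R.
        assert b in (0, 1)
        m0, m1 = self.pending.pop(oid)
        return ("deliver", oid, (m0, m1)[b])

f = FOt()
assert f.transfer("id1", "msg0", "msg1") == ("received", "id1")
assert f.choice("id1", 1) == ("deliver", "id1", "msg1")
```

Note how the two privacy guarantees fall out of the interface: the return value of transfer carries no b, and the return value of choice carries no \(m_{1-b}\).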
Remark 17
Looking ahead, we note that we cannot define the protocol of Sect. 6 in the \(\mathcal {F}_{\textsf {OT}}\)-hybrid model or in the \(\mathcal {F}_{\textsf {MCOM}}\)-hybrid model. The former is because the protocol of Sect. 6 requires an OT with the additional property of statistical receiver privacy, a property that not all OT protocols realizing the \(\mathcal {F}_{\textsf {OT}}\) functionality satisfy. The latter is because the protocol requires a commitment scheme with the additional property of statistical hiding, which not all commitment schemes realizing the \(\mathcal {F}_{\textsf {MCOM}}\) functionality satisfy. Moreover, the protocol of Sect. 6 requires proving statements about the contents of a commitment, and as shown by [9] this is not possible using a UC commitment functionality.
Physical Assumptions
The functionality \(\mathcal {F}_{\textsf {HToken}}\) described in this section models generic fully malicious hardware tokens, including PUFs. A fully malicious hardware token is one whose state is not bounded a priori, whose creator can install arbitrary code inside of it, and which can encapsulate an arbitrary number of (possibly fully malicious) tokens, called children, inside of itself. As far as we know, this is the first functionality to integrate tamper-proof hardware tokens with PUFs, allowing us to design protocols that are agnostic about the type of hardware token used, as the functionality can be instantiated with any of the former. Moreover, in the particular case of PUFs, our model extends the PUFs-inside-PUF model of [4] to the more general case of Tokens-inside-Token.^{Footnote 5} We handle encapsulated tokens in the functionality by allowing the parent token (i.e. the token that contains other token(s)) to have oracle access to all its children during its evaluation; we believe that token encapsulation models a realistic capability of an adversary and that including it in our model is important for the soundness of the security analysis. We also note that \(\mathcal {F}_{\textsf {HToken}}\) is not PPT; this is because the functionality does not impose a restriction on the efficiency of the malicious code.
The functionality \(\mathcal {F}_{\textsf {HToken}}\) allows tokens to be transferred among parties by invoking \(\mathtt {handover}\); a token can only be queried, by invoking \(\mathtt {query}\), by the party that currently owns it. Malicious tokens can be created by the adversary and may contain other tokens inside of them. In contrast to [8], the adversary can “unwrap” encapsulated tokens by invoking \(\mathtt {openup}\) and read malicious tokens’ state by invoking \(\mathtt {readout}\).
Physically Uncloneable Functions (PUFs)
In a nutshell, a PUF is a noisy source of randomness. It is a hardware device that, upon physical stimuli, called challenges, produces physical outputs (which are measured), called responses. The response measured for each challenge of the PUF is unpredictable, in the sense that it is hard to predict the response of the PUF on a given challenge without first measuring the response of the PUF on the same (or a similar) challenge. When a PUF receives the same physical stimulus more than once, the responses produced may not be exactly equal (due to the added noise), but the Hamming distance between the responses is bounded by a parameter of the PUF.
A family of PUFs is a pair of algorithms \((\mathsf {PUFSamp},\mathsf {PUFEval})\), not necessarily PPT. \(\mathsf {PUFSamp}\) models the manufacturing process of the PUF: on input the security parameter, it draws an index \(\sigma \), which represents an instance of a PUF that satisfies the security definitions for the security parameter (that we define later). \(\mathsf {PUFEval}\) models a physical stimulus applied to the PUF: upon a challenge input x, it invokes the PUF with x and measures the response y, which is returned as the output. A response y returned by algorithm \(\mathsf {PUFEval}\) is a bit string of length \({{\mathsf {rg}}}\). A formal definition follows.
Definition 18
(Physically Uncloneable Functions) Let \({{\mathsf {rg}}}\) denote the size (in bits) of the range of the PUF responses of a PUF family. The pair \(\mathsf {PUF}= (\mathsf {PUFSamp},\mathsf {PUFEval})\) is a PUF family if it satisfies the following properties.

Sampling. Let \(\mathcal {I}_\uplambda \) be an index set. On input the security parameter \(\uplambda \), the stateless and unbounded sampling algorithm \(\mathsf {PUFSamp}\) outputs an index \(\sigma \in \mathcal {I}_\uplambda \). Each \(\sigma \in \mathcal {I}_\uplambda \) corresponds to a family of distributions \(\mathcal {D}_\sigma \). For each challenge \(x \in \{0,1\}^\uplambda \), \(\mathcal {D}_\sigma \) contains a distribution \(\mathcal {D}_\sigma (x)\) on \(\{0,1\}^{{{\mathsf {rg}}}(\uplambda )}\). It is not required that \(\mathsf {PUFSamp}\) is a PPT algorithm.

Evaluation. On input \((1^\uplambda ,\sigma , x)\), where \(x \in \{0,1\}^\uplambda \), the evaluation algorithm \(\mathsf {PUFEval}\) outputs a response \(y \in \{0,1\}^{{{\mathsf {rg}}}(\uplambda )}\) according to the distribution \(\mathcal {D}_\sigma (x)\). It is not required that \(\mathsf {PUFEval}\) is a PPT algorithm.
Additionally, we require the PUF family to satisfy a reproducibility notion that we describe next. Reproducibility informally says that the responses produced by the PUF when queried on the same challenge are always close.
Definition 19
(PUF Reproducibility) A PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\), for security parameter \(\uplambda \), is \(\delta \)-reproducible if for \(\sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda )\), \(x \leftarrow \{0,1\}^\uplambda \) chosen uniformly at random, and \(y \leftarrow \mathsf {PUFEval}(\sigma ,x)\), \(y' \leftarrow \mathsf {PUFEval}(\sigma ,x)\), we have that,
\(\Pr \left[ {\mathsf {hd}}(y,y') \le \delta \right] \ge 1 - \mathsf {negl}(\uplambda ),\)
for a negligible function \(\mathsf {negl}(\uplambda )\).
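A \(\delta \)-reproducible PUF can be mimicked in software by a fixed "true" response plus bounded measurement noise. In the toy sketch below (illustrative parameters of our choosing; SHA-256 merely stands in for the physical response map), each measurement flips at most \(\lfloor \delta /2\rfloor \) response bits, so any two measurements of the same challenge are within Hamming distance \(\delta \):

```python
import hashlib, random

RG = 64  # response length in bits

def base_response(sigma, x):
    """Deterministic 'true' response of PUF instance sigma on challenge x."""
    h = hashlib.sha256(sigma + b"|" + x).digest()
    return [(h[i // 8] >> (i % 8)) & 1 for i in range(RG)]

def puf_eval(sigma, x, delta=4, rng=random):
    """Noisy measurement: flip at most delta//2 response bits, so any two
    measurements of the same challenge are within Hamming distance delta."""
    y = base_response(sigma, x)
    for i in rng.sample(range(RG), rng.randrange(delta // 2 + 1)):
        y[i] ^= 1
    return y

def hd(a, b):
    """Hamming distance between two bit lists."""
    return sum(u != v for u, v in zip(a, b))

sigma, x = b"instance-0", b"challenge"
y1, y2 = puf_eval(sigma, x), puf_eval(sigma, x)
assert hd(y1, y2) <= 4  # delta-reproducible by construction
```

In a real PUF the bound holds only with overwhelming probability rather than with certainty, which is what the negligible slack in the definition accounts for.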
Many PUF definitions in the literature [3, 4, 12, 30] have had problems with the superpolynomial nature of PUFs. In particular, the possibility of PUFs solving hard computational problems, such as discrete logarithms or factoring, was not excluded, or was excluded in an awkward way. We take our inspiration from the idea that a PUF can be thought of as a function selected at random from a very large set, and therefore cannot be succinctly described; however, it can be efficiently simulated using lazy sampling. Conceptually, we will only consider PUFs that can be efficiently simulated by a stateful machine.
Definition 20
A polynomial-time (stateful) interactive Turing machine \({\mathsf {M}} = (\mathsf {MSamp},\mathsf {MEval})\) is a lazy sampler for \((\mathsf {PUFSamp}, \mathsf {PUFEval})\) if for all sequences \((x_1, \ldots , x_n)\) of inputs, the random variables \((Y_1, \ldots , Y_n)\) and \((Y'_1, \ldots , Y'_n)\), defined by the following experiments, are identically distributed:
\(\sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda );\quad Y_i \leftarrow \mathsf {PUFEval}(1^\uplambda ,\sigma ,x_i) \text { for } i = 1,\ldots ,n\)
\({{\mathsf {st}}}\leftarrow \mathsf {MSamp}(1^\uplambda );\quad (Y'_i,{{\mathsf {st}}}) \leftarrow \mathsf {MEval}({{\mathsf {st}}},x_i) \text { for } i = 1,\ldots ,n\)
where \({{\mathsf {st}}}\) denotes the initial state of the TM \({\mathsf {M}}\).
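Lazy sampling is straightforward to realize in code: the stateful machine memoizes a response the first time each challenge is queried, so a function drawn from a doubly exponential family is simulated in polynomial time. A minimal, noise-free Python sketch (class and parameter names are our own illustration):

```python
import secrets

class LazyPUF:
    """Polynomial-time stateful simulator (MSamp/MEval) of an ideal PUF:
    each fresh challenge gets a uniformly random response, which is stored
    in the state st and replayed on repeated queries."""
    def __init__(self, rg_bytes=16):
        # MSamp: initialize the state st (an empty challenge->response table)
        self.rg_bytes = rg_bytes
        self.st = {}

    def eval(self, x):
        # MEval: lazily sample on first query, replay afterwards
        if x not in self.st:
            self.st[x] = secrets.token_bytes(self.rg_bytes)
        return self.st[x]

puf = LazyPUF()
y = puf.eval(b"challenge-1")
assert puf.eval(b"challenge-1") == y   # consistent on repeated queries
assert puf.eval(b"challenge-2") != y   # distinct with overwhelming probability
```

The table `st` is exactly the state mentioned in Definition 20: its size grows only with the number of queries actually made, never with the (exponential) size of the challenge space.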
Security of PUFs The security of PUFs has been mainly defined by the properties of unpredictability and uncloneability [1, 3, 4, 24, 30]. In Sect. 4.1.1, we introduce a novel unpredictability notion for PUFs, and we later discuss why the standard unpredictability notion is not suited for our setting.
Fully adaptive PUF Unpredictability.
In contrast to the standard definition of unpredictability [3], in this work we require a stronger notion of adaptive unpredictability. Loosely speaking, unpredictability should capture the fact that it is hard to learn the response of the PUF on a given challenge without first querying the PUF on a similar challenge. Note that this implies uncloneability: if one could clone the PUF, one could use the cloned PUF to predict the answers of the original PUF. We express the similarity of inputs/outputs of the PUF in terms of the Hamming distance \({\mathsf {hd}}\); however, our results can be easily adapted to other metrics.
Definition 21
(Adaptive PUF Unpredictability) A PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\), for security parameter \(\uplambda \), is \((\gamma ,\delta )\)-unpredictable if for all adversaries \(\mathcal {A} \), there exists a negligible function \(\mathsf {negl}(\uplambda )\), such that,
\(\Pr \left[ {\mathsf {hd}}\left( y, \mathsf {PUFEval}(1^\uplambda ,\sigma ,x)\right) \le \delta \ \wedge \ \forall q \in \mathcal {Q}: {\mathsf {hd}}(x,q) \ge \gamma \right] \le \mathsf {negl}(\uplambda ),\)
where \(\sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda )\), \(x \leftarrow \{0,1\}^\uplambda \) is chosen uniformly at random, \(y \leftarrow \mathcal {A} ^{\mathsf {PUFEval}(1^\uplambda ,\sigma ,\cdot )}(1^\uplambda ,x)\), and \(\mathcal {Q}\) is the list of all oracle queries made by \(\mathcal {A} \).
The adaptive PUF unpredictability says that the only way to learn the output of \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,x)\) is to query the PUF on x (or something close enough to x). Our definition captures this by allowing adversary \(\mathcal {A} \) to know the challenge x before having oracle access to \(\mathsf {PUFEval}\).
The unsuitability of the standard PUF unpredictability of [3]. We first recall the standard unpredictability definition of [3]. As the definition itself is based on the notion of average minentropy, for convenience, we present that first.
Definition 22
(Average Min-entropy [3]) The average min-entropy of the measurement \(\mathsf {PUFEval}(q)\) conditioned on the measurements of challenges \(\mathcal {Q} = \{q_1,\ldots ,q_{\mathsf {poly}(\uplambda )}\}\) for the PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) is defined by
\({\tilde{\mathsf {H}}}_{\infty }\left( \mathsf {PUFEval}(q) \mid \mathsf {PUFEval}(\mathcal {Q})\right) := -\log \Big ( {\mathbb {E}}_{\bar{y} \leftarrow \mathsf {PUFEval}(\mathcal {Q})}\Big [ \max _{y} \Pr \left[ \mathsf {PUFEval}(q) = y \mid \mathsf {PUFEval}(\mathcal {Q}) = \bar{y}\right] \Big ] \Big ),\)
where the probability is taken over the choice of \(\sigma \) from \(\mathcal {I}_\uplambda \) and the choice of possible PUF responses on challenge q. The term \(\mathsf {PUFEval}(\mathcal {Q})\) denotes the sequence of random variables \(\mathsf {PUFEval}(q_1),\ldots , \mathsf {PUFEval}(q_{\mathsf {poly}(\uplambda )})\), each corresponding to an evaluation of the PUF on challenge \(q_k\), for \(1 \le k \le \mathsf {poly}(\uplambda )\).
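For a finite toy distribution, average min-entropy can be computed directly as \(-\log _2 {\mathbb {E}}_{z}\big [\max _y \Pr [Y=y \mid Z=z]\big ]\). The sketch below (illustrative numbers, not PUF data) checks this on a small joint distribution:

```python
import math
from collections import defaultdict

def avg_min_entropy(joint):
    """joint: dict mapping (y, z) -> probability.  Returns the average
    min-entropy H~_inf(Y | Z) = -log2( E_z [ max_y Pr[Y = y | Z = z] ] )."""
    pz = defaultdict(float)
    for (y, z), p in joint.items():
        pz[z] += p
    # E_z[max_y Pr[Y=y|Z=z]] = sum_z max_y Pr[Y=y, Z=z]
    best_guess = 0.0
    for z in pz:
        best_guess += max(p for (y, zz), p in joint.items() if zz == z)
    return -math.log2(best_guess)

# Y uniform on {0,1} and independent of Z: the best guess succeeds w.p. 1/2,
# so the average min-entropy is exactly 1 bit.
joint = {(0, "z"): 0.25, (1, "z"): 0.25, (0, "z'"): 0.25, (1, "z'"): 0.25}
assert abs(avg_min_entropy(joint) - 1.0) < 1e-9
```

The quantity \(2^{-{\tilde{\mathsf {H}}}_{\infty }}\) is precisely the success probability of the best predictor of Y that sees Z, which is why the definition is the right measure of residual unpredictability.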
Definition 23
(PUF Unpredictability [3]) A \(({{\mathsf {rg}}}, \delta )\)-PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) for security parameter \(\uplambda \) is \((\gamma (\uplambda ), {\mathsf {m}}(\uplambda ))\)-unpredictable if for any \(q \in \{0,1\}^\uplambda \) and challenge list \(\mathcal {Q} = \{q_1,\ldots ,q_{\mathsf {poly}(\uplambda )}\}\), one has that, if for all \(1 \le k \le \mathsf {poly}(\uplambda )\) the Hamming distance satisfies \({\mathsf {hd}}(q,q_k) \ge \gamma (\uplambda )\), then the average min-entropy satisfies \({\tilde{\mathsf {H}}}_{\infty }(\mathsf {PUFEval}(q) \mid \mathsf {PUFEval}(\mathcal {Q})) \ge {\mathsf {m}}(\uplambda )\), where \(\mathsf {PUFEval}(\mathcal {Q})\) denotes the sequence of random variables \(\mathsf {PUFEval}(q_1),\ldots ,\mathsf {PUFEval}(q_{\mathsf {poly}(\uplambda )})\), each corresponding to an evaluation of the PUF on challenge \(q_k\). Such a PUF family is called a \(({{\mathsf {rg}}}, \delta , \gamma , {\mathsf {m}})\)-PUF family.
We now argue why Definition 23 is not suited for our setting. We present a PUF family that satisfies Definition 23 and yet allows an adversary to predict the response of the PUF on a challenge never queried to the PUF (and far apart from all previously queried challenges). We prove the following theorem next.
Theorem 24
There exists a PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) that satisfies Definition 23 (with \({\mathsf {m}}> 0\)) and for which there exists a PPT adversary \(\mathcal {A} \) that can predict with probability 1 the output of the PUF on an input far from every input previously queried to the PUF (thereby contradicting Definition 21).
Proof
Let \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) be a PUF family with \((n+1)\)-bit challenges and n-bit responses. We construct the family \(\mathsf {PUF}\) as follows:

– \(\mathsf {PUFSamp}(1^\uplambda )\): Sample a uniform \(x^* \leftarrow \{0,1\}^n\) and a uniformly random function \(f: \{0,1\}^n \rightarrow \{0,1\}^n\). Return \(\sigma :=(x^*,f)\).

– \(\mathsf {PUFEval}(1^\uplambda ,\sigma , x)\):

  – Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,0^{n+1})\), output \(x^*\).

  – Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,0\Vert m)\) with \(m\ne 0^n\), output f(m).

  – Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,1\Vert m)\), output \(f(m \oplus x^*)\).
We first show how an adversary can predict with probability 1 the output of a PUF from the family described above on a fresh input. Given some arbitrary fresh challenge input \(b\Vert m\), the adversary can find the corresponding response \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,b\Vert m)\), without ever querying the PUF on \(b\Vert m\), by doing the following: Compute \(x^*:=\mathsf {PUFEval}(1^\uplambda ,\sigma ,0^{n+1})\) and compute \(y:=\mathsf {PUFEval}(1^\uplambda ,\sigma ,{\bar{b}}\Vert m\oplus x^*)\). Note that both queries are far apart from \(b\Vert m\), yet the adversary learns \(y = \mathsf {PUFEval}(1^\uplambda ,{\sigma }, b\Vert m) = \mathsf {PUFEval}(1^\uplambda ,\sigma ,{\bar{b}}\Vert m\oplus x^*)\).
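As an informal sanity check, the construction and the prediction attack can be sketched as follows (a toy model: the random function f is emulated with a keyed hash, parameters are illustrative, and all names are ours):

```python
import os
import hashlib

N = 16  # response length n in bits; challenges are (n+1)-bit strings b||m

def puf_samp():
    """PUFSamp: sample x* uniformly and fix a 'random' function f.

    The random function f is modelled lazily via a keyed hash; this is an
    illustration of the construction, not a physical PUF.
    """
    x_star = int.from_bytes(os.urandom(N // 8), "big")
    key = os.urandom(16)
    def f(m):  # f: n bits -> n bits
        digest = hashlib.sha256(key + m.to_bytes(N // 8, "big")).digest()
        return int.from_bytes(digest[: N // 8], "big")
    return (x_star, f)

def puf_eval(sigma, b, m):
    """PUFEval on challenge b||m (b a bit, m an n-bit value)."""
    x_star, f = sigma
    if b == 0 and m == 0:
        return x_star        # the special challenge 0^{n+1} leaks x*
    if b == 0:
        return f(m)          # 0||m maps to f(m)
    return f(m ^ x_star)     # 1||m maps to f(m XOR x*)

# The prediction attack from the proof: learn PUFEval(b||m) without querying it.
sigma = puf_samp()
x_star_leaked = puf_eval(sigma, 0, 0)              # query 0^{n+1}
b, m = 0, 0x1234
prediction = puf_eval(sigma, 1 - b, m ^ x_star_leaked)  # query \bar{b} || (m XOR x*)
assert prediction == puf_eval(sigma, b, m)         # predicted without querying b||m
```

The two queries made by the adversary (\(0^{n+1}\) and \({\bar{b}}\Vert m\oplus x^*\)) both differ from the target challenge \(b\Vert m\) in the first bit, mirroring the argument in the proof.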
Now we show that the PUF family described above satisfies Definition 23.^{Footnote 6} Fix any polynomial-size challenge list \(\mathcal {Q} = \{q_1,\ldots ,q_{\kappa -1}\}\) and any challenge query \(q_{\kappa }\) such that, for all \(k\in [\kappa -1]\), \({\mathsf {hd}}(q_{\kappa },q_k) \ge 1\), which is clearly the minimal requirement. Since f is a random function, \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,q)\) has maximal average min-entropy, unless the PUF is queried on two inputs \((q_{i}, q_j)\) that form a collision for f. Note that this happens only if \(q_i \oplus q_j = 1 \Vert x^*\). Thus, all we need to show is that, for any fixed set of queries \(\{q_1,\ldots ,q_{\kappa }\}\), the probability that \(q_i \oplus q_j = 1 \Vert x^*\) for some pair (i, j) is negligible over the random choice of \(x^*\). This holds because
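The bound can be sketched as follows (our reconstruction, with our constants): for a fixed pair \((i,j)\), the event \(q_i \oplus q_j = 1\Vert x^*\) happens with probability at most \(2^{-n}\) over the uniform choice of \(x^*\), and there are fewer than \(\kappa ^2\) pairs, so

```latex
\Pr_{x^*}\big[\exists\, i<j \,:\; q_i \oplus q_j = 1 \Vert x^*\big]
\;\le\; \kappa^2 \cdot 2^{-n}
```

The same asymptotic bound also follows from \(1-(1-2^{-n})^{\kappa ^2}\) via the Bernoulli inequality \((1-x)^r \ge 1-rx\), matching the step invoked in the text.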
by applying the Bernoulli inequality. The above expression approaches 0 exponentially fast, as n grows. This concludes our proof. \(\square \)
Contrasting our unpredictability definition with the one of [3]. The motivation behind our newly proposed adaptive unpredictability notion (Definition 21) is that the standard PUF unpredictability notion of [3] implicitly assumes that PUFs depend only on random physical factors (typically introduced during manufacturing); in particular, it does not capture families of PUFs that have some programmability built in, allowing one to predict the output of the PUF on an input by querying a completely different input. What our new PUF unpredictability notion explicitly captures is that a “good” PUF must depend solely on random physical factors and, in particular, cannot have any form of programmability. On a more philosophical level, we believe that our new notion is what was meant to be modelled as a property of PUFs from the start. Since PUFs are inherently randomized devices that are specifically built to be unpredictable and uncontrollable, a PUF family such as the one described above should not be considered a “good” PUF family; however, the previous notion fails to capture this fact.^{Footnote 7}
Overall, our new definition of unpredictability does not hinder in any way the progress and development of new real-world PUFs, but merely addresses a technical oversight in the previous unpredictability notion. Therefore, we conjecture that most real-world PUFs that satisfy the unpredictability notion of [3] will also satisfy our unpredictability notion, since real PUFs are inherently randomized physical devices built to be unpredictable and uncontrollable.
Impossibility of Everlasting OT with Malicious Hardware
In this section, we prove the impossibility of realizing everlasting secure oblivious transfer (OT) in the hardware token model, even in the presence of a trusted setup. The result carries over immediately to any secure computation protocol due to the completeness of OT [23]. We consider honest tokens that are stateful but non-erasable (Definition 25), while the tokens produced by the adversary may be malicious but cannot encapsulate other tokens (note that this restriction on malicious tokens only makes our result stronger, as the impossibility holds even against a more limited adversary). The adversary \(\mathcal {A} \) is PPT during the execution of the protocol, but becomes unbounded after the execution is over (i.e. everlasting security). This extends the seminal result of Goyal et al. [18], which shows the impossibility of statistically (as opposed to everlastingly) UC-secure oblivious transfer from stateless (as opposed to non-erasable) tokens. We stress, however, that our negative result does not contradict the work of Döttling et al. [13, 14], since they assume honest tokens to be non-resettable or bounded-resettable (i.e. tokens cannot be reset to a previous state, or can be reset only up to an a priori bound), whereas for our result to hold the tokens must be non-erasable.
In the following, we prove the main theorem of this section. The result holds under the assumption that the token scheduling is fixed a priori, which captures most of the known protocols for secure computation [13, 14, 20]. The scheduling of the tokens determines the exchange of the tokens among the parties. We stress that we do not impose any restriction on which party holds each hardware token at the end of the execution. For a formal definition of OT, we refer the reader to Sect. 2.2. We first define non-erasability for hardware tokens.
Definition 25
(Non-erasable hardware token) A (stateful) hardware token is said to be non-erasable if any state ever recorded by the token can be efficiently retrieved.
Note in particular that stateless tokens are trivially non-erasable, as they cannot keep any state.
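As a toy illustration (class and method names are ours), a non-erasable token can be thought of as a stateful machine with an append-only state log, so that a party reading the token out later recovers every intermediate state:

```python
class NonErasableToken:
    """Toy model of a stateful but non-erasable hardware token.

    Every state the token ever records is appended to `history`; `read_out`
    models the (efficient) retrieval of all past states required by the
    definition. A stateless token is the degenerate case where `program`
    never changes the state.
    """
    def __init__(self, program, initial_state):
        self.program = program            # (state, query) -> (new_state, answer)
        self.history = [initial_state]    # append-only: nothing is ever erased

    def query(self, q):
        new_state, answer = self.program(self.history[-1], q)
        self.history.append(new_state)
        return answer

    def read_out(self):
        return list(self.history)

# A counter token: answers with how many times it has been queried.
token = NonErasableToken(lambda s, q: (s + 1, s + 1), 0)
token.query("a"); token.query("b")
assert token.read_out() == [0, 1, 2]   # every recorded state is retrievable
```

This is also the capability exploited by the quasi-semi-honest adversaries in the proof below, which keep a log of all queries ever made to the token.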
Theorem 26
Let \(\Pi \) be a hardware-token-based everlasting OT protocol between Alice (i.e. the sender) and Bob (i.e. the receiver) where the honest tokens are non-erasable and the scheduling of the tokens is fixed. Then, at least one of the following holds:

There exists an everlasting adversary \(\mathcal {S} '\) that uses malicious and stateful hardware tokens such that \(\mathbf {Adv}_{\mathcal {S} '}^\Pi \ge \epsilon (\uplambda )\), or

there exists an everlasting adversary \(\mathcal {R} '\) that uses malicious and stateful hardware tokens such that \(\mathbf {Adv}_{\mathcal {R} '}^\Pi \ge \epsilon (\uplambda )\),
for some nonnegligible function \(\epsilon (\uplambda )\).
Proof
The proof consists of the following sequence of modified simulations. Let the game \({\mathfrak {G}}_{0}\) define an everlastingly secure OT protocol for \(\mathcal {S} \) and \(\mathcal {R} \). Then, by assumption, we have that for all everlasting adversaries \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that
We define a quasi-semi-honest adversary to be an adversary that behaves semi-honestly but keeps a log of all queries ever made to the hardware token (which is possible by the non-erasable token assumption). Let the game \({\mathfrak {G}}_{1}\) define an everlastingly secure OT protocol where \(\mathcal {S} '\) and \(\mathcal {R} '\) are quasi-semi-honest. Since we are strictly reducing the capabilities of the adversaries and the tokens are non-erasable, we can state the following lemma.
Lemma 27
For all quasi-semi-honest \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that
Let \({\mathfrak {G}}_{2}\) be the same as \({\mathfrak {G}}_{1}\) except that whenever \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) queries a token from \(\mathcal {R} \) (resp. \(\mathcal {S} \)) that will return to \(\mathcal {R} \) (resp. \(\mathcal {S} \)), instead of making that query to the token, \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) queries \(\mathcal {R} \) (resp. \(\mathcal {S} \)) directly, who answers the query as the token would have. Since the distribution of the answers to the queries does not change, we can state the following lemma.
Lemma 28
For all quasi-semi-honest \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that
Let game \({\mathfrak {G}}_{3}\) be exactly the same as \({\mathfrak {G}}_{2}\) except that whenever \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) sends a token to \(\mathcal {R} \) (resp. \(\mathcal {S} \)) that will not return to \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)), then \(\mathcal {S} '\) sends a description of the token instead. Since we consider everlasting adversaries, we assume that after the execution of the protocol all tokens can be read out. Therefore, both parties will have the descriptions of all tokens, even of those that are not sent to the other party. Note that at this point there are no hardware tokens involved, only descriptions of tokens. Therefore, a quasi-semi-honest adversary is identical to a semi-honest everlasting one.^{Footnote 8}
Lemma 29
For all semi-honest everlasting \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that
We point out that a semi-honest unbounded adversary \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) is also a semi-honest everlasting adversary, since during the execution of the protocol it performs only the honest (PPT) actions. We are now in a position to state the final lemma.
Lemma 30
For all semi-honest unbounded \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that
It was shown in [2] that it is not possible to build an OT protocol secure against semi-honest unbounded adversaries (even in the presence of a trusted setup), which gives us a contradiction and concludes our proof. \(\square \)
Everlasting Commitment from Fully Malicious PUFs
In this section, we build an everlastingly secure UC commitment scheme from fully malicious PUFs. Let \(\mathcal {C}= (\mathsf {Com},\mathsf {Open})\) be a statistically hiding UC-secure commitment scheme, let \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) be a 1-out-of-2 statistically receiver-private UC-secure OT, let \(f: \{0,1\}^\uplambda \rightarrow \{0,1\}^\uplambda \) be a one-way permutation, and let \(H: \{0,1\}^{d(\uplambda )} \times \{0,1\}^{\ell (\uplambda )} \rightarrow \{0,1\}^c\) be a strong randomness extractor, where d and \(\ell \) are two polynomials such that \(H\) allows for \((\ell (\uplambda )-c)\)-many bits of entropy loss, for \(c:= |m\Vert \mathsf {decom}|\). Let
and let
We denote by \((\mathcal {P}_1, \mathcal {V}_1)\) and \((\mathcal {P}_2, \mathcal {V}_2)\) the statistically witness-indistinguishable arguments of knowledge (SWIAoK) for the relations \(R_1\) and \(R_2\), respectively. Our commitment scheme is described next.
Theorem 31
Let

\(\mathcal {C}=(\mathsf {Com},\mathsf {Open})\) be a statistically hiding and computationally binding commitment scheme,

\((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) be a UC-secure 1-out-of-2 statistically receiver-private oblivious transfer,

\(H: \{0,1\}^{d(\uplambda )} \times \{0,1\}^{\ell (\uplambda )} \rightarrow \{0,1\}^c\) be a strong randomness extractor, where d and \(\ell \) are two polynomials such that \(H\) allows for \((\ell (\uplambda )-c)\)-many bits of entropy loss,

and let \((\mathcal {P}_1, \mathcal {V}_1)\) and \((\mathcal {P}_2,\mathcal {V}_2)\) be SWIAoK systems for the relations \(R_1\) and \(R_2\), respectively.
Then, the protocol above everlastingly UC-realizes the functionality \(\mathcal {F}_{\textsf {MCOM}}\) in the \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\)-hybrid model.
Proof
We consider the cases of the two corrupted parties separately. The proof consists of the description of a series of hybrids, and we argue about the indistinguishability of neighbouring experiments. Then, we describe a simulator that reproduces the real-world protocol to the corrupted party while executing the protocol in interaction with the ideal functionality.
Corrupted Bob (recipient) Consider the following sequence of hybrids, with \(\mathcal {H}_0\) being the protocol as defined above in interaction with \(\mathcal {A} \) and \(\mathcal {Z}\):
\(\mathcal {H}_1\): Defined exactly as \(\mathcal {H}_0\) except that, for all executions of the commitment and opening routines, the SWIAoKs for \(R_1\) and \(R_2\) are computed using the knowledge of x, the preimage of y. This is possible as the \(\mathcal {F}_{\textsf {CRS}}\) functionality is simulated by the simulator, which samples \(y=f(x)\) such that it knows x. In order to avoid a trivial distinguishing attack, we additionally require Alice to explicitly check that \(\forall i \in [\ell (\uplambda )]: {\mathsf {hd}}(\beta _i, q_i^{k_i}) \le \delta \) and abort (prior to computing the SWIAoK) if the condition is not satisfied. The two protocols are statistically indistinguishable due to the statistical witness indistinguishability of the SWIAoK scheme. In particular, for all unbounded distinguishers \(\mathcal {D}\) querying the functionality polynomially many times, it holds that
\(\mathcal {H}_2, \dots , \mathcal {H}_{\ell (\uplambda ) + 1}\): Each \(\mathcal {H}_{1+i}\) for \(i \in [\ell (\uplambda )]\) is defined exactly as \(\mathcal {H}_1\) except that in all of the sessions Alice uses the simulator of the statistically receiver-private OT protocol (that implements the \(\mathcal {F}_{\textsf {OT}}\) functionality) to run the first i instances of the oblivious transfer. Note that the simulator (using the knowledge of the CRS trapdoor) returns both of the inputs of the sender, in this case \((p_i^0, p_i^1)\). By the statistical receiver privacy of the oblivious transfer, the simulated execution is statistically close to an honest run, and therefore we have that for all unbounded distinguishers \(\mathcal {D}\) that query \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) polynomially many times:
\(\mathcal {H}_{\ell (\uplambda ) + 2}, \dots , \mathcal {H}_{2 \cdot \ell (\uplambda ) + 1}\): Each \(\mathcal {H}_{\ell (\uplambda )+1+i}\) for \(i \in [\ell (\uplambda )]\) is defined exactly as \(\mathcal {H}_{\ell (\uplambda ) + 1}\) with the difference that, in all of the sessions, the first i commitments \(\mathsf {com}_i\) are computed as \(\mathsf {Com}(r_i)\), for some random \(r_i\) in the appropriate domain. Note that the corresponding decommitments are no longer used in the computation of the SWIAoK. Therefore, the statistically hiding property of the commitment scheme guarantees that the neighbouring simulations are statistically close for all unbounded \(\mathcal {D}\). That is
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2}\): Let n be a bound on the total number of sessions. The hybrid \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2}\) is defined as the previous one except that Alice chooses random values \(k_1, \dots , k_n \in \{0,1\}^{\ell (\uplambda )}\) at the beginning of the execution. In the ith session, Alice uses the value \(k_i\) instead of a fresh k in the interaction with the functionality \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\). The changes between the two hybrids are only syntactic, and therefore it holds that
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 3}\): Let f be the following deterministic stateless oracle: f is initialized with the initial state of the physical token sent by Bob, the tuples \((k_1, \dots , k_n)\), and a random tape. On input an index i, a set \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\), and a set \(\{p_j\}_{j\in [\ell (\uplambda )]}\), the oracle f returns 1 if and only if for all \(j \in [\ell (\uplambda )] :{\mathsf {hd}}(q_j^{k_{i,j}}, \beta _j) \le \delta \), where \(\beta _j\) is the output of the token on input \(p_j\). In this hybrid, Alice no longer queries the token but computes a valid opening for the ith commitment only if f returns 1 on inputs i, \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\), and \(\{p_j\}_{j\in [\ell (\uplambda )]}\), where these elements are the ones defined in the ith session. If f returns 0, then Alice interrupts all of the executions simultaneously. Note that this modification does not affect the view of the adversary: since Alice keeps ownership of the token, the state of the token is not included in the long-term tapes. Also note that Alice never uses the values \(\beta _j\), except for the check mentioned above. Thus, we have that
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\): Let \(\mathsf {F}_i\) be the set of tuples \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\) and \(\{p_j\}_{j\in [\ell (\uplambda )]}\) such that f on input i and those tuples returns 0. Note that \(\mathsf {F}_i\) is well defined as soon as Bob sends the token to Alice. In the ith session, Alice no longer queries f but just checks whether \((\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]} ,\{p_j\}_{j\in [\ell (\uplambda )]}) \in \mathsf {F}_i\) and aborts all of the executions if this is the case. We denote by \(\gamma \in \{1,\dots , n, \infty \}\) the session in which Alice aborts. Here, the two hybrids need to be equivalent only up to the first query to f that returns 0, thus
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\): Defined exactly as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\) with the difference that for all sessions i of the protocol \(\omega _i\) is computed as \(\mathcal {H}_i \oplus m\Vert \mathsf {decom}\), where \(\mathcal {H}_i\) is a random string in \(\{ 0,1\}^c\). To prove the indistinguishability of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\), we define the intermediate hybrids \((\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0}, \dots , \mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n})\), where in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i}\) the strings \((\omega _1, \dots , \omega _i)\) are computed as in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\), whereas the strings \((\omega _{i+1}, \dots , \omega _n)\) are computed as in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4}\). Note that \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0} = \mathcal {H}_{2 \cdot \ell (\uplambda ) +4}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n} = \mathcal {H}_{2 \cdot \ell (\uplambda ) +5}\). By definition, the hybrids \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i-1}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i}\) differ only in the value of \(\mathcal {H}_i\) (which is \(H(\mathsf {seed}, k_i)\) in the former case and a random string in the latter). Note that \(k_i\) is used only in the computation of \(\mathcal {H}_i\) and that the only variable that depends on \(k_i\) is \(\gamma \). Since \(\gamma \) is drawn from a set of size \(n+1\), we can bound from above the entropy loss of \(k_i\) by \(\log (n+1)\)-many bits. Recall that \(n+1 \ll 2^\uplambda \); therefore, we have that \(\ell (\uplambda ) - c > \log (n+1)\), for an appropriate choice of \(\ell (\cdot )\). Hence, by the strong extraction property of \(H\) we have that
Since the distance between \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n}\) is the sum of the bounds obtained by the leftover hash lemma [21], we can conclude that
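The leftover-hash-lemma step can be sketched explicitly (our reconstruction, assuming \(H\) is instantiated from a universal hash family; the exact constants depend on the extractor used): since the only variable depending on \(k_i\) is \(\gamma \in \{1,\dots ,n,\infty \}\),

```latex
\tilde{\mathsf{H}}_\infty\big(k_i \,\big|\, \gamma\big) \;\ge\; \ell(\uplambda) - \log(n+1),
\qquad
\Delta\Big(\big(\mathsf{seed},\, H(\mathsf{seed}, k_i)\big);\; \big(\mathsf{seed},\, U_c\big)\Big)
\;\le\; \frac{1}{2}\sqrt{2^{\,c - \ell(\uplambda) + \log(n+1)}}
```

which is negligible whenever \(\ell (\uplambda ) - c \ge \log (n+1) + \omega (\log \uplambda )\).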
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\): Defined as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\) except that in all sessions \(\mathsf {com}\) is a commitment to a random string s. Note that in the execution of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\) the value of \(\mathsf {decom}\) is masked by a random string \(\mathcal {H}_i\), and it is therefore information-theoretically hidden to the eyes of the adversary. By the statistically hiding property of \(\mathsf {Com}\), we have that for all unbounded distinguishers \(\mathcal {A} \) the following holds:
\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7}\): Defined as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\) except that Alice opens the commitment to an arbitrary message \(m'\). We observe that the execution of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\) is completely independent of the message m, except when m is sent to Bob in the clear in the opening phase. Therefore, we have that for all unbounded distinguishers \(\mathcal {D}\) that query the functionality polynomially many times:
\({\mathsf {S}} \): We now define \({\mathsf {S}} \) as a simulator in the ideal world that engages the adversary in the simulation of a protocol when queried by the ideal functionality on input \((\mathsf {committed}, \mathsf {sid})\). The interaction of \({\mathsf {S}} \) with the adversary works exactly as specified in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7}\), with the only difference that the message \(m'\) is set equal to x, where \((\mathsf {unveil}, \mathsf {sid}, x)\) is the message sent by the ideal functionality with the same value of \(\mathsf {sid}\). Since the simulation is unchanged to the eyes of the adversary, we have that
By transitivity, we have that \(\mathcal {H}_0\) is statistically indistinguishable from \({\mathsf {S}} \) to the eyes of the environment \(\mathcal {Z}\). We can conclude that our protocol everlastingly UC-realizes the commitment functionality \(\mathcal {F}_{\textsf {MCOM}}\) for any corrupted Bob. We stress that we allow Bob to be computationally unbounded and only require that the number of sessions be bounded by some polynomial in \(\uplambda \).
Corrupted Alice (committer) Let \(\mathcal {H}_0\) be the execution of the protocol as described above in interaction with \(\mathcal {A} \) and \(\mathcal {Z}\). We define the following sequence of hybrids:
\(\mathcal {H}_1\): Defined as \(\mathcal {H}_0\) except that the following algorithm is executed locally by Bob at the end of the commit phase of each session, in addition to Bob’s normal actions.
\(\mathcal {E}(1^\uplambda )\): Let K be a bitstring of length \(\ell (\uplambda )\). The extractor parses the list of queries \(\mathcal {Q}\) that Alice sent to \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) before the last message of Bob in the commitment phase. Then, for each \(\mathcal {Q}_j \in \mathcal {Q}\), it checks whether there exist \(i \in [\ell (\uplambda )]\) and \(z \in \{0,1\}\) such that \({\mathsf {hd}}(\mathcal {Q}_j , p_i^z) \le \gamma \), where \(p_i^z\) is defined as in the original protocol. If this is the case, the extractor sets \(K_i = z\); if the value of \(K_i\) is already set to a different bit, the extractor aborts. If at the end of the list \(\mathcal {Q}\) there is some i such that \(K_i\) is undefined, the extractor aborts. Otherwise it parses \(\omega \oplus H(\mathsf {seed},K)\) as \(m'\Vert \mathsf {decom}\) and returns \((m',\mathsf {decom})\).
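The bitwise reconstruction performed by \(\mathcal {E}\) can be sketched as follows (a toy model in which challenges and the strings \(p_i^z\) are small integers; the helper names are ours, and the final unmasking of \(\omega \) is omitted):

```python
def hamming_distance(a, b):
    """Hamming distance between two equal-length bitstrings encoded as ints."""
    return bin(a ^ b).count("1")

def extract_key(queries, p, ell, gamma):
    """Reconstruct K in {0,1}^ell from Alice's PUF-query log.

    `queries` is the list Q of challenges sent to the token functionality and
    `p[i][z]` plays the role of p_i^z from the protocol. Each query close (in
    Hamming distance) to some p_i^z fixes K_i = z; a conflicting or missing
    bit makes the extractor abort (return None), as in the description of E.
    """
    K = [None] * ell
    for q in queries:
        for i in range(ell):
            for z in (0, 1):
                if hamming_distance(q, p[i][z]) <= gamma:
                    if K[i] is not None and K[i] != z:
                        return None   # K_i set to two different bits: abort
                    K[i] = z
    if any(bit is None for bit in K):
        return None                   # some K_i undefined: abort
    return K

# Two positions; the queries hit p_0^1 and p_1^0 exactly.
p = [(0b0000, 0b1111), (0b0011, 0b1100)]
assert extract_key([0b1111, 0b0011], p, ell=2, gamma=0) == [1, 0]
```

On success, the real extractor would then compute \(\omega \oplus H(\mathsf {seed},K)\) and parse the result as \(m'\Vert \mathsf {decom}\).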
Note that Bob does not use the output of \(\mathcal {E}\) and therefore, for all distinguishers \(\mathcal {D}\), we have that:
\(\mathcal {H}_2\): Let \(\mathcal {H}_2\) be defined as \(\mathcal {H}_1\) except that Bob outputs the message \(m'\) as computed by \(\mathcal {E}\) instead of the message m sent by Alice in the opening phase. For the indistinguishability of \(\mathcal {H}_1\) and \(\mathcal {H}_2\), we have to argue that if the opening of the adversary succeeds, then the extraction succeeds with overwhelming probability, i.e. \(m = m'\). For ease of exposition, we assume that the sessions are enumerated with a unique identifier, e.g. according to their initialization order. Let \(\mathsf {Abort}\) be the event that there exists a session \(j \in [n]\) such that the simulator aborts but the opening is successful. We are going to prove the following lemma.
Lemma 32
.
Proof
We define \(\mathsf {NoUnique}\) as the event that there exists a session \(j \in [n]\) such that the corresponding \(K^j\) (as defined in \(\mathcal {E}\)) is not uniquely defined but the commitment is successful, i.e. there exists some \(i \in [\ell (\uplambda )]\) for which the extractor attempts to set \(K^j_i\) both to 0 and to 1. Let \(\mathsf {NoDefined}\) be the event that there exist \(j \in [n]\) and \(i \in [\ell (\uplambda )]\) such that \(K^j_i\) is undefined at the end of the iteration, but the corresponding opening phase is successful. By definition of \(\mathcal {E}\), we have that
The rest of the proof proceeds as follows:

We show through a series of intermediate hybrids \((\mathcal {H}^\mathsf {U}_0, \dots , \mathcal {H}^\mathsf {U}_3)\) that the event \(\mathsf {NoUnique}\) happens only with negligible probability.

We show through a series of intermediate hybrids \((\mathcal {H}^\mathsf {D}_0, \dots , \mathcal {H}^\mathsf {D}_4)\) that the event \(\mathsf {NoDefined}\) happens only with negligible probability.

The proof of the lemma follows by a union bound.
We first derive a bound for the probability that the event \(\mathsf {NoUnique}\) happens. Consider the following sequence of hybrids.
\(\mathcal {H}^\mathsf {U}_0:\) The experiment \(\mathcal {H}^\mathsf {U}_0\) is identical to \(\mathcal {H}_2\) except that we sample some \(j^*\) from the identifiers associated with all sessions and some \(i^*\) from \([\ell (\uplambda )]\). Let n be a bound on the total number of sessions and let \(\mathsf {NoUnique}(j^*, i^*)\) be the event that \(\mathsf {NoUnique}\) happens in session \(j^*\) for the \(i^*\)th bit. Since \(j^*\) and \(i^*\) are chosen at random, we have that
\(\mathcal {H}^\mathsf {U}_1:\) The experiment \(\mathcal {H}^\mathsf {U}_1\) is defined as \(\mathcal {H}^\mathsf {U}_0\) except that it stops before the execution of the \(i^*\)th OT in session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {U}_0\) at that point; the experiment then does the following:

Continue the execution of \(\mathcal {H}^\mathsf {U}_0\) from \(\textsf {st}\).

Input/output all the \(i^*\)th OT messages from session \(j^*\).

Simulate all other messages internally.
The experiment sets the bit \(b=1\) if and only if the commitment of the \(j^*\)th session succeeds. Let \(\mathsf {NoUnique}^*(j^*, i^*)\) be the event that \(K_{i^*}^{j^*}\) is not uniquely defined. Since the execution does not change to the eyes of Alice, we have that
\(\mathcal {H}^\mathsf {U}_2:\) Defined as \(\mathcal {H}^\mathsf {U}_1\) except that the CRS for the OT is sampled in extraction mode. By the computational indistinguishability of the CRS, it holds that
\(\mathcal {H}^\mathsf {U}_3:\) Defined as \(\mathcal {H}^\mathsf {U}_2\) except that the extractor for the OT is used in the \(i^*\)th OT of the \(j^*\)th session. The experiment sets \(b=1\) if the simulation succeeds. Recall that the simulator outputs the choice bit of the receiver \(b_{i^*}\) and expects as input the value \(p_{i^*}^{b_{i^*}}\). Note that this implies that the value \(p_{i^*}^{1-b_{i^*}}\) is information-theoretically hidden to the eyes of Alice. Also note that
by the simulation security of the OT we can rewrite
thus by Jensen’s inequality we have that
As we argued before, the value of \(p_{i^*}^{1-b_{i^*}}\) is information-theoretically hidden to the eyes of Alice. However, by definition of \(\mathsf {NoUnique}^*(j^*, i^*)\), Alice queries both \((p_{i^*}^0, p_{i^*}^1)\) to the functionality \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\). It follows that we can bound the probability of the event \(\mathsf {NoUnique}^*(j^*, i^*)\) by a function negligible in the security parameter. Therefore, we have that
In order to show a bound on the probability that \(\mathsf {NoDefined}\) happens in \(\mathcal {H}_2\), we define another sequence of hybrids.
\(\mathcal {H}^\mathsf {D}_0:\) The experiment \(\mathcal {H}^\mathsf {D}_0\) is identical to \(\mathcal {H}_2\) except that we sample some \(j^*\) from the identifiers associated with all sessions. Let n be a bound on the total number of sessions and let \(\mathsf {NoDefined}(j^*)\) be the event that \(\mathsf {NoDefined}\) happens in session \(j^*\). Since \(j^*\) is chosen at random, we have that
\(\mathcal {H}^\mathsf {D}_1:\) The experiment \(\mathcal {H}^\mathsf {D}_1\) is defined as \(\mathcal {H}^\mathsf {D}_0\) except that it stops before the execution of the SWIAoK in the commitment of session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {D}_0\), under the assumption that no machine keeps a copy of the preimage x after generating \(\mathsf {crs} \). Let \(\mathcal {P}^*\) be the following algorithm:

Continue the execution of \(\mathcal {H}^\mathsf {D}_0\) from \(\textsf {st}\).

Input/output all the SWIAoK messages from session \(j^*\).

Simulate all other messages internally.
The experiment \(\mathcal {H}^\mathsf {D}_1\) runs \(b \leftarrow \left\langle \mathcal {P}^*(\textsf {st};r), \mathcal {V}_1(y,\{com_i\}_{i\in [\ell (\uplambda )]})\right\rangle \), where \(\{com_i\}_{i\in [\ell (\uplambda )]}\) are the messages sent by Alice in session \(j^*\). Let \(\mathsf {NoDefined}^*(j^*)\) be the event that there exists some \(K_i^{j^*}\) that is undefined before the execution of the SWIAoK in session \(j^*\). Then, we have that
by definition of \(\mathsf {NoDefined}(j^*)\).
\(\mathcal {H}^\mathsf {D}_2:\) Defined as \(\mathcal {H}^\mathsf {D}_1\) except that the extractor \((\{m_i,\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]}, x) \leftarrow \mathsf {Ext}^{\mathcal {P}^*(\textsf {st})} (y, \{\mathsf {com}_i\}_{i\in [\ell (\uplambda )]};r)\) is executed instead of the SWIAoK. Note that
by the extraction property of the SWIAoK we can rewrite
By Jensen’s inequality, we can conclude that
\(\mathcal {H}^\mathsf {D}_3:\) The experiment \(\mathcal {H}^\mathsf {D}_3\) is defined as \(\mathcal {H}^\mathsf {D}_2\) except that it stops before the execution of the SWIAoK in the opening of session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {D}_2\), under the assumption that no machine keeps a copy of the trapdoor x after generating \(\mathsf {crs} \). Let \(\mathcal {P}^*\) be the following algorithm:

Continue the execution of \(\mathcal {H}^\mathsf {D}_2\) from \(\textsf {st}\).

Input/output all the SWIAoK messages from the opening phase of session \(j^*\).

Simulate all other messages internally.
The experiment \(\mathcal {H}^\mathsf {D}_3\) runs \(b \leftarrow \langle \mathcal {P}^*(\textsf {st};r), \mathcal {V}_2(y,\mathsf {seed}, m, \mathsf {com}, \omega , \{com_i\}_{i\in [\ell (\uplambda )]}, \{q_i^0, q_i^1\}_{i\in [\ell (\uplambda )]})\rangle \), where the input of the verification algorithm corresponds to the messages exchanged in session \(j^*\). To the eyes of Alice, this change is only syntactic, and therefore we have that
\(\mathcal {H}^\mathsf {D}_4:\) Defined as \(\mathcal {H}^\mathsf {D}_3\) except that the extractor \((k, \{\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]}, \mathsf {decom}, x) \leftarrow \mathsf {Ext}^{\mathcal {P}^*(\textsf {st})} (y,\mathsf {seed}, m, \mathsf {com}, \omega , \{com_i\}_{i\in [\ell (\uplambda )]}, \{q_i^0, q_i^1\}_{i\in [\ell (\uplambda )]};r)\) is executed instead of the SWIAoK. An argument identical to the one above can be used to show that
In the following analysis, we ignore the case where either of the two extracted witnesses is a valid trapdoor for the common reference string y, as this event can easily be ruled out by a reduction to the one-wayness of f. Let \(\beta _i \leftarrow \mathsf {Open}(\mathsf {com}_i, \mathsf {decom}_i')\). It is now enough to observe that the successful termination of the protocol implies that for all \(i \in [\ell (\uplambda )]\) we have \({\mathsf {hd}}(q_i^{k_i}, \beta _i) \le \delta \), for some \(k = k_1\dots k_{\ell (\uplambda )}\). By definition of \(\mathsf {NoDefined}^*(j^*)\), there exists some \(i^*\) such that \(\mathcal {A} \) never queried any \(p'\) to \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) with \({\mathsf {hd}}(p',p_{i^*}^0) \le \gamma \) or \({\mathsf {hd}}(p',p_{i^*}^1) \le \gamma \) before seeing the last message of the commitment phase. By the unpredictability of the PUF, it follows that . We can conclude that there exists an \(i^*\) such that \(\beta _{i^*} \ne m_{i^*}\). Since \(\mathsf {decom}_{i^*}\) and \(\mathsf {decom}_{i^*}'\) are valid opening information for \(m_{i^*}\) and \(\beta _{i^*}\), respectively, we can derive the following bound
by the binding property of the commitment scheme. Therefore, we can conclude that
This proves our lemma. \(\square \)
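The key-decoding step used in the argument above can be sketched in code. The following is a minimal illustration, not the paper's construction: the helper names (`hamming_distance`, `decode_key_bit`) are hypothetical, and the sketch only shows how a bit \(k_i\) is determined by checking which of the two PUF responses \(q_i^0, q_i^1\) lies within Hamming distance \(\delta \) of the opened value \(\beta _i\).

```python
# Illustrative sketch (not the paper's scheme): decoding a key bit k_i by
# testing which of the two PUF responses q_i^0, q_i^1 is within Hamming
# distance delta of the opened commitment value beta_i.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length bitstrings."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def decode_key_bit(q0: bytes, q1: bytes, beta: bytes, delta: int):
    """Return the bit k_i such that hd(q_i^{k_i}, beta_i) <= delta,
    or None if the decoding is ambiguous (neither or both responses match)."""
    close0 = hamming_distance(q0, beta) <= delta
    close1 = hamming_distance(q1, beta) <= delta
    if close0 == close1:  # no match, or both match: bit not well defined
        return None
    return 0 if close0 else 1
```

Unpredictability of the PUF is what makes the ambiguous case rare: without querying the token near both challenges, a malicious committer cannot produce a \(\beta _i\) close to both responses.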
In order to conclude our proof, we need to show that the extractor always returns a valid message-decommitment pair for the same message that Alice outputs in the opening phase. More formally, let \(\mathsf {NoExt}\) be the event that, for the output \((m', \mathsf {decom}) \leftarrow \mathcal {E}(1^\uplambda )\) of the extractor, it holds that \(m' \ne \mathsf {Open}(\mathsf {com}, \mathsf {decom})\), where \(\mathsf {com}\) is the variable sent by Alice in the same session. Additionally, let \(\mathsf {BadExt}\) be the event that the output \((m', \mathsf {decom})\) of the extractor is a valid opening for \(\mathsf {com}\) but \(m' \ne m\), where m is the message sent by Alice in the opening phase of the same session. We now argue that the probability that either \(\mathsf {NoExt}\) or \(\mathsf {BadExt}\) happens is bounded by a negligible function.
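The two failure events can be made concrete with a toy example. The sketch below is illustrative only: it models the commitment as a plain hash of message and decommitment randomness (the paper's scheme is different), and the function names are hypothetical; it merely shows how an extractor's output would be classified as \(\mathsf {NoExt}\), \(\mathsf {BadExt}\), or consistent.

```python
# Toy classification of extractor output into NoExt / BadExt, using a
# hash-based commitment com = H(m || decom) purely for concreteness.
import hashlib

def commit(m: bytes, decom: bytes) -> bytes:
    return hashlib.sha256(m + decom).digest()

def open_commitment(com: bytes, m: bytes, decom: bytes) -> bool:
    return commit(m, decom) == com

def classify(com: bytes, m_opened: bytes, m_ext: bytes, decom_ext: bytes) -> str:
    """NoExt: the extracted pair does not open com at all.
       BadExt: it opens com, but to a message different from Alice's."""
    if not open_commitment(com, m_ext, decom_ext):
        return "NoExt"
    if m_ext != m_opened:
        return "BadExt"
    return "OK"
```

For a binding commitment, triggering \(\mathsf {BadExt}\) amounts to finding a second valid opening, which is exactly why the proof reduces both events to binding.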
Lemma 33
.
Proof
Consider the sequence of games \(\mathcal {H}_0^\mathsf {D}, \dots , \mathcal {H}_4^\mathsf {D}\) as defined in the proof of Lemma 32. Let \(\mathsf {NoExt}^*(j^*)\) be the event that the algorithm \(\mathcal {E}\) returns an invalid opening for the commitment in session \(j^*\) and the extractors of the zero-knowledge proofs output a valid pair of witnesses. With an argument along the same lines as the proof of Lemma 32, we can show that
We now observe that whenever the extractor of the SWIAoK is successful, then for all \(i \in [\ell (\uplambda )]\) it holds that \(\beta _i \leftarrow \mathsf {Open}(\mathsf {com}_i, \mathsf {decom}_i')\) and that \({\mathsf {hd}}(q_i^{k_i} ,\beta _i)\le \delta \), for some \(k = k_1\dots k_{\ell (\uplambda )}\). Additionally, we have that \(H(\mathsf {seed}, k) \oplus \omega \) is valid decommitment information for \(\mathsf {com}\). By definition of \(\mathsf {NoExt}\), we have that \(m' \ne \mathsf {Open}(\mathsf {com},\mathsf {decom})\), where \((m', \mathsf {decom})\) is the output of \(\mathcal {E}\) and \(\mathsf {decom}\) is defined as \(\omega \oplus H(\mathsf {seed}, K)\). This implies that \(K \ne k\), since the function H is deterministic. Therefore, there must exist some \(i^*\) such that \(K_{i^*} \ne k_{i^*}\). By Lemma 32, we know that K is uniquely defined, and therefore Alice did not query \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) on any \(p'\) such that \({\mathsf {hd}}(p_i^{z_i},p')\le \gamma \) for \(z_i \ne K_i\). By the unpredictability of the PUF, Alice can therefore produce a value \(\delta \)-close to \(q_i^{z_i}\) with \(z_i \ne K_i\) only with negligible probability, and in particular we have that \(\beta _{i^*} \ne m_{i^*}\). Since \(\mathsf {decom}_{i^*}\) and \(\mathsf {decom}_{i^*}'\) are valid openings for \(m_{i^*}\) and \(\beta _{i^*}\) with respect to \(\mathsf {com}_{i^*}\), the probability of \(\mathsf {NoExt}^*(j^*)\) occurring in \(\mathcal {H}_4^\mathsf {D}\) can be bounded by a negligible function via the binding property of the commitment scheme. This proves the initial lemma. \(\square \)
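The masking step \(\mathsf {decom} = \omega \oplus H(\mathsf {seed}, K)\) can be illustrated as follows. This is a sketch under the assumption that H behaves like a hash with sufficiently long output, instantiated here with SHA-256; the helper names (`mask`, `unmask`) are not from the paper.

```python
# Sketch of XOR-masking a decommitment with a hash of (seed, key), mirroring
# decom = omega XOR H(seed, K). H is modeled as truncated SHA-256 here; the
# decommitment is assumed to be at most 32 bytes for simplicity.
import hashlib

def H(seed: bytes, k: bytes, out_len: int) -> bytes:
    assert out_len <= 32  # single SHA-256 block suffices for this sketch
    return hashlib.sha256(seed + k).digest()[:out_len]

def mask(seed: bytes, k: bytes, decom: bytes) -> bytes:
    """Alice's side: omega = H(seed, k) XOR decom."""
    pad = H(seed, k, len(decom))
    return bytes(a ^ b for a, b in zip(pad, decom))

def unmask(seed: bytes, k: bytes, omega: bytes) -> bytes:
    """Extractor's side: decom = omega XOR H(seed, k)."""
    return mask(seed, k, omega)  # XOR masking is an involution
```

Because H is deterministic, recovering a different decommitment than Alice's forces \(K \ne k\), which is exactly the case the unpredictability argument rules out.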
Lemma 34
.
Proof
The formal argument follows along the same lines as the proof of Lemma 33. The main observation is that the argument implies that the output of \(\mathcal {E}\) and the tuple \((m, \mathsf {decom})\), where m is sent in the clear by Alice and \(\mathsf {decom}\) is the output of the extractor for the SWIAoK, must be identical with overwhelming probability. \(\square \)
By the union bound we have that
It follows that for all sessions \(j \in [n]\) our extractor as defined above aborts only with negligible probability and outputs, with overwhelming probability, the same message that the adversary opens to. Therefore, we can conclude that
\({\mathsf {S}} \): We can now define the simulator \({\mathsf {S}} \), which is identical to \(\mathcal {H}_2\) except that the output \(m'\) of the algorithm \(\mathcal {E}\) (defined as above) is used in the message \((\mathtt {commit}, \mathsf {sid}, m')\) to the ideal functionality \(\mathcal {F}_{\textsf {MCOM}}\). The corresponding decommitment message \((\mathtt {unveil}, \mathsf {sid})\) is sent when the adversary returns a valid decommitment to some message m. Since the interaction is unchanged from the adversary's point of view, we have that
This implies that our protocol everlastingly UC-realizes the commitment functionality \(\mathcal {F}_{\textsf {MCOM}}\) for any corrupted Alice and concludes our proof. \(\square \)
Notes
We use the terms “everlasting security” and “everlasting UC security” interchangeably in this paper to describe protocols that are everlastingly secure and exhibit a composition theorem that allows a modular design of such protocols.
For simplicity, we assume throughout this work that the session ids assigned to these instances are \(\{1,\dots ,p\}\) for some polynomial p.
In the model of [4] the malicious PUFs-inside-PUFs are stateless, while \(\mathcal {F}_{\textsf {HToken}}\) allows the malicious PUFs to be stateful.
The authors of [3] discuss in Appendix C the different notions of security for PUFs and their relationships, and in particular mention that their definition of unpredictability assumes that the creation process of the PUF is not controllable.
An everlasting semi-honest adversary follows the protocol honestly, but can behave arbitrarily after the protocol run is over and it becomes unbounded.
References
F. Armknecht, D. Moriyama, A.-R. Sadeghi, M. Yung. Towards a unified security model for physically unclonable functions. in K. Sako, editor, Topics in Cryptology – CT-RSA 2016, vol. 9610 of Lecture Notes in Computer Science, San Francisco, CA, USA. (Springer, Heidelberg, 2016), pp. 271–287
D. Beaver. Correlated pseudorandomness and the complexity of private computations. in 28th Annual ACM Symposium on Theory of Computing, Philadelphia, PA, USA. (ACM Press, 1996), pp. 479–488
C. Brzuska, M. Fischlin, H. Schröder, S. Katzenbeisser. Physically uncloneable functions in the universal composition framework. in P. Rogaway, editor, Advances in Cryptology – CRYPTO 2011, vol. 6841 of Lecture Notes in Computer Science, Santa Barbara, CA, USA, (Springer, Heidelberg, Germany, 2011), pp. 51–70
S. Badrinarayanan, D. Khurana, R. Ostrovsky, I. Visconti. Unconditional UC-secure computation with (stronger-malicious) PUFs. in J.-S. Coron, J. B. Nielsen, editors, Advances in Cryptology – EUROCRYPT 2017, Part I, vol. 10210 of Lecture Notes in Computer Science, Paris, France. (Springer, Heidelberg, 2017), pp. 382–411
R. Canetti. Universally composable security: a new paradigm for cryptographic protocols. in 42nd Annual Symposium on Foundations of Computer Science, Las Vegas, NV, USA. (IEEE Computer Society Press, 2001), pp. 136–145
C. Cachin, C. Crépeau, J. Marcil. Oblivious transfer with a memory-bounded receiver. in 39th Annual Symposium on Foundations of Computer Science, Palo Alto, CA, USA. (IEEE Computer Society Press, 1998), pp. 493–502
R. Canetti, M. Fischlin. Universally composable commitments. in J. Kilian, editor, Advances in Cryptology—CRYPTO 2001, vol. 2139 of Lecture Notes in Computer Science, Santa Barbara, CA, USA (Springer, Heidelberg, Germany, 2001), pp. 19–40
N. Chandran, V. Goyal, A. Sahai. New constructions for UC secure computation using tamper-proof hardware. in N. P. Smart, editor, Advances in Cryptology – EUROCRYPT 2008, vol. 4965 of Lecture Notes in Computer Science, Istanbul, Turkey. (Springer, Heidelberg, Germany, 2008), pp. 545–562
R. Canetti, Y. Lindell, R. Ostrovsky, A. Sahai. Universally composable two-party and multi-party secure computation. in 34th Annual ACM Symposium on Theory of Computing, Montréal, Québec, Canada. (ACM Press, 2002), pp. 494–503
C. Cachin, U. M. Maurer. Unconditional security against memory-bounded adversaries. in B. S. Kaliski Jr., editor, Advances in Cryptology – CRYPTO’97, vol. 1294 of Lecture Notes in Computer Science, Santa Barbara, CA, USA. (Springer, Heidelberg, Germany, 1997), pp. 292–306
I. Damgård. Efficient concurrent zero-knowledge in the auxiliary string model. in B. Preneel, editor, Advances in Cryptology – EUROCRYPT 2000, vol. 1807 of Lecture Notes in Computer Science, Bruges, Belgium. (Springer, Heidelberg, Germany, 2000), pp. 418–430
D. Dachman-Soled, N. Fleischhacker, J. Katz, A. Lysyanskaya, D. Schröder. Feasibility and infeasibility of secure computation with malicious PUFs. in J. A. Garay, R. Gennaro, editors, Advances in Cryptology – CRYPTO 2014, Part II, vol. 8617 of Lecture Notes in Computer Science, Santa Barbara, CA, USA. (Springer, Heidelberg, Germany, 2014), pp. 405–420
N. Döttling, D. Kraschewski, J. Müller-Quade, T. Nilges. General statistically secure computation with bounded-resettable hardware tokens. in Y. Dodis, J. B. Nielsen, editors, TCC 2015: 12th Theory of Cryptography Conference, Part I, vol. 9014 of Lecture Notes in Computer Science, Warsaw, Poland. (Springer, Heidelberg, Germany, 2015), pp. 319–344
N. Döttling, D. Kraschewski, J. Müller-Quade. Unconditional and composable security using a single stateful tamper-proof hardware token. in Y. Ishai, editor, TCC 2011: 8th Theory of Cryptography Conference, vol. 6597 of Lecture Notes in Computer Science, Providence, RI, USA. (Springer, Heidelberg, Germany, 2011), pp. 164–181
S. Dziembowski, U. M. Maurer. The bare bounded-storage model: the tight bound on the storage requirement for key agreement. IEEE Trans. Inf. Theory, 54(6), 2790–2792 (2008)
I. Damgård, J. B. Nielsen. Perfect hiding and perfect binding universally composable commitment schemes with constant expansion factor. in Moti Yung, editor, Advances in Cryptology – CRYPTO 2002, vol. 2442 of Lecture Notes in Computer Science, Santa Barbara, CA, USA, August 18–22. (Springer, Heidelberg, Germany, 2002) pp. 581–596
I. Damgård, A. Scafuro. Unconditionally secure and universally composable commitments from physical assumptions. in K. Sako, P. Sarkar, editors, Advances in Cryptology – ASIACRYPT 2013, Part II, vol. 8270 of Lecture Notes in Computer Science, Bangalore, India, December 1–5. (Springer, Heidelberg, Germany, 2013), pp. 100–119
V. Goyal, Y. Ishai, M. Mahmoody, A. Sahai. Interactive locking, zero-knowledge PCPs, and unconditional cryptography. in T. Rabin, editor, Advances in Cryptology – CRYPTO 2010, vol. 6223 of Lecture Notes in Computer Science, Santa Barbara, CA, USA, August 15–19. (Springer, Heidelberg, Germany, 2010), pp. 173–190
V. Goyal, Y. Ishai, A. Sahai, R. Venkatesan, A. Wadia. Founding cryptography on tamper-proof hardware tokens. in D. Micciancio, editor, TCC 2010: 7th Theory of Cryptography Conference, vol. 5978 of Lecture Notes in Computer Science, Zurich, Switzerland, February 9–11. (Springer, Heidelberg, Germany, 2010), pp. 308–326
C. Hazay, A. Polychroniadou, M. Venkitasubramaniam. Composable security in the tamper-proof hardware model under minimal complexity. in Theory of Cryptography Conference. (Springer, 2016), pp. 367–399
R. Impagliazzo, L. A. Levin, M. Luby. Pseudo-random generation from one-way functions (extended abstracts). in 21st Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, May 15–17. (ACM Press, 1989), pp. 12–24
J. Katz. Universally composable multi-party computation using tamper-proof hardware. in M. Naor, editor, Advances in Cryptology – EUROCRYPT 2007, vol. 4515 of Lecture Notes in Computer Science, Barcelona, Spain, May 20–24. (Springer, Heidelberg, Germany, 2007), pp. 115–128
J. Kilian. Founding cryptography on oblivious transfer. in 20th Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, May 2–4. (ACM Press, 1988), pp. 20–31
R. Maes. Physically Unclonable Functions: Constructions, Properties and Applications. (Springer, Berlin, Heidelberg, 2013)
J. Mechler, J. Müller-Quade, T. Nilges. Universally composable (non-interactive) two-party computation from untrusted reusable hardware tokens. IACR Cryptology ePrint Archive, 2016:615 (2016)
J. Müller-Quade, D. Unruh. Long-term security and universal composability. in S. P. Vadhan, editor, TCC 2007: 4th Theory of Cryptography Conference, vol. 4392 of Lecture Notes in Computer Science, Amsterdam, The Netherlands, February 21–24. (Springer, Heidelberg, Germany, 2007), pp. 41–60
J. Müller-Quade, D. Unruh. Long-term security and universal composability. Journal of Cryptology, 23(4):594–671 (2010)
T. Moran, G. Segev. David and Goliath commitments: UC computation for asymmetric parties using tamper-proof hardware. in N. P. Smart, editor, Advances in Cryptology – EUROCRYPT 2008, vol. 4965 of Lecture Notes in Computer Science, Istanbul, Turkey, April 13–17. (Springer, Heidelberg, Germany, 2008), pp. 527–544
C. Orlandi, R. Ostrovsky, V. Rao, A. Sahai, I. Visconti. Statistical concurrent non-malleable zero knowledge. in Y. Lindell, editor, TCC 2014: 11th Theory of Cryptography Conference, vol. 8349 of Lecture Notes in Computer Science, San Diego, CA, USA, February 24–26. (Springer, Heidelberg, Germany, 2014), pp. 167–191
R. Ostrovsky, A. Scafuro, I. Visconti, A. Wadia. Universally composable secure computation with (malicious) physically uncloneable functions. in T. Johansson, P. Q. Nguyen, editors, Advances in Cryptology – EUROCRYPT 2013, vol. 7881 of Lecture Notes in Computer Science, Athens, Greece, May 26–30. (Springer, Heidelberg, Germany, 2013), pp. 702–718
C. Peikert, V. Vaikuntanathan, B. Waters. A framework for efficient and composable oblivious transfer. in D. Wagner, editor, Advances in Cryptology – CRYPTO 2008, vol. 5157 of Lecture Notes in Computer Science, Santa Barbara, CA, USA, August 17–21. (Springer, Heidelberg, Germany, 2008), pp. 554–571
C. Peikert, B. Waters. Lossy trapdoor functions and their applications. in R. E. Ladner, C. Dwork, editors, 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17–20. (ACM Press, 2008), pp. 187–196
W. Quach. UC-secure OT from LWE, revisited. in C. Galdi, V. Kolesnikov, editors, Security and Cryptography for Networks – 12th International Conference, SCN 2020, Amalfi, Italy, September 14–16, 2020, Proceedings, vol. 12238 of Lecture Notes in Computer Science. (Springer, 2020), pp. 192–211
M. O. Rabin. Hyper-encryption and everlasting secrets. in Algorithms and Complexity, 5th Italian Conference, CIAC 2003, Rome, Italy, May 28–30, 2003, Proceedings. pp. 7–10
D. Unruh. Everlasting multi-party computation. in R. Canetti, J. A. Garay, editors, Advances in Cryptology – CRYPTO 2013, Part II, vol. 8043 of Lecture Notes in Computer Science, Santa Barbara, CA, USA, August 18–22. (Springer, Heidelberg, Germany, 2013), pp. 380–397
Additional information
Communicated by Serge Fehr.
Partially supported by the Deutsche Forschungsgemeinschaft (DFG, 442893093) and by the State of Bavaria through the Nuremberg Campus of Technology (NCT). Supported by the ERC consolidator grant CerQuS (819317), by the PRG team grant “Secure Quantum Technology” (PRG946) from the Estonian Research Council, by the United States Air Force Office of Scientific Research (AFOSR) via AOARD Grant “Verification of Quantum Cryptography” (FA2386-17-1-4022), and by the Estonian Centre of Excellence in IT (EXCITE) funded by ERDF.
Cite this article
Magri, B., Malavolta, G., Schröder, D. et al. Everlasting UC Commitments from Fully Malicious PUFs. J Cryptol 35, 20 (2022). https://doi.org/10.1007/s00145-022-09432-4
Keywords
 Everlasting security
 PUF
 Universal composability
 Commitment scheme