Introduction

The security of almost all cryptographic schemes relies on certain hardness assumptions. These assumptions are believed to hold right now, and researchers are even fairly certain that they will not be broken in the near future. It is widely believed, for example, that the computational Diffie–Hellman and the RSA assumptions hold in certain groups. But what about the security of these assumptions in 10, 20, or 100 years? Can we give any formal security guarantees for current constructions that remain valid in the distant future? This is certainly possible for information-theoretic schemes and properties. However, given that many interesting functionalities are impossible to realize in an information-theoretic sense, this leaves us in a very unsatisfactory situation.

To overcome this problem, Müller-Quade and Unruh suggested a novel security notion widely known as everlasting universal composability security [26] (building on the work of Rabin on virtual satellites [34]). The basic idea of this security notion is to bound the running time of the attacker only during the protocol execution. After the protocol run is over, the attacker may run in super-polynomial time. This models the intuition that computational assumptions are believed to hold right now, and therefore during the protocol run; at some point in the future, however, known computational assumptions may no longer hold. Everlasting UC security refers to composable protocols that remain secure in this setting. The everlasting UC security model has also been considered for quantum protocols [35]. Everlasting UC security is clearly a very desirable security notion, and since it is strictly weaker than statistical UC security, one may hope that it is easier to achieve. However, Müller-Quade and Unruh showed that everlasting UC commitments cannot be realized, not even in the common reference string (CRS) model or with a public-key infrastructure (PKI) [27].

Everlasting UC Security From Hardware Assumptions The stark impossibility result of Müller-Quade and Unruh raises the question of whether the notion is achievable at all. The authors answered this question affirmatively by presenting two constructions based on hardware assumptions. The first construction is based on a tailor-made hardware token that embeds a random oracle. The second construction relies on signature cards [27]. However, both constructions assume that the hardware token is honestly generated. The authors left open the question of whether it is possible to achieve everlasting security in the setting of maliciously generated hardware tokens. Goyal et al. [18] construct unconditionally UC-secure commitments and secure computation (as opposed to everlasting) from malicious hardware tokens. However, the construction of [18] requires honest tokens to encapsulate other tokens, ruling out some classes of hardware tokens such as physically uncloneable functions (PUFs).

Physically Uncloneable Functions (PUFs) In this work, we present an everlastingly UC secure commitment scheme assuming the existence of PUFs. Loosely speaking, PUFs are physical objects that can be queried by mapping an input to a physical stimulus and mapping the observable behaviour back to an output. The crucial properties of a PUF are (i) that it is hard (if not impossible) to clone and (ii) that it is hard to predict its output on any input without first querying the PUF on a close enough input.

Our Contributions

We initiate the study of everlasting UC security in the setting of maliciously generated hardware tokens, such as PUFs. Our model extends the frameworks of [4, 8] by introducing fully malicious hardware tokens: their state is not a priori bounded, the generator of a token can install arbitrary code inside of it, and a token can encapsulate (and decapsulate) other (possibly fully malicious) tokens within itself. Our contributions can be summarized as follows:

  • Aiming at bridging the gap between hardware tokens and PUFs, we propose a unified ideal functionality for fully malicious tokens that is general enough to capture hardware devices with arbitrary functionalities such as PUFs and signature cards.

  • We put forward a novel definition for unpredictability of PUFs. We argue that the formalization from prior works [3, 24, 30] is not sufficient for our setting because it does not exclude adversaries that may indeed predict the PUF responses for values never queried to the PUF. We demonstrate this fact in Sect. 4.1.1 by giving a concrete counterexample.

  • We show, via an impossibility result, that one cannot hope to achieve everlastingly secure oblivious transfer (OT) (and therefore secure computation) in the malicious token setting by using non-erasable (honestly generated) tokens; a non-erasable token can keep a state but is not allowed to erase previous states.

  • Finally, we present an everlastingly UC secure commitment scheme in the fully malicious token model. Our protocol assumes the existence of PUFs and allows for the PUF to be reused for polynomially many runs of the protocol. Our cryptographic building blocks can be instantiated from standard computational assumptions, such as the learning with errors (LWE) problem.

Related Work

Everlasting and Memory Bound Adversaries Everlasting security was first considered in the setting of memory-bounded adversaries [6, 10], and later extended to the UC setting by Müller-Quade and Unruh [27]. Rabin [34] suggested a construction using distributed servers of randomness, called virtual satellites, to achieve everlasting security. The resulting scheme remains secure if the attacker that accesses the communication between the parties and the distributed servers is polynomially bounded during the key exchange. Dziembowski and Maurer [15] showed that protocols in the bounded storage model do not necessarily stay secure when composed with other protocols.

Damgård [11] presented a statistical zero-knowledge protocol secure under concurrent composition. Although counterintuitive, statistical zero-knowledge may lose its everlasting property under composition. This was illustrated in [27] for the statistically hiding UC commitments of [16], which were shown to leak secrets under (even sequential) composition; they are composable and statistically hiding, but not at the same time (intuitively, the composability only holds for the computational hiding property). Technically, the reason is that the common reference string used by the simulator is not statistically indistinguishable from an honestly sampled one. For the same reason, the protocol of Damgård [11] does not directly translate into an everlasting commitment scheme: in this specific case, the gap consists in extracting the witness from adversarial proofs using a common reference string that is statistically close to the honestly sampled one.

(Malicious) Hardware Tokens A model proposed in [22] allows parties to build hardware tokens to compute functions of their choice, such that an adversary, given a hardware token T for a function F, can only observe the input/output behaviour of T. The motivation is that the existence of tamper-proof hardware can be viewed as a physical assumption, rather than a trust assumption. The authors show how to implement UC-secure two-party computation using stateful tokens, under the DDH assumption. Shortly after, Moran and Segev [28] showed that in the hardware token model of [22] even unconditionally secure UC commitments are possible using stateful tokens. This result was later extended by [19] to unconditionally UC-secure computation, also using stateful tokens.

One limitation of the model of [22] is the assumption that all parties (including the adversary) know the code running inside the hardware tokens they produce; this assumption gives extra power to the simulator, allowing it to rewind the hardware token in the proofs of [19, 22, 28]. However, this assumption rules out realistic scenarios in which the adversary creates a new hardware token that simply “encapsulates” a hardware token received from some party, without knowing the code running inside of it.

In this direction, Chandran et al. [8] extended the model of [22] to allow the hardware tokens produced by the adversary to be stateful, to encapsulate other tokens inside of them, and to be passed on to other parties. They constructed a computationally secure UC commitment protocol without setup, assuming the existence of stateless hardware tokens (signature cards). Unfortunately, the construction of [8] cannot fulfil the notion of unconditional (or everlasting) security, since it requires perfectly binding, and therefore only computationally hiding, commitments as a building block.

Goyal et al. [18], following the model of [8], prove that statistically secure OT from stateless tokens is possible if (honest) tokens can encapsulate other tokens. However, honest token encapsulation is highly undesirable in practice and, in particular, not even compatible with PUFs, as they are physical objects. Interestingly, the authors also show that statistically secure OT (and therefore secure computation) is impossible to achieve when one considers only stateless tokens that cannot be encapsulated. To circumvent this impossibility result, Döttling et al. [13, 14] studied the feasibility of secure computation in the stateful token model, where the adversary is not allowed to rewind the token arbitrarily. Although this model is of practical significance, it does not cover certain classes of hardware tokens, such as PUFs. Later, a rich line of research investigated the round complexity of secure computation using stateless hardware tokens [20, 25] in the computational setting. Unfortunately, it seems that the security guarantees of these protocols cannot be lifted to the everlasting model: in Sect. 5, we present an impossibility result against everlastingly UC secure computation from stateful but non-erasable honest tokens. The result holds even in the presence of an honestly sampled CRS.

PUFs Brzuska et al. [3] introduced PUFs in the UC framework and proposed UC constructions of several interesting cryptographic primitives, such as oblivious transfer, bit commitment, and key agreement. Ostrovsky, Scafuro, Visconti, and Wadia [30] pointed out that these results implicitly assume that all PUFs, including those created by the attacker, are honestly generated. To address this limitation, they defined a model in which an attacker can create malicious PUFs with arbitrary behaviour. Many of the previous protocols can be easily attacked in this new adversarial setting, but Ostrovsky, Scafuro, Visconti, and Wadia showed that it is possible to construct universally composable protocols for secure computation in the malicious PUF model under additional, number-theoretic assumptions. They left open the question of whether unconditional security is possible in the malicious PUF model. Damgård and Scafuro [17] made partial progress on this question by presenting a commitment scheme with unconditional security based on PUFs. However, as shown by [4] in the form of an attack, the construction of [17] completely breaks when the adversary is allowed to generate encapsulated PUFs. Dachman-Soled, Fleischhacker, Katz, Lysyanskaya, and Schröder [12] investigated the possibility of secure two-party computation based on malicious PUFs. Badrinarayanan, Khurana, Ostrovsky, and Visconti [4] introduced a model where the adversary is allowed to generate malicious PUFs that encapsulate other PUFs inside of them; the outer PUF has oracle access to all its inner PUFs. The security of their scheme assumes a bound on the memory of adversarially generated PUFs.

In Table 1, we show a comparison of UC schemes based on malicious hardware tokens (including PUFs).

Table 1 Comparison of UC secure schemes based on tamper-proof hardware tokens

Technical Overview

In the following, we give an informal overview of our everlasting UC commitment scheme construction, and we introduce the main ideas behind our proof strategy. Besides PUFs, our protocol assumes the existence of the following cryptographic building blocks:

  • A non-interactive statistically hiding (NI-SH) UC-secure commitment (\(\mathsf {Com}\)).

  • A 2-round statistically receiver private UC-secure oblivious transfer (OT).

  • A statistical witness-indistinguishable argument of knowledge (SWIAoK).

  • A strong randomness extractor H.

The message flow of our protocol is shown in Fig. 1. The protocol is executed by a committer (Alice) and a recipient (Bob). We assume that both parties have access to a uniformly sampled common reference string that contains a random image of a one-way permutation \(y = f(x)\).

Protocol Overview At the beginning of a commitment execution, Bob prepares a series of random string-pairs \((p_i^0, p_i^1)\) and queries them to the PUF to obtain the corresponding pairs \((q_i^0, q_i^1)\); the PUF is then transferred to Alice. Here, we make the simplifying assumption that a PUF is used only for a single run of the commitment. Note, however, that one can reuse the same PUF by having Bob compute as many tuples \((p_i^0, p_i^1)\) as needed and query the PUF on all of these values before passing it to Alice.

Alice samples a random string \(k \in \{0,1\}^{\ell (\uplambda )}\) and engages Bob in many parallel OT instances, where Alice receives \(p_i^{k_i}\), and where \(k_i\) denotes the i-th bit of k. Alice queries the strings \(p_i^{k_i}\) to the PUF and sends to Bob:

  • a set of NI-SH commitments \((\mathsf {com}_1, \ldots , \mathsf {com}_{\ell (\uplambda )})\) to the outputs of the PUF,

  • an (NI-SH) commitment \(\mathsf {com}\) to m, and

  • the string \(\omega := H(\mathsf {seed}, k) \oplus m \Vert \mathsf {decom}\).

Alice then produces a SWIAoK that certifies that either (i) all of her messages were honestly generated, or (ii) she knows a pre-image x such that \(f(x) = y\).

The idea here is that, if an algorithm recovers k, then it can also recompute \(H(\mathsf {seed}, k)\) and extract the message m. Note that the value of k is “encoded” in the OT choice bits of Alice for the \(p_i^{k_i}\), and those values are queried by Alice to the PUF. Therefore, an extractor that sees the queries of Alice can easily recover the message m. What is not clear at this point is how to force Alice to query the PUF on the correct \(p_i^{k_i}\) and not on some other random string. For this reason, we introduce an additional authentication step where Bob publishes all the pairs \((q_i^0, q_i^1)\). In the opening phase, Alice proves (with a SWIAoK) to Bob that the vector of commitments sent in the previous interaction indeed opens to \(q_1^{k_1}, \ldots , q_{\ell (\uplambda )}^{k_{\ell (\uplambda )}}\), up to small errors (or that she knows the pre-image of y). Intuitively, Alice cannot convince Bob without querying all the \(p_i^{k_i}\), since she would need to guess some \(q_i^{{\overline{k}}_i}\) without knowing the pre-image \(p_i^{{\overline{k}}_i}\) (due to the security of the OT). In the proof, the extractor can recover k by just looking at the queries Alice made to the PUF.

To see why the commitment is hiding, it is sufficient to observe that k hides the message in an information theoretic sense, under the assumption that the OT and SWIAoK protocols are secure. One subtlety that we need to address is that some bits of k might be revealed by the aborts of Alice. For this reason, we one-time-pad the message m with \(H(\mathsf {seed}, k)\): the strong randomness extractor guarantees that the value \(H(\mathsf {seed}, k)\) is still uniformly distributed even if some bits of k are leaked.
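The core hiding mechanism, the one-time pad \(\omega := H(\mathsf {seed}, k) \oplus m \Vert \mathsf {decom}\), can be sketched in a few lines of Python. This is a deliberate simplification of the protocol above: SHA-256 in counter mode stands in for the seeded strong extractor H, and the PUF, OT, and SWIAoK machinery is elided; all function names are ours.

```python
import hashlib
import secrets

def H(seed: bytes, k: bytes, out_len: int) -> bytes:
    # stand-in for a seeded strong randomness extractor (NOT the paper's H)
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + k + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def commit(m: bytes, decom: bytes, seed: bytes):
    k = secrets.token_bytes(32)                 # Alice's OT choice string k
    pad = H(seed, k, len(m) + len(decom))
    omega = xor(pad, m + decom)                 # omega := H(seed, k) XOR (m || decom)
    return k, omega

def extract(k: bytes, omega: bytes, seed: bytes, m_len: int):
    # what the extractor does once it has recovered k from Alice's PUF queries
    plain = xor(H(seed, k, len(omega)), omega)
    return plain[:m_len], plain[m_len:]         # (m, decom)
```

Anyone holding k can invert the pad and recover m, which is exactly why recovering k from the PUF queries suffices for extraction; conversely, while k stays hidden, ω is statistically independent of m.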

Fig. 1: Message flow diagram between Alice and Bob for the commitment and opening phases of our protocol in Sect. 6

Proof Sketch (Hiding) We show that our commitment scheme is hiding through a series of hybrids, where in the last step Alice can equivocate the commitment to any message of her choice. Note that every step is justified information-theoretically.

\(\mathcal {H}_1\): Alice uses x, the pre-image of y, as a witness to compute the SWIAoK. Since the AoK is statistically witness indistinguishable, this hybrid is statistically close to the original protocol.

\(\mathcal {H}_2\): Alice uses the simulator for the OT protocols and extracts both values \((p_i^0, p_i^1)\). Since the OT is statistically receiver-private, this hybrid is statistically close to the previous. In the full proof, this is shown via a hybrid argument.

\(\mathcal {H}_3\): Alice computes \(\mathsf {com}_i\) as commitment to a random string. A hybrid argument can be used to bound the distance of this hybrid with the previous by the statistically hiding property of the commitment scheme.

\(\mathcal {H}_4\): Alice chooses the value of k for all sessions upfront. Here, the change is only syntactical.

\(\mathcal {H}_5\): Alice no longer queries the PUF token but instead checks that the output pairs \((q_i^0, q_i^1)\) sent by Bob correspond to the correct outputs of the PUF on input \((p_i^0, p_i^1)\). Note that the state of the PUF is fixed once the PUF is sent to Alice and therefore the consistency of all pairs \((q_i^0, q_i^1)\) is well defined. Note that the relation is not efficiently computable by Alice, but for information-theoretic security the fact that it is defined is enough. Since Alice retains the ownership of the PUF, this hybrid is identical to the previous.

\(\mathcal {H}_6\): Alice samples \(\omega \) uniformly at random. Note that in \(\mathcal {H}_4\) Alice's leakage of k is bounded by whether she aborts or not. Since Alice aborts at most once and since there are at most polynomially many sessions, we can bound the leakage of k to \(O(\log \uplambda )\) bits. Leveraging the randomness extractor \(H\), we can argue that \(\mathcal {H}_5\) and \(\mathcal {H}_6\) are statistically indistinguishable.

\(\mathcal {H}_7\): Alice opens the commitment to a message of her choice. Note that in \(\mathcal {H}_6\) the original message m is information-theoretically hidden.

Proof Sketch (Binding) To argue that the scheme is binding, we define the following extractor: the algorithm examines the list of queries made by Alice to the PUF and, for each i, it checks whether some query \({\mathfrak {q}}\) is equal to \(p_i^b\) (for \(b\in \{0,1\}\)), if this is the case then it sets \(k_i = b\). Once the full k is reconstructed, the extractor computes \(\omega \oplus H(\mathsf {seed},k) = m \Vert \mathsf {decom}\) and outputs m. To show that the extractor always succeeds, we need to argue that:

  1. The value of k is always well defined: if some query \({\mathfrak {q}} = p_i^0\) and some other query \({\mathfrak {q}}' = p_i^1\), then the bit \(k_i\) is not well defined. However, this means that Alice learned both \(p_i^0\) and \(p_i^1\) from the OT protocol, which is computationally infeasible.

  2. The string k is always fully reconstructed: if no query \({\mathfrak {q}}\) is equal to \(p_i^0\) or \(p_i^1\), then the i-th bit of k is not defined. This implies that Alice never queried \(p_i^0\) or \(p_i^1\) to the PUF. However, note that Alice should produce a commitment \(\mathsf {com}_i\) to either PUF(\(p_i^0\)) or PUF(\(p_i^1\)) and prove consistency. This is clearly not possible without querying the PUF unless Alice breaks the binding of the commitment or proves a false statement in the SWIAoK. To establish the latter, we also need to rule out the case where Alice computes the SWIAoK using the knowledge of x, the pre-image of y. In the full proof, we show this via a reduction against the one-wayness of the one-way permutation f.

We are now in a position to show that the extracted message m is identical to the one that Alice decommits to. Recall that Alice proves that she committed to the values PUF(\(p_i^{k_i}\)) such that \(\omega \oplus H(\mathsf {seed},k) = m \Vert \mathsf {decom}\). It follows that, if k is uniquely defined, then the extractor always returns the correct m, unless Alice can break the soundness of the SWIAoK (or invert the one-way permutation). By the above conditions, this happens with at most negligible probability.
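The extractor's reconstruction of k can be sketched as follows. This is a simplified stand-in (names ours): queries are plain byte strings, and we ignore the Hamming-ball matching a noisy PUF would require.

```python
def reconstruct_k(puf_queries: list[bytes], pairs: list[tuple[bytes, bytes]]) -> list[int]:
    """Reconstruct Alice's choice string k from her PUF query transcript.

    `pairs` holds the (p_i^0, p_i^1) prepared by Bob. The two failure cases
    correspond exactly to conditions 1 and 2 in the binding proof sketch.
    """
    qset = set(puf_queries)
    k = []
    for p0, p1 in pairs:
        hit0, hit1 = p0 in qset, p1 in qset
        if hit0 and hit1:
            # condition 1: both pre-images known, OT security would be broken
            raise ValueError("bit ill-defined: both p_i^0 and p_i^1 were queried")
        if not (hit0 or hit1):
            # condition 2: Alice cannot prove consistency without this query
            raise ValueError("bit undefined: neither p_i^0 nor p_i^1 was queried")
        k.append(1 if hit1 else 0)
    return k
```

Once k is assembled, the extractor unmasks ω as described above and outputs m.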

On the Common Reference String Our protocol needs to assume the existence of a common reference string to equivocate commitments in the security proof: having access to the generation of the \(\mathsf {crs} \), the simulator can craft proofs for false statements, simulate the OT, and extract the commitments. Note that the simulation has to be “straight-line”, since we cannot rewind the adversary in the UC framework. A previous work [29] circumvented this issue by leveraging some computationally hard problem. Unfortunately, this class of techniques does not seem to apply to the everlasting setting since the environment can distinguish a simulated trace once it becomes unbounded. The work of [17] builds unconditionally secure commitments from PUFs without a CRS, but as shown by [4], the construction breaks down in our model where the adversary is allowed to generate encapsulated PUFs. It is not clear if the techniques of [17] can be adapted to our setting. We leave the question of removing the necessity of a common reference string from our protocol as a fascinating open problem.

Preliminaries

In the following, we introduce the notation and the building blocks necessary for our results.

Notations

An algorithm \(\mathcal {A} \) is probabilistic polynomial time (PPT) if \(\mathcal {A} \) is randomized and for any input \(x,r\in \{0,1\}^*\) the computation of \(\mathcal {A} (x;r)\) terminates in at most \(\mathsf {poly}(|x|)\) steps. We denote by \(\uplambda \in {\mathbb {N}}\) the security parameter. A function \(\mathsf {negl}\) is negligible if for every positive polynomial p there exists a \(k_0\) such that \(\mathsf {negl}(k) < 1/p(k)\) for all \(k > k_0\). A relation \(R \subseteq \{0,1\}^{*}\times \{0,1\}^{*}\) is an \({\mathcal {N}}{\mathcal {P}}\) relation if there is a polynomial-time algorithm that decides \((x,w)\in R\). If \((x,w)\in R\), then we call x the statement and w a witness for x. We denote by \({\mathsf {hd}}(x,x')\) the Hamming distance between two bitstrings x and \(x'\). Given two ensembles \(\mathbf {X} = \{X_\uplambda \}_{\uplambda \in {\mathbb {N}}}\) and \(\mathbf {Y} = \{Y_\uplambda \}_{\uplambda \in {\mathbb {N}}}\), we write \(\mathbf {X} \approx \mathbf {Y}\) to denote that the two ensembles are statistically indistinguishable, and \(\mathbf {X} \approx _c \mathbf {Y}\) to denote that they are computationally indistinguishable. We denote the set \(\{1, \ldots , n\}\) by [n]. We recall the definition of statistical distance.

Definition 1

(Statistical Distance) Let X and Y be two random variables over a finite set \(\mathcal {U}\). The statistical distance between X and Y is defined as

$$\begin{aligned} {{\mathbb {S}}}{\mathbb {D}}\left( X,Y\right) = \frac{1}{2}\sum _{u \in \mathcal {U}} \left| \Pr \left[ X=u\right] - \Pr \left[ Y=u\right] \right| \;. \end{aligned}$$
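For finite distributions given as probability tables, the definition can be evaluated directly (a small illustrative helper; names are ours):

```python
def statistical_distance(px: dict, py: dict) -> float:
    """SD(X, Y) = 1/2 * sum over u of |Pr[X=u] - Pr[Y=u]|, over a finite universe.

    `px` and `py` map outcomes to probabilities; missing keys mean probability 0.
    """
    universe = set(px) | set(py)
    return 0.5 * sum(abs(px.get(u, 0.0) - py.get(u, 0.0)) for u in universe)
```

For example, a uniform bit versus a bit biased to 0 with probability 3/4 has statistical distance 1/4.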

Cryptographic Building Blocks

One-Way Function A one-way function is a function that is easy to compute but hard to invert. It is a building block of almost all known cryptographic primitives.

Definition 2

A function \(f : \{0,1\}^* \rightarrow \{0,1\}^*\) is one way if it can be computed in polynomial time, but for all PPT algorithms \(\mathcal {A} \) there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \Pr \left[ f(x') = f(x) : x \leftarrow \{0,1\}^{\uplambda }; x' \leftarrow \mathcal {A} (1^{\uplambda }, f(x)) \right] \le \mathsf {negl}(\uplambda ) \;, \end{aligned}$$

where the probability is taken over the random choice of x and the coins of \(\mathcal {A} \). Moreover, we say that f is a one-way permutation if, in addition, f is a length-preserving bijection.

Non-interactive Commitment Scheme A commitment scheme (in the CRS model) consists of a pair of efficient algorithms \(\mathcal {C}= (\mathsf {Com},\mathsf {Open})\) where: \(\mathsf {Com}\) takes as input a message \(m \in \{0,1\}^{\uplambda }\) and outputs \((\mathsf {decom},\mathsf {com}) \leftarrow \mathsf {Com}(m)\), where \(\mathsf {decom}\) and \(\mathsf {com}\) are both in \(\{0,1\}^{\uplambda }\); the algorithm \(\mathsf {Open}(\mathsf {decom},\mathsf {com})\) outputs a message m, or \(\bot \) if \(\mathsf {com}\) is not a valid commitment to any message. It is assumed that the commitment scheme is complete, i.e. for any message \(m \in \{0,1\}^{\uplambda }\) and \((\mathsf {decom},\mathsf {com})\leftarrow \mathsf {Com}(m)\), we have \(\mathsf {Open}(\mathsf {decom},\mathsf {com})=m\) with overwhelming probability in \(\uplambda \in {\mathbb {N}}\). For convenience, we assume that the verification is deterministic and canonical (i.e. it takes as input the random coins used in the commitment phase and checks whether the commitment was correctly computed).

We require commitments to be (stand-alone) statistically hiding. Let \(\mathcal {A} \) be a non-uniform adversary against \(\mathcal {C}\) and define its \(\mathsf {hiding}\)-advantage as

$$\begin{aligned} \mathbf {Adv}_{\mathcal {C},\mathcal {A}}^{\text {hid}}(\uplambda ) =2\cdot \Pr \left[ b=b' \left| \begin{array}{l} (m_0,m_1,\textsf {st}) \leftarrow \mathcal {A} (1^{\uplambda }); b \leftarrow \{0,1\}; \\ (\mathsf {decom},\mathsf {com}) \leftarrow \mathsf {Com}(m_b); b' \leftarrow \mathcal {A} (\mathsf {com},\textsf {st}) \end{array} \right. \right] -1 \;. \end{aligned}$$

Definition 3

(Statistically Hiding) \(\mathcal {C}\) is statistically hiding if the advantage function \(\mathbf {Adv}_{\mathcal {C},\mathcal {A}}^{\text {hid}}\) is a negligible function for all unbounded adversaries \(\mathcal {A} \).
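As a concrete toy example of a perfectly (hence statistically) hiding, computationally binding commitment, consider the folklore Pedersen-style scheme below. This is not the commitment used in our protocol, and the parameters are illustrative only, not a secure instantiation; all names are ours.

```python
import secrets

# Toy Pedersen commitment over Z_P^* (illustrative parameters only).
P = 2**127 - 1                      # a Mersenne prime
G = 3                               # fixed base
_S = secrets.randbelow(P - 1) + 1   # trapdoor, discarded after setup
H_GEN = pow(G, _S, P)               # second generator, discrete log unknown

def commit(m: int):
    """com = G^m * H_GEN^r mod P; H_GEN^r acts as a fresh mask, so com is
    distributed independently of m (perfect hiding)."""
    r = secrets.randbelow(P - 1)    # decommitment value
    com = (pow(G, m, P) * pow(H_GEN, r, P)) % P
    return r, com                   # (decom, com)

def open_commit(m: int, r: int, com: int) -> bool:
    # canonical verification: recompute the commitment from the coins
    return com == (pow(G, m, P) * pow(H_GEN, r, P)) % P
```

Binding here is only computational: opening one commitment to two messages would reveal the discrete log of H_GEN, which mirrors the hiding/binding trade-off the definition above captures.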

Furthermore, we require the commitments to be UC-secure: roughly speaking, an equivocator (with the help of a trapdoor in the CRS) can open the commitments arbitrarily. On the other hand, we require the existence of a computationally indistinguishable CRS (in extraction mode) where commitments are statistically binding and can be efficiently extracted via the knowledge of a trapdoor. Such commitments can be constructed in the CRS model from a variety of assumptions [32], including the learning with errors (LWE) problem. For a precise functionality, we refer the reader to Sect. 3.3.

Oblivious Transfer A \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \)-oblivious transfer (OT) is a protocol executed between two parties: a sender \(\mathcal {S} \) with input bits \((s_0 , s_1)\) and a receiver \(\mathcal {R} \) with choice bit b. The receiver wishes to retrieve \(s_b\) from the sender in such a way that the sender does not learn anything about the choice b, and the receiver learns nothing about the sender's remaining input \(s_{1-b}\). In this work, we require a 2-round protocol \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) secure in the CRS model, which satisfies (stand-alone) statistical receiver privacy. We define the advantage of a malicious sender \(\mathcal {S} \) in breaking the receiver's privacy as

$$\begin{aligned} \mathbf {Adv}_\mathcal {S} ^{\mathsf {OT}}(\uplambda ) = 2\cdot \Pr \left[ b=b' \left| \begin{array}{l} b \leftarrow \{0,1\}; \\ b' \leftarrow \left\langle \mathcal {S} (1^{\uplambda }),\mathsf {Receiver}_{{\mathsf {OT}}}(b)\right\rangle \end{array} \right. \right] -1 \;. \end{aligned}$$

Definition 4

(Statistical Receiver Privacy) \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) is statistically receiver-private if the advantage function \(\mathbf {Adv}_\mathcal {S} ^{\mathsf {OT}}\) is a negligible function for all unbounded malicious senders \(\mathcal {S} \).

In addition, we require our OT to be UC-secure: for a well-formed CRS, there exists an efficient equivocator that can (non-interactively) recover both messages of the sender. Furthermore, there exists an alternative CRS distribution (which is computationally indistinguishable from the original one) and an efficient non-interactive extractor that is able to uniquely recover the message of the receiver using the knowledge of a trapdoor. Such 2-round OT can be constructed from a variety of assumptions [31], including LWE [33]. For a precise description of the ideal functionality, we refer the reader to Sect. 3.3.

Statistical Witness-Indistinguishable Argument of Knowledge (SWIAoK) A witness-indistinguishable argument is an interactive protocol for languages in \({\mathcal {N}}{\mathcal {P}}\) that does not leak any information about which witness the prover used, not even to a malicious verifier. If soundness is only required to hold against PPT provers, we call such a system an argument system; if it holds against unbounded provers, we call it a proof system. For witness-indistinguishable arguments of knowledge, we formally introduce the following notation to represent interactive executions between algorithms \(\mathcal {P}\) and \(\mathcal {V}\). By \(\left\langle \mathcal {P}(y),\mathcal {V}(z)\right\rangle (x)\), we denote the view (i.e. inputs, internal coin tosses, incoming messages) of \(\mathcal {V}\) when interacting with \(\mathcal {P}\) on common input x, when \(\mathcal {P}\) has auxiliary input y and \(\mathcal {V}\) has auxiliary input z. Some of the following definitions are based on [29].

Definition 5

(Witness Relation) A witness relation for an \({\mathcal {N}}{\mathcal {P}}\) language L is a binary relation R that is polynomially bounded, polynomial-time recognizable, and characterizes L by \(L = \{x :\exists w \text { s.t. }(x, w) \in R\}\). We say that w is a witness for \(x \in L\) if \((x, w) \in R\).

Definition 6

(Interactive Argument System) A two-party game \(\left\langle \mathcal {P},\mathcal {V}\right\rangle \) is called an Interactive Argument System for a language L if \(\mathcal {P}\), \(\mathcal {V}\) are PPT algorithms and the following two conditions hold:

  • Completeness: For every \(x \in L\) with witness w, \(\Pr \left[ \left\langle \mathcal {P}(w),\mathcal {V}\right\rangle (x) = 1 \right] = 1\).

  • Soundness: For every \(x \notin L\) and every PPT algorithm \(\mathcal {P}^*\), there exists a negligible function \(\mathsf {negl}(\cdot )\) such that \(\Pr \left[ \left\langle \mathcal {P}^*,\mathcal {V}\right\rangle (x) = 1 \right] \le \mathsf {negl}(|x|)\).

Definition 7

(Witness Indistinguishability) Let \(L \in {\mathcal {N}}{\mathcal {P}}\) and \((\mathcal {P},\mathcal {V})\) be an interactive argument system for L with perfect completeness. The proof system \((\mathcal {P},\mathcal {V})\) is witness indistinguishable (WI) if for every PPT algorithm \(\mathcal {V}^*\), and every two sequences \(\{ w^1_x\}_{x\in L}\) and \(\{ w^2_x\}_{x\in L}\) such that \((x, w^1_x), (x, w^2_x) \in R\), the following ensembles are computationally indistinguishable:

  1. \(\{ \left\langle \mathcal {P}(w^1_x),\mathcal {V}(z)\right\rangle (x)\}_{x \in L,z\in \{0,1\}^{*}}\)

  2. \(\{ \left\langle \mathcal {P}(w^2_x),\mathcal {V}(z)\right\rangle (x)\}_{x \in L,z\in \{0,1\}^{*}}\)

Next, we define the notion of extractability for SWIAoKs.

Definition 8

(Argument of Knowledge) Let \(L \in {\mathcal {N}}{\mathcal {P}}\) and \((\mathcal {P},\mathcal {V})\) be an interactive argument system for L with perfect completeness. The proof system \((\mathcal {P},\mathcal {V})\) is an argument of knowledge (AoK) if there exists a PPT algorithm \(\mathsf {Ext}\), called the extractor, a polynomial p, and a constant c such that, for every PPT machine \(\mathcal {P}^*\), every \(x \in L\), auxiliary input z, and random coins r, there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \Pr \left[ (x, \mathsf {Ext}^{\mathcal {P}^*(x,z;r)}(x)) \in R \right] \ge \frac{\Pr \left[ \left\langle \mathcal {P}^*(z;r),\mathcal {V}\right\rangle (x) = 1 \right] ^{c}}{p(|x|)} - \mathsf {negl}(|x|) \;. \end{aligned}$$

Strong Randomness Extractor A strong randomness extractor is a function that, applied to an input with high min-entropy, returns an (almost) uniformly distributed element of its range.

Definition 9

(Strong Randomness Extractor) A function \(H: \{0,1\}^d \times \{0,1\}^\ell \rightarrow \{0,1\}^c\) is called a \((t,\varepsilon )\)-strong randomness extractor if for every random variable X over \(\{0,1\}^\ell \) such that \({\mathsf {H}}_{\infty }(X) \ge t\), we have that

$$\begin{aligned} {{\mathbb {S}}}{\mathbb {D}}\left( (U_d,H(U_d,X)),(U_d,U_c)\right) \le \varepsilon \end{aligned}$$

and \(L = t-c\) is called the entropy loss of \(H\).
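A classical way to instantiate such an extractor is via the leftover hash lemma: any 2-universal hash family, keyed by the seed, is a strong extractor with \(\varepsilon \approx 2^{-(t-c)/2}\). The sketch below uses the Carter–Wegman family \(h_{a,b}(x) = ((ax+b) \bmod p) \bmod 2^{c}\) with illustrative parameters (names and sizes are ours, not from this paper):

```python
import secrets

P_EXT = (1 << 89) - 1  # a Mersenne prime, larger than the input domain used below

def sample_seed() -> tuple[int, int]:
    """A seed selects one hash h_{a,b} from the 2-universal family."""
    return (1 + secrets.randbelow(P_EXT - 1), secrets.randbelow(P_EXT))

def ext(seed: tuple[int, int], x: int, c_bits: int) -> int:
    """h_{a,b}(x) = ((a*x + b) mod p) mod 2^c.

    By the leftover hash lemma, for inputs x with min-entropy t well above
    c_bits, the pair (seed, ext(seed, x, c_bits)) is statistically close to
    (seed, uniform), i.e. a strong extractor in the sense of Definition 9.
    """
    a, b = seed
    return ((a * x + b) % P_EXT) % (1 << c_bits)
```

This matches how H is used in the protocol: the seed is public (sent alongside ω), and security rests only on the residual min-entropy of k.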

Universal Composability Framework

In this section, we recall the basics of the Universal Composability (UC) framework of Canetti [5], and then we discuss the Everlasting Universal Composability framework, following [27] closely. We refer the reader to [5, 27] for a more comprehensive description.

Basics of the UC Framework

Our description of the UC framework follows [27] closely. The composition of two provably secure protocols does not necessarily preserve their security: the composed protocol may no longer be secure. A framework that analyses the security of composed protocols and that is able to provide security guarantees is the Universal Composability (UC) framework due to Canetti [5].

The main idea of this security notion is to compare a real protocol \(\pi \) with some ideal protocol \(\rho \). In most cases, this ideal protocol \(\rho \) will consist of a single machine, a so-called ideal functionality. Such a functionality can be seen as a trusted machine that implements the intended behaviour of the protocol. For example, a functionality \({\mathcal {F}}\) for commitment would expect a value m from a party C. Upon receipt of that value, the recipient R would be notified by \({\mathcal {F}}\) that C has committed to some value (but \({\mathcal {F}}\) would not reveal that value). When C sends an unveil request to \({\mathcal {F}}\), the value m will be sent to R (but \({\mathcal {F}}\) will not allow C to unveil a different value).

Given a real protocol \(\pi \) and an ideal protocol \(\rho \), we say that \(\pi \) realizes \(\rho \) (also called “implements”, “emulates”, or “is as secure as”) if for any adversary \({\mathcal {A}}\) attacking the protocol \(\pi \) there is a simulator \({\mathcal {S}}\) performing an attack on the ideal protocol \(\rho \) such that no environment \({\mathcal {Z}}\) can distinguish between \(\pi \) running with \({\mathcal {A}}\) and \(\rho \) running with \({\mathcal {S}}\). Here, \({\mathcal {Z}}\) may choose the protocol inputs, read the protocol outputs, and communicate with the adversary or simulator (but \({\mathcal {Z}}\) is, of course, not told whether it communicates with the adversary or the simulator). In contrast to classical stand-alone security notions, first, the environment may communicate with the adversary during the protocol execution, and second, the environment does not need to choose the inputs at the beginning of the protocol execution; it may adaptively send inputs to the protocol parties at any time, and it may choose these inputs depending on the outputs and on the communication with the adversary. These two modifications are the reason for the very strong composability properties of the UC model.

Network Execution In the UC framework, all protocol machines and functionalities, as well as the adversary, the simulator, and the environment are modelled as interactive Turing machines (ITMs). Throughout a protocol execution, an integer k called the security parameter is accessible to all parties. At the beginning of the execution of a network consisting of \(\pi \), \({\mathcal {A}}\), and \({\mathcal {Z}}\), the environment \({\mathcal {Z}}\) is invoked with an initial input z. From then on, every machine M that is activated can send a message m to a single other machine \(M'\). Then that machine \(M'\) is activated and given the message m and the id of the originator M. If in some activation a machine does not send a message, the environment \({\mathcal {Z}}\) is activated again. Additionally, the environment may issue corruption requests for some party P. From then on, the machines corresponding to the party P are controlled by the adversary (i.e. it can send and receive messages in the name of that machine, and it can read the internal state of that machine). Finally, at some point the environment \({\mathcal {Z}}\) gives some output m, which can be an arbitrary string. By \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\) we denote the distribution of that output m on security parameter k and initial input z. Analogously, we define \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\) for an execution involving the protocol \(\rho \), the simulator \({\mathcal {S}}\), and the environment \({\mathcal {Z}}\).
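The activation discipline described above can be sketched as a toy message-passing loop. The sketch below is a drastic simplification of the real model (no corruption, no functionalities, string ids instead of ITM ids); all class and function names are our own.

```python
class Env:
    """Toy environment Z: forwards its initial input z to party P,
    then records whatever comes back as its output."""
    def __init__(self):
        self.output = None
    def activate(self, sender, msg):
        if sender == "Z":            # initial invocation with input z
            return ("P", msg)        # send z on to the protocol party
        self.output = msg            # record the protocol output
        return None                  # silence ends the toy run

class Party:
    """Toy protocol party P: answers every message with a reply to Z."""
    def activate(self, sender, msg):
        return ("Z", ("pong", msg))

def exc(machines, z):
    """Toy activation loop behind EXC(k, z): Z is invoked first with z;
    a sent message activates its recipient; a machine that stays silent
    hands activation back to Z; the run ends when Z produces output."""
    env = machines["Z"]
    current, incoming = "Z", ("Z", z)
    while env.output is None:
        out = machines[current].activate(*incoming)
        if out is None:                       # silence: back to Z
            current, incoming = "Z", (current, None)
        else:
            recipient, msg = out
            incoming = (current, msg)         # recipient learns sender id
            current = recipient
    return env.output
```

Here the environment's recorded output plays the role of the random variable \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\).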

We distinguish two different flavours of corruption. We speak of static corruption if the environment \({\mathcal {Z}}\) may only send corruption requests before the protocol begins, and of adaptive corruption if \({\mathcal {Z}}\) may send corruption requests at any time during the protocol, even depending on messages learned during the execution. In this paper, we restrict our attention to the less strict security model using static corruption. We leave the case of adaptive corruptions, in which the environment may corrupt any party adaptively during the execution of the protocol, as an interesting open problem.

UC Definitions If the ideal protocol \(\rho \) consists of an ideal functionality \({\mathcal {F}}\), for technical reasons we assume the presence of so-called dummy parties that forward messages between the environment \({\mathcal {Z}}\) and the functionality \({\mathcal {F}}\). For example, assume that \({\mathcal {F}}\) is a commitment functionality. In an ideal execution, \({\mathcal {Z}}\) would send a value m to the party C (since it does not know of \({\mathcal {F}}\) and therefore will not send to \({\mathcal {F}}\) directly). Then, C would forward m to \({\mathcal {F}}\). Then, \({\mathcal {F}}\) notifies R that a commitment has been performed. This notification is then forwarded to \({\mathcal {Z}}\). First, with these dummy parties we have, at least syntactically, the same messages as in the real execution: \({\mathcal {Z}}\) sends m to C and receives a commit notification from R. Second, the dummy parties allow a meaningful corruption in the ideal model. If \({\mathcal {Z}}\) corrupts some party P, in the ideal model the effect would be that the simulator controls the corresponding dummy party P and thus can read and modify messages to and from the functionality \({\mathcal {F}}\) in the name of P. Thus, if we write \(\mathrm {EXC}_{{\mathcal {F}},{\mathcal {S}},{\mathcal {Z}}}\), this is essentially an abbreviation for \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) where the ideal protocol \(\rho \) consists of the functionality \({\mathcal {F}}\) and the dummy parties. Having defined the families of random variables \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\) and \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\), we can now define security via indistinguishability.

Definition 10

(Universal Composability [5]) A protocol \(\pi \) UC realizes a protocol \(\rho \), if for any polynomial-time adversary \({\mathcal {A}}\) there exists a polynomial-time simulator \({\mathcal {S}}\), such that for any polynomial-time environment \({\mathcal {Z}}\),

$$\begin{aligned} \{\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}(k,z)\}_{k \in {\mathbb {N}}, z \in \{0,1\}^{ poly (k)}} \approx _c \{\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}(k,z)\}_{k \in {\mathbb {N}}, z \in \{0,1\}^{ poly (k)}}. \end{aligned}$$

Note that in this definition, it is also possible to only consider environments \({\mathcal {Z}}\) that give a single bit of output. As demonstrated in [5], this gives rise to an equivalent definition. However, in the case of everlasting UC below, this will not be the case, so we stress the fact that we allow \({\mathcal {Z}}\) to output arbitrary strings. In particular, an environment machine can output its complete view.

Natural variants of this definition are statistical UC, where all machines (environment, adversary, simulator) are computationally unbounded and the families of random variables are required to be statistically indistinguishable, and perfect UC, where all machines are computationally unbounded and the families of random variables are required to have the same distribution. In these cases, one often additionally requires that if the adversary is polynomial time, so is the simulator.

Composition For some protocol \(\sigma \), and some protocol \(\pi \), by \(\sigma ^\pi \) we denote the protocol where \(\sigma \) invokes (up to polynomially many) instances of \(\pi \).Footnote 3 That is, in \(\sigma ^\pi \) the machines from \(\sigma \) and from \(\pi \) run together in one network, and the machines from \(\sigma \) access the inputs and outputs of \(\pi \). (In particular, \({\mathcal {Z}}\) then talks only to \(\sigma \) and not to the subprotocol \(\pi \) directly.) A typical situation would be that \(\sigma ^{\mathcal {F}}\) is some protocol that makes use of some ideal functionality \({\mathcal {F}}\) (say, a commitment) and then \(\sigma ^\pi \) would be the protocol resulting from implementing that functionality by some protocol \(\pi \) (say, a commitment protocol). One would hope that such an implementation results in a secure protocol \(\sigma ^\pi \). That is, if \(\pi \) realizes \({\mathcal {F}}\) and \(\sigma ^{\mathcal {F}}\) realizes \({\mathcal {G}}\), then \(\sigma ^\pi \) realizes \({\mathcal {G}}\). Fortunately, this is the case:

Theorem 11

(Universal Composition Theorem [5]) Let \(\pi \), \(\rho \), and \(\sigma \) be polynomial-time protocols. Assume that \(\pi \) UC realizes \(\rho \). Then, \(\sigma ^\pi \) UC realizes \(\sigma ^\rho \).

The intuitive reason for this theorem is that \(\sigma \) can be considered as an environment for \(\pi \) or \(\rho \), respectively. Since Definition 10 guarantees that \(\pi \) and \(\rho \) are indistinguishable by any environment, security follows. In a typical application of this theorem, one would first show that \(\pi \) realizes \({\mathcal {F}}\) and that \(\sigma ^{\mathcal {F}}\) realizes \({\mathcal {G}}\). Then using the composition theorem, one gets that \(\sigma ^\pi \) realizes \(\sigma ^{\mathcal {F}}\) which in turn realizes \({\mathcal {G}}\). Since the realizes relation is transitive (as can be easily seen from Definition 10), it follows that \(\sigma ^\pi \) realizes \({\mathcal {G}}\).

This composition theorem is the main feature of the UC framework. It allows us to build up protocols from elementary building blocks. This greatly increases the manageability of security proofs for large protocols. Furthermore, it guarantees that the protocol can be used in arbitrary contexts. Analogous theorems also hold for statistical and perfect UC.

Dummy adversary When proving the security of a given protocol in the UC setting, a useful tool is the so-called dummy adversary. The dummy adversary \(\tilde{\mathcal {A}}\) is the adversary that simply forwards messages between the environment \({\mathcal {Z}}\) and the protocol (i.e. it is a puppet of the environment that does whatever \({\mathcal {Z}}\) instructs it to do). In [5], it is shown that UC security with respect to the dummy adversary implies UC security. The intuitive reason is that since \(\tilde{\mathcal {A}}\) does whatever \({\mathcal {Z}}\) instructs it to do, it can perform arbitrary attacks and is therefore the worst-case adversary given the right environment (remember that we quantify over all environments).

We very roughly sketch the proof idea. Let protocols \(\pi \) and \(\rho \) and some adversary \({\mathcal {A}}\) be given. Assume that \(\pi \) UC realizes \(\rho \) with respect to the dummy adversary \(\tilde{\mathcal {A}}\). We want to show that \(\pi \) UC realizes \(\rho \) with respect to \({\mathcal {A}}\). Given an environment \({\mathcal {Z}}\), we construct an environment \({\mathcal {Z}}_{\mathcal {A}}\) which simulates \({\mathcal {Z}}\) and \({\mathcal {A}}\). Note that an execution of \(\mathrm {EXC}_{\pi ,\tilde{\mathcal {A}},{\mathcal {Z}}_{\mathcal {A}}}\) is essentially the same as \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}\) (up to a regrouping of machines). Then there is a simulator \(\tilde{\mathcal {S}}\) such that \(\mathrm {EXC}_{\pi ,\tilde{\mathcal {A}},{\mathcal {Z}}_{\mathcal {A}}}\) and \(\mathrm {EXC}_{\rho ,\tilde{\mathcal {S}},{\mathcal {Z}}_{\mathcal {A}}}\) are indistinguishable. Let \({\mathcal {S}}\) be the simulator that internally simulates the machines \({\mathcal {A}}\) and \(\tilde{\mathcal {S}}\) and forwards all actions performed by \({\mathcal {A}}\) as instructions to \(\tilde{\mathcal {S}}\) (remember that \(\tilde{\mathcal {S}}\) simulates \(\tilde{\mathcal {A}}\), so it expects such instructions). Then, \(\mathrm {EXC}_{\rho ,\tilde{\mathcal {S}},{\mathcal {Z}}_{\mathcal {A}}}\) is again the same as \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) up to a regrouping of machines. Summarizing, we have that \(\mathrm {EXC}_{\pi ,{\mathcal {A}},{\mathcal {Z}}}\) and \(\mathrm {EXC}_{\rho ,{\mathcal {S}},{\mathcal {Z}}}\) are indistinguishable.

A nice property of this technique is that it is quite robust with respect to changes in the definition of UC security. For example, it also holds with respect to statistical and perfect UC security, as well as with respect to the notion of Everlasting UC from [27].

Everlasting UC Security

In this section, we present our definitions of everlasting UC security. Our formalization builds on Canetti’s Universal Composability framework [5] and extends the notion of everlasting/long-term security due to Müller-Quade and Unruh [27]. Loosely speaking, everlasting security guarantees the “standard” notion of UC security during the execution of the protocol. This means that security is guaranteed against polynomially bounded adversaries. Therefore, standard computational assumptions, such as the hardness of the decisional Diffie–Hellman problem and the existence of one-way functions, can be used as hardness assumptions. However, after the execution of the protocol, we no longer assume that these assumptions hold, because they may be broken in the future. Müller-Quade and Unruh model this by letting the distinguisher become unbounded after the execution of the protocol. Everlasting security guarantees security and confidentiality in this setting.

They showed in [27] that everlasting UC commitments cannot be realized, not even in the common reference string (CRS) or the public-key infrastructure (PKI) model.Footnote 4 The fact that everlasting UC commitments cannot be constructed in the CRS model shows a strong separation between the everlasting UC and the computational UC security notion, because commitment schemes do exist (under standard assumptions) in the computational UC security model [7]. The stark impossibility result of Müller-Quade and Unruh motivated the use of other trust assumptions, such as hardware tokens embedding a random oracle and signature cards [27]. It is not hard to see that everlasting UC security is strictly stronger than computational UC security, since the adversary is allowed to become unbounded after the execution of the protocol, and strictly weaker than statistical UC security, since the adversary is polynomially bounded during the run of the protocol.

Defining Everlasting UC Security

The formalization of [27] is surprisingly simple: it extends the original UC definition only by the requirement that the execution of the real protocol and of the functionality cannot be distinguished by an unbounded entity after the execution of the protocol is over (while the execution itself is run by efficient adversaries and environments). Formally, this means that the output of the environment in the real and ideal worlds must be statistically close. A comprehensive discussion is given in [27]; we briefly recall the definitions.

Definition 12

(Everlasting UC) A protocol \(\pi \) everlastingly UC-realizes an ideal protocol \(\rho \) if, for any PPT adversary \(\mathcal {A} \), there exists a PPT simulator \(\mathcal {S}\) such that, for any PPT environment \(\mathcal {Z}\),

$$\begin{aligned} \{{\text { EXC }}_{\pi ,\mathcal {A},\mathcal {Z}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}_{\rho ,\mathcal {S},\mathcal {Z}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

In [27], the authors show that the composition theorem from [5] also holds with respect to Definition 12. A shortcoming of Definition 12, when applied to the token model, is that the distinguisher has no access to the hardware token after it becomes unbounded. Another issue is that Definition 12 does not model the case that the hardware assumption may be broken in the long term.

Everlasting UC Security with Hardware Assumptions

We define a notion of everlasting security which allows the participants in a protocol to leak information in the long term.

With the exception of the environment \(\mathcal {Z}\) and the adversary \(\mathcal {A} \), we give each instance of an interactive Turing machine (ITI for short) in the protocol an additional output tape, which we call the long-term output tape. We modify the execution model to handle the long-term tapes as follows. At the end of the execution of the protocol (i.e. when the environment \(\mathcal {Z}\) produces its output m), the adversary \(\mathcal {A} \) is invoked once again, this time with the contents of all long-term tapes, and produces an output a. We define the output of the new execution model to be \({\text { EXC }}' := (m,a)\). A formal definition follows.
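The modified execution model can be sketched in a few lines; `run_protocol` and `adversary_post` are hypothetical stand-ins for an ordinary UC execution and for the adversary's extra final invocation, respectively.

```python
def exc_prime(run_protocol, adversary_post, z):
    """Everlasting-UC execution with long-term tapes: run the ordinary
    execution to obtain Z's output m together with the long-term tapes
    of all ITIs, then invoke the adversary one final time on those
    tapes to obtain a; the overall output is EXC' := (m, a)."""
    m, long_term_tapes = run_protocol(z)   # ordinary UC execution
    a = adversary_post(long_term_tapes)    # A's extra final invocation
    return (m, a)

def dummy_adversary_post(tapes):
    """A dummy real adversary simply forwards the tapes unchanged."""
    return tapes
```

In the ideal world, the simulator's counterpart of `adversary_post` is free to output any simulated a of its choice, which is exactly the leeway Definition 13 grants it.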

Definition 13

(Everlasting UC with Long-term Tapes) A protocol \(\pi \) everlastingly UC realizes an ideal protocol \(\rho \) if, for any PPT adversary \(\mathcal {A} \), there exists a PPT simulator \(\mathcal {S}\) such that, for any PPT environment \(\mathcal {Z}\),

$$\begin{aligned} \{{\text { EXC }}'_{\pi ,\mathcal {A},\mathcal {Z}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\rho ,\mathcal {S},\mathcal {Z}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \end{aligned}$$

In Definition 13, the distinguisher does not get the long-term tapes directly; instead, the tapes go through the adversary. The real adversary \(\mathcal {A} \) can, without loss of generality, forward the tapes unchanged to the distinguisher (i.e. the dummy adversary). The simulator \(\mathcal {S}\) can replace the long-term tapes by any simulated a of its choice. We point out that Definition 13 is equivalent to Definition 12 when none of the ITIs in \(\pi \) or \(\rho \) have long-term output tapes.

It is easy to show that the composition theorem from [27] carries over to our setting: the long-term tapes of the honest parties are also given to the adversary/simulator at the end of the protocol execution; however, the simulator (when communicating with the environment) can replace them with values of his choice. Formally, this means that the long-term tapes are just a message sent from protocol to adversary (in the same way as, e.g. the state is sent in the case of adaptive corruption), and consequently, when proving the composition theorem, those messages are handled in exactly the same way as the messages resulting from adaptive corruption.

Functionalities

In this section, we define some commonly used functionalities that we will need for our results.

CRS The first functionality is the common reference string (CRS). Intuitively, the CRS is a string sampled from a given distribution \(\mathcal {D}\) by some trusted party and known to all parties prior to the start of the protocol.

Definition 14

(Common reference string (CRS)) Let \(\mathcal {D}_\uplambda \) (\(\uplambda \in {\mathbb {N}}\)) be an efficiently samplable distribution on \(\{0,1\}^*\). At the beginning of the protocol, the functionality \({\mathcal {F}}_{\mathrm {CRS}}^\mathcal {D}\) chooses a value r according to the distribution \(\mathcal {D}_\uplambda \) (where \(\uplambda \) is the security parameter) and sends r to the adversary and all parties \(P_i\).

Multiple commitment Here, we recall the functionality for a commitment scheme. Throughout the following description, we implicitly assume that the attacker is informed about each invocation and that the attacker controls the output of the functionality. We omit those messages from the description of the functionalities for readability. Note that to securely realize this functionality, a protocol must guarantee independence among different executions of the commitment protocol.

Definition 15

(Multiple Commitment) Let S and R be two parties, where we call S the sender and R the receiver. The functionality \(\mathcal {F}_{\textsf {MCOM}}^{S \rightarrow R,\ell }\) behaves as follows: upon the command \((\mathtt {commit},\mathsf {sid}, x)\), where \(x \in \{0,1\}^{\ell (\uplambda )}\), from S, send the message (\(\mathtt {committed},\mathsf {sid}\)) to R. Upon command (\(\mathtt {unveil},\mathsf {sid}\)) from S, send \((\mathtt {unveiled},\mathsf {sid},x)\) to R (with the matching \(\mathsf {sid}\)). Several commands \((\mathtt {commit})\) or \((\mathtt {unveil})\) with the same \(\mathsf {sid}\) are ignored.
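The bookkeeping of \(\mathcal {F}_{\textsf {MCOM}}\) can be sketched as a small stateful class. As in the text, the messages to and from the adversary are omitted, and all class and method names below are our own illustrative choices.

```python
class FMultipleCommitment:
    """Sketch of F_MCOM^{S->R}: commit stores x under sid and notifies
    R without revealing x; unveil releases exactly the stored x.
    Repeated commands with the same sid are ignored (return None)."""
    def __init__(self):
        self.values = {}       # sid -> committed value x
        self.unveiled = set()  # sids already unveiled

    def commit(self, sid, x):  # command (commit, sid, x) from S
        if sid in self.values:
            return None                       # repeated sid: ignored
        self.values[sid] = x
        return ("committed", sid)             # sent to R; x stays hidden

    def unveil(self, sid):     # command (unveil, sid) from S
        if sid not in self.values or sid in self.unveiled:
            return None                       # unknown sid or repeat
        self.unveiled.add(sid)
        return ("unveiled", sid, self.values[sid])   # sent to R
```

Binding and hiding are built in by construction: R learns nothing about x before unveil, and S cannot change the stored value afterwards.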

Oblivious Transfer Functionality The oblivious transfer functionality allows the receiver party to select a bit b and the sender party to send two messages \(m_0\) and \(m_1\) to the receiver in such a way that the sender never learns the bit b the receiver chose, and the receiver learns only the message \(m_b\) and nothing about \(m_{1-b}\).

Definition 16

(Oblivious Transfer (OT)) Let R and S be two parties. The functionality \(\mathcal {F}_{\textsf {OT}}^{S \rightarrow R,\ell }\) behaves as follows: upon receiving the command \(( transfer ,{{\mathsf {id}}},m_0,m_1)\) from S, with \(m_0,m_1 \in \{0,1\}^{\ell (\uplambda )}\), send the message \(( received ,{{\mathsf {id}}})\) to R; party R replies with \(( choice ,{{\mathsf {id}}},b)\), for \(b \in \{0,1\}\). Upon receiving \(( choice ,{{\mathsf {id}}},b)\) from R, send \(( deliver ,{{\mathsf {id}}},m_b)\) to R. We call S the sender and R the receiver.
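The message flow of \(\mathcal {F}_{\textsf {OT}}\) can likewise be sketched as a small class; adversary notifications are again omitted and the names are our own.

```python
class FObliviousTransfer:
    """Sketch of F_OT^{S->R}: S supplies (m0, m1); R later picks a bit
    b and receives only m_b. The functionality never tells S which
    bit was chosen, and R never sees m_{1-b}."""
    def __init__(self):
        self.pending = {}             # id -> (m0, m1)

    def transfer(self, oid, m0, m1):  # (transfer, id, m0, m1) from S
        self.pending[oid] = (m0, m1)
        return ("received", oid)      # notification sent to R

    def choice(self, oid, b):         # (choice, id, b) from R
        assert b in (0, 1) and oid in self.pending
        m0, m1 = self.pending.pop(oid)
        return ("deliver", oid, (m0, m1)[b])   # R learns m_b only
```

Note that `choice` pops the pending pair, so each transfer id can be resolved at most once, mirroring the one-shot nature of the functionality.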

Remark 17

Looking ahead, we note that we cannot define the protocol of Sect. 6 in the \(\mathcal {F}_{\textsf {OT}}\)-hybrid model or in the \(\mathcal {F}_{\textsf {MCOM}}\)-hybrid model. The former is because the protocol of Sect. 6 requires an OT with the additional property of statistical receiver privacy, which not all OT protocols that realize the \(\mathcal {F}_{\textsf {OT}}\) functionality provide. The latter is because the protocol requires a commitment scheme with the additional property of statistical hiding, which not all commitment schemes that realize the \(\mathcal {F}_{\textsf {MCOM}}\) functionality provide. Moreover, the protocol of Sect. 6 requires proving statements about the contents of a commitment, and as shown by [9] this is not possible using a UC commitment functionality.

Physical Assumptions

The functionality \(\mathcal {F}_{\textsf {HToken}}\) described in this section models generic fully malicious hardware tokens, including PUFs. A fully malicious hardware token is one whose state is not bounded a priori, whose creator can install arbitrary code inside it, and which can encapsulate an arbitrary number of (possibly fully malicious) tokens, called children, inside itself. As far as we know, this is the first functionality to integrate tamper-proof hardware tokens with PUFs, allowing us to design protocols that are agnostic to the type of hardware token used, as the functionality can be instantiated with any of them. Moreover, in the particular case of PUFs, our model extends the PUFs-inside-PUF model of [4] to the more general case of Tokens-inside-Token.Footnote 5 We handle encapsulated tokens in the functionality by allowing the parent token (i.e. the token that contains other token(s)) to have oracle access to all its children during its evaluation; we believe that token encapsulation models a realistic capability of an adversary and that it is important to include it in our model for the soundness of the security analysis. We also note that \(\mathcal {F}_{\textsf {HToken}}\) is not PPT; this is because the functionality does not impose a restriction on the efficiency of the malicious code.

The functionality \(\mathcal {F}_{\textsf {HToken}}\) allows tokens to be transferred among parties by invoking \(\mathtt {handover}\); a token can only be queried by the party that currently owns it, by invoking \(\mathtt {query}\). Malicious tokens can be created by the adversary and may contain other tokens inside them. In contrast to [8], the adversary can “unwrap” encapsulated tokens by invoking \(\mathtt {openup}\) and read a malicious token’s state by invoking \(\mathtt {readout}\).

[Figure: formal description of the functionality \(\mathcal {F}_{\textsf {HToken}}\)]

Physically Uncloneable Functions (PUFs)

In a nutshell, a PUF is a noisy source of randomness. It is a hardware device that, upon physical stimuli, called challenges, produces physical outputs (that are measured), called responses. The response measured for each challenge of the PUF is unpredictable, in the sense that it is hard to predict the response of the PUF on a given challenge without first measuring the response of the PUF on the same (or a similar) challenge. When a PUF receives the same physical stimulus more than once, the responses produced may not be exactly equal (due to the added noise), but the Hamming distance between the responses is bounded by a parameter of the PUF.

A family of PUFs is a pair of algorithms \((\mathsf {PUFSamp},\mathsf {PUFEval})\), not necessarily PPT. \(\mathsf {PUFSamp}\) models the manufacturing process of the PUF: on input the security parameter, it draws an index \(\sigma \) that represents an instance of a PUF satisfying the security definitions for that security parameter (which we define later). \(\mathsf {PUFEval}\) models a physical stimulus applied to the PUF: upon a challenge input x, it invokes the PUF with x and measures the response y, which is returned as the output. A response y returned by the algorithm \(\mathsf {PUFEval}\) is a bitstring of length \({{\mathsf {rg}}}\). A formal definition follows.

Definition 18

(Physically Uncloneable Functions) Let \({{\mathsf {rg}}}\) denote the size (in bits) of the range of the PUF responses of a PUF family. The pair \(\mathsf {PUF}= (\mathsf {PUFSamp},\mathsf {PUFEval})\) is a PUF family if it satisfies the following properties.

  • Sampling. Let \(\mathcal {I}_\uplambda \) be an index set. On input the security parameter \(\uplambda \), the stateless and unbounded sampling algorithm \(\mathsf {PUFSamp}\) outputs an index \(\sigma \in \mathcal {I}_\uplambda \). Each \(\sigma \in \mathcal {I}_\uplambda \) corresponds to a family of distributions \(\mathcal {D}_\sigma \). For each challenge \(x \in \{0,1\}^\uplambda \), \(\mathcal {D}_\sigma \) contains a distribution \(\mathcal {D}_\sigma (x)\) on \(\{0,1\}^{{{\mathsf {rg}}}(\uplambda )}\). It is not required that \(\mathsf {PUFSamp}\) is a PPT algorithm.

  • Evaluation. On input \((1^\uplambda ,\sigma , x)\), where \(x \in \{0,1\}^\uplambda \), the evaluation algorithm \(\mathsf {PUFEval}\) outputs a response \(y \in \{0,1\}^{{{\mathsf {rg}}}(\uplambda )}\) according to the distribution \(\mathcal {D}_\sigma (x)\). It is not required that \(\mathsf {PUFEval}\) is a PPT algorithm.

Additionally, we require the PUF family to satisfy a reproducibility notion that we describe next. Reproducibility informally says that the responses produced by the PUF when queried on the same random challenge are always close.

Definition 19

(PUF Reproducibility) A PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\), for security parameter \(\uplambda \), is \(\delta \)-reproducible if for \(\sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda )\), a uniformly random challenge \(x \leftarrow \{0,1\}^\uplambda \), and \(y \leftarrow \mathsf {PUFEval}(\sigma ,x)\), \(y' \leftarrow \mathsf {PUFEval}(\sigma ,x)\), we have that

$$\begin{aligned} \Pr \left[ {\mathsf {hd}}(y,y') > \delta \right] \le \mathsf {negl}(\uplambda ) \end{aligned}$$

for a negligible function \(\mathsf {negl}(\uplambda )\).

Many PUF definitions in the literature [3, 4, 12, 30] have had problems with the super-polynomial nature of PUFs. In particular, the possibility of PUFs solving hard computational problems, such as discrete logarithms or factoring, was not excluded, or was excluded in an awkward way. We take our inspiration from the idea that a PUF can be thought of as a function selected at random from a very large set, and therefore cannot be succinctly described; it can, however, be efficiently simulated using lazy sampling. Conceptually, we will only consider PUFs that can be efficiently simulated by a stateful machine.

Definition 20

A polynomial-time (stateful) interactive Turing machine \((\mathsf {MSamp},\mathsf {MEval})\) is a lazy sampler for \((\mathsf {PUFSamp}, \mathsf {PUFEval})\) if, for all sequences \((x_1, \ldots , x_n)\) of inputs, the random variables \((Y_1, \ldots , Y_n)\) and \((Y'_1, \ldots , Y'_n)\), defined by the following experiments, are identically distributed:

$$\begin{aligned}&{{\mathsf {st}}}\leftarrow \mathsf {MSamp}(1^\uplambda ); Y_1 \leftarrow \mathsf {MEval}(x_1), \ldots , Y_n \leftarrow \mathsf {MEval}(x_n);\\&\sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda ); Y'_1 \leftarrow \mathsf {PUFEval}(\sigma , x_1), \ldots , Y'_n \leftarrow \mathsf {PUFEval}(\sigma , x_n); \end{aligned}$$

where \({{\mathsf {st}}}\) denotes the initial state of the TM \({\mathsf {M}}\).
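Definition 20 can be illustrated with a dictionary-based lazy sampler. For simplicity, the sketch below is noise-free (a \(\delta \)-reproducible PUF would additionally perturb each returned response within Hamming distance \(\delta \)); the class and method names are our own.

```python
import secrets

class LazyPUF:
    """Lazy sampler (MSamp, MEval) for an idealized noise-free PUF:
    the constructor plays the role of MSamp(1^λ), creating the empty
    table (the state st); eval() plays the role of MEval(x), drawing
    each response uniformly at first use and caching it. Repeated
    queries are thus consistent, although the exponential-size
    function table is never written down."""
    def __init__(self, rg_bits: int):
        self.rg = rg_bits      # response length rg(λ)
        self.table = {}        # the state st: challenge -> response

    def eval(self, x: bytes) -> int:
        if x not in self.table:                    # first measurement of x
            self.table[x] = secrets.randbits(self.rg)
        return self.table[x]
```

Because each response is sampled independently and uniformly on first use, the joint distribution of any query sequence matches that of a uniformly random function, which is the idealized PUF this sketch assumes.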

Security of PUFs The security of PUFs has been mainly defined by the properties of unpredictability and uncloneability [1, 3, 4, 24, 30]. In Sect. 4.1.1, we introduce a novel unpredictability notion for PUFs, and we later discuss why the standard unpredictability notion is not suited for our setting.

Fully Adaptive PUF Unpredictability

In contrast to the standard definition of unpredictability [3], in this work we require a stronger notion of adaptive unpredictability. Loosely speaking, unpredictability should capture the fact that it is hard to learn the response of the PUF on a given challenge without first querying the PUF on a similar challenge. Note that this implies uncloneability: if one could clone the PUF, one could use the cloned PUF to predict the answers of the original PUF. We express the similarity of inputs/outputs of the PUF in terms of the Hamming distance \({\mathsf {hd}}\); however, our results can easily be adapted to other metrics.

Definition 21

(Adaptive PUF Unpredictability) A PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\), for security parameter \(\uplambda \), is \((\gamma ,\delta )\)-unpredictable if for all adversaries \(\mathcal {A} \), there exists a negligible function \(\mathsf {negl}(\uplambda )\), such that

$$\begin{aligned} \Pr \left[ {\mathsf {hd}}\left( y,\mathsf {PUFEval}(1^\uplambda ,\sigma ,x)\right) \le \delta \,\wedge \, \forall q \in \mathcal {Q}: {\mathsf {hd}}(x,q) \ge \gamma \,:\, \sigma \leftarrow \mathsf {PUFSamp}(1^\uplambda );\ x \leftarrow \{0,1\}^\uplambda ;\ y \leftarrow \mathcal {A} ^{\mathsf {PUFEval}(1^\uplambda ,\sigma ,\cdot )}(1^\uplambda ,x) \right] \le \mathsf {negl}(\uplambda ) \end{aligned}$$

where \(\mathcal {Q}\) is the list of all queries made by \(\mathcal {A} \).

The adaptive PUF unpredictability says that the only way to learn the output of \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,x)\) is to query the PUF on x (or something close enough to x). Our definition captures this by allowing adversary \(\mathcal {A} \) to know the challenge x before having oracle access to \(\mathsf {PUFEval}\).

The unsuitability of the standard PUF unpredictability of [3]. We first recall the standard unpredictability definition of [3]. As the definition itself is based on the notion of average min-entropy, for convenience, we present that first.

Definition 22

(Average Min-entropy [3]) The average min-entropy of the measurement \(\mathsf {PUFEval}(q)\) conditioned on the measurements of challenges \(\mathcal {Q} = \{q_1,\ldots ,q_{\mathsf {poly}(\uplambda )}\}\) for the PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) is defined by

$$\begin{aligned} {\tilde{\mathsf {H}}}_{\infty }\left( \mathsf {PUFEval}(q)\,|\,\mathsf {PUFEval}(\mathcal {Q})\right) := -\log \left( {\mathbb {E}}_{y \leftarrow \mathsf {PUFEval}(\mathcal {Q})}\left[ \max _{y'} \Pr \left[ \mathsf {PUFEval}(q) = y' \,|\, \mathsf {PUFEval}(\mathcal {Q}) = y\right] \right] \right) \end{aligned}$$

where the probability is taken over the choice of \(\sigma \) from \(\mathcal {I}_\uplambda \) and the choice of possible PUF responses on challenge q. The term \(\mathsf {PUFEval}(\mathcal {Q})\) denotes the sequence of random variables \(\mathsf {PUFEval}(q_1),\ldots , \mathsf {PUFEval}(q_{\mathsf {poly}(\uplambda )})\), each corresponding to an evaluation of the PUF on challenge \(q_k\), for \(1 \le k \le \mathsf {poly}(\uplambda )\).

Definition 23

(PUF Unpredictability [3]) A \(({{\mathsf {rg}}}, \delta )\)-PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) for security parameter \(\uplambda \) is \((\gamma (\uplambda ), {\mathsf {m}}(\uplambda ))\)-unpredictable if for any \(q \in \{0,1\}^\uplambda \) and challenge list \(\mathcal {Q} = \{q_1,\ldots ,q_{\mathsf {poly}(\uplambda )}\}\), one has that, if for all \(1 \le k \le \mathsf {poly}(\uplambda )\) the Hamming distance satisfies \({\mathsf {hd}}(q,q_k) \ge \gamma (\uplambda )\), then the average min-entropy satisfies \({\tilde{\mathsf {H}}}_{\infty }(\mathsf {PUFEval}(q)|\mathsf {PUFEval}(\mathcal {Q})) \ge {\mathsf {m}}(\uplambda )\), where \(\mathsf {PUFEval}(\mathcal {Q})\) denotes the sequence of random variables \(\mathsf {PUFEval}(q_1),\ldots ,\mathsf {PUFEval}(q_{\mathsf {poly}(\uplambda )})\), each corresponding to an evaluation of the PUF on challenge \(q_k\). Such a PUF family is called a \(({{\mathsf {rg}}}, \delta , \gamma , {\mathsf {m}})\)-PUF family.

We now argue why Definition 23 is not suited for our setting. We present a PUF family that satisfies Definition 23 and yet allows an adversary to predict the response of the PUF on a challenge never queried to the PUF (and far apart from all queried challenges). We prove the following theorem.

Theorem 24

There exists a PUF family \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) that satisfies Definition 23 (with \({\mathsf {m}}> 0\)), such that there exists a PPT adversary \(\mathcal {A} \) that can predict with probability 1 the output of the PUF on an input far from every input previously queried to the PUF (thereby contradicting Definition 21).

Proof

Let \(\mathsf {PUF}= (\mathsf {PUFSamp}, \mathsf {PUFEval})\) be a PUF family for challenges of size \((n+1)\) bits and responses of size n bits. We construct the family \(\mathsf {PUF}\) as follows:

  • \(\mathsf {PUFSamp}(1^\uplambda )\): Sample \(x^* \leftarrow \{0,1\}^n\) uniformly at random and a uniformly random function \(f: \{0,1\}^n \rightarrow \{0,1\}^n\). Return \(\sigma :=(x^*,f)\).

  • \(\mathsf {PUFEval}(1^\uplambda ,\sigma , x)\):

    − Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,{0^{n+1}})\), output \(x^*\).

    − Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,0\Vert m)\) with \(m\ne 0^n\), output \(f(m)\).

    − Upon query \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,1\Vert m)\), output \(f(m \oplus x^*)\).

We first show how an adversary can predict with probability 1 the output of a PUF from the family described above on a fresh input. Given an arbitrary fresh challenge \(b\Vert m\), the adversary can find the corresponding response \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,b\Vert m)\), without ever querying the PUF on \(b\Vert m\), as follows: compute \(x^*:=\mathsf {PUFEval}(1^\uplambda ,\sigma ,0^{n+1})\) and then \(y:=\mathsf {PUFEval}(1^\uplambda ,\sigma ,{\bar{b}}\Vert m\oplus x^*)\). Note that both queries are far apart from \(b\Vert m\), yet the adversary learns \(y = \mathsf {PUFEval}(1^\uplambda ,\sigma ,{\bar{b}}\Vert m\oplus x^*) = \mathsf {PUFEval}(1^\uplambda ,{\sigma }, b\Vert m)\).
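The construction and the attack can be sketched as follows. This is a toy model with a small, hypothetical parameter `N`; a lazily sampled dictionary stands in for the random function f, and integers stand in for bitstrings.

```python
import random

N = 16                     # toy response length in bits; challenges are (N+1)-bit
rng = random.Random(1234)  # deterministic sampling for reproducibility

def puf_samp():
    """PUFSamp: sample x* <- {0,1}^N and a random function f (sampled lazily)."""
    x_star = rng.getrandbits(N)
    table = {}
    def f(m):
        if m not in table:
            table[m] = rng.getrandbits(N)
        return table[m]
    return (x_star, f)

def puf_eval(sigma, b, m):
    """PUFEval on challenge b||m, following the contrived family in the proof."""
    x_star, f = sigma
    if b == 0:
        return x_star if m == 0 else f(m)  # 0^{N+1} leaks x*; 0||m -> f(m)
    return f(m ^ x_star)                   # 1||m -> f(m XOR x*)

sigma = puf_samp()
x_star = puf_eval(sigma, 0, 0)             # adversary's first query: 0^{N+1}
b = 1
m = x_star ^ 0b0101010101010101            # a fresh target b||m with m != x*
prediction = puf_eval(sigma, 1 - b, m ^ x_star)  # second query: (1-b) || (m XOR x*)
assert prediction == puf_eval(sigma, b, m)       # the prediction is always correct
```

The assertion mirrors the argument in the proof: the two adversarial queries `0^{N+1}` and `(1-b)||(m XOR x*)` differ from the target challenge, yet they determine its response exactly.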

Now we show that the PUF family described above satisfies Definition 23.Footnote 6 Fix any polynomial-size challenge list \(\mathcal {Q} = \{q_1,\ldots ,q_{\kappa -1}\}\) and any challenge query \(q_{\kappa }\) such that, for all \(k\in [\kappa -1]:{\mathsf {hd}}(q_{\kappa },q_k) \ge 1\); this is the minimal possible distance requirement and thus covers all choices of \(\gamma \). Since f is a random function, \(\mathsf {PUFEval}(1^\uplambda ,\sigma ,q_\kappa )\) has maximal average min-entropy, unless the PUF is queried on two inputs \((q_{i}, q_j)\) that are mapped to the same input of f. Note that this happens only if \(q_i \oplus q_j = 1 \Vert x^*\). Thus, all we need to show is that, for any fixed set of queries \(\{q_1,\ldots ,q_{\kappa }\}\), the probability over the random choice of \(x^*\) that \(q_i \oplus q_j = 1 \Vert x^*\) for some pair (i, j) is negligible. This holds because

$$\begin{aligned}&\Pr _{x^* \leftarrow \{0,1\}^n}\left[ \exists ~(i,j): q_i \oplus q_j = 1 \Vert x^*\right] \le \left( {\begin{array}{c}\kappa \\ 2\end{array}}\right) \cdot 2^{-n} \le \kappa ^2 \cdot 2^{-n} \end{aligned}$$

by a union bound over the at most \(\kappa ^2\) pairs (i, j). Since \(\kappa \) is polynomial in \(\uplambda \), the above expression approaches 0 exponentially fast as n grows. This concludes our proof. \(\square \)
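As a numeric sanity check, the sketch below computes, for toy parameters (the values of `n` and `kappa` are illustrative), the exact probability over \(x^*\) that some pair of fixed queries satisfies \(q_i \oplus q_j = 1 \Vert x^*\), and compares it with a \(\kappa ^2 \cdot 2^{-n}\)-style bound.

```python
from itertools import combinations
import random

n = 10          # toy challenge-body length
kappa = 8       # number of fixed queries
rng = random.Random(7)
queries = [rng.getrandbits(n + 1) for _ in range(kappa)]  # fixed (n+1)-bit queries

# Values of x* in {0,1}^n for which some pair collides: q_i XOR q_j = 1 || x*.
bad = {(qi ^ qj) & ((1 << n) - 1)
       for qi, qj in combinations(queries, 2)
       if (qi ^ qj) >> n == 1}       # the top bit of the XOR must be 1

exact = len(bad) / 2**n              # exact collision probability over random x*
bound = kappa**2 * 2**-n             # the kappa^2 * 2^{-n} upper bound
assert exact <= bound
print(f"exact probability {exact:.5f} <= bound {bound:.5f}")
```

The exact probability is at most \(\binom{\kappa }{2}/2^n\) by construction, so the assertion holds for any choice of query set.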

Contrasting our unpredictability definition with the one of [3]. The motivation behind our newly proposed adaptive unpredictability notion (Definition 21) is that the standard PUF unpredictability notion of [3] implicitly assumes that PUFs depend only on random physical factors (likely introduced during manufacturing); in particular, it does not capture families of PUFs that could have some programmability built in, allowing one to predict the output of a PUF on an input by querying a completely different input. What our new PUF unpredictability notion explicitly captures is that a “good” PUF must depend solely on random physical factors and, in particular, cannot have any form of programmability. On a more philosophical level, we believe that our new notion is what was meant to be modelled as a property for PUFs from the start. Since PUFs are inherently randomized devices that are specifically built to be unpredictable and uncontrollable, a PUF family such as the one described above should not be considered a “good” PUF family; however, the previous notion fails to capture this fact.Footnote 7

Overall, our new definition of unpredictability does not hinder in any way the progress and development of new real-world PUFs, but merely addresses a technical oversight in the previous unpredictability notion. We therefore conjecture that real-world PUFs satisfying the unpredictability notion of [3] will most likely also satisfy our unpredictability notion, since real PUFs are inherently randomized physical devices built to be unpredictable and uncontrollable.

Impossibility of Everlasting OT with Malicious Hardware

In this section, we prove the impossibility of realizing everlastingly secure oblivious transfer (OT) in the hardware token model, even in the presence of a trusted setup. The result carries over immediately to any secure computation protocol due to the completeness of OT [23]. We consider honest tokens to be stateful but non-erasable (Definition 25), while the tokens produced by the adversary may be malicious but cannot encapsulate other tokens (note that this restriction on malicious tokens only makes our result stronger, as the impossibility holds even against an adversary that is more limited). The adversary \(\mathcal {A} \) is PPT during the execution of the protocol, but \(\mathcal {A} \) becomes unbounded after the execution is over (i.e. everlasting security). This extends the seminal result of Goyal et al. [18], which shows the impossibility of statistically (as opposed to everlastingly) UC-secure oblivious transfer from stateless (as opposed to non-erasable) tokens. We stress, however, that our negative result does not contradict the work of Döttling et al. [13, 14], since they assume honest tokens to be non-resettable or bounded-resettable (i.e. tokens cannot be reset to a previous state, or can be reset only up to an a priori bound), whereas for our result to hold the tokens must be non-erasable.

In the following, we show the main theorem of the section. The result holds under the assumption that the token scheduling is fixed a priori, which captures most of the known protocols for secure computation [13, 14, 20]. The scheduling of the tokens determines the exchange of the tokens among the parties. We stress that we do not impose any restriction on which party holds each hardware token at the end of the execution. For a formal definition of OT, we refer the reader to Sect. 2.2. We begin by defining “non-erasability” for hardware tokens.

Definition 25

(Non-erasable hardware token) A (stateful) hardware token is said to be non-erasable if any state ever recorded by the token can be efficiently retrieved.

Note in particular that stateless tokens are trivially non-erasable, as they cannot keep any state.
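A minimal sketch of Definition 25: a stateful token wrapper that records every state it ever assumes, so that any past state can be retrieved efficiently. All interface names are illustrative and not part of the formal model.

```python
class NonErasableToken:
    """Stateful token whose entire state history remains retrievable.

    `program` maps (state, query) -> (new_state, response).
    """
    def __init__(self, program, initial_state):
        self.program = program
        self.state = initial_state
        self.history = [initial_state]   # every state ever recorded

    def query(self, q):
        self.state, response = self.program(self.state, q)
        self.history.append(self.state)  # non-erasability: nothing is lost
        return response

    def retrieve(self, i):
        """Efficiently retrieve the i-th recorded state."""
        return self.history[i]

# Toy program: a counter token answering q + counter, then incrementing.
token = NonErasableToken(lambda s, q: (s + 1, q + s), initial_state=0)
answers = [token.query(10), token.query(10)]
print(answers)  # [10, 11]
assert token.retrieve(0) == 0 and token.retrieve(1) == 1
```

A stateless token is the degenerate case in which `program` ignores and never changes the state, so the (constant) history trivially satisfies the definition.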

Theorem 26

Let \(\Pi \) be a hardware token-based everlasting OT protocol between Alice (i.e. sender) and Bob (i.e. receiver) where the honest tokens are non-erasable and the scheduling of the tokens is fixed. Then, at least one of the following holds:

  • There exists an everlasting adversary \(\mathcal {S} '\) that uses malicious and stateful hardware tokens such that \(\mathbf {Adv}_{\mathcal {S} '}^\Pi \ge \epsilon (\uplambda )\), or

  • there exists an everlasting adversary \(\mathcal {R} '\) that uses malicious and stateful hardware tokens such that \(\mathbf {Adv}_{\mathcal {R} '}^\Pi \ge \epsilon (\uplambda )\),

for some non-negligible function \(\epsilon (\uplambda )\).

Proof

The proof consists of the following sequence of modified simulations. Let the game \({\mathfrak {G}}_{0}\) define an everlastingly secure OT protocol for \(\mathcal {S} \) and \(\mathcal {R} \). Then, by assumption, we have that for all everlasting adversaries \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that

$$\begin{aligned} \mathbf {Adv}_{\mathcal {S} '}^{{\mathfrak {G}}_{0}} \le \mathsf {negl}(\uplambda ) \text { and } \mathbf {Adv}_{\mathcal {R} '}^{{\mathfrak {G}}_{0}} \le \mathsf {negl}(\uplambda ). \end{aligned}$$

We define a quasi-semi-honest adversary to be an adversary that behaves semi-honestly but keeps a log of all queries ever made to the hardware token (which is possible by the non-erasable token assumption). Let the game \({\mathfrak {G}}_{1}\) define an everlastingly secure OT protocol where \(\mathcal {S} '\) and \(\mathcal {R} '\) are quasi-semi-honest. Since we are strictly reducing the capabilities of the adversaries and the tokens are non-erasable, we can state the following lemma.

Lemma 27

For all quasi-semi-honest \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that

$$\begin{aligned} \mathbf {Adv}_{\mathcal {S} '}^{{\mathfrak {G}}_{1}} \le \mathsf {negl}(\uplambda ) \text { and } \mathbf {Adv}_{\mathcal {R} '}^{{\mathfrak {G}}_{1}} \le \mathsf {negl}(\uplambda ). \end{aligned}$$

Let \({\mathfrak {G}}_{2}\) be the same as \({\mathfrak {G}}_{1}\) except that whenever \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) queries a token received from \(\mathcal {R} \) (resp. \(\mathcal {S} \)) that will later be returned to \(\mathcal {R} \) (resp. \(\mathcal {S} \)), then instead of making that query to the token, \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) queries \(\mathcal {R} \) (resp. \(\mathcal {S} \)) directly, who answers as the token would have. Since the distribution of the answers to these queries does not change, we can state the following lemma.

Lemma 28

For all quasi-semi-honest \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that

$$\begin{aligned} \mathbf {Adv}_{\mathcal {S} '}^{{\mathfrak {G}}_{2}} \le \mathsf {negl}(\uplambda ) \text { and } \mathbf {Adv}_{\mathcal {R} '}^{{\mathfrak {G}}_{2}} \le \mathsf {negl}(\uplambda ). \end{aligned}$$

Let game \({\mathfrak {G}}_{3}\) be exactly the same as \({\mathfrak {G}}_{2}\) except that whenever \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) sends a token to \(\mathcal {R} \) (resp. \(\mathcal {S} \)) that will not return to \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)), then \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) sends a description of the token instead. Since we consider everlasting adversaries, we assume that after the execution of the protocol all tokens can be read out. Therefore, both parties will have the descriptions of all the tokens, even of those that are not sent to the other party. Note that at this point there are no hardware tokens involved, only descriptions of tokens. Therefore, a quasi-semi-honest adversary is identical to a semi-honest everlasting one.Footnote 8

Lemma 29

For all semi-honest everlasting \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that

$$\begin{aligned} \mathbf {Adv}_{\mathcal {S} '}^{{\mathfrak {G}}_{3}} \le \mathsf {negl}(\uplambda ) \text { and } \mathbf {Adv}_{\mathcal {R} '}^{{\mathfrak {G}}_{3}} \le \mathsf {negl}(\uplambda ). \end{aligned}$$

We point out that a semi-honest unbounded adversary \(\mathcal {S} '\) (resp. \(\mathcal {R} '\)) is also a semi-honest everlasting adversary, since during the execution of the protocol it performs only the honest (PPT) actions. We are now in a position to state the final lemma.

Lemma 30

For all semi-honest unbounded \(\mathcal {S} '\) and \(\mathcal {R} '\), it holds that

$$\begin{aligned} \mathbf {Adv}_{\mathcal {S} '}^{{\mathfrak {G}}_{3}} \le \mathsf {negl}(\uplambda ) \text { and } \mathbf {Adv}_{\mathcal {R} '}^{{\mathfrak {G}}_{3}} \le \mathsf {negl}(\uplambda ). \end{aligned}$$

It was shown in [2] that it is not possible to build an OT protocol secure against semi-honest unbounded adversaries (even in the presence of a trusted setup), which gives us a contradiction and concludes our proof. \(\square \)

Everlasting Commitment from Fully Malicious PUFs

In this section, we build an everlastingly secure UC commitment scheme from fully malicious PUFs. Let \(\mathcal {C}= (\mathsf {Com},\mathsf {Open})\) be a statistically hiding UC-secure commitment scheme, let \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) be a 1-out-of-2 statistically receiver-private UC-secure OT, let \(f: \{0,1\}^\uplambda \rightarrow \{0,1\}^\uplambda \) be a one-way permutation, and let \(H: \{0,1\}^{d(\uplambda )} \times \{0,1\}^{\ell (\uplambda )} \rightarrow \{0,1\}^c\) be a strong randomness extractor, where d and \(\ell \) are two polynomials such that \(H\) allows for \((\ell (\uplambda )-c)\)-many bits of entropy loss, for \(c:= |m\Vert \mathsf {decom}|\). Let

$$\begin{aligned} R_1 := \left\{ \begin{array}{c} \left( y,\{\mathsf {com}_i\}_{i\in [\ell (\uplambda )]}\right) , \\ \left( \{m_i,\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]}, x\right) \end{array} \left| \begin{array}{c} y = f(x) \vee \\ \{m_i = \mathsf {Open}(\mathsf {decom}_i,\mathsf {com}_i)\}_{i\in [\ell (\uplambda )]}\end{array} \right\} \right. \end{aligned}$$

and let

$$\begin{aligned} R_2 := \left\{ \begin{array}{c} \left( \begin{array}{c} y, \mathsf {seed}, m, \mathsf {com}, \omega , \\ \{\mathsf {com}_i, q_i^0, q_i^1\}_{i\in [\ell (\uplambda )]} \end{array}\right) \\ \left( \begin{array}{c} k, \mathsf {decom}, x, \\ \{\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]} \end{array}\right) \end{array} \left| \begin{array}{c} y = f(x) \vee \\ \left( \begin{array}{c} m\Vert \mathsf {decom}=H(\mathsf {seed},k) \oplus \omega \wedge \\ m = \mathsf {Open}(\mathsf {com}, \mathsf {decom})) \wedge \\ \left\{ \begin{array}{c}s_i = \mathsf {Open}(\mathsf {decom}_i,\mathsf {com}_i) \\ \wedge {\mathsf {hd}}(s_i,q_i^{k_i}) \le \delta \end{array}\right\} _{i\in [\ell (\uplambda )]} \end{array} \right) \end{array}\right. \right\} . \end{aligned}$$

We denote by \((\mathcal {P}_1, \mathcal {V}_1)\) and \((\mathcal {P}_2, \mathcal {V}_2)\) the statistically witness-indistinguishable arguments of knowledge (SWIAoK) for the relations \(R_1\) and \(R_2\), respectively. Our commitment scheme is described next.

[Figure b: the everlasting UC commitment scheme]

Theorem 31

Let

  • \(\mathcal {C}=(\mathsf {Com},\mathsf {Open})\) be a statistically hiding and computationally binding commitment scheme,

  • \((\mathsf {Sender}_{{\mathsf {OT}}}, \mathsf {Receiver}_{{\mathsf {OT}}})\) be a UC-secure 1-out-of-2 statistically receiver-private oblivious transfer,

  • \(H: \{0,1\}^{d(\uplambda )} \times \{0,1\}^{\ell (\uplambda )} \rightarrow \{0,1\}^c\) be a strong randomness extractor, where d and \(\ell \) are two polynomials such that \(H\) allows for \((\ell (\uplambda )-c)\)-many bits of entropy loss,

  • and let \((\mathcal {P}_1, \mathcal {V}_1)\) and \((\mathcal {P}_2,\mathcal {V}_2)\) be SWIAoK systems for the relations \(R_1\) and \(R_2\), respectively.

Then, the protocol above everlastingly UC-realizes the functionality \(\mathcal {F}_{\textsf {MCOM}}\) in the \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\)-hybrid model.

Proof

We consider the cases of the two corrupted parties separately. The proof consists of a series of hybrids, and we argue the indistinguishability of neighbouring experiments. Finally, we describe a simulator that reproduces the real-world protocol towards the corrupted party while interacting with the ideal functionality.

Corrupted Bob (receiver) Consider the following sequence of hybrids, with \(\mathcal {H}_0\) being the protocol as defined above in interaction with \(\mathcal {A} \) and \(\mathcal {Z}\):

\(\mathcal {H}_1\): Defined exactly as \(\mathcal {H}_0\) except that, for all executions of the commitment and opening routines, the SWIAoKs for \(R_1\) and \(R_2\) are computed using the knowledge of x, the pre-image of y. This is possible as the \(\mathcal {F}_{\textsf {CRS}}\) functionality is simulated by the simulator, which samples \(y = f(x)\) such that it knows x. In order to avoid a trivial distinguishing attack, we additionally require Alice to explicitly check that \(\forall i \in [\ell (\uplambda )]: {\mathsf {hd}}(\beta _i, q_i^{k_i}) \le \delta \) and to abort (prior to computing the SWIAoK) if the condition is not satisfied. The two protocols are statistically indistinguishable due to the statistical witness indistinguishability of the SWIAoK scheme. In particular, for all unbounded distinguishers \(\mathcal {D}\) querying the functionality polynomially many times, it holds that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_0,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_1,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_2, \dots , \mathcal {H}_{\ell (\uplambda ) + 1}\): Each \(\mathcal {H}_{1+i}\) for \(i \in [\ell (\uplambda )]\) is defined exactly as \(\mathcal {H}_1\) except that in all of the sessions Alice uses the simulator of the statistically receiver-private OT protocol (which implements the \(\mathcal {F}_{\textsf {OT}}\) functionality) to run the first i instances of the oblivious transfers. Note that the simulator (using the knowledge of the CRS trapdoor) returns both of the inputs of the sender, in this case \((p_i^0, p_i^1)\). By statistical receiver privacy of the oblivious transfer, it holds that the simulated execution is statistically close to an honest run, and therefore, we have that for all unbounded distinguishers \(\mathcal {D}\) that query \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) polynomially many times:

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_1,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_{\ell (\uplambda ) + 1},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{\ell (\uplambda ) + 2}, \dots , \mathcal {H}_{2 \cdot \ell (\uplambda ) + 1}\): Each \(\mathcal {H}_{\ell (\uplambda )+1+i}\) for \(i \in [\ell (\uplambda )]\) is defined exactly as \(\mathcal {H}_{\ell (\uplambda ) + 1}\) with the difference that, in all of the sessions, the first i commitments \(\mathsf {com}_i\) are computed as \(\mathsf {Com}(r_i)\), for some random \(r_i\) in the appropriate domain. Note that the corresponding decommitments are no longer used in the computation of the SWIAoK. Therefore, the statistically hiding property of the commitment scheme guarantees that the neighbouring simulations are statistically close for all unbounded \(\mathcal {D}\). That is,

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{\ell (\uplambda ) + 1},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_{2\ell (\uplambda ) + 1},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2}\): Let n be a bound on the total number of sessions. The hybrid \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2}\) is defined as the previous one except that Alice chooses random values \((k_1, \dots , k_n)\), each in \(\{0,1\}^{\ell (\uplambda )}\), at the beginning of the execution. In the i-th session, Alice uses the value \(k_i\) instead of a fresh k in the interaction with the functionality \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\). The changes between the two hybrids are purely syntactic, and therefore, it holds that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 1},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \ \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 3}\): Let f be the following deterministic stateless oracle: f is initialized with the initial state of the physical token sent by Bob, the tuples \((k_1, \dots , k_n)\), and a random tape. On input an index i, a set \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\), and a set \(\{p_j\}_{j\in [\ell (\uplambda )]}\), the oracle f returns 1 if and only if for all \(j \in [\ell (\uplambda )] :{\mathsf {hd}}(q_j^{k_{i,j}}, \beta _j) \le \delta \), where \(\beta _j\) is the output of the token on input \(p_j\). In this hybrid, Alice no longer queries the token but computes a valid opening for the i-th commitment only if f returns 1 on inputs i, \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\), and \(\{p_j\}_{j\in [\ell (\uplambda )]}\), where the elements \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\) and \(\{p_j\}_{j\in [\ell (\uplambda )]}\) are those defined in the i-th session. If f returns 0, then Alice interrupts all of the executions simultaneously. Note that this modification does not affect the view of the adversary: since Alice keeps ownership of the token, the state of the token is not included in the long-term tapes. Also note that Alice never uses the values \(\beta _j\), except for the check mentioned above. Thus, we have that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 2},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 3},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\): Let \(\mathsf {F}_i\) be the set of tuples \(\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]}\) and \(\{p_j\}_{j\in [\ell (\uplambda )]}\) such that f, on input i and those tuples, returns 0. Note that \(\mathsf {F}_i\) is well defined as soon as Bob sends the token to Alice. In the i-th session, Alice no longer queries f but simply checks whether \((\{q_j^0, q_j^1\}_{j\in [\ell (\uplambda )]} ,\{p_j\}_{j\in [\ell (\uplambda )]}) \in \mathsf {F}_i\) and aborts all of the executions if this is the case. We denote by \(\gamma \in \{1,\dots , n, \infty \}\) the session in which Alice aborts, with \(\gamma = \infty \) if no abort occurs. Here, the two hybrids need to be equivalent only up to the first query to f that returns 0; thus

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 3},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\): Defined exactly as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\) with the difference that, for all sessions i of the protocol, \(\omega _i\) is computed as \(\mathcal {H}_i \oplus m\Vert \mathsf {decom}\), where \(\mathcal {H}_i\) is a random string in \(\{ 0,1\}^c\). To prove the indistinguishability of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 4}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\), we define the intermediate hybrids \((\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0}, \dots , \mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n})\), where in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i}\) the strings \((\omega _1, \dots , \omega _i)\) are computed as in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\), whereas the strings \((\omega _{i+1}, \dots , \omega _n)\) are computed as in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4}\). Note that \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0} = \mathcal {H}_{2 \cdot \ell (\uplambda ) +4}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n} = \mathcal {H}_{2 \cdot \ell (\uplambda ) +5}\). By definition, the hybrids \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i-1}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i}\) differ only in the value of \(\mathcal {H}_i\) (which is \(H(\mathsf {seed}, k_i)\) in the former case and a random string in the latter). Note that \(k_i\) is used only in the computation of \(\mathcal {H}_i\) and that the only variable that depends on \(k_i\) is \(\gamma \). Since \(\gamma \) is from a set of size \(n+1\), we can bound from above the entropy loss of \(k_i\) by \(\log (n+1)\)-many bits. Recall that \((n+1) \ll 2^\uplambda \); therefore, we have that \(\ell (\uplambda ) - c > \log (n+1)\), for an appropriate choice of \(\ell (\cdot )\). Hence, by the strong randomness extraction property of \(H\), we have that

$$\begin{aligned}&\{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i-1},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}\\&\quad \approx \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,i},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

Since the distance between \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,0}\) and \(\mathcal {H}_{2 \cdot \ell (\uplambda ) +4,n}\) is the sum of the bounds obtained by the leftover hash lemma [21], we can conclude that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +4},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +5},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$
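To make the leftover hash lemma step explicit, the entropy argument can be summarized as follows, assuming \(H\) is drawn from a universal family (a standard sufficient condition for strong extraction; the precise constants depend on the variant of the lemma used):

```latex
% k_i is uniform over \{0,1\}^{\ell(\uplambda)} and the only leakage about it
% is \gamma, which takes at most n+1 values, hence
\tilde{\mathsf{H}}_{\infty}(k_i \mid \gamma) \;\ge\; \ell(\uplambda) - \log(n+1).
% The leftover hash lemma then bounds the statistical distance
\Delta\bigl((\mathsf{seed}, H(\mathsf{seed}, k_i)),\, (\mathsf{seed}, U_c)\bigr)
  \;\le\; \tfrac{1}{2}\sqrt{2^{\,c - \ell(\uplambda) + \log(n+1)}},
% which is negligible whenever
% \ell(\uplambda) - c \ge \log(n+1) + \omega(\log \uplambda).
```

Summing this distance over the n intermediate hybrids keeps the total statistical distance negligible, since n is polynomial in \(\uplambda \).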

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\): Defined as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\) except that in all sessions \(\mathsf {com}\) is a commitment to a random string s. Note that in the execution of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 5}\) the value of \(\mathsf {decom}\) is masked by the random string \(\mathcal {H}_i\) and is therefore information-theoretically hidden from the adversary. By the statistically hiding property of \(\mathsf {Com}\), we have that for all unbounded distinguishers \(\mathcal {D}\) the following holds:

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +5},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) +6},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7}\): Defined as \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\) except that Alice opens the commitment to an arbitrary message \(m'\). We observe that the execution of \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6}\) is completely independent of the message m, except when m is sent to Bob in the clear in the opening phase. Therefore, we have that for all unbounded distinguishers \(\mathcal {D}\) that query the functionality polynomially many times:

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 6},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\({\mathsf {S}} \): We now define \({\mathsf {S}} \) as a simulator in the ideal world that engages the adversary in the simulation of a protocol when queried by the ideal functionality on input \((\mathsf {committed}, \mathsf {sid})\). The interaction of \({\mathsf {S}} \) with the adversary works exactly as specified in \(\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7}\), with the only difference that the message \(m'\) is set to be equal to x, where \((\mathsf {unveil}, \mathsf {sid}, x)\) is the message sent by the ideal functionality with the same value of \(\mathsf {sid}\). Since the simulation is unchanged from the adversary's point of view, we have that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_{2 \cdot \ell (\uplambda ) + 7},\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\rho ,{\mathsf {S}},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

By transitivity, we have that \(\mathcal {H}_0\) is statistically indistinguishable from the simulation of \({\mathsf {S}} \) in the view of the environment \(\mathcal {Z}\). We can conclude that our protocol everlastingly UC-realizes the commitment functionality \(\mathcal {F}_{\textsf {MCOM}}\) for any corrupted Bob. We stress that we allow Bob to be computationally unbounded and only require that the number of sessions is bounded by some polynomial in \(\uplambda \).

Corrupted Alice (committer) Let \(\mathcal {H}_0\) be the execution of the protocol as described above in interaction with \(\mathcal {A} \) and \(\mathcal {Z}\). We define the following sequence of hybrids:

\(\mathcal {H}_1\): Defined as \(\mathcal {H}_0\) except that the following algorithm is executed locally by Bob at the end of the commit phase of each session, in addition to Bob’s normal actions.

\(\mathcal {E}(1^\uplambda )\):

Let K be a bitstring of length \(\ell (\uplambda )\). The extractor parses the list of queries \(\mathcal {Q}\) that Alice sent to \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) before the last message of Bob in the commitment phase. Then, for each \(\mathcal {Q}_j \in \mathcal {Q}\), it checks whether there exist \(i \in [\ell (\uplambda )]\) and \(z \in \{0,1\}\) such that \({\mathsf {hd}}(\mathcal {Q}_j , p_i^z) \le \gamma \), where \(p_i^z\) is defined as in the original protocol. If this is the case, the extractor sets \(K_i = z\). If the value of \(K_i\) is already set to a different bit, the extractor aborts. If, after processing the whole list \(\mathcal {Q}\), there is some i such that \(K_i\) is undefined, the extractor aborts. Otherwise, it parses \(\omega \oplus H(\mathsf {seed},K)\) as \(m'\Vert \mathsf {decom}\) and returns \((m',\mathsf {decom})\).
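The extraction procedure can be sketched as follows. Integers stand in for bitstrings, and all names and parameter values are illustrative.

```python
def hamming(a, b):
    """Hamming distance between two integers viewed as bitstrings."""
    return bin(a ^ b).count("1")

def extract_key(queries, pads, gamma):
    """Sketch of the extractor E: reconstruct K from Alice's token queries.

    `queries` is the list Q of Alice's queries to the token functionality,
    `pads[i]` is the pair (p_i^0, p_i^1), and `gamma` is the distance bound.
    Returns K as a list of bits, or None when extraction aborts.
    """
    ell = len(pads)
    K = [None] * ell
    for q in queries:
        for i, (p0, p1) in enumerate(pads):
            for z, p in ((0, p0), (1, p1)):
                if hamming(q, p) <= gamma:
                    if K[i] is not None and K[i] != z:
                        return None        # conflicting assignments: abort
                    K[i] = z
    if any(bit is None for bit in K):
        return None                        # some K_i left undefined: abort
    return K

# Toy run: ell = 3, 8-bit values, gamma = 1.
pads = [(0b00001111, 0b11110000),
        (0b00110011, 0b11001100),
        (0b01010101, 0b10101010)]
queries = [0b00001110, 0b11001100, 0b10101010]  # close to p_1^0, p_2^1, p_3^1
K = extract_key(queries, pads, gamma=1)
assert K == [0, 1, 1]
```

A query matching none of the pads (e.g. `extract_key([0b00000000], pads, 1)`) leaves every \(K_i\) undefined, so the sketch returns `None`, mirroring the abort case of \(\mathcal {E}\).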

Note that Bob does not use the output of \(\mathcal {E}\) and therefore, for all distinguishers \(\mathcal {D}\), we have that:

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_0,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\mathcal {H}_1,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\(\mathcal {H}_2\): Let \(\mathcal {H}_2\) be defined as \(\mathcal {H}_1\) except that Bob outputs the message \(m'\) as computed by \(\mathcal {E}\) instead of the message m sent by Alice in the opening phase. For the indistinguishability of \(\mathcal {H}_1\) and \(\mathcal {H}_2\), we have to argue that if the opening of the adversary succeeds, then the extraction succeeds with overwhelming probability, i.e. \(m = m'\). For ease of exposition, we assume that the sessions are enumerated with unique identifiers, e.g. according to their initialization order. Let \(\mathsf {Abort}\) be the event that there exists a session \(j \in [n]\) in which the extractor aborts but the opening is successful. We prove the following lemma.

Lemma 32

$$\begin{aligned} \Pr \left[ \mathsf {Abort}\right] \le \mathsf {negl}(\uplambda ). \end{aligned}$$

Proof

We define \(\mathsf {NoUnique}\) as the event that there exists a session \(j \in [n]\) such that the corresponding \(K^j\), as defined in \(\mathcal {E}\), is not uniquely defined but the commitment is successful, i.e. there exists some \(i \in [\ell (\uplambda )]\) such that the extractor attempts to set \(K^j_i\) to both 0 and 1. Let \(\mathsf {NoDefined}\) be the event that there exist \(j \in [n]\) and \(i \in [\ell (\uplambda )]\) such that \(K^j_i\) is undefined at the end of the iteration, but the corresponding opening phase is successful. By definition of \(\mathcal {E}\), we have that

The rest of the proof proceeds as follows:

  • We show through a series of intermediate hybrids \((\mathcal {H}^\mathsf {U}_0, \dots , \mathcal {H}^\mathsf {U}_3)\) that the event \(\mathsf {NoUnique}\) happens only with negligible probability.

  • We show through a series of intermediate hybrids \((\mathcal {H}^\mathsf {D}_0, \dots , \mathcal {H}^\mathsf {D}_4)\) that the event \(\mathsf {NoDefined}\) happens only with negligible probability.

  • The proof of the lemma follows by a union bound.
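Assuming the two negligible bounds from the first two steps, the accounting in the three steps above can be written out as the following sketch, with \(\mathsf {negl}\) denoting a negligible function:

```latex
\Pr\left[\mathsf{Abort}\right]
  \le \Pr\left[\mathsf{NoUnique}\right] + \Pr\left[\mathsf{NoDefined}\right]
  \le \mathsf{negl}(\uplambda) + \mathsf{negl}(\uplambda)
  = \mathsf{negl}(\uplambda).
```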

We first derive a bound for the probability that the event \(\mathsf {NoUnique}\) happens. Consider the following sequence of hybrids.

\(\mathcal {H}^\mathsf {U}_0:\) The experiment \(\mathcal {H}^\mathsf {U}_0\) is identical to \(\mathcal {H}_2\) except that we sample some \(j^*\) uniformly from the identifiers associated with all sessions and some \(i^*\) uniformly from \([\ell (\uplambda )]\). Let n be a bound on the total number of sessions and let \(\mathsf {NoUnique}(j^*, i^*)\) be the event that \(\mathsf {NoUnique}\) happens in session \(j^*\) for the \(i^*\)-th bit. Since \(j^*\) and \(i^*\) are randomly chosen, we have that

$$\begin{aligned} \Pr \left[ \mathsf {NoUnique}(j^*, i^*)\right] \ge \frac{1}{n \cdot \ell (\uplambda )} \cdot \Pr \left[ \mathsf {NoUnique}\right] . \end{aligned}$$

\(\mathcal {H}^\mathsf {U}_1:\) The experiment \(\mathcal {H}^\mathsf {U}_1\) is defined as \(\mathcal {H}^\mathsf {U}_0\) except that it stops before the execution of the \(i^*\)-th OT in session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {U}_0\). The experiment then does the following:

  • Continue the execution of \(\mathcal {H}^\mathsf {U}_0\) from \(\textsf {st}\).

  • Input/output all the \(i^*\)-th OT messages from session \(j^*\).

  • Simulate all other messages internally.

The experiment sets the bit \(b=1\) if and only if the commitment of the \(j^*\)-th session succeeds. Let \(\mathsf {NoUnique}^*(j^*, i^*)\) be the event that \(K_{i^*}^{j^*}\) is not uniquely defined. Since the execution is unchanged from Alice's point of view, we have that

$$\begin{aligned} \Pr _{\mathcal {H}^\mathsf {U}_1}\left[ b=1 \wedge \mathsf {NoUnique}^*(j^*, i^*)\right] = \Pr _{\mathcal {H}^\mathsf {U}_0}\left[ \mathsf {NoUnique}(j^*, i^*)\right] . \end{aligned}$$

\(\mathcal {H}^\mathsf {U}_2:\) Defined as \(\mathcal {H}^\mathsf {U}_1\) except that the CRS for the OT is sampled in extraction mode. By the computational indistinguishability of the CRS, it holds that

$$\begin{aligned} \left| \Pr _{\mathcal {H}^\mathsf {U}_2}\left[ b=1 \wedge \mathsf {NoUnique}^*(j^*, i^*)\right] - \Pr _{\mathcal {H}^\mathsf {U}_1}\left[ b=1 \wedge \mathsf {NoUnique}^*(j^*, i^*)\right] \right| \le \mathsf {negl}(\uplambda ). \end{aligned}$$

\(\mathcal {H}^\mathsf {U}_3:\) Defined as \(\mathcal {H}^\mathsf {U}_2\) except that the extractor for the OT is used in the \(i^*\)-th OT of the \(j^*\)-th session. The experiment sets \(b=1\) if the simulation succeeds. Recall that the simulator outputs the choice bit \(b_{i^*}\) of the receiver and expects as input the value \(p_{i^*}^{b_{i^*}}\). Note that this implies that the value \(p_{i^*}^{1-b_{i^*}}\) is information-theoretically hidden from Alice. Also note that

by the simulation security of the OT we can rewrite

thus by Jensen’s inequality we have that

As we argued before, the value \(p_{i^*}^{1-b_{i^*}}\) is information-theoretically hidden from Alice. However, by definition of \(\mathsf {NoUnique}^*(j^*, i^*)\), Alice queries both \(p_{i^*}^0\) and \(p_{i^*}^1\) to the functionality \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\). It follows that the probability of the event \(\mathsf {NoUnique}^*(j^*, i^*)\) can be bounded by a function negligible in the security parameter. Therefore, we have that

$$\begin{aligned} \Pr _{\mathcal {H}_2}\left[ \mathsf {NoUnique}\right] \le \mathsf {negl}(\uplambda ). \end{aligned}$$

In order to bound the probability that the event \(\mathsf {NoDefined}\) happens in \(\mathcal {H}_2\), we define another sequence of hybrids.

\(\mathcal {H}^\mathsf {D}_0:\) The experiment \(\mathcal {H}^\mathsf {D}_0\) is identical to \(\mathcal {H}_2\) except that we sample some \(j^*\) uniformly from the identifiers associated with all sessions. Let n be a bound on the total number of sessions and let \(\mathsf {NoDefined}(j^*)\) be the event that \(\mathsf {NoDefined}\) happens in session \(j^*\). Since \(j^*\) is randomly chosen, we have that

$$\begin{aligned} \Pr \left[ \mathsf {NoDefined}(j^*)\right] \ge \frac{1}{n} \cdot \Pr \left[ \mathsf {NoDefined}\right] . \end{aligned}$$

\(\mathcal {H}^\mathsf {D}_1:\) The experiment \(\mathcal {H}^\mathsf {D}_1\) is defined as \(\mathcal {H}^\mathsf {D}_0\) except that it stops before the execution of the SWIAoK in the commitment of session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {D}_0\), under the assumption that no machine keeps a copy of the pre-image x after generating \(\mathsf {crs} \). Let \(\mathcal {P}^*\) be the following algorithm:

  • Continue the execution of \(\mathcal {H}^\mathsf {D}_0\) from \(\textsf {st}\).

  • Input/output all the SWIAoK messages from session \(j^*\).

  • Simulate all other messages internally.

The experiment \(\mathcal {H}^\mathsf {D}_1\) runs \(b \leftarrow \left\langle \mathcal {P}^*(\textsf {st};r), \mathcal {V}_1(y,\{com_i\}_{i\in [\ell (\uplambda )]})\right\rangle \), where \(\{com_i\}_{i\in [\ell (\uplambda )]}\) are the messages sent by Alice in session \(j^*\). Let \(\mathsf {NoDefined}^*(j^*)\) be the event that there exists some \(K_i^{j^*}\) that is undefined before the execution of the SWIAoK in session \(j^*\). Then, we have that

$$\begin{aligned} \Pr _{\mathcal {H}^\mathsf {D}_1}\left[ b=1 \wedge \mathsf {NoDefined}^*(j^*)\right] = \Pr _{\mathcal {H}^\mathsf {D}_0}\left[ \mathsf {NoDefined}(j^*)\right] \end{aligned}$$

by definition of \(\mathsf {NoDefined}(j^*)\).

\(\mathcal {H}^\mathsf {D}_2:\) Defined as \(\mathcal {H}^\mathsf {D}_1\) except that the extractor \((\{m_i,\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]}, x) \leftarrow \mathsf {Ext}^{\mathcal {P}^*(\textsf {st})} (y, \{\mathsf {com}_i\}_{i\in [\ell (\uplambda )]};r)\) is executed instead of the SWIAoK. Note that

by the extraction property of the SWIAoK we can rewrite

By Jensen’s inequality, we can conclude that

\(\mathcal {H}^\mathsf {D}_3:\) The experiment \(\mathcal {H}^\mathsf {D}_3\) is defined as \(\mathcal {H}^\mathsf {D}_2\) except that it stops before the execution of the SWIAoK in the opening of session \(j^*\). Let \(\textsf {st}\) be the state of all the machines in the execution of \(\mathcal {H}^\mathsf {D}_2\), under the assumption that no machine keeps a copy of the trapdoor x after generating \(\mathsf {crs} \). Let \(\mathcal {P}^*\) be the following algorithm:

  • Continue the execution of \(\mathcal {H}^\mathsf {D}_2\) from \(\textsf {st}\).

  • Input/output all the SWIAoK messages from the opening phase of session \(j^*\).

  • Simulate all other messages internally.

The experiment \(\mathcal {H}^\mathsf {D}_3\) runs \(b \leftarrow \langle \mathcal {P}^*(\textsf {st};r), \mathcal {V}_1(y,\mathsf {seed}, m, \mathsf {com}, \omega , \{com_i\}_{i\in [\ell (\uplambda )]}, \{q_i^0, q_i^1\}_{i\in [\ell (\uplambda )]})\rangle \), where the input of the verification algorithm corresponds to the messages exchanged in session \(j^*\). From Alice's point of view, this change is purely syntactic, and therefore we have that

\(\mathcal {H}^\mathsf {D}_4:\) Defined as \(\mathcal {H}^\mathsf {D}_3\) except that the extractor \((k, \{\mathsf {decom}_i\}_{i\in [\ell (\uplambda )]}, \mathsf {decom}, x) \leftarrow \mathsf {Ext}^{\mathcal {P}^*(\textsf {st})} (y,\mathsf {seed}, m, \mathsf {com}, \omega , \{com_i\}_{i\in [\ell (\uplambda )]}, \{q_i^0, q_i^1\}_{i\in [\ell (\uplambda )]};r)\) is executed instead of the SWIAoK. An argument identical as above can be used to show that

In the following analysis, we ignore the case where either of the two extracted witnesses is a valid trapdoor for the common reference string y, as this event can easily be ruled out with a reduction to the one-wayness of f. Let \(\beta _i \leftarrow \mathsf {Open}(\mathsf {com}_i, \mathsf {decom}_i')\). It is now enough to observe that the successful termination of the protocol implies that for all \(i \in [\ell (\uplambda )]\) we have that \({\mathsf {hd}}(q_i^{k_i}, \beta _i) \le \delta \), for some \(k = k_1||\dots ||k_{\ell (\uplambda )}\). By definition of \(\mathsf {NoDefined}^*(j^*)\), there exists some \(i^*\) such that \(\mathcal {A} \) never queried any \(p'\) to \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) with \({\mathsf {hd}}(p',p_{i^*}^0) \le \gamma \) or \({\mathsf {hd}}(p',p_{i^*}^1) \le \gamma \) before seeing the last message of the commitment phase. By the unpredictability of the PUF, it follows that, except with negligible probability, \(\beta _{i^*} \ne m_{i^*}\). Since \(\mathsf {decom}_{i^*}\) and \(\mathsf {decom}_{i^*}'\) are valid opening information for \(m_{i^*}\) and \(\beta _{i^*}\), respectively, we can derive the following bound

$$\begin{aligned} \Pr _{\mathcal {H}^\mathsf {D}_4}\left[ b=1 \wedge \mathsf {NoDefined}^*(j^*)\right] \le \mathsf {negl}(\uplambda ) \end{aligned}$$
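The per-position check analysed in this paragraph can be sketched as follows. This is illustrative only: `hamming_distance`, `opening_consistent`, and `delta` are our own names, with `q[i][z]` standing for \(q_i^z\) and `beta[i]` for \(\beta_i\).

```python
def hamming_distance(a: str, b: str) -> int:
    """Hamming distance between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

def opening_consistent(q, beta, k, delta):
    """Check hd(q_i^{k_i}, beta_i) <= delta for every position i,
    i.e. the condition implied by a successful opening phase."""
    return all(
        hamming_distance(q[i][k[i]], beta[i]) <= delta
        for i in range(len(k))
    )
```

A single position \(i^*\) at which \(\beta_{i^*}\) is far from \(q_{i^*}^{k_{i^*}}\) already makes the check fail, which is exactly the leverage the argument above exploits.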

by the binding property of the commitment scheme. Therefore, we can conclude that

$$\begin{aligned} \Pr _{\mathcal {H}_2}\left[ \mathsf {Abort}\right] \le \mathsf {negl}(\uplambda ). \end{aligned}$$

This proves our lemma. \(\square \)

In order to conclude our proof, we need to show that the extractor always returns a valid message-decommitment pair for the same message that Alice outputs in the opening phase. More formally, let \(\mathsf {NoExt}\) be the event that, for the output \((m', \mathsf {decom}) \leftarrow \mathcal {E}(1^\uplambda )\) of the extractor, it holds that \(m' \ne \mathsf {Open}(\mathsf {com}, \mathsf {decom})\), where \(\mathsf {com}\) is the variable sent by Alice in the same session. Additionally, let \(\mathsf {BadExt}\) be the event that the output \((m', \mathsf {decom})\) of the extractor is a valid opening for \(\mathsf {com}\) but \(m' \ne m\), where m is the message sent by Alice in the opening of the same session. We now argue that the probability that either \(\mathsf {NoExt}\) or \(\mathsf {BadExt}\) happens is bounded by a negligible function.

Lemma 33

The event \(\mathsf {NoExt}\) happens at most with negligible probability, i.e., \(\Pr \left[ \mathsf {NoExt}\right] \le \mathsf {negl}(\uplambda )\).

Proof

Consider the sequence of games \(\mathcal {H}_0^\mathsf {D}, \dots , \mathcal {H}_4^\mathsf {D}\) as defined in the proof of Lemma 32. Let \(\mathsf {NoExt}^*(j^*)\) be the event that the algorithm \(\mathcal {E}\) returns an invalid opening for the commitment in session \(j^*\) and the extractors of the zero-knowledge proofs output a valid pair of witnesses. With an argument along the same lines as the proof of Lemma 32, we can show that

We now observe that whenever the extractor of the SWIAoK is successful, then for all \(i \in [\ell (\uplambda )]\) it holds that \(\beta _i \leftarrow \mathsf {Open}(\mathsf {com}_i, \mathsf {decom}_i')\) and that \({\mathsf {hd}}(q_i^{k_i} ,\beta _i)\le \delta \), for some \(k = k_1||\dots ||k_{\ell (\uplambda )}\). Additionally, we have that \(H(\mathsf {seed},k) \oplus \omega \) is valid decommitment information for \(\mathsf {com}\). By definition of \(\mathsf {NoExt}\), we have that \(m' \ne \mathsf {Open}(\mathsf {com},\mathsf {decom})\), where \((m', \mathsf {decom})\) is the output of \(\mathcal {E}\) and is defined as \(\omega \oplus H(\mathsf {seed}, K)\). This implies that \(K \ne k\), since the function H is deterministic. Therefore, there must exist some \(i^*\) such that \(K_{i^*} \ne k_{i^*}\). By Lemma 32, we know that K is uniquely defined, and therefore Alice did not query \(\mathcal {F}^{\mathsf {PUFEval},\mathsf {PUFSamp}}_{\textsf {HToken}}\) on any \(p'\) such that \({\mathsf {hd}}(p_i^{z_i},p')\le \gamma \) for \(z_i \ne K_i\). Hence, for all \(i \in [\ell (\uplambda )]\) it holds, by the unpredictability of the PUF, that

and in particular we have that \(\beta _{i^*} \ne m_{i^*}\). Since \(\mathsf {decom}_{i^*}\) and \(\mathsf {decom}_{i^*}'\) are valid openings for \(m_{i^*}\) and \(\beta _{i^*}\) with respect to \(\mathsf {com}_{i^*}\), the probability of \(\mathsf {NoExt}^*(j^*)\) happening in \(\mathcal {H}_4^\mathsf {D}\) can be bounded by a negligible function via the binding property of the commitment scheme. This proves the lemma. \(\square \)
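The final reduction step can be made concrete with a toy sketch: given two verifying opening pairs for the same commitment on different messages, the reduction simply outputs both, contradicting binding. All names here, including the toy `open_fn`, are our own illustrations rather than the paper's notation.

```python
def binding_collision(com, m, decom, beta, decom_prime, open_fn):
    """Return a binding collision for com if both pairs verify
    but open to different messages, and None otherwise."""
    if (open_fn(com, decom) == m
            and open_fn(com, decom_prime) == beta
            and m != beta):
        return (decom, decom_prime)
    return None
```

With a toy scheme where the decommitment is simply the message itself (`open_fn = lambda com, d: d`), two different verifying openings immediately yield a collision, while equal messages yield none.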

Lemma 34

The event \(\mathsf {BadExt}\) happens at most with negligible probability, i.e., \(\Pr \left[ \mathsf {BadExt}\right] \le \mathsf {negl}(\uplambda )\).

Proof

The formal argument follows along the same lines as the proof of Lemma 33. The main observation here is that the argument implies that the output of \(\mathcal {E}\) and the tuple \((m, \mathsf {decom})\), where m is sent in plain by Alice and \(\mathsf {decom}\) is the output of the extractor for the SWIAoK, must be identical with overwhelming probability. \(\square \)

By the union bound, we have that

$$\begin{aligned} \Pr \left[ \mathsf {Abort} \vee \mathsf {NoExt} \vee \mathsf {BadExt}\right] \le \mathsf {negl}(\uplambda ). \end{aligned}$$

It follows that, for all sessions \(j \in [n]\), our extractor as defined above aborts with at most negligible probability and outputs the same message that the adversary opens to with overwhelming probability. Therefore, we can conclude that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_1,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} \approx \{{\text { EXC }}'_{\mathcal {H}_2,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

\({\mathsf {S}} \): We can now define the simulator \({\mathsf {S}} \), which is identical to \(\mathcal {H}_2\) except that the output \(m'\) of the algorithm \(\mathcal {E}\) (defined as above) is used in the message \((\mathtt {commit}, \mathsf {sid}, m')\) to the ideal functionality \(\mathcal {F}_{\textsf {MCOM}}\). The corresponding decommitment message \((\mathtt {unveil}, \mathsf {sid})\) is sent when the adversary returns a valid decommitment to some message m. Since the interaction is unchanged from the adversary's point of view, we have that

$$\begin{aligned} \{{\text { EXC }}'_{\mathcal {H}_2,\mathcal {A},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}} = \{{\text { EXC }}'_{\rho ,{\mathsf {S}},\mathcal {D}}(\uplambda ,z)\}_{\uplambda \in {\mathbb {N}}, z\in \{0,1\}^{poly(\uplambda )}}. \end{aligned}$$

This implies that our protocol everlastingly UC-realizes the commitment functionality \(\mathcal {F}_{\textsf {MCOM}}\) for any corrupted Alice and concludes our proof. \(\square \)