1 Introduction

Obtaining round-optimal secure computation [14, 19] has been a long-standing open problem. For the two-party case, the work of Katz and Ostrovsky [15] demonstrated that 5 rounds are both necessary and sufficient, with black-box simulation, when both parties need to obtain the output. Their construction relies on the use of trapdoor permutationsFootnote 1. A more recent work of Ostrovsky et al. [16] showed that a black-box use of trapdoor permutations is sufficient to obtain the above round-optimal construction.

A very recent work of Garg et al. [12] revisited the lower bound of [15] in the setting where the communication channel allows both players to send messages in the same round, a setting that has been widely used when studying the round complexity of multi-party computation. Focusing on this simultaneous message exchange model, Garg et al. showed that 4 rounds are necessary to build a secure two-party computation (2PC) protocol for every functionality with black-box simulation. In the same work they also designed a 4-round secure 2PC protocol for every functionality. However, compared to the construction of [15], their construction relies on much stronger complexity assumptions. Indeed, the security of their protocol crucially relies on the existence of a 3-round 3-robust [11, 18] parallel non-malleable commitment scheme. According to [11, 18], such a commitment scheme can be constructed either through non-falsifiable assumptions (i.e., using the construction of [17]) or through sub-exponentially-strong assumptions (i.e., using the construction of [3]). A recent work of Ananth et al. [1] studies the multi-party case in the simultaneous message exchange model. More precisely, the authors of [1] provide a 5-round protocol to securely compute every functionality in the multi-party case under the Decisional Diffie-Hellman (DDH) assumption, and a 4-round protocol assuming one-way permutations and sub-exponentially secure DDH. The above gap in the state of affairs leaves open the following interesting question:

Open Question: Is there a 4-round construction for secure 2PC for any functionality in the simultaneous message exchange model assuming (standard) trapdoor permutations?

1.1 Our Contribution

In this work we solve the above open question. Moreover, our construction relies on black-box simulation only, and is therefore round optimal given the lower bound of [12]. We now describe our approach.

As discussed before, the construction of [12] needs a 3-round 3-robust parallel non-malleable commitment, and constructing this primitive from standard polynomial-time assumptions is still an open problem. We circumvent the use of this primitive through a different approach. As done in [12], we start by considering the 4-round 2PC protocol of [15] (KO protocol) that works only for those functionalities where only one player receives the output (we recall that the KO protocol does not assume the existence of a simultaneous message exchange channel). Then, as in [12], we consider two simultaneous executions of the KO protocol in order to allow both parties to obtain the output, assuming the existence of a simultaneous message exchange channel. We now describe the KO protocol and then explain how we manage to avoid 3-round 3-robust parallel non-malleable commitments.

The 4-round KO protocol. Following Fig. 1, at a very high level the KO protocol between the players \(P_1\) and \(P_2\), where only \(P_1\) gets the output, works as follows. Let f be the function that \(P_1\) and \(P_2\) want to compute. In the second round \(P_2\) generates, using his input, a Yao’s garbled circuit C for the function f with the associated labels L. Then \(P_2\) commits to C using a commitment scheme that is binding if \(P_2\) runs the honest committer procedure. This commitment scheme, however, also admits an indistinguishable equivocal commitment procedure that allows the equivocal commitment to be opened later to any message. Let \(\mathtt {com}_0\) be such a commitment. In addition \(P_2\) commits to L using a statistically binding commitment scheme. Let \(\mathtt {com}_1\) be such a commitment. In the last round \(P_2\) sends the opening of the equivocal commitment to the message C. Furthermore, using L as input, \(P_2\) runs in the 2nd and the 4th round as the sender of a specific 4-round oblivious transfer protocol \(\mathsf {KOOT}\) that is secure against a malicious receiver and secure against a semi-honest sender. Finally, in parallel with \(\mathsf {KOOT}\), \(P_2\) computes a specific delayed-input zero-knowledge argument of knowledge (ZKAoK) to prove that the labels L committed in \(\mathtt {com}_1\) correspond to the ones used in \(\mathsf {KOOT}\), and that \(\mathtt {com}_0\) is binding since it has been computed by running the honest committer on input some randomness and some message. \(P_1\) plays as the receiver of \(\mathsf {KOOT}\) in order to obtain the labels associated with his input, and computes the output of the two-party computation by running C on input the received labels. Moreover \(P_1\) acts as the verifier for the ZKAoK where \(P_2\) acts as the prover.

The 4-round protocol of Garg et al. In order to allow both parties to get the output in 4 rounds using a simultaneous message exchange channel, [12] first considers two simultaneous executions of the KO protocol (Fig. 2). This natural approach leads to the following two problems (as stated in [12]): (1) nothing prevents an adversary from using two different inputs in the two executions of the KO protocol; (2) an adversary could adapt his input based on the input of the other party; for instance, the adversary could simply forward the messages that he receives from the honest party. To address the first problem, the authors of [12] add another statement to the ZKAoK in which the player \(P_j\) (with \(j=1,2\)) proves that both executions of the KO protocol use the same input. The second problem is solved in [12] by using a 3-round 3-robust non-malleable commitment to construct \(\mathsf {KOOT}\) and the ZKAoK in such a way that the input used by the honest party in \(\mathsf {KOOT}\) cannot be mauled by the malicious party. The 3-robustness is required to avoid rewinding issues in the security proof. Indeed, in parallel with the 3-round 3-robust non-malleable commitment, a WIPoK is executed in \(\mathsf {KOOT}\). At some point the security proof of [12] needs to rely on the witness-indistinguishability property of the WIPoK while the simulator of the ZKAoK is run. The simulator for the ZKAoK rewinds the adversary from the third to the second round, therefore rewinding also the challenger of the WIPoK of the reduction. To solve this problem [12, 18] rely on the stronger security of a 3-round 3-robust parallel non-malleable commitment scheme. Unfortunately, constructing this tool from standard polynomial-time assumptions is still an open question.

Our 4-round protocol. In our approach (summarized in Fig. 3), in order to solve problems 1 and 2 listed above using standard polynomial-time assumptions (trapdoor permutations), we replace the ZKAoK and \(\mathsf {KOOT}\) (which uses the 3-round 3-robust parallel commitment scheme) with the following four tools. (1) A 4-round delayed-input non-malleable zero-knowledge (NMZK) argument of knowledge (AoK) \(\mathsf {NMZK}\) from one-way functions (OWFs) recently constructed in [4] (the theorem proved by \(\mathsf {NMZK}\) is roughly the same as the theorem proved by the ZKAoK of [12]). (2) A new special OT protocol \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) that is one-sided simulatable [16]. This security notion for OT does not require the existence of a simulator against a malicious sender; it only requires that a malicious sender cannot distinguish whether the honest receiver uses his real input or a fixed input (e.g., a string of 0s). Moreover, some security against a malicious sender still holds even if the adversary can perform a mild form of “rewinds” against the receiver, and the security against a malicious receiver holds even when an interactive primitive (like a WIPoK) is run in parallel (more details about the security provided by \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) will be provided later). (3) An interactive commitment scheme \(\mathsf {PBCOM}\) that allows each party to commit to his input. In more detail, in our 2PC protocol each party commits twice to his input and then proves using NMZK that (a) the two committed values are equal and (b) this committed value corresponds to the input used in the 2 simultaneous executions of our (modified KO) protocolFootnote 2. (4) A combination of two instantiations of Special Honest Verifier Zero-Knowledge (Special HVZK) PoK, thus obtaining a WIPoK \(\varPi _\mathsf {OR}\). The idea behind the use of a combination of Special HVZK PoKs was introduced recently in [4]. The aim of this technique is to replace a WIPoK with non-interactive primitives (like Special HVZK) in such a way that rewinding issues, due to the other subprotocols, can be avoided. We use \(\varPi _\mathsf {OR}\) in our protocol to force each party to prove knowledge of one of the values committed using \(\mathsf {PBCOM}\). In the security proof we will use the PoK property of \(\varPi _\mathsf {OR}\) to extract the input from the malicious party.

Our security proof. In our security proof we immediately exploit the major differences with [12]. Indeed, we start the security proof with a hybrid experiment where the simulator of \(\mathsf {NMZK}\) is used, and we are guaranteed that the malicious party is behaving honestly by the non-malleability/extractability of NMZK. Another major difference with the KO security proof is that in our 2PC protocol the simulator extracts the input from the malicious party through \(\varPi _\mathsf {OR}\), whereas in the KO protocol’s security proof the extraction is made from \(\mathsf {KOOT}\) (which is used in a non-black-box way).

We remark that in all the steps of our security proof the simulator-extractor of NMZK is used to check every time that the adversary is using the same input in both executions of the KO protocol, even though the adversary is receiving a simulated NMZK of a false statement. More precisely, every time that we change something to obtain a new hybrid experiment, we prove that: (1) the output distributions of the experiments are indistinguishable; (2) the malicious party is behaving honestly (the statement proved by the NMZK given by the adversary is true). We will show that if one of these two invariants does not hold then we can make a reduction that breaks a cryptographic primitive.

The need for a special 4-round OT protocol. Interestingly, the security proof has to address a major issue. After we switch to the simulator of the \(\mathsf {NMZK}\), we have that in some hybrid experiment \(H_i\) we change the input of the receiver of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) (following the approach used in the security proof of the KO protocol). To demonstrate the indistinguishability between \(H_i\) and \(H_{i-1}\) we want to rely on the security of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) against a malicious sender. Therefore we construct an adversarial sender \(\mathcal {A}_\mathcal {OT}\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\). \(\mathcal {A}_\mathcal {OT}\) acts as a proxy for the messages of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) and internally computes the other messages of our protocol. In particular, the 1st and the 3rd rounds of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) are given by the challenger (that acts as a receiver of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\)), and the 2nd and the 4th messages of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) are given by the malicious party. Furthermore, in order to compute the other messages of our 2PC protocol, \(\mathcal {A}_\mathcal {OT}\) needs to run the simulator-extractor of NMZK, and this requires rewinding from the 3rd to the 2nd round. This means that \(\mathcal {A}_\mathcal {OT}\) needs to complete a 3rd round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) for every different 2nd round that he receives (this is due to the rewinds made by the simulator of \(\mathsf {NMZK}\) that are emulated by \(\mathcal {A}_\mathcal {OT}\)). We observe that since the challenger cannot be rewound, \(\mathcal {A}_\mathcal {OT}\) needs a strategy to answer these multiple queries w.r.t. \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) without knowing the randomness and the input used by the challenger so far. For these reasons we need \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) to enjoy an additional property: the replayability of the 3rd round. More precisely, given the messages computed by an honest receiver, the third round can be indistinguishably used to answer any second round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) sent by a malicious sender. Another issue is that the idea of the security proof explained so far relies on the simulator-extractor of \(\mathsf {NMZK}\), and this simulator also rewinds from the 4th to the 3rd round. The rewinds made by the simulator-extractor allow a malicious receiver to ask for different 3rd rounds of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\). Therefore we need our \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) to also be secure against a more powerful malicious receiver that can send multiple (up to a polynomial \(\gamma \)) third rounds to the honest sender. As far as we know, the literature does not provide an OT protocol with the properties that we require, so in this work we also provide an OT protocol with these additional features. This is clearly of independent interest.

Input extraction. One drawback of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) is that the simulator against a malicious receiver \(R_\mathcal {OT}^\star \) is not able to extract the input of \(R_\mathcal {OT}^\star \). This feature is crucial in the security proof of KO, therefore we need another way to allow the extraction of the input from the malicious party. In order to do that, as described before, each party commits twice using \(\mathsf {PBCOM}\); let \(c_0,c_1\) be the commitments computed by \(P_2\). \(P_2\) proves, using \(\varPi _\mathsf {OR}\), knowledge of either the message committed in \(c_0\) or the message committed in \(c_1\). Additionally, using \(\mathsf {NMZK}\), \(P_2\) proves that \(c_0\) and \(c_1\) are commitments of the same value and that this value corresponds to the input used in the two executions of the modified KO protocol. This combination of commitments, \(\varPi _\mathsf {OR}\) and NMZK allows correct extraction through the PoK-extractor of \(\varPi _\mathsf {OR}\).

Fig. 1. The 4-round KO protocol from trapdoor permutations for functionalities where only one player receives the output.

Fig. 2. The 4-round protocol of [12] for any functionality assuming 3-round 3-robust parallel non-malleable commitments in the simultaneous message exchange model.

Fig. 3. Our 4-round protocol for any functionality assuming trapdoor permutations in the simultaneous message exchange model. \(c_0\) and \(c_1\) (\(\tilde{c}_0\) and \(\tilde{c}_1\)) are commitments of \(P_2\)’s (\(P_1\)’s) input.

1.2 Special One-Sided Simulatable OT

One of the main building blocks of our 2PC protocol is a one-sided simulatable OT protocol \(\varPi ^\gamma _\mathcal {OT}=(S_\mathcal {OT}, R_\mathcal {OT})\)Footnote 3. Our \(\varPi ^\gamma _\mathcal {OT}\) has four rounds where the first (\(\mathsf {ot}_1\)) and the third (\(\mathsf {ot}_3\)) rounds are played by the receiver, and the remaining rounds (\(\mathsf {ot}_2\) and \(\mathsf {ot}_4\)) are played by the sender. In addition \(\varPi ^\gamma _\mathcal {OT}\) enjoys the following two additional properties.

  1. Replayable third round. Let \((\mathsf {ot}_1, \mathsf {ot}_2, \mathsf {ot}_3, \mathsf {ot}_4)\) be the messages exchanged by an honest receiver and a malicious sender during an execution of \(\varPi ^\gamma _\mathcal {OT}\). For any honestly computed \(\mathsf {ot}_2'\), we have that \((\mathsf {ot}_1, \mathsf {ot}_2, \mathsf {ot}_3)\) and \((\mathsf {ot}_1, \mathsf {ot}_2', \mathsf {ot}_3)\) are identically distributed. Roughly, we are requiring that the third round can be reused in order to answer any second round \(\mathsf {ot}_2'\) sent by a malicious sender.

  2. Repeatability. We require \(\varPi ^\gamma _\mathcal {OT}\) to be secure against a malicious receiver \(R^\star \) even when the last two rounds of \(\varPi ^\gamma _\mathcal {OT}\) can be repeated multiple times. More precisely, a 4-round OT protocol that is secure in this setting can be seen as an OT protocol of \(2+2\gamma \) rounds, with \(\gamma \in \{1,\dots ,\mathsf{poly}(\lambda )\}\) where \(\lambda \) denotes the security parameter. In this protocol \(R^\star \), upon receiving the 4th round, can continue the execution with \(S_\mathcal {OT}\) by sending a freshly generated third round of \(\varPi ^\gamma _\mathcal {OT}\), up to a total of \(\gamma \) 3rd rounds. Roughly, we require that the output of such an \(R^\star \) that runs \(\varPi ^\gamma _\mathcal {OT}\) against an honest sender can be simulated by an efficient simulator \(\mathsf {Sim}\) that has only access to the ideal world functionality \(F_\mathcal {OT}\) and oracle access to \(R^\star \).

The security of \(\varPi ^\gamma _\mathcal {OT}\) is based on the existence of trapdoor permutationsFootnote 4.

Our techniques. In order to construct \(\varPi ^\gamma _\mathcal {OT}\) we use as a starting point the following basic 3-round semi-honest OT protocol \(\varPi _{\mathsf {sh}}\), based on trapdoor permutations (TDPs), of [9, 15]. Let \(l_0,l_1\in \{0,1\}^\lambda \) be the input of the sender S and b be the input bit of the receiver R.

  1. The sender S chooses a trapdoor permutation \((f,f^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\) and sends f to the receiver R.

  2. R chooses \(x\leftarrow \{0,1\}^\lambda \) and \(z_{1-b}\leftarrow \{0,1\}^\lambda \), computes \(z_b=f(x)\) and sends \((z_0,z_1)\).

  3. For \(c=0,1\), S computes and sends \(w_c=l_c \oplus \mathsf {hc}(f^{-1}(z_c))\),

where \(\mathsf {hc}(\cdot )\) is a hardcore bit of f. If the parties follow the protocol (i.e., in the semi-honest setting) then S cannot learn the receiver’s input (the bit b), as both \(z_0\) and \(z_1\) are random strings. Also, due to the security of the TDP f, R cannot distinguish \(w_{1-b}\) from random as long as \(z_{1-b}\) is randomly chosen. If we consider a fully malicious receiver \(R^\star \) then this protocol is not secure anymore. Indeed, \(R^\star \) could just compute \(z_{1-b}=f(y)\) picking a random \(y\leftarrow \{0,1\}^\lambda \). In this way \(R^\star \) can retrieve both inputs \(l_0\) and \(l_1\) of the sender. In [15] the authors solve this problem by having the parties engage in a coin-flipping protocol such that the receiver is forced to set at least one of \(z_0\) and \(z_1\) to a random string. This is done by forcing the receiver to commit to two strings \((r_0, r_1)\) in the first round (for the coin-flipping) and to provide a witness-indistinguishable proof of knowledge (WIPoK) that either \(z_0=r_0\oplus r'_0\) or \(z_1=r_1\oplus r'_1\), where \(r_0'\) and \(r_1'\) are random strings sent by the sender in the second round. The resulting protocol, as observed in [16], leaks no information to S about R’s input. Moreover, the soundness of the WIPoK forces a malicious \(R^\star \) to behave honestly, and the PoK allows the extraction of the input from the adversary in the simulation. Therefore the protocol constructed in [15] is one-sided simulatable. Unfortunately, this approach is not sufficient to obtain an OT protocol that has a replayable third round. This is due to the added WIPoK. More precisely, the receiver has to execute a WIPoK (acting as a prover) in the first three rounds. Clearly, there is no 3-round WIPoK such that, given an accepting transcript \((a, c, z)\), one can efficiently compute multiple accepting transcripts w.r.t. different second rounds without knowing the randomness used to compute a. This is the reason why we need a different approach in order to construct an OT protocol that is simulation-based secure against a malicious receiver and also has a replayable 3rd round.
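For concreteness, the following minimal sketch runs \(\varPi _{\mathsf {sh}}\) with a toy RSA permutation standing in for the TDP and the least-significant bit as an (assumed) hardcore predicate; single-bit inputs and textbook-sized parameters are used purely for illustration and are not part of the actual construction.

```python
# A minimal sketch of the semi-honest OT described above, assuming a toy RSA
# permutation as the TDP and the least-significant bit as an (assumed)
# hardcore predicate; single-bit inputs and tiny parameters, illustration only.
import random

p, q, e = 61, 53, 17                      # textbook-sized RSA, NOT secure
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))         # trapdoor f^{-1}

def f(x):      return pow(x, e, N)        # Eval(f, x)
def f_inv(y):  return pow(y, d, N)        # Invert(f, td, y)
def hc(x):     return x & 1               # assumed hardcore bit of f

# Round 1: S sends f (here: N, e); the trapdoor d never leaves S.

# Round 2: R, with choice bit b, knows a preimage only for z_b.
b = 1
x = random.randrange(N)
z = [0, 0]
z[b]     = f(x)                           # R knows f^{-1}(z_b) = x
z[1 - b] = random.randrange(N)            # preimage unknown to R

# Round 3: S, with input bits l_0, l_1, masks each with a hardcore bit.
l = [0, 1]
w = [l[c] ^ hc(f_inv(z[c])) for c in (0, 1)]

# Output: R unmasks w_b using its chosen preimage x and recovers l_b.
assert w[b] ^ hc(x) == l[b]
```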

Our construction: \(\varPi ^\gamma _\mathcal {OT}\). We start by considering a trick proposed in [16]. In [16] the authors construct a 4-round black-box OT starting from \(\varPi _{\mathsf {sh}}\). In order to force the receiver to compute a random \(z_{1-b}\), in the first round R sends two commitments \(c_0\) and \(c_1\) such that \(c_b=\mathsf {Eqcom}(\cdot ), c_{1-b}=\mathsf {Eqcom}(r_{1-b})\). \(\mathsf {Eqcom}\) is a commitment scheme that is binding if the committer runs the honest committer procedure; however, this commitment scheme also admits an indistinguishable equivocal commitment procedure that allows the equivocal commitment to be opened later to any message. R then proves, using a special WIPoK, that either \(c_0\) or \(c_1\) is computed using the honest procedure (i.e., at least one of these commitments is binding). Then S in the second round picks \(r'_0\leftarrow \{0,1\}^\lambda \), \(r'_1\leftarrow \{0,1\}^\lambda \) and two TDPs \(f_0,f_1\) with the respective trapdoors, and sends \((r'_0,r'_1, f_0, f_1)\) to R. R, upon receiving \((r'_0, r'_1, f_0, f_1)\), picks \(x\leftarrow \{0,1\}^\lambda \), computes \(r_b=f_b(x)\oplus r'_b\) and sends the opening of \(c_{1-b}\) to the message \(r_{1-b}\) and the opening of \(c_b\) to the message \(r_b\). At this point the sender computes and sends \(w_0=l_0 \oplus \mathsf {hc}(f_0^{-1}(r_0\oplus r'_{0}))\), \(w_1=l_1 \oplus \mathsf {hc}(f_1^{-1}(r_1\oplus r'_{1}))\). Since at least one of \(c_0\) and \(c_1\) is binding (due to the WIPoK), a malicious receiver can retrieve only one of the sender’s inputs, \(l_b\). We observe that this OT protocol is still not sufficient for our purpose due to the WIPoK used by the receiver (i.e., the 3rd round is not replayable). Moreover, we cannot simply remove the WIPoK, since otherwise a malicious receiver could compute both \(c_0\) and \(c_1\) using the equivocal procedure, thus obtaining \(l_0\) and \(l_1\). Our solution is to replace the WIPoK with primitives that make the 3rd round replayable, while still allowing the receiver to prove that at least one of the commitments sent in the first round is binding. Our key idea is to use a combination of instance-dependent trapdoor commitments (IDTCom) and non-interactive commitment schemes. An IDTCom is defined over an instance x that may or may not belong to the \(\mathcal {N}\mathcal {P}\)-language L. If \(x\notin L\) then the IDTCom is perfectly binding, otherwise it is equivocal and the trapdoor information is represented by the witness w for x. Our protocol is described as follows. R sends an IDTCom \(\mathtt {tcom}_0\) of \(r_0\) and an IDTCom \(\mathtt {tcom}_1\) of \(r_1\). In both cases the instance used is \(\mathtt {com}\), a perfectly binding commitment of the bit b. The \(\mathcal {N}\mathcal {P}\)-language used to compute \(\mathtt {tcom}_0\) consists of all valid perfectly binding commitments of the message 0, while the \(\mathcal {N}\mathcal {P}\)-language used to compute \(\mathtt {tcom}_1\) consists of all valid perfectly binding commitments of the message 1.

This means that \(\mathtt {tcom}_b\) can be opened to any valueFootnote 5 and \(\mathtt {tcom}_{1-b}\) is perfectly binding (we recall that b is the input of the receiver). It is important to observe that, due to the binding property of \(\mathtt {com}\), it could be that both \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) are binding, but it can never happen that they are both equivocal. Now we can replace the two commitments and the WIPoK used in [16] with \(\mathtt {tcom}_0,\mathtt {tcom}_1\) and \(\mathtt {com}(b)\), which are sent in the first round. The rest of the protocol stays the same as in [16], with the difference that in the third round the openings to the messages \(r_0\) and \(r_1\) are w.r.t. \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\). What remains to observe is that when a receiver provides a valid third round of this protocol, the same message can be used to answer all second rounds. Indeed, a well-formed third round is accepting if and only if the openings w.r.t. \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) are correctly computed. Therefore, whether the third round is accepting or not does not depend on the second round sent by the sender.

Intuitively, this protocol is already secure when we consider a malicious receiver \(R^\star \) that can send multiple third rounds, up to a total of \(\gamma \) 3rd rounds, thus obtaining an OT protocol of \(2+2\gamma \) rounds (repeatability). This is because, even though a malicious receiver obtains multiple fourth rounds in response to the multiple third rounds sent by \(R^\star \), no information about the input of the sender is leaked. Indeed, in our \(\varPi ^\gamma _\mathcal {OT}\), the input of the receiver is fixed in the first round (only one of \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) can be equivocal). Therefore the security of the TDP ensures that only \(l_b\) can be obtained by \(R^\star \), independently of what he does in the third round. In the formal part of the paper we will show that the security of the TDP is enough to deal with such a scenario.

We finally point out that the OT protocol that we need has to allow parties to use strings instead of bits as input. More precisely, the sender’s input is represented by \((l^1_0, l^1_1, \dots , l^m_0, l^m_1)\) where each \(l_b^i\) is a \(\lambda \)-bit string (for \(i=1,\dots , m\) and \(b=0,1\)), while the input of the receiver is a \(\lambda \)-bit string.

This is achieved in two steps. First, we construct an OT protocol where the sender’s input is represented by just two \(m\)-bit strings \(l_0\) and \(l_1\) and the receiver’s input is still a bit. We obtain this protocol by using in \(\varPi ^\gamma _\mathcal {OT}\) a vector of \(m\) hard-core bits instead of just a single hard-core bit, following the approach of [12, 15]. Then we consider \(m\) parallel executions of this modified \(\varPi ^\gamma _\mathcal {OT}\) (where the sender uses a pair of strings as input), thus obtaining \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\).

2 Definitions and Tools

2.1 Preliminaries

We denote the security parameter by \(\lambda \) and use “||” as the concatenation operator (i.e., if a and b are two strings then by a||b we denote the concatenation of a and b). For a finite set Q, \(x\leftarrow Q\) denotes sampling x from Q with uniform distribution. We use the abbreviation ppt, which stands for probabilistic polynomial time. We use \(\mathsf{poly}(\cdot )\) to indicate a generic polynomial function.

A polynomial-time relation \({\mathsf {Rel}}\) (or polynomial relation, in short) is a subset of \(\{0, 1\}^*\times \{0,1\}^*\) such that membership of (x, w) in \({\mathsf {Rel}}\) can be decided in time polynomial in |x|. For \((x,w)\in {\mathsf {Rel}}\), we call x the instance and w a witness for x. For a polynomial-time relation \({\mathsf {Rel}}\), we define the \(\mathcal {N}\mathcal {P}\)-language \(L_{{\mathsf {Rel}}}\) as \(L_{{\mathsf {Rel}}}=\{x|\exists w: (x, w)\in {\mathsf {Rel}}\}\). Analogously, unless otherwise specified, for an \(\mathcal {N}\mathcal {P}\)-language L we denote by \({\mathsf {Rel}}_\mathsf {L}\) the corresponding polynomial-time relation (that is, \({\mathsf {Rel}}_\mathsf {L}\) is such that \(L=L_{{\mathsf {Rel}}_\mathsf {L}}\)). We denote by \(\hat{L}\) the language that includes both L and all well-formed instances that do not have a witness. Moreover, we require that membership in \(\hat{L}\) can be tested in polynomial time. We implicitly assume that a ppt algorithm that is supposed to receive an instance in \(\hat{L}\) will abort immediately if the instance does not belong to \(\hat{L}\).

Let A and B be two interactive probabilistic algorithms. We denote by \(\langle A(\alpha ),B(\beta )\rangle (\gamma )\) the distribution of B’s output after running on private input \(\beta \) with A using private input \(\alpha \), both running on common input \(\gamma \). Typically, one of the two algorithms receives \(1^\lambda \) as input. A transcript of \(\langle A(\alpha ),B(\beta )\rangle (\gamma )\) consists of the messages exchanged during an execution where A receives a private input \(\alpha \), B receives a private input \(\beta \) and both A and B receive a common input \(\gamma \). Moreover, we will refer to the view of A (resp. B) as the messages it received during the execution of \(\langle A(\alpha ),B(\beta )\rangle (\gamma )\), along with its randomness and its input. We say that a protocol (A, B) is public coin if B sends to A random bits only. When it is necessary to refer to the randomness r used by an algorithm A we use the following notation: \(A(\cdot ;r)\).

2.2 Standard Definitions

Definition 1

(Trapdoor permutation). Let \(\mathcal {F}\) be a triple of ppt algorithms \((\mathsf {Gen},\mathsf {Eval}, \mathsf {Invert})\) such that if \(\mathsf {Gen}(1^\lambda )\) outputs a pair \((f, \mathtt {td})\), then \(\mathsf {Eval}(f, \cdot )\) is a permutation over \(\{0,1\}^\lambda \) and \(\mathsf {Invert}(f, \mathtt {td}, \cdot )\) is its inverse. \(\mathcal {F}\) is a trapdoor permutation if for all ppt adversaries \(\mathcal {A}\) there exists a negligible function \(\nu \) such that:

$$\begin{aligned} \text{ Prob }\left[ \;(f, \mathtt {td})\leftarrow \mathsf {Gen}(1^\lambda ); y\leftarrow \{0,1\}^\lambda , x\leftarrow \mathcal {A}(f,y):\mathsf {Eval}(f,x)=y\;\right] \le \nu (\lambda ). \end{aligned}$$

For convenience, we drop \((f, \mathtt {td})\) from the notation, and write \(f(\cdot )\), \(f^{-1}(\cdot )\) to denote algorithms \(\mathsf {Eval}(f, \cdot )\), \(\mathsf {Invert}(f, \mathtt {td}, \cdot )\) respectively, when f, \(\mathtt {td}\) are clear from the context. Following [12, 15] we assume that \(\mathcal {F}\) satisfies (a weak variant of) “certifiability”: namely, given some f it is possible to decide in polynomial time whether \(\mathsf {Eval}(f, \cdot )\) is a permutation over \(\{0,1\}^\lambda \). Let \(\mathsf {hc}\) be the hardcore bit function for \(\lambda \) bits for the family \(\mathcal {F}\). \(\lambda \) hardcore bits are obtained from a single-bit hardcore function h and \(f\in \mathcal {F}\) as follows: \(\mathsf {hc}(z) = h(z)||h(f(z))||\dots || h(f^{\lambda -1}(z))\). Informally, \(\mathsf {hc}(z)\) looks pseudorandom given \(f^\lambda (z)\) Footnote 6.
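The iterated construction of \(\mathsf {hc}\) can be written in a few lines; in the sketch below, the permutation f and the single-bit hardcore predicate h are parameters supplied by the caller (illustrative names, not part of the paper’s notation).

```python
# Sketch of the lambda-bit hardcore function hc of Definition 1: it collects
# h(z), h(f(z)), ..., h(f^{lambda-1}(z)); f and h are supplied by the caller.
def iterated_hardcore(f, h, z, lam):
    bits = []
    for _ in range(lam):
        bits.append(h(z))   # next hardcore bit
        z = f(z)            # move on to f(z), f^2(z), ...
    return bits             # hc(z), pseudorandom given f^lam(z)
```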

In this paper we also use the notions of \(\varSigma \)-protocol, zero-knowledge (ZK) argument of knowledge (AoK), non-malleable zero-knowledge, commitment, instance-dependent commitment and garbled circuits. Due to space constraints we give only informal descriptions of these notions where they are needed in the paper. We refer the reader to the full version [5] for the formal definitions. We also use the adaptive-input versions of WI and AoK. The only difference is that in the adaptive version of ZK and AoK, the adversary can choose the statement to be proved (and the corresponding witness in the case of ZK) before the last round of the protocol is played. For a more thorough treatment of these concepts, see [6, 7].

2.3 OR Composition of \(\varSigma \)-Protocols

In our paper we use the well-known trick for composing two \(\varSigma \)-protocols to compute the OR of two statements [8, 10]. In more detail, let \(\varPi =(\mathcal {P}, \mathcal {V})\) be a \(\varSigma \)-protocol for the relation \({\mathsf {Rel}}_\mathsf {L}\) with SHVZK simulator \(\mathsf {Sim}\). Then it is possible to use \(\varPi \) to construct \(\varPi _\mathsf {OR}=(\mathcal {P}_{\mathsf {OR}}, \mathcal {V}_{\mathsf {OR}})\) for the relation \({\mathsf {Rel}}_\mathsf {L_{\mathsf {OR}}}=\{((x_0,x_1),w): ((x_0,w)\in {\mathsf {Rel}}_\mathsf {L}) \ \text {OR} \ ((x_1,w)\in {\mathsf {Rel}}_\mathsf {L})\}\), which works as follows.

Protocol \(\varPi _\mathsf {OR}=(\mathcal {P}_{\mathsf {OR}}, \mathcal {V}_{\mathsf {OR}})\) : \(\mathcal {P}_{\mathsf {OR}}\) and \(\mathcal {V}_{\mathsf {OR}}\) on common input \(x_0, x_1\) and private input w of \(\mathcal {P}_{\mathsf {OR}}\) s.t. \(((x_0, x_1), w) \in {\mathsf {Rel}}_\mathsf {L_{\mathsf {OR}}}\) compute the following steps.

  • \(\mathcal {P}_{\mathsf {OR}}\) computes \(\mathsf {a}_0\leftarrow \mathcal {P}(1^\lambda ,x_0,w)\). Furthermore he picks \(\mathsf {c}_1\leftarrow \{0,1\}^\lambda \) and computes \((\mathsf {a}_1, \mathsf {z}_1)\leftarrow \mathsf {Sim}(1^\lambda , x_1, \mathsf {c}_1)\). \(\mathcal {P}_{\mathsf {OR}}\) sends \(\mathsf {a}_0, \mathsf {a}_1\) to \(\mathcal {V}_{\mathsf {OR}}\).

  • \(\mathcal {V}_{\mathsf {OR}}\) picks \(\mathsf {c}\leftarrow \{0,1\}^\lambda \) and sends \(\mathsf {c}\) to \(\mathcal {P}_{\mathsf {OR}}\).

  • \(\mathcal {P}_{\mathsf {OR}}\) computes \(\mathsf {c}_0=\mathsf {c}_1 \oplus \mathsf {c}\) and computes \(\mathsf {z}_0 \leftarrow \mathcal {P}(\mathsf {c}_0)\). \(\mathcal {P}_{\mathsf {OR}}\) sends \(\mathsf {c}_0, \mathsf {c}_1, \mathsf {z}_0, \mathsf {z}_1\) to \(\mathcal {V}_{\mathsf {OR}}\).

  • \(\mathcal {V}_{\mathsf {OR}}\) checks that \(\mathsf {c}=\mathsf {c}_0 \oplus \mathsf {c}_1\), that \(\mathcal {V}(x_0, \mathsf {a}_0, \mathsf {c}_0, \mathsf {z}_0)=1\) and that \(\mathcal {V}(x_1, \mathsf {a}_1, \mathsf {c}_1, \mathsf {z}_1)=1\). If all checks succeed then he outputs 1, otherwise he outputs 0.

Theorem 1

([8, 10]). \(\varPi _\mathsf {OR}=(\mathcal {P}_{\mathsf {OR}}, \mathcal {V}_{\mathsf {OR}})\) is a \(\varSigma \)-protocol for \({\mathsf {Rel}}_\mathsf {L_\mathsf {OR}}\), moreover \(\varPi _\mathsf {OR}\) is WI for the relation \({\mathsf {Rel}}_\mathsf {\hat{L}_\mathsf {OR}}=\{((x_0,x_1),w): ((x_0,w)\in {\mathsf {Rel}}_\mathsf {L} \ \text {AND} \ x_1 \in L) \ \text {OR} \ ((x_1,w)\in {\mathsf {Rel}}_\mathsf {L} \ \text {AND}\ x_0 \in L)\}\).

In our work we use as \(\varPi =(\mathcal {P}, \mathcal {V})\) Blum’s protocol [2] for the \(\mathcal {N}\mathcal {P}\)-complete language Hamiltonicity (which is also a \(\varSigma \)-protocol). We will use the PoK of \(\varPi _\mathsf {OR}\) in a black-box way, but we will rely on the Special HVZK of the underlying \(\varPi \) following the approach proposed in [4]. Note that since Hamiltonicity is an \(\mathcal {N}\mathcal {P}\)-complete language, the above construction of \(\varPi _\mathsf {OR}\) works for any \(\mathcal {N}\mathcal {P}\)-language through \(\mathcal {N}\mathcal {P}\) reductions. For simplicity, in the rest of the paper we will omit the \(\mathcal {N}\mathcal {P}\)-reduction, therefore assuming that the above scheme works directly on a given \(\mathcal {N}\mathcal {P}\)-language L.
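To make the composition concrete, the sketch below instantiates the OR-trick with Schnorr’s \(\varSigma \)-protocol for discrete logarithm rather than Blum’s Hamiltonicity protocol used in the paper; the group, the challenge length and all identifiers are illustrative assumptions.

```python
# A runnable sketch of the OR-composition of Sigma-protocols described above,
# using Schnorr's proof of knowledge of a discrete log as the base protocol
# (a stand-in for Blum's protocol); toy parameters, illustration only.
import random

p, q, g = 2039, 1019, 4          # order-q subgroup of Z_p^*, toy sizes
CHAL_BITS = 8                    # challenge length (2^8 < q)

def prove_commit():              # P: first message a = g^r
    r = random.randrange(q)
    return pow(g, r, p), r

def prove_answer(r, w, c):       # P: third message z = r + c*w mod q
    return (r + c * w) % q

def simulate(x, c):              # SHVZK Sim: pick z, solve for a
    z = random.randrange(q)
    a = pow(g, z, p) * pow(x, -c, p) % p
    return a, z

def verify(x, a, c, z):          # V: accept iff g^z == a * x^c mod p
    return pow(g, z, p) == a * pow(x, c, p) % p

# Statements: the prover knows w0 for x0 = g^w0, nothing about x1.
w0 = random.randrange(1, q)
x0 = pow(g, w0, p)
x1 = pow(g, random.randrange(1, q), p)   # witness discarded

# P_OR, first round: honest commitment for x0, simulated transcript for x1.
a0, r0 = prove_commit()
c1 = random.getrandbits(CHAL_BITS)
a1, z1 = simulate(x1, c1)

# V_OR, second round: random challenge c.
c = random.getrandbits(CHAL_BITS)

# P_OR, third round: c0 = c XOR c1, honest answer for x0.
c0 = c ^ c1
z0 = prove_answer(r0, w0, c0)

# V_OR: accept iff c = c0 XOR c1 and both transcripts verify.
assert c == c0 ^ c1 and verify(x0, a0, c0, z0) and verify(x1, a1, c1, z1)
```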

2.4 Oblivious Transfer

Here we follow [16]. Oblivious Transfer (OT) is a two-party functionality \(F_\mathcal {OT}\) in which a sender S holds a pair of strings \((l_0,l_1)\), and a receiver R holds a bit b and wants to obtain the string \(l_b\). The security requirement for the \(F_\mathcal {OT}\) functionality is that any malicious receiver does not learn anything about the string \(l_{1-b}\) and any malicious sender does not learn which string has been transferred. This security requirement is formalized via the ideal/real world paradigm. In the ideal world, the functionality is implemented by a trusted party that takes the inputs from S and R and provides the output to R, and is therefore secure by definition. A real world protocol \(\varPi \) securely realizes the ideal \(F_\mathcal {OT}\) functionality if the following two conditions hold. (a) Security against a malicious receiver: the output of any malicious receiver \(R^\star \) running one execution of \(\varPi \) with an honest sender S can be simulated by a ppt simulator \(\mathsf {Sim}\) that has only access to the ideal world functionality \(F_\mathcal {OT}\) and oracle access to \(R^\star \). (b) Security against a malicious sender: the joint view of the output of any malicious sender \(S^\star \) running one execution of \(\varPi \) with R and the output of R can be simulated by a ppt simulator \(\mathsf {Sim}\) that has only access to the ideal world functionality \(F_\mathcal {OT}\) and oracle access to \(S^\star \). In this paper we consider a weaker definition of \(F_\mathcal {OT}\) that is called one-sided simulatable \(F_\mathcal {OT}\), in which we do not demand the existence of a simulator against a malicious sender, but only require that a malicious sender cannot distinguish whether the honest receiver is playing with bit 0 or 1. A bit more formally, we require that for any ppt malicious sender \(S^\star \) the view of \(S^\star \) executing \(\varPi \) with R playing with bit 0 is computationally indistinguishable from the view of \(S^\star \) where R is playing with bit 1. Finally, we consider the \(F^m_\mathcal {OT}\) functionality where the sender S and the receiver R run \(m\) executions of OT in parallel. The formal definitions of one-sided secure \(F_\mathcal {OT}\) and one-sided secure \(F^m_\mathcal {OT}\) follow.

Fig. 4. The Oblivious Transfer Functionality \(F_\mathcal {OT}\).
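Since the figure itself is not reproduced here, a minimal trusted-party stub for \(F_\mathcal {OT}\) is sketched below (function name and types are hypothetical; abort and corruption handling are omitted).

```python
# Trusted-party stub for the OT functionality F_OT of Fig. 4 (sketch only):
# S sends (l0, l1), R sends b; only R receives output, S learns nothing.
def F_OT(l0: bytes, l1: bytes, b: int) -> bytes:
    return l1 if b == 1 else l0
```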

Definition 2

([16]). Let \(F_\mathcal {OT}\) be the Oblivious Transfer functionality as shown in Fig. 4. We say that a protocol \(\varPi \) securely computes \(F_\mathcal {OT}\) with one-sided simulation if the following holds:

  1. For every non-uniform ppt adversary \(R^\star \) controlling the receiver in the real model, there exists a non-uniform ppt adversary \(\mathsf {Sim}\) for the ideal model such that \(\{\mathsf {REAL}_{\varPi ,R^\star (z)}(1^\lambda )\}_{z\in \{0,1\}^\lambda } \approx \{\mathsf {IDEAL}_{f,\mathsf {Sim}(z)}(1^\lambda )\}_{z\in \{0,1\}^\lambda }\), where \(\mathsf {REAL}_{\varPi ,R^\star (z)}(1^\lambda )\) denotes the distribution of the output of the adversary \(R^\star \) (controlling the receiver) after a real execution of protocol \(\varPi \), where the sender S has inputs \(l_0, l_1\) and the receiver has input b. \(\mathsf {IDEAL}_{f,\mathsf {Sim}(z)}(1^\lambda )\) denotes the analogous distribution in an ideal execution with a trusted party that computes \(F_\mathcal {OT}\) for the parties and hands the output to the receiver.

  2. For every non-uniform ppt adversary \(S^\star \) controlling the sender it holds that: \(\{\text{ View }^R_{\varPi , S^\star (z)}(l_0,l_1,0)\}_{z\in \{0,1\}^\star } \approx \{\text{ View }^R_{\varPi , S^\star (z)}(l_0,l_1,1)\}_{z\in \{0,1\}^\star }\), where \(\text{ View }^R_{\varPi , S^\star (z)}\) denotes the view of adversary \(S^\star \) after a real execution of protocol \(\varPi \) with the honest receiver R.

Definition 3

(Parallel oblivious transfer functionality \(F^m_\mathcal {OT}\) [16]). The parallel Oblivious Transfer functionality \(F^m_\mathcal {OT}\) is identical to the functionality \(F_\mathcal {OT}\), with the difference that it takes as input \(m\) pairs of strings from S, \((l_0^1,l_1^1, \dots , l_0^m, l_1^m)\) (whereas \(F_\mathcal {OT}\) takes just one pair of strings from S), and \(m\) bits from R, \(b_1,\dots , b_m\) (whereas \(F_\mathcal {OT}\) takes one bit from R), and outputs to the receiver the values \((l_{b_1}^1,\dots ,l_{b_m}^m)\) while the sender receives nothing.

Definition 4

([16]). Let \(F^m_\mathcal {OT}\) be the Oblivious Transfer functionality as described in Definition 3. We say that a protocol \(\varPi \) securely computes \(F^m_\mathcal {OT}\) with one-sided simulation if the following holdsFootnote 7:

  1. For every non-uniform ppt adversary \(R^\star \) controlling the receiver in the real model, there exists a non-uniform ppt adversary \(\mathsf {Sim}\) for the ideal model such that for every \(x_1\in \{0,1\},\dots , x_m\in \{0,1\}\) it holds that \(\{\mathsf {REAL}_{\varPi ,R^\star (z)}(1^\lambda , (l_0^1,l_1^1, \dots , l_0^m, l_1^m), (x_1,\dots ,x_m))\}_{z\in \{0,1\}^\lambda } \approx \{\mathsf {IDEAL}_{f,\mathsf {Sim}(z)}(1^\lambda , (l_0^1,l_1^1, \dots , l_0^m, l_1^m), (x_1,\dots ,x_m))\}_{z\in \{0,1\}^\lambda }\), where \(\mathsf {REAL}_{\varPi ,R^\star (z)}(1^\lambda )\) denotes the distribution of the output of the adversary \(R^\star \) (controlling the receiver) after a real execution of protocol \(\varPi \), where the sender S has inputs \((l_0^1,l_1^1, \dots , l_0^m, l_1^m)\) and the receiver has input \((x_1,\dots , x_m)\). \(\mathsf {IDEAL}_{f,\mathsf {Sim}(z)}(1^\lambda )\) denotes the analogous distribution in an ideal execution with a trusted party that computes \(F^m_\mathcal {OT}\) for the parties and hands the output to the receiver.

  2. For every non-uniform ppt adversary \(S^\star \) controlling the sender it holds that for every \(x_1\in \{0,1\},\dots , x_m\in \{0,1\}\) and for every \(y_1\in \{0,1\},\dots ,y_m\in \{0,1\}\): \(\{\text{ View }^R_{\varPi , S^\star (z)}((l_0^1,l_1^1, \dots , l_0^m, l_1^m), (x_1,\dots ,x_m))\}_{z\in \{0,1\}^\star }\approx \{\text{ View }^R_{\varPi , S^\star (z)}((l_0^1,l_1^1, \dots , l_0^m, l_1^m), (y_1,\dots ,y_m))\}_{z\in \{0,1\}^\star }\), where \(\text{ View }^R_{\varPi , S^\star (z)}\) denotes the view of adversary \(S^\star \) after a real execution of protocol \(\varPi \) with the honest receiver R.

3 Our OT Protocol \(\varPi ^\gamma _\mathcal {OT}=(S_\mathcal {OT}, R_\mathcal {OT})\)

We use the following tools.

  1. A non-interactive perfectly binding, computationally hiding commitment scheme \(\mathsf {PBCOM}=(\mathsf{{Com}},\mathsf{{Dec}})\).

  2. A trapdoor permutation \(\mathcal {F}=(\mathsf {Gen},\mathsf {Eval}, \mathsf {Invert})\)Footnote 8 with the hardcore bit function for \(\lambda \) bits \(\mathsf {hc}(\cdot )\) (see Definition 1).

  3. A non-interactive IDTC scheme \(\mathsf {TC}_0=(\mathsf {Sen}_0, \mathsf {Rec}_0, \mathsf {TFake}_0)\)Footnote 9 for the \(\mathcal {N}\mathcal {P}\)-language \(L_0=\{\mathtt {com}: \exists \ \mathtt {dec}\ \text {s.t.}\ \mathsf{{Dec}}(\mathtt {com},\mathtt {dec}, 0)=1\}\).

  4. A non-interactive IDTC scheme \(\mathsf {TC}_1=(\mathsf {Sen}_1, \mathsf {Rec}_1, \mathsf {TFake}_1)\) for the \(\mathcal {N}\mathcal {P}\)-language \(L_1=\{\mathtt {com}: \exists \ \mathtt {dec}\ \text {s.t.}\ \mathsf{{Dec}}(\mathtt {com},\mathtt {dec}, 1)=1\}\).

Let \(b\in \{0,1\}\) be the input of \(R_\mathcal {OT}\) and \(l_0,l_1\in \{0,1\}^{\lambda }\) be the input of \(S_\mathcal {OT}\). We now give the description of our protocol following Fig. 5.

In the first round \(R_\mathcal {OT}\) runs \(\mathsf{{Com}}\) on input the message to be committed, b, in order to obtain the pair \((\mathtt {com}, \mathtt {dec})\). On input the instance \(\mathtt {com}\) and a random string \(r_{1-b}^1\), \(R_\mathcal {OT}\) runs \(\mathsf {Sen}_{1-b}\) in order to compute the pair \((\mathtt {tcom}_{1-b}, \mathtt {tdec}_{1-b})\). We observe that the Instance-Dependent Binding property of the IDTCs, the description of the \(\mathcal {N}\mathcal {P}\)-language \(L_{1-b}\) and the fact that the bit b has been committed in \(\mathtt {com}\) ensure that \(\mathtt {tcom}_{1-b}\) can be opened only to the value \(r_{1-b}^1\).Footnote 10 \(R_\mathcal {OT}\) then runs the trapdoor procedure of the IDTC scheme \(\mathsf {TC}_b\). More precisely, \(R_\mathcal {OT}\) runs \(\mathsf {TFake}_b\) on input the instance \(\mathtt {com}\) to compute the pair \((\mathtt {tcom}_b, \mathtt {aux})\). In this case \(\mathtt {tcom}_b\) can be equivocated to any message using the trapdoor (the opening information of \(\mathtt {com}\)), due to the trapdoorness of the IDTC, the description of the \(\mathcal {N}\mathcal {P}\)-language \(L_b\) and the message committed in \(\mathtt {com}\) (that is, the bit b). \(R_\mathcal {OT}\) sends \(\mathtt {tcom}_0\), \(\mathtt {tcom}_1\) and \(\mathtt {com}\) to \(S_\mathcal {OT}\).

In the second round \(S_\mathcal {OT}\) picks two random strings \(R_0\), \(R_1\) and two trapdoor permutations \((f_{0,1}, f_{1,1})\) along with their trapdoors \((f^{-1}_{0,1}, f^{-1}_{1,1})\). Then \(S_\mathcal {OT}\) sends \(R_0\), \(R_1\), \(f_{0,1}\) and \(f_{1,1}\) to \(R_\mathcal {OT}\).

In the third round \(R_\mathcal {OT}\) checks whether or not \(f_{0,1}\) and \(f_{1,1}\) are valid trapdoor permutations. In the negative case \(R_\mathcal {OT}\) aborts, otherwise \(R_\mathcal {OT}\) continues with the following steps. \(R_\mathcal {OT}\) picks a random string \(z_1^\prime \) and computes \(z_1=f_{b,1}^{\lambda }(z_1^\prime )\). \(R_\mathcal {OT}\) now computes \(r_b^1=z_1\oplus R_b\) and runs \(\mathsf {TFake}_b\) on input \(\mathtt {dec}\), \(\mathtt {com}\), \(\mathtt {tcom}_b\), \(\mathtt {aux}\) and \(r_b^1\) in order to obtain the equivocal opening \(\mathtt {tdec}_b\) of the commitment \(\mathtt {tcom}_b\) to the message \(r_b^1\). \(R_\mathcal {OT}\) renames \(r_b\) to \(r_b^1\) and \(\mathtt {tdec}_b\) to \(\mathtt {tdec}_b^1\), and sends \((\mathtt {tdec}^1_0, r_0^1)\) and \((\mathtt {tdec}^1_1, r_1^1)\) to \(S_\mathcal {OT}\).

In the fourth round \(S_\mathcal {OT}\) checks whether or not \((\mathtt {tdec}^1_0, r_0^1)\) and \((\mathtt {tdec}^1_1, r_1^1)\) are valid openings w.r.t. \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\). In the negative case \(S_\mathcal {OT}\) aborts, otherwise \(S_\mathcal {OT}\) computes \(W_0^1=l_0\oplus \mathsf {hc}(f_{0,1}^{-\lambda }(r_0^1\oplus R_0))\) and \(W_1^1=l_1\oplus \mathsf {hc}(f_{1,1}^{-\lambda }(r_1^1\oplus R_1))\). Informally, \(S_\mathcal {OT}\) encrypts his inputs \(l_0\) and \(l_1\) through a one-time pad, using as secret keys the hardcore bits of the pre-image of \( r_0^1\oplus R_0\) for \(l_0\) and of the pre-image of \(r_1^1 \oplus R_1\) for \(l_1\). \(S_\mathcal {OT}\) also generates two trapdoor permutations \((f_{0,2}, f_{1,2})\) along with their trapdoors \((f^{-1}_{0,2}, f^{-1}_{1,2})\) and sends \((W_0^1,W_1^1, f_{0,2}, f_{1,2})\) to \(R_\mathcal {OT}\). At this point the third and the fourth rounds are repeated up to \(\gamma -1\) times using fresh randomness, as shown in Fig. 5. In the last round no trapdoor permutation is needed/sent.

In the output phase, \(R_\mathcal {OT}\) computes and outputs \(l_b=W^1_b\oplus \mathsf {hc}(z_1^\prime )\). That is, \(R_\mathcal {OT}\) just uses the information gained in the first four rounds to compute the output. It is important to observe that \(R_\mathcal {OT}\) can correctly and efficiently compute the output because \(f_{b,1}^{\lambda }(z_1^\prime )=r_b^1\oplus R_b\). Moreover \(R_\mathcal {OT}\) cannot compute \(l_{1-b}\) because he has no way to change the value committed in \(\mathtt {tcom}_{1-b}\), and inverting the TDP is supposed to be hard without the trapdoor.
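As a sanity check of the masking/unmasking step just described, the sketch below reuses the toy RSA permutation and lsb hardcore predicate from the earlier sketch (re-declared so the snippet is self-contained) together with the iterated \(\lambda \)-bit \(\mathsf {hc}\) of Definition 1; a single toy TDP instance stands in for both \(f_{0,1}\) and \(f_{1,1}\), and all parameters are illustrative.

```python
# Correctness of the fourth round and output phase (sketch): the receiver,
# knowing z', recovers l_b from W_b = l_b XOR hc(f^{-lam}(r_b XOR R_b)),
# because r_b was set (via equivocation of tcom_b) so that r_b XOR R_b = f^lam(z').
# Toy RSA permutation and lsb hardcore predicate, illustration only.
import random

P, Q, E = 61, 53, 17
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))
LAM = 8                                            # toy "lambda"

def f(x):      return pow(x, E, N)
def f_inv(y):  return pow(y, D, N)
def h(x):      return x & 1                        # assumed hardcore bit

def hc(z):                                         # h(z)||h(f(z))||... (LAM bits)
    out = 0
    for _ in range(LAM):
        out = (out << 1) | h(z)
        z = f(z)
    return out

def f_pow(x, k):                                   # f^k(x)
    for _ in range(k): x = f(x)
    return x

def f_inv_pow(y, k):                               # f^{-k}(y)
    for _ in range(k): y = f_inv(y)
    return y

b = 1
l = [random.getrandbits(LAM), random.getrandbits(LAM)]   # sender's inputs
R = [random.randrange(N), random.randrange(N)]           # sender's round-2 strings
r = [random.randrange(N), random.randrange(N)]           # r_{1-b} is fixed in round 1
z_prime = random.randrange(N)
r[b] = f_pow(z_prime, LAM) ^ R[b]                        # equivocated opening of tcom_b

# Fourth round: one-time pads keyed by iterated hardcore bits of the preimages.
W = [l[c] ^ hc(f_inv_pow(r[c] ^ R[c], LAM)) for c in (0, 1)]

# Output phase: R recovers l_b from z' alone; l_{1-b} stays hidden.
assert W[b] ^ hc(z_prime) == l[b]
```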

Fig. 5. Description of \(\varPi ^\gamma _\mathcal {OT}\).

In order to construct our protocol for two-party computation in the simultaneous message exchange model we need to consider an extended version of \(\varPi ^\gamma _\mathcal {OT}\), which we denote by \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}=(S_{\overrightarrow{\mathcal {OT}}},R_{\overrightarrow{\mathcal {OT}}})\). In \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) the input of \(S_{\overrightarrow{\mathcal {OT}}}\) is represented by \(m\) pairs \((l_0^1, l_1^1, \dots , l_0^m, l_1^m)\) and the input of \(R_{\overrightarrow{\mathcal {OT}}}\) is represented by the sequence \(b_1,\dots ,b_m\) with \(b_i\in \{0,1\}\) for all \(i=1,\dots ,m\). In this case the output of \(R_{\overrightarrow{\mathcal {OT}}}\) is \((l_{b_1}^1,\dots ,l_{b_m}^m)\). We construct \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}=(S_{\overrightarrow{\mathcal {OT}}},R_{\overrightarrow{\mathcal {OT}}})\) by simply considering \(m\) parallel executions of \(\varPi ^\gamma _\mathcal {OT}\), and then we prove that it securely computes \(F^m_\mathcal {OT}\) with one-sided simulation (see Definition 4).

Proof sketch. The security proof of \(\varPi ^\gamma _\mathcal {OT}\) is divided in two parts. In the former we prove the security against a malicious sender and in the latter we prove the security of \(\varPi ^\gamma _\mathcal {OT}\) against a malicious receiver. In order to prove security against a malicious sender, we recall that for the definition of one-sided simulation we just need that no information about R’s input is leaked to \(S^\star \). We consider the experiment \(H_0\) where R’s input is 0 and the experiment \(H_1\) where R’s input is 1, and we prove that \(S^\star \) cannot distinguish between \(H_0\) and \(H_1\). More precisely, we consider the experiment \(H^a\) where \(\mathtt {tcom}_0\) and the corresponding opening are computed without using the trapdoor (the randomness of \(\mathtt {com}\)) and, relying on the trapdoorness of the IDTCom \(\mathsf {TC}_0\), we prove that \(H_0\approx H^a\). Then we consider the experiment \(H^b\) where the value committed in \(\mathtt {com}\) goes from 0 to 1 and prove that \(H^a\approx H^b\) due to the hiding of \(\mathtt {com}\). We observe that this reduction can be made because to compute both \(H^a\) and \(H^b\) the opening information of \(\mathtt {com}\) is not required anymore. The proof ends with the observation that \(H^b\approx H_1\) due to the trapdoorness of the IDTCom \(\mathsf {TC}_1\). To prove the security against a malicious receiver \(R^\star \) we need to show a simulator \(\mathsf {Sim}\). \(\mathsf {Sim}\) rewinds \(R^\star \) from the third to the second round by sending each time freshly generated \(R_0\) and \(R_1\). \(\mathsf {Sim}\) then checks whether the values \(r_0^1\) and \(r_1^1\) change during the rewinds. We recall that \(\mathtt {com}\) is a perfectly binding commitment, therefore only one of \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) can be opened to multiple values using the trapdoor procedure (\(\mathtt {com}\) can belong to only one of the \(\mathcal {N}\mathcal {P}\)-languages \(L_0\) and \(L_1\)). Moreover, intuitively, the only way that \(R^\star \) can compute the output is by equivocating one of \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) based on the values \(R_0, R_1\) received in the second round. This means that if during the rewinds the value opened w.r.t. \(\mathtt {tcom}_b\) changes, then the input that \(R^\star \) is using is b. Therefore the simulator can call the ideal functionality thus obtaining \(l_b\). At this point \(\mathsf {Sim}\) uses \(l_b\) to compute \(W_b^1\) according to the description of \(\varPi ^\gamma _\mathcal {OT}\) and sets \(W_{1-b}^1\) to a random string. Moreover \(\mathsf {Sim}\) will use the same strategy used to compute \(W_b^1\) and \(W_{1-b}^1\) to compute, respectively, \(W_b^i\) and \(W_{1-b}^i\) for \(i=2,\dots ,\gamma \). In case the values \(r_0^1, r_1^1\) stay the same during the rewinds, \(\mathsf {Sim}\) sets both \(W_0^1\) and \(W_1^1\) to random strings. We observe that \(R^\star \) could detect that \(W_0^1\) and \(W_1^1\) are now computed in a different way, but this would violate the security of the TDPs.

Theorem 2

Assuming TDPs, for any \(\gamma >0\) \(\varPi ^\gamma _\mathcal {OT}\) securely computes \(F_\mathcal {OT}\) with one-sided simulation. Moreover the third round is replayable.

Proof

We first observe that in the third round of \(\varPi ^\gamma _\mathcal {OT}\) only the opening information for the IDTCs \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) is sent. Therefore, once a valid third round is received, it is possible to replay it in order to answer many second rounds sent by a malicious sender. Roughly, whether the third round of \(\varPi ^\gamma _\mathcal {OT}\) is accepting or not is independent of what a malicious sender sends in the second round. Therefore we have proved that \(\varPi ^\gamma _\mathcal {OT}\) has a replayable third round. In order to prove that \(\varPi ^\gamma _\mathcal {OT}\) is one-sided simulatable for \(F_\mathcal {OT}\) (see Definition 2) we divide the security proof in two parts; the former proves the security against a malicious sender, and the latter proves the security against a malicious receiver. More precisely, we prove that \(\varPi ^\gamma _\mathcal {OT}\) is secure against a malicious receiver for an arbitrarily chosen \(\gamma =\mathsf{poly}(\lambda )\), and is secure against a malicious sender for \(\gamma =1\) (i.e., when just the first four rounds of the protocol are executed).

Security against a malicious sender. In this case we just need to prove that the output of \(S_\mathcal {OT}^\star \) in the execution of \(\varPi ^\gamma _\mathcal {OT}\) when \(R_\mathcal {OT}\) interacts with \(S_\mathcal {OT}^\star \) using \(b=0\) as input is computationally indistinguishable from the case when \(R_\mathcal {OT}\) uses \(b=1\) as input. The differences between these two experiments consist of the message committed in \(\mathtt {com}\) and the way in which the IDTCs are computed. More precisely, in the first experiment, when \(b=0\) is used as input, \(\mathtt {tcom}_0\) and the corresponding opening \((\mathtt {tdec}^1_0, r_0^1)\) are computed using the trapdoor procedure (in this case the message committed in \(\mathtt {com}\) is 0), while \(\mathtt {tcom}_1\) and \((\mathtt {tdec}^1_1, r_1^1)\) are computed using the honest procedure. In the second experiment, \(\mathtt {tcom}_0\) and the respective opening \((\mathtt {tdec}^1_0, r_0^1)\) are computed using the honest procedure, while \(\mathtt {tcom}_1\) and \((\mathtt {tdec}^1_1, r_1^1)\) are computed using the trapdoor procedure of the IDTC scheme. In order to prove the indistinguishability between these two experiments we proceed via hybrid arguments. The first hybrid experiment \(\mathcal {H}_1\) is equal to the execution in which \(R_\mathcal {OT}\) interacts with \(S_\mathcal {OT}^\star \) according to \(\varPi ^\gamma _\mathcal {OT}\) when \(b=0\) is used as input. In \(\mathcal {H}_2\) the honest procedure of the IDTC is used instead of the trapdoor one in order to compute \(\mathtt {tcom}_0\) and the opening \((\mathtt {tdec}^1_0, r_0^1)\). We observe that in \(\mathcal {H}_2\) both IDTCs are computed using the honest procedure, therefore no trapdoor information (i.e., the randomness used to compute \(\mathtt {com}\)) is required. The computational indistinguishability between \(\mathcal {H}_1\) and \(\mathcal {H}_2\) comes from the trapdoorness of the IDTC \(\mathsf {TC}_0\). In \(\mathcal {H}_3\) the value committed in \(\mathtt {com}\) goes from 0 to 1. \(\mathcal {H}_2\) and \(\mathcal {H}_3\) are indistinguishable due to the hiding of \(\mathsf {PBCOM}\). It is important to observe that a reduction to the hiding of \(\mathsf {PBCOM}\) is possible because the randomness used to compute \(\mathtt {com}\) is no longer used in the protocol execution to run one of the IDTCs. In the last hybrid experiment \(\mathcal {H}_4\) the trapdoor procedure is used in order to compute \(\mathtt {tcom}_1\) and the opening \((\mathtt {tdec}^1_1, r_1^1)\). We observe that it is possible to run the trapdoor procedure for \(\mathsf {TC}_1\) because the message committed in \(\mathtt {com}\) is 1. The indistinguishability between \(\mathcal {H}_3\) and \(\mathcal {H}_4\) comes from the trapdoorness of the IDTC. The observation that \(\mathcal {H}_4\) corresponds to the experiment where the honest receiver executes \(\varPi ^\gamma _\mathcal {OT}\) using \(b=1\) as input concludes the security proof.

Security against a malicious receiver. In order to prove that \(\varPi ^\gamma _\mathcal {OT}\) is simulation-based secure against a malicious receiver \(R_\mathcal {OT}^\star \) we need to show a ppt simulator \(\mathsf {Sim}\) that, having only access to the ideal world functionality \(F_\mathcal {OT}\), can simulate the output of any malicious \(R_\mathcal {OT}^\star \) running one execution of \(\varPi ^\gamma _\mathcal {OT}\) with an honest sender \(S_\mathcal {OT}\). The simulator \(\mathsf {Sim}\) works as follows. Having oracle access to \(R_\mathcal {OT}^\star \), \(\mathsf {Sim}\) runs as a sender in \(\varPi ^\gamma _\mathcal {OT}\) by sending two random strings \(R_0\) and \(R_1\) and the pair of TDPs \(f_{0,1}\) and \(f_{1,1}\) in the second round. Let \((\mathtt {tdec}^1_0, r^1_0), (\mathtt {tdec}^1_1, r^1_1)\) be the messages sent in the third round by \(R_\mathcal {OT}^\star \). Now \(\mathsf {Sim}\) rewinds \(R_\mathcal {OT}^\star \) by sending two fresh random strings \(\overline{R}_0\) and \(\overline{R}_1\) such that \(\overline{R}_0\ne R_0\) and \(\overline{R}_1\ne R_1\).

Let \((\overline{\mathtt {tdec}}^1_0, \overline{r}^1_0), (\overline{\mathtt {tdec}}^1_1, \overline{r}^1_1)\) be the messages sent in the third round by \(R_\mathcal {OT}^\star \) after this rewind, then there are only two things that can happenFootnote 11:

  1. \(r^1_{b^\star }\ne \overline{r}^1_{b^\star }\) and \(r^1_{1-{b^\star }}=\overline{r}^1_{1-{b^\star }}\) for some \({b^\star }\in \{0,1\}\), or

  2. \(r^1_0=\overline{r}^1_0\) and \(r^1_1=\overline{r}^1_1\).

More precisely, due to the perfect binding of \(\mathsf {PBCOM}\), at most one between \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) can be opened to a different message. Therefore \(R_\mathcal {OT}^\star \) can either open both \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) to the same messages \(r^1_0\) and \(r^1_1\), or change the opening of at most one of them. This yields the following important observation. If one among \(r^1_0\) and \(r^1_1\) changes during the rewind, say \(r_{b^\star }\) for \({b^\star }\in \{0,1\}\) (case 1), then the input bit used by \(R_\mathcal {OT}^\star \) has to be \({b^\star }\). Indeed, we recall that the only efficient way (i.e., without inverting the TDP) for a receiver to get the output is to equivocate one of the IDTCs in order to compute the inverse of one between \(R_0\oplus r^1_0\) and \(R_1\oplus r^1_1\). Therefore the simulator invokes the ideal world functionality \(F_\mathcal {OT}\) using \(b^\star \) as input, and upon receiving \(l_{b^\star }\) computes \(W^1_{b^\star }=l_{b^\star }\oplus \mathsf {hc}(f_{b^\star ,1}^{-\lambda }( r^1_{b^\star }\oplus R_{b^\star }))\) and sets \(W^1_{1-b^\star }\) to a random string. Then it sends \(W^1_0\) and \(W^1_1\) together with two freshly generated TDPs \(f_{0,2}, f_{1,2}\) (according to the description of \(\varPi ^\gamma _\mathcal {OT}\) given in Fig. 5) to \(R_\mathcal {OT}^\star \). Let us now consider the case where the openings of \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) stay the same after the rewinding procedure (case 2). In this case, \(\mathsf {Sim}\) comes back to the main thread and sets both \(W_0^1\) and \(W_1^1\) to random strings. Intuitively, if \(R_\mathcal {OT}^\star \) changes neither \(r^1_0\) nor \(r^1_1\) after the rewind, then his behavior is not adaptive on the second round sent by \(\mathsf {Sim}\). Therefore, he will be able to compute the inverse of neither \(R_0\oplus r^1_0\) nor \(R_1\oplus r^1_1\). That is, both \(R_0\oplus r^1_0\) and \(R_1\oplus r^1_1\) are the results of the execution of two coin-flipping protocols, and therefore both of them are difficult to invert without knowing the trapdoors of the TDPs. This implies that \(R_\mathcal {OT}^\star \) has no efficient way to tell whether \(W_0^1\) and \(W_1^1\) are random strings or not.

Once the fourth round is completed, for \(i=2,\dots ,\gamma \), \(\mathsf {Sim}\) continues the interaction with \(R_\mathcal {OT}^\star \) by always setting both \(W^i_0\) and \(W^i_1\) to random strings when \(r^1_0= r^i_0\) and \(r^1_1= r^i_1\), and by using the following strategy when \(r^1_{b^\star }\ne r^i_{b^\star }\) and \(r^1_{1-{b^\star }}=r^i_{1-{b^\star }}\) for some \({b^\star }\in \{0,1\}\). \(\mathsf {Sim}\) invokes the ideal world functionality \(F_\mathcal {OT}\) using \(b^\star \) as input, and upon receiving \(l_{b^\star }\) computes \(W^i_{b^\star }=l_{b^\star }\oplus \mathsf {hc}(f_{b^\star ,i}^{-\lambda }( r^i_{b^\star }\oplus R_{b^\star }))\), sets \(W^i_{1-b^\star }\) to a random string and sends them together with two freshly generated TDPs \(f_{0,i+1}, f_{1,i+1}\) to \(R_\mathcal {OT}^\star \). When the interaction with \(R_\mathcal {OT}^\star \) is over, \(\mathsf {Sim}\) stops and outputs what \(R_\mathcal {OT}^\star \) outputs. We observe that the simulator needs to invoke the ideal world functionality just once. Indeed, we recall that only one of the IDTCs can be equivocated, therefore once the bit \(b^\star \) is determined (using the strategy described before) it cannot change during the simulation. The last thing that remains to observe is that \(\mathsf {Sim}\) might never need to invoke the ideal world functionality, namely when: (1) during the rewind the values \((r_0^1, r_1^1)\) stay the same; and (2) \(r_b^i=r_b^j\) for all \(i,j\in \{1,\dots , \gamma \}\) and \(b\in \{0,1\}\). In this case \(\mathsf {Sim}\) never outputs the bit \(b^\star \) that corresponds to \(R_\mathcal {OT}^\star \)’s input. That is, even though \(\mathsf {Sim}\) is sufficient to prove that \(\varPi ^\gamma _\mathcal {OT}\) is simulation-based secure against a malicious receiver, it cannot be used to extract the input of \(R_\mathcal {OT}^\star \). We formally prove that the output of \(\mathsf {Sim}\) is computationally indistinguishable from the output of \(R_\mathcal {OT}^\star \) in the real world execution for every \(\gamma =\mathsf{poly}(\lambda )\). The proof goes through hybrid arguments starting from the real world execution. We gradually modify the real world execution until the input of the honest party is no longer needed, so that the final hybrid represents the simulator for the ideal world. We denote by \(\mathsf {OUT}_{\mathcal {H}_i,R_\mathcal {OT}^\star (z)}(1^\lambda )\) the output distribution of \(R_\mathcal {OT}^\star \) in the hybrid experiment \(\mathcal {H}_i\).
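
A minimal Python sketch of \(\mathsf {Sim}\)’s strategy follows. It abstracts the adversary, the ideal functionality, the TDP generation/inversion and the hardcore-bit function behind hypothetical interfaces (R_star, F_OT.query, gen_tdp, hc); it only illustrates the case analysis: a single rewind of the third round, at most one query to \(F_\mathcal {OT}\), and a real label only on the side whose opening changed.

    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def simulate(R_star, F_OT, gen_tdp, hc, lam, gamma):
        # Second round: two random strings and two fresh TDPs.
        R = [secrets.token_bytes(lam), secrets.token_bytes(lam)]
        f = [gen_tdp(), gen_tdp()]
        r1 = R_star.third_round(R, f)          # main-thread openings (r^1_0, r^1_1)
        r_rew = R_star.rewound_third_round()   # openings after one rewind with fresh strings
        b_star, label = None, None

        def learn(b):                          # query F_OT at most once, as soon as b* is determined
            nonlocal b_star, label
            if b_star is None:
                b_star, label = b, F_OT.query(b)

        for b in (0, 1):
            if r1[b] != r_rew[b]:              # R* equivocated tcom_b, so its input must be b
                learn(b)

        r_i = r1
        for i in range(1, gamma + 1):          # the gamma repetitions of the last two rounds
            if i > 1:
                r_i = R_star.next_openings()
                for b in (0, 1):
                    if r_i[b] != r1[b]:
                        learn(b)
            W = [secrets.token_bytes(lam), secrets.token_bytes(lam)]
            prev = r_rew if i == 1 else r1
            if b_star is not None and r_i[b_star] != prev[b_star]:
                # only the equivocated side receives the real label l_{b*}
                W[b_star] = xor(label, hc(f[b_star].invert(xor(r_i[b_star], R[b_star]), lam)))
            f = [gen_tdp(), gen_tdp()]         # fresh TDPs sent together with (W^i_0, W^i_1)
            R_star.send_fourth_round(W, f)
        return R_star.output()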

-\(\mathcal {H}_0\) is identical to the real execution. More precisely \(\mathcal {H}_0\) runs \(R_\mathcal {OT}^\star \) using fresh randomness and interacts with him as the honest sender would do on input \((l_0,l_1)\).

-\(\mathcal {H}_0^\mathsf {rew}\) proceeds according to \(\mathcal {H}_0\) with the difference that \(R_\mathcal {OT}^\star \) is rewound up to the second round and receives two fresh random strings \(\overline{R}_0\) and \(\overline{R}_1\). This process is repeated until \(R_\mathcal {OT}^\star \) completes the third round again (every time using different randomness). More precisely, if \(R_\mathcal {OT}^\star \) aborts after the rewind then a fresh second round is sent, up to \(\lambda /p\) times, where p is the probability that \(R_\mathcal {OT}^\star \) completes the third round in \(\mathcal {H}_0\). If p is noticeable (i.e., \(p\ge 1/\mathsf{poly}(\lambda )\)) then the expected running time of \(\mathcal {H}_0^\mathsf {rew}\) is \(\mathsf{poly}(\lambda )\) and its output is statistically close to the output of \(\mathcal {H}_0\). When the third round is completed the hybrid experiment comes back to the main thread and continues according to \(\mathcal {H}_0\).
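
A minimal sketch of the rewinding procedure of \(\mathcal {H}_0^\mathsf {rew}\) (placeholder interfaces; the bound of \(\lambda /p\) attempts is the one used in the analysis and is taken here as a given parameter).

    def rewind_until_third_round(R_star, fresh_second_round, p, lam):
        # Resend fresh second rounds until R* completes the third round again,
        # with at most lam/p attempts; when p is noticeable this fails only with
        # negligible probability and the expected running time is polynomial.
        for _ in range(int(lam / p)):
            R_star.rewind_to_second_round()
            third = R_star.send(fresh_second_round())
            if third is not None:
                return third
        return None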

-\(\mathcal {H}_1\) proceeds according to \(\mathcal {H}_0^\mathsf {rew}\) with the difference that after the rewinds it executes the following steps. Let \(r_0^1\) and \(r_1^1\) be the messages opened by \(R_\mathcal {OT}^\star \) in the third round of the main thread and \(\overline{r}_0^1\) and \(\overline{r}_1^1\) be the messages opened during the rewind. We distinguish between the two cases that can happen:

  1. \(r^1_0=\overline{r}^1_0\) and \(r^1_1=\overline{r}^1_1\), or

  2. \(r^1_{b^\star }\ne \overline{r}^1_{b^\star }\) and \(\overline{r}^1_{1-{b^\star }}=r^1_{1-{b^\star }}\) for some \({b^\star }\in \{0,1\}\).

In this hybrid we assume that the first case happens with non-negligible probability. After the rewind \(\mathcal {H}_1\) goes back to the main thread and, in order to compute the fourth round, picks \(W_0^1\leftarrow \{0,1\}^\lambda \), computes \(W_1^1=l_1\oplus \mathsf {hc}(f_{1,1}^{-\lambda }(r_1^1\oplus R_1))\), \((f_{0,2}, f^{-1}_{0,2})\leftarrow \mathsf {Gen}(1^\lambda )\), \((f_{1,2}, f^{-1}_{1,2})\leftarrow \mathsf {Gen}(1^\lambda )\) and sends \((W_0^1, W_1^1, f_{0,2}, f_{1,2})\) to \(R_\mathcal {OT}^\star \). Then the experiment continues according to \(\mathcal {H}_0\). Roughly, the difference between \(\mathcal {H}_0\) and \(\mathcal {H}_1\) is that in the latter hybrid experiment \(W_0^1\) is a random string whereas in \(\mathcal {H}_0\) \(W_0^1=l_0\oplus \mathsf {hc}(f_{0,1}^{-\lambda }(r_0^1\oplus R_0))\). We now prove that the indistinguishability between \(\mathcal {H}_0\) and \(\mathcal {H}_1\) follows from the security of the hardcore bit function for \(\lambda \) bits \(\mathsf {hc}\) for the TDP family \(\mathcal {F}\). More precisely, assuming by contradiction that the outputs of \(\mathcal {H}_0\) and \(\mathcal {H}_1\) are distinguishable, we construct an adversary \(\mathcal {A}^\mathcal {F}\) that, on input \(f^\lambda (x)\), distinguishes \(\mathsf {hc}(x)\) from a random string of \(\lambda \) bits. Consider an execution where \(R_\mathcal {OT}^\star \) has non-negligible advantage in distinguishing \(\mathcal {H}_0\) from \(\mathcal {H}_1\), and consider the randomness \(\rho \) used by \(R_\mathcal {OT}^\star \) and the first round computed by \(R_\mathcal {OT}^\star \) in this execution, say \(\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1}\). \(\mathcal {A}^\mathcal {F}\), on input the randomness \(\rho \) and the messages \(r_0^1\) and \(r_1^1\), executes the following steps.

  1. Start \(R_\mathcal {OT}^\star \) with randomness \(\rho \).

  2. Let \((f, H, f^\lambda (x))\) be the challenge. Upon receiving the first round \((\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1})\) from \(R_\mathcal {OT}^\star \), compute \(R_{0}=r_{0}^1 \oplus f^\lambda (x)\), pick a random string \(R_{1}\), compute \((f_{1,1}, f^{-1}_{1,1})\leftarrow \mathsf {Gen}(1^\lambda )\), set \(f_{0,1}=f\) and send \(R_0,R_1, f_{0,1}, f_{1,1}\) to \(R_\mathcal {OT}^\star \).

  3. Upon receiving \((\mathtt {tdec}_0^1, r_0^1), (\mathtt {tdec}^1_1, r_1^1)\) compute \(W_{0}^1=l_{0}\oplus H\), \(W_{1}^1=l_{1}\oplus \mathsf {hc}(f_{1,1}^{-\lambda }(r_{1}^1\oplus R_1))\), \((f_{0,2}, f_{0,2}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\), \((f_{1,2}, f_{1,2}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\) and send \((W_0^1,W_1^1, f_{0,2}, f_{1,2})\).Footnote 12

  4. Continue the interaction with \(R_\mathcal {OT}^\star \) according to \(\mathcal {H}_1\) (and \(\mathcal {H}_0\)) and output what \(R_\mathcal {OT}^\star \) outputs.

This part of the security proof ends with the observation that if \(H=\mathsf {hc}(x)\) then \(R_\mathcal {OT}^\star \) acts as in \(\mathcal {H}_0\), otherwise \(R_\mathcal {OT}^\star \) acts as in \(\mathcal {H}_{1}\).
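
The reduction \(\mathcal {A}^\mathcal {F}\) above can be summarized by the following Python sketch (hypothetical interfaces for the adversary and the TDP; only the placement of the challenge \((f, H, f^\lambda (x))\) matters).

    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def A_F(challenge, R_star, rho, r0_1, r1_1, l0, l1, gen_tdp, hc, lam):
        # challenge = (f, H, y) with y = f^lambda(x); H is either hc(x) or uniform.
        f, H, y = challenge
        R_star.restart(rho)                       # run R*_OT with the fixed randomness rho
        com, tcom0, tcom1 = R_star.first_round()
        R0 = xor(r0_1, y)                         # embed the challenge: r^1_0 xor R_0 = f^lambda(x)
        R1 = secrets.token_bytes(lam)
        f11 = gen_tdp()                           # honest TDP f_{1,1}; f_{0,1} is the challenge TDP f
        R_star.second_round(R0, R1, f, f11)
        (_, r0), (_, r1) = R_star.third_round()
        W0 = xor(l0, H)                           # H = hc(x) reproduces H_0, uniform H reproduces H_1
        W1 = xor(l1, hc(f11.invert(xor(r1, R1), lam)))
        R_star.fourth_round(W0, W1, gen_tdp(), gen_tdp())
        return R_star.finish_and_output()         # continue as in H_0/H_1 and output what R* outputs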

- \(\mathcal {H}_2\) proceeds according to \(\mathcal {H}_1\) with the difference that both \(W_0^1\) and \(W_1^1\) are set to random strings. Also in this case the indistinguishability between \(\mathcal {H}_1\) and \(\mathcal {H}_2\) follows from the security of the hardcore bit function for \(\lambda \) bits \(\mathsf {hc}\) for the family \(\mathcal {F}\) (the same arguments of the previous security proof can be used to prove the indistinguishability between \(\mathcal {H}_2\) and \(\mathcal {H}_1\)).

- \(\mathcal {H}_3\): in this hybrid experiment we consider the case where, after the rewind, with non-negligible probability \(r^1_{b^\star }\ne \overline{r}^1_{b^\star }\) and \(\overline{r}^1_{1-{b^\star }}=r^1_{1-{b^\star }}\) for some \({b^\star }\in \{0,1\}\).

In this case, in the main thread the hybrid experiment computes \(W^1_{b^\star }=l_{b^\star }\oplus \mathsf {hc}(f_{b^\star ,1}^{-\lambda }( r^1_{b^\star }\oplus R_{b^\star }))\), picks \(W^1_{1-b^\star }\leftarrow \{0,1\}^\lambda \) and sends \(W^1_0, W^1_1\) together with two freshly generated TDPs \(f_{0,2}, f_{1,2}\). \(\mathcal {H}_3\) then continues the interaction with \(R_\mathcal {OT}^\star \) according to \(\mathcal {H}_2\). The indistinguishability between \(\mathcal {H}_2\) and \(\mathcal {H}_3\) follows from the security of the hardcore bit function for \(\lambda \) bits \(\mathsf {hc}\) for the TDP family \(\mathcal {F}\). More precisely, assuming by contradiction that \(\mathcal {H}_2\) and \(\mathcal {H}_3\) are distinguishable, we construct an adversary \(\mathcal {A}^\mathcal {F}\) that, on input \(f^\lambda (x)\), distinguishes \(\mathsf {hc}(x)\) from a random string of \(\lambda \) bits. Consider an execution where \(R_\mathcal {OT}^\star \) has non-negligible advantage in distinguishing \(\mathcal {H}_2\) from \(\mathcal {H}_3\), and consider the randomness \(\rho \) used by \(R_\mathcal {OT}^\star \) and the first round computed in this execution, say \(\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1}\). \(\mathcal {A}^\mathcal {F}\), on input the randomness \(\rho \), the message \(b^\star \) committed in \(\mathtt {com}\) and the message \(r_{1-b^\star }^1\) committed in \(\mathtt {tcom}_{1-b^\star }\), executes the following steps.

  1. Start \(R_\mathcal {OT}^\star \) with randomness \(\rho \).

  2. Let \((f, H, f^\lambda (x))\) be the challenge. Upon receiving the first round \((\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1})\) from \(R_\mathcal {OT}^\star \), compute \(R_{1-b^\star }=r_{1-b^\star }^1 \oplus f^\lambda (x)\), pick a random string \(R_{b^\star }\), compute \((f_{b^\star ,1}, f^{-1}_{b^\star ,1})\leftarrow \mathsf {Gen}(1^\lambda )\), set \(f_{1-b^\star ,1}=f\) and send \((R_0,R_1, f_{0,1}, f_{1,1})\) to \(R_\mathcal {OT}^\star \).

  3. Upon receiving \((\mathtt {tdec}_0^1, r_0^1), (\mathtt {tdec}^1_1, r_1^1)\) compute \(W_{1-b^\star }^1=l_{1-b^\star }\oplus H\), \(W_{b^\star }^1=l_{b^\star }\oplus \mathsf {hc}(f_{b^\star ,1}^{-\lambda }(r_{b^\star }^1\oplus R_{b^\star }))\), \((f_{0,2}, f_{0,2}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\), \((f_{1,2}, f_{1,2}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\) and send \((W_0^1,W_1^1, f_{0,2}, f_{1,2})\).

  4. Continue the interaction with \(R_\mathcal {OT}^\star \) according to \(\mathcal {H}_2\) (and \(\mathcal {H}_3\)) and output what \(R_\mathcal {OT}^\star \) outputs.

This part of the security proof ends with the observation that if \(H=\mathsf {hc}(x)\) then \(R_\mathcal {OT}^\star \) acts as in \(\mathcal {H}_2\), otherwise he acts as in \(\mathcal {H}_3\).

- \(\mathcal {H}^{j}_3\) proceeds according to \(\mathcal {H}_3\) with the difference that for \(i=2,\dots , j\):

  1. if \(r_{b^\star }^i\ne r_{b^\star }^1\) for some \(b^\star \in \{0,1\}\) then \(\mathcal {H}^{j}_3\) picks \(W_{1-b^\star }^i\leftarrow \{0,1\}^\lambda \), computes \(W_{b^\star }^i=l_{b^\star }\oplus \mathsf {hc}(f_{{b^\star },i}^{-\lambda }(r_{b^\star }^i\oplus R_{b^\star }))\) and sends \(W_0^i, W_1^i\) together with two freshly generated TDPs \(f_{0,i+1}, f_{1,i+1}\) to \(R_\mathcal {OT}^\star \); otherwise

  2. \(\mathcal {H}^{j}_3\) picks \(W_0^i\leftarrow \{0,1\}^\lambda \) and \(W_1^i\leftarrow \{0,1\}^\lambda \) and sends \(W_0^i, W_1^i\) together with two freshly generated TDPs \(f_{0,i+1}, f_{1,i+1}\) to \(R_\mathcal {OT}^\star \).

Roughly speaking, if \(R_\mathcal {OT}^\star \) changes the opened message w.r.t. \(\mathtt {tcom}_{b^\star }\), then \(W_{b^\star }^i\) is correctly computed and \(W_{1-b^\star }^i\) is set to a random string. Otherwise, if the openings of \(\mathtt {tcom}_0\) and \(\mathtt {tcom}_1\) stay the same as in the third round, then both \(W_0^i\) and \(W_1^i\) are random strings (for \(i=2,\dots ,j\)). We show that \(\mathsf {OUT}_{\mathcal {H}_3^{j-1},R_\mathcal {OT}^\star (z)}(1^\lambda )\approx \mathsf {OUT}_{\mathcal {H}_3^j,R_\mathcal {OT}^\star (z)}(1^\lambda )\) in two steps. In the first step we show that the indistinguishability between these two hybrid experiments holds for the first case (when \(r_{b^\star }^i\ne r_{b^\star }^1\) for some bit \(b^\star \)), and in the second step we show that the same holds when \(r_{0}^i = r_{0}^1\) and \(r_{1}^i = r_{1}^1\). We first recall that if \(r_{b^\star }^i\ne r_{b^\star }^1\), then \(\mathtt {tcom}_{1-b^\star }\) is perfectly binding, therefore we have that \(r_{1-b^\star }^i= r_{1-b^\star }^1\). Assuming by contradiction that \(\mathcal {H}^{j-1}_3\) and \(\mathcal {H}^{j}_3\) are distinguishable, we construct an adversary \(\mathcal {A}^\mathcal {F}\) that, on input \(f^\lambda (x)\), distinguishes \(\mathsf {hc}(x)\) from a random string of \(\lambda \) bits. Consider an execution where \(R_\mathcal {OT}^\star \) has non-negligible advantage in distinguishing \(\mathcal {H}^{j-1}_3\) from \(\mathcal {H}^{j}_3\), and consider the randomness \(\rho \) used by \(R_\mathcal {OT}^\star \) and the first round computed by \(R_\mathcal {OT}^\star \) in this execution, say \(\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1}\). \(\mathcal {A}^\mathcal {F}\), on input the randomness \(\rho \), the message \(b^\star \) committed in \(\mathtt {com}\) and the message \(r_{1-b^\star }^1\) committed in \(\mathtt {tcom}_{1-b^\star }\), executes the following steps.

  1. Start \(R_\mathcal {OT}^\star \) with randomness \(\rho \).

  2. Let \((f, H, f^\lambda (x))\) be the challenge. Upon receiving the first round \((\mathtt {com}, \mathtt {tcom}_0, \mathtt {tcom}_{1})\) from \(R_\mathcal {OT}^\star \), compute \(R_{1-b^\star }=r_{1-b^\star }^1 \oplus f^\lambda (x)\), pick a random string \(R_{b^\star }\), compute \((f_{0,1}, f^{-1}_{0,1})\leftarrow \mathsf {Gen}(1^\lambda )\) and \((f_{1,1}, f^{-1}_{1,1})\leftarrow \mathsf {Gen}(1^\lambda )\) and send \(R_0,R_1, f_{0,1}, f_{1,1}\) to \(R_\mathcal {OT}^\star \).

  3. Continue the interaction with \(R_\mathcal {OT}^\star \) according to \(\mathcal {H}^{j-1}_3\) using \(f_{1-b^\star ,j}=f\) instead of the output of the generation function \(\mathsf {Gen}(\cdot )\) when it is required.

  4. Upon receiving \((\mathtt {tdec}_0^j, r_0^j), (\mathtt {tdec}^j_1, r_1^j)\) compute \(W_{1-b^\star }^j=l_{1-b^\star }\oplus H\),Footnote 13 \(W_{b^\star }^j=l_{b^\star }\oplus \mathsf {hc}(f_{b^\star ,j}^{-\lambda }(r_{b^\star }^j\oplus R_{b^\star }))\), \((f_{0,j+1}, f_{0,j+1}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\), \((f_{1,j+1}, f_{1,j+1}^{-1})\leftarrow \mathsf {Gen}(1^\lambda )\) and send \((W_0^{j},W_1^{j}, f_{0,{j+1}}, f_{1,{j+1}})\).

  5. Continue the interaction with \(R_\mathcal {OT}^\star \) according to \(\mathcal {H}^{j-1}_3\) (and \(\mathcal {H}^j_3\)) and output what \(R_\mathcal {OT}^\star \) outputs.

This step of the security proof ends with the observation that if \(H=\mathsf {hc}(x)\) then \(R_\mathcal {OT}^\star \) acts as in \(\mathcal {H}^{j-1}_3\), otherwise he acts as in \(\mathcal {H}^{j}_3\).

The second step of the security proof is almost identical to the one used to prove the indistinguishability between \(\mathcal {H}_0\) and \(\mathcal {H}_2\).

The security proof is almost over: the output of \(\mathcal {H}_3^\gamma \) corresponds to the output of the simulator \(\mathsf {Sim}\), and \(\mathsf {OUT}_{\mathcal {H}_3,R_\mathcal {OT}^\star (z)}(1^\lambda )=\mathsf {OUT}_{\mathcal {H}_3^1,R_\mathcal {OT}^\star (z)}(1^\lambda )\approx \mathsf {OUT}_{\mathcal {H}_3^2,R_\mathcal {OT}^\star (z)}(1^\lambda )\approx \dots \approx \mathsf {OUT}_{\mathcal {H}_3^\gamma ,R_\mathcal {OT}^\star (z)}(1^\lambda )\). Therefore we can claim that the output of \(\mathcal {H}_0\) is indistinguishable from the output of \(\mathsf {Sim}\), which uses at most one between \(l_0\) and \(l_1\).

Theorem 3

Assuming TDPs, for any \(\gamma >0\) \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) securely computes \(F_\mathcal {OT}^m\) with one-sided simulation. Moreover the third round is replayable.

Proof

The third round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) is replayable due to the same arguments used in the security proof of Theorem 2. We now prove that \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) securely computes \(F^m_\mathcal {OT}\) with one-sided simulation according to Definition 4. More precisely, to prove the security against a malicious sender \(S_{\overrightarrow{\mathcal {OT}}}^\star \) we start by considering the execution \(\mathcal {H}_0\) that corresponds to the real execution, where the input \(b_1,\dots ,b_m\) is used by the receiver, and then we consider the experiment \(\mathcal {H}_i\) where the input used by the receiver is \(1-b_1,\dots ,1-b_i,b_{i+1},\dots , b_m\). Suppose now by contradiction that the output distributions of \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\) (for some \(i\in \{0,\dots ,m-1\}\)) are distinguishable; then we can construct a malicious sender \(S_\mathcal {OT}^\star \) that breaks the security of \(\varPi ^\gamma _\mathcal {OT}\) against a malicious sender. This allows us to claim that the output distribution of \(\mathcal {H}_0\) is indistinguishable from the output distribution of \(\mathcal {H}_m\). A similar proof can be given when the malicious party is the receiver, but this time we need to consider how the security proof for \(\varPi ^\gamma _\mathcal {OT}\) works. More precisely, we start by considering the execution \(\mathcal {H}_0\) that corresponds to the real execution, where the input \(((l^1_0,l^1_1)\dots , (l^m_0, l^m_1))\) is used by the sender, and then we consider the experiment \(\mathcal {H}_i\) where the simulator, instead of the honest sender procedure, is used in the first i parallel executions of \(\varPi ^\gamma _\mathcal {OT}\). Supposing by contradiction that the output distributions of \(\mathcal {H}_i\) and \(\mathcal {H}_{i+1}\) (for some \(i\in \{0,\dots ,m-1\}\)) are distinguishable, we can construct a malicious receiver \(R_\mathcal {OT}^\star \) that breaks the security of \(\varPi ^\gamma _\mathcal {OT}\) against a malicious receiver. We observe that in \(\mathcal {H}_i\) the simulator \(\mathsf {Sim}\) is used in the first i parallel executions of \(\varPi ^\gamma _\mathcal {OT}\), and this could disturb the reduction to the security of \(\varPi ^\gamma _\mathcal {OT}\) when proving that the output distribution of \(\mathcal {H}_i\) is indistinguishable from the output distribution of \(\mathcal {H}_{i+1}\). In order to conclude the security proof we just need to show that \(\mathsf {Sim}\)’s behaviour does not disturb the reduction. As described in the security proof of \(\varPi ^\gamma _\mathcal {OT}\), the simulation made by \(\mathsf {Sim}\) roughly works by rewinding from the third to the second round, while from the fourth round onwards \(\mathsf {Sim}\) works in a straight-line manner. An important feature enjoyed by \(\mathsf {Sim}\) is that he maintains the main thread. Let \(\mathcal {C}^\mathcal {OT}\) be the challenger of \(\varPi ^\gamma _\mathcal {OT}\) against a malicious receiver; our adversary \(R_\mathcal {OT}^\star \) works as follows.

  1. Upon receiving the first round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) from \(R_{\overrightarrow{\mathcal {OT}}}^\star \), forward the \((i+1)\)-th component \(\mathsf {ot}_1\) to \(\mathcal {C}^\mathcal {OT}\) Footnote 14.

  2. Upon receiving \(\mathsf {ot}_2\) from \(\mathcal {C}^\mathcal {OT}\), interact with \(R_{\overrightarrow{\mathcal {OT}}}^\star \) by computing the second round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) according to \(\mathcal {H}_i\) (\(\mathcal {H}_{i+1}\)) with the difference that in the \((i+1)\)-th position the value \(\mathsf {ot}_2\) is used.

  3. Upon receiving the third round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) from \(R_{\overrightarrow{\mathcal {OT}}}^\star \), forward the \((i+1)\)-th component \(\mathsf {ot}_3\) to \(\mathcal {C}^\mathcal {OT}\).

  4. Upon receiving \(\mathsf {ot}_4\) from \(\mathcal {C}^\mathcal {OT}\), interact with \(R_{\overrightarrow{\mathcal {OT}}}^\star \) by computing the fourth round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) according to \(\mathcal {H}_i\) (\(\mathcal {H}_{i+1}\)) with the difference that in the \((i+1)\)-th position the value \(\mathsf {ot}_4\) is used.

  5. For \(j=2,\dots , \gamma \) follow the strategy described in steps 3 and 4, and output what \(R_{\overrightarrow{\mathcal {OT}}}^\star \) outputs.

We recall that in \(\mathcal {H}_i\) (as well as in \(\mathcal {H}_{i+1}\)) the simulator is used in the first i executions of \(\varPi ^\gamma _\mathcal {OT}\), therefore a rewind is made from the third to the second round. During the rewinds \(R_\mathcal {OT}^\star \) can forward to \(R_{\overrightarrow{\mathcal {OT}}}^\star \) the same second round \(\mathsf {ot}_2\). Moreover, due to the main thread property enjoyed by \(\mathsf {Sim}\), after the rewind \(R_\mathcal {OT}^\star \) can continue the interaction with \(R_{\overrightarrow{\mathcal {OT}}}^\star \) without rewinding \(\mathcal {C}^\mathcal {OT}\). Indeed, if \(\mathsf {Sim}\) did not maintain the main thread then, even though the same \(\mathsf {ot}_2\) is used during the rewind, \(R_{\overrightarrow{\mathcal {OT}}}^\star \) could send a different \(\mathsf {ot}_3\), making it impossible to efficiently continue the reduction.
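
A compact sketch of this reduction (placeholder interfaces): the challenger’s messages are embedded in the \((i+1)\)-th parallel session, the same \(\mathsf {ot}_2\) is replayed during \(\mathsf {Sim}\)’s rewinds, and the remaining \(\gamma -1\) repetitions of the last two rounds are forwarded as well.

    def reduction_to_single_session(R_vec_star, C_OT, local_sessions, i, gamma):
        # R_vec_star: the distinguisher between H_i and H_{i+1} of Pi^gamma_vec_OT.
        # C_OT: challenger of Pi^gamma_OT (malicious-receiver game), embedded in session i.
        # local_sessions: runs every other session locally (simulated for j < i, honest for j > i),
        #                 replaying the same ot2 in session i whenever Sim rewinds.
        ot1_vec = R_vec_star.first_round()
        ot2 = C_OT.second_round(ot1_vec[i])              # forward the (i+1)-th component
        ot3_vec = local_sessions.rounds_2_and_3(embedded=(i, ot2))
        ot4 = C_OT.fourth_round(ot3_vec[i])
        local_sessions.round_4(embedded=(i, ot4))
        for _ in range(gamma - 1):                       # remaining repetitions of the last two rounds
            ot3 = R_vec_star.next_third_round()[i]
            local_sessions.deliver_fourth(i, C_OT.fourth_round(ot3))
        return R_vec_star.output()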

4 Secure 2PC in the Simultaneous Message Exchange Model

In this section we give a high-level overview of our 4-round 2PC protocol \(\varPi _{2\mathcal {PC}}=(P_1,P_2)\) for every functionality \(F=(F_1,F_2)\) in the simultaneous message exchange model. \(\varPi _{2\mathcal {PC}}\) consists of two simultaneous symmetric executions of the same subprotocol in which only one party learns the output. In the rest of the paper we refer to the execution of the protocol where \(P_1\) learns the output as the left execution, and to the execution where \(P_2\) learns the output as the right execution. In Fig. 6 we provide the high-level description of the left execution of \(\varPi _{2\mathcal {PC}}\). We denote by \((m_1,m_2,m_3,m_4)\) the messages played in the left execution, where \((m_1,m_3)\) are sent by \(P_1\) and \((m_2,m_4)\) are sent by \(P_2\). Likewise, in the right execution of the protocol the messages are denoted by \((\tilde{m}_1,\tilde{m}_2,\tilde{m}_3,\tilde{m}_4)\), where \((\tilde{m}_1, \tilde{m}_3)\) are sent by \(P_2\) and \((\tilde{m}_2,\tilde{m}_4)\) are sent by \(P_1\). Therefore, messages \(( m_j,\tilde{m}_j)\) are exchanged simultaneously in the j-th round, for \(j\in \{1,\dots ,4\}\). Our construction uses the following tools.

  • A non-interactive perfectly binding computationally hiding commitment scheme \(\mathsf {PBCOM}=(\mathsf{{Com}}, \mathsf{{Dec}})\).

  • A Yao’s garbled circuit scheme \((\mathsf {GenGC}, \mathsf {EvalGC})\) with simulator \(\mathsf {SimGC}\).

  • A protocol \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}=(S_{\overrightarrow{\mathcal {OT}}}, R_{\overrightarrow{\mathcal {OT}}})\) that securely computes \(F^m_\mathcal {OT}\) with one-sided simulation.

  • A \(\varSigma \)-protocol \(\mathsf {BL}_{L}=(\mathcal {P}_{L},\mathcal {V}_{L})\) for the \(\mathcal {N}\mathcal {P}\)-language \(L=\{\mathtt {com}: \exists \ (\mathtt {dec}, m)\ \text {s.t.}\ \mathsf{{Dec}}(\mathtt {com}, \mathtt {dec}, m)=1\}\) with Special HVZK simulator \(\mathsf {Sim}_{L}\). We use two instantiations of \(\mathsf {BL}_{L}\) in order to construct the protocol \(\varPi _{\mathsf {OR}}\) for the OR of two statements, as described in Sect. 2.3. \(\varPi _{\mathsf {OR}}\) is a proof system for the \(\mathcal {N}\mathcal {P}\)-language \(L_{\mathsf {OR}}=\{(\mathtt {com}_0, \mathtt {com}_1): \exists \ (\mathtt {dec}, m)\ \text {s.t.}\ \mathsf{{Dec}}(\mathtt {com}_0, \mathtt {dec}, m)=1 \ \text {OR} \ \mathsf{{Dec}}(\mathtt {com}_1, \mathtt {dec}, m)=1 \}\) Footnote 15.

  • A 4-round delayed-input NMZK AoK \(\mathsf {NMZK}=(\mathcal {P}_\mathsf {NMZK},\mathcal {V}_\mathsf {NMZK})\) for the \(\mathcal {N}\mathcal {P}\)-language \(L_{\mathsf {NMZK}}\) that will be specified later (see Sect. 4.1 for the formal definition of \(L_{\mathsf {NMZK}}\)).

In Fig. 6 we give the high-level description of the left execution of \(\varPi _{2\mathcal {PC}}\), where \(P_1\) runs on input \(x\in \{0,1\}^\lambda \) and \(P_2\) on input \(y\in \{0,1\}^\lambda \).

Fig. 6. High-level description of the left execution of \(\varPi _{2\mathcal {PC}}\).

4.1 Formal Description of Our \(\varPi _{2\mathcal {PC}}=(P_1, P_2)\)

We start by defining the following \(\mathcal {N}\mathcal {P}\)-language \(L_\mathsf {NMZK}\).

The NMZK AoK \(\mathsf {NMZK}\) used in our protocol is for the \(\mathcal {N}\mathcal {P}\)-language \(L_\mathsf {NMZK}\) described above. Now we are ready to describe our protocol \(\varPi _{2\mathcal {PC}}=(P_1,P_2)\) in a formal way.

Protocol \(\varPi _{2\mathcal {PC}}=(P_1,P_2)\).

Common input: security parameter \(\lambda \) and instance length \(\ell _\mathsf {NMZK}\) of the statement of the NMZK.

\(P_1\) ’s input: \(x\in \{0,1\}^\lambda \), \(P_2\) ’s input: \(y\in \{0,1\}^\lambda \).

  • Round 1. In this round \(P_1\) sends the message \(m_1\) and \(P_2\) the message \(\tilde{m}_1\). The steps computed by \(P_1\) to construct \(m_1\) are the following.

  1. Run \(\mathcal {V}_\mathsf {NMZK}\) on input the security parameter \(1^\lambda \) and \(\ell _\mathsf {NMZK}\) thus obtaining the first round \(\mathsf {nmzk}^1\) of \(\mathsf {NMZK}\).

  2. Run \(R_{\overrightarrow{\mathcal {OT}}}\) on input \(1^\lambda \), x and the randomness \(\alpha \) thus obtaining the first round \(\mathsf {ot}^1\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\).

  3. Compute \((\mathtt {com}_0, \mathtt {dec}_0) \leftarrow \mathsf{{Com}}(x)\) and \((\mathtt {com}_1, \mathtt {dec}_1) \leftarrow \mathsf{{Com}}(x)\).

  4. Compute \(\mathsf {a}_0\leftarrow \mathcal {P}_{L}(1^\lambda , \mathtt {com}_0, (\mathtt {dec}_0, x))\).

  5. Pick \(\mathsf {c}_{1} \leftarrow \{0,1\}^\lambda \) and compute \((\mathsf {a}_{1}, \mathsf {z}_{1})\leftarrow \mathsf {Sim}_{L}(1^\lambda , \mathtt {com}_1, \mathsf {c}_{1})\).

  6. Set \(m_1=\big (\mathsf {nmzk}^1, \mathsf {ot}^1, \mathtt {com}_0, \mathtt {com}_1, \mathsf {a}_0, \mathsf {a}_1 \big )\) and send \(m_1\) to \(P_2\).

Likewise, \(P_2\) performs the same actions as \(P_1\), constructing message \(\tilde{m}_1\).
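
The construction of \(m_1\) can be summarized by the following Python sketch; every primitive (the NMZK verifier, the OT receiver, \(\mathsf {PBCOM}\), the \(\varSigma \)-protocol prover \(\mathcal {P}_L\) and its Special HVZK simulator \(\mathsf {Sim}_L\)) is a hypothetical placeholder interface.

    import secrets

    def p1_round1(x, lam, nmzk_verifier, ot_receiver, Com, P_L, Sim_L):
        # Sketch of P_1's computation of m_1 (placeholder interfaces).
        nmzk1 = nmzk_verifier.first_round(lam)           # first round of NMZK (P_1 is the verifier)
        alpha = secrets.token_bytes(lam)
        ot1 = ot_receiver.first_round(lam, x, alpha)     # first round of the OT on input x
        com0, dec0 = Com(x)                              # two independent commitments to x
        com1, dec1 = Com(x)
        a0 = P_L.first_message(lam, com0, (dec0, x))     # real first message for instance com_0
        c1 = secrets.token_bytes(lam)
        a1, z1 = Sim_L(lam, com1, c1)                    # simulated transcript for instance com_1
        m1 = (nmzk1, ot1, com0, com1, a0, a1)
        state = (alpha, dec0, dec1, c1, z1)              # kept for rounds 3 and 4
        return m1, state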

  • Round 2. In this round \(P_2\) sends the message \(m_2\) and \(P_1\) the message \(\tilde{m}_2\). The steps computed by \(P_2\) to construct \(m_2\) are the following.

  1. Compute \((Z_{1,0}, Z_{1,1},\dots , Z_{\lambda ,0}, Z_{\lambda ,1}, \mathsf {GC_y}) \leftarrow \mathsf {GenGC}(1^\lambda , F_2,y;\omega )\).

  2. Compute \((\mathtt {com}_\mathsf {GC_y}, \mathtt {dec}_\mathsf {GC_y}) \leftarrow \mathsf{{Com}}(\mathsf {GC_y})\) and \((\mathtt {com}_\mathsf {L}, \mathtt {dec}_\mathsf {L}) \leftarrow \mathsf{{Com}}(Z_{1,0}||Z_{1,1}||\dots ||Z_{\lambda ,0}||Z_{\lambda ,1})\).

  3. Run \(\mathcal {P}_\mathsf {NMZK}\) on input \(1^\lambda \) and \(\mathsf {nmzk}^1\) thus obtaining the second round \(\mathsf {nmzk}^2\) of \(\mathsf {NMZK}\).

  4. Run \(S_{\overrightarrow{\mathcal {OT}}}\) on input \(1^\lambda , Z_{1,0}, Z_{1,1},\) \(\dots , Z_{\lambda ,0}, Z_{\lambda ,1}\), \(\mathsf {ot}^1\) and the randomness \(\beta \) thus obtaining the second round \(\mathsf {ot}^2\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\).

  5. Pick \(\mathsf {c} \leftarrow \{0,1\}^\lambda \).

  6. Set \(m_2=\big (\mathsf {ot}^2, \mathtt {com}_\mathsf {L}, \mathtt {com}_\mathsf {GC_y}, \mathsf {nmzk}^2, \mathsf {c} \big )\) and send \(m_2\) to \(P_1\).

Likewise, \(P_1\) performs the same actions as \(P_2\), constructing message \(\tilde{m}_2\).

  • Round 3. In this round \(P_1\) sends the message \(m_3\) and \(P_2\) the message \(\tilde{m}_3\). The steps computed by \(P_1\) to construct \(m_3\) are the following.

  1. Run \(\mathcal {V}_\mathsf {NMZK}\) on input \(\mathsf {nmzk}^2\) thus obtaining the third round \(\mathsf {nmzk}^3\) of \(\mathsf {NMZK}\).

  2. Run \(R_{\overrightarrow{\mathcal {OT}}}\) on input \(\mathsf {ot}^2\) thus obtaining the third round \(\mathsf {ot}^3\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\).

  3. Compute \(\mathsf {c}_0 = \mathsf {c} \oplus \mathsf {c}_1\) and \(\mathsf {z}_0 \leftarrow \mathcal {P}_{L}(\mathsf {c}_0)\).

  4. Set \(m_3=\big (\mathsf {nmzk}^3,\mathsf {ot}^3, \mathsf {c}_0, \mathsf {c}_1, \mathsf {z}_0, \mathsf {z}_1\big )\) and send \(m_3\) to \(P_2\).

Likewise, \(P_2\) performs the same actions as \(P_1\), constructing message \(\tilde{m}_3\).

  • Round 4. In this round \(P_2\) sends the message \(m_4\) and \(P_1\) the message \(\tilde{m}_4\). The steps computed by \(P_2\) to construct \(m_4\) are the following.

  1. Check that \(\mathsf {c}=\mathsf {c}_0\oplus \mathsf {c}_1\), that the transcript \(\mathsf {a}_0,\mathsf {c}_0, \mathsf {z}_0\) is accepting w.r.t. the instance \(\mathtt {com}_0\) and that the transcript \(\mathsf {a}_1,\mathsf {c}_1, \mathsf {z}_{1}\) is accepting w.r.t. the instance \(\mathtt {com}_1\). If one of the checks fails then output \(\perp \), otherwise continue with the following steps.

  2. Run \(S_{\overrightarrow{\mathcal {OT}}}\) on input \(\mathsf {ot}^3\), thus obtaining the fourth round \(\mathsf {ot}^4\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\).

  3. Set the statement \(\mathtt {stm}\) for \(L_\mathsf {NMZK}\)Footnote 16 and the corresponding witness \(w_{\mathtt {stm}}\).

  4. Run \(\mathcal {P}_\mathsf {NMZK}\) on input \(\mathsf {nmzk}^3,\) \(\mathtt {stm}\) and \(w_{\mathtt {stm}}\) thus obtaining the fourth round \(\mathsf {nmzk}^4\) of \(\mathsf {NMZK}\).

  5. Set \(m_4\) and send \(m_4\) to \(P_1\).

Likewise, \(P_1\) performs the same actions as \(P_2\), constructing message \(\tilde{m}_4\).

  • Output computation. \(P_1\) ’s output: \(P_1\) checks if the transcript \((\mathsf {nmzk}^1,\mathsf {nmzk}^2,\) \(\mathsf {nmzk}^3,\mathsf {nmzk}^4)\) is accepting w.r.t. \(\mathtt {stm}\). In the negative case \(P_1\) outputs \(\perp \), otherwise \(P_1\) runs \(R_{\overrightarrow{\mathcal {OT}}}\) on input \(\mathsf {ot}^4\) thus obtaining \(Z_{1,x_1}, \dots ,Z_{\lambda ,x_\lambda }\) and computes the output \(v_1=\mathsf {EvalGC}(\mathsf {GC_y}, Z_{1,x_1}, \dots ,Z_{\lambda ,x_\lambda })\).

\(P_2\) ’s output: \(P_2\) checks if the transcript \(\tilde{\mathsf {nmzk}}^1, \tilde{\mathsf {nmzk}}^2, \tilde{\mathsf {nmzk}}^3,\tilde{\mathsf {nmzk}}^4\) is accepting w.r.t. \(\mathtt {\tilde{stm}}\). In the negative case \(P_2\) outputs \(\perp \), otherwise \(P_2\) runs \(R_{\overrightarrow{\mathcal {OT}}}\) on input \(\tilde{\mathsf {ot}}^4\) thus obtaining \(\tilde{Z}_{1,y_1}, \dots ,\tilde{Z}_{\lambda ,y_\lambda }\) and computes the output \(v_2=\mathsf {EvalGC}(\tilde{\mathsf {GC}}_{x}, \tilde{Z}_{1,y_1}, \dots ,\tilde{Z}_{\lambda ,y_\lambda })\).
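
The output computation of \(P_1\) amounts to the following few steps (Python sketch with placeholder interfaces for the NMZK verifier, the OT receiver and \(\mathsf {EvalGC}\)); \(P_2\)’s output computation is symmetric.

    def p1_output(nmzk_transcript, stm, ot4, GC_y, nmzk_verifier, ot_receiver, EvalGC):
        # Sketch of P_1's output phase (placeholder interfaces).
        if not nmzk_verifier.accepts(nmzk_transcript, stm):
            return None                                  # output bottom if the NMZK proof does not verify
        labels = ot_receiver.finish(ot4)                 # Z_{1,x_1}, ..., Z_{lambda,x_lambda}
        return EvalGC(GC_y, labels)                      # P_1's output v_1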

High-level overview of the security proof. Due to the symmetrical nature of the protocol, it is sufficient to prove security against one party (let this party be \(P_2\)). We start with the description of the simulator \(\mathsf {Sim}\). \(\mathsf {Sim}\) uses the PoK extractor \(\mathsf {E_{OR}}\) for \(\varPi _\mathsf {OR}\) to extract the input \(y^\star \) from the malicious party. \(\mathsf {Sim}\) sends \(y^\star \) to the ideal functionality F and receives back \(v_2=F_2(x, y^\star )\). Then, \(\mathsf {Sim}\) computes \((\tilde{\mathtt {GC}}_{\star }, (\widetilde{Z}_{1},\dots , \widetilde{Z}_{\lambda }))\leftarrow \mathsf {SimGC}(1^\lambda ,F_2, y^\star , v_2)\) and sends \(\tilde{\mathtt {GC}}_{\star }\) in the last round. Moreover, instead of committing to \(P_1\)’s input in \(\mathtt {com}_0\) and \(\mathtt {com}_1\), and to the garbled circuit and its labels in the corresponding commitments of the right execution, \(\mathsf {Sim}\) commits to 0. \(\mathsf {Sim}\) runs the simulator \(\mathsf {Sim}_\mathsf {NMZK}\) of \(\mathsf {NMZK}\) and the simulator \(\mathsf {Sim}_\mathcal {OT}\) of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) in the execution where \(P_1\) acts as \(S_{\overrightarrow{\mathcal {OT}}}\), using \((\tilde{Z}_{1},\dots , \tilde{Z}_{\lambda })\) as input. For the messages of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) where \(P_1\) acts as the receiver, \(\mathsf {Sim}\) runs \(R_{\overrightarrow{\mathcal {OT}}}\) on input \(0^\lambda \) instead of using x. In our security proof we proceed through a sequence of hybrid experiments, where the first one corresponds to the real-world execution and the final one represents the execution of \(\mathsf {Sim}\) in the ideal world. The core idea of our approach is to run the simulator of \(\mathsf {NMZK}\) while extracting the input from \(P_2^\star \). By running the simulator of \(\mathsf {NMZK}\) we are able to guarantee that the value extracted from \(\varPi _\mathsf {OR}\) is correct, even though \(P_2^\star \) is receiving proofs for a false statement (e.g., the value committed in \(\mathtt {com}_0\) differs from the one committed in \(\mathtt {com}_1\)). Indeed, in each intermediate hybrid experiment that we will consider, the extractor of \(\mathsf {NMZK}\) is also run in order to extract the witness for the theorem proved by \(P_2^\star \). In this way we can prove that the value extracted from \(\varPi _\mathsf {OR}\) is consistent with the input that \(P_2^\star \) is using. As discussed above, the simulator of \(\mathsf {NMZK}\) rewinds first from the third to the second round (to extract the trapdoor), and then from the fourth to the third round (to extract the witness for the statement proved by \(P_2^\star \)). We need to show that these rewinding procedures do not disturb the security proof when we rely on the security of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) and \(\varPi _\mathsf {OR}\). This is roughly the reason why we require the third round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) to be replayable and rely on the Special HVZK property of the underlying \(\mathsf {BL}_L\) instead of relying directly on the WI of \(\varPi _\mathsf {OR}\).
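
The simulator can be summarized by the sketch below (placeholder interfaces; the interleaving of the rounds and the rewinds of \(\mathsf {Sim}_\mathsf {NMZK}\) and \(\mathsf {E_{OR}}\) are abstracted away).

    def sim_2pc(P2_star, F, lam, E_OR, SimGC, Sim_NMZK, Sim_OT, Com, ot_receiver, F2_description):
        # High-level sketch of Sim for Pi_2PC against a corrupted P_2 (placeholder interfaces).
        com0, _ = Com(0)                              # commit to 0 instead of P_1's input x
        com1, _ = Com(0)
        y_star = E_OR.extract(P2_star)                # extract P_2*'s input via the PoK extractor of Pi_OR
        v2 = F.query(y_star)                          # v_2 = F_2(x, y*) from the ideal functionality
        GC_sim, Z_sim = SimGC(lam, F2_description, y_star, v2)
        Sim_NMZK.simulate(P2_star)                    # simulated NMZK proofs towards P_2*
        Sim_OT.run_as_sender(P2_star, labels=Z_sim)   # OT execution where P_1 acts as the sender
        ot_receiver.run(P2_star, choice=bytes(lam))   # OT execution where P_1 acts as the receiver, on input 0^lambda
        P2_star.send_last_round(GC_sim)               # send the simulated garbled circuit in the last round
        return P2_star.output()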

Theorem 4

Assuming TDPs, \(\varPi _{2\mathcal {PC}}\) securely computes every two-party functionality \(F=(F_1,F_2)\) with black-box simulation.

Proof

In order to prove that \(\varPi _{2\mathcal {PC}}\) securely computes \(F=(F_1,F_2)\), we first observe that, due to the symmetrical nature of the protocol, it is sufficient to prove security against one party (let this party be \(P_2\)). We now show that for every adversary \(P_2^\star \), there exists an ideal-world adversary (simulator) \(\mathsf {Sim}\) such that for all inputs x, y of equal length and security parameter \(\lambda \): \(\{\mathsf {REAL}_{\varPi _{2\mathcal {PC}},P_2^\star (z)}(1^\lambda , x, y)\} \approx \{\mathsf {IDEAL}_{F,\mathsf {Sim}(z)}(1^\lambda ,x,y)\}.\) Our simulator \(\mathsf {Sim}\) is the one shown in Sect. 4.1. In our security proof we proceed through a series of hybrid experiments, where the first one corresponds to the execution of \(\varPi _{2\mathcal {PC}}\) between \(P_1\) and \(P_2^\star \) (real-world execution). Then, we gradually modify this hybrid experiment until the input of the honest party is no longer needed, so that the final hybrid represents the simulator (simulated execution). We now give the descriptions of the hybrid experiments and of the corresponding security reductions. We denote the output of \(P_2^\star \) and the output of the procedure that interacts with \(P_2^\star \) on behalf of \(P_1\) in the hybrid experiment \(\mathcal {H}_i\) by \(\{\mathsf {OUT}_{\mathcal {H}_i,P_2^\star (z)}(1^\lambda , x, y)\}_{x\in \{0,1\}^\lambda , y\in \{0,1\}^\lambda }\).

-\(\mathcal {H}_0\) corresponds to the real execution. In more detail, \(\mathcal {H}_0\) runs \(P_2^\star \) with fresh randomness and interacts with it as the honest player \(P_1\) does using x as input. The output of the experiment is \(P_2^\star \)’s view and the output of \(P_1\). Note that the soundness of \(\mathsf {NMZK}\) guarantees that \(\mathsf {stm}\in L_{\mathsf {NMZK}}\), that is: (1) \(P_2^\star \) uses the same input \(y^\star \) in both the OT executions; (2) the garbled circuit committed in \(\mathtt {com}_\mathsf {GC_y}\) and the corresponding labels committed in \(\mathtt {com}_\mathsf {L}\) are computed using the input \(y^\star \); (3) \(y^\star \) is committed in both \(\tilde{\mathtt {com}}_0\) and \(\tilde{\mathtt {com}}_1\), and the garbled circuit sent in the last round is actually the one committed in \({\mathtt {com}}_{\mathsf {GC_y}}\).

-\(\mathcal {H}_1\) proceeds in the same way as \(\mathcal {H}_0\) except that the input \(y^\star \) of the malicious party \(P_2^\star \) is extracted. In order to obtain \(y^\star \), \(\mathcal {H}_1\) runs the extractor \(\mathsf {E_{OR}}\) of \(\varPi _{\mathsf {OR}}\) (which exists by the PoK property of \(\varPi _{\mathsf {OR}}\)). If the extractor fails, then \(\mathcal {H}_1\) aborts. The PoK property of \(\varPi _{\mathsf {OR}}\) ensures that with all but negligible probability the value \(y^\star \) is extracted, therefore \(\{\mathsf {OUT}_{\mathcal {H}_0,P_2^\star (z)}(1^\lambda , x, y)\}\) and \(\{\mathsf {OUT}_{\mathcal {H}_1,P_2^\star (z)}(1^\lambda , x, y)\}\) are statistically closeFootnote 17.

-\(\mathcal {H}_2\) proceeds in the same way as \(\mathcal {H}_1\) except that the simulator \(\mathsf {Sim}_\mathsf {NMZK}\) of \(\mathsf {NMZK}\) is used in order to compute the messages of \(\mathsf {NMZK}\) played by \(P_1\). Note that \(\mathsf {Sim}_\mathsf {NMZK}\) rewinds \(P_2^\star \) from the 3rd to the 2nd round in order to extract the trapdoor. The same is done by \(\mathsf {E_{OR}}\). Following [1, 12], we let \(\mathsf {E_{OR}}\) and the extraction procedure of \(\mathsf {Sim}_\mathsf {NMZK}\) work in parallel. Indeed, they just rewind from the third to the second round by sending a freshly generated second round. The indistinguishability between the output distributions of these two hybrid experiments follows from property 1 of NMZK (see the full version of this paper). In this, and also in the next hybrids, we prove that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\). That is, we prove that \(P_2^\star \) behaves honestly across the hybrid experiments even though he is receiving a simulated proof w.r.t. \(\mathsf {NMZK}\) and \(\tilde{\mathsf {stm}}\) does not belong to \(L_\mathsf {NMZK}\). In this hybrid experiment we can prove that if by contradiction this probability is non-negligible, then we can construct a reduction that breaks property 2 of NMZK (see the full version of this paper for a formal definition). Indeed, in this hybrid experiment the theorem that \(P_2^\star \) receives belongs to \(L_\mathsf {NMZK}\) and the simulator \(\mathsf {Sim}_\mathsf {NMZK}\) is used in order to compute an accepting transcript w.r.t. \(\mathsf {NMZK}\). Therefore, relying on property 2 of the definition of NMZK, we know that there exists a simulator that extracts the witness for the statement \(\mathsf {stm}\) proved by \(P_2^\star \) with all but negligible probability.

-\(\mathcal {H}_3\) proceeds exactly as \(\mathcal {H}_2\) except for the message committed in \(\mathtt {com}_{1}\). More precisely, in this hybrid experiment \(\mathtt {com}_1\) is a commitment to 0 instead of x. The indistinguishability between the outputs of the experiments \(\mathcal {H}_2\) and \(\mathcal {H}_3\) follows from the hiding property of \(\mathsf {PBCOM}\). Indeed, we observe that the rewinds made by \(\mathsf {Sim}_\mathsf {NMZK}\) do not involve \(\mathtt {com}_1\), which is sent in the first round; moreover, the decommitment information of \(\mathtt {com}_1\) is used neither in \(\varPi _{\mathsf {OR}}\) nor in \(\mathsf {NMZK}\). To argue that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\) also in this hybrid experiment, we still use the simulator-extractor \(\mathsf {Sim}_\mathsf {NMZK}\) in order to check whether the theorem proved by \(P_2^\star \) is still true. If this is not the case then we can construct a reduction to the hiding of \(\mathsf {PBCOM}\). Note that \(\mathsf {Sim}_\mathsf {NMZK}\) rewinds from the 4th to the 3rd round in order to extract the witness \(w_{\mathtt {stm}}\) for the statement \( \mathtt {stm}\) proved by \(P_2^\star \), and these rewinds do not affect the reduction.

-\(\mathcal {H}_4\) proceeds exactly as \(\mathcal {H}_3\) except that the honest prover procedure (\(\mathcal {P}_L\)), instead of the special HVZK simulator (\(\mathsf {Sim}_L\)), is used to compute the messages \(\mathsf {a}_1, \mathsf {z}_1\) of the transcript \(\tau _1=(\mathsf {a}_1, \mathsf {c}_1, \mathsf {z}_1)\) w.r.t. the instance \(\mathtt {com}_1\). Suppose now by contradiction that the output distributions of the two hybrid experiments are distinguishable; then we can show a malicious verifier \(\mathcal {V}^\star \) that distinguishes a transcript \(\tau _1=(\mathsf {a}_1, \mathsf {c}_1, \mathsf {z}_1)\) computed using \(\mathsf {Sim}_L\) from a transcript computed using the honest prover procedure. In more detail, let \(\mathcal {C}_{\mathsf {SHVZK}}\) be the challenger of the Special HVZK game. \(\mathcal {V}^\star \) picks \( \mathsf {c}_1 \leftarrow \{0,1\}^\lambda \) and sends \( \mathsf {c}_1\) to \(\mathcal {C}_{\mathsf {SHVZK}}\). Upon receiving \(\mathsf {a}_1, \mathsf {z}_1\) from \(\mathcal {C}_{\mathsf {SHVZK}}\), \(\mathcal {V}^\star \) plays all the messages of \(\varPi _{2\mathcal {PC}}\) as in \(\mathcal {H}_3\) (\(\mathcal {H}_4\)) except for the messages of \(\tau _1\). For these messages \(\mathcal {V}^\star \) acts as a proxy between \(\mathcal {C}_{\mathsf {SHVZK}}\) and \(P_2^\star \). At the end of the execution \(\mathcal {V}^\star \) runs the distinguisher D that distinguishes \(\{\mathsf {OUT}_{\mathcal {H}_3,P_2^\star (z)}(1^\lambda , x, y)\}\) from \(\{\mathsf {OUT}_{\mathcal {H}_4,P_2^\star (z)}(1^\lambda , x, y)\}\) and outputs what D outputs. We observe that if \(\mathcal {C}_{\mathsf {SHVZK}}\) sends a simulated transcript then \(P_2^\star \) acts as in \(\mathcal {H}_3\), otherwise he acts as in \(\mathcal {H}_4\). There is a subtlety in the reduction. \(\mathcal {V}^\star \) runs \(\mathsf {Sim}_\mathsf {NMZK}\), which rewinds from the third to the second round. This means that \(\mathcal {V}^\star \) has to be able to complete the third round every time, even though he is receiving different challenges \(\mathsf {c}^1,\dots ,\mathsf {c}^{\mathsf{poly}(\lambda )}\) w.r.t. \(\varPi _{\mathsf {OR}}\). Since we are splitting the challenge \(\mathsf {c}\), \(\mathcal {V}^\star \) can just keep the value \(\mathsf {c}_1\) fixed, reusing the same \(\mathsf {z}_1\) (sent by \(\mathcal {C}_{\mathsf {SHVZK}}\)), and can compute an answer to a different \(\mathsf {c}_0'=\mathsf {c}^i\oplus \mathsf {c}_1\) using the knowledge of the decommitment information of \(\mathtt {com}_0\). To argue that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\), also in this hybrid experiment we can use the simulator-extractor \(\mathsf {Sim}_\mathsf {NMZK}\) to check whether the theorem proved by \(P_2^\star \) is still true. If this is not the case we can construct a reduction to the special HVZK property of \(\mathsf {BL}_L\). Note that the rewinds of \(\mathsf {Sim}_\mathsf {NMZK}\) from the fourth to the third round do not affect the reduction.

-\(\mathcal {H}_5\) proceeds exactly as \(\mathcal {H}_4\) except that the special HVZK simulator (\(\mathsf {Sim}_L\)), instead of the honest prover procedure, is used to compute the prover’s messages \(\mathsf {a}_0, \mathsf {z}_0\) for the transcript \(\tau _0=(\mathsf {a}_0, \mathsf {c}_0, \mathsf {z}_0)\) w.r.t. the instance \(\mathtt {com}_0\). The indistinguishability between the outputs of \(\mathcal {H}_4\) and \(\mathcal {H}_5\) comes from the same arguments used to prove that \(\{\mathsf {OUT}_{\mathcal {H}_3,P_2^\star (z)}(1^\lambda , x, y)\}\approx \{\mathsf {OUT}_{\mathcal {H}_4,P_2^\star (z)}(1^\lambda , x, y)\}\). Moreover, the same arguments as before can be used to prove that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\).

-\(\mathcal {H}_6\) proceeds exactly as \(\mathcal {H}_5\) except for the message committed in \(\mathtt {com}_{0}\). More precisely in this hybrid experiment \(\mathtt {com}_0\) is a commitment of 0 instead of x. The indistinguishability between the outputs of \(\mathcal {H}_5\) and \(\mathcal {H}_6\) comes from the same arguments used to prove that \(\{\mathsf {OUT}_{\mathcal {H}_2,P_2^\star (z)}(1^\lambda , x, y)\}\approx \{\mathsf {OUT}_{\mathcal {H}_3,P_2^\star (z)}(1^\lambda , x, y)\}\). Moreover the same arguments as before can be used to prove that

\(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\).

-\(\mathcal {H}_7\) proceeds in the same way as \(\mathcal {H}_6\) except that the simulator of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\), \(\mathsf {Sim}_\mathcal {OT}\), is used instead of the sender algorithm \(S_{\overrightarrow{\mathcal {OT}}}\). From the simulation-based security against a malicious receiver of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) for every \(\gamma =\mathsf{poly}(\lambda )\) it follows that the output distributions of \(\mathcal {H}_7\) and \(\mathcal {H}_6\) are indistinguishable. Suppose by contradiction that this claim does not hold; then we can show a malicious receiver \(R_{\overrightarrow{\mathcal {OT}}}^\star \) that breaks the simulation-based security of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) against a malicious receiver. In more detail, let \(\mathcal {C}_\mathcal {OT}\) be the challenger of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\). \(R_{\overrightarrow{\mathcal {OT}}}^\star \) plays all the messages of \(\varPi _{2\mathcal {PC}}\) as in \(\mathcal {H}_6\) (\(\mathcal {H}_7\)) except for the messages of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\). For these messages \(R_{\overrightarrow{\mathcal {OT}}}^\star \) acts as a proxy between \(\mathcal {C}_\mathcal {OT}\) and \(P_2^\star \). At the end of the execution \(R_{\overrightarrow{\mathcal {OT}}}^\star \) runs the distinguisher D that distinguishes \(\{\mathsf {OUT}_{\mathcal {H}_6,P_2^\star (z)}(1^\lambda , x, y)\}\) from \(\{\mathsf {OUT}_{\mathcal {H}_7,P_2^\star (z)}(1^\lambda , x, y)\}\) and outputs what D outputs. We observe that if \(\mathcal {C}_\mathcal {OT}\) acts as the simulator then \(P_2^\star \) acts as in \(\mathcal {H}_7\), otherwise he acts as in \(\mathcal {H}_6\). To prove that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \) is still negligible we use the same arguments as before, with the following additional important observation. The simulator-extractor \(\mathsf {Sim}_\mathsf {NMZK}\) rewinds also from the 4th to the 3rd round. These rewinds could cause \(P_2^\star \) to ask for multiple third rounds of OT \(\tilde{\mathsf {ot}}^3_{i}\) (\(i=1,\dots , \mathsf{poly}(\lambda )\)). In this case \(R_{\overrightarrow{\mathcal {OT}}}^\star \) can simply forward \(\tilde{\mathsf {ot}}^3_{i}\) to \(\mathcal {C}_\mathcal {OT}\) and obtain from \(\mathcal {C}_\mathcal {OT}\) an additional \(\tilde{\mathsf {ot}}^4_{i}\). This behavior of \(R_{\overrightarrow{\mathcal {OT}}}^\star \) is allowed because \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) is simulation-based secure against a malicious receiver even when the last two rounds of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) are executed \(\gamma \) times (as stated in Theorem 2). Therefore the reduction still works if we set \(\gamma \) equal to the expected number of rewinds that \(\mathsf {Sim}_\mathsf {NMZK}\) could make. We observe that, since we have proved that \(\mathsf {stm}\in L_\mathsf {NMZK}\), the extracted value \(y^\star \) is compatible with the query that \(\mathsf {Sim}_\mathcal {OT}\) could make. That is, \(\mathsf {Sim}_\mathcal {OT}\) will ask only for the values \((\tilde{Z}_{1,y^\star _1},\dots , \tilde{Z}_{\lambda ,y^\star _\lambda })\).

-\(\mathcal {H}_8\) differs from \(\mathcal {H}_7\) in the way the rounds of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) in which \(P_2^\star \) acts as the sender are computed. More precisely, instead of using x as input, \(0^\lambda \) is used. Note that from this hybrid onward it is no longer possible to compute the output by running \(\mathsf {EvalGC}\) as in the previous hybrid experiments. This is because we are not able to recover the correct labels to evaluate the garbled circuit. Therefore \(\mathcal {H}_8\) computes the output by directly evaluating \(v_1=F_1(x, y^\star )\), where \(y^\star \) is the input of \(P_2^\star \) obtained by using \(\mathsf {E_{OR}}\). The indistinguishability between the output distributions of \(\mathcal {H}_7\) and \(\mathcal {H}_8\) comes from the security of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) against a malicious sender. Indeed, suppose by contradiction that this is not the case; then we can show a malicious sender \(S_{\overrightarrow{\mathcal {OT}}}^\star \) that breaks the indistinguishability-based security of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) against a malicious sender. In more detail, let \(\mathcal {C}_\mathcal {OT}\) be the challenger. \(S_{\overrightarrow{\mathcal {OT}}}^\star \) plays all the messages of \(\varPi _{2\mathcal {PC}}\) as in \(\mathcal {H}_7\) (\(\mathcal {H}_8\)) except for the messages of the OT where he acts as the receiver. For these messages \(S_{\overrightarrow{\mathcal {OT}}}^\star \) acts as a proxy between \(\mathcal {C}_\mathcal {OT}\) and \(P_2^\star \). At the end of the execution \(S_{\overrightarrow{\mathcal {OT}}}^\star \) runs the distinguisher D that distinguishes the output of \(\mathcal {H}_7\) from \(\mathcal {H}_8\) and outputs what D outputs. We observe that if \(\mathcal {C}_\mathcal {OT}\) computes the messages of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) using the input \(0^{\lambda }\) then \(P_2^\star \) acts as in \(\mathcal {H}_8\), otherwise he acts as in \(\mathcal {H}_7\). In this security proof there is another subtlety. During the reduction \(S_{\overrightarrow{\mathcal {OT}}}^\star \) runs \(\mathsf {Sim}_\mathsf {NMZK}\), which rewinds from the third to the second round. This means that \(P_2^\star \) could send multiple different second rounds \(\mathsf {ot}^2_{i}\) of the OT (with \(i=1,\dots ,\mathsf{poly}(\lambda )\)). \(S_{\overrightarrow{\mathcal {OT}}}^\star \) cannot forward these additional messages to \(\mathcal {C}_\mathcal {OT}\) (he cannot rewind the challenger). This is not a problem because the third round of \(\varPi ^\gamma _{\overrightarrow{\mathcal {OT}}}\) is replayable (as proved in Theorem 2). That is, the round \(\mathsf {ot}^3\) received from the challenger can be used to answer any \(\mathsf {ot}^2_{i}\). To prove that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\) we use the same arguments as before, observing that the rewinds made by the simulator-extractor from the fourth round to the third one do not affect the reduction.

-\(\mathcal {H}_{9}\) proceeds in the same way as \(\mathcal {H}_{8}\) except for the message committed in \(\tilde{\mathtt {com}}_\mathsf {L}\). More precisely, instead of computing a commitment to the labels \((\tilde{Z}_{1,0}, \tilde{Z}_{1,1},\dots , \tilde{Z}_{\lambda ,0}, \tilde{Z}_{\lambda ,1})\), a commitment to \(0^\lambda ||\dots ||0^\lambda \) is computed. The indistinguishability between the output distributions of \(\mathcal {H}_8\) and \(\mathcal {H}_9\) follows from the hiding of \(\mathsf {PBCOM}\). Moreover, \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\) in this hybrid experiment due to the same arguments used previously.

-\(\mathcal {H}_{10}\) proceeds in the same way as \(\mathcal {H}_{9}\) except for the message committed in \(\tilde{\mathtt {com}}_\mathsf {GC_y}\): instead of computing a commitment to the Yao’s garbled circuit \(\tilde{\mathtt {GC}}_{x}\), a commitment to 0 is computed. The indistinguishability between the output distributions of \(\mathcal {H}_9\) and \(\mathcal {H}_{10}\) follows from the hiding of \(\mathsf {PBCOM}\). \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\) in this hybrid experiment due to the same arguments used previously.

-\(\mathcal {H}_{11}\) proceeds in the same way as \(\mathcal {H}_{10}\) except that the simulator \(\mathsf {SimGC}\) is run (instead of \(\mathsf {GenGC}\)) in order to obtain the Yao’s garbled circuit and the corresponding labels. In more detail, once \(y^\star \) is obtained via \(\mathsf {E_{OR}}\) (in the third round), the ideal functionality F is invoked on input \(y^\star \). Upon receiving \(v_2=F_2(x, y^\star )\), the hybrid experiment computes \((\tilde{\mathtt {GC}}_{\star }, \tilde{Z}_{1},\dots ,\tilde{Z}_{\lambda })\leftarrow \mathsf {SimGC}(1^\lambda ,F_2, y^\star , v_2)\) and replies to the query made by \(\mathsf {Sim}_\mathcal {OT}\) with \((\tilde{Z}_{1},\dots ,\tilde{Z}_{\lambda })\). Furthermore, in the 4th round the simulated Yao’s garbled circuit \(\tilde{\mathtt {GC}}_{\star }\) is sent, instead of the one generated using \(\mathsf {GenGC}\). The indistinguishability between the output distributions of \(\mathcal {H}_{10}\) and \(\mathcal {H}_{11}\) follows from the security of Yao’s garbled circuits. To prove that \(\text{ Prob }\left[ \; \mathsf {stm}\notin L_\mathsf {NMZK}\;\right] \le \nu (\lambda )\) we use the same arguments as before, observing that the rewinds made by the simulator-extractor from the fourth round to the third one do not affect the reduction. The proof ends with the observation that \(\mathcal {H}_{11}\) corresponds to the simulated execution with the simulator \(\mathsf {Sim}\).