Public Verifiability in the Covert Model (Almost) for Free
Abstract
The covert security model (Aumann and Lindell, TCC 2007) offers an important security/efficiency tradeoff: a covert player may arbitrarily cheat, but is caught with a certain fixed probability. This permits more efficient protocols than the malicious setting while still giving meaningful security guarantees. However, one drawback is that cheating cannot be proven to a third party, which prevents the use of covert protocols in many practical settings. Recently, Asharov and Orlandi (ASIACRYPT 2012) enhanced the covert model by allowing the honest player to generate a proof of cheating, checkable by any third party. Their model, which we call the PVC (publicly verifiable covert) model, offers a very compelling tradeoff.
Asharov and Orlandi (AO) propose a practical protocol in the PVC model, which, however, relies on a specific expensive oblivious transfer (OT) protocol incompatible with OT extension. In this work, we improve the performance of the PVC model by constructing a PVC-compatible OT extension as well as making several practical improvements to the AO protocol. As compared to the state-of-the-art OT-extension-based two-party covert protocol, our PVC protocol adds relatively little: four signatures and an \(\approx 67\,\%\) wider OT extension matrix. This is a significant improvement over the AO protocol, which requires public-key-based OTs per input bit. We present detailed estimates showing (up to orders of magnitude) concrete performance improvements over the AO protocol and a recent malicious protocol.
Keywords
Secure computation · Publicly verifiable covert security

1 Introduction
Two-party secure computation addresses the problem where two parties need to evaluate a common function f on their inputs while keeping the inputs private. Several security models for secure computation have been proposed. The most basic is the semi-honest model, where the parties are expected to follow the protocol description but must not be able to learn anything about the other party’s input from the protocol transcript. A much stronger guarantee is provided by the malicious model, where parties may deviate arbitrarily from the protocol description. This additional security comes at a cost. Recent garbled circuit-based protocols [3, 17] have an overhead of at least \(40\times \) that of their semi-honest counterparts, and are considerably more complex.
Aumann and Lindell [8] introduced a very practical compromise between these two models, that of covert security. In the covert security model, a party can deviate arbitrarily from the protocol description but is caught with a fixed probability \(\epsilon \), called the deterrence factor. In many practical scenarios, this guaranteed risk of being caught (likely resulting in loss of business and/or embarrassment) is sufficient to deter would-be cheaters. Importantly, covert protocols are much more efficient and simpler than their malicious counterparts.
Motivating the Publicly Verifiable Covert (PVC) Model. At the same time, the cheating deterrent introduced by the covert model is relatively weak. Indeed, a party catching a cheater certainly knows what happened and can respond accordingly, e.g., by taking their business elsewhere. However, the impact is largely limited to this, since the honest player cannot credibly accuse the cheater publicly. If, however, credible public accusation were possible, the deterrent for the cheater would be immeasurably greater: suddenly, all the cheater’s customers would be aware of the cheating and thus any cheating may affect the cheater’s global customer base.
The addition of credible accusation greatly improves the covert model even in scenarios with a small number of players, such as those involving the government. Consider, for example, the setting where two agencies are engaged in secure computation on their respective classified data. The covert model may often be insufficient here. Indeed, consider the case where one of the two players deviates from the protocol, perhaps due to an insider attack. The honest player detects this, but we are now faced with the problem of identifying the culprit across two domains, where the communication is greatly restricted due to trust, policy, data privacy legislation, or all of the above. On the other hand, credible accusation immediately provides the ability to exclude the honest player from the suspect list, and focus on tracking the problem within one organization/trust domain, which is dramatically simpler.
PVC Definition and Protocol. Asharov and Orlandi [7] proposed a security model, covert with public verifiability, and an associated protocol, addressing these concerns. At a high level, they proposed that when cheating is detected, the honest player is able to publish a “certificate of cheating” which can be checked by any third party. In this work, we abbreviate their model as PVC: publicly verifiable covert. Their proposed protocol (which we call the “AO protocol”) has performance similar to the original covert protocol of Aumann and Lindell [8], with the exception of requiring signed-OT, a special form of oblivious transfer (OT). Their signed-OT construction is based on the OT of Peikert et al. [18], and thus requires several expensive public-key operations.
In this work, we propose several critical performance improvements to the AO protocol. Our most technically involved contribution is a novel signed-OT extension protocol which eliminates per-instance public-key operations. Before discussing our contributions and technical approach in Sect. 1.1, we review the AO protocol.
The Asharov-Orlandi (AO) PVC Protocol [7]. The AO protocol is based on the covert construction of Aumann and Lindell [8]. Let \(P_{1}\) be the circuit generator, \(P_{2}\) the evaluator, and \(f(\cdot , \cdot )\) the function to be computed. Recall the standard garbled circuit (GC) construction in the semi-honest model: \(P_{1}\) constructs a garbling of f and sends it to \(P_{2}\) along with the wire labels associated with its input. The parties then run OT, with \(P_{1}\) acting as the sender and inputting the wire labels associated with \(P_{2}\)’s input, and \(P_{2}\) acting as the receiver and inputting as its choice bits the associated bits of its input.
We now adapt this protocol to the PVC setting. Recall the “selective failure” attack on \(P_{2}\)’s input wires, where \(P_{1}\) can send \(P_{2}\) via OT an invalid wire label for one of \(P_{2}\)’s two inputs and learn one of \(P_{2}\)’s input bits based on whether \(P_{2}\) aborts. To protect against this attack, the parties construct \(f'(\mathbf {x} _1, \mathbf {x} _2^1, \dots , \mathbf {x} _2^\nu ) = f(\mathbf {x} _1, \bigoplus _{i\in [\nu ]} \mathbf {x} _2^i)\), where \(\nu \) is the XOR-tree replication factor, and compute \(f'\) instead of f. Party \(P_{1}\) then constructs \(\lambda \) (the GC replication factor) garblings of \(f'\), and \(P_{2}\) checks that \(\lambda - 1\) of the GCs are correctly constructed, evaluating the remaining GC to derive the output. The main difficulty of satisfying the PVC model is ensuring that neither party can improve its odds by aborting (e.g., based on the other party’s challenge). For example, if \(P_{1}\) could abort whenever \(P_{2}\)’s challenge would reveal \(P_{1}\)’s cheating, this would enable \(P_{1}\) to cheat without the risk of generating a proof of cheating. Thus, \(P_{1}\) sends the GCs to \(P_{2}\) through a 1-out-of-\(\lambda \) OT; namely, in the ith input to the OT \(P_{1}\) provides openings for all the GCs but the ith, as well as the input wire labels needed to evaluate \(\text {GC} _i\). Party \(P_{2}\) inputs a random \(\gamma \), checks that all GCs besides \(\text {GC} _\gamma \) are constructed correctly, and if so, evaluates \(\text {GC} _\gamma \).
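The XOR-tree encoding above can be sketched in a few lines. This is a toy illustration of the input-splitting idea only (inputs modeled as integers; the helper names `xor_tree_encode` and `xor_tree_decode` are ours, not part of the protocol specification):

```python
import secrets

def xor_tree_encode(x2: int, nbits: int, nu: int) -> list:
    """Split an input into nu random shares whose XOR equals the input.

    The evaluator feeds x2^1, ..., x2^nu into f' instead of feeding x2
    into f; any nu-1 of the shares are uniformly random, so a single
    selective-failure probe on one share reveals nothing about x2.
    """
    shares = [secrets.randbits(nbits) for _ in range(nu - 1)]
    last = x2
    for s in shares:
        last ^= s
    return shares + [last]

def xor_tree_decode(shares: list) -> int:
    """What f' computes internally: XOR the shares back together."""
    out = 0
    for s in shares:
        out ^= s
    return out
```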
Finally, it is necessary for \(P_{1}\) to operate in a verifiable manner, so that an honest \(P_{2}\) has proof if \(P_{1}\) tries to cheat and gets caught. (Note that GCs guarantee that \(P_{2}\) cannot cheat in the GC evaluation at all, so we only worry about catching \(P_{1}\).) The AO protocol addresses this by having \(P_{1}\) sign all its messages and the parties using signed-OT in place of all standard OTs (including wire label transfers and GC openings). Informally, the signed-OT functionality proceeds as follows: rather than the receiver \(R\) getting message \(\mathbf {m} _b\) from the sender \(S\) for choice bit b, \(R\) receives \(((b, \mathbf {m} _b), \sigma )\), where \(\sigma \) is \(S\) ’s signature of \((b, \mathbf {m} _b)\). This guarantees that if \(R\) detects any cheating by \(S\), it has \(S\) ’s signature on an inconsistent set of messages, which can be used as proof of this cheating. Asharov and Orlandi show that this construction is \(\epsilon \)-PVC-secure for \(\epsilon = (1 - 1/\lambda )(1 - 2^{-\nu + 1})\).
1.1 Our Contribution
Our main contribution is a signed-OT extension protocol built on the recent malicious OT extension of Asharov et al. [6]. Informally, signed-OT extension ensures that (1) a cheating sender \(S\) is held accountable in the form of a “certificate of cheating” that the honest receiver \(R\) can generate, and (2) a malicious \(R\) cannot defame an honest \(S\) by presenting a false “certificate of cheating”. Achieving the first goal is fairly straightforward by having \(S\) simply sign all its messages. The challenge is in simultaneously protecting against a malicious \(R\). In particular, we need to commit \(R\) to its particular choices throughout the OT extension protocol to prevent it from defaming an honest \(S\), while maintaining that those commitments do not leak any information about \(R\) ’s choices.
Recall that in the standard OT extension protocol of Ishai et al. [12] (cf. Fig. 3), \(R\) constructs a random matrix M, and \(S\) obtains a matrix \(M'\) derived from M, \(S\) ’s random string \(\mathbf {s} \) and \(R\) ’s vector of OT inputs \(\mathbf {r} \). The key challenge of adapting this protocol to the signed variant is to efficiently prevent \(R\) from submitting a malleated M as part of the proof without it ever explicitly revealing M to \(S\) (as this would leak \(R\) ’s choice bits). We achieve this by observing that \(S\) does in fact learn some of M, as in the OT extension construction some of the columns of M and \(M'\) are the same (i.e., those corresponding to zero bits of \(S\) ’s string \(\mathbf {s} \)). We prevent \(R\) from cheating by having \(S\) include in its signature carefully selected information from the columns in M which \(S\) sees. Finally, we require that \(R\) generates each row of M from a seed, and that \(R\) ’s proof of cheating includes this seed such that the row rebuilt from the seed is consistent with the columns included in \(S\) ’s signature. We show that this makes it infeasible for \(R\) to successfully present an invalid row in the proof of cheating. We describe this approach in greater detail in Sect. 3 ^{1}.
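The matrix relation underlying this observation can be modeled in a few lines. This is a toy sketch of the IKNP-style correlation only (bits as 0/1 integers; `iknp_matrices` is our hypothetical name), showing that the columns of \(M\) and \(M'\) agree exactly where \(\mathbf{s}\) is zero:

```python
import secrets

def iknp_matrices(m: int, ell: int):
    """Toy model of the OT-extension matrices.

    R holds a random m x ell bit matrix M and choice bits r; via the base
    OTs, S (holding secret string s) obtains M', where column i of M' is
    column i of M if s[i] == 0, and column i of M XOR r if s[i] == 1.
    """
    M = [[secrets.randbits(1) for _ in range(ell)] for _ in range(m)]
    r = [secrets.randbits(1) for _ in range(m)]   # R's OT choice bits
    s = [secrets.randbits(1) for _ in range(ell)] # S's secret string
    Mp = [[M[j][i] ^ (r[j] & s[i]) for i in range(ell)] for j in range(m)]
    return M, Mp, r, s
```

In particular, \(S\) already "sees" every column of \(M\) indexed by a zero bit of \(\mathbf{s}\), which is exactly the information our construction signs.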
As another contribution, we present a new, more communication-efficient PVC protocol, building on the AO protocol; see Sect. 4. Our main (simple) trick there is a careful amendment allowing us to send GC hashes instead of GCs; this is based on an idea from Goyal et al. [11].
We work in the random oracle model, a slight strengthening of the assumptions needed for standard OT extension and free-XOR, two standard secure computation tools.
Comparison with Existing Approaches. The cost of our protocol is almost the same as that of the covert protocol of Goyal et al. [11]; the only extra cost is essentially a \(\approx 67\,\%\) wider OT extension matrix and four signatures. This often-negligible additional overhead (versus covert protocols) provides a dramatically stronger deterrent than the covert model. We believe that our PVC protocol could be used in many applications where covert security is insufficient, at an order-of-magnitude cost advantage over previously needed malicious protocols or the PVC protocol of Asharov and Orlandi [7]. See Sect. 5 for more details.
Related Work. The only directly related work is that of Asharov and Orlandi [7], already discussed at length. We also note a recent line of work on secure computation with cheaters (including fairness violators) punished by an external entity, such as the Bitcoin network [4, 10, 16]. Similarly to the PVC model and our protocols, this line of work relies on generating proofs of misbehavior which could be accepted by a third-party authority. However, these works address a different setting and use different techniques; in particular, they build on maliciously secure computation and require the Bitcoin framework.
2 Preliminaries
We use bold lowercase letters (e.g., \(\mathbf {x} \)) to denote bitstrings and use the notation \(\mathbf {x} [i]\) to denote the ith bit in bitstring \(\mathbf {x} \). Likewise, we use bold uppercase letters (e.g., \(\mathbf {T} \)) to denote matrices over bits. We use [n] to denote \(\{1, \dots , n\}\). Let “\(a \leftarrow f(x_1, x_2, \dots )\)” denote setting a to be the deterministic output of f on inputs \(x_1, x_2, \dots \); the notation “\(a \xleftarrow {\$} f(x_1, x_2, \dots )\)” is the same except that f here is randomized. We abuse notation and let \(a \xleftarrow {\$} S\) denote selecting a uniformly at random from set S.
Our constructions are in the \(\mathcal {F}_\mathbf {\mathsf {PKI}}\) model, where each party \(P_i\) can register a verification key, and other parties can retrieve \(P_i\)’s verification key by querying \(\mathcal {F}_\mathbf {\mathsf {PKI}}\) on \(\mathsf {id}_{i} \). We use the notation \({\mathsf {Sign}} _{P_i}(\cdot )\) to denote a signature signed by \(P_i\)’s secret key, and we assume that this signature can be verified by any third party. We often omit the subscript when the identity of the signing party is clear.
2.1 Publicly Verifiable Covert Security
We assume the reader is familiar with the covert security model; however, we review the less familiar publicly verifiable covert (PVC) security model of Asharov and Orlandi [7] below. When we say a protocol is “secure in the covert model,” we assume it is secure under the strong explicit cheat formulation with \(\epsilon \)deterrent [8, §3.4], for some value of \(\epsilon \).

Protocol \(\pi '\) is the same as \(\pi \) except that if an honest party \(P_{3-i^*} \) outputs \(\mathsf {corrupted}_{{i^*}} \) when executing \(\pi \), it computes \(\mathsf {Cert} \leftarrow \mathsf {Blame} (\mathsf {id}_{{i^*}}, {\mathsf {key}}, \mathsf {View}_{3-i^*})\), where \({\mathsf {key}} \) denotes the type of cheating detected, and sends \(\mathsf {Cert}\) to \(P_{i^*} \).

Algorithm \(\mathsf {Blame}\) is a deterministic algorithm which takes as input a cheating identity \(\mathsf {id}_{}\), a cheating type \(\mathsf {key}\), and a view \(\mathsf {View}_{}\) of a protocol execution, and outputs a certificate \(\mathsf {Cert}\).

Algorithm \(\mathsf {Judgment}\) is a deterministic algorithm which takes as input a certificate \(\mathsf {Cert}\) and outputs either an identity \(\mathsf {id}_{}\) or \(\bot \).
Before proceeding to the definition, we first introduce some notation. Let \(\mathsf {Exec} _{\pi , {{\mathcal {A}_{}}} (z)}(x_1, x_2; 1^{\kappa } )\) denote the transcript (i.e., messages and output) produced by \(P_{1}\) with input \(x_1\) and \(P_{2}\) with input \(x_2\) running protocol \(\pi \), where adversary \(\mathcal {A}_{}\) with auxiliary input z can corrupt parties before execution begins. Let \(\mathsf {Output} _{P_i}(\mathsf {Exec} _{\pi , {{\mathcal {A}_{}}} (z)}(x_1, x_2; 1^{\kappa } ))\) denote the output of \(P_i\) on the input transcript.
Definition 1
 1.
The protocol \(\pi '\) is a non-halting and secure realization of f in the covert model with \(\epsilon \)-deterrent.
 2.
(Accountability) For every ppt adversary \(\mathcal {A}_{}\) corrupting party \(P_{i^*} \), there exists a negligible function \(\mathsf {negl} (\cdot )\) such that if \(\mathsf {Output} _{P_{3-i^*}}(\mathsf {Exec} _{\pi , {{\mathcal {A}_{}}} (z)}(x_1, x_2; 1^{\kappa } )) = \mathsf {corrupted}_{i^*} \) then \({\Pr \left[ {\mathsf {Judgment} (\mathsf {Cert}) = \mathsf {id}_{{i^*}}}\right] } > 1 - \mathsf {negl} ({\kappa })\), where \(\mathsf {Cert} \leftarrow \mathsf {Blame} (\mathsf {id}_{{i^*}}, {\mathsf {key}}, \mathsf {View}_{3-i^*})\) and the probability is over the randomness used in the protocol execution.
 3.
(Defamation-free) For every ppt adversary \(\mathcal {A}_{}\) corrupting party \(P_{i^*}\) and outputting a certificate \(\mathsf {Cert}\), there exists a negligible function \(\mathsf {negl} (\cdot )\) such that \({\Pr \left[ {\mathsf {Judgment} (\mathsf {Cert}) = \mathsf {id}_{3-i^*}}\right] } < \mathsf {negl} ({\kappa })\), where the probability is over the randomness used by \(\mathcal {A}_{}\).
Note that, in particular, the PVC definition implicitly disallows \(\mathsf {Blame}\) from revealing \(P_{3-i^*} \)’s input. This is because \(\pi '\) specifies that \(\mathsf {Cert}\) is sent to \(P_{i^*} \).
2.2 Signed Oblivious Transfer
However, as in prior work [7], this definition is too strong for our signed-OT extension construction to satisfy. We introduce a relaxed signed-OT variant (slightly different from Asharov and Orlandi’s variant [7]) which is tailored for OT extension and is sufficient for obtaining PVC security. Essentially, we need a signature scheme that satisfies a weaker notion than EU-CMA in which the signing algorithm takes randomness, a portion of which can be controlled by the adversary^{3}. This is because in our signed-OT extension construction, a malicious party can influence the randomness used in the signing algorithm. In addition, we introduce an associated data parameter to the signing algorithm which allows the signer to specify some additional information unrelated to the message being signed but used in the signature. In our construction, we use the associated data to tie the signature to a specific counter (such as a session ID or message ID), preventing a malicious receiver from “mixing” properly signed values to defame an honest sender.
 1.
\({\mathsf {Gen}} (1^{\kappa } )\): On input security parameter \(1^{\kappa } \), output key pair \(({\mathsf {vk}}, {\mathsf {sk}})\).
 2.
\({\mathsf {Sign}} _{\mathsf {sk}} (m, a; (r_1, r_2))\): On input secret key \(\mathsf {sk}\), message \(m \in \mathcal M \), associated data \(a\in \mathcal D \), and randomness \(r_1 \in \mathcal R _1\) and \(r_2 \in \mathcal R _2\), output signature \(\sigma = (a, \sigma ')\).
 3.
\({\mathsf {Verify}} _{\mathsf {vk}} (m, \sigma )\): On input verification key \(\mathsf {vk}\), message \(m\in \mathcal M \), and signature \(\sigma \), output 1 if \(\sigma \) is a valid signature for m and 0 otherwise.
For security, we need the condition that unforgeability remains even if the adversary inputs some arbitrary \(r_1\) or \(r_2\). However, the adversary is prevented from inputting values for both \(r_1\) and \(r_2\). This reflects the fact that in our signedOT extension construction, a malicious sender can control only \(r_1\) and a malicious receiver can control only \(r_2\). We place a further restriction that the choice of \(r_1\) must be consistent; namely, all queries to \(\mathsf {Sign}\) must use the same value for \(r_1\). Looking ahead, this property exactly captures the condition we need (\(r_1\) corresponds to the zero bits in the sender’s column selection string in the OT extension), where the choice of \(r_1\) is made once and then fixed throughout the protocol execution.
Towards our definition, we define an oracle \({\mathcal {O}} _{\mathsf {sk}} (\cdot , \cdot , \cdot , \cdot )\) as follows. Let \(\bot \) be a special symbol. On input \((m, a, r_1, r_2)\), proceed as follows. If both \(r_1 \ne \bot \) and \(r_2 \ne \bot \), output \(\bot \) (the adversary may control at most one of the two values). Otherwise: if \(r_1 = \bot \) and \(r_1'\) has not been set, set \(r_1'\) uniformly at random; if \(r_1 \ne \bot \) and \(r_1'\) has not been set, set \(r_1' = r_1\); if \(r_2 = \bot \), set \(r_2'\) uniformly at random; otherwise, set \(r_2' = r_2\). Finally, output \({\mathsf {Sign}} _{\mathsf {sk}} (m, a; (r_1', r_2'))\).
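The oracle's bookkeeping can be sketched as follows. This is a toy model only: `PartialRandomnessOracle` is our hypothetical name, and the `sign` callable stands in for \({\mathsf{Sign}}_{\mathsf{sk}}(m, a; (r_1, r_2))\); the point is the fix-\(r_1\)-once, sample-\(r_2\)-per-query behavior:

```python
import secrets

class PartialRandomnessOracle:
    """Toy model of the oracle O_sk from the EU-CMPRA game.

    The oracle fixes r1 on first (successful) use, whether adversary-chosen
    or sampled, and samples or accepts r2 per query. A query supplying
    both r1 and r2 is rejected, since the adversary may control only one.
    """
    def __init__(self, sign):
        self.sign = sign
        self.r1 = None  # fixed across all queries once set

    def query(self, m, a, r1=None, r2=None):
        if r1 is not None and r2 is not None:
            return None  # corresponds to outputting the special symbol
        if self.r1 is None:
            self.r1 = r1 if r1 is not None else secrets.randbits(128)
        if r2 is None:
            r2 = secrets.randbits(128)
        return self.sign(m, a, self.r1, r2)
```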
Now, consider the following game \(\mathsf {Sig\text {-}forge}_{{{\mathcal {A}_{}}},\varPi }^{\mathsf {\scriptstyle {CMPRA}}} ({\kappa })\) for signature scheme \(\varPi \) between ppt adversary \(\mathcal {A}_{}\) and ppt challenger \(\mathcal C\).
 1.
\(\mathcal C\) runs \(({\mathsf {vk}}, {\mathsf {sk}}) \xleftarrow {\$} {\mathsf {Gen}} (1^{\kappa } )\) and sends \(\mathsf {vk}\) to \(\mathcal {A}_{}\).
 2.
\(\mathcal {A}_{}\), who has oracle access to \({\mathcal {O}} _{\mathsf {sk}} (\cdot , \cdot , \cdot , \cdot )\), outputs a tuple \((m, (a, \sigma '))\). Let \(\mathcal Q\) be the set of message/associated-data pairs input to \({\mathcal {O}} _{\mathsf {sk}} (\cdot , \cdot , \cdot , \cdot )\).
 3.
\(\mathcal {A}_{}\) succeeds if and only if (1) \({\mathsf {Verify}} _{\mathsf {vk}} (m, (a, \sigma ')) = 1\) and (2) \((m, a) \not \in {\mathcal Q} \).
Definition 2
Signature scheme \(\varPi = ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Verify}})\) is existentially unforgeable under adaptive chosen message and partial randomness attack (EU-CMPRA) if for all ppt adversaries \(\mathcal {A}_{}\) there exists a negligible function \(\mathsf {negl} (\cdot )\) such that \(\Pr [\mathsf {Sig\text {-}forge}_{{{\mathcal {A}_{}}},\varPi }^{\mathsf {\scriptstyle {CMPRA}}} ({\kappa })] < \mathsf {negl} ({\kappa })\).
Signed-OT Functionality. We are now ready to introduce our relaxed signed-OT functionality. Like our EU-CMPRA signature definition, it is tailored for OT extension, and is sufficient for building PVC protocols. This functionality, denoted by \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\), is parameterized by an EU-CMPRA signature scheme \(\varPi \) and is defined in Fig. 2. As in standard OT, the sender inputs two messages (of equal length) and the receiver inputs a choice bit. However, in this formulation we allow a malicious sender to specify some random value \(r_1^*\) as well as signatures \(\sigma _0^*\) and \(\sigma _1^*\). Likewise, a malicious receiver can specify some random value \(r_2^*\). (Honest players input \(\bot \) for these values.) If both players are honest, the functionality computes \(\sigma \leftarrow {\mathsf {Sign}} ((b, m_b); (r_1, r_2))\) with uniformly random values \(r_1\) and \(r_2\) and outputs \(((b, m_b), \sigma )\) to the receiver. However, if either party is malicious and specifies some random value, this value is fed into the \(\mathsf {Sign}\) algorithm. Likewise, if the sender is malicious and specifies some signature \(\sigma _b^* \ne \bot \), this value is used as the signature sent to the receiver.
Note that \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) is nearly identical to the signed-OT functionality presented by Asharov and Orlandi [7, Functionality 2]; it differs in the use of EU-CMPRA signature schemes instead of \(\rho \text {-}\mathsf {EU\text {-}CMRA}\) schemes. We also note that it is straightforward to adapt \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) to realize OTs with more than two inputs from the sender. We let \(\left( {\begin{array}{c}\lambda \\ 1\end{array}}\right) \text {-}\mathcal {F}^{\varPi }_\mathbf {signedOT} \) denote a 1-out-of-\(\lambda \) variant of \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\).
A Compatible Commitment Scheme. Our construction of an EU-CMPRA signature scheme (cf. Sect. 3.3) uses a non-interactive commitment scheme, which we define here. Our definition follows the standard commitment definition, except we tweak the \(\mathsf {Com}\) algorithm to take an additional associated data value.
 1.
\(\mathsf {ComGen} (1^{\kappa } )\): On input security parameter \(1^{\kappa } \), compute parameters \(\mathsf {params} \).
 2.
\({\mathsf {Com}} (m, a; r)\): On input message \(m\in \mathcal M \), associated data \(a\in \mathcal D \), and randomness r, output commitment \(\mathsf {com}\).
A commitment can be opened by revealing the randomness r used to construct that commitment.
We now define security for our commitment scheme. We consider only the binding property; namely, the inability of a ppt adversary to open a commitment to a value other than the one committed to. Security is the same as for standard commitment schemes, except that we allow the adversary to control the randomness used in \(\mathsf {ComGen}\).
Consider the game \(\mathsf {Com\text {-}bind}_{{{\mathcal {A}_{}}},\varPi _{\mathsf {Com}}}^{\mathsf {\scriptstyle {}}} ({\kappa })\) for commitment scheme \(\varPi _{\mathsf {Com}} \) between a ppt adversary \(\mathcal {A}_{}\) and a ppt challenger \(\mathcal C\), defined as follows.
 1.
\(\mathcal {A}_{}\) sends randomness r to \(\mathcal C\) .
 2.
\(\mathcal C\) computes \(\mathsf {params} \leftarrow \mathsf {ComGen} (1^{\kappa } ; r)\) and sends \(\mathsf {params} \) to \(\mathcal {A}_{}\).
 3.
\(\mathcal {A}_{}\) outputs \(({\mathsf {com}}, m_1, a_1, r_1, m_2, a_2, r_2)\) and wins if and only if (1) \(m_1 \ne m_2\), and (2) \({\mathsf {com}} = {\mathsf {Com}} (\mathsf {params}, m_1, a_1; r_1) = {\mathsf {Com}} (\mathsf {params}, m_2, a_2; r_2)\).
Definition 3
A commitment scheme \(\varPi _{\mathsf {Com}} = (\mathsf {ComGen}, {\mathsf {Com}})\) is (computationally) binding if for all ppt adversaries \(\mathcal {A}_{}\) there exists a negligible function \(\mathsf {negl} (\cdot )\) such that \(\Pr [\mathsf {Com\text {-}bind}_{{{\mathcal {A}_{}}},\varPi _{\mathsf {Com}}}^{\mathsf {\scriptstyle {}}} ({\kappa })] < \mathsf {negl} ({\kappa })\).
3 Signed Oblivious Transfer Extension
We now present our main contribution: an efficient instantiation of signed oblivious transfer (signed-OT) extension. We begin in Sect. 3.1 by describing in detail the logic of the construction, iteratively building it up from the passively secure protocol of Ishai et al. [12]. We motivate the need for EU-CMPRA signature schemes in Sect. 3.2 and present a compatible such scheme in Sect. 3.3. In Sect. 3.4 we present the proof of security.
3.1 Intuition for the Construction
As a first attempt, suppose \(S\) simply signs all its messages in Step 3. Recall that we will use this construction to have \(P_{1}\) send the appropriate input wire labels to \(P_{2}\); namely, \(P_{1}\) acts as \(S\) in the OT extension and inputs the wire labels for \(P_{2}\)’s input wires, whereas \(P_{2}\) acts as \(R\) and inputs its input bits. Thus, our first step is to enhance the protocol in Fig. 3 to have \(S\) send \((\mathbf {y} _j^0, \sigma ')\) and \((\mathbf {y} _j^1, \sigma '')\) in Step 3, where \(\sigma '\) and \(\sigma ''\) are \(S\)’s signatures on \(\mathbf {y} _j^0\) and \(\mathbf {y} _j^1\), respectively.
Now, if \(P_{2}\) gets an invalid (with respect to a signed GC sent in the PVC protocol of Sect. 4) wire label \(\mathbf {x} _j\), it can easily construct a certificate \(\mathsf {Cert}\) which demonstrates \(P_{1}\)’s cheating. Namely, it outputs as its certificate the tuple \((b, j, \mathbf {y} _j^0, \mathbf {y} _j^1, \sigma ', \sigma '', \mathbf {t} _j)\) along with the (signed by \(P_{1}\) and opened) GC containing the invalid wire label. A third party can (1) check that \(\sigma '\) and \(\sigma ''\) are valid signatures and (2) compute \(\mathbf {x} _j^b \leftarrow H(j, \mathbf {t} _j) \oplus \mathbf {y} _j^b\) and check that \(\mathbf {x} _j^b\) is indeed an invalid wire label for the given garbled circuit.
This works for protecting against a malicious \(P_{1}\) ; however, note that \(P_{2}\) can easily defame an honest \(P_{1}\) by outputting \(\mathbf {t} ^*_j \ne \mathbf {t} _j\) as part of its certificate (in which case \(\mathbf {x} _j^b \leftarrow H(j, \mathbf {t} _j^*) \oplus \mathbf {y} _j^b\) will very likely be an invalid wire label). Thus, the main difficulty in constructing signedOT extension is tying \(P_{2}\) to its choice of the matrix \(\mathbf {T} \) generated in Step 1 of the protocol so it cannot blame an honest \(P_{1}\) by using invalid rows \(\mathbf {t} _j^*\) in its certificate.
Towards this end, consider the following modification. In Step 1, \(R\) now additionally sends commitments to each \(\mathbf {t} _j\) to \(S\), and \(S\) signs these and sends them as part of its messages in Step 3. This prevents \(R\) from later changing \(\mathbf {t} _j\) to blame \(S\). This does not quite work, however, as \(R\) could simply commit to an incorrect \(\mathbf {t} _j^*\) in the first place! Clearly, \(R\) cannot send \(\mathbf {T} \) to \(S\), as this would leak \(R\) ’s selection bits, yet we still need \(R\) to somehow be committed to its choice of the matrix \(\mathbf {T} \). The key insight is noting that \(S\) does in fact know some of the bits of \(\mathbf {T} \); namely, it knows those columns at which \(s_i = 0\) (as it learns \(\mathbf {t} ^i\) in the base OT). We can use this information to tie \(R\) to its choice of \(\mathbf {T} \) such that it cannot later construct some matrix \(\mathbf {T} ^* \ne \mathbf {T} \) to defame \(S\).
 1.
Sending \(\{(i, t_{j,i}) \}_{i\in I}\) to \(R\) leaks \(\mathbf {s} \), which allows \(R\) to learn both of \(S\) ’s inputs. We address this by increasing the number of base OTs in Step 1 and having \(S\) only send some subset \(I \subseteq I^0\) such that \(|I| = {\kappa } \). Thus, while \(R\) learns that \(s_i = 0\) for \(i\in I\), by increasing the number of base OTs enough, \(R\) does not have enough information to recover \(\mathbf {s} \).
 2.
\(R\) can still flip one bit in \(\mathbf {t} _j\) and pass the check with high probability. We fix this by having each \(\mathbf {t} _j\) be generated from a seed \(\mathbf {k} _j\). Namely, \(R\) computes \(\mathbf {t} _j \leftarrow G(\mathbf {k} _j)\) in Step 1, where G is a random oracle^{4}. Then, when blaming \(S\), \(R\) must reveal \(\mathbf {k} _j\) instead of \(\mathbf {t} _j\). Thus, with high probability a malicious poly-time \(R\) cannot find some \(\mathbf {k} _j^* \ne \mathbf {k} _j\) such that the Hamming distance between \(G(\mathbf {k} _j^*)\) and \(G(\mathbf {k} _j)\) is small enough that the above check succeeds.
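The judge's consistency check implied by these two fixes can be sketched as follows. This is a toy illustration under our own assumptions: SHA-256 in counter mode stands in for the random oracle G, and `row_consistent` is our hypothetical name for the check that the row rebuilt from the revealed seed agrees with the column bits \(\{t_{j,i}\}_{i\in I}\) that \(S\) signed:

```python
import hashlib

def G(seed: bytes, ell: int) -> list:
    """Toy random oracle expanding a seed k_j into an ell-bit row t_j."""
    out = []
    ctr = 0
    while len(out) < ell:
        h = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        out.extend((byte >> k) & 1 for byte in h for k in range(8))
        ctr += 1
    return out[:ell]

def row_consistent(seed: bytes, signed_bits: dict, ell: int) -> bool:
    """Judge's check: the row rebuilt from the revealed seed must agree
    with every column bit {t_{j,i}}_{i in I} contained in S's signature."""
    t = G(seed, ell)
    return all(t[i] == b for i, b in signed_bits.items())
```

Flipping even one of the signed positions (or substituting a different seed) makes the check fail, which is what prevents \(R\) from presenting a malleated row.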
Finally, note that we have thus far considered the passively secure OT extension protocol, which is insecure against a malicious \(R\). We thus utilize the maliciously secure OT extension protocol of Asharov et al. [6]. The only way \(R\) can cheat in passively secure OT extension is by using different \(\mathbf {r} \) values in Step 2. Asharov et al. add a “consistency check” phase between Steps 1 and 2 to enforce that \(\mathbf {r} \) is consistent. This does not affect our construction, and thus we can include this step to complete the protocol^{5}. We refer the reader to Asharov et al. [6] for the justification and intuition of this step; as far as this work is concerned we can treat this consistency check as a “black box”.
Observation 1
(OT Extension Matrix Size). We set \(\ell \), the number of base OTs, so that leaking \(\kappa \) bits of \(\mathbf {s} \) to \(R\) does not allow it to recover \(\mathbf {s} \) and thus both messages. We do this as follows. Let \(\ell '\) be the number of base OTs required in malicious OT extension [6]. We set \(\ell = \ell ' + {\kappa } \) and require that when \(S\) chooses \(\mathbf {s} \), it first fixes \(\kappa \) randomly selected bits to zero before randomly setting the rest of the bits. Now, when \(S\) reveals I to \(R\), the number of unknown bits in \(\mathbf {s} \) equals \(\ell '\), and thus the security of the Asharov et al. scheme carries over to our setting. Asharov et al. set \(\ell ' \approx 1.6{\kappa } \), and thus our use of \(\kappa \) extra columns results in an \(\approx 67\,\%\) matrix size increase.
Observation 2
(Batching Signatures). The main computational cost of our protocol is the signatures sent by \(S\) in Step 4. This cost can easily be made negligible, as follows. Recall that when using our protocol for transferring the input wire labels of a GC using free-XOR, we can optimize the communication slightly by setting \(\mathbf {x} _j^0 \leftarrow H(j, \mathbf {q} _j)\) and \(\mathbf {y} _j^1 \leftarrow \mathbf {x} _j^0 \oplus \varDelta \oplus H(j, \mathbf {q} _j \oplus \mathbf {s} )\), where \(\varDelta \) is the free-XOR global offset. Thus, \(S\) only needs to send (and sign) \(\mathbf {y} _j^1\).
The most important idea, however, is to batch messages across OT executions and have \(S\) sign (and send) only one signature covering all the necessary information across many OTs. Namely, using the free-XOR optimization above, \(S\) signs and sends the tuple \((I, \{\mathbf {y} _j^1, \{t_{j,i}\}_{i\in I}\}_{j\in [m]})\) to \(R\). We note that the j values need not be sent as they are implied by the protocol execution.
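A minimal sketch of this batching, with HMAC-SHA256 standing in for the EU-CMA signature \({\mathsf {Sign}} '\) and a hypothetical JSON serialization (any canonical encoding works):

```python
import hashlib
import hmac
import json

def batched_signature(signing_key: bytes, I, y1_list, t_rows) -> bytes:
    """Produce one signature covering (I, {y_j^1, {t_{j,i}}_{i in I}}_{j in [m]}).

    HMAC-SHA256 is a stand-in for the EU-CMA Sign'; the j indices are
    implied by list order and need not be included explicitly.
    """
    payload = json.dumps({
        "I": sorted(I),
        "batch": [
            {"y1": y1.hex(), "t": [t.hex() for t in ts]}
            for y1, ts in zip(y1_list, t_rows)
        ],
    }, sort_keys=True).encode()
    return hmac.new(signing_key, payload, hashlib.sha256).digest()
```

One signature thus replaces m per-OT signatures, which is why the signing cost becomes negligible as m grows.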
Figure 4 gives the full protocol for signed-OT extension. For clarity of presentation, this description, and the following proof of security, does not take into account the optimizations described in Observation 2.
3.2 Towards a Proof of Security
Before presenting the security proof, we first motivate the need for EU-CMPRA signature schemes. As mentioned in Sect. 3.1, ideally we could just have \(S\) sign everything using an EU-CMA signature scheme; however, this presents opportunities for \(R\) to defame \(S\). Thus, we need to enforce that \(R\) cannot output an \(\mathbf {x} _j^b\) value different from the one sent by \(S\). We do so by using a binding commitment scheme \(\varPi _{\mathsf {Com}} = (\mathsf {ComGen}, {\mathsf {Com}})\), and show that the messages sent by \(S\) in Step 4 are essentially binding commitments to the underlying \(\mathbf {x} _j^b\) values.
We define \(\varPi _{\mathsf {Com}} \) as follows, where \(G : {\{0,1\}}^{\kappa } \rightarrow {\{0,1\}}^{\ell } \) and \(H : \mathbb N \times {\{0,1\}}^{\ell } \rightarrow {\{0,1\}}^{\kappa } \) are random oracles, and \(\ell \ge {\kappa } \).
 1.
\(\mathsf {ComGen} (1^{\kappa } )\): choose a set \(I \subseteq [\ell ]\) uniformly at random subject to \(|I| = {\kappa } \); output \(\mathsf {params} \leftarrow I\).
 2.
\({\mathsf {Com}} (\mathsf {params}, \mathbf {m}, j; \mathbf {r} )\): On input parameters \(I \leftarrow \mathsf {params} \), message \(\mathbf {m} \), counter j, and randomness \(\mathbf {r} \in {\{0,1\}}^{\kappa } \), proceed as follows. Compute \(\mathbf {t} \leftarrow G(\mathbf {r} )\), set \({\mathsf {com}} \leftarrow (j, \mathbf {m} \oplus H(j, \mathbf {t} ), I, {\{t_i\}}_{i\in I})\), and output \(\mathsf {com}\).
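A minimal Python sketch of \(\varPi _{\mathsf {Com}} \), instantiating the random oracles G and H with domain-separated SHA-256 in counter mode; the parameters (\(\kappa = 128\), \(\ell = 333\)) are illustrative assumptions:

```python
import hashlib
import secrets

KAPPA = 128   # security parameter in bits
ELL = 333     # ell >= kappa, e.g. the widened OT-extension matrix width

def G(r: bytes) -> str:
    """Random oracle G: {0,1}^kappa -> {0,1}^ell (SHA-256, counter mode)."""
    out = b""
    ctr = 0
    while len(out) * 8 < ELL:
        out += hashlib.sha256(b"G" + ctr.to_bytes(4, "big") + r).digest()
        ctr += 1
    return "".join(f"{b:08b}" for b in out)[:ELL]

def H(j: int, t: str) -> int:
    """Random oracle H: N x {0,1}^ell -> {0,1}^kappa."""
    d = hashlib.sha256(b"H" + j.to_bytes(8, "big") + t.encode()).digest()
    return int.from_bytes(d[:KAPPA // 8], "big")

def com_gen():
    """ComGen(1^kappa): output params = a uniform kappa-subset I of [ell]."""
    I = set()
    while len(I) < KAPPA:
        I.add(secrets.randbelow(ELL))
    return frozenset(I)

def com(I, m: int, j: int, r: bytes):
    """Com(params, m, j; r) = (j, m XOR H(j, t), I, {t_i}_{i in I}), t = G(r)."""
    t = G(r)
    idx = sorted(I)
    return (j, m ^ H(j, t), tuple(idx), tuple(t[i] for i in idx))
```

Opening the commitment is simply revealing \((\mathbf {m}, j, \mathbf {r} )\): the verifier recomputes \(\mathbf {t} = G(\mathbf {r} )\) and checks the tuple.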
We assume that, given I, one can derive the randomness input to \(\mathsf {ComGen}\). (We use this when defining our EU-CMPRA signature scheme below, which uses a generic binding commitment scheme.) We can satisfy this assumption by simply letting the randomness input to \(\mathsf {ComGen}\) be the set I itself.
In our signed-OT extension protocol, the set I chosen by \(S\) is used as \(\mathsf {params}\) and the \(\mathbf {k} _j\) values chosen by \(R\) are used as the randomness to \(\mathsf {Com}\). The commitment value \(\mathsf {com}\) is exactly the message signed and sent by \(S\) in Step 4. Thus, ignoring the signatures for now, we have an OT extension protocol that binds \(S\) to its \(\mathbf {x} _j^b\) values, and thus prevents a malicious \(R\) from defaming an honest \(S\). Adding in the signatures (cf. Sect. 3.3) gives us an EU-CMPRA signature scheme. Namely, \(S\) is tied to its messages by the signatures, and \(R\) is prevented from “changing” the messages to defame \(S\) by the binding property of the commitment scheme.
We now prove that the commitment scheme described above is binding. We actually prove something stronger than what our protocol requires: an adversary who controls both randomness inputs still cannot win, whereas in our signed-OT extension protocol only one of the two randomness inputs can be controlled by any one party.
Theorem 1
Protocol \(\varPi _{\mathsf {Com}} \) is binding according to Definition 3.
Proof
Adversary \(\mathcal {A}_{}\) needs to come up with choices of I, \(\mathbf {m} \), \(\mathbf {m} '\), j, \(j'\), \(\mathbf {r} \), and \(\mathbf {r} '\) such that \((j, \mathbf {m} \oplus H(j, \mathbf {t} ), I, {\{t_i\}}_{i\in I}) = (j', \mathbf {m} ' \oplus H(j', \mathbf {t} '), I, {\{t_i'\}}_{i\in I})\), where \(\mathbf {t} \leftarrow G(\mathbf {r} )\) and \(\mathbf {t} ' \leftarrow G(\mathbf {r} ')\). Clearly, \(j = j'\). Thus, \(\mathcal {A}_{}\) must find \(\mathbf {t} \) and \(\mathbf {t} '\) such that \(t_i = t_i'\) for all \(i\in I\). However, because G is a random oracle, the values \(\mathbf {t} \) and \(\mathbf {t} '\) are distributed uniformly at random in \({{\{0,1\}}^{\ell }} \). Thus, the probability that \(\mathcal {A}_{}\) finds two bitstrings \(\mathbf {t} \) and \(\mathbf {t} '\) that agree on the \({\kappa } \) positions indexed by I is negligible, regardless of the choice of I. \(\blacksquare \)
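Quantitatively (a standard random-oracle counting argument, not spelled out above): any fixed pair of distinct oracle outputs agrees on all \(\kappa \) positions of I with probability \(2^{-\kappa }\), so for an adversary making q queries to G,

```latex
\Pr\bigl[\exists\, \mathbf{t} \ne \mathbf{t}' \text{ among the } q \text{ outputs of } G :
  t_i = t_i' \ \forall i \in I\bigr] \;\le\; \binom{q}{2} \cdot 2^{-\kappa},
```

which is negligible in \(\kappa \) for any polynomial number q of oracle queries.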
3.3 An EU-CMPRA Signature Scheme
We now show that the messages sent by \(S\) in Step 4 form an EU-CMPRA signature scheme. Let \(\varPi ' = ({\mathsf {Gen}} ', {\mathsf {Sign}} ', {\mathsf {Verify}} ')\) be an EU-CMA signature scheme and \(\varPi _{\mathsf {Com}} = (\mathsf {ComGen}, {\mathsf {Com}})\) be a commitment scheme satisfying Definition 3 (e.g., the scheme presented in Sect. 3.2). Consider the scheme \(\varPi = ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Verify}})\) defined as follows.
 1.
\({\mathsf {Gen}} (1^{\kappa } )\): On input \(1^{\kappa } \), run \(({\mathsf {vk}}, {\mathsf {sk}}) \leftarrow {\mathsf {Gen}} '(1^{\kappa } )\) and output \(({\mathsf {vk}}, {\mathsf {sk}})\).
 2.
\({\mathsf {Sign}} _{\mathsf {sk}} (\mathbf {m}, j; (\mathbf {r} _1^*, \mathbf {r} _2^*))\): On input message \(\mathbf {m} \in {\{0,1\}}^{\kappa } \), counter \(j\in \mathbb N \), and randomness \(\mathbf {r} _1^*\) and \(\mathbf {r} _2^*\), proceed as follows. Compute \(\mathsf {params} \leftarrow \mathsf {ComGen} (1^{\kappa } ; \mathbf {r} _1^*)\) and \({\mathsf {com}} \leftarrow {\mathsf {Com}}(\mathsf {params}, \mathbf {m}, j; \mathbf {r} _2^*)\). Next, choose \(\mathbf {m} ' \in {\{0,1\}}^{\kappa } \) uniformly at random and compute \({\mathsf {com}} ' \leftarrow {\mathsf {Com}}(\mathsf {params}, \mathbf {m} ', j; \mathbf {r} _2^*)\)^{6}. Output \(\sigma \leftarrow (j, \mathsf {params}, \mathbf {r} _2^*, {\mathsf {com}}, {\mathsf {com}} ', {\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}})), {\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}} ')))\).
 3.
\({\mathsf {Verify}} _{\mathsf {pk}} (\mathbf {m}, \sigma )\): On input message \(\mathbf {m} \) and signature \(\sigma \), parse \(\sigma \) as \((j, \mathsf {params}, \mathbf {r} , {\mathsf {com}} ', {\mathsf {com}} '', \sigma ', \sigma '')\), and output 1 if and only if (1) \({\mathsf {Com}} (\mathsf {params}, \mathbf {m}, j; \mathbf {r} ) = {\mathsf {com}} '\), (2) \({\mathsf {Verify}} _{\mathsf {vk}} '((\mathsf {params}, {\mathsf {com}} '), \sigma ') = 1\), and (3) \({\mathsf {Verify}} _{\mathsf {vk}} '((\mathsf {params}, {\mathsf {com}} ''), \sigma '') = 1\); otherwise output 0.
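The scheme can be sketched as follows; a hash-based commitment stands in for \({\mathsf {Com}} \) and HMAC for the EU-CMA \({\mathsf {Sign}} '\). Note that HMAC is symmetric, so this toy verify also takes the key; a real instantiation would use a public-key signature scheme.

```python
import hashlib
import hmac
import secrets

def toy_commit(params: bytes, m: bytes, j: int, r: bytes) -> bytes:
    """Hash-based stand-in for the binding commitment Com."""
    return hashlib.sha256(params + j.to_bytes(8, "big") + m + r).digest()

def toy_sign(sk: bytes, msg: bytes) -> bytes:
    """Stand-in for the EU-CMA Sign' (HMAC-SHA256, for illustration only)."""
    return hmac.new(sk, msg, hashlib.sha256).digest()

def sign(sk: bytes, m: bytes, j: int, r1: bytes, r2: bytes):
    """Sign_sk(m, j; (r1, r2)) following the scheme above."""
    params = hashlib.sha256(b"ComGen" + r1).digest()   # ComGen(1^kappa; r1)
    c = toy_commit(params, m, j, r2)
    m_prime = secrets.token_bytes(len(m))              # uniform dummy message m'
    c_prime = toy_commit(params, m_prime, j, r2)
    return (j, params, r2, c, c_prime,
            toy_sign(sk, params + c), toy_sign(sk, params + c_prime))

def verify(sk: bytes, m: bytes, sigma) -> bool:
    """Verify: recompute the commitment to m and check both signatures."""
    j, params, r2, c, c_prime, s1, s2 = sigma
    return (toy_commit(params, m, j, r2) == c
            and hmac.compare_digest(toy_sign(sk, params + c), s1)
            and hmac.compare_digest(toy_sign(sk, params + c_prime), s2))
```

The dummy commitment \({\mathsf {com}} '\) mirrors the second branch of the OT: the verifier checks both signatures but can only open the commitment on the message it holds.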
As explained in Sect. 3.2, this signature scheme exactly captures the behavior of \(S\) in our signed-OT extension protocol. We now prove that this is indeed an EU-CMPRA signature scheme.
Theorem 2
Let \(\varPi ' = ({\mathsf {Gen}} ', {\mathsf {Sign}} ', {\mathsf {Verify}} ')\) be an EU-CMA signature scheme and \(\varPi _{\mathsf {Com}} = (\mathsf {ComGen}, {\mathsf {Com}})\) be a commitment scheme secure according to Definition 3. Then \(\varPi = ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Verify}})\) described above is an EU-CMPRA signature scheme.
Proof
Let \(\mathcal {A}_{}\) be a ppt adversary attacking \(\varPi \). We construct an adversary \(\mathcal B\) attacking \(\varPi '\). Adversary \(\mathcal B\) receives \(\mathsf {vk}\) from the challenger and initializes \(\mathcal {A}_{}\) with \(\mathsf {vk}\) as input. Let \((\mathbf {m}, j, \mathbf {r} _1^*, \mathbf {r} _2^*)\) be an input of \(\mathcal {A}_{}\) to its signing oracle. Adversary \(\mathcal B\) emulates \(\mathcal {A}_{}\) ’s signing oracle as follows: it computes \(\mathsf {params} \leftarrow \mathsf {ComGen} (1^{\kappa } ; \mathbf {r} _1^*)\) and \({\mathsf {com}} \leftarrow {\mathsf {Com}} (\mathsf {params}, \mathbf {m}, j; \mathbf {r} _2^*)\), chooses \(\mathbf {m} '\) uniformly at random and computes \({\mathsf {com}} ' \leftarrow {\mathsf {Com}} (\mathsf {params}, \mathbf {m} ', j; \mathbf {r} _2^*)\), queries its own signing oracle to obtain \({\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}}))\) and \({\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}} '))\), constructs \(\sigma \leftarrow (j, \mathsf {params}, \mathbf {r} _2^*, {\mathsf {com}}, {\mathsf {com}} ', {\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}})), {\mathsf {Sign}} '_{\mathsf {sk}} ((\mathsf {params}, {\mathsf {com}} ')))\), and sends \(\sigma \) to \(\mathcal {A}_{}\). After each of \(\mathcal {A}_{}\) ’s queries, \(\mathcal B\) stores \((\mathbf {m}, j)\) in set \({\mathcal Q} _{{{\mathcal {A}_{}}}}\) and stores all the messages it sent to its own signing oracle in set \({\mathcal Q} _\mathcal B \).
Eventually, \(\mathcal {A}_{}\) outputs \((\mathbf {m}, (j, \sigma '))\) as its forgery. Adversary \(\mathcal B\) checks that \({\mathsf {Verify}} _{\mathsf {vk}} (\mathbf {m}, (j, \sigma ')) = 1\) and that \((\mathbf {m}, j)\not \in {\mathcal Q} _{{{\mathcal {A}_{}}}}\). If not, \(\mathcal B\) outputs 0. Otherwise, \(\mathcal B\) parses \(\sigma '\) as \((\mathsf {params}, \mathbf {r} , {\mathsf {com}} ', {\mathsf {com}} '', \sigma ', \sigma '')\) and checks whether \({\mathsf {com}} ' \not \in {\mathcal Q} _\mathcal B \). If so, it outputs \(((\mathsf {params}, {\mathsf {com}} '), \sigma ')\) as its forgery for \(\varPi '\); otherwise it outputs 0.
Note that \(\mathsf {Sig\text {-}forge}_{{{\mathcal {A}_{}}},\varPi }^{\mathsf {\scriptstyle {CMPRA}}} ({\kappa }) = 1\) and \(\mathsf {Sig\text {-}forge}_{\mathcal B,\varPi '}^{\mathsf {\scriptstyle {CMA}}} ({\kappa }) = 0\) if and only if \({\mathsf {Verify}} _{\mathsf {vk}} (\mathbf {m}, (j, \mathsf {params}, \mathbf {r} , {\mathsf {com}} ', {\mathsf {com}} '', \sigma ', \sigma '')) = 1\) and \((\mathbf {m}, j) \not \in {\mathcal Q} _{{{\mathcal {A}_{}}}}\) but \({\mathsf {com}} ' \in {\mathcal Q} _\mathcal B \). Fix some \((\mathbf {m}, (j, \mathsf {params}, \mathbf {r} , {\mathsf {com}} _1, {\mathsf {com}} _{1'}, \sigma _1, \sigma _{1'}))\) such that this is the case. Thus it holds that \({\mathsf {com}} _1 \in {\mathcal Q} _\mathcal B \). This implies that \(\mathcal B\) queried \({\mathsf {Sign}} '\) on \({\mathsf {com}} _1\), which means that \(\mathcal {A}_{}\) queried its signing oracle on some \((\mathbf {m} ', j', \mathbf {r} _1^{*}, \mathbf {r} _2^{*})\), where \((\mathbf {m} ', j') \ne (\mathbf {m}, j)\), and received back \((j', \mathsf {params}, \mathbf {r} ', {\mathsf {com}} _1, {\mathsf {com}} _{2'}, \sigma _{1''}, \sigma _{2'})\). However, this implies that \({\mathsf {Com}} (\mathsf {params}, \mathbf {m}, j; \mathbf {r} ) = {\mathsf {com}} _1\) and \({\mathsf {Com}} (\mathsf {params}, \mathbf {m} ', j'; \mathbf {r} ') = {\mathsf {com}} _1\), i.e., a violation of binding. Thus, \(\Pr [\mathsf {Sig\text {-}forge}_{{{\mathcal {A}_{}}},\varPi }^{\mathsf {\scriptstyle {CMPRA}}} ({\kappa })] \le \Pr [\mathsf {Sig\text {-}forge}_{\mathcal B,\varPi '}^{\mathsf {\scriptstyle {CMA}}} ({\kappa })] + \Pr [\mathsf {Com\text {-}bind}_{\mathcal B ',\varPi _{\mathsf {Com}}}^{\mathsf {\scriptstyle {}}} ({\kappa })]\) for some ppt adversary \(\mathcal B '\). We now bound \(\Pr [\mathsf {Com\text {-}bind}_{\mathcal B ',\varPi _{\mathsf {Com}}}^{\mathsf {\scriptstyle {}}} ({\kappa })]\).
Adversary \(\mathcal B '\) runs almost exactly like \(\mathcal B\). On the first query \((\mathbf {m}, j, \mathbf {r} _1^*, \mathbf {r} _2^*)\) by \(\mathcal {A}_{}\), it sets \(\mathbf {r} \leftarrow \mathbf {r} _1^*\) if \(\mathbf {r} _1^* \ne \bot \) and otherwise chooses \(\mathbf {r} \) uniformly at random; \(\mathcal B '\) then sends \(\mathbf {r} \) to \(\mathcal C\), receiving back \(\mathsf {params}\).
Let \((\mathbf {m} _1, j_1, \mathbf {r} _1^*, \mathbf {r} _2^*)\) and \((\mathbf {m} _2, j_2, \mathbf {r} _1^{*}, \mathbf {r} _2^{*'})\) be the two queries made by \(\mathcal {A}_{}\) resulting in a common commitment value, and let \((j_1, \mathsf {params}, \mathbf {r} _1, {\mathsf {com}} _1, {\mathsf {com}} _1', \sigma _1, \sigma _{1'})\) and \((j_2, \mathsf {params}, \mathbf {r} _2, {\mathsf {com}} _1, {\mathsf {com}} _2', \sigma _{1''}, \sigma _{2'})\) be the corresponding signatures. Adversary \(\mathcal B '\) sends \(({\mathsf {com}} _1, \mathbf {m} _1, j_1, \mathbf {r} _2^*, \mathbf {m} _2, j_2, \mathbf {r} _2^{*'})\) to its challenger and wins with probability one, contradicting the security of the commitment scheme. Thus, we have that \(\Pr [\mathsf {Com\text {-}bind}_{\mathcal B ',\varPi _{\mathsf {Com}}}^{\mathsf {\scriptstyle {}}} ({\kappa })] < \mathsf {negl} ({\kappa })\), completing the proof. \(\blacksquare \)
3.4 Proof of Security
We are now ready to prove the security of our signed-OT extension protocol. Most of the proof complexity is hidden in the proofs of the associated EU-CMPRA signature scheme and commitment scheme. Thus, the signed-OT extension simulator is relatively straightforward, and mostly involves parsing the output of \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) and passing the correct values to the adversary. The analysis follows almost exactly that of Asharov et al. [6], and thus we elide most of the details.
Theorem 3
Let \(\varPi = ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Verify}})\) be the EU-CMPRA signature scheme of Sect. 3.3. Then the protocol in Fig. 4 is a secure realization of \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) in the \(\mathcal {F}_\mathbf {\mathsf {OT}}\)-hybrid model.
Proof
We separately consider the case where \(S\) is malicious and the case where \(R\) is malicious. The cases where the parties are either both honest or both malicious are straightforward.
Malicious \(S\) . Let \(\mathcal {A}_{}\) be a ppt adversary corrupting \(S\). We construct a simulator \(\mathcal S _{}\) as follows.
 1.
The simulator \(\mathcal S _{}\) acts as an honest \(R\) would in Step 1, extracting \(\mathbf {s} \) from \(\mathcal {A}_{}\) ’s input to \(\mathcal {F}_\mathbf {\mathsf {OT}}\).
 2.
The simulator \(\mathcal S _{}\) acts as an honest \(R\) would in Steps 2 and 3.
 3.
Let I and \(\left( j, \mathbf {y} _j^0, \mathbf {y} _j^1, {\{ t_{j,i} \}}_{i \in I}, \sigma _{j,0}', \sigma _{j,1}' \right) \), for \(j\in [m]\), be the messages sent by \(\mathcal {A}_{}\) in Step 4. If any of these are invalid, \(\mathcal S _{}\) sends \(\mathsf {abort}_{}\) to \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) and simulates \(R\) aborting, outputting whatever \(\mathcal {A}_{}\) outputs.
 4.
For \(j\in [m]\), proceed as follows. The simulator \(\mathcal S _{}\) extracts \(\mathbf {x} _j^0 \leftarrow \mathbf {y} _j^0 \oplus H(j, \mathbf {q} _j)\) and \(\mathbf {x} _j^1 \leftarrow \mathbf {y} _j^1 \oplus H(j, \mathbf {q} _j \oplus \mathbf {s} )\), constructs \(\sigma _{j,b}^* \leftarrow (j, I, \mathbf {k} _j, (I, (j, \mathbf {y} _j^b, I, {\{ t_{j,i}\}}_{i\in I})), (I, (j, \mathbf {y} _j^{1-b}, I, {\{ t_{j,i}\}}_{i\in I})), \sigma _{j,b}', \sigma _{j,1-b}')\) for \(b\in {\{0,1\}} \), and sends \(\mathbf {x} _j^0\), \(\mathbf {x} _j^1\), \(\sigma _{j,0}^*\), and \(\sigma _{j,1}^*\) to \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\), receiving back either \(((b, m_b), \sigma _{j,b})\) or \(\mathsf {abort}_{}\).
 5.
If \(\mathcal S _{}\) received \(\mathsf {abort}_{}\) in any of the above iterations, it simulates \(R\) aborting, outputting whatever \(\mathcal {A}_{}\) outputs. Otherwise, for \(j\in [m]\), \(\mathcal S _{}\) parses \(\sigma _{j,b}\) as \((j, I, \mathbf {k} _j, (I, (j, \mathbf {y} _j^b, I, {\{ t_{j,i} \}}_{i\in I})), (I, (j, \mathbf {y} _j^{1-b}, I, {\{ t_{j,i} \}}_{i\in I})), \sigma _{j,b}', \sigma _{j,1-b}')\), constructs message \(\sigma _j \leftarrow (j, \mathbf {y} _j^0, \mathbf {y} _j^1, {\{ t_{j,i} \}}_{i\in I}, \sigma _{j,0}', \sigma _{j,1}' )\), and acts as an honest \(R\) would when receiving messages I and \({\{\sigma _j\}}_{j\in [m]}\).
 6.
The simulator \(\mathcal S _{}\) outputs whatever \(\mathcal {A}_{}\) outputs.
It is easy to see that this perfectly simulates the view of a malicious sender, since \(\mathcal S _{}\) acts exactly as an honest \(R\) would (beyond feeding the appropriate messages to \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\)).
Malicious \(R\) . Let \(\mathcal {A}_{}\) be a ppt adversary corrupting \(R\). We construct a simulator \(\mathcal S _{}\) as follows.
 1.
The simulator \(\mathcal S _{}\) acts as an honest \(S\) would in Step 1, extracting matrices \(\mathbf {T} \) and \(\mathbf {V} \) through \(S\) ’s \(\mathcal {F}_\mathbf {\mathsf {OT}}\) inputs, and thus the values \({\{\mathbf {k} _j\}}_{j\in [m]}\).
 2.
The simulator \(\mathcal S _{}\) uses the values extracted above to extract selection bits \(\mathbf {r} \) after receiving the \(\mathbf {u} ^i\) values from \(\mathcal {A}_{}\) in Step 2.
 3.
The simulator \(\mathcal S _{}\) acts as an honest \(S\) would in Step 3.
 4.
Let \(I^0\) be the indices at which \(\mathbf {s} \) (generated in Step 1) is zero, and let \(I\subseteq I^0\) be a set of size \(\kappa \). For \(j\in [m]\), \(\mathcal S _{}\) sends \(r_j\), \(\mathsf {vk}\), and I to \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\), receiving back \(((r_j, \mathbf {x} _j^{r_j}), \sigma _{j,r_j})\); \(\mathcal S _{}\) parses \(\sigma _{j,r_j}\) as \((j, I, \mathbf {r} , (I, (j, \mathbf {c} _{r_j}, I, {\{ t_{j,i} \}}_{i \in I})), (I, (j, \mathbf {c} _{1-r_j}, I, {\{ t_{j,i} \}}_{i \in I})), \sigma _{j,r_j}', \sigma _{j,1-r_j}')\).
 5.
In Step 4, \(\mathcal S _{}\) sends I and \((j, \mathbf {c} _0, \mathbf {c} _1, {\{ t_{j,i} \}}_{i \in I}, \sigma _{j,0}', \sigma _{j,1}')\), for \(j\in [m]\), to \(\mathcal {A}_{}\).
 6.
The simulator \(\mathcal S _{}\) outputs whatever \(\mathcal {A}_{}\) outputs.
The analysis is almost exactly that of the malicious receiver proof in the construction of Asharov et al. [6]; we thus give an informal security argument here and refer the reader to the aforementioned work for the full details.
A malicious \(R\) has two main attacks: using inconsistent choices of its selection bits \(\mathbf {r} \), and trying to cheat in the signature creation in Step 4. The latter attack is prevented by the security of our EU-CMPRA signature scheme. The former is prevented by the consistency check in Step 3. Namely, Asharov et al. show that the consistency check guarantees that: (1) most inputs are consistent with some string \(\mathbf {r} \), and (2) the number of inconsistent inputs is small and thus allows \(R\) to learn only a small number of bits of \(\mathbf {s} \). Thus, for specific choices of \(\ell \) and \(\mu \), the probability of a malicious \(R\) cheating is negligible. Asharov et al. provide concrete parameters for various settings of the security parameter [6, §3.2]; let \(\ell '\) denote the number of base OTs used in their protocol. Now, in our protocol we set \(\ell = \ell ' + {\kappa } \); \(S\) leaks \({\kappa } \) bits of \(\mathbf {s} \) when revealing the set I in Step 4, and so is left with \(\ell '\) unknown bits of \(\mathbf {s} \). Thus, the security argument of Asharov et al. carries over to our setting. \(\blacksquare \)
4 Our Complete PVC Protocol
In this section we present a new PVC protocol based on signed-OT extension. Our protocol is similar to the AO protocol in the \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\)-hybrid model, but applies several simple yet very effective optimizations, resulting in a much lower communication cost.
We present our protocol by starting off with the AO protocol and pointing out the differences. We presented the AO protocol intuition in the Introduction; see Fig. 5 for its formal description; due to lack of space, we omit the (straightforward) \(\mathsf {Blame}\) and \(\mathsf {Judgment}\) algorithms. In presenting our changes, we sketch the improvement each of them brings. Thus, we start by reviewing the communication cost of the AO protocol.
Communication Cost of the AO Protocol. Using state-of-the-art optimizations [13, 19, 20], the size of each GC sent in Step 5 is \(2\kappa G_{C}\), where \(G_{C}\) is the number of non-XOR gates in circuit C (note that \(G_{C} = G_{C'}\) for circuit \(C'\) generated in Step 1 since the XOR-tree only adds XOR gates to the circuit, which are “free” [13]). Let \(\tau \) be the field size (in bits), \(\nu \) the XOR-tree replication factor, \(\lambda \) the GC replication factor, and n the length of the inputs, and assume that each signature is of length \(\tau \) and the commitment and decommitment values are of length \({\kappa } \). Using the signed-OT instantiations of Asharov and Orlandi [7, Protocols 1 and 2], we get a total communication cost of \(\tau (7 \nu n + 11) + 2\lambda {\kappa } \nu n + \lambda (2{\kappa } G_{C} + \tau ) + 2n\lambda ({\kappa } + \tau ) + \tau (3 + 2\lambda + 11(\lambda - 1)) + \lambda {\kappa } (2(n + \nu n)(\lambda - 1) + 2n(\lambda - 1) + n)\).
As an example, consider the secure computation of \(\text {AES}(\mathbf {m}, \mathbf {k} )\), where \(P_{1}\) inputs message \(\mathbf {m} \in {{\{0,1\}}^{128}} \) and \(P_{2}\) inputs key \(\mathbf {k} \in {{\{0,1\}}^{128}} \), and suppose we set both the GC replication factor \(\lambda \) and the XOR-tree replication factor \(\nu \) to 3, giving a cheating probability of \(\epsilon = 1/2\). Letting \({\kappa } = 128\) and \(\tau = 256\), we have a total communication cost of 9.3 Mbit (where we assume that the AES circuit has 9,100 non-XOR gates [15]).
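As a sanity check, the cost formula can be evaluated directly; the sketch below (which reads the per-GC term as occurring once per each of the \(\lambda \) circuits) reproduces the 9.3 Mbit figure for the AES example above:

```python
def ao_cost_bits(kappa, tau, lam, nu, n, gates):
    """Total communication (bits) of the AO protocol per the closed-form
    cost formula, with the garbled-circuit term counted lam times."""
    return (tau * (7 * nu * n + 11)
            + 2 * lam * kappa * nu * n
            + lam * (2 * kappa * gates + tau)
            + 2 * n * lam * (kappa + tau)
            + tau * (3 + 2 * lam + 11 * (lam - 1))
            + lam * kappa * (2 * (n + nu * n) * (lam - 1)
                             + 2 * n * (lam - 1)
                             + n))

# AES example: kappa=128, tau=256, lambda=nu=3, n=128, 9,100 non-XOR gates
total = ao_cost_bits(128, 256, 3, 3, 128, 9100)   # 9310464 bits, ~9.3 Mbit
```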

In Step 6, instead of using a commitment scheme we can use a hash function. This saves communication in Step 7, as \(P_{1}\) no longer needs to send the openings \(\{\mathbf {o} _{w_p,b}^i\}\) to the commitments in the signed-OT; it is secure when treating H as a random oracle, since the keys are generated uniformly at random and thus it is infeasible for \(P_{2}\) to guess the committed values. The total savings are \(2n(\lambda - 1){\kappa } \lambda \) bits; in our example, this saves us 196 kbit.

In Step 3, we use a random seed to generate the input wire keys. Namely, for all \(j\in [\lambda ]\) we choose a uniformly random seed \(\mathbf {s} _j\) and compute the input wire keys for circuit j as \(\mathbf {k} _{w_1, 0}^j \Vert \mathbf {k} _{w_1,1}^j \Vert \cdots \Vert \mathbf {k} _{w_{n+\nu n},0}^j \Vert \mathbf {k} _{w_{n+\nu n},1}^j \leftarrow G(\mathbf {s} _j)\), where G is a pseudorandom generator. Now, in the 1-out-of-\(\lambda \) signed-OT in Step 7 we can just send the seeds for the input wire keys rather than the input wire keys themselves. The total savings are \(2(n+\nu n)(\lambda - 1)\lambda {\kappa } - n(\lambda - 1)\lambda {\kappa } \) bits; in our example, this saves us 688 kbit.

In Step 5, \(P_{1}\) generates each \(\text {GC} _j\) from a seed \(\mathbf {s} _\text {GC} ^j\). (This idea was first put forward by Goyal et al. [11].) That is, \(\mathbf {s} _\text {GC} ^j\) specifies the randomness used to construct all wire keys except for the input wire keys, which were set in Step 3. Instead of sending each GC to \(P_{2}\) in Step 5, \(P_{1}\) instead sends a commitment \(\mathbf {c} _\text {GC} ^j \leftarrow H(\text {GC} _j)\). Now, in Step 7, \(P_{1}\) can place the seeds \({\{\mathbf {s} _\text {GC} ^{j'}\}}_{j'\in [\lambda ]\backslash \{j\}}\) in the jth input of the 1-out-of-\(\lambda \) signed-OT, allowing \(P_{2}\) to check the correctness of the check GCs. We then add an additional step where, if the checks pass, \(P_{1}\) sends \(\text {GC} _\gamma \) (along with a signature on \(\text {GC} _\gamma \)) to \(P_{2}\), who checks whether \(H(\text {GC} _\gamma ) = \mathbf {c} _\text {GC} ^\gamma \). Note that this does not violate the security conditions required by the PVC model, because \(P_{2}\) catches any cheating of \(P_{1}\) before the evaluation circuit is sent; since \(P_{2}\) already holds a commitment to the circuit, any cheating by \(P_{1}\) at this point is detected. The total savings are \((\lambda - 1)2{\kappa } G_{C} - \lambda \tau - \lambda {\kappa } (\lambda - 1)\) bits; in our example, this saves us 4.6 Mbit.
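The three savings formulas can likewise be checked against the example figures (196 kbit, 688 kbit, and 4.6 Mbit) with a small sketch:

```python
def optimization_savings_bits(kappa, tau, lam, nu, n, gates):
    """Savings (bits) from the three optimizations, per the formulas above."""
    # Step 6: hash instead of commitment scheme
    hash_instead_of_commit = 2 * n * (lam - 1) * kappa * lam
    # Step 3: seed-generated input wire keys
    seeded_input_keys = (2 * (n + nu * n) * (lam - 1) * lam * kappa
                         - n * (lam - 1) * lam * kappa)
    # Step 5: seed-generated check circuits
    seeded_check_gcs = ((lam - 1) * 2 * kappa * gates
                        - lam * tau
                        - lam * kappa * (lam - 1))
    return hash_instead_of_commit, seeded_input_keys, seeded_check_gcs
```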
Our PVC Protocol and Its Cost. Fig. 6 presents our optimized protocol. For simplicity, we sign each message in Steps 5 and 6 separately; however, we note that we can group all the messages in a given step into a single signature (cf. Observation 2). The \(\mathsf {Blame}\) and \(\mathsf {Judgment}\) algorithms are straightforward and similar to the AO protocol (\(\mathsf {Blame}\) outputs the relevant parts of the view, including the cheater’s signatures, and \(\mathsf {Judgment}\) checks the signatures). We prove the following theorem in the full version.
Theorem 4
Let \(\lambda < p({\kappa })\) and \(\nu < p({\kappa })\), for some polynomial \(p(\cdot )\), be parameters to the protocol, and set \(\epsilon = (1 - 1/\lambda ) (1 - 2^{-\nu +1})\). Let f be a ppt function, let H be a random oracle, let \(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) and \(\left( {\begin{array}{c}\lambda \\ 1\end{array}}\right) \)-\(\mathcal {F}^{\varPi }_\mathbf {signedOT}\) be the \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \)-signed-OT and \(\left( {\begin{array}{c}\lambda \\ 1\end{array}}\right) \)-signed-OT ideal functionalities, respectively, where \(\varPi \) is an EU-CMPRA signature scheme. Then the protocol in Fig. 6 securely computes f in the presence of (1) an \(\epsilon \)-PVC adversary corrupting \(P_{1}\) and (2) a malicious adversary corrupting \(P_{2}\).
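For concreteness, the deterrence factor \(\epsilon = (1 - 1/\lambda )(1 - 2^{-\nu +1})\) can be evaluated exactly; for \(\lambda = \nu = 3\) this gives \(\epsilon = 1/2\), matching the AES example:

```python
from fractions import Fraction

def deterrence(lam: int, nu: int) -> Fraction:
    """epsilon = (1 - 1/lambda) * (1 - 2^(-nu+1)), computed exactly."""
    return (1 - Fraction(1, lam)) * (1 - Fraction(1, 2 ** (nu - 1)))
```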
Using our AES circuit example, we find that the total communication cost is now 2.5 Mbit, plus the cost of signed-OT/signed-OT extension. In this particular example, signed-OT requires around 1 Mbit and signed-OT extension requires around 1.4 Mbit. However, as we show below, as the number of OTs grows, signed-OT extension quickly outperforms signed-OT, both in communication and computation.
5 Comparison with Prior Work
We now compare our signed-OT extension construction (including optimizations, and in particular the signature batching of Observation 2) with the signed-OT protocol of Asharov and Orlandi [7], and compare existing covert and malicious protocols with our PVC protocol using both signed-OT and signed-OT extension. All comparisons are done by calculating the number of bits transferred and estimating running times based on the relative cost of public-key versus symmetric-key operations. We use a very conservative (low-end) estimate of the public/symmetric speed ratio. We note that this ratio varies greatly across platforms, being much higher on low-power mobile devices, which often employ a weak CPU but have hardware AES support. For such platforms our numbers would be even better.
Recall that \(\tau \) is the field size (in bits), \(\nu \) is the XORtree replication factor, \(\lambda \) is the GC replication factor, n is the input length, and we assume that each signature is of length \(\tau \).
Communication Cost. We first focus on the communication cost of the two protocols. The signed-OT protocol of Asharov and Orlandi [7] is based on the maliciously secure OT protocol of Peikert et al. [18], and inherits similar costs. Namely, the communication cost of executing \(\ell \) OTs, each of length n, is \((6 \ell + 11)\tau \) if \(n \le \tau \), and \((6 \ell + 11)\tau + 2 n \ell \) if \(n > \tau \). Signed-OT requires the additional communication of a signature per OT, adding \(\tau \ell \) bits. In the underlying secure computation protocol we have that \(n = \lambda {\kappa } \), where \(\lambda \) is the garbled circuit replication factor. For simplicity, we set \(\lambda = 3\) (which, along with an XOR-tree replication factor of three, equates to a deterrence factor of \(\epsilon = 1/2\)) and thus \(n = 3{\kappa } \). Thus, the total communication cost of executing t signed-OTs is \(\tau (7 t + 11)\) bits if \(3{\kappa } \le \tau \), and \(\tau (7 t + 11) + 6 {\kappa } t\) bits otherwise.
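A small helper encoding only this closed-form cost (with n = 3\({\kappa } \) as above):

```python
def signed_ot_cost_bits(t: int, kappa: int, tau: int) -> int:
    """Communication (bits) of t signed-OTs per the formula above (n = 3*kappa)."""
    if 3 * kappa <= tau:
        return tau * (7 * t + 11)
    return tau * (7 * t + 11) + 6 * kappa * t
```

For \({\kappa } = 128\) and \(\tau = 256\) (so \(3{\kappa } > \tau \)), 1,000 signed-OTs cost 2,562,816 bits, i.e., roughly 2.6 Mbit.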
Figure 7 presents a comparison of the communication cost of both approaches when executing 1,000 and 10,000 OTs, for various key-length settings and underlying public-key cryptosystems. We see improvements of 1.1–10.3\(\times \), depending on the number of OTs, the underlying public-key cryptosystem, and the size of the security parameter. Note that for a smaller number of OTs (such as 100), signed-OT is more efficient, which makes sense due to the overhead of OT extension and the need to compute the base OTs. However, as the number of OTs grows, signed-OT extension is superior across the board.
Computational Cost. We now look at the computational cost of the two protocols. Let \(\xi \) denote the cost of a public-key operation (we assume exponentiations and signing take the same amount of time), and let \(\zeta \) denote the cost of a symmetric-key operation over \(\kappa \) bits (e.g., hashing a 2\(\kappa \)-bit value costs \(2\zeta \)). We assume all other operations are “free”. This is obviously a very coarse analysis; however, it gives a general idea of the performance characteristics of the two approaches.
The cost of executing \(\ell \) OTs on n-bit messages is \((14\ell + 12)\xi \) if \(n \le \tau \), and \((14\ell + 12)\xi + 2\ell \frac{n}{{\kappa }}\zeta \) if \(n > \tau \). Signed-OT requires an additional \(2\ell \xi \) operations (for signing and verifying). We again set \(n = 3{\kappa } \), and thus the cost of executing t signed-OTs is \((16 t + 12)\xi \) if \(3{\kappa } \le \tau \), and \((16 t + 12)\xi + 6t\zeta \) otherwise.
Figure 8 presents a comparison of the computational cost of both approaches when executing 1,000 and 10,000 OTs, for various key-length settings and underlying public-key cryptosystems. Here we see that regardless of the number of OTs and the public-key cryptosystem used, signed-OT extension is (often much) more efficient, and the improvement grows with the number of OTs. For as few as 1,000 OTs we already see a 3.5–5.1\(\times \) improvement, and for 10,000 OTs we see a 30.9–42.4\(\times \) improvement.
Figure 9 presents a comparison of the computational cost of our protocol using both signed-OT (Ours\(^\text {sOT}\)) and signed-OT extension (Ours\(^\text {sOText}\)), as well as comparisons to the Goyal et al. protocol (GMS) and the Lindell protocol (Lin). Due to lack of space, the detailed cost formulas appear in the full version. We fix \({\kappa } = 128\) and \(\lambda = \nu = 3\) (giving a deterrence factor of \(\epsilon = 1/2\)), and assume the use of elliptic curve cryptography (and thus \(\tau = 256\)). We expect public-key operations to take 125–1250\(\times \) longer than symmetric-key operations, depending on implementation details, whether one uses AES-NI, etc. This range is a very conservative estimate based on the Crypto++ benchmark [2], experiments using OpenSSL, and estimated ratios of running times between finite field and elliptic curve cryptography [1].
When comparing against GMS, we find that Ours\(^\text {sOText}\) is slightly more expensive, due almost entirely to the larger number of base OTs in the signed-OT extension. We note that in practice, however, a deterrence factor of 1/2 may not be sufficient for a covert protocol but may be sufficient for a PVC protocol, due to the latter’s ability to “name-and-shame” the perpetrator. When increasing the deterrence factor for the covert protocol to \(\epsilon \approx .9\), the cost ratios favor Ours\(^\text {sOText}\). For example, for 16\(\times \)16 matrix multiplication, the ratio becomes 3.60–3.53\(\times \), depending on the cost of public-key operations (versus 1.00–0.98\(\times \)).
Comparing Ours\(^\text {sOText}\) with Ours\(^\text {sOT}\), we find that the former is 1.0–86.7\(\times \) more efficient, depending largely on the characteristics of the underlying circuit. For circuits with a large number of inputs but a relatively small number of gates (e.g., 16384-bit Comp., Hamming 16000, and 1024-bit Sum) this difference is greatest, which makes sense, as the cost of the OT operations dominates. The circuits for which the ratio is around 1.0 (e.g., 1024-bit RSA) are those that have a huge number of gates compared to the number of inputs, and thus the cost of processing the GC far outweighs the cost of signed-OT/signed-OT extension.
Finally, comparing Ours\(^\text {sOText}\) with Lin, the former is 9.6–1887.2\(\times \) more efficient, again depending in large part on the characteristics of the circuit. We see that for circuits with a large number of inputs this difference is starkest; e.g., for the Hamming 16000 circuit, we get an improvement of 224.7–1408.4\(\times \). The reason we see such large improvements for these circuits is that Lin requires cut-and-choose oblivious transfer, which cannot take advantage of OT extension. Thus, the number of public-key operations is huge compared to the circuit size, and this cost has a large impact on the overall running time. Note, however, that even for circuits where the number of gates dominates, we still see a relatively significant improvement (e.g., 14.2–54.3\(\times \) for 16\(\times \)16 Matrix Mult.). These results demonstrate that in settings where public shaming is a sufficient deterrent against cheating, Ours\(^\text {sOText}\) is a better choice than malicious protocols.
Footnotes
 1. Our construction is also interesting from a theoretical perspective in that we construct signed-OT from any maliciously secure OT protocol, whereas Asharov and Orlandi [7] give a specific construction based on the decisional Diffie-Hellman problem.
 2. A fail-stop adversary is one which acts semi-honestly but may halt at any time.
 3. Our notion is similar to the \(\rho \text {-}\mathsf {EU\text {-}CMRA}\) notion introduced by Asharov and Orlandi [7]. It differs in that we allow different portions of the randomness to be corrupted, but not both portions at once. Looking forward, this is needed because the sender in our signed-OT functionality is only allowed to control some of the randomness.
 4. Note that G cannot be a pseudorandom generator because the input to G is not necessarily uniform, as the inputs may be adversarially chosen by \(R\).
 5. The reason this does not affect our construction is that the consistency check phase only involves \(R\) sending messages to \(S\). A malicious \(R\) cannot defame \(S\) because we only enforce that \(R\)'s value \(\mathbf {r} \) is consistent.
 6. This extra commitment on a random message is needed for our signed-OT extension proof.
 7. Lindell's malicious protocol can also be adapted into a covert protocol; however, we found that its computation cost is much higher than that of Goyal et al., at least for deterrence factor 1/2.
Acknowledgments
The authors thank Michael Zohner for a brief discussion on the relative performance of public- and symmetric-key primitives, and the anonymous reviewers for helpful suggestions.
The authors acknowledge the Office of Naval Research and its support of this work under contract N00014-14-C-0113. Work of Alex J. Malozemoff was also supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship.
References
 1. The case for elliptic curve cryptography. https://www.nsa.gov/business/programs/elliptic_curve.shtml
 2. Crypto++ 5.6.0 benchmarks. http://www.cryptopp.com/benchmarks.html
 3. Afshar, A., Mohassel, P., Pinkas, B., Riva, B.: Non-interactive secure computation based on cut-and-choose. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 387–404. Springer, Heidelberg (2014)
 4. Andrychowicz, M., Dziembowski, S., Malinowski, D., Mazurek, L.: Secure multiparty computations on Bitcoin. In: 2014 IEEE Symposium on Security and Privacy, pp. 443–458. IEEE Computer Society Press, May 2014
 5. Asharov, G., Lindell, Y., Schneider, T., Zohner, M.: More efficient oblivious transfer and extensions for faster secure computation. In: ACM CCS 13, pp. 535–548 (2013)
 6. Asharov, G., Lindell, Y., Schneider, T., Zohner, M.: More efficient oblivious transfer extensions with security for malicious adversaries. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 673–701. Springer, Heidelberg (2015)
 7. Asharov, G., Orlandi, C.: Calling out cheaters: covert security with public verifiability. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 681–698. Springer, Heidelberg (2012)
 8. Aumann, Y., Lindell, Y.: Security against covert adversaries: efficient protocols for realistic adversaries. J. Cryptol. 23(2), 281–343 (2010)
 9. Barker, E., Barker, W., Burr, W., Polk, W., Smid, M.: Recommendation for key management – Part 1: General (Revision 3). NIST Special Publication 800-57, July 2012
 10. Bentov, I., Kumaresan, R.: How to use bitcoin to design fair protocols. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part II. LNCS, vol. 8617, pp. 421–439. Springer, Heidelberg (2014)
 11. Goyal, V., Mohassel, P., Smith, A.: Efficient two party and multi party computation against covert adversaries. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 289–306. Springer, Heidelberg (2008)
 12. Ishai, Y., Kilian, J., Nissim, K., Petrank, E.: Extending oblivious transfers efficiently. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 145–161. Springer, Heidelberg (2003)
 13. Kolesnikov, V., Schneider, T.: Improved garbled circuit: free XOR gates and applications. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 486–498. Springer, Heidelberg (2008)
 14. Kreuter, B., Mood, B., Shelat, A., Butler, K.: PCF: a portable circuit format for scalable two-party secure computation. In: 22nd USENIX Security Symposium, August 2013
 15. Kreuter, B., Shelat, A., Shen, C.H.: Towards billion-gate secure computation with malicious adversaries. In: 21st USENIX Security Symposium, August 2012
 16. Kumaresan, R., Bentov, I.: How to use bitcoin to incentivize correct computations. In: Ahn, G.J., Yung, M., Li, N. (eds.) ACM CCS 14, pp. 30–41, November 2014
 17. Lindell, Y.: Fast cut-and-choose based protocols for malicious and covert adversaries. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 1–17. Springer, Heidelberg (2013)
 18. Peikert, C., Vaikuntanathan, V., Waters, B.: A framework for efficient and composable oblivious transfer. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 554–571. Springer, Heidelberg (2008)
 19. Pinkas, B., Schneider, T., Smart, N.P., Williams, S.C.: Secure two-party computation is practical. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 250–267. Springer, Heidelberg (2009)
 20. Zahur, S., Rosulek, M., Evans, D.: Two halves make a whole. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 220–250. Springer, Heidelberg (2015)