Efficient Constant-Round Multi-party Computation Combining BMR and SPDZ


Recently, there has been huge progress in the field of concretely efficient secure computation, even while providing security in the presence of malicious adversaries. This is especially the case in the two-party setting, where constant-round protocols exist that remain fast even over slow networks. However, in the multi-party setting, all concretely efficient fully secure protocols, such as SPDZ, require many rounds of communication. In this paper, we present a constant-round multi-party secure computation protocol that is fully secure in the presence of malicious adversaries and for any number of corrupted parties. Our construction is based on the constant-round protocol of Beaver et al. (the BMR protocol) and is the first version of that protocol that is concretely efficient for the dishonest majority case. Our protocol includes an online phase that is extremely fast and mainly consists of each party locally evaluating a garbled circuit. For the offline phase, we present both a generic construction (using any underlying MPC protocol) and a highly efficient instantiation based on the SPDZ protocol. Our estimates show the protocol to be considerably more efficient than previous fully secure multi-party protocols.


Background and Prior Work

Protocols for secure multi-party computation (MPC) enable a set of mutually distrustful parties to securely compute a joint functionality of their inputs. Such a protocol must guarantee privacy (meaning that only the output is learned), correctness (meaning that the output is correctly computed from the inputs), and independence of inputs (meaning that each party must choose its input independently of the others). Formally, security is defined by comparing the distribution of the outputs of all parties in a real protocol to an ideal model where an incorruptible trusted party computes the functionality for the parties. The two main types of adversaries that have been considered are semi-honest adversaries who follow the protocol specification but try to learn more than allowed by inspecting the transcript, and malicious adversaries who can run any arbitrary strategy in an attempt to break the protocol. Secure MPC has been studied since the late 1980s, and powerful feasibility results were proven showing that any two-party or multi-party functionality can be securely computed [14, 32], even in the presence of malicious adversaries. When an honest majority (or a 2/3 majority) is assumed, security can even be obtained information theoretically [5, 6, 30]. In this paper, we focus on the problem of obtaining security in the presence of malicious adversaries, and a dishonest majority.

Recently, there has been much interest in the problem of concretely efficient secure MPC, where “concretely efficient” refers to protocols that are sufficiently efficient to be implemented in practice (in particular, such protocols should not, say, use generic zero-knowledge proofs that operate on the circuit representation of cryptographic primitives). In the last few years, there has been tremendous progress on this problem, and there now exist extremely fast protocols that can be used in practice; see [12, 22,23,24, 26] for just a few examples. In general, two approaches have been followed: The first uses Yao’s garbled circuits [32], and the second utilizes interaction for every gate, as in the GMW protocol [14].

There are extremely efficient variants of Yao’s protocol for the two-party case that are secure against malicious adversaries (e.g., [23, 24]). These protocols run in a constant number of rounds and therefore remain fast over high latency networks. The BMR protocol [1, 31] is a variant of Yao’s protocol that runs in a multi-party setting with more than two parties. This protocol works by the parties jointly constructing a garbled circuit (possibly in an offline phase) and then later computing it (possibly in an online phase). However, in the case of malicious adversaries, the original BMR protocol suffers from two main drawbacks:

  • The protocol uses circuit-based zero-knowledge proofs to ensure that the parties input correct values obtained from the pseudorandom generator in the protocol. This requires a very large proof (of a circuit computing a pseudorandom generator) for every gate of the circuit. Thus, the protocol serves as a feasibility result for achieving constant-round MPC, but cannot be run in practice.

  • The original BMR protocol only guarantees security for malicious adversaries if at most a minority of the parties are corrupt. This is due to the fact that constant-round protocols for multi-party commitment, coin tossing, and zero knowledge were not known at the time for the setting of dishonest majority. The existence of constant-round protocols for multi-party secure computation in the presence of a dishonest majority was proven later in [18, 27]. However, these too are feasibility results and are not concretely efficient.

The TinyOT and SPDZ protocols [12, 26] follow the GMW paradigm and have separate offline and online phases. Both of these protocols overcome the issues of the BMR protocol in that they are secure against any number of corrupt parties, make only black-box usage of cryptographic primitives, and have very fast online phases that require only very simple (information-theoretic) operations. A black-box constant-round MPC construction for the case of an honest majority appears in [9], and for the case of a dishonest majority in [17]; however, these constructions appear to be concretely inefficient. In a setting of more than two parties, the protocols of [12, 26] are currently the only practical approach known. However, since they follow the GMW paradigm, their online phase requires a communication round for each level of the circuit. This results in a large amount of interaction and high latency, especially when the parties wish to compute deep circuits over slow networks (e.g., the Internet). To sum up, prior to this work, there was no known concretely efficient constant-round protocol for the multi-party and dishonest-majority setting (with the exception of [7], which considers the specific three-party case only). We therefore address this setting.

Our Contribution

In this paper, we provide the first concretely efficient constant-round protocol for the general multi-party case, with security in the presence of malicious adversaries and a dishonest majority. Our protocol has 12 communication rounds, of which only 3 rounds are in the online phase. This makes it much more efficient than prior protocols [12, 26] for deep circuits and/or slow networks, since prior works all have a number of rounds that is (at least) the depth of the circuit being computed. The basic idea behind the construction is to use an efficient (either constant or non-constant round) protocol, with security for malicious adversaries, to compute the gate tables of the BMR garbled circuit (and since the computation of these tables is of constant depth, this step is constant round). Our main conceptual contribution, resulting in a great performance improvement, is to show that it is not necessary for the parties to prove (expensive) zero-knowledge proofs that they used the correct pseudorandom generator values in the offline circuit generation phase. Rather, validation of the correctness is an immediate by-product of the online computation phase and therefore does not add any overhead to the computation. Although our basic generic protocol can be instantiated with any MPC protocol (either constant or non-constant round), we provide an optimized version that utilizes specific features of the SPDZ protocol [12].

In our general construction, the new constant-round protocol consists of two phases. In the first (offline) phase, the parties securely compute random shares of the BMR garbled circuit. If this is done naively, then the result is highly inefficient, since part of the computation involves computing a pseudorandom generator or a pseudorandom function multiple times for every gate. By modifying the original BMR garbled circuit, we show that it is possible to compute the circuit very efficiently. Specifically, each party locally computes the pseudorandom function as needed for every gateFootnote 1 and uses the results as input to the secure computation. Our proof of security shows that if a party cheats and inputs incorrect values, then no harm is done, since this only causes the honest parties to abort (which is inevitable with a dishonest majority anyway). Next, in the online phase, all that the parties need to do is reconstruct the single garbled circuit, exchange garbled values on the input wires, and evaluate the garbled circuit. The online phase is therefore very fast.

In our concrete instantiation of the protocol using SPDZ [12], there are actually three separate phases, with each being faster than the previous. The first two phases can be run offline, and the last phase is run online after the inputs become known.

  • The first (slow) phase depends only on an upper bound on the number of wires and the number of gates in the function to be evaluated. This phase uses somewhat homomorphic encryption (SHE) and is equivalent to the offline phase of the SPDZ protocol.

  • The second phase depends on the function to be evaluated but not on the function inputs; in our proposed instantiation, this mainly involves information theoretic primitives and is equivalent to the online phase of the SPDZ protocol.

  • In the third phase, the parties provide their inputs and evaluate the function; this phase just involves exchanging shares of the circuit and garbled values on the input wires, and locally evaluating the BMR garbled circuit.

We stress that our protocol is constant round in all phases since the depth of the circuit required to compute the BMR garbled circuit is constant. In addition, the computational cost of preparing the BMR garbled circuit is not much more than the cost of using SPDZ itself to compute the functionality directly. However, the key advantage that we gain is that our online time is extraordinarily fast, requiring only two rounds and a local computation of a single garbled circuit. This is faster than all other existing circuit-based multi-party protocols.

Finite Field Optimization of BMR. In order to efficiently compute the BMR garbled circuit, we define the garbling and evaluation operations over a finite field. A similar technique of using finite fields in the BMR protocol was introduced in [2] in the case of semi-honest security with an honest majority. In contrast to [2], our utilization of finite fields is carried out via vectors of field elements and uses the underlying arithmetic of the field as opposed to using very large finite fields to simulate integer arithmetic. This makes our modification in this respect more efficient.

Subsequent Work. Following our work, there has been renewed interest in the BMR protocol. First and foremost, the works of [16, 25] build directly on this work and show how to construct the BMR garbled circuit more efficiently. In addition, [3] considered the semi-honest setting, and [21] applies an optimized version of our construction to efficient RAM-based MPC. The fact that constant-round BMR protocols outperform secret-sharing-based protocols for circuits that are not shallow and for slow (Internet-like) networks has been demonstrated in [3, 4, 21].

Paper Structure

In Sect. 2, we give a detailed description of the BMR protocol. In Sect. 3, we present our general protocol that can utilize any MPC protocol for arithmetic circuits as a subprocedure. Then, in Sect. 4, we describe our specific BMR protocol that uses SPDZ [12] as the underlying MPC protocol, and we analyze its complexity. We utilize specific properties of SPDZ for further optimizations, in order to obtain an even more efficient evaluation. Finally, we provide a full proof of our construction in Sect. 5.

Background—The BMR Protocol [1]

We outline the basis of our protocol, which is the protocol of Beaver, Micali, and Rogaway against a semi-honest adversary.Footnote 2 The protocol is comprised of an offline phase and an online phase. In the offline phase, the garbled circuit is constructed by the parties, while in the online phase, a matching set of garbled inputs is exchanged between the parties and each party evaluates it locally.

We now describe the main elements of the BMR protocol. Let \(\kappa \) denote the computational security parameter, n denote the number of parties, and let \([n]=\{1,\ldots ,n\}\). The wires in the circuit that computes the function f are indexed \(0,\ldots ,W-1\). The protocol is based on the following key components:

Seeds and Superseeds. Two random seeds are associated with each wire in the circuit by each party. We denote the 0-seed and 1-seed that are chosen by party \(P_i\) (\(i\in [n]\)) for wire w as \(s^{i}_{w,0}\) and \(s^{i}_{w,1}\) such that \(s^{i}_{w,j} \in \{0,1\}^\kappa \). During the garbling process, the parties produce two superseeds for each wire, where the 0-superseed and 1-superseed for wire w are a simple concatenation of the 0-seeds and 1-seeds chosen by all the parties, namely, \(S_{w,0}=s^{1}_{w,0}\Vert \cdots \Vert s^{n}_{w,0}\) and \(S_{w,1}=s^{1}_{w,1}\Vert \cdots \Vert s^{n}_{w,1}\) where \(\Vert \) denotes concatenation. Denote \(L=|S_{w,j}|= n \cdot \kappa \).

Garbling Wire Values. For each gate g that calculates the function \(f_g\) (where \(f_g:\{0,1\}\times \{0,1\}\rightarrow \{0,1\}\)), the garbled gate of g is computed such that the superseeds associated with the output wire are encrypted (via a simple XOR) using the superseeds associated with the input wires, according to the truth table of \(f_g\). Specifically, a superseed \(S_{w,0}=s^{1}_{w,0}\Vert \cdots \Vert s^{n}_{w,0}\) is used to encrypt a value M of length L by computing \(M \oplus \bigoplus _{i=1}^n G(s^{i}_{w,0})\), where G is a pseudorandom generator stretching a seed of length \(\kappa \) to an output of length L. This means that every one of the seeds that make up the superseed must be known in order to learn the mask and decrypt.
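This n-out-of-n masking can be sketched in a few lines of Python. This is an illustration only, not the paper's implementation: SHAKE-256 stands in for the pseudorandom generator G, and KAPPA, N_PARTIES, and the function names are toy choices of ours.

```python
import hashlib

KAPPA = 16             # seed length in bytes (toy value)
N_PARTIES = 3
L = N_PARTIES * KAPPA  # length of a superseed, and of each plaintext

def G(seed: bytes) -> bytes:
    # PRG stand-in stretching a KAPPA-byte seed to L bytes
    return hashlib.shake_256(seed).digest(L)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(seeds: list, message: bytes) -> bytes:
    # Mask the message with the XOR of every party's PRG output;
    # all n seeds are needed to remove the mask again.
    ct = message
    for s in seeds:
        ct = xor(ct, G(s))
    return ct

decrypt = encrypt  # XOR masking is its own inverse
```

Decrypting with even one seed missing leaves the corresponding PRG mask in place, which is exactly the property the garbled gates rely on.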

Masking Values. Using random seeds instead of the original 0/1 values does not hide the original value if it is known that the first seed corresponds to 0 and the second seed to 1. Therefore, an unknown random masking bit, denoted by \(\lambda _{w}\), is assigned to each wire w independently. These masking bits remain unknown to the parties during the entire protocol, thereby preventing them from knowing the real values \(\rho _{w}\) that actually pass through the wires. The values that the parties do know are called the external values, denoted \(\Lambda _{w}\). An external value is defined to be the exclusive-or of the real value and the masking value; that is, \(\Lambda _{w}=\rho _{w}\oplus \lambda _{w}\). When evaluating the garbled circuit, the parties only see the external values of the wires, which are random bits that reveal nothing about the real values, unless they know the masking values. We remark that each party \(P_i\) is given the masking value associated with its input; hence, it can compute the external value itself (based on its actual input) and can send it to all other parties.

BMR Garbled Gates and Circuit. We can now define the BMR garbled circuit, which consists of the set of garbled gates, where a garbled gate is defined via a functionality that maps inputs to outputs. Let g be a gate with input wires a, b and output wire c. Each party \(P_i\) inputs \(s^{i}_{a,0},s^{i}_{a,1},s^{i}_{b,0},s^{i}_{b,1},s^{i}_{c,0},s^{i}_{c,1}\). Thus, the appropriate superseeds are \(S_{a,0},S_{a,1},S_{b,0},S_{b,1},S_{c,0},S_{c,1}\), where each superseed is given by \(S_{\alpha ,\beta }=s^{1}_{\alpha ,\beta }\Vert \cdots \Vert s^{n}_{\alpha ,\beta }\). In addition, \(P_i\) also inputs the output of a pseudorandom generator G applied to each of its seeds, and its share of the masking bits, i.e., \(\lambda ^{i}_{a},\lambda ^{i}_{b},\lambda ^{i}_{c}\) (where \(\lambda _{w}=\bigoplus _{i=1}^n \lambda ^{i}_{w}\)).

The output is the garbled gate of g which is comprised of a table of four ciphertexts, each of them encrypting either \(S_{c,0}\) or \(S_{c,1}\). The property of the gate construction is that given one superseed for wire a and one superseed for wire b it is possible to decrypt exactly one ciphertext and reveal the appropriate superseed for c (based on the values on the input wires and the gate type). The functionality, garble-gate-BMR, for garbling a single gate, is formally described in Functionality 1.

The BMR Online Phase. In the online phase, the parties only have to obtain one superseed for every circuit-input wire, and then every party can evaluate the circuit on its own, without interaction with the rest of the parties. Formally, Protocol 1 realizes the online phase.


Correctness. We explain now why the conditions for masking \(S_{c,0}\) and \(S_{c,1}\) are correct. The external values \(\Lambda _{a},\Lambda _{b}\) indicate to the parties which ciphertext to decrypt. Specifically, the parties decrypt \(A_g\) if \(\Lambda _{a}=\Lambda _{b}=0\), they decrypt \(B_g\) if \(\Lambda _{a}=0\) and \(\Lambda _{b}=1\), they decrypt \(C_g\) if \(\Lambda _{a}=1\) and \(\Lambda _{b}=0\), and they decrypt \(D_g\) if \(\Lambda _{a}=\Lambda _{b}=1\).

We need to show that given \(S_{a,\Lambda _{a}}\) and \(S_{b,\Lambda _{b}}\), the parties obtain \(S_{c,\Lambda _{c}}\). Consider the case that \(\Lambda _{a}=\Lambda _{b}=0\). (Note that \(\Lambda _{a}=0\) means that \(\lambda _{a}=\rho _{a}\), and \(\Lambda _{a}=1\) means that \(\lambda _{a}\ne \rho _{a}\), where \(\rho _{a}\) is the real value.) Since \(\rho _{a}=\lambda _{a}\) and \(\rho _{b}=\lambda _{b}\), we have that \(f_g(\lambda _{a},\lambda _{b})=f_g(\rho _{a},\rho _{b})\). If \(f_g(\lambda _{a},\lambda _{b})=\lambda _{c}\), then by definition \(f_g(\rho _{a},\rho _{b})=\rho _{c}\), and so we have \(\lambda _{c}=\rho _{c}\) and thus \(\Lambda _{c}=0\). Thus, the parties obtain \(S_{c,0} = S_{c,\Lambda _{c}}\). In contrast, if \(f_g(\lambda _{a},\lambda _{b})\ne \lambda _{c}\), then by definition \(f_g(\rho _{a},\rho _{b})\ne \rho _{c}\), and so we have \(\lambda _{c}=\bar{\rho _{c}}\) and thus \(\Lambda _{c}=1\). A similar analysis shows that the correct values are encrypted for all other combinations of \(\Lambda _{a},\Lambda _{b}\).
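The garbling and evaluation of a single gate can be illustrated end to end. The sketch below is a simplified, trusted-dealer rendering of our own, not the paper's protocol: SHAKE-256 stands in for the PRG, a per-row tweak replaces the paper's per-gate PRG indices, and `garble_and_gate` plays the role of the garbling functionality that is in reality computed securely.

```python
import hashlib

KAPPA, N = 16, 3       # toy parameters: 16-byte seeds, 3 parties
L = N * KAPPA

def G(seed, row):
    # PRG stand-in; `row` domain-separates the four table entries
    return hashlib.shake_256(seed + bytes([row])).digest(L)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate(seeds, lam):
    """seeds[w][b][i] is party i's b-seed for wire w in {'a','b','c'};
    lam[w] is the secret masking bit.  Returns the four-row table."""
    table = {}
    for ext_a in (0, 1):
        for ext_b in (0, 1):
            rho_a, rho_b = ext_a ^ lam['a'], ext_b ^ lam['b']
            ext_c = (rho_a & rho_b) ^ lam['c']     # external value on c
            pt = b''.join(seeds['c'][ext_c])       # superseed S_{c,ext_c}
            row = 2 * ext_a + ext_b
            for i in range(N):
                pt = xor(pt, G(seeds['a'][ext_a][i], row))
                pt = xor(pt, G(seeds['b'][ext_b][i], row))
            table[(ext_a, ext_b)] = pt
    return table

def evaluate(table, seeds, me, S_a, ext_a, S_b, ext_b):
    """Party `me` decrypts the row selected by the external values and
    learns ext_c by inspecting its own segment of the result."""
    row = 2 * ext_a + ext_b
    pt = table[(ext_a, ext_b)]
    for i in range(N):
        pt = xor(pt, G(S_a[i * KAPPA:(i + 1) * KAPPA], row))
        pt = xor(pt, G(S_b[i * KAPPA:(i + 1) * KAPPA], row))
    mine = pt[me * KAPPA:(me + 1) * KAPPA]
    if mine == seeds['c'][0][me]:
        return pt, 0
    if mine == seeds['c'][1][me]:
        return pt, 1
    raise RuntimeError("abort: my seed is missing")
```

Note the final branch: a party recognizes the external value of the output wire from its own seed, and aborts if neither of its seeds appears.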

Broadcast. As described in Protocol 1, in the online phase the parties are instructed to broadcast one key per input wire. In the semi-honest setting, broadcasting a value simply takes one communication round in which the sender sends its value to all other parties.Footnote 3 However, in the malicious setting a corrupted sender may send the value v to one party and a different value \(v'\ne v\) to another party. In the setting of \(t<n/3\) corrupted parties, fully secure broadcast (without abort) can be achieved in an expected constant number of rounds [13] (or deterministically in t rounds [28]). In the setting of \(t\ge n/3\) corrupted parties, and in particular with no honest majority at all, fully secure broadcast (without abort) can only be achieved using a public key infrastructure and with t rounds of communication. However, we do not actually need a fully secure broadcast, since we allow an adversary to cause some parties to abort. Thus, we can use a simple two-round echo-broadcast protocol; this has been shown to be sufficient for secure computation with abort [15] (and is even UC secure). In more detail, the echo-broadcast works by having the sender send its message v to all parties \(P_1,\ldots ,P_n\) in the first round, and then every party \(P_i\) sends (echoes) the value that it received in the first round to all other parties. If any party received two different values, then it aborts. Otherwise, it outputs the unique value that it saw. It is not difficult to show that if the dealer is honest, then all honest parties either output the dealer’s message v or abort (but no other value can be output). Furthermore, if the dealer is dishonest, then there is a unique value v such that every honest party either outputs v or aborts. See [15] for more details. We therefore write our protocol assuming a broadcast channel and utilize the above protocol to achieve broadcast in the point-to-point setting. As a result, our protocol has a constant number of rounds even in the point-to-point model.
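As a sanity check, the echo-broadcast logic can be simulated in a few lines. This is our own toy model of a complete network in which every party sees every echo; a real adversary could echo selectively, which only enlarges the set of aborting parties.

```python
def echo_broadcast(sent, n):
    # Round 1: a (possibly corrupt) dealer sends sent[i] to party i.
    # Round 2: every party echoes what it received to everyone else.
    echoes = list(sent)
    outputs = []
    for i in range(n):
        seen = {echoes[j] for j in range(n)}   # all values party i has seen
        outputs.append(seen.pop() if len(seen) == 1 else "abort")
    return outputs
```

With an honest dealer every party outputs the dealer's value; if the dealer equivocates, the conflicting echoes cause an abort.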

The General Protocol

A Modified BMR Garbling

In order to facilitate fast secure computation of the garbled circuit in the offline phase, we make some changes to the original BMR garbling described above. First, instead of using the XOR of bit strings, and hence a binary circuit, to instantiate the garbled gate, we use additions of elements in a finite field, and hence an arithmetic circuit. This idea was used by [2] in the FairplayMP system, which used the BGW protocol [5] in order to compute the BMR circuit. Note that FairplayMP achieved semi-honest security with an honest majority, whereas our aim is malicious security for any number of corrupted parties. Consequently, we naturally replace the seeds \(s^{i}_{x,b}\) that the parties input, which are bit strings used as input to a pseudorandom generator, with keys \(k^{i}_{x,b}\), which are field elements used as keys to a pseudorandom function.

Second, we observe that the external values of wiresFootnote 4 do not need to be explicitly encoded, since each party can learn them by looking at its own “part” of the garbled value. In the original BMR garbling, each superseed contains n seeds provided by the parties. Thus, if a party’s zero seed is in the decrypted superseed, then it knows that the external value (denoted by \(\Lambda \)) is zero, and otherwise it knows that it is one.

Naively, it seems that independently computing each gate securely in the offline phase is insufficient, since the corrupted parties might use inconsistent inputs for the computations of different gates. For example, if the output wire of gate g is an input to gate \(g'\), the input provided for the computation of the table of g might not agree with the inputs used for the computation of the table of \(g'\). It therefore seems that the offline computation must verify the consistency of the computations of different gates. This type of verification would greatly increase the cost since the evaluation of the pseudorandom functions (or pseudorandom generator in the original BMR) used in computing the tables needs to be checked inside the secure computation. This would mean that the pseudorandom function is not treated as a black box, and the circuit for the offline phase would be huge (as it would include multiple copies of a subcircuit for computing pseudorandom function computations for every wire). Instead, we prove that this type of corrupt behavior can only result in an abort in the online phase, which would not affect the security of the protocol. This observation enables us to compute each gate independently and model the pseudorandom function used in the computation as a black box, thus simplifying the protocol and optimizing its performance.

We also encrypt garbled values as vectors; this enables us to use a finite field that can encode values from \(\{0,1\}^\kappa \) (for each vector coordinate), rather than a much larger finite field that can encode all of \(\{0,1\}^L\). Due to this, the parties choose keys (for a pseudorandom function) rather than seeds for a pseudorandom generator. The keys that \(P_i\) chooses for wire w are denoted \(k^{i}_{w,0}\) and \(k^{i}_{w,1}\), which will be elements in a finite field \({\mathbb {F}}_p\) such that \(2^\kappa< p < 2^{\kappa +1}\). In fact, we pick p to be the smallest prime number larger than \(2^\kappa \), and set \(p = 2^\kappa +\alpha \), where (by the prime number theorem) we expect \(\alpha \approx \kappa \). We shall denote the pseudorandom function by \(F_k(x)\), where the key and output will be interpreted as elements of \({\mathbb {F}}_p\) in much of our MPC protocol. In practice, the function \(F_k(x)\) we suggest will be implemented using CBC-MAC using a block cipher \(\mathsf {enc}\) with key and block size \(\kappa \) bits, as \( F_k(x) = \mathsf {CBC}\text {-}\mathsf {MAC}_{\mathsf {enc}}(k \pmod {2^{\kappa }},x)\). Note that the inputs x to our pseudorandom function will all be of the same length and so using naive CBC-MAC will be secure.
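To make this concrete, here is a runnable sketch of \(F_k(x)\) under stated assumptions: the paper suggests AES as \(\mathsf {enc}\), but to keep the sketch dependency-free a truncated keyed hash stands in for the block cipher, and the names `enc` and `F` are ours.

```python
import hashlib

KAPPA = 128          # key/block size in bits
BLOCK = KAPPA // 8

def enc(key: bytes, block: bytes) -> bytes:
    # Stand-in for a KAPPA-bit block cipher such as AES (illustrative only)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def F(k: int, x: bytes) -> int:
    """F_k(x) = CBC-MAC_enc(k mod 2^KAPPA, x); the KAPPA-bit result is read
    as an integer, i.e., as an element of F_p for p just above 2^KAPPA."""
    key = (k % (1 << KAPPA)).to_bytes(BLOCK, "big")
    state = bytes(BLOCK)
    assert len(x) % BLOCK == 0       # inputs are fixed-length, as in the text
    for i in range(0, len(x), BLOCK):
        state = enc(key, bytes(a ^ b for a, b in zip(state, x[i:i + BLOCK])))
    return int.from_bytes(state, "big")
```

Keys k and \(k + 2^\kappa\) collapse to the same block-cipher key; this is exactly the \(k \bmod 2^\kappa\) mapping whose statistical distance from uniform is analyzed in the text.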

We interpret the \(\kappa \)-bit output of \(F_k(x)\) as an element in \({\mathbb {F}}_p\) where \(p=2^\kappa +\alpha \). Note that a mapping which sends an element \(k \in {\mathbb {F}}_p\) to a \(\kappa \)-bit key by computing \(k \pmod {2^{\kappa }}\) induces a distribution on the key space of the block cipher which has statistical distance from uniform of only

$$\begin{aligned} \frac{1}{2} \left( (2^\kappa -\alpha ) \cdot \left( \frac{1}{2^\kappa }-\frac{1}{p} \right) + \alpha \cdot \left( \frac{2}{p}-\frac{1}{2^\kappa } \right) \right) \approx \frac{\alpha }{p} \approx \frac{\kappa }{2^\kappa }. \end{aligned}$$

The output of the function \(F_k(x)\) will also induce a distribution which is close to uniform on \({\mathbb {F}}_p\). In particular, the statistical distance of the output in \({\mathbb {F}}_p\), for a block cipher with block size \(\kappa \), from uniform is given by

$$\begin{aligned} \frac{1}{2} \left( 2^\kappa \cdot \left( \frac{1}{2^\kappa }-\frac{1}{p} \right) + \alpha \cdot \left( \frac{1}{p}-0 \right) \right) = \frac{\alpha }{p} \approx \frac{\kappa }{2^\kappa }. \end{aligned}$$

(Note that \(1-\frac{2^\kappa }{p} = \frac{\alpha }{p}\).) The statistical difference is therefore negligible. In practice, we set \(\kappa =128\) and use the AES cipher as the block cipher \(\mathsf {enc}\).
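Both closed forms can be verified numerically. The following check uses toy parameters of ours (\(\kappa = 8\), so \(p = 257\) and \(\alpha = 1\)), recomputes the key-mapping distance by brute force, and confirms that the output distance equals \(\alpha /p\) exactly.

```python
from fractions import Fraction

kappa = 8
two_k = 2 ** kappa        # 256
p = 257                   # smallest prime above 2^kappa
alpha = p - two_k         # here alpha = 1

# Closed form for the key-mapping distance: alpha keys have two preimages
# under k mod 2^kappa, the remaining 2^kappa - alpha have one.
sd_keys = Fraction(1, 2) * ((two_k - alpha) * (Fraction(1, two_k) - Fraction(1, p))
                            + alpha * (Fraction(2, p) - Fraction(1, two_k)))

# Closed form for the output distance: the alpha elements of F_p at or
# above 2^kappa are never hit by a kappa-bit block-cipher output.
sd_out = Fraction(1, 2) * (two_k * (Fraction(1, two_k) - Fraction(1, p))
                           + alpha * Fraction(1, p))

# Brute-force recomputation of the key-mapping distance.
counts = [0] * two_k
for k in range(p):
    counts[k % two_k] += 1
sd_brute = Fraction(1, 2) * sum(abs(Fraction(c, p) - Fraction(1, two_k))
                                for c in counts)

assert sd_keys == sd_brute
assert sd_out == Fraction(alpha, p)
```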


The goal of this paper is to present a protocol \(\Pi _{\mathsf {SFE}}\) that realizes Functionality 2 in a constant number of rounds in the setting of a malicious adversary who may corrupt up to \(n-1\) parties. Our constant-round protocol \(\Pi _{\mathsf {SFE}}\) implementing \(\mathcal {F}_{\mathsf {SFE}}\) is built in the \(\mathcal {F}_{\mathsf {MPC}}\)-hybrid model, i.e., utilizing a sub-protocol \(\Pi _{\mathsf {MPC}}\) which implements the functionality \(\mathcal {F}_{\mathsf {MPC}}\) given in Functionality 3. The relation diagram between functionalities and protocols presented in this paper is presented in Fig. 1. The generic MPC functionality \(\mathcal {F}_{\mathsf {MPC}}\) is reactive. We require a reactive MPC functionality because our protocol \(\Pi _{\mathsf {SFE}}\) will make repeated sequences of calls to \(\mathcal {F}_{\mathsf {MPC}}\) involving both output and computation commands. In terms of round complexity, all that we require of the sub-protocol \(\Pi _{\mathsf {MPC}}\) is that each of the commands which it implements can be implemented in constant rounds. Given this requirement, our larger protocol \(\Pi _{\mathsf {SFE}}\) will be constant round.

Fig. 1

Outline of our construction: Dashed lines mean that \(\Pi \) securely realizes \(\mathcal{{F}}\); solid lines mean that \(\Pi \) is constructed in the \(\mathcal{F}\)-hybrid model

In what follows, we assume that the \(\mathcal {F}_{\mathsf {MPC}}\) functionality maintains a data structure in which it stores its internal values, so that the parties may request to perform operations (i.e., Input, Output, Add, Multiply) over the entries of the data structure. We use the notation \([ val ]\) to represent the key used by the functionality to store value \( val \). In addition, we use the arithmetic shorthands \([z] = [x]+[y]\) and \([z]=[x]\cdot [y]\) to represent the result of calling the Add and Multiply commands over the inputs x, y and output z. That is, after calling \([z] = [x]+[y]\) (resp. \([z]=[x]\cdot [y]\)) the key [z] is associated with the value \(x+y\) (resp. \(x\cdot y\)).
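The handle-based interface can be mimicked by a toy, fully trusted stand-in for \(\mathcal {F}_{\mathsf {MPC}}\). Class and method names are ours; in a real instantiation such as SPDZ, the store would be secret-shared among the parties rather than held by a single object.

```python
class IdealMPC:
    """Toy trusted stand-in for the reactive F_MPC functionality: values
    live in an internal store indexed by handles (written [x] in the text),
    and the parties only ever manipulate the handles."""
    def __init__(self, p):
        self.p = p            # all arithmetic is over F_p
        self.store = {}
        self.next_id = 0

    def _new(self, val):
        vid = self.next_id
        self.next_id += 1
        self.store[vid] = val % self.p
        return vid

    def input(self, party, val):   # Input: a party provides a secret value
        return self._new(val)

    def add(self, x, y):           # [z] = [x] + [y]
        return self._new(self.store[x] + self.store[y])

    def mul(self, x, y):           # [z] = [x] * [y]
        return self._new(self.store[x] * self.store[y])

    def output(self, x):           # Output: reveal the stored value
        return self.store[x]
```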

In the Output command of \(\mathcal {F}_{\mathsf {MPC}}\) (Functionality 3), \(i=0\) means that the value indexed by \({ varid}\) is output to all parties and \(i\ne 0\) means that it is output to party \(P_i\) only. In both cases, the adversary has the power to decide if an honest party receives the output value or not (where in the latter case, it aborts). Furthermore, when \(i=0\), the adversary has the ability to inspect that value before deciding whether to abort.Footnote 5


The Offline Functionality: preprocessing-I and preprocessing-II

Our protocol, \(\Pi _{\mathsf {SFE}}\), is comprised of an offline phase and an online phase, where the offline phase, which implements the functionality \(\mathcal {F}_{\mathsf {offline}}\), is divided into two subphases: preprocessing-I  and preprocessing-II. To aid exposition, we first present the functionality \(\mathcal {F}_{\mathsf {offline}}\) in Functionality 4. In Sect. 4, we present an efficient methodology to implement \(\mathcal {F}_{\mathsf {offline}}\) which uses the SPDZ protocol as the underlying MPC protocol for securely computing functionality \(\mathcal {F}_{\mathsf {MPC}}\); while in “Appendix A” we present a generic implementation of \(\mathcal {F}_{\mathsf {offline}}\) based on any underlying protocol \(\Pi _{\mathsf {MPC}}\) implementing \(\mathcal {F}_{\mathsf {MPC}}\).

In describing functionality \(\mathcal {F}_{\mathsf {offline}}\), we distinguish between attached wires and common wires: The attached wires are the circuit-input wires that are directly connected to the parties (i.e., these are input wires to the circuit). Thus, if every party has \(\ell \) inputs to the functionality f, then there are \(n\cdot \ell \) attached wires. The rest of the wires are considered common wires, i.e., they are not directly connected to any of the parties.

Our preprocessing-I phase takes as input an upper bound W on the number of wires in the circuit and an upper bound G on the number of gates in the circuit. The upper bound G is not strictly needed, but will be needed in any efficient instantiation based on the SPDZ protocol. In contrast, the preprocessing-II phase requires knowledge of the precise function f being computed, which we assume is encoded as a binary circuit \(C_f\).

In order to optimize the performance of the preprocessing-II phase, the secure computation does not evaluate the pseudorandom function F(), but rather has the parties compute F() and provide the results as an input to the protocol. Observe that corrupted parties may provide incorrect input values \(F_{k^i_{x,j}}()\), and thus the resulting garbled circuit may not actually be a valid BMR garbled circuit. Nevertheless, we show that such behavior can only result in an abort. This is due to the fact that if a value is incorrect and honest parties see that their key (coordinate) is not present in the resulting vector, then they will abort. In contrast, if their key is present, then they proceed and the incorrect value had no effect. Since the keys are secret, the adversary cannot give an incorrect value that will result in a different correct key, except with negligible probability. Likewise, a corrupted party cannot influence the masking values \(\lambda \), and thus they are consistent throughout (when a given wire is input into multiple gates), ensuring correctness.

Securely Computing \(\mathcal {F}_{\mathsf {SFE}}\) in the \(\mathcal {F}_{\mathsf {offline}}\)-Hybrid Model

In Protocol 2, we present our protocol \(\Pi _{\mathsf {SFE}}\) for securely computing \(\mathcal {F}_{\mathsf {SFE}}\) in the \(\mathcal {F}_{\mathsf {offline}}\)-hybrid model. In this paper, we prove the following:

Theorem 1

(Main Theorem) If F is a pseudorandom function, then Protocol \(\Pi _{\mathsf {SFE}}\) securely computes \(\mathcal {F}_{\mathsf {SFE}}\) in the \(\mathcal {F}_{\mathsf {offline}}\)-hybrid model, in the presence of a static malicious adversary corrupting any number of parties.


Implementing \(\mathcal {F}_{\mathsf {offline}}\) in the \(\mathcal {F}_{\mathsf {MPC}}\)-Hybrid Model

At first sight, it may seem that in order to construct an entire garbled circuit (i.e., the output of \(\mathcal {F}_{\mathsf {offline}}\)), an ideal functionality that computes each garbled gate can be used separately for each gate of the circuit (that is, for each gate the parties provide their PRF results on the keys and shares of the masking values associated with that gate’s wires). This is sufficient when considering semi-honest adversaries. However, in the setting of malicious adversaries, this can be problematic since parties might input inconsistent values. For example, the masking value \(\lambda _{w}\) that is common to a number of gates (which happens when some wire enters more than one gate) needs to be identical in all of these gates. In addition, the pseudorandom function values might not be correctly computed from the pseudorandom function keys that are input. In order to make the computation of the garbled circuit efficient, we will not check that the pseudorandom function values are correct. However, it is necessary to ensure that the \(\lambda _{w}\) values are correct and that they (and likewise the keys) are consistent between gates (e.g., as in the case where the same wire is input to multiple gates). We achieve this by computing the entire circuit at once, via a single functionality.

The cost of this computation is actually almost the same as separately computing each gate. The functionality receives from party \(P_i\) the values \(k^{i}_{w,0},k^{i}_{w,1}\) and the output of the pseudorandom function applied to the keys only once, regardless of the number of gates to which w is input. Consistency is therefore immediate throughout, and the potential attack on consistency is prevented. Moreover, the \(\lambda _{w}\) values are generated once and used consistently by the circuit, making it easy to ensure that the \(\lambda \) values are correct.

Another issue that arises is that the single garbled gate functionality expects to receive a single masking value for each wire. However, since this value is secret, it must be generated from shares that are input by the parties. This introduces a challenge since the functionality \(\mathcal {F}_{\mathsf {MPC}}\) works (i.e., its commands are) over a finite field \({\mathbb {F}}_p\), while the masking bit must be a single binary bit. Thus, it is not possible to simply have each party choose its own share bit and then XOR these bits inside \(\mathcal {F}_{\mathsf {MPC}}\). The only option offered by \(\mathcal {F}_{\mathsf {MPC}}\) is to have the parties each input a field element, and then use these inputs to produce a uniform bit that will be used to mask the wire’s signal. This must be done in a way that results in a uniform value in \(\{0,1\}\), even in the presence of malicious parties. Such parties may input field elements that do not follow the prescribed distribution and thereby potentially harm security; this must be prevented by our protocol.

In “Appendix A,” we describe a general method for securely computing \(\mathcal {F}_{\mathsf {offline}}\) in the \(\mathcal {F}_{\mathsf {MPC}}\)-hybrid model, using any protocol that securely computes the \(\mathcal {F}_{\mathsf {MPC}}\) ideal functionality. The aforementioned issue is solved in the following way. The computation is performed by having the parties input random masking values \(\lambda ^{i}_{w}\in \{1,-1\}\), instead of bits. The value \(\mu _w\) is then computed as the product of \(\lambda ^{1}_{w},\ldots ,\lambda ^{n}_{w}\), which is random in \(\{-1,1\}\) as long as at least one of the shares is random. The product is then mapped to \(\{0,1\}\) in \({\mathbb {F}}_p\) by computing \(\lambda _{w}=\frac{\mu _w+1}{2}\).

In order to prevent corrupted parties from inputting \(\lambda ^{i}_{w}\) values that are not in \(\{-1,+1\}\), the protocol for computing the circuit outputs \((\prod _{i=1}^n\lambda ^{i}_{w})^2-1\), for every wire w (where \(\lambda ^{i}_{w}\) is the share contributed from party i for wire w), and the parties can simply check whether it is equal to zero or not. Thus, if any party cheats by causing some \(\lambda _{w} = \prod _{i=1}^n\lambda ^{i}_{w} \notin \{-1,+1\}\), then this will be discovered since the circuit outputs a nonzero value for \((\prod _{i=1}^n\lambda ^{i}_{w})^2-1\), and so the parties detect this and can abort. Since this occurs before any inputs are used, nothing is revealed by this. Furthermore, if \(\prod _{i=1}^n\lambda ^{i}_{w}\in \{-1,+1\}\), then the additional value output reveals nothing about \(\lambda _{w}\) itself.
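The \(\{-1,+1\}\) sharing trick and its square-check can be sketched in plaintext as follows. This is an illustrative Python sketch with made-up names and a toy field; in the real protocol, these operations are performed on secret-shared values inside \(\mathcal {F}_{\mathsf {MPC}}\):

```python
# Plaintext sketch of the {-1,+1} masking trick; p is a small stand-in
# for the large prime field F_p used by the protocol.
p = 101

def combine_mask_shares(shares):
    """Multiply the parties' shares (each claimed to be in {-1,+1} mod p),
    map the product mu to a bit via (mu+1)/2, and output the public check
    value mu^2 - 1, which is zero iff mu is in {-1,+1}."""
    mu = 1
    for s in shares:
        mu = (mu * s) % p
    check = (mu * mu - 1) % p
    lam = (mu + 1) * pow(2, -1, p) % p  # maps -1 -> 0 and +1 -> 1 (Python 3.8+)
    return lam, check

# Honest run: 1 * (-1) * (-1) = +1, so lambda = 1 and the check passes.
assert combine_mask_shares([1, p - 1, p - 1]) == (1, 0)
# A cheater inputting 3 (outside {-1,+1}) is caught: the check is nonzero.
assert combine_mask_shares([3, 1, 1])[1] != 0
```

The product is uniform in \(\{-1,+1\}\) as long as a single honest party's share is uniform, and the published check value reveals nothing beyond membership in \(\{-1,+1\}\).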

In the next section, we show that all of these complications can be removed by basing our implementation for \(\mathcal {F}_{\mathsf {MPC}}\) upon the specific SPDZ protocol. The reason why the SPDZ implementation is simpler—and more efficient—is that SPDZ provides generation of such shared values effectively for free.

The SPDZ-Based Instantiation

Utilizing the SPDZ Protocol

As discussed in Sect. 3.1, in the offline phase we use an underlying secure computation protocol, which, given a binary circuit and the matching inputs to its input wires, securely and distributively computes that binary circuit. In this section, we simplify and optimize the implementation of the protocol \(\Pi _{\mathsf {offline}}\) which implements the functionality \(\mathcal {F}_{\mathsf {offline}}\) by utilizing the specific SPDZ protocol as the underlying implementation of \(\mathcal {F}_{\mathsf {MPC}}\). These optimizations are possible because the SPDZ protocol provides a richer interface to the protocol designer than the naive generic MPC interface given in the functionality \(\mathcal {F}_{\mathsf {MPC}}\). In particular, it provides the capability of directly generating shared random bits and strings. These are used for generating the masking values and pseudorandom function keys. Note that one of the most expensive steps in FairplayMP [2] was coin tossing to generate the masking values; by utilizing the specific properties of SPDZ, this is achieved essentially for free.

In Sect. 4.2, we describe explicit operations that are to be carried out on the inputs in order to achieve the desired output; the circuit’s complexity analysis appears in Sect. 4.3, and the expected results from an implementation of the circuit using the SPDZ protocol are in Sect. 4.6.

Throughout, we utilize \(\mathcal {F}_{\mathsf {SPDZ}}\) (Functionality 6), an idealized representation of the SPDZ protocol, akin to the functionality \(\mathcal {F}_{\mathsf {MPC}}\) from Sect. 3.1. Note that in the real protocol, \(\mathcal {F}_{\mathsf {SPDZ}}\) is itself implemented by an offline phase (essentially corresponding to our preprocessing-I) and an online phase (corresponding to our preprocessing-II). We fold the SPDZ offline phase into the Initialize command of \(\mathcal {F}_{\mathsf {SPDZ}}\). In the SPDZ offline phase, we need to know the maximum number of multiplications, random values and random bits required in the online phase. In that phase, the random shared bits and values are produced, as well as the multiplication (Beaver) triples for use in the multiplication gates performed in the SPDZ online phase [11]. In particular, consuming shared random bits and values incurs no cost during the SPDZ online phase; all associated costs are incurred in the SPDZ offline phase. The protocol, which utilizes somewhat homomorphic encryption (SHE) to produce the shared random values/bits and the Beaver multiplication triples, is given in [11].

The \(\Pi _{\mathsf {offline}}\)  SPDZ-Based Protocol

As remarked earlier, \(\mathcal {F}_{\mathsf {offline}}\) can be securely computed using any secure multi-party protocol. This is advantageous since it means that future efficiency improvements to concretely secure multi-party computation (with dishonest majority) will automatically make our protocol faster. However, currently the best option is SPDZ. Specifically, this option utilizes the fact that SPDZ can very efficiently generate coin tosses. This means that it is not necessary for the parties to input the \(\lambda _w^i\) values, multiply them together to obtain \(\lambda _w\) and to output the check values \((\lambda _w)^2-1\). Thus, this yields a significant efficiency improvement. We now describe the protocol which implements \(\mathcal {F}_{\mathsf {offline}}\) in the \(\mathcal {F}_{\mathsf {SPDZ}}\)-hybrid model.



  1.

    Initialize the MPC Engine: Call Initialize on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\) with input p, a prime with \(p > 2^k\), and with parameters

    $$\begin{aligned} M = G_X(2n+3) + G_A(4n+5), \quad B = W, \quad R = 2 \cdot W \cdot n, \quad I = 8 \cdot G \cdot n, \end{aligned}$$

    where \(G_X,G_A\) are the number of XOR and AND gates in \(C_f\), respectively, n is the number of parties, and W is the number of wires in the circuit. In practice, W needs only be an upper bound on the total number of wires in the circuit which will eventually be evaluated. The value of M is derived from the complexity analysis below, and \(I= 8 \cdot G \cdot n\) since every gate has 2 input wires, each input wire has 2 keys per party, and each key yields 2 pseudorandom function output values per party.

  2.

    Generate wire masks: For every circuit wire w, we need to generate a sharing of the (secret) masking value \(\lambda _{w}\). Thus, for all wires w the parties execute the command RandomBit on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\); the output is denoted by \([\lambda _{w}]\). The functionality \(\mathcal {F}_{\mathsf {SPDZ}}\) guarantees that \(\lambda _{w} \in \{0,1\}\).

  3.

    Generate keys: For every wire w, each party \(i \in [1,\ldots ,n]\) and for \(j \in \{0,1\}\), the parties call Random on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\) to obtain output \([k^{i}_{w,j}]\). The parties then call Output to open \([k^{i}_{w,j}]\) to party i for all j and w. The vector of shares \([k^{i}_{w,j}]_{i=1}^n\) shall be denoted by \([\mathbf{k}_{w,j}]\).
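For concreteness, the parameter sizes passed to Initialize in Step 1 can be computed as follows. This is an illustrative sketch (the function name is ours, not from the protocol):

```python
def spdz_init_params(g_xor, g_and, n, w):
    """Sizes passed to Initialize: multiplications M, random bits B,
    random field elements R, and inputs I, following the formulas above.
    g_xor/g_and: number of XOR/AND gates; n: parties; w: wires (upper bound)."""
    g = g_xor + g_and
    m = g_xor * (2 * n + 3) + g_and * (4 * n + 5)
    b = w          # one RandomBit per wire (the masks lambda_w)
    r = 2 * w * n  # two keys per wire per party, via Random
    i = 8 * g * n  # PRF output values entered via InputData
    return m, b, r, i
```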

preprocessing-II. (This protocol implements the computation of the gate tables as detailed in the BMR protocol. The correctness of this construction is explained at the end of Sect. 2.)

  1.

    Output input wire values: For all wires w which are attached to party \(P_i\) (i.e., correspond to input bits of \(P_i\)), we execute the command Output on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\) to open \([\lambda _{w}]\) to party i.

  2.

    Output masks for circuit-output wires:  In order to reveal the real values of the circuit-output wires, it is required to reveal their masking values. That is, for every circuit-output wire w, the parties execute the command Output on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\) for the stored value \([\lambda _{w}]\).

  3.

    Calculate garbled gates:  This step is performed for each gate g in the circuit in parallel. Specifically, let g be a gate whose input wires are a, b and whose output wire is c. Proceed as follows:

    (a)

      Calculate output indicators:  This step calculates four indicators \([x_a]\), \([x_b]\), \([x_c]\), \([x_d]\) whose values will be in \(\{0,1\}\). Each one of the garbled labels \(\mathbf{A}_g, \mathbf{B}_g, \mathbf{C}_g, \mathbf{D}_g\) is a vector of n elements that hides either the vector \(\mathbf{k}_{c,0}=k^{1}_{c,0},\ldots ,k^{n}_{c,0}\) or \(\mathbf{k}_{c,1}=k^{1}_{c,1},\ldots ,k^{n}_{c,1}\); which vector is hidden depends on these indicators, i.e., if \(x_a=0\) then \(\mathbf{A}_g\) hides \(\mathbf{k}_{c,0}\) and if \(x_a=1\) then \(\mathbf{A}_g\) hides \(\mathbf{k}_{c,1}\). Similarly, \(\mathbf{B}_g\) depends on \(x_b\), \(\mathbf{C}_g\) depends on \(x_c\), and \(\mathbf{D}_g\) depends on \(x_d\). Each indicator is determined by some function of \([\lambda _{a}]\), \([\lambda _{b}]\), \([\lambda _{c}]\) and the truth table of the gate \(f_g\). Every indicator is calculated slightly differently, as follows (concrete examples are given after the preprocessing specification):

      $$\begin{aligned}{}[x_a]&= \left( f_g([\lambda _{a}],[\lambda _{b}]){\mathop {\ne }\limits ^{?}}[\lambda _{c}] \right) =(f_g([\lambda _{a}],[\lambda _{b}])-[\lambda _{c}])^2\\ [x_b]&= \left( f_g([\lambda _{a}],[\overline{\lambda _{b}}]){\mathop {\ne }\limits ^{?}}[\lambda _{c}] \right) =(f_g([\lambda _{a}],(1-[\lambda _{b}]))-[\lambda _{c}])^2\\ [x_c]&=\left( f_g([\overline{\lambda _{a}}],[\lambda _{b}]){\mathop {\ne }\limits ^{?}}[\lambda _{c}] \right) =(f_g((1-[\lambda _{a}]),[\lambda _{b}])-[\lambda _{c}])^2\\ [x_d]&= \left( f_g([\overline{\lambda _{a}}],[\overline{\lambda _{b}}]){\mathop {\ne }\limits ^{?}}[\lambda _{c}] \right) =(f_g((1-[\lambda _{a}]),(1-[\lambda _{b}]))-[\lambda _{c}])^2 \end{aligned}$$

      where the binary operator \({\mathop {\ne }\limits ^{?}}\) is defined as \( [a] {\mathop {\ne }\limits ^{?}}[b]\) equals [0] if \(a=b\), and equals [1] if \(a \ne b\). For the XOR function on a and b, for example, the operator can be evaluated by computing \([a]+[b]-2\cdot [a]\cdot [b]\). Thus, these calculations can be computed using Add and Multiply.

    (b)

      Assign the correct vector:  As described above, we use the calculated indicators to choose for every garbled label either \(\mathbf{k}_{c,0}\) or \(\mathbf{k}_{c,1}\). Calculate:

      $$\begin{aligned}{}[\mathbf{v}_{c,x_a}]&= [\mathbf{k}_{c,0}] + [x_a]\cdot ([\mathbf{k}_{c,1}] - [\mathbf{k}_{c,0}]) \\ [\mathbf{v}_{c,x_b}]&= [\mathbf{k}_{c,0}] + [x_b]\cdot ([\mathbf{k}_{c,1}] - [\mathbf{k}_{c,0}]) \\ [\mathbf{v}_{c,x_c}]&= [\mathbf{k}_{c,0}] + [x_c]\cdot ([\mathbf{k}_{c,1}] - [\mathbf{k}_{c,0}]) \\ [\mathbf{v}_{c,x_d}]&= [\mathbf{k}_{c,0}] + [x_d]\cdot ([\mathbf{k}_{c,1}] - [\mathbf{k}_{c,0}]). \end{aligned}$$

      In each equation, either the value \(\mathbf{k}_{c,0}\) or the value \(\mathbf{k}_{c,1}\) is taken, depending on the corresponding indicator value. Once again, these calculations can be computed using Add and Multiply.

    (c)

      Calculate garbled labels:  Party i knows the value of \(k^{i}_{w,b}\) (for wire w that enters gate g) for \(b \in \{0,1\}\), and so can compute the \(2 \cdot n\) values \(F_{k^{i}_{w,b}}(0\!\parallel \!1\!\parallel \!g), \ldots , F_{k^{i}_{w,b}}(0\!\parallel \!n\!\parallel \!g) \) and \(F_{k^{i}_{w,b}}(1\!\parallel \!1\!\parallel \!g), \ldots , F_{k^{i}_{w,b}}(1\!\parallel \!n\!\parallel \!g) \). Party i inputs these values by calling InputData on the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\). The resulting input pseudorandom vectors are denoted by

      $$\begin{aligned}{}[F^0_{k^{i}_{w,b}}(g)]&=[F_{k^{i}_{w,b}}(0\!\parallel \!1\!\parallel \!g), \ldots , F_{k^{i}_{w,b}}(0\!\parallel \!n\!\parallel \!g) ] \\ [F^1_{k^{i}_{w,b}}(g)]&=[F_{k^{i}_{w,b}}(1\!\parallel \!1\!\parallel \!g), \ldots , F_{k^{i}_{w,b}}(1\!\parallel \!n\!\parallel \!g) ]. \end{aligned}$$

      The parties now compute \([\mathbf{A}_g], [\mathbf{B}_g], [\mathbf{C}_g], [\mathbf{D}_g]\), using Add, via

      $$\begin{aligned}{}[\mathbf{A}_g]&= \sum \nolimits _{i=1}^{n} \left( [F^0_{k^{i}_{a,0}}(g)]+[F^0_{k^{i}_{b,0}}(g)]\right) ~+~[\mathbf{v}_{c,x_a}] \\ [\mathbf{B}_g]&= \sum \nolimits _{i=1}^{n} \left( [F^1_{k^{i}_{a,0}}(g)]+[F^0_{k^{i}_{b,1}}(g)]\right) ~+~[\mathbf{v}_{c,x_b}] \\ [\mathbf{C}_g]&= \sum \nolimits _{i=1}^{n} \left( [F^0_{k^{i}_{a,1}}(g)]+[F^1_{k^{i}_{b,0}}(g)]\right) ~+~[\mathbf{v}_{c,x_c}]\\ [\mathbf{D}_g]&= \sum \nolimits _{i=1}^{n} \left( [F^1_{k^{i}_{a,1}}(g)]+[F^1_{k^{i}_{b,1}}(g)]\right) ~+~[\mathbf{v}_{c,x_d}], \end{aligned}$$

      where every + operation is performed on vectors of n elements.

  4.

    Notify parties:  Output construction-done.
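Steps 3b and 3c above can be sketched in plaintext as follows. This is an illustrative Python sketch: hashlib stands in for the PRF F, the field and all names are ours, and the real protocol performs the selection and additions on shared values via Add and Multiply:

```python
import hashlib

p = 2**61 - 1  # stand-in prime field

def prf(key, ext, j, g):
    """Toy PRF F_key(ext || j || g) -> field element (not the paper's F)."""
    h = hashlib.sha256(f"{key}|{ext}|{j}|{g}".encode()).digest()
    return int.from_bytes(h[:8], "big") % p

def select(x, k0, k1):
    """Step 3b: v = k0 + x*(k1 - k0) picks k0 if x = 0 and k1 if x = 1."""
    return [(a + x * (b - a)) % p for a, b in zip(k0, k1)]

def garbled_row(keys_a, keys_b, ext_a, ext_b, v, g):
    """Step 3c: add every party's PRF vectors on wires a and b to the
    selected output-key vector v; returns one n-element table row."""
    n = len(v)
    return [(v[j] + sum(prf(k, ext_a, j + 1, g) for k in keys_a)
                  + sum(prf(k, ext_b, j + 1, g) for k in keys_b)) % p
            for j in range(n)]

# An evaluator holding the row and the relevant keys recovers v by
# subtracting the same PRF values.
k_c0, k_c1 = [10, 20], [11, 21]
row = garbled_row(["ka1", "ka2"], ["kb1", "kb2"], 0, 0, select(1, k_c0, k_c1), 7)
rec = [(row[j] - sum(prf(k, 0, j + 1, 7) for k in ["ka1", "ka2"])
              - sum(prf(k, 0, j + 1, 7) for k in ["kb1", "kb2"])) % p
       for j in range(2)]
assert rec == k_c1
```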

The functions \(f_g\) in Step 3a above depend on the specific gate being evaluated. For example, on clear values we have,

  • If \(f_g=\wedge \) (i.e., the AND function), \(\lambda _{a}=1\), \(\lambda _{b}=1\) and \(\lambda _{c}=0\), then \(x_a=( (1\wedge 1) - 0)^2 = (1-0)^2=1\). Similarly, \(x_b=((1\wedge (1-1))- 0)^2 = (0-0)^2=0\), \(x_c=0\) and \(x_d=0\). The parties can compute \(f_g\) on shared values [x] and [y] by computing \(f_g([x],[y])= [x] \cdot [y]\).

  • If \(f_g=\oplus \) (i.e., the XOR function), then \(x_a=((1\oplus 1)-0)^2=(0-0)^2=0\), \(x_b=((1\oplus (1-1))-0)^2=(1-0)^2=1\), \(x_c=1\) and \(x_d=0\). The parties can compute \(f_g\) on shared values [x] and [y] by computing \(f_g([x],[y])= [x]+[y]-2 \cdot [x] \cdot [y]\).
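The two worked examples can be reproduced directly from the Step 3a definitions. A short check on clear values (illustrative code; in the protocol the same arithmetic is applied to shares):

```python
# Recompute all four indicators from their definitions on clear bits.
def indicators(f, la, lb, lc):
    return [(f(la, lb) - lc) ** 2,
            (f(la, 1 - lb) - lc) ** 2,
            (f(1 - la, lb) - lc) ** 2,
            (f(1 - la, 1 - lb) - lc) ** 2]

AND = lambda x, y: x * y              # f_g([x],[y]) = [x]*[y]
XOR = lambda x, y: x + y - 2 * x * y  # f_g([x],[y]) = [x]+[y]-2[x][y]

# The AND example: lambda_a = lambda_b = 1, lambda_c = 0.
assert indicators(AND, 1, 1, 0) == [1, 0, 0, 0]
# The XOR example with the same masks.
assert indicators(XOR, 1, 1, 0) == [0, 1, 1, 0]
```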

Below, we will show how \([x_a]\), \([x_b]\), \([x_c]\), and \([x_d]\) can be computed more efficiently.

Circuit Complexity

In this section, we analyze the complexity of the circuit that constructs the garbled version of \(C_f\) in terms of the number of multiplication gates and the depth of the circuit. We are mainly concerned with multiplication gates since, given the SPDZ shares [a] and [b] of the secrets a and b, respectively, interaction between the parties is required to obtain a secret sharing of the product \(a\cdot b\). Obtaining a secret sharing of a linear combination of a and b (i.e., \(\alpha \cdot a+\beta \cdot b\) where \(\alpha \) and \(\beta \) are constants), however, can be done locally and is thus considered to have negligible overhead. We are interested in the depth of the circuit because it gives a lower bound on the number of rounds of interaction required for computing the circuit. (Note that here, as before, we are concerned with the depth in terms of multiplication gates.)

Multiplication Gates. We first analyze the number of multiplication operations that are carried out per gate (i.e., in Step 3) and later analyze the entire circuit.

  • Multiplications per gate.  We follow the calculation done per gate, in the same order as it appears in Step 3 of the preprocessing-II phase:

    1.

      In order to calculate the indicators in Step 3a, it suffices to compute one multiplication and four squarings, by slightly rewriting the equations. For example, for \(f_g=AND\), we first compute \([t]=[\lambda _{a}]\cdot [\lambda _{b}]\) (this is the only multiplication) and then compute:

      $$\begin{aligned}{}[x_a]&=([t] - [\lambda _{c}])^2\\ [x_b]&=([\lambda _{a}]-[t]-[\lambda _{c}])^2\\ [x_c]&=([\lambda _{b}]-[t]-[\lambda _{c}])^2\\ [x_d]&=(1-[\lambda _{a}]-[\lambda _{b}]+[t]-[\lambda _{c}])^2. \end{aligned}$$

      As another example, for \(f_g=XOR\), we first compute \([t]=[\lambda _{a}]\oplus [\lambda _{b}]=[\lambda _{a}]+[\lambda _{b}]-2 \cdot [\lambda _{a}] \cdot [\lambda _{b}]\) (this requires the only multiplication) and then compute:

      $$\begin{aligned}{}[x_a]&=([t] - [\lambda _{c}])^2\\ [x_b]&=(1-[t]-[\lambda _{c}])^2\\ [x_c]&=[x_b]\\ [x_d]&=[x_a]. \end{aligned}$$

      Observe that in XOR gates only two squaring operations are needed.

    2.

      To obtain the correct vector (in Step 3b) which is used in each garbled label, we carry out 4n multiplications (since we multiply the bit \([x_a]\) with each component of the vector \(([\mathbf{k}_{c,1}] - [\mathbf{k}_{c,0}])\). The same holds for bits \([x_b],[x_c]\), and \([x_d]\)). Note that in XOR gates only 2n multiplications are needed, because \(\mathbf{k}_{c,x_c}=\mathbf{k}_{c,x_b}\) and \(\mathbf{k}_{c,x_d}=\mathbf{k}_{c,x_a}\).

    Summing up (and counting a squaring operation as a multiplication), we have \(4n+5\) multiplications per AND gate and \(2n+3\) multiplications per XOR gate.

  • Multiplications in the entire circuit.  Denote the number of multiplication operations per gate (i.e., \(4n+5\) for AND and \(2n+3\) for XOR) by c. We get \(G\cdot c\) multiplications for garbling all gates (where G is the number of gates in \(C_f\)). Besides garbling the gates, we have no other multiplication operations. Thus, we require \(c \cdot G\) multiplications in total.
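The single-multiplication optimizations in Step 1 above can be verified exhaustively on clear bits. This is an illustrative check (for XOR, \([x_b]\) is written in the algebraically equivalent form \((1-[t]-[\lambda _{c}])^2\) with \(t=\lambda _{a}\oplus \lambda _{b}\)):

```python
def direct(f, la, lb, lc):
    """The Step 3a indicator definitions, on clear bits."""
    return [(f(la, lb) - lc) ** 2, (f(la, 1 - lb) - lc) ** 2,
            (f(1 - la, lb) - lc) ** 2, (f(1 - la, 1 - lb) - lc) ** 2]

def optimized_and(la, lb, lc):
    t = la * lb  # the single multiplication
    return [(t - lc) ** 2, (la - t - lc) ** 2,
            (lb - t - lc) ** 2, (1 - la - lb + t - lc) ** 2]

def optimized_xor(la, lb, lc):
    t = la + lb - 2 * la * lb  # XOR; costs the single multiplication la*lb
    return [(t - lc) ** 2, (1 - t - lc) ** 2, (1 - t - lc) ** 2, (t - lc) ** 2]

# Exhaustive check over all eight mask combinations.
for la in (0, 1):
    for lb in (0, 1):
        for lc in (0, 1):
            assert optimized_and(la, lb, lc) == direct(lambda x, y: x * y, la, lb, lc)
            assert optimized_xor(la, lb, lc) == direct(lambda x, y: x + y - 2 * x * y, la, lb, lc)
```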

Depth of the Circuit and Round Complexity. Each gate can be garbled by a circuit of depth 3 (two levels are required for Step 3a and another one for Step 3b). Recall that additions are local operations only, and thus we measure depth in terms of multiplication gates only. Since all gates can be garbled in parallel, this implies an overall depth of three. Since the number of rounds of the SPDZ protocol is in the order of the depth of the circuit, it follows that \(\mathcal {F}_{\mathsf {offline}}\) can be securely computed in a constant number of rounds.

Other Considerations. The overall cost of the preprocessing does not just depend on the number of multiplications. Rather, the parties also need to produce the random data via calls to Random and RandomBit to the functionality \(\mathcal {F}_{\mathsf {SPDZ}}\). It is clear that all of these can be executed in parallel. If W is the number of wires in the circuit, then the total number of calls to RandomBit is equal to W, whereas the total number of calls to Random is \(2 \cdot n \cdot W\).

Communication and Computation Complexity

Denote by \(W_I\) and \(W_O\) the number of input and output wires in \(C_f\). We first analyze the communication complexity of our online phase and then the offline. We count the number of underlying operations in \(\mathcal {F}_{\mathsf {MPC}}\) and then plug in the complexity of these operations when using SPDZ [12]. In addition, we count the number of bit/element broadcasts, which will be replaced later with \(O(n^2)\) using the simple broadcast with abort protocol discussed above.

Online Phase. The parties first broadcast the external bit of every input wire w, which is used to open the appropriate key for w. Then, the parties open the garbled version of \(C_f\) (i.e., the four entries \(\mathbf{A}_g,\mathbf{B}_g,\mathbf{C}_g,\mathbf{D}_g\)) by calling Output 4Gn times, where G is the number of gates in the circuit. Overall, \(W_I\) bits are broadcast and \(W_I+4Gn\) field elements are opened. The Output command in SPDZ has communication and computation complexity of \(O(n^3)\); plugging this into the above, we obtain communication complexity \(O(W_I\cdot n^3+G\cdot n^4)\) and computation complexity \(O(G\cdot n^4)\). (Note that the bit broadcast operations require essentially no computation.)

Offline Phase. Our offline phase consists of two subphases, preprocessing-I  and preprocessing-II , which are SPDZ’s offline and online phases, respectively. In the former, the parties generate the raw material such as multiplication triples and input pairs (see [12]), while in the latter they evaluate the arithmetic circuit that produces the garbled circuit of \(C_f\). We count the complexity of preprocessing-II  first: We count the number of command invocations and then plug in SPDZ’s complexities. For all \(w\in W_I\), the protocol runs Output for the masking bit of input wire w to the party with whom it is associated, and for all \(w\in W_O\) the protocol runs Output of the masking bit of w to all parties. Then, as mentioned before, the parties input \(I=8Gn\) PRF outputs for the computation of the garbled circuit using the Input command. Finally, there are O(n) invocations of \(\mathbf{Multiply}\) to garble each gate (specifically, \(4n+5\) for an AND gate and \(2n+3\) for a XOR gate). Performing both Input and Multiply in SPDZ takes O(n) communication and computation, and thus we get overall \(O\left( (W_I+W_O)\cdot n^3+G\cdot n^2\right) \) communication and computation.

As for preprocessing-I, we take the complexity analysis from [19]. Generating an input pair requires the communication of \((n-1)\cdot (\kappa ^2+\kappa )\) bits, resulting in \(O(n^2\cdot \kappa ^2\cdot G)\) for all inputs. Generating a multiplication triple costs \(O(n^2\cdot \kappa ^2)\), and we need O(n) multiplication triples per gate, resulting in \(O(G\cdot n^3\cdot \kappa ^2)\) bits of communication.

Round Complexity

As analyzed above, the circuit that constructs the garbled version of \(C_f\) has multiplication depth of three. It therefore remains to plug in the round complexity of the SPDZ offline phase and its implementation of the commands in \(\mathcal {F}_{\mathsf {MPC}}\). This yields an overall complexity of 12 rounds of communication: 6 rounds for preprocessing-I  (SPDZ offline), 3 rounds for preprocessing-II  (SPDZ online), and 3 rounds for the online phase to evaluate the garbled circuit.

Expected Runtimes

To estimate the run time of our protocol, we extrapolate from known public data [11, 12]. (This involves some speculation, but is based on real values for the actual cost of the SPDZ operations, which dominate the computation and communication.) The offline phase of our protocol runs both the offline and online phases of the SPDZ protocol. The numbers listed in Table 1 refer (in milliseconds) to the SPDZ offline phase, as described in [11], with covert security and a \(20\%\) probability of cheating, using finite fields of size 128 bits. As described in [11], comparable times are obtainable for running in the fully malicious mode (but more memory is needed). The offline phase of SPDZ comprises the generation of several types of raw material: The multiplication (Beaver) triples are used by the parties to perform a secure multiplication of two variables stored by the functionality; the number of required multiplication triples equals the number of multiplication gates in the arithmetic circuit that we use to construct the garbled circuit of \(C_f\). The number of random bits (resp. random field elements) is the number of bits (resp. field elements) that will be used in the arithmetic circuit. Finally, we also consider the number of required inputs, since the SPDZ offline phase produces raw data for every input wire in the circuit; these raw data are essentially an authenticated share of a random element [r] which is opened only to the party with which that input wire is associated, so that in the online phase that party broadcasts \(d=x-r\), where x is its input to that wire; the parties then perform a constant addition to obtain an authenticated share of x. See [12] for more details.

Table 1 SPDZ offline generation times in milliseconds per operation

Denote by btr(n), rnb(n), rnd(n), inp(n) the times to generate one Beaver triple, one random bit, and one random element, and to enter one input element, respectively (all of which depend on the number of parties n). Let \(G_X\) and \(G_A\) be the number of XOR and AND gates in \(C_f\), respectively. The \(\textsf {preprocessing-I}\) (SPDZ offline phase) time is computed by

$$\begin{aligned} T_{\textsf {preprocessing}\text {-}\textsf {I}}&= \big (G_X(2n+3) + G_A(4n+5) \big )\cdot btr(n) + B\cdot rnb(n) \\&\quad + R(n)\cdot rnd(n) + I(n)\cdot inp(n)\\&= \big (G_X(2n+3) + G_A(4n+5) \big )\cdot btr(n) + B\cdot rnb(n) \\&\quad + 2Wn\cdot rnd(n) + 8Gn\cdot inp(n). \end{aligned}$$

(Note that R and I also depend on n.)
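As an illustration, the estimate can be packaged as a simple cost model. The per-operation times in the example call below are hypothetical placeholders, not the measured values of Table 1:

```python
def t_preprocessing_1(g_xor, g_and, w, n, btr, rnb, rnd, inp):
    """Estimated preprocessing-I time: btr/rnb/rnd/inp are per-operation
    times for one Beaver triple, random bit, random element, and input
    element (all depending on n in practice)."""
    m = g_xor * (2 * n + 3) + g_and * (4 * n + 5)  # Beaver triples needed
    return m * btr + w * rnb + 2 * w * n * rnd + 8 * (g_xor + g_and) * n * inp

# Hypothetical per-operation times (in ms) for a 3-party run:
estimate_ms = t_preprocessing_1(27000, 6000, 33000, 3,
                                btr=0.05, rnb=0.01, rnd=0.01, inp=0.01)
```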

The implementation of the SPDZ online phase, described in [11, 20], reports online throughputs of between 200,000 and 600,000 multiplications per second, depending on the system configuration. As remarked earlier, the online time of other operations is negligible and is therefore ignored. Thus, the preprocessing-II  time is computed by

$$\begin{aligned} T_{{\textsf {preprocessing}\text {-}\textsf {II}}} = \frac{G_X(2n+3) + G_A(4n+5)}{mps}, \end{aligned}$$

where mps is the number of multiplications per second that the SPDZ system is able to perform.

To see what this would imply in practice, consider the AES circuit described in [29], which has become the standard benchmark for secure computation. The basic AES circuit has \(G\approx 33{,}000\) gates (with \(G_A\approx 6000\) and \(G_X\approx 27{,}000\)) and a similar number of wires W, including the key expansion within the circuit. Assuming the parties hold an XOR sharing of the AES key and data (which adds an additional \(2 \cdot (n-1) \cdot 128\) gates and wires to the circuit), the parameters for the Initialize call to the \(\mathcal {F}_{\mathsf {SPDZ}}\) functionality in the preprocessing-I protocol will be

$$\begin{aligned} M&\approx (G_X+256(n-1))(2n+3) + G_A(4n+5) \\ B&\approx G+256n \\ R&\approx 2Wn+256n\\ I&\approx 8\cdot (G+256(n-1)) \cdot n. \end{aligned}$$

Recall that M is the number of multiplications, B the number of random bits, R the number of random field elements, and I the number of input wires. Using the above execution times for the SPDZ protocol, we can then estimate the time needed for the two parts of our preprocessing step for the AES circuit. The expected execution times, in seconds, are given in Table 2.
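The AES parameter estimates above can be reproduced numerically. An illustrative sketch, using the approximate circuit sizes quoted in the text:

```python
def aes_init_params(n, g_xor=27000, g_and=6000, w=33000):
    """Approximate Initialize parameters for the AES circuit, including the
    extra 2*(n-1)*128 gates/wires for XOR-sharing the key and data."""
    g = g_xor + g_and
    m = (g_xor + 256 * (n - 1)) * (2 * n + 3) + g_and * (4 * n + 5)
    b = g + 256 * n
    r = 2 * w * n + 256 * n
    i = 8 * (g + 256 * (n - 1)) * n
    return m, b, r, i

# Parameters for 2, 3, and 4 parties, matching the published SPDZ data points.
params = {n: aes_init_params(n) for n in (2, 3, 4)}
```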

Due to the methodology of our protocol, these expected times are likely to reflect both the latency and the throughput amortized over many executions. (We only have these times for 2, 3, and 4 parties, since these are the times that have been published for SPDZ offline computations.)

Table 2 Preprocessing times (in seconds) for the AES circuit

The execution of the online phase of our protocol, when the parties are given their inputs and actually want to compute the function, is very efficient: All that is needed is the evaluation of a garbled circuit based on the data obtained in the offline stage. Specifically, for each gate each party needs to process two input wires, and for each wire it needs to expand n seeds to a length which is n times their original length (where n denotes the number of parties). Namely, for each gate each party needs to compute a pseudorandom function \(2n^{2}\) times. (More specifically, it needs to run 2n key schedulings and use each key for n encryptions.) We examined the cost of implementing these operations for an AES circuit of 33,000 gates when the pseudorandom function is computed using the AES-NI instruction set. The runtimes for \(n=2,3,4\) parties were 6.35 ms, 9.88 ms, and 15 ms, respectively, for C code compiled using the gcc compiler on a 2.9 GHz Xeon machine. The actual runtime, including all non-cryptographic operations, should be higher, but of the same order.
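The PRF-call count in this online evaluation can be made explicit (a sketch; the function name is ours):

```python
def online_prf_calls_per_party(g, n):
    """Each gate requires 2n key schedulings, with each key used for n
    encryptions: 2*n^2 PRF calls per gate per party, over g gates."""
    return 2 * g * n * n

# For the 33,000-gate AES circuit:
assert online_prf_calls_per_party(33000, 2) == 264000
assert online_prf_calls_per_party(33000, 4) == 1056000
```

This quadratic growth in n is consistent with the measured runtimes above increasing faster than linearly in the number of parties.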

Our runtime estimates compare favorably to several other results on implementing secure computation of AES in a multi-party setting:

  • In [10], an actively secure computation of AES using SPDZ took an offline time of over five minutes per AES block, with an online time of around a quarter of a second; that computation used a security parameter of 64 as opposed to our estimates using a security parameter of 128.

  • In [20], another experiment was reported that achieves a latency of 50 milliseconds in the online phase for AES (but no offline times are given).

  • In [26], the authors report on a two-party MPC evaluation of the AES circuit using the TinyOT protocol; they obtain for 80 bits of security an amortized offline time of nearly 3 s per AES block, and an amortized online time of 30 ms; but the reported non-amortized latency is much worse. Furthermore, this implementation is limited to the case of two parties, whereas we obtain security for multiple parties.

Most importantly, all of the above experiments were carried out in a LAN setting where communication latency is very small. However, in other settings where parties are not connected by very fast connections, the effect of the number of rounds on the protocol will be extremely significant. For example, in [10], an arithmetic circuit for AES is constructed of depth 120, and this is then reduced to depth 50 using a bit decomposition technique. Note that if parties are in separate geographical locations, then this number of rounds will very quickly dominate the running time. For example, the latency on Amazon EC2 between Virginia and Ireland is 75 ms. For a circuit depth of 50, and even assuming just a single round per level, the running time cannot be less than 3750 ms (even if computation takes zero time). In contrast, our online phase has just 2 rounds of communication and so will take in the range of 150 ms. We stress that even on a much faster network with a latency of just 10 ms, protocols with 50 rounds of communication will still be slow.

Security Proof

In this section, we prove the main theorem of this paper, Theorem 1. The security proof consists of two steps. In the first step, we reduce the security in the semi-honest case to the security of the original BMR protocol; that is, we only consider an adversary \(\mathcal {A}\) that does not deviate from the prescribed protocol and only tries to learn information from the transcript. In the second step, we show that our protocol remains secure even if \(\mathcal {A}\) is malicious, i.e., is allowed to deviate from the protocol. This second step is performed by showing a reduction from the malicious model to the semi-honest model. In both steps, the adversary \(\mathcal {A}\) is assumed to corrupt parties at the beginning of the execution of the protocol.

We first present some conventions and notations. In both the original BMR protocol and our protocol, the players obtain a garbled circuit and a matched set of garbled inputs; they are then able to evaluate the circuit without further interaction. The players evaluate the circuit from the bottom up until they reach the circuit-output wires. That is, the input wires are said to be at the “bottom” of the circuit, while the output wires are at the “top.” In their evaluation, the players use the garbled gate g to reveal a single external value for wire c (i.e., \(\Lambda _{c}\), where c is g’s output wire) together with an appropriate key vector \(\mathbf{k}_{c,\Lambda _{c}}=k^{1}_{c,\Lambda _{c}},\ldots ,k^{n}_{c,\Lambda _{c}}\). There is only one entry in the garbled gate that can be used to reveal the pair \(( \Lambda _{c},\mathbf{k}_{c,\Lambda _{c}} )\); specifically, if g’s input wires are a and b, then entry \((2\Lambda _{a}+\Lambda _{b})\) in the table of the garbled gate of g is used (where the entry indices are 0 for \(A_g\), 1 for \(B_g\), 2 for \(C_g\), and 3 for \(D_g\)). For each gate, we call the garbled entry used to evaluate that gate the active entry. The other three entries are called the inactive entries. Similarly, we use the term active signal to denote the value \(\Lambda _{c}\) that is revealed for some wire c, and the term active path for the set of active signals that have been revealed to the players during the evaluation of the circuit. Recall that in the online phase of our protocol the players exchange the active signals of all the circuit-input wires. We denote by I the set of indices of the players that are under the control of the adversary \(\mathcal {A}\), and by \(x_I\) their inputs to the functionality. (Note that in the malicious case these inputs might be different from the inputs that the players have been given originally.)
In the same manner, J is the set of indices of the honest parties and \(x_J\) denotes their inputs. (Therefore, \(|I\cup J|=n\) and \(I\cap J=\varnothing \).) We denote by W, \(W_{in}\), and \(W_{out}\) the sets of all wires, the set of circuit-input wires (a.k.a. attached wires), and the set of circuit-output wires of the circuit C, respectively. We denote the set of gates in the circuit as \(G=\{ g_1,\ldots ,g_{|G|} \}\). Recall that \(\kappa \) is the security parameter.
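The indexing convention for the four garbled entries can be illustrated with a minimal sketch (the names are ours): the pair of external values on a gate's input wires selects exactly one usable entry.

```python
# The four entries of a garbled gate, indexed 0 -> A_g, 1 -> B_g,
# 2 -> C_g, 3 -> D_g. The external values (Lambda_a, Lambda_b) on the
# input wires select the single active entry, at index 2*Lambda_a + Lambda_b.
ENTRY_NAMES = ["A_g", "B_g", "C_g", "D_g"]

def active_entry(lambda_a: int, lambda_b: int) -> str:
    return ENTRY_NAMES[2 * lambda_a + lambda_b]

assert active_entry(0, 0) == "A_g"
assert active_entry(0, 1) == "B_g"
assert active_entry(1, 0) == "C_g"
assert active_entry(1, 1) == "D_g"
```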

Security in the Semi-honest Model

The basic goal of the proof is to show that there exists a probabilistic polynomial time procedure, \(\mathcal {P}\), whose input is a view sampled from the view distribution of a semi-honest adversary involved in a real execution of the original BMR protocol, namely \({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}}\) in View 1, and whose output is a view from the view distribution of a semi-honest adversary involved in a real execution of our protocol, namely \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) in View 2. Formally, the procedure is defined as

$$\begin{aligned} \mathcal {P}: \{{\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}}\}_{\bar{x}} \rightarrow \{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{\bar{x}}, \end{aligned}$$

where \({\bar{x}}=x_1,\ldots ,x_n\) is the players’ input to the functionality.


In this section, we present the procedure \(\mathcal {P}\) and show that \(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{\bar{x}}\) and \(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{\bar{x}}\) are indistinguishable. We then show that the existence of a simulator, \(\mathcal {S_{{\text {BMR}}}}\), for \(\mathcal {A}\)’s view in the execution of the original BMR protocol implies the existence of a simulator \(\mathcal {S_{{\text {OUR}}}}\) for \(\mathcal {A}\)’s view in the execution of our protocol. In the following, we first describe \({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}}\) (View 1) and \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) (View 2), then we describe the procedure \(\mathcal {P}\) and prove the mentioned claims.


We are ready to describe the procedure \(\mathcal {P}\) (Procedure 1), which is given a view \({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}}\) that is sampled from the distribution of the adversary’s views under the input \(\bar{x}\) of the players in the original BMR protocol, and outputs a view from the distribution of the adversary’s views in our protocol (i.e., \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \)). We will then show that the resulting distribution of views is indistinguishable from \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) for every \(\bar{x}\). Since \(\mathcal {P}\) sees the garbled circuit and the matched set of (garbled) inputs from all players, it can evaluate the circuit by itself and determine the active path and the output \(\bar{y}_I\); however, \(\mathcal {P}\) does not know \(\bar{x}_J\) (it only knows \(\bar{x}_I\)) and thus cannot construct a garbled circuit for our protocol from scratch. It must instead use the information that can be extracted from its input view.


Claim 1

Given that the BMR protocol is secure in the semi-honest model, our protocol is secure in the semi-honest model as well.


From the security of the BMR protocol, we know that

$$\begin{aligned} \{\mathcal {S_{{\text {BMR}}}}(1^\kappa ,I,x_I,y_I)\}_{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ {\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}}\right\} _{\bar{x}}. \end{aligned}$$

Thus, for every PPT algorithm, and specifically for algorithm \(\mathcal {P}\), it holds that

$$\begin{aligned} \{\mathcal {P}(\mathcal {S_{{\text {BMR}}}}(1^\kappa ,I,x_I,y_I))\}_{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ \mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\right\} _{\bar{x}}. \end{aligned}$$

Then, if the following computational indistinguishability holds (proven in Claim 2)

$$\begin{aligned} \left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ \mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\right\} _{\bar{x}}, \end{aligned}$$

then by transitivity of indistinguishability, it follows that

$$\begin{aligned} \{\mathcal {P}(\mathcal {S_{{\text {BMR}}}}(1^\kappa ,I,x_I,y_I))\}_{\bar{x}} {\mathop {\equiv }\limits ^{c}}&\left\{ \mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}}\\ \Rightarrow \{\mathcal {P}(\mathcal {S_{{\text {BMR}}}}(1^\kappa ,I,x_I,y_I))\}_{\bar{x}} {\mathop {\equiv }\limits ^{c}}&\left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}}. \end{aligned}$$

Hence, \(\mathcal {P}\circ \mathcal {S_{{\text {BMR}}}}\) is a good simulator for the view of the adversary in the semi-honest model. \(\square \)

In the following, we prove Eq. 1:

Claim 2

The probability ensemble of the view of the adversary in the real execution of our protocol and the probability ensemble of the view of the adversary produced by procedure \(\mathcal {P}\), both indexed by the players’ inputs \(\bar{x}\) to the functionality, are computationally indistinguishable. That is:

$$\begin{aligned} \left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ \mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\right\} _{\bar{x}}. \end{aligned}$$


Recall that in procedure \(\mathcal {P}\) we do not have any information about the masking values \(\{\lambda _{w}\mid w\in W\}\) (except for those which are known to the adversary); therefore, we cannot compute the indicators \(x_A,x_B,x_C,x_D\) (as described in Sect. 4.2) and thus cannot tell which key vector is encrypted in each entry. That is, we cannot correctly fill in the four garbled gate entries A, B, C, D. On the other hand, in procedure \(\mathcal {P}\) we do know the set of external values \(\{\Lambda _{w}\mid w\in W\}\); thus, we know for sure that for every gate g, with input wires a, b and output wire c, the key vector encrypted in the \((2\Lambda _{a}+\Lambda _{b})\)th entry of the garbled table of gate g is the \(\Lambda _{c}\)th key vector \(\mathbf{k}_{c,\Lambda _{c}}\).

Let us denote by \(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) the view of the adversary in the execution of our protocol (which computes the functionality f) with players’ inputs \(\bar{x}\) when using the keys \(\{k^{i}_{w,\beta }\mid 1\le i\le n, w\in W, \beta \in \{0,1\} \}\) and the masking values \(\{\lambda _{j}\mid j\in W\}\). Similarly, denote by \(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) the view of the adversary in the output of procedure \(\mathcal {P}\).

Given that

$$\begin{aligned} \{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}{\mathop {\equiv }\limits ^{c}} \{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\end{aligned}$$

holds (i.e., that the two ensembles are computationally indistinguishable under the same functionality, players’ inputs, keys, and masking values), it follows that

$$\begin{aligned} \left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ \mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\right\} _{\bar{x}} \end{aligned}$$

since the functionality, keys, and masking values are taken from exactly the same distribution in both cases. \(\square \)

In Claim 3, we prove that Eq. 2 holds.

Claim 3

Fix a functionality f, the players’ inputs \(\bar{x}\), keys \(\{k^{i}_{w,\beta }\mid 1\le i\le n, w\in W, \beta \in \{0,1\} \}\) and masking values \(\{\lambda _{j}\mid j\in W\}\) used in both the execution of our protocol and procedure \(\mathcal {P}\), then Eq. (2) holds; that is,

$$\begin{aligned} \{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}{\mathop {\equiv }\limits ^{c}} \{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}. \end{aligned}$$


Recall that the difference between \(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) and \(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) lies in the values of the garbled-gate entries that are not on the active path: in \(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) these values are computed as described in Sect. 4.2, while in \(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) they are just random values from \(({\mathbb {F}}_p)^n\).

Let \(\mathcal {D}\) be a polynomial time distinguisher such that

$$\begin{aligned}&|Pr[\mathcal {D}(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}})=1] \\&\quad - Pr[\mathcal {D}(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}})=1]|=\varepsilon (\kappa ) \end{aligned}$$

and assume by contradiction that \(\varepsilon \) is some non-negligible function in \(\kappa \).

Let C be the Boolean circuit that computes the functionality f. For the purpose of the proof, we index the gates of C (the set of gates is denoted by G) in the following manner: C may be considered as a directed acyclic graph (DAG), where the gates are the nodes and an output wire of gate \(g_1\) that enters gate \(g_2\) as an input wire induces the edge \((g_1,g_2)\). We compute a topological ordering of the graph; that is, if the output wire of gate \(g_1\) enters gate \(g_2\), then \(g_1\) gets a lower index in the ordering than \(g_2\). (Note that there may exist many valid topological orderings for the same graph.) For the sake of the proof, whenever we write \(g_i\) we refer to the \(i^{th}\) gate in the topological ordering.

We define the hybrid \(H^t\) as the view in which the gates \(g_1,g_2,\ldots ,g_t\) are computed as in procedure \(\mathcal {P}\) (i.e., the inactive entries are just random elements from \(({\mathbb {F}}_p)^n\)) and the gates \(g_{t+1},\ldots ,g_{|G|}\) are computed as described in our protocol (Sect. 4.2). Observe that \(H^0\) is distributed exactly as the view of the adversary in \(\{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\) and \(H^{|G|}\) is distributed exactly as the view of the adversary in \(\{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}\). Thus, by a hybrid argument it follows that there exists an integer \(0\le z<|G|\) such that the distinguisher \(\mathcal{D}\) can distinguish between the two distributions \(H^z\) and \(H^{z+1}\) with non-negligible probability \(\varepsilon '\).

Let us take a closer look at the hybrids \(H^z\) and \(H^{z+1}\). Let \(g=g_{z+1}\) be the \((z+1)\)th gate in the topological ordering, with input wires a, b and output wire c:

  • If the view is taken from \(H^{z+1}\), then the garbled table \((A_g,B_g,C_g,D_g)\) is computed as described in procedure \(\mathcal {P}\). That is, the external values \(\Lambda _{a},\Lambda _{b},\Lambda _{c}\) are known and thus the key \(\mathbf{k}_{c,\Lambda _{c}}\) is encrypted using keys \(\mathbf{k}_{a,\Lambda _{a}}\) and \(\mathbf{k}_{b,\Lambda _{b}}\) in the \(2\Lambda _{a}+\Lambda _{b}\)th entry (the active entry), while the other three (inactive) entries are independent of \(\mathbf{k}_{a,\Lambda _{a}}\), \(\mathbf{k}_{b,\Lambda _{b}}\), \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\) and \(\mathbf{k}_{b,\bar{\Lambda _{b}}}\) (because \(\mathcal {P}\) chooses them at random from \(({\mathbb {F}}_p)^n\)).

  • If the view is taken from \(H^z\), then the garbled table of g is computed correctly for all four entries. Let \(\tilde{g}_a\) be a gate whose output wire is a (which, as written above, is an input wire of gate g); note that by the topological ordering of the gates, \(\tilde{g}_a\) has a lower index than g, and thus there is exactly one entry (the active entry) in the garbled table of \(\tilde{g}_a\) which encrypts \(\mathbf{k}_{a,\Lambda _{a}}\), while the other three (inactive) entries are random values from \(({\mathbb {F}}_p)^n\) and therefore reveal no information about \(\mathbf{k}_{a,\Lambda _{a}}\) and, more importantly, no information about \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\). The same observation holds for the gate \(\tilde{g}_b\) whose output wire is b. Therefore, the computation of the garbled table of gate g (recall that it is gate \(g_{z+1}\) and we are currently looking at hybrid \(H^z\)) involves exactly one entry (the active entry) which depends on both \(\mathbf{k}_{a,\Lambda _{a}}\) and \(\mathbf{k}_{b,\Lambda _{b}}\), while the other three (inactive) entries depend on at least one of \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\) and \(\mathbf{k}_{b,\bar{\Lambda _{b}}}\), about which the distinguisher \(\mathcal {D}\) has no information. Thus, whenever a computation of F using a key from the vectors \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\) or \(\mathbf{k}_{b,\bar{\Lambda _{b}}}\) is required in order to compute the inactive entries of gate g (in the view \(H^z\)), we could use some other key \(\tilde{k}\) instead; in particular, we could use F without even knowing \(\tilde{k}\) at all, e.g., when working with an oracle.

In the following analysis, we exploit this observation: Since the distinguisher \(\mathcal {D}\) has no information about \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\) or \(\mathbf{k}_{b,\bar{\Lambda _{b}}}\), we can construct the garbled table using some other keys, and because we are interested in the result of F under those keys (and not in the keys themselves) we can even use an oracle to replace a PRF. Thus, if \(\mathcal {D}\) distinguishes between \(H^z\) and \(H^{z+1}\), then we can use it to distinguish between an oracle to a pseudorandom function and an oracle to a truly random function (under multiple invocations of the oracle, because there are 2n keys in the two vectors \(\mathbf{k}_{a,\bar{\Lambda _{a}}}\) and \(\mathbf{k}_{b,\bar{\Lambda _{b}}}\)).

Let us first define the notion of a pseudorandom function under multiple keys:

Definition 1

Let \(F:\{0,1\}^n\times \{0,1\}^n\rightarrow \{0,1\}^n\) be an efficient, length preserving, keyed function. F is a pseudorandom function under multiple keys if for all polynomial time distinguishers \(\mathcal {D}\), there is a negligible function neg such that:

$$\begin{aligned} |Pr[\mathcal {D}^{F_{\bar{k}}(\cdot )}(1^n)=1]-Pr[\mathcal {D}^{\bar{f}(\cdot )}(1^n)=1]|\le neg(n), \end{aligned}$$

where \(F_{\bar{k}}=F_{k_1},\ldots ,F_{k_{m(n)}}\) is the pseudorandom function F keyed with a polynomial number of randomly chosen keys \(k_1,\ldots ,k_{m(n)}\) and \(\bar{f}=f_1,\ldots ,f_{m(n)}\) are m(n) random functions from \(\{0,1\}^n\) to \(\{0,1\}^n\). In both cases, the probability is taken over the randomness of \(\mathcal {D}\) as well.

It is easy to see (by a hybrid argument) that if F is a pseudorandom function, then it is a pseudorandom function under multiple keys. Thus, since the function F used in our protocol is a PRF, for every polynomial time distinguisher \(\tilde{\mathcal {D}}\), every positive polynomial p and for all sufficiently large \(\kappa \):

$$\begin{aligned} |Pr[\tilde{\mathcal {D}}^{F_{\bar{k}}(\cdot )}(1^{\kappa })=1]-Pr[\tilde{\mathcal {D}}^{\bar{f}(\cdot )}(1^{\kappa })=1]|\le \frac{1}{p(\kappa )}. \end{aligned}$$
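The multi-key game of Definition 1 can be sketched as follows, with HMAC-SHA256 standing in for F (an illustrative assumption; the protocol only assumes some PRF) and lazily sampled tables playing the truly random functions:

```python
import hashlib
import hmac
import os

def make_prf_oracles(m: int):
    """Oracle family F_{k_1},...,F_{k_m} with fresh random keys."""
    keys = [os.urandom(16) for _ in range(m)]
    return [lambda x, k=k: hmac.new(k, x, hashlib.sha256).digest() for k in keys]

def make_random_oracles(m: int):
    """m independent truly random functions, sampled lazily on first query."""
    tables = [dict() for _ in range(m)]
    def make(i):
        def f(x):
            if x not in tables[i]:
                tables[i][x] = os.urandom(32)
            return tables[i][x]
        return f
    return [make(i) for i in range(m)]

# A distinguisher may query either family; pseudorandomness of F under
# multiple keys says no efficient distinguisher can tell which one it got.
prf = make_prf_oracles(4)
rnd = make_random_oracles(4)
assert prf[0](b"q") == prf[0](b"q")  # deterministic per key
assert rnd[0](b"q") == rnd[0](b"q")  # consistent answers, like a fixed function
```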

We now present a reduction from the indistinguishability of \(H^z\) and \(H^{z+1}\) to the indistinguishability of the pseudorandom function F under multiple keys. Given the polynomial time distinguisher \(\mathcal {D}\), which distinguishes between \(H^z\) and \(H^{z+1}\) with non-negligible probability \(\varepsilon '\), we construct a polynomial time distinguisher \(\mathcal {D}'\) that distinguishes between F under multiple keys and a set of truly random functions (thus contradicting the pseudorandomness of F). The distinguisher \(\mathcal {D}'\) has access to \(\overline{\mathcal {O}}=\mathcal {O}_1,\ldots ,\mathcal {O}_{2n}\) (which is either a PRF under multiple keys or a set of truly random functions). \(\mathcal {D}'\) acts as follows:

  1. Chooses keys and masking values for all players and wires, i.e., \(\{k^{i}_{w,b} \mid w\in W, b\in \{0,1\}, i\in \{1,\ldots ,n\}\}\) and \(\{ \lambda _{w} \mid w\in W \}\).

  2. Constructs the gates \(g_1,\ldots ,g_z\) as described in procedure \(\mathcal {P}\), i.e., only the active entry is calculated correctly, while the other three entries are taken to be random from \(({\mathbb {F}}_p)^n\).

  3. Constructs the garbled table of gate \(g_{z+1}\) in the following manner: Denote the input wires of the gate by a, b and the output wire by c; we want the key vector \(\mathbf{k}_{c,\Lambda _{c}}\) to be encrypted under the key vectors \(\mathbf{k}_{a,\Lambda _{a}}\) and \(\mathbf{k}_{b,\Lambda _{b}}\) and placed in entry \(2\Lambda _{a}+\Lambda _{b}\). Thus:

     • Whenever a result of F applied to the key \(k^{i}_{a,\Lambda _{a}}\) is required, it is computed correctly as in the protocol. (The same holds for the key \(k^{i}_{b,\Lambda _{b}}\).)

     • Whenever a result of F applied to the key \(k^{i}_{a,\overline{\Lambda _{a}}}\) is required, the distinguisher \(\mathcal{D}'\) queries the oracle \(\mathcal {O}_i\) instead. (The same holds for the key \(k^{i}_{b,\overline{\Lambda _{b}}}\); here, however, the distinguisher \(\mathcal{D}'\) queries the oracle \(\mathcal {O}_{n+i}\).)

  4. Completes the computation of the garbled circuit, i.e., the garbled tables of gates \(g_{z+2},\ldots ,g_{|G|}\), correctly, as in the protocol.

  5. Hands the resulting view to \(\mathcal{D}\) and outputs whatever it outputs.
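A heavily simplified, single-party toy version of step 3 might look as follows (HMAC-SHA256 stands in for F, a 61-bit prime stands in for \(\mathbb{F}_p\), and `oracles` plays the role of \(\overline{\mathcal{O}}\); all names are ours, not the protocol's):

```python
import hashlib
import hmac
import os
import random

P = 2**61 - 1  # toy prime standing in for the field F_p

def F(key: bytes, data: bytes) -> int:
    # PRF output mapped into F_p (illustrative encoding)
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big") % P

def garble_gate(active_keys, oracles, k_c_active: int, lam_a: int, lam_b: int, g: int = 0):
    """Builds the 4-entry table of gate g_{z+1}: the active entry uses the
    known active keys; every PRF call on an unknown key goes to an oracle."""
    table = []
    for idx in range(4):
        data = b"%d|%d" % (idx, g)
        if idx == 2 * lam_a + lam_b:
            pad = F(active_keys[0], data) + F(active_keys[1], data)
        else:
            pad = oracles[0](data) + oracles[1](data)
        table.append((pad + k_c_active) % P)
    return table

# If the oracles are F under hidden keys, this reproduces the gate as in H^z;
# if they are truly random functions, it reproduces the gate as in H^{z+1}.
hidden = [os.urandom(16) for _ in range(2)]
as_prf = [lambda d, k=k: F(k, d) for k in hidden]
as_rand = [lambda d: random.randrange(P) for _ in range(2)]
assert len(garble_gate([os.urandom(16), os.urandom(16)], as_prf, 42, 1, 0)) == 4
assert len(garble_gate([os.urandom(16), os.urandom(16)], as_rand, 42, 1, 0)) == 4
```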

Observe that if \(\overline{\mathcal {O}}=F_{\bar{k}}\), then the view that \(\mathcal{D}'\) hands to \(\mathcal{D}\) is distributed identically to \(H^z\), while if \(\overline{\mathcal {O}}=\bar{f}\), then the view that \(\mathcal{D}'\) hands to \(\mathcal{D}\) is distributed identically to \(H^{z+1}\). Thus:

$$\begin{aligned} \left| Pr[\mathcal{D}'^{F_{\bar{k}}(\cdot )}(1^\kappa )=1] - Pr[\mathcal{D}'^{\bar{f}(\cdot )}(1^\kappa )=1] \right| = \left| Pr[\mathcal{D}(H^{z})=1] - Pr[\mathcal{D}(H^{z+1})=1] \right| = \varepsilon ', \end{aligned}$$

where \(\varepsilon '\) is non-negligible (as argued above), in contradiction to the pseudorandomness of F. We conclude that no such distinguisher \(\mathcal {D}\) exists, and therefore

$$\begin{aligned} \{{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}{\mathop {\equiv }\limits ^{c}} \{\mathcal {P}({\textsc {REAL}}^\mathsf{BMR}_{\mathcal {A}})\}_{f,\bar{x},k^{i}_{w,\beta }, \lambda _{j}}. \end{aligned}$$

\(\square \)

Security in the Malicious Model

When our protocol relies on SPDZ as its underlying MPC, the keys that each party sees are guaranteed to be uniformly chosen from \({\mathbb {F}}_p\), and the masking values of all wires are guaranteed to be random values from \(\{0,1\}\). Thus, the garbled circuit is guaranteed to be built correctly and privately by the parties as a function of the original circuit C (which computes the functionality f), the set of keys of all parties, the set of masking values of all wires, and the PRF results that the parties compute from these keys. However, the PRF results themselves are not guaranteed to be computed correctly (moreover, below we show that it would be wasteful to verify their computation), and we must show that cheating in a PRF result would cause the honest parties to abort.

Specifically, there are two locations in which a maliciously corrupted party might deviate from the protocol:

  • A corrupted party might cheat in the offline phase by inputting a false value as one (or more) of the PRF results of its keys (i.e., a PRF result that is not computed as described in the protocol).

  • A corrupted party \(P_c\), to whom the circuit-input wire w is attached, might cheat in the online phase by sending the external value \(\Lambda _{w}'\ne \lambda _{w}\oplus \rho _{w}\) (i.e., \(P_c\) sends \(\overline{\Lambda _{w}}\)).

It is clear that the second kind of behavior has the same effect as if the adversary inputs the value \(\bar{\rho _{w}}\) instead of \(\rho _{w}\) to the functionality, since \(\bar{\Lambda _{w}}=\lambda _{w}\oplus \bar{\rho _{w}}\). This behavior is therefore permitted to a malicious adversary: changing one's own input to the functionality is not considered cheating, since it is unavoidable even in the ideal model.
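The identity behind this observation is a one-line XOR fact: flipping the broadcast external value is indistinguishable from having flipped the underlying input bit.

```python
# Lambda_w = lambda_w XOR rho_w, so flipping Lambda_w gives exactly the value
# that would have been broadcast for the flipped input bit rho_w XOR 1.
for lam_w in (0, 1):
    for rho_w in (0, 1):
        Lambda = lam_w ^ rho_w
        assert Lambda ^ 1 == lam_w ^ (rho_w ^ 1)
```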

We break the proof of security in the malicious case into two steps: First we show that the adversary cannot break the correctness of the protocol with more than negligible probability, and then we use that result (of correctness) in order to show that the joint distributions of the output of the parties in the ideal and real worlds are indistinguishable.


Let us denote the event in which a corrupted party cheats by inputting a false PRF result in the offline phase by cheat. (The event refers to a single corrupted party; we show below that even if only one party cheats, the honest parties abort.) We now prove the following claim:

Claim 4

A malicious adversary cannot break the correctness property of our protocol except with negligible probability. Formally, denote the set of outputs of the honest parties in our protocol by \(\Pi _{\mathsf {SFE}}^J\) and their outputs when computed by the functionality f by \(y_J\); then for every positive polynomial p and sufficiently large \(\kappa \) it holds that

$$\begin{aligned} Pr[\Pi _{\mathsf {SFE}}^J\ne y_J \wedge \Pi _{\mathsf {SFE}}^J\ne \bot \mid \mathsf{cheat}] \le \frac{1}{p(\kappa )}. \end{aligned}$$


To break the correctness property of the protocol, the adversary has to provide incorrect results of F applied to its keys to the offline phase, such that the generated garbled circuit causes the honest parties to output some set of values different from \(y_J\).

Let \(GC_{SH}\) be the garbled circuit generated by the offline phase in the semi-honest model, i.e., when the adversary provides the correct results of F, and let \(GC_M\) be the garbled circuit generated in the malicious model (where in both cases the random tape used by the underlying MPC, the adversary, and the parties is the same; that is, the same keys and masking values are used).

Observe that if the adversary succeeds in breaking the correctness, then there must be at least one gate g, with input wires a, b and output wire c, and at least one honest party \(P_j\), such that in the evaluation of \(GC_{SH}\) (in the online phase) the active signal that \(P_j\) sees on wire c is \((v,k^{j}_{c,v})\) (where \(v=\Lambda _{c}\) is the external value), while in \(GC_M\) the active signal is \((\bar{v},k^{j}_{c,\bar{v}})\); that is, the adversary succeeded in flipping the signal that passes through wire c.

In the following analysis, we give the adversary more power than it has in reality and assume that it can predict, even before supplying its PRF results (i.e., in the offline phase), which entries are going to be evaluated in the online phase (i.e., it knows the active path). For example, it knows that for some gate g with input wires a, b and output wire c, \(\Lambda _{a}=\Lambda _{b}=0\), and thus the active entry for gate g is \(A_g\). In addition, observe that the success probability of the adversary (in breaking the correctness property) is independent for every gate; thus, it is sufficient to calculate the success probability of the adversary for a single gate and then take a union bound over the gates of the circuit.

We first analyze the success probability of the adversary in breaking the correctness of the gate g with input wires a, b and output wire c. Assume, without loss of generality, that the active entry of gate g is \(A_g\), which is a vector of n elements of \({\mathbb {F}}_p\), such that the jth element of \(A_g\) is calculated (as described in Functionality 4) by

$$\begin{aligned} A^j_g = \left( \sum _{i=1}^n F_{k^{i}_{a,0}}(0\!\parallel \!j\!\parallel \!g) + F_{k^{i}_{b,0}}(0\!\parallel \!j\!\parallel \!g)\right) + k^{j}_{c,v}. \end{aligned}$$
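As a toy, single-gate illustration of this formula (HMAC-SHA256 standing in for F, a 61-bit prime for \(\mathbb{F}_p\), three parties; all names are ours), an evaluator who knows all the active keys can strip the PRF pad and recover \(k^{j}_{c,v}\):

```python
import hashlib
import hmac
import os

P = 2**61 - 1  # toy prime standing in for the field F_p

def F(key: bytes, data: bytes) -> int:
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big") % P

def entry_A_j(keys_a0, keys_b0, k_cv: int, j: int, g: int) -> int:
    """jth element of A_g: sum over all parties of F on both input-wire keys,
    plus the hidden output-wire key, all in F_p."""
    data = b"0|%d|%d" % (j, g)  # encodes 0 || j || g
    pad = sum(F(ka, data) + F(kb, data) for ka, kb in zip(keys_a0, keys_b0))
    return (pad + k_cv) % P

n, j, g = 3, 1, 7
keys_a0 = [os.urandom(16) for _ in range(n)]
keys_b0 = [os.urandom(16) for _ in range(n)]
k_cv = 123456789
A_j = entry_A_j(keys_a0, keys_b0, k_cv, j, g)

# Decryption: subtract the same pad, recomputed from the active keys.
data = b"0|%d|%d" % (j, g)
pad = sum(F(ka, data) + F(kb, data) for ka, kb in zip(keys_a0, keys_b0))
assert (A_j - pad) % P == k_cv
```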

Recall that I is the set of corrupted parties and \(J=\{1,\ldots ,n\}\smallsetminus I\) is the set of honest parties. For simplicity, define

$$\begin{aligned} X^j&\triangleq F_{k^{I}_{a,0}}(0\!\parallel \!j\!\parallel \!g)+F_{k^{I}_{b,0}}(0\!\parallel \!j\!\parallel \!g) = \sum _{i\in I} \big ( F_{k^{i}_{a,0}}(0\!\parallel \!j\!\parallel \!g) + F_{k^{i}_{b,0}}(0\!\parallel \!j\!\parallel \!g) \big ),\\ Y^j&\triangleq F_{k^{J}_{a,0}}(0\!\parallel \!j\!\parallel \!g)+F_{k^{J}_{b,0}}(0\!\parallel \!j\!\parallel \!g) = \sum _{i\in J} \big ( F_{k^{i}_{a,0}}(0\!\parallel \!j\!\parallel \!g) + F_{k^{i}_{b,0}}(0\!\parallel \!j\!\parallel \!g) \big ), \end{aligned}$$

i.e., \(X^j\) is the sum of the PRF results that the adversary provides and \(Y^j\) is the sum of the PRF results that the honest players provide. Thus, rewriting Eq. (4) we obtain

$$\begin{aligned} A^j_g = X^j + Y^j + k^{j}_{c,v}. \end{aligned}$$

In order to break the correctness of gate g, the adversary has to flip the active signal for at least one \(j\in J\) (i.e., for at least one honest party); that is, the adversary has to provide false PRF results \(\tilde{X}^j\) such that the resulting entry

$$\begin{aligned} \tilde{A}^j_g = \tilde{X}^j + Y^j + k^{j}_{c,v} \end{aligned}$$

decrypts to the flipped key \(k^{j}_{c,\bar{v}}\) rather than \(k^{j}_{c,v}\).

Let \(\Delta ^j\) be the difference between the two hidden keys, i.e., \(\Delta ^j = k^{j}_{c,v}-k^{j}_{c,\bar{v}} \mod p\); then \(k^{j}_{c,\bar{v}}=k^{j}_{c,v}-\Delta ^j \mod p\), and thus in order to make the honest party \(P_j\) recover the key \(k^{j}_{c,\bar{v}}\) instead of the key \(k^{j}_{c,v}\), the adversary has to set \(\tilde{X}^j=X^j-\Delta ^j\). Then, it holds that

$$\begin{aligned} \tilde{X}^j + Y^j + k^{j}_{c,v} = X^j - \Delta ^j + Y^j + k^{j}_{c,v} = X^j + Y^j + k^{j}_{c,\bar{v}} = \tilde{A}^j_g \end{aligned}$$

as required, and the jth element (which is the one actually verified by \(P_j\)) will be flipped. Observe that in order to succeed the adversary has to find \(\Delta ^j\). However, since \(k^{j}_{c,v}\) and \(k^{j}_{c,\bar{v}}\) are random elements of \({\mathbb {F}}_p\), the value \(\Delta ^j\) is also a random element of \({\mathbb {F}}_p\). Note that the adversary provides all the PRF results before the garbled circuit and the garbled inputs are revealed, and thus the values that it provides are independent of the garbled circuit. (In particular, they are independent of the keys \(k^{j}_{c,v}\) and \(k^{j}_{c,\bar{v}}\).) The same analysis holds for the entries \(B_g,C_g,D_g\) as well.
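A small experiment confirms that the only shift that flips the key is \(\Delta^j\) itself (a toy prime is used for speed; in the protocol \(p>2^\kappa\), so the hit rate is negligible; all names are ours):

```python
import random

P = 1_000_003  # small toy prime; the protocol uses p > 2^kappa

def flip_succeeds(delta_guess: int, k_v: int, k_vbar: int) -> bool:
    # Shifting the adversary's PRF contribution by delta_guess makes the
    # honest party recover k_v - delta_guess; the flip works iff that is k_vbar.
    return (k_v - delta_guess) % P == k_vbar

k_v, k_vbar = random.randrange(P), random.randrange(P)

# The single correct guess succeeds ...
assert flip_succeeds((k_v - k_vbar) % P, k_v, k_vbar)

# ... while a random guess succeeds with probability 1/P.
hits = sum(flip_succeeds(random.randrange(P), k_v, k_vbar) for _ in range(10_000))
assert hits <= 5  # expected number of hits is 10_000 / P, i.e., about 0.01
```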

Let flipped-g be the event in which the adversary succeeds in flipping the signal for at least one honest party \(P_j\) in the active entry of gate g. It follows that:

$$\begin{aligned} \Pr [{\mathsf{flipped}\text {-}\mathsf{g}}] = \Pr [\Delta ^j=k^{j}_{c,v}-k^{j}_{c,\bar{v}}] = \frac{1}{p} < \frac{1}{2^\kappa }. \end{aligned}$$

Now, assume that when the adversary guesses a wrong \(\Delta ^j\) for some entry of some gate, the parties do not abort and can somehow keep evaluating the circuit using the correct key; then, the probability that the adversary breaks the correctness of the protocol is at most the sum of its success probabilities over all gates. Let t be a polynomial such that \(t(\kappa )\) is an upper bound on the number of gates in the circuit; then by the union bound we get:

$$\begin{aligned} Pr[\Pi _{\mathsf {SFE}}^J\ne y_J \mid \mathsf{cheat}]< \frac{t(\kappa )}{2^{\kappa }} < \frac{1}{q(\kappa )} \end{aligned}$$

for every positive polynomial q. \(\square \)

Emulation in the Ideal Model

Next, we describe the ideal model in which the adversary’s view will be emulated, and we show the existence of a simulator \(\mathcal {S'_{{\text {OUR}}}}\) in the malicious model which uses the simulator \(\mathcal {S_{{\text {OUR}}}}\) in the semi-honest model. The ideal model is as follows:

  • Inputs. The parties send their inputs (\(\bar{x}\)) to the trusted party.

  • Function computed. The trusted party computes \(f(\bar{x})\).

  • Adversary decides. The adversary gets the output \(y_I\) and sends to the trusted party either “continue” or “halt.” If “continue,” the trusted party sends the honest parties \(P_J\) the output \(y_J\); otherwise, the trusted party sends abort to the players \(P_J\).

  • Outputs. The honest parties output whatever the trusted party sent them, while the corrupted parties output nothing. The adversary \(\mathcal {A}\) outputs an arbitrary (PPT) function of the initial inputs of the corrupted parties and the value \(y_I\) obtained from the trusted party.

The reason that the adversary may decide whether the honest parties obtain the output or not is that guaranteed output delivery and fairness cannot be achieved with a dishonest majority in the general case (as shown in [8]).
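The ideal execution described above can be summarized in a minimal sketch (the function and argument names are ours, purely for illustration): the trusted party computes f, shows the adversary its own outputs first, and delivers the honest parties' outputs only if the adversary answers "continue".

```python
def ideal_execution(f, x_bar, corrupted, adversary_decides):
    """Minimal sketch (names are ours) of the ideal model with abort:
    the trusted party computes y = f(x_bar), hands the corrupted
    parties' outputs y_I to the adversary, and delivers the honest
    parties' outputs y_J only if the adversary answers "continue"."""
    y = f(x_bar)                                    # trusted party computes f(x-bar)
    y_I = {i: y[i] for i in corrupted}              # adversary learns its outputs first
    y_J = {j: y[j] for j in y if j not in corrupted}
    if adversary_decides(y_I) == "continue":
        return y_I, y_J                             # everyone receives output
    return y_I, {j: "abort" for j in y_J}           # honest parties receive abort
```

The asymmetry in the sketch is exactly the point of the model: the adversary sees \(y_I\) before deciding whether the honest parties learn anything.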

The ideal execution of f on inputs \(\bar{x}\) and corrupted parties \(P_I\) (that are controlled by adversary \(\mathcal {A}\)) is denoted by \({\textsc {IDEAL}}^{f}_{\mathcal {A},I}(\bar{x})\) and the real execution is denoted by \({\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})\); in both cases, they refer to the joint distribution of the outputs of all parties. (In the following proof, we use \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) to refer to the real execution in the semi-honest model.)

Proof outline. In the following proof, we make use of two procedures, \(\mathcal {P}'\) (which is close to procedure \(\mathcal {P}\)) and \(\mathcal {H}\). The procedure \(\mathcal {P}'\) is given a view of the adversary in the semi-honest model (or a view that is indistinguishable from it, e.g., a simulated view) and a set of keys \(\mathbf{K_I}\), and outputs the exact same view, with the exception that the keys that are opened to the adversary are now \(\mathbf{K_I}\). The procedure \(\mathcal {H}\) is given a view of the adversary in the semi-honest model (or a view that is indistinguishable from it) and a set of PRF results \(\mathbf{F_I}\), and outputs the exact same view, with the exception that it applies the set of PRF results \(\mathbf{F_I}\) to the view as if the adversary had provided them in the real execution of the protocol (that is, the set \(\mathbf{F_I}\) affects the exact same locations in the input view that it would have affected in a real execution of the protocol in the malicious model).

The simulator \(\mathcal {S'_{{\text {OUR}}}}\) will engage in the ideal computation such that it only gives the input \(x_I\) to the trusted party and then receives the output \(y_I\). The simulator \(\mathcal {S'_{{\text {OUR}}}}\) also instructs the trusted party whether to abort or not (i.e., whether to send the honest parties their output). The output of the parties (all of them) in the ideal settings must be indistinguishable from their output in the real execution of our protocol.

The idea of the simulation method is to use the fact that there exists a simulator \(\mathcal {S_{{\text {OUR}}}}\) in the semi-honest model which can construct a garbled circuit that is indistinguishable from the one constructed by our protocol in the semi-honest model. By internally running \(\mathcal {A}\), the simulator \(\mathcal {S'_{{\text {OUR}}}}\) can extract the adversary's inputs \(x_I\), the keys \(\mathbf{K_I}\) that were opened to it, and the exact locations in which \(\mathcal {A}\) has cheated (that is, the set \(\mathbf{F_I}\) of PRF results that it provides given that the set of keys that it sees is \(\mathbf{K_I}\)). Hence, using the procedures \(\mathcal {P}'\) and \(\mathcal {H}\), the simulator \(\mathcal {S'_{{\text {OUR}}}}\) can tweak the garbled circuit output by \(\mathcal {S_{{\text {OUR}}}}\) in these specific locations so that it matches the garbled circuit the adversary would see in a real execution in the malicious model.

The procedure \(\mathcal {P}'\). Let us define the procedure \(\mathcal {P}'\) (Procedure 2), which receives as input a view simulated by \(\mathcal {S_{{\text {OUR}}}}\) or a real view of the adversary in the semi-honest model (\({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \)), along with a set of keys \(\mathbf{K_I}=\{k^{i}_{w,j}\mid i\in I, w\in W, j\in \{0,1\}\}\) (i.e., two keys per wire per corrupted party), and rebuilds the garbled circuit just as \(\mathcal {P}\) does (in the semi-honest case), except that instead of using random keys of its choice it uses the keys received as input for the corrupted parties I. Even though Procedure \(\mathcal {P}\) was originally used to transform a view of the BMR execution into a view of the execution of our protocol, we can use it to transform a view of our protocol into another view of our protocol (e.g., by only changing the keys); this is exactly what we do in the simulation.
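The key-replacement behavior of \(\mathcal {P}'\) can be sketched as follows. The data layout is a hypothetical one of our own (a view as a dictionary keyed by (party, wire, bit) tuples); the real Procedure 2 also rebuilds the garbled gates from the new keys, which we elide here:

```python
def procedure_P_prime(view, K_I):
    """Hypothetical sketch of P' (data layout is ours): rebuild the view
    so that the corrupted parties' keys are exactly those supplied in
    K_I, leaving every other part of the view untouched."""
    new_view = dict(view)                        # shallow copy; only "keys" changes
    new_view["keys"] = {**view["keys"], **K_I}   # overwrite entries for parties in I
    return new_view
```

The essential property, reflected in the sketch, is that the output view is identical to the input view except at the corrupted parties' key positions.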


Claim 5

Denote by \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) the view of the semi-honest adversary in our protocol when the inputs of the parties are \(\bar{x}\), and denote by \(\mathcal {P}'({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) ,\mathbf{K_I})\) the result of procedure \(\mathcal {P}'\) applied to \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) using the keys \(\mathbf{K_I}\); then, given that the keys in \(\mathbf{K_I}\) are chosen uniformly from \({\mathbb {F}}_p\), it follows that for every \(\bar{x}\)

$$\begin{aligned} {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) {\mathop {\equiv }\limits ^{c}} \mathcal {P}'({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) ,\mathbf{K_I}) \end{aligned}$$


The proof is identical to the proof of Claim 2. \(\square \)

Corollary 5.1

Given that the keys in \(\mathbf{K_I}\) are chosen uniformly from \({\mathbb {F}}_p\), the probability ensemble of the view in the semi-honest model \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) and that of the view obtained when procedure \(\mathcal {P}'\) is applied to it (using \(\mathbf{K_I}\)), where the ensembles are indexed by the inputs of the parties \(\bar{x}\), are indistinguishable. That is,

$$\begin{aligned} \left\{ {\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ \mathcal {P}'({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) ,\mathbf{K_I})\right\} _{\bar{x}}. \end{aligned}$$

The procedure \(\mathcal {H}\). We now define the procedure \(\mathcal {H}\) (Procedure 3), which is given a view from the distribution \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) and a set of PRF results \(\mathbf{F_I}\) (computed correctly or not) for every key of the parties \(\{P_i\}_{i\in I}\). The procedure returns a corresponding view in which the garbled circuit is computed as if it were computed in a real execution of our protocol in which the adversary inputs the PRF results \(\mathbf{F_I}\) in the offline phase.

Let \(\mathbf{K_I}\), as before, be the set of keys generated for the corrupted parties in the offline phase, and \(\mathbf{\lambda _I}\) be the set of masking values generated for the circuit-output wires and for the wires that are attached to the corrupted parties (i.e., the masking values that are in the adversary’s view). Note that the PRF results that the corrupted parties input to the functionality (in the offline phase) depend only on the adversary’s random tape r and on the keys and masking values output to them by the underlying MPC. That is, the PRF results that they provide can be seen as \(\mathcal {A}(r,\mathbf{K_I},\mathbf{\lambda _I})\). Since the PRF results that the corrupted parties input to the functionality influence only the resulting garbled gates, in the exact same manner as described in Procedure \(\mathcal {H}\), we get the following:
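The splicing behavior of \(\mathcal {H}\) can be sketched as follows. The data layout is our own hypothetical one: each garbled-gate entry is modeled as a sum, modulo a toy prime, of one PRF contribution per party, and \(\mathcal {H}\) overwrites the corrupted parties' contributions:

```python
P = 2 ** 127 - 1  # toy stand-in for the field modulus p

def procedure_H(view, F_I):
    """Hypothetical sketch of H (data layout is ours): each garbled-gate
    entry is the sum mod p of one PRF contribution per party. H swaps in
    the adversary's (possibly incorrect) contributions F_I at exactly the
    positions they would occupy in a real malicious execution, then
    recomputes the affected gate entries."""
    contribs = {pos: dict(c) for pos, c in view["contribs"].items()}
    for pos, per_party in F_I.items():
        contribs[pos].update(per_party)           # replace corrupted parties' PRF values
    gates = {pos: sum(c.values()) % P for pos, c in contribs.items()}
    return {**view, "contribs": contribs, "garbled_gates": gates}
```

As in the proof, only the gate entries touched by \(\mathbf{F_I}\) change; the rest of the view is passed through unchanged.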

Claim 6

Let \({{\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})}_\mathbf{K_I,F_I}\) be the view of the adversary (not the joint view of all parties) in the execution of our protocol in the malicious model where the keys that the adversary sees are \(\mathbf{K_I}\) and the PRF results that it provides are \(\mathbf{F_I}\). Similarly, let \({{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) }_\mathbf{K_I}\) be the view of the adversary in the execution of our protocol in the semi-honest model where the keys that it sees are \(\mathbf{K_I}\). For every \(\{ \mathbf{K_I,F_I} \}\), it follows that

$$\begin{aligned} {{\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})}_\mathbf{K_I,F_I}\equiv & {} \mathcal {H}({{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) }_\mathbf{K_I}, \mathbf{F_I})\nonumber \\= & {} \mathcal {H}({{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) }_\mathbf{K_I}, \mathcal {A}(r,\mathbf{K_I},\mathbf{\lambda _I})). \end{aligned}$$


The proof follows immediately from the definition of the procedure \(\mathcal {H}\). \(\square \)

The simulator \(\mathcal {S'_{{\text {OUR}}}}\). As mentioned earlier, the simulator \(\mathcal {S'_{{\text {OUR}}}}\) uses the procedures \(\mathcal {H}\) and \(\mathcal {P}'\) described above:

  1. The simulator \(\mathcal {S'_{{\text {OUR}}}}\) runs our protocol internally, playing the roles of the honest parties \(P_J\) and the trusted party, and using the algorithm \(\mathcal {A}\) to control the parties \(P_I\). The simulator halts the internal execution right after the external values \(\mathbf{\Lambda _I}\) have been received for all the corrupted parties in the online phase (that is, it halts after Step 4 of the online phase of Protocol 2). From the internal execution, the simulator \(\mathcal {S'_{{\text {OUR}}}}\) can extract (learn) the following values:

     (a) The keys \(k^{I}_{w,0},k^{I}_{w,1}\) of the adversary for every wire w (in addition to the honest parties’ keys \(k^{J}_{w,0},k^{J}_{w,1}\), since \(\mathcal {S'_{{\text {OUR}}}}\) plays the trusted party who chooses them).

     (b) The masking values \(\mathbf{\lambda }\) for all wires, in particular the masking values of the circuit-input wires that are attached to \(P_I\), i.e., \(\mathbf{\lambda _I}\).

     (c) The values \(\mathbf{F_I}\), i.e., 2n PRF results for every key. Since \(\mathcal {S'_{{\text {OUR}}}}\) plays the trusted party in the internal execution, it also knows the PRF results for the honest parties’ keys. We denote the set of PRF results for all keys (both the adversary’s and the honest parties’) by \(\mathbf{F}\). Moreover, observe that \(\mathcal {S'_{{\text {OUR}}}}\) can check whether \(\mathcal {A}\) has cheated in \(\mathbf{F_I}\).

     (d) From \(\mathbf{\lambda _I}\) and \(\mathbf{\Lambda _I}\), the simulator \(\mathcal {S'_{{\text {OUR}}}}\) can deduce \(\mathcal {A}\)’s input to the functionality, \(x_I\).

  2. Now, moving to the ideal world, the honest parties and \(\mathcal {S'_{{\text {OUR}}}}\) (this time playing the adversary) send their inputs to the trusted party; \(\mathcal {S'_{{\text {OUR}}}}\) sends the value \(x_I\) that was extracted earlier.

  3. The simulator \(\mathcal {S'_{{\text {OUR}}}}\) receives the output \(y_I\) from the trusted party.

  4. \(\mathcal {S'_{{\text {OUR}}}}\) now knows \(\mathcal {A}\)’s input to the functionality \(x_I\) and the output of f on \((x_I,x_J)\) (where \(x_J\) remains hidden from it), so it computes \(v=\mathcal {S_{{\text {OUR}}}}(1^\kappa , I,x_I,y_I)\).

  5. The simulator \(\mathcal {S'_{{\text {OUR}}}}\) computes \(v'=\mathcal {P}'(v,\mathbf{K_I})\).

  6. The simulator \(\mathcal {S'_{{\text {OUR}}}}\) computes \(v''=\mathcal {H}(v',\mathbf{F_I})\) (note that \(\mathbf{F_I}=\mathcal {A}(r,\mathbf{K_I},\mathbf{\lambda _I})\)).

  7. Having the modified view \(v''\) and the garbled circuit \(GC_M\) within it, \(\mathcal {S'_{{\text {OUR}}}}\) now evaluates the circuit on behalf of the honest parties with the inputs \(x_I\) and \(x_J=0^{|J|}\).Footnote 12 If these parties abort, then \(\mathcal {S'_{{\text {OUR}}}}\) instructs the trusted party not to send the output \(y_J\) to \(P_J\) (i.e., to output \(\bot \)). Otherwise, if the evaluation succeeds, then \(\mathcal {S'_{{\text {OUR}}}}\) instructs the trusted party to output the correct output \(y_J\).Footnote 13

  8. The simulator \(\mathcal {S'_{{\text {OUR}}}}\) outputs the view \(v''\) as the adversary’s simulated view.
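The simulator's eight steps form a simple pipeline, which can be sketched as follows. All six arguments are stand-ins of our own for objects in the proof (the internal extraction step, the ideal-model trusted party, the semi-honest simulator \(\mathcal {S_{{\text {OUR}}}}\), the procedures \(\mathcal {P}'\) and \(\mathcal {H}\), and circuit evaluation); none are real APIs:

```python
def run_simulator(extract, trusted_party, S_semi, P_prime, H, evaluate):
    """High-level sketch (ours) of S'_OUR as a pipeline over stand-in
    callables; see the numbered steps in the text."""
    x_I, K_I, F_I = extract()              # step 1: run A internally, learn x_I, K_I, F_I
    y_I = trusted_party.compute(x_I)       # steps 2-3: send x_I, receive y_I
    v = S_semi(x_I, y_I)                   # step 4: semi-honest simulated view
    v = P_prime(v, K_I)                    # step 5: swap in A's keys
    v = H(v, F_I)                          # step 6: splice in A's PRF results
    ok = evaluate(v)                       # step 7: evaluate with fake honest inputs 0^|J|
    trusted_party.decide("continue" if ok else "halt")
    return v                               # step 8: output the simulated view
```

The sketch makes the dependency order explicit: the abort decision is made only after the tweaked view \(v''\) is evaluated, matching Footnote 13.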

Indistinguishability: Real Versus Ideal. To complete the proof of security in the malicious model, we have to prove the following:

Claim 7

The distribution ensemble of the output of the parties under the simulation of \(\mathcal {S'_{{\text {OUR}}}}\) and under the real execution of our protocol are indistinguishable.

Formally, let \(\{{\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})\}_{\bar{x}}\) be the probability ensemble (indexed by the inputs of the parties) of the view of the parties that are under the control of the adversary \(\mathcal {A}\) in the real execution of our protocol and \(\{{\textsc {IDEAL}}^{\mathcal {S'_{{\text {OUR}}}}}_{\mathcal {A}}( \bar{x})\}_{\bar{x}}\) be the probability ensemble of their view in the execution aided by a trusted party (i.e., in the ideal model with the simulator \(\mathcal {S'_{{\text {OUR}}}}\)), then:

$$\begin{aligned} \left\{ {\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})\right\} _{\bar{x}} {\mathop {\equiv }\limits ^{c}} \left\{ {\textsc {IDEAL}}^{\mathcal {S'_{{\text {OUR}}}}}_{\mathcal {A}}( \bar{x})\right\} _{\bar{x}}. \end{aligned}$$


This is immediate from the proof of Claim 8; there, we state the same claim and prove it for every possible set of inputs \(\bar{x}\) of the players. \(\square \)

Claim 8

For every \(\bar{x}\), it holds that

$$\begin{aligned} {\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x}){\mathop {\equiv }\limits ^{c}} {\textsc {IDEAL}}^{\mathcal {S'_{{\text {OUR}}}}}_{\mathcal {A}}( \bar{x}). \end{aligned}$$


Let \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\) be the view of the adversary in the real execution of our protocol (i.e., the view of the adversary that is taken from \({\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})\)) and \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) be the view of the adversary that the simulator \(\mathcal {S_{{\text {OUR}}}}'\) outputs; also, let \(O^J_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}(\bar{x})\) be the output of the honest parties in the real execution of the protocol and \(O^{J,\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) be their output in the ideal model.

We can obviously restate our claim as:

$$\begin{aligned} \left\{ V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}), O^J_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}(\bar{x})\right\} {\mathop {\equiv }\limits ^{c}} \left\{ V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}), O^{J,\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\right\} . \end{aligned}$$

Given that \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}){\mathop {\equiv }\limits ^{c}} V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) (which is proven in Claim 9), we now prove the above by a reduction. Assume by contradiction that there exist a PPT distinguisher \(\mathcal {D}\) and a non-negligible function \(\varepsilon \) of \(\kappa \) such that

$$\begin{aligned}&| Pr[\mathcal {D}(\{V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}), O^J_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}(\bar{x})\})=1] \\&\quad - Pr[\mathcal {D}( \{V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}), O^{J,\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\})=1] | = \varepsilon (\kappa ). \end{aligned}$$

We describe a distinguisher \(\mathcal {D}'\) that is able to distinguish between \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\) and \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) with non-negligible probability; note that since we prove the above for every choice of \(\bar{x}\) the distinguisher may use \(\bar{x}\) in its algorithm. The distinguisher \(\mathcal {D}'\) acts as follows:

  1. The distinguisher \(\mathcal {D}'\) is given a view v of the adversary which is either from a real execution of the protocol or a simulated view, i.e., either \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\) or \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\).

  2. The view v contains the garbled circuit constructed either by the parties or by the simulator. Moreover, as mentioned above, \(\mathcal {D}'\) knows the inputs of all parties (because we prove the claim for a specific choice of \(\bar{x}\)); thus, \(\mathcal {D}'\) evaluates the circuit using \(\bar{x}\) and assigns the honest parties’ output to \(y_J\).

  3. The distinguisher \(\mathcal {D}'\) hands \(\{ v, y_J \}\) to \(\mathcal {D}\) and outputs whatever \(\mathcal {D}\) outputs.
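The reduction above can be sketched in a few lines (all helper names are ours): \(\mathcal {D}'\) receives only a view, recomputes the honest parties' output itself exactly as honest parties would, and forwards the pair to \(\mathcal {D}\).

```python
def build_D_prime(D, x_bar, evaluate_circuit):
    """Sketch (ours) of the reduction: D' turns the (view, output)
    distinguisher D into a view-only distinguisher. It may depend on
    x_bar, since the claim is proved per fixed input vector."""
    def D_prime(v):
        y_J = evaluate_circuit(v, x_bar)   # emulate the honest parties' evaluation
        return D((v, y_J))
    return D_prime
```

The key design point, mirrored in the proof, is that \(\mathcal {D}'\) adds no new information: it only performs the public honest-party computation on the view it was given.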

From the correctness property shown in the proof of Claim 4, it follows that if v has been taken from \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\), then \(\{ v, y_J \}\) and \(\{V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}), O^J_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}(\bar{x})\}\) are indistinguishable; otherwise, if v has been taken from \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\), then \(\{ v, y_J \}\) and \(\{V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}), O^{J,\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\}\) are indistinguishable, because the distinguisher \(\mathcal {D}'\) does exactly what the honest parties do in the real execution. Formally:

$$\begin{aligned}&| Pr[\mathcal {D}(\{V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}), O^J_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}(\bar{x})\})=1] \\&\quad - Pr[\mathcal {D}'( V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}))=1] | = \varepsilon _2(\kappa ) \\&| Pr[\mathcal {D}(\{V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}), O^{J,\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\})=1] \\&\quad - Pr[\mathcal {D}'( V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}))=1] | = \varepsilon _3(\kappa ), \end{aligned}$$

where \(\varepsilon _2(\kappa )\) and \(\varepsilon _3(\kappa )\) are negligible. By the triangle inequality, it follows that

$$\begin{aligned}&\left| Pr[\mathcal {D}'( V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}))=1] - Pr[\mathcal {D}'( V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}))=1] \right| \\&\quad \ge \varepsilon (\kappa ) - \varepsilon _2(\kappa ) - \varepsilon _3(\kappa ), \end{aligned}$$

which is non-negligible, in contradiction to the result in Claim 9. \(\square \)

Claim 9

Let \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\) be the view of the adversary in the real execution of our protocol and \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) be the view of the adversary outputted by the simulator \(\mathcal {S_{{\text {OUR}}}}'\) such that in both cases the inputs to the protocol are \(\bar{x}\).

For every \(\bar{x}\), it holds that

$$\begin{aligned} V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x}){\mathop {\equiv }\limits ^{c}} V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x}). \end{aligned}$$


From the above definitions of Procedure \(\mathcal {P}'\) and \(\mathcal {H}\), we get:

$$\begin{aligned} {{\textsc {REAL}}\text {-}{\textsc {MAL}}^\mathsf{Our}_{\mathcal {A},I}(\bar{x})}_\mathbf{K_I,F_I}&\equiv \mathcal {H}({{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) }_\mathbf{K_I}, \mathbf{F_I}) \\&{\mathop {\equiv }\limits ^{c}} \mathcal {H}(\mathcal {P}'({{\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) }_\mathbf{K_I}), \mathbf{F_I}) \\&{\mathop {\equiv }\limits ^{c}} \mathcal {H}(\mathcal {P}'({\mathcal {S_{{\text {OUR}}}}(1^\kappa ,I,x_I,y_I)}_\mathbf{K_I}), \mathbf{F_I}). \end{aligned}$$

where the first equality follows from Eq. 7 (Claim 6), the second from Claim 5, and the third from the operation of the simulator of the semi-honest model. That is, if there exists a distinguisher that succeeds in distinguishing between \(V^{\mathcal {A}}_{{\textsc {REAL}}\text {-}{\textsc {MAL}}}( \bar{x})\) and \(V^{\mathcal {A},\mathcal {S'_{{\text {OUR}}}}}_{{\textsc {IDEAL}}}(\bar{x})\) with non-negligible probability, then we can easily construct a distinguisher that distinguishes between \({\textsc {REAL}}^\mathsf{Our}_{\mathcal {A}} (\bar{x}) \) and \(\mathcal {S_{{\text {OUR}}}}(1^\kappa ,I,x_I,y_I)\), in contradiction to the security in the semi-honest model. \(\square \)


  1. In our construction, we use a pseudorandom function, as opposed to the pseudorandom generator used in the original BMR protocol [1].

  2. Their original work also offers a version secure against a malicious adversary; however, it requires an honest majority and is not concretely efficient.

  3. We assume that the parties interact over a point-to-point network.

  4. The external values (as denoted in [2]) are the signals (as denoted in [1]) observable by the parties when evaluating the circuit in the online phase.

  5. Recall that we write our protocol assuming a broadcast channel. Thus, even though we write that in the output stage all parties receive output if \(i=0\), when instantiating the broadcast channel with the simple echo-broadcast described in Sect. 2, some of the honest parties may receive the output and some may abort.

  6. Multiplication (Beaver) triples are a standard part of the implementation of the SPDZ protocol; we assume familiarity with this method in this paper.

  7. By “MPC Engine,” we refer to the underlying secure computation, the SPDZ functionality in this case.

  8. This analysis refers to the complexity of the circuit that the parties garble in the offline phase, not the circuit that the parties wish to compute over their private inputs (i.e., \(C_f\)).

  9. These Random calls are followed immediately by an Open to a party. However, in SPDZ, a Random followed by an Open has roughly the same cost as a Random alone.

  10. Note that, unlike [29] and other Yao-based techniques, we cannot process XOR gates for free. On the other hand, we are not restricted to only two parties.

  11. In this section, we actually refer to the execution in the hybrid model where the parties have access to the underlying MPC functionality. We call it the real execution for convenience.

  12. Note that the correctness property shown earlier holds for every input of the honest parties \(x_J\); thus, in order to decide whether to instruct the trusted party to “halt” or “continue,” \(\mathcal {S'_{{\text {OUR}}}}\) can simply use the fake input \(x_J=0^{|J|}\).

  13. The decision whether to abort is based not on whether the adversary cheated, but on the actual evaluation of the circuit, because there might be cases where the adversary cheats and influences only the corrupted parties, for example, when cheating in the ith PRF values used in a garbled gate of some gate whose output wire is a circuit-output wire (where \(i\in I\)).


  1. D. Beaver, S. Micali, P. Rogaway, The round complexity of secure protocols, in 22nd STOC, pp. 503–513, 1990

  2. A. Ben-David, N. Nisan, B. Pinkas, FairplayMP: a system for secure multi-party computation, in ACM CCS, pp. 257–266, 2008

  3. A. Ben-Efraim, Y. Lindell, E. Omri, Optimizing semi-honest secure multiparty computation for the internet, in ACM CCS, pp. 578–590, 2016

  4. A. Ben-Efraim, Y. Lindell, E. Omri, Efficient scalable constant-round MPC via garbled circuits, in ASIACRYPT, pp. 471–498, 2017

  5. M. Ben-Or, S. Goldwasser, A. Wigderson, Completeness theorems for non-cryptographic fault-tolerant distributed computation, in 20th STOC, pp. 1–10, 1988

  6. D. Chaum, C. Crépeau, I. Damgård, Multiparty unconditionally secure protocols, in 20th STOC, pp. 11–19, 1988

  7. S. G. Choi, J. Katz, A. J. Malozemoff, V. Zikas, Efficient three-party computation from cut-and-choose, in CRYPTO, pp. 513–530, 2014

  8. R. Cleve, Limits on the security of coin flips when half the processors are faulty (extended abstract), in 18th STOC, pp. 364–369, 1986

  9. I. Damgård, Y. Ishai, Constant-round multiparty computation using a black-box pseudorandom generator, in CRYPTO, pp. 378–394, 2005

  10. I. Damgård, M. Keller, E. Larraia, C. Miles, N. P. Smart, Implementing AES via an actively/covertly secure dishonest-majority MPC protocol, in SCN, pp. 241–263, 2012

  11. I. Damgård, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart, Practical covertly secure MPC for dishonest majority—or: breaking the SPDZ limits, in ESORICS, pp. 1–18, 2013

  12. I. Damgård, V. Pastro, N. P. Smart, S. Zakarias, Multiparty computation from somewhat homomorphic encryption, in CRYPTO, pp. 643–662, 2012

  13. P. Feldman, S. Micali, An optimal probabilistic protocol for synchronous Byzantine agreement, SIAM Journal on Computing 26(4), 873–933 (1997)

  14. O. Goldreich, S. Micali, A. Wigderson, How to play any mental game or a completeness theorem for protocols with honest majority, in 19th STOC, pp. 218–229, 1987

  15. S. Goldwasser, Y. Lindell, Secure computation without agreement, in DISC, pp. 17–32, 2002

  16. C. Hazay, P. Scholl, E. Soria-Vazquez, Low cost constant round MPC combining BMR and oblivious transfer, in ASIACRYPT, pp. 598–628, 2017

  17. Y. Ishai, M. Prabhakaran, A. Sahai, Founding cryptography on oblivious transfer—efficiently, in CRYPTO, pp. 572–591, 2008

  18. J. Katz, R. Ostrovsky, A. D. Smith, Round efficiency of multi-party computation with a dishonest majority, in EUROCRYPT, pp. 578–595, 2003

  19. M. Keller, E. Orsini, P. Scholl, MASCOT: faster malicious arithmetic secure computation with oblivious transfer, in ACM CCS, pp. 830–842, 2016

  20. M. Keller, P. Scholl, N. P. Smart, An architecture for practical actively secure MPC with dishonest majority, in ACM CCS, pp. 549–560, 2013

  21. M. Keller, A. Yanai, Efficient maliciously secure multiparty computation for RAM, in EUROCRYPT, pp. 91–124, 2018

  22. E. Larraia, E. Orsini, N. P. Smart, Dishonest majority multi-party computation for binary circuits, in CRYPTO, pp. 495–512, 2014

  23. Y. Lindell, Fast cut-and-choose based protocols for malicious and covert adversaries, in CRYPTO, pp. 1–17, 2013

  24. Y. Lindell, B. Riva, Cut-and-choose Yao-based secure computation in the online/offline and batch settings, in CRYPTO, pp. 476–494, 2014

  25. Y. Lindell, N. P. Smart, E. Soria-Vazquez, More efficient constant-round multi-party computation from BMR and SHE, in 14th TCC 2016-B, pp. 554–581, 2016

  26. J. B. Nielsen, P. S. Nordholt, C. Orlandi, S. S. Burra, A new approach to practical active-secure two-party computation, in CRYPTO, pp. 681–700, 2012

  27. R. Pass, Bounded-concurrent secure multi-party computation with a dishonest majority, in 36th STOC, pp. 232–241, 2004

  28. M. C. Pease, R. E. Shostak, L. Lamport, Reaching agreement in the presence of faults, Journal of the ACM 27(2), 228–234 (1980)

  29. B. Pinkas, T. Schneider, N. P. Smart, S. C. Williams, Secure two-party computation is practical, in ASIACRYPT, pp. 250–267, 2009

  30. T. Rabin, M. Ben-Or, Verifiable secret sharing and multiparty protocols with honest majority, in 21st STOC, pp. 73–85, 1989

  31. P. Rogaway, The round complexity of secure protocols, PhD thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1991

  32. A. C. Yao, Protocols for secure computations, in 23rd FOCS, pp. 160–164, 1982



The first and fourth authors were supported in part by the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC consolidators grant agreement no. 615172 (HIPS). The second author was supported under the European Union’s Seventh Framework Program (FP7/2007-2013) grant agreement no. 609611 (PRACTICE), and by a grant from the Israel Ministry of Science, Technology and Space (grant 3-10883). The third author was supported in part by ERC Advanced Grant ERC-2010-AdG-267188-CRIPTO, by EPSRC via grant EP/I03126X and by ERC Advanced Grant ERC-2015-AdGIMPaCT. The first and third authors were also supported by an award from EPSRC (grant EP/M012824), from the Ministry of Science, Technology and Space, Israel, and the UK Research Initiative in Cyber Security. The first, second and fourth authors were supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Directorate in the Prime Minister’s Office.

Author information



Corresponding author

Correspondence to Benny Pinkas.

Additional information


An extended abstract of this paper appeared at CRYPTO 2015; this is the full version.

Communicated by Jonathan Katz.

Appendix: A Generic Protocol to Implement \(\mathcal {F}_{\mathsf {offline}}\)

In this appendix, we give a generic protocol \(\Pi _{\mathsf {offline}}\) which implements \(\mathcal {F}_{\mathsf {offline}}\) using any protocol that implements the generic MPC functionality \(\mathcal {F}_{\mathsf {MPC}}\). The protocol is very similar to the SPDZ-based protocol in the main body; however, this generic functionality requires more rounds of communication (though still a constant number). Phase Two is implemented exactly as in Sect. 4, so we only need to alter Phase One, which is now implemented as follows:

  1. Initialize the MPC Engine: Call Initialize on the functionality \(\mathcal {F}_{\mathsf {MPC}}\) with input p, a prime with \(2^\kappa<p < 2^{\kappa +1}\).

  2. Generate wire masks: For every circuit wire w, we need to generate a sharing of the (secret) masking value \(\lambda _{w}\). Thus, for all wires w, the players execute the following commands:

    • Player i calls InputData on the functionality \(\mathcal {F}_{\mathsf {MPC}}\) for a random value \(\lambda ^{i}_{w}\) of his choosing.

    • The players compute

      $$\begin{aligned}{}[\mu _w]&= \prod _{i=1}^n [\lambda ^{i}_{w}], \\ [\lambda _{w}]&= \frac{[\mu _w]+1}{2}, \\ [\tau _w]&= [\mu _w] \cdot [\mu _w]-1. \end{aligned}$$
    • The players open \([\tau _w]\), and if \(\tau _w \ne 0\) for any wire w, they abort.

  3. Generate garbled wire values: For every wire w, every party \(i \in [1,\ldots ,n]\), and every \(j \in \{0,1\}\), player i generates a random value \(k^{i}_{w,j} \in {\mathbb {F}}_p\) and calls InputData on the functionality \(\mathcal {F}_{\mathsf {MPC}}\) so as to obtain \([k^{i}_{w,j}]\). We denote the vector of shares \([k^{i}_{w,j}]_{i=1}^n\) by \([\mathbf{k}_{w,j}]\).
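The algebra in the wire-mask step is easy to verify in the clear, outside the secret-sharing engine. The following sketch (ours; a toy prime, no shares, no \(\mathcal {F}_{\mathsf {MPC}}\)) checks that \(\tau _w = \mu _w^2 - 1\) opens to 0 precisely when \(\mu _w \in \{-1,1\}\), and that \(\lambda _w = (\mu _w + 1)/2\) maps \(\{-1,1\}\) to a bit in \(\{0,1\}\):

```python
import random

P = 2 ** 127 - 1  # toy prime; the protocol requires 2^kappa < p < 2^(kappa+1)

def wire_mask_in_the_clear(contributions):
    """Sketch (ours, no secret sharing) of the wire-mask algebra:
    mu is the product of the parties' contributions, tau = mu^2 - 1
    opens to 0 exactly when mu is in {-1, 1}, and lambda = (mu + 1)/2
    maps {-1, 1} to {0, 1} in the field."""
    mu = 1
    for lam_i in contributions:
        mu = (mu * lam_i) % P
    tau = (mu * mu - 1) % P
    if tau != 0:
        raise ValueError("abort: some contribution was outside {-1, 1}")
    return ((mu + 1) * pow(2, -1, P)) % P  # (mu + 1)/2 in the field
```

Note that as long as at least one honest party picks its contribution uniformly from \(\{-1,1\}\), the product \(\mu _w\), and hence the bit \(\lambda _w\), is uniform and unknown to the adversary.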




Cite this article

Lindell, Y., Pinkas, B., Smart, N.P. et al. Efficient Constant-Round Multi-party Computation Combining BMR and SPDZ. J Cryptol 32, 1026–1069 (2019). https://doi.org/10.1007/s00145-019-09322-2



  • Secure multiparty computation (MPC)
  • Garbled circuits
  • Concrete efficiency
  • BMR
  • SPDZ