1 Introduction

Secure Multi-Party Computation (MPC) is a general term for techniques which allow a set of n parties to compute a function f on their private inputs such that only the output of the function becomes known. Using MPC as a tool to achieve security generally comes with an inherent slowdown over insecure solutions, so using the right MPC protocol with suitable properties is crucial in order to foster adoption in practice. For certain requirements, it is even known that MPC is impossible to achieve.

For example, while MPC for any function is possible in the honest majority setting, where more than half of the parties are honest, with a dishonest majority it is well known that fairness for MPC is, in general, impossible [15]. The fairness property means that if any corrupted party learns the output then all the honest parties do as well, so a dishonest party cannot withhold the output from the other parties. To work around this impossibility, most MPC protocols for dishonest majority settle for the weaker notion of security with abort, which allows the adversary to abort the protocol, possibly after learning the output.

However, a major downside of this model is that it does not protect against denial-of-service attacks. This motivates the stronger model of MPC with identifiable abort, or ID-MPC, where if the adversary aborts then the honest parties will agree upon the identity of a cheating party. This allows the honest parties to exclude cheaters and re-run the aborting protocol, and it can also be combined with a distributed ledger (such as in  [35]) to achieve monetary fairness (see e.g.  [4] for an overview). The concept of ID-MPC was first implicitly considered in the context of covert security, and more formally studied in later works  [16, 29].

A related, desirable property of an MPC protocol is public verifiability  [3, 45], which allows any external party to verify the correctness of some claimed outputs of the protocol by, for instance, inspecting public values posted to a bulletin board. This is important for settings where the computation is of particular interest to the public, for example, it may be desirable for the results of a research study on private medical data to be publicly verifiable. It is also relevant to the client-server setting, where many clients outsource a computation to a set of non-colluding servers and wish to verify the result, without interacting with the servers.

As well as security properties like the above, an important aspect when choosing an MPC protocol is its efficiency. This can be measured in terms of number of rounds of communication, total communication complexity (i.e. amount of data sent over the network), and computational overhead (compared with computing the function in the clear). In this work, we consider the problem of efficiently constructing MPC in the dishonest majority setting providing security with identifiable abort and public verifiability, in a constant number of rounds of interaction.

1.1 Previous Work on Constant-Round MPC, Identifiable Abort and Public Verifiability

Constant-Round MPC. The main tool for building constant-round MPC is garbled circuits, which were introduced by Yao [49] for 2-party secure computation. Garbled circuits were generalized to the multi-party setting by Beaver, Micali and Rogaway  [6], who constructed a constant-round MPC protocol (called “BMR”) that can support a dishonest majority of participants. The BMR protocol makes heavy, non-black-box use of a pseudorandom generator, so is inefficient in practice.

Subsequently, constant-round MPC making only black-box use of cryptographic primitives was presented by Damgård and Ishai  [18], for the honest majority setting, and extended to the case of a dishonest majority by Ishai et al.  [30]. Later, more efficient black-box solutions with active security for dishonest majority were introduced by Lindell et al.  [36, 37], who used somewhat homomorphic encryption in a preprocessing phase of the protocols. Currently, the most efficient protocols are those by Wang et al. [48] and Hazay et al. [26], which use oblivious transfer (OT) instead of homomorphic encryption, and can be instantiated very efficiently using the TinyOT-protocol [21, 40] based on fast OT extension techniques  [28, 31].

ID-MPC in the Dishonest Majority Setting. The seminal MPC protocol of Goldreich, Micali and Wigderson [24] can be combined with any public-coin zero-knowledge proof system to obtain ID-MPC for dishonest majority, and the same holds for the BMR protocol [6] to achieve a constant round complexity. However, the resulting protocols make extensive, non-black-box use of cryptographic primitives and are not practical. Additionally, [41] implies a constant-round ID-MPC scheme that is not black-box (and secure in the stand-alone setting, as observed by [7]). More recently, there has been interest in concretely efficient ID-MPC. Ishai, Ostrovsky and Zikas [29] presented an ID-MPC protocol in the preprocessing model, where a trusted dealer gives the parties some correlated randomness, with information-theoretic security. They also gave a general compiler that allows removing the trusted dealer, leading to the first ID-MPC protocol making only black-box use of cryptographic primitives, namely, an adaptively secure oblivious transfer protocol and a broadcast channel. Concurrent to this work, Brandt et al. [9] studied the feasibility of ID-MPC from lower-cardinality primitives as well as the relation of the conflict graph to identifiable abort. Their work is orthogonal to ours, as we are interested in concrete and practical constructions.

Baum et al. [5] also construct ID-MPC in the preprocessing model, with better concrete efficiency, by combining a variant of the BDOZ protocol  [8] with information-theoretic signatures, and homomorphic encryption for the preprocessing. Other works [17, 46] have added identifiability to the practical SPDZ protocol [19], obtaining more efficient results in a similar setting. These works, while concretely quite practical, all require a number of rounds of interaction that scales linearly with the multiplicative depth of the circuit being evaluated.

MPC with Public Verifiability. The idea of secure computation with public verifiability was first introduced in the two-party setting for covert security by Asharov and Orlandi [2]. Subsequent works  [27, 33] later improved upon the efficiency of their construction, and in particular the size of the cheating certificate, for which the work of Hong et al. [27] requires \({<}400\) bytes for 128 bit security.

The notion of public verifiability for actively secure dishonest majority MPC (with potentially all parties being corrupted) has been introduced independently by Baum et al. [3] and Schoenmakers and Veeningen [45]. Their protocols ensure privacy if at least one party is honest, and correctness for any level of corruption. In subsequent works, [5, 17] independently showed how to combine public verifiability and identifiable abort for general computations, where either the correctness of the output is attested or a cheater is identified by a third party. Both works rely on expensive tools in a preprocessing phase (lattice-based encryption for large fields), have a circuit depth-dependent round complexity and have not been implemented in practice. Another, more general approach for publicly verifiable MPC with identifiable abort was given in [32], where the authors presented a general compiler based on the approach of [29].

1.2 Contributions

In this work, we present the first concretely efficient and constant-round MPC protocols that provide security with identifiable abort and public verifiability in the dishonest majority setting. Note that all our protocols are in the setting of static corruptions.

Our results for identifiable abort assume access to a broadcast channel, while for public verifiability we need a public bulletin board, and in both cases we count round complexity by assuming that each call to these consumes a single round. In practice, if using an authenticated broadcast protocol [20, 43] to implement this, each broadcast requires \(\varOmega (n)\) rounds of point-to-point messages [23]. Alternatively, broadcast can be realized using a bulletin board or blockchain, giving a constant number of rounds of interaction with this functionality. Note that it seems difficult to avoid the use of broadcast, since MPC with identifiable abort itself implies secure broadcast [16].

We first establish the feasibility of ID-MPC with constant round complexity, with black-box use of cryptographic primitives.

Theorem 1.1 (informal). There exists an ID-MPC protocol for securely realizing any functionality in a constant number of rounds, given black-box access to an adaptively secure oblivious transfer protocol and a pseudorandom function.

Next, our main result is a more concretely efficient protocol, with greatly reduced communication complexity and allowing optimizations like efficient OT extension and free-XOR gates.

Theorem 1.2 (informal). There exists an ID-MPC protocol for securely realizing any functionality in a constant number of rounds, given black-box access to a statically secure oblivious transfer protocol and a circular 2-correlation robust hash function.

Interestingly, and unlike the previous result, in this construction we manage to avoid the need for adaptively secure OT, allowing our protocol to use efficient OT extensions  [28], which are impossible with adaptive security in the standard model, as shown by Lindell and Zarosim  [39]. This means that Theorems 1.1 and 1.2 are incomparable from a feasibility perspective, since although constructions of adaptively secure OT are known from standard assumptions, adaptively secure OT cannot be built from statically secure OT in a black-box manner  [38].

Finally, we show how to upgrade the above protocol to achieve public verifiability using a public bulletin board.

Theorem 1.3 (informal). Assuming additionally a secure public bulletin board, there is a black-box ID-MPC protocol with public verifiability, with a constant number of rounds of interaction with the bulletin board.

We obtain our first feasibility result with a variant of the Damgård-Ishai protocol  [18] for constant-round honest majority MPC, tailored for the dishonest majority setting using information-theoretic signatures  [13]. We then obtain a protocol with identifiable abort by combining this with a transformation from  [29], which needs an adaptively secure OT protocol. While our construction achieves static security, we want to remark that it is possible to construct an adaptively secure constant-round ID-MPC protocol by applying the [29] transform to the [30] protocol. This approach, on the other hand, makes non-black-box use of the underlying PRF in the [29] compiler, whereas our construction is fully black-box.

Our second protocol is much more attractive from a practical perspective, since it builds upon recent, optimized MPC protocols that offer active security with (non-identifiable) abort using BMR-style garbled circuits  [26, 48]. We also support the free-XOR technique  [34], by assuming a suitable circular 2-correlation robust hash function  [14]. Our core idea is a lightweight method of adding identifiability to the MPC protocol of Hazay, Scholl and Soria-Vazquez  [26], which creates a BMR garbled circuit using OT and any non-constant round MPC protocol. We obtain our efficient method in two steps: firstly, we devise a cheater identification procedure for the online phase, based on opening a circuit-independent number of additively homomorphic commitments. The cheater identification is highly efficient as this is the only necessary interaction and because no heavy cryptographic tools such as zero-knowledge proofs are necessary. Secondly, we show how to modify the preprocessing phase of  [26] to produce the necessary committed values in an identifiable way. To achieve the latter, we improve techniques by Ishai, Ostrovsky and Zikas  [29] to avoid the use of adaptively secure OT. Our approach in doing so might be of independent interest.

Concrete Efficiency. We now expand on the concrete efficiency of our protocols and compare them with existing constant-round, non-identifiable protocols, as illustrated in Table 1. Note that the currently most practical, constant-round MPC protocols are all obtained by combining garbled circuits with the so-called ‘TinyOT’ protocol  [40], which combines OT extension and additive secret sharing with information-theoretic MACs over \(\mathbb {F}_2\). The TinyOT part turns out to be the dominant overall cost in these protocols, in terms of communication complexity. The parameter B in Table 1 is related to a statistical security parameter used in cut-and-choose in TinyOT, and in practice is around 3–6. The most efficient multi-party variant of TinyOT  [48] has a communication complexity of \(O(n^2B\kappa )\) bits per AND gate. The most efficient constant-round protocols have roughly the same communication complexity as TinyOT.

Our efficient protocol from Sects. 3 and 4 uses TinyOT in a similar way to previous works, with the difference that we also use homomorphic commitments to obtain identifiability. While most constructions of publicly verifiable homomorphic commitments use public-key style assumptions like discrete log, we are able to get away with a weaker form of homomorphic commitment that only allows a bounded number of openings. This variant can be based on any extractable commitment scheme  [12], and the main computational cost is PRG evaluations and encodings of an error-correcting code, which can be implemented very efficiently, so we expect only a small computational overhead on top of the non-identifiable protocols. Additionally, the introduced communication overhead from these commitments (per gate) is expected to be a factor of 2–3 over the communication that is necessary to perform the string oblivious transfers required to garble a gate as in [26].

Regarding communication complexity, the main overhead in our protocol comes from creating and broadcasting homomorphic commitments to the \(O(n \cdot |C |)\) wire keys in a BMR garbled circuit. We minimize this cost by using the efficient homomorphic commitments mentioned above, which have only a small constant communication overhead. Using this scheme, the overhead of commitments is not much more than the cost incurred from having each party broadcast its shares of the garbled circuit (\(4 n^2 \cdot \kappa |C |\) bits) at the end of our preprocessing phase. We remark that this broadcast step is not needed in non-identifiable protocols  [26, 48], which can get away with reconstructing the garbled circuit towards a single party who then sends the sum of all shares.

Table 1. Efficiency of constant-round MPC protocols with and without identifiable abort, for a circuit with \(|C |\) AND gates. ID/PV means identifiability or public verifiability. Communication complexity measured in total number of bits transmitted across the network; \(\mathsf {bc}(n)\) is the cost of securely broadcasting O(n) bits. The ‘free-XOR’ assumption is a circular 2-correlation robust hash function  [14]

To compare with existing non-constant round protocols such as [5, 46], we remark that these use lattice-based preprocessing. Such preprocessing is much more computationally expensive than our lightweight techniques based on OT extension. In terms of broadcasts, the offline phase of [5] has \(O(n^3|C|\kappa )\) broadcast complexity, which is worse than our protocol. [46] does not describe the offline phase in detail, but it likely requires \(O(n\kappa |C|)\) broadcasts for threshold decryption of the homomorphic encryption scheme. Regarding round complexity, even with the factor n overhead when implementing broadcast, our protocol likely performs significantly better for complex functionalities with high-depth circuits. In general, [5, 17, 46] are for arithmetic circuits and likely applicable in different scenarios than ours, making a direct comparison difficult.

1.3 Technical Overview

In this overview, we assume some familiarity with garbled circuits and their use in MPC. For a more thorough introduction, we refer to the full version.

Feasibility of constant-round ID-MPC. To first establish a feasibility result, we use a variant of the garbling scheme from  [18] combined with information-theoretic signatures  [13, 25, 47], together with a compiler for sampling functionalities with identifiable abort from  [29]. Although this construction is quite natural, we are not aware of it being described before.

In a little more detail, [18] is based on a garbling scheme where, similarly to BMR, when evaluating the garbled circuit, for each wire we obtain a vector of keys \((K_w^1, \dots , K_w^n)\), where the component \(K_w^i\) is known to party \(P_i\). The garbling uses a specialized encryption scheme, which encrypts \(K_w^i\) by first producing verifiable secret shares (VSS) \((K_w^i[1], \dots , K_w^i[n])\) of \(K_w^i\), and then encrypting each share \(K_w^i[j]\) under the corresponding input wire key components of \(P_j\), as:

$$\begin{aligned} E_{K_u,K_v}(K_w^i) := \begin{pmatrix} \mathsf {H}(K_u^1, K_v^1) \oplus K_w^i[1] \\ \vdots \\ \mathsf {H}(K_u^n, K_v^n) \oplus K_w^i[n] \\ \end{pmatrix} \end{aligned}$$

This is amenable to secure computation in a black-box way, as \(P_j\) can input the hash values \(H(K_u^j,K_v^j)\) to the protocol, and as long as the majority of these hash values are correct, which is guaranteed by an honest majority, the VSS allows correct reconstruction of \(K_w^i\).
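To make the structure concrete, the following is a minimal Python sketch of this share-then-encrypt step. It substitutes additive (XOR) sharing for VSS and SHA-256 for the hash \(\mathsf {H}\); both are simplifying assumptions for illustration, not the instantiation used in the protocol.

```python
import hashlib, os

KAPPA = 16  # key-component length in bytes (illustrative)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def H(ku, kv):
    # Stand-in for the hash H applied to P_j's input-wire key components.
    return hashlib.sha256(ku + kv).digest()[:KAPPA]

def additive_share(secret, n):
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def encrypt_component(K_w_i, K_u, K_v):
    # One row per party j: H(K_u^j, K_v^j) XOR (j-th share of K_w^i).
    shares = additive_share(K_w_i, len(K_u))
    return [xor(H(K_u[j], K_v[j]), shares[j]) for j in range(len(K_u))]

def decrypt_component(ct, K_u, K_v):
    # XOR-ing all recovered shares reconstructs K_w^i.
    out = bytes(KAPPA)
    for j in range(len(ct)):
        out = xor(out, xor(ct[j], H(K_u[j], K_v[j])))
    return out
```

In the dishonest-majority variant described next, each share additionally carries an information-theoretic signature, so a share decrypted with an incorrect hash value no longer verifies.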

We adapt this to the dishonest majority setting by replacing VSS with additive secret-sharing and information-theoretic signatures. Roughly, we consider a preprocessing functionality which samples additive shares of each \(K_w^i\) and augments each share with a signature under a signing key that no-one gets, while also allowing corrupt parties to choose their hash values for each gate. This suffices to obtain ID-MPC in an online phase, since if any corrupt party uses an incorrect hash value then the corresponding signature on their share will no longer verify.

To realize the preprocessing phase which outputs authenticated shares of the garbled circuit, we apply the compiler from  [29], which transforms a protocol for any sampling functionality that is secure with abort, into one with identifiable abort. We remark that in the preprocessing functionality, the size of each garbled gate is \(O(n^3 \cdot \kappa )\) bits, and the communication complexity of the protocol to generate this is at least \(\varOmega (n^4 \kappa )\) due to overheads in  [29], so this approach is not practical.

For space reasons, the complete description of these protocols can be found in the full version.

Concretely efficient ID-MPC with BMR. As mentioned before, our protocol follows the same approach as [26] (‘HSS’), based upon BMR garbled circuits. In BMR garbling, the vector of output wire keys \((K_w^1,\dots ,K_w^n)\) of a gate g is directly encrypted under the input wire keys, with

$$\begin{aligned} E_{K_u,K_v}(K_w) := \bigoplus _{j=1}^n \mathsf {H}(g, K_u^j, K_v^j) \oplus (K_w^1, \dots , K_w^n) \end{aligned}$$

When using free-XOR with BMR, each pair of keys on a wire is of the form \((K_{w,0}, K_{w,1}=\) \(K_{w,0} \oplus {R})\) for some fixed string \({R}= ({R}^1,\dots ,{R}^n)\), with \({R}^i\) known to \(P_i\). When garbling an AND gate with input wires uv and output wire w, we need to produce the 4 rows

$$\begin{aligned} \begin{aligned} \mathtt {circ}_{g,a,b} =&\bigoplus \nolimits _{j=1}^n \mathsf {H}(g, K_{u,a}^{j}, K_{v,b}^{j}) \oplus (K_{w,0}^{1}, \dots ,K_{w,0}^{n}) \\&\oplus ({R}^1,\dots ,{R}^n) \cdot ((\lambda _u\oplus a)(\lambda _v \oplus b)\oplus \lambda _w ), \end{aligned} \end{aligned}$$
(1)

for \((a,b)\in \{0,1\}^2\), where \(\lambda _u,\lambda _v,\lambda _w\) are the secret wire masks assigned to each wire.

In the HSS protocol, to generate additive shares of the above, each party \(P_i\) first samples all of their key components and global string \({R}^i\), as well as secret shares of all the wire masks. Then, a generic MPC protocol for binary circuits is used to compute shares of the wire mask products \(\lambda _u \cdot \lambda _v\), and shares of the products between each wire mask and every global string \({R}^i\) are computed using OT. This allows the parties to obtain additive shares of the entire garbled circuit, since each hash value in (1) can be computed locally by party \(P_j\). If any party uses an incorrect hash value, it was shown in  [26] that this would result in an abort in the online phase with overwhelming probability, since each party can check correctness when decrypting a gate by checking for the presence of one of their own key components.
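As an illustration of this local share computation, here is a minimal Python sketch (with SHA-256 standing in for the hash \(\mathsf {H}\) and illustrative variable names): each party's share of a garbled row is its own hash value XORed with its additive share of the key and mask terms of (1), and the XOR of all shares reconstructs the row.

```python
import hashlib
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def H(g, ku, kv, out_len):
    # Stand-in for the circular 2-correlation robust hash, expanded to n*kappa bits.
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(bytes([ctr]) + g.to_bytes(4, "big") + ku + kv).digest()
        ctr += 1
    return out[:out_len]

def row_share(g, K_ua_j, K_vb_j, plaintext_share_j):
    # plaintext_share_j: P_j's additive share of (K_{w,0}^1,...,K_{w,0}^n) XOR R*(mask bit),
    # produced by the preprocessing via bit/bit and bit/string multiplications.
    return xor(H(g, K_ua_j, K_vb_j, len(plaintext_share_j)), plaintext_share_j)

def reconstruct_row(shares):
    # XOR of all parties' shares yields circ_{g,a,b} from Eq. (1).
    return reduce(xor, shares)
```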

Identifiable online phase. Adding identifiable abort to BMR is more challenging than with  [18], since if any error is introduced to the hash values in (1), we have no direct way of knowing which party introduced it. Note that if the parties were committed to the entirety of the shares of the garbled circuit (i.e. all of (1)) then this would be straightforward: they could simply broadcast their shares, then attempt to run the online phase; if any party sends an incorrect share then the protocol aborts with overwhelming probability, and in our case everyone could then open their commitments to prove they behaved honestly. Unfortunately, we do not know how to efficiently create commitments to all of the shares, since in particular each share contains a hash value \(\mathsf {H}(g, K_{u,a}^{j}, K_{v,b}^{j})\), and it seems challenging to reliably commit to these without resorting to proving statements about hash function computations in zero-knowledge.

Instead, we observe that it is actually enough if each party is given commitments to partial shares of the garbled gates, namely, shares of the whole of (1) except for the hash values. To see this, consider that some party aborts at gate g in the computation. If g is the first (in topological order) such gate where the parties detect an inconsistency, then it must hold that the preceding gates were correctly garbled. This means that the wire keys from the previous gate can be used to compute the correct \(\mathsf {H}(\cdot )\) values by every party. Hence, we can verify the garbling of g by opening the commitments to the partial shares, then reconstructing the shares that should have been sent by ‘filling in’ the remaining parts of the garbled gate that were not committed to. Finally, the resulting shares can be compared with the shares that were actually sent, allowing us to detect a cheating party.
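A minimal sketch of this check (all interfaces are illustrative): given the opened partial shares and the already-verified input wire keys of gate g, every party's expected share is recomputed by filling in its hash value and compared with the share it actually broadcast.

```python
def identify_cheaters(opened_partial_shares, hash_values, broadcast_shares):
    # opened_partial_shares[j]: P_j's committed share of row (1) *without* its hash value.
    # hash_values[j]: H(g, K_{u,a}^j, K_{v,b}^j), recomputable by everyone since the
    # input wire keys of g are correct (g is the first inconsistent gate).
    cheaters = []
    for j in range(len(broadcast_shares)):
        expected = bytes(x ^ y for x, y in zip(opened_partial_shares[j], hash_values[j]))
        if expected != broadcast_shares[j]:
            cheaters.append(j)
    return cheaters  # parties whose broadcast share disagrees with their commitments
```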

We therefore rely on a preprocessing functionality that adds XOR-homomorphic commitments to all the wire keys and shares of the bit-string products. Since the commitments are homomorphic, this easily allows computing commitments to the partial shares as required.

Identifiable preprocessing phase. Our first challenge with the preprocessing is to create the necessary commitments to the bit-string products in a reliable way. We show that without identifiability, this can be done without too much difficulty, using a consistency check based on a technique adapted from  [26].

Next, the main challenge is to make the whole preprocessing identifiable. One possible approach would be to simply apply the same IOZ transformation we used for the protocol based on Damgård-Ishai, to convert a protocol \(\varPi _{\scriptstyle \mathsf {Prep}}\) that realizes the preprocessing functionality \(\mathcal {F}_{\scriptstyle \mathsf {Prep}}\) with abort into a new protocol \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) that is identifiable. Unfortunately, this transformation has two main drawbacks: Firstly, the protocol \(\varPi _{\scriptstyle \mathsf {Prep}}\) needs to compute not only the outputs of \(\mathcal {F}_{\scriptstyle \mathsf {Prep}}\), but authenticated secret shares of these outputs, where each share has an information-theoretic signature attached to it; since IT signatures have a multiplicative \(\varOmega (n)\) storage overhead, this adds a significant cost burden to the protocol. Secondly, \(\varPi _{\scriptstyle \mathsf {Prep}}\) needs to be secure against adaptive corruptions, which is in general much harder to achieve than static corruptions; in particular, it rules out the use of efficient OT extensions unless we rely on the programmable random oracle model  [10, 39].

We work around these issues with careful modifications to the  [29] transformation, which are tailored specifically to our preprocessing phase. We first briefly recall the idea behind IOZ. To construct \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\), first each party commits to its randomness in \(\varPi _{\scriptstyle \mathsf {Prep}}\), and then if \(\varPi _{\scriptstyle \mathsf {Prep}}\) aborts, everyone simply opens their randomness, which is safe as the preprocessing phase is independent of the parties' inputs. The main challenge when proving security of this approach is that if the protocol aborts, the simulator needs to be able to convincingly open the honest parties' random tapes to the adversary, explaining the previously simulated protocol messages. This leads to the above two issues, since (1) if the protocol aborts after a corrupt party has seen its outputs, the simulator may not be able to produce honest parties' outputs that match, and (2) the simulator may not be able to come up with convincing honest parties' random tapes, since the previous honest parties' messages were simulated independently of the actual outputs from \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\). In IOZ, (1) is resolved by producing an authenticated secret-sharing of the outputs, and (2) is resolved by requiring \(\varPi _{\scriptstyle \mathsf {Prep}}\) to be adaptively secure.

In our work, we address (1) by ensuring that an abort is only possible in \(\varPi _{\scriptstyle \mathsf {Prep}}\) before the ideal functionality \(\mathcal {F}_{\scriptstyle \mathsf {Prep}}\) has delivered outputs to the honest parties. This means there is no danger of inconsistencies between the simulated honest parties' outputs and those seen by the distinguisher. Our method of resolving (2) is more complex. First, consider a simulation strategy where, when running \(\varPi _{\scriptstyle \mathsf {Prep}}\) within \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\), the simulator simply performs an honest run of \(\varPi _{\scriptstyle \mathsf {Prep}}\) on random inputs. If \(\varPi _{\scriptstyle \mathsf {Prep}}\) later aborts, there is no problem opening the honest parties' random tapes, since the simulator knows these. The problem now is that the simulator can no longer extract any corrupt parties' inputs which may have to be sent to \(\mathcal {F}_{\scriptstyle \mathsf {Prep}}\), or ensure that the corrupt parties obtain the outputs sent to them by \(\mathcal {F}_{\scriptstyle \mathsf {Prep}}\). To work around this, we combine \(\varPi _{\scriptstyle \mathsf {Prep}}\) with a homomorphic commitment scheme, and require that every party commits to all values used in \(\varPi _{\scriptstyle \mathsf {Prep}}\); we ensure consistency of these commitments with the values in \(\varPi _{\scriptstyle \mathsf {Prep}}\) with a simple test where we open random linear combinations of the commitments, and modify the (reactive) protocol \(\varPi _{\scriptstyle \mathsf {Prep}}\) to open the same combinations. If the homomorphic commitment scheme is UC secure with identifiable abort, then the simulator can use this to extract and open the values in \(\varPi _{\scriptstyle \mathsf {Prep}}\), allowing us to prove security of the whole protocol. A suitable commitment scheme can be efficiently constructed, building upon any (non-homomorphic) extractable commitment and a PRG  [12].
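The following is a minimal sketch of this consistency test, under the assumption that both the commitment scheme and the protocol expose an interface for opening \(\mathbb {F}_2\)-linear combinations (the callback names are hypothetical): random binary coefficient vectors are sampled, and both sides must open the same combinations to the same values.

```python
import secrets

def linear_combination_check(num_values, num_checks,
                             open_committed_combo, open_protocol_combo):
    # open_committed_combo / open_protocol_combo: hypothetical callbacks that open the
    # XOR of the committed values / protocol values selected by a 0/1 coefficient vector.
    for _ in range(num_checks):
        coeffs = [secrets.randbelow(2) for _ in range(num_values)]  # e.g. sampled via F_Rand
        if open_committed_combo(coeffs) != open_protocol_combo(coeffs):
            return False  # mismatch: the commitments are inconsistent, abort (identifiably)
    return True
```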

We apply the above blueprint to the preprocessing phase of HSS, which performs multiplications between random bits, as well as between bits and random, fixed strings, to produce additive shares of the garbled circuit. With our transformation, the parties actually end up producing homomorphic commitments to shares of some (but not all) parts of the garbled circuit; namely, they are committed to the wire keys and the shares of the bit-string products from (1).

Achieving public verifiability. Public verifiability with identifiable abort requires not only that a party from the protocol can identify a cheater, but anyone can do so (or verify correctness of the result) by simply inspecting some messages posted to a public bulletin board. Adding this to our efficient construction requires modifying both the preprocessing and online phases of the protocol. First, we modify our preprocessing method so that the underlying protocol that is secure with abort satisfies a property called public detectability, which requires that an external verifier, who is given the random tapes of all parties in the protocol and all broadcast messages, can detect whether any cheating occurred and identify a corrupted party if so. This is similar to the concept of \(\mathcal {P}\)-verifiability used in IOZ  [29], but removes the requirement that the verifier is also given the view of one honest party. We then show that any suitable, secure protocol can be transformed to be publicly detectable, with a simple transformation that is similar to the \(\mathcal {P}\)-verifiable transformation from  [29]. Using the publicly detectable protocol in our identifiable preprocessing phase, and replacing the broadcast channel with a bulletin board, we obtain a publicly verifiable preprocessing protocol with identifiable abort.

To add public verifiability to the online phase, we need to ensure that an external evaluator can detect any cheating in the garbled circuit, given only the public transcript. It turns out that in case of abort, almost all of the computation done by an honest party when detecting a cheater relies only on public information; the only exception is the 0/1 wire values that are obtained when evaluating the garbled circuit, which each party computes by looking at its private keys. To allow an external verifier to compute these values, we modify the preprocessing with a variant of the point-and-permute technique, which encodes these values as the last bit in the corresponding key on that wire. Now if the protocol aborts, and the entire transcript of broadcast messages has been posted to the public bulletin board, the verifier has all the information that is needed to detect any inconsistency and identify a cheating party.
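A minimal sketch of this encoding (the key layout is illustrative): the public value \(\varLambda _w\) is placed in the least significant bit of the wire key, so a verifier holding only the public transcript can read off the values needed to re-trace the active path.

```python
def encode_public_value(key: bytes, lam: int) -> bytes:
    # Force the least significant bit of the key to the public wire value Lambda_w.
    return key[:-1] + bytes([(key[-1] & 0xFE) | (lam & 1)])

def read_public_value(key: bytes) -> int:
    # An external verifier recovers Lambda_w directly from the key in the transcript.
    return key[-1] & 1
```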

Notice that our public cheater identification is protocol-specific and does not require heavy NIZK machinery. This differentiates it from [32] who gave a general compiler that achieves publicly verifiable ID-MPC, but where the generated “cheating certificate” is a NIZK that has to re-compute the next-message function of the compiled protocol. That means that compiling a BMR-style protocol using their approach might require giving a zero-knowledge proof of correct garbling of the whole circuit, whereas our certificate just requires a few commitments to be opened.

Fig. 1. Illustration of our efficient protocol with identifiable abort.

Paper Outline. In Fig. 1 we show the relationship between our protocols and functionalities in our main construction with identifiable abort. Section 3.1 contains our publicly detectable transformation, used for both identifiable abort and public verifiability, and its instantiation from the OT-based preprocessing phase of [26]. Section 3.2 describes our identifiable preprocessing protocol, which uses the publicly detectable protocol in a non-black-box way (but with black-box use of its next-message function), and combines this with homomorphic commitments. In Sect. 4, we present the main MPC protocol with identifiable abort, which uses \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) to create and then evaluate a BMR garbled circuit, with identifiable abort. In Sect. 5, we describe how to modify the previous protocol to additionally obtain public verifiability, using a bulletin board instead of a broadcast channel.

2 Preliminaries

Let \(\kappa \) (resp. s) denote the computational (resp. statistical) security parameter. We let \(\mathcal {P}=\{P_1,\dots ,P_n\}\) be the set of parties involved in any particular protocol/functionality, and \({\mathcal {V}}\) be a verifier which might check \(\mathcal {P}\)’s computation at a later point. Among those parties, we denote by \(\mathcal{I}\subset \mathcal {P}\) the set of corrupted parties and by \(\overline{\mathcal{I}}=\mathcal {P}\setminus \mathcal{I}\) the honest parties. Let \(C_f\) be a circuit computing the function \(f: \mathbb {F}_2^{n_{\mathtt {in}}} \rightarrow \mathbb {F}_2^{n_{\mathtt {out}}}\) with \(n_{\mathtt {in}}\) inputs and \(n_{\mathtt {out}}\) outputs. To ease the reading, we drop the dependence on f when it is clear from the context. We define the disjoint sets \(\mathtt {input}_1,\dots ,\mathtt {input}_n\subset [n_{\mathtt {in}}]\) as the inputs which each party in \(\mathcal {P}\) provides to the circuit C, so \(P_i\) provides the inputs in \(\mathtt {input}_i\). The circuit C has the set of AND gates G, for which we denote the extended set \(G^{\mathtt {ext}}:=G\times \mathbb {F}_2^2\). For \(\tau \in G^{\mathtt {ext}}\), we usually denote \(\tau =(g,a,b)\) where g is the AND gate in question and \(a,b\in \mathbb {F}_2\) are used to point to a specific entry in g’s (garbled) truth table.

2.1 Security Model and Primitives

We will prove security of our protocols in the universal composability (UC) framework [11]. We consider a static, active adversary corrupting up to \(n-1\) parties. To achieve our goals, we will make use of multiple primitives, whose ideal functionalities we now introduce.

Identifiable Abort Version of Functionalities. In order to be able to rigorously discuss our protocols, we now formalize what it means to enhance an ideal functionality \(\mathcal {F}_{\scriptstyle \mathsf {}}\) to support identifiable abort, which we denote by \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) and describe in Fig. 2. As shown in [29], the UC composition theorem extends to security with identifiable abort in a straightforward way.

Fig. 2. Extending a functionality \(\mathcal {F}\) to its identifiable abort version \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\).

An \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) functionality is exactly as \(\mathcal {F}\), but additionally allows the adversary to send a message \((\mathtt {Abort},\mathcal{J})\) at any point of time, where \(\mathcal{J}\) denotes a non-empty set of dishonest parties. Upon receiving this message, the functionality ceases all computation and outputs the set \(\mathcal{J}\) to all honest parties. The main points of identifiable abort are that (i) The adversary cannot abort without revealing the identity of at least one corrupt party; and (ii) All honest parties interacting with \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) agree on the revealed corrupted parties.

Coin Tossing. Coin tossing is used by a set of parties to fairly sample a number of coins according to a fixed distribution. In this work we will use an identifiable version of it, \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Rand}}\), meaning that either all computing parties learn the sampled coins or, otherwise, the honest parties agree on a subset of dishonest parties who cheated in the sampling process. The standard \(\mathcal {F}_{\scriptstyle \mathsf {Rand}}\) functionality is described in the full version.

Secure Broadcast. Our work will crucially rely on the use of secure (or, authenticated) broadcast, which is a standard functionality given in the full version of the paper. Nevertheless, in order to achieve protocols with identifiable abort, we need to enhance the description of this functionality to \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Broadcast}}\) as previously described. This is not a problem, as all standard protocols for \(\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\) such as  [20, 43] are already identifiable. Under the assumption of a Public Key Infrastructure, implementing \(\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\) requires \(\varOmega (n)\) rounds of communication and signatures  [23]. If the parties have access to an authenticated bulletin board, \(\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\) can be achieved with a single call to the board.

Fig. 3. Functionality \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) for homomorphic multiparty commitment with delayed verifiability.

Homomorphic Commitments. In this work, we use homomorphic commitments. These allow a sender to commit to a message \({M}\) at a certain time, so that it can later open \({M}\) to a set of receivers. The properties required from commitment schemes are that (i) \({M}\) remains hidden from the receivers until the opening (hiding); and (ii) the sender can only open \({M}\) and no other value to the receivers, once committed (binding). We further require that the commitment scheme is homomorphic, meaning that the sender can open any linear combination of commitments that it made without revealing anything but the combined output. The functionality \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) is described in Fig. 3.
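As a reference point for how \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) is used in this work, the following is a minimal sketch of the functionality modelled as a trusted party (a model only, not an implementation): it stores committed messages and only ever reveals requested XOR-combinations of them.

```python
class FHCom:
    """Toy model of the homomorphic commitment functionality (trusted party)."""

    def __init__(self):
        self.store = {}  # commitment id -> committed message (bytes of equal length)

    def commit(self, cid, message: bytes):
        assert cid not in self.store  # binding: a committed value cannot be changed
        self.store[cid] = message

    def open_linear(self, cids):
        # Homomorphic opening: reveal only the XOR of the selected committed messages.
        out = bytes(len(self.store[cids[0]]))
        for cid in cids:
            out = bytes(x ^ y for x, y in zip(out, self.store[cid]))
        return out
```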

To efficiently implement \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) we would like to use the homomorphic commitment scheme of Cascudo et al. [12], but it turns out that this is not possible directly. The problem is that \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) (which we use throughout this work) allows performing multiple rounds of Add and Open, whereas [12] permits only a single call to the Open interface. In the full version, we present a slightly weaker functionality \(\mathcal {F}_{\scriptstyle \mathsf {WHComm}}\), which allows multiple rounds of Open but no Add. We show that this is sufficient for our application and also how this weaker functionality can then be implemented using the protocol in  [12].

3 Preprocessing Phase

Here we describe our preprocessing phase with identifiable abort. At a high level, we proceed in two steps: first, we describe a protocol with the weaker property of public detectability, and then we bootstrap it to a preprocessing protocol with identifiable abort using homomorphic commitments.

3.1 Publicly Detectable MPC with (Non-identifiable) Abort

We start this section by recalling the notion of the (deterministic) next message function, \(\mathsf {nmf}_\varPi ^i\), of a party \(P_i\) in an n-party protocol \(\varPi \) that is executed in a limited number of rounds, say \(\rho \). Given the \(\textsc {View}\) of \(P_i\) at the beginning of round h, where \(h \le \rho \), i.e. the set \(\textsc {View}_h^i = \left( i,{X},{X}_i,\mathsf {Rnd}_i,({M}_{i,1},\dots ,{M}_{i,h})\right) \), where i identifies party \(P_i\), \({X}\) is the common public input, \({X}_i\) and \(\mathsf {Rnd}_i\) are \(P_i\)’s private input and randomness respectively, and \(({M}_{i,1},\dots ,{M}_{i,h})\) are the messages received by \(P_i\) in the first h rounds, then \(\mathsf {nmf}^i_{\varPi }(\textsc {View}_h^i)={M}^i_{h+1}\) are the messages that \(P_i\) has to send in round \(h+1\). In particular, \(\mathsf {nmf}^i_{\varPi }(\textsc {View}_\rho ^i)={Y}_i\), where \({Y}_i\) is \(P_i\)’s output, and \(\textsc {View}^i=\textsc {View}_\rho ^i\). In other words, the messages sent by each party \(P_i\) at each round are deterministically specified as a function of \(P_i\)’s inputs and random coins, and messages received by \(P_i\) in previous rounds.

We can now introduce the notion of public detectability. It is similar to that of \(\mathcal {P}\)-verifiability given in [29]. However, whereas the notion of \(\mathcal {P}\)-verifiability in that work was conceived with identifiable abort in mind, public detectability will allow us to implement functionalities not only achieving identifiable abort, but also public verifiability if required (see Sect. 5).

Definition 3.1 (Public detectability). Let \(\varPi \) be a protocol in the CRS model and \(\mathcal {D}\) a deterministic poly-time algorithm, called the detector, which takes as inputs the CRS, the inputs and random tape of all parties in \(\mathcal {P}\) involved in the execution of \(\varPi \), and any message sent over an authenticated broadcast channel during the execution of \(\varPi \). We say that the protocol \(\varPi \) is publicly detectable if the detector \(\mathcal {D}\) outputs a non-empty subset \(\mathcal{J}\subset \mathcal {P}\) corresponding to (some of) the parties that did not honestly execute \(\varPi \), if any such subset \(\mathcal{J}\) exists.

Notice there is a gap between the public detectability and identifiable abort properties: the latter requires that, upon abort, the adversary does not learn anything about the honest parties’ inputs beyond what is deducible from the functionality’s output; on the other hand, running the detector requires access to all the inputs and random tapes of the parties in \(\mathcal {P}\). However, we will show, in Sect. 3.2, that public detectability is almost enough to define our preprocessing with identifiable abort, which we will later on extend to a publicly verifiable one in Sect. 5. At a high level, the main idea is that, since the goal of the preprocessing phase is to produce random correlated values that will be used in a very efficient online evaluation, during such a phase parties have not yet provided their private inputs, so, if the protocol aborts, it is enough for every party to run the detector on their own. The privacy of the overall MPC protocol is not affected then, due to the absence of the (actual) private inputs.
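To make Definition 3.1 concrete, here is a minimal sketch of a detector, under the assumption (matching the transformation described next) that all communication goes over the broadcast channel: it re-runs each party's deterministic next-message function on its input and random tape and flags any party whose actual broadcast messages deviate. The interfaces are illustrative.

```python
def detector(crs, inputs, tapes, transcript, nmf, num_rounds, n):
    # transcript[h][i]: the messages actually broadcast by P_i in round h+1.
    # nmf(view): the deterministic next-message function described above.
    cheaters = set()
    for i in range(n):
        received = []  # messages received by P_i so far
        for h in range(num_rounds):
            view = (i, crs, inputs[i], tapes[i], received)
            if transcript[h][i] != nmf(view):
                cheaters.add(i)  # P_i's broadcast deviates from an honest execution
                break
            # With broadcast-only communication, P_i receives every round-(h+1) message.
            received = received + [[transcript[h][j] for j in range(n)]]
    return cheaters  # non-empty whenever some party did not execute the protocol honestly
```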

We now show how to turn any protocol \(\varPi \) that UC-realises an ideal functionality \(\mathcal {F}\) in the CRS model with static security, into a protocol \(\varPi _V\) realising the same functionality with public detectability. Given the protocol \(\varPi \), and a binding and hiding commitment scheme \(\mathcal {C}=(\mathtt {Commit}, \mathtt {Reveal})\), we apply the following changes to \(\varPi \).

  • Before any step of \(\varPi \) is executed, each party securely broadcasts a commitment to their input and random tape using the commitment scheme.

  • Any broadcast communication in \(\varPi \) is instead carried out via the authenticated broadcast functionality \(\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\).

  • Each pairwise communication between a sender \(P_S\) and a receiver \(P_R\), such that \(\{P_S, P_R\} \subseteq \mathcal {P}\), is implemented by first securely broadcasting a commitment \(c({{M}^S})\) to the message \({M}^S\) that has to be sent, followed by a private opening of it towards the receiving party. If \(P_R\) does not receive the correct opening from \(P_S\), then the receiver securely broadcasts a message asking for the opening of \(c({{M}^S})\). The sender has to reply with that information, also using secure broadcast. If the broadcasted reply is a correct opening, the parties in \(\mathcal {P}\) resume the computation, otherwise they abort.
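A minimal sketch of the committed point-to-point channel from the last item, with a hash-based commitment and hypothetical network callbacks standing in for the scheme \(\mathcal {C}\) and the broadcast functionality:

```python
import hashlib, os

def commit(message: bytes):
    r = os.urandom(32)
    return hashlib.sha256(r + message).digest(), (r, message)  # (commitment, opening)

def verify(com, opening):
    r, message = opening
    return hashlib.sha256(r + message).digest() == com

def send_committed(sender, receiver, message, broadcast, send_private):
    # P_S broadcasts c(M^S) so that everyone can later check the opening, then opens privately.
    com, opening = commit(message)
    broadcast(sender, ("commit", receiver, com))
    send_private(sender, receiver, opening)
    return com

def complain(receiver, sender, com, broadcast):
    # If the private opening is wrong, P_R publicly asks for the opening of c(M^S);
    # P_S must answer over broadcast, or the parties abort and the detector blames P_S.
    broadcast(receiver, ("open-request", sender, com))
```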

It is easy to prove that the protocol \(\varPi \), modified as above, is publicly detectable.

Lemma 3.2. Let \(\varPi \) be a protocol that realises an ideal functionality \(\mathcal {F}\) with static security in the CRS model with broadcast and pairwise communication, and \(\mathcal {C}\) a standalone-secure commitment scheme. The protocol \(\varPi _V\) described above is publicly detectable and realises the functionality \(\mathcal {F}\) in the \(\{\)CRS, \(\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\}\)-hybrid model.

Publicly Detectable Preprocessing. In our preprocessing phase, we use the functionality \(\mathcal {F}_{\scriptstyle \mathsf {TinyMPC}}\) (Fig. 4), which is a standard functionality for secret sharing-based MPC for binary circuits augmented with the command MultBitString that allows multiplying a bit by a fixed string known to one party. This functionality is exactly what is needed to securely preprocess a BMR garbled circuit  [26] with abort, where the fixed strings play the roles of the global \(R^i\) strings in the garbled circuit; this can be efficiently implemented using a TinyOT-like protocol, for example [21, 26, 40], in the \(\mathcal {F}_{\scriptstyle \mathsf {OT}}\)-hybrid model.

We can apply the transformation above to obtain a publicly detectable version of \(\varPi _{\scriptstyle \mathsf {TinyMPC}}\), provided we have a protocol \(\varPi _{\scriptstyle \mathsf {TinyMPC}}\) that implements the functionality in the CRS model. Such a protocol can be efficiently obtained by implementing the OT functionality with OT extension  [1, 31], with base OTs realized in the CRS model [42]. Thus, we obtain the following corollary.

Corollary 3.3. Let \(\mathcal {C}\) be a commitment scheme and \(\varPi _{\scriptstyle \mathsf {TinyMPC}}\) a protocol that UC-realises the functionality \(\mathcal {F}_{\scriptstyle \mathsf {TinyMPC}}\) in the CRS model. The protocol obtained by applying the above transformation to \(\varPi _{\scriptstyle \mathsf {TinyMPC}}\) is publicly detectable and securely realises the functionality \(\mathcal {F}_{\scriptstyle \mathsf {TinyMPC}}\) in the \(\{\mathcal {F}_{\scriptstyle \mathsf {Broadcast}}\), CRS\(\}\)-hybrid model.

Fig. 4. Functionality \(\mathcal {F}_{\scriptstyle \mathsf {TinyMPC}}\) for Bit-MPC.

3.2 Implementing the Preprocessing with Identifiable Abort

We now combine the detectable protocol with homomorphic commitments, \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\), to obtain a preprocessing protocol with identifiable abort. Our preprocessing functionality \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) is described in the full version of this work. It essentially performs the same computations as \(\mathcal {F}_{\scriptstyle \mathsf {TinyMPC}}\), except the output shares of the bit-string multiplication are now committed with homomorphic commitments, modelled in the functionality by allowing them to be added together and opened. Another key difference is that the outputs are only delivered at the very end of the protocol. After the initial outputs are sent to the parties, the only further allowed command is to open values from the homomorphic commitment scheme. This is because in the security proof for our preprocessing protocol, the simulator can always equivocate \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\), whereas it cannot equivocate the simulation of the detectable protocol while it is still possible for an abort to occur (which would require opening the honest parties' random tapes to the adversary).

The protocol \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\), implementing \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) and described in Fig. 5, uses the publicly detectable version of \(\varPi _{\scriptstyle \mathsf {TinyMPC}}\) (from Corollary 3.3) for all the \(\mathbb {F}_2\)-arithmetic, i.e. to perform secure additions and multiplications on bits, as well as to obtain secret-shares of the product of secret bits with the strings \({R}^i\). The protocol uses two copies of the homomorphic commitment functionality (which we name \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) and \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\)). The first copy is used to create commitments to values in \(\mathbb {F}_2^\kappa \), such as the fixed strings \({R}^i\) as well as the additive shares of all the bit-string products of secret bits with \({R}^i\). We furthermore employ a consistency check to verify that the committed bit-string shares are correct, which is shown in Fig. 6. The functionality \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\) is used to additionally commit to the bits which are used in the detectable protocol, and we use a second consistency check to verify that these two sets of bits, stored in the detectable protocol and in \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\), are the same; this can be found in Fig. 7. We also use this functionality to open bit values during the output phase. We remark that the necessity of using \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\) for both of these is an artefact of the proof and we leave it as an interesting open problem to remove \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\) (together with the consistency check \(\varPi _{\scriptstyle \mathsf {CheckBit}}\)) while retaining a provably-secure protocol.

Fig. 5. The preprocessing protocol \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\).

Fig. 6. Subprotocol \(\varPi _{\scriptstyle \mathsf {BitStringMult}}^m\) to check consistency of committed bit-string multiplications.

In the case of abort, we will reveal all random tapes and committed messages of the detectable protocol and test which party has sent inconsistent messages and when. Interestingly, we can do that without requiring adaptive primitives (in contrast to previous works): the simulation of the detectable protocol in the security proof is only ever checked using the public detectability if no output of \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) has been revealed yet. Therefore we never have to equivocate the random tape during its simulation: revealing the tape used by the simulator is enough. This is exactly where the previous work [29] required adaptivity of the underlying primitives, which in our case we can thus avoid. To prove consistency of the committed shares of bit-string products, we use the following lemma. Its statement and proof are very similar to [26, Lemma 3.1] and we only provide it here for completeness. The proof of the lemma is given in the full version.

Lemma 3.4. If the protocol \(\varPi _{\scriptstyle \mathsf {BitStringMult}}\) does not abort, then the committed values \({R}^j, W^i_{\tau ,j}\) produced by \(\varPi _{\scriptstyle \mathsf {BitStringMult}}\) satisfy

$$\begin{aligned} \sum _{i=1}^n W^i_{\tau ,j} = x_\tau \cdot {R}^j, \end{aligned}$$

where \({R}^j\) was computed in the Init phase and \(\langle x_1 \rangle ,\dots ,\langle x_m \rangle \) were input to \(\varPi _{\scriptstyle \mathsf {BitStringMult}}\), except with probability \(\max (\varepsilon , 2^{-s})\).
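The guaranteed relation can be written as a small check (a sketch only; \(W^i_{\tau ,j}\) are the committed shares held by the parties):

```python
def check_bit_string_product(W_shares, x_tau, R_j):
    # W_shares[i] = W^i_{tau,j}; their XOR must equal x_tau * R^j (Lemma 3.4).
    acc = bytes(len(R_j))
    for share in W_shares:
        acc = bytes(a ^ b for a, b in zip(acc, share))
    return acc == (R_j if x_tau == 1 else bytes(len(R_j)))
```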

Fig. 7. Subprotocol \(\varPi _{\scriptstyle \mathsf {CheckBit}}\) to check consistency of committed bits.

We use the lemma below to verify that the bits committed in both \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}}\) and the detectable protocol are consistent. We omit the proof, which is essentially a simplified version of the proof of Lemma 3.4.

Lemma 3.5. If the protocol \(\varPi _{\scriptstyle \mathsf {CheckBit}}\) does not abort, then the inputs \(\langle x_1 \rangle ,\dots ,\langle x_m \rangle \), \([x_1^i]_c^{\mathtt {Bit}}, \dots , [x_m^i]_c^{\mathtt {Bit}}\) satisfy

$$\begin{aligned} \sum _{i=1}^n x^i_{j} = x_j, \end{aligned}$$

except with probability \(\max (\varepsilon , 2^{-s})\).

These allow us to prove security of our construction.

Theorem 3.6. Assuming a secure broadcast channel, protocol \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) implements \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) in the \(\{\)CRS,\(\mathcal {F}_{\scriptstyle \mathsf {HCom}},\) \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}^{\mathtt {Bit}},\) \(\mathcal {F}_{\scriptstyle \mathsf {Rand}}\}\)-hybrid model with security against a static, active adversary corrupting at most \(n-1\) parties.

The proof follows a simulation-based argument, along the lines described in Sect. 1.3. It is given in the full version.

3.3 Efficiency Analysis

The main overhead of our preprocessing protocol, compared with the non-identifiable protocol which we build upon  [26], is due to the use of secure broadcast in the compilation with Lemma 3.2, and the use of homomorphic commitments for every wire in the garbled circuit. We now discuss these costs in more detail, and describe an optimization to reduce the use of broadcast.

We first observe that we can run an optimistic version of \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\), without the additional broadcasts from Lemma 3.2. If this optimistic offline protocol ends successfully (i.e. no abort occurs during Phase I), then the online phase will not require the identifiability property of \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\). In case of an error in the optimistic \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) instance, we then re-run the preprocessing with new randomness for each party and using the identifiable version with broadcast. Observe that this may not identify a cheater in case the second protocol succeeds, but this does not contradict the definition of identifiable abort, as the adversary now just forces the honest parties to use more resources. At the same time, privacy is preserved, since at this point the real inputs have not been used in any computation and each party uses fresh randomness in the re-run.

Regarding the homomorphic commitments, note that the dominating cost arises during the bit-string multiplications, where each of the n parties must commit to \(n\cdot m\) strings of length \(\kappa \), where m is approximately the number of AND gates. Computationally, this overhead is very cheap. For instance, using the implementation from  [44] (which is similar to the non-interactive scheme we use) as a rough estimate for computation, for \(n=5\) parties and the AES circuit with 6800 AND gates, we estimate the cost of generating commitments in a WAN setting to be around 0.3 s. Compared with  [48], this is only around a 10% overhead. Therefore, the main cost introduced in this stage is likely having to securely broadcast these commitments. Since for large circuits this involves the broadcast of very long strings, broadcast “extension” techniques such as  [22] may be useful to reduce the complexity.

Fig. 8. The MPC protocol - \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) (Initialization).

Fig. 9. The MPC protocol - \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) (Computation).

Fig. 10. The MPC protocol - \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) (Abort).

4 Online Phase

The online phase of our computation is modeled after [26]. For completeness, in the full version we outline the main idea of constant-round MPC, so here we focus on the details necessary in order to achieve identifiable abort. HSS uses a BMR approach to achieve constant-round MPC, where the parties can locally identify whether their output is correct or not. In order to identify a cheater, we will perform a reconstruction of a faulty gate in case of an error in that gate. This reconstruction, as it turns out, does not reveal any information beyond the regular protocol transcript. To be able to perform such a reconstruction we will use the new properties of our enhanced preprocessing functionality \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\), namely that it also provides a verification mechanism for keys. This reconstruction consists of using \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) to open commitments to the input keys of the gate which all parties used for evaluation, as well as to the supposed output key.

There are multiple cases in which aborts can happen during the online phase, which entail different ways in which the cheater must be identified. These cases are as follows:

  1. A set of parties can stop communicating. As our protocol (apart from \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\)) only uses broadcast communication, such behaviour identifies cheaters directly.

  2. The adversary can manipulate a gate in such a way that an honest party sees an error. This is the most straightforward error from garbling and can directly be detected by reconstructing the gate. In the protocol, we detect this in Steps 5, 6 of Abort. There, we open the keys which correspond to the output of the gate in \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) that we would expect to see, based on Init. We can then compare the opening with the actual outputs which were derived based on \(\mathtt {share}_\tau ^j\) (the garbling information) from each party, which directly shows whenever \(\mathcal {A}\) garbled a circuit wrongly towards any party.

  3. The adversary may send complaints even though a gate was garbled correctly. In this case we reconstruct the gate as usual and, as no error occurs, we identify the complaining parties as cheaters. This corresponds to the end of Step 6 of Abort.

  4. The adversary may send complaints about incorrect gates that lie outside the “active path”. Here we will show that the honest parties agree on the active path, i.e. on the rows of each garbled table which they decrypt during evaluation or, equivalently, the public values \(\varLambda _w\). Thus, honest parties can identify complaints outside the active path as cheating, which is done in Step 2 of Abort.

  5. The adversary can garble a gate incorrectly for a dishonest party and have that party not report it. In this case, the protocol will only abort at the next AND gate \(g'\) into which the output of the wrongly garbled gate g is fed. We will then see a difference between the keys that the honest parties obtained as input to g and the keys opened by the dishonest party that did not send a \(\mathtt {Conflict}\) message. We do this by opening the committed input keys \([K_{u,\varLambda _u}^j]_j,[K_{v,\varLambda _v}^j]_j\) in Step 5 via \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\) and comparing them with the keys that every party obtained as inputs to the “faulty” gate g during circuit evaluation. This is done in Step 6 of Abort.
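A minimal sketch of the comparison used in cases 2 and 5 above follows; the data structures are purely illustrative, and in the protocol the opened keys would come from \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\).

```python
# Compare the keys opened from the commitments in F_Prep^ID with the keys
# each party claims to have used/produced for the disputed gate; any party
# whose claim disagrees with its own commitment is placed in the cheater set J.

def identify_cheaters(opened_keys, reported_keys):
    """
    opened_keys   : {party_id: key opened from the commitment in F_Prep^ID}
    reported_keys : {party_id: key as used/claimed during evaluation}
    """
    return {i for i in opened_keys if opened_keys[i] != reported_keys.get(i)}

# Toy usage: party 3's reported key does not match its commitment.
opened   = {1: 0xAAAA, 2: 0xBBBB, 3: 0xCCCC}
reported = {1: 0xAAAA, 2: 0xBBBB, 3: 0xDEAD}
assert identify_cheaters(opened, reported) == {3}
```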

Fig. 11. Extending an identifiable-abort functionality \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) to its publicly verifiable version \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {}}\).

Throughout the protocol, which is outlined in Fig. 8, Fig. 9 and Fig. 10, we let \(K_{w,0}^i\) be the 0-key of \(P_i\) for the wire w, \(R^i\) be the global difference used by this party, and we define \(K_{w,1}^i=K_{w,0}^i\oplus R^i\). Overlined keys \(\overline{K}\) are those obtained during the evaluation of the circuit. We present the proof of the following theorem in the full version.

Theorem 4.1

Let \(\mathsf {H}\) be a circular 2-correlation robust hash function. Assuming a secure broadcast channel, protocol \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) implements the functionality \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) in the \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Prep}}\)-hybrid model with security against a static, active adversary corrupting at most \(n-1\) parties.

Optimistic online phase. We can define an optimistic version of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\), whose best- and worst-case complexity we now analyse. In this variant the use of broadcast is replaced by point-to-point channels, with the exception of the broadcast of the \(\varLambda _{\mathtt {in}}\) values. We require this broadcast in order to extract the inputs of the malicious parties and the unique active path for the evaluation of the garbled circuit. As MPC with identifiable abort implies secure broadcast  [16], it seems natural that we cannot avoid broadcast in our protocol, even optimistically. Furthermore, broadcasting \(\varLambda _{\mathtt {in}}\) is also necessary in the original [26] construction.

The main advantage of avoiding the use of broadcast in an instance of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) lies in the reconstruction of the garbled circuit (Step 2 of Garble). In our optimistic version we can put the circuit together by having each party send its \(\mathtt {share}_{\tau }^i\) to a single designated party, which then sends the reconstructed circuit to everyone else, as in [26]. Note that replacing the broadcast here and in Step 2 of Input only allows an adversary to introduce additive errors into the circuit, which can easily be derived by computing the difference between an evaluation of the garbled circuit with the correct key and one with an incorrect key.

When the honest parties encounter an error that would trigger the execution of Abort, they instead repeat the execution of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) from Step 2 of Input, this time using broadcast as indicated in the protocol boxes. Any party refusing to engage in the re-run can trivially be identified as corrupted by all honest parties. Hence, the best-case complexity is \(O(\kappa n |C |) + \mathsf {bc}(I)\), where I is the number of input wires, and the worst-case complexity is \(O(\kappa n |C |) + \mathsf {bc}(I \kappa + n^2 \kappa |C |)\).

Finally, we observe that this additional optimistic evaluation of a garbled circuit with additive errors gives the adversary an extra attempt at guessing the value \(R^i\) of each honest party. As \( R^i \in \{0,1\}^\kappa \), such a guess succeeds only with negligible probability, so this has no impact on the protocol's security.

5 Achieving Public Verifiability

We conclude by presenting how the previously introduced protocols and functionalities can be transformed into publicly verifiable counterparts. To model public verifiability formally, we assume (similarly to [3]) the existence of a third party \({\mathcal {V}}\) that is not part of \(\mathcal {P}\). In fact, \({\mathcal {V}}\) does not need to be online, or even to exist, while the other protocol steps are running.

The notion of public verifiability which we achieve in this work differs from [3, 17, 46] in multiple respects: while it still allows \({\mathcal {V}}\) to establish the correctness of the output value, our model requires that at least one honest party is present during the MPC protocol. This is in contrast to the aforementioned works, which guarantee correctness even if all parties are corrupted. Similarly to [17] (albeit differently from the other works), \({\mathcal {V}}\) is able to identify cheaters during the verification phase (if any existed). Note that our model is applicable, e.g., in the client-server setting to prevent corrupted servers from announcing false outcomes, or when MPC is integrated with distributed ledgers.

More formally, let \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) be the identifiable abort version of any functionality \(\mathcal {F}\), as presented in Sect. 2.1. We denote as \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {}}\) an extension of \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) which further supports public verifiability, as described in Fig. 11. In a nutshell, publicly verifiable functionalities incorporate an additional party, the verifier \({\mathcal {V}}\), who can query the functionality at any point. When doing so, \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {}}\) replies with all public outputs of \(\mathcal {F}\) produced so far and, if there was an abort, the same set of corrupted parties \(\mathcal{J}\) that \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {}}\) would have produced towards the honest parties.

Whereas turning functionalities into publicly verifiable ones is fairly straightforward, one has to be more careful with their corresponding protocols. At the core, we achieve our goal by using \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}\), a publicly verifiable version of secure broadcast. Such a variant can be implemented using an authenticated bulletin board \(\mathcal {F}_{\scriptstyle \mathsf {BulletinBoard}}\) (detailed in the full version): broadcasting is equivalent to writing, and \({\mathcal {V}}\) can verify any ‘sent’ message by reading the board.
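A minimal sketch of this idea follows, using a toy in-memory board; the actual \(\mathcal {F}_{\scriptstyle \mathsf {BulletinBoard}}\) functionality is specified in the full version.

```python
# Publicly verifiable broadcast from an append-only authenticated board:
# 'broadcast' = write an entry under the sender's identity; the external
# verifier V checks a claimed broadcast simply by reading the board.

class BulletinBoard:
    def __init__(self):
        self.entries = []                  # append-only, publicly readable

    def write(self, sender, message):      # models an authenticated write
        self.entries.append((sender, message))

    def read(self):                        # anyone, including V, may read
        return list(self.entries)

board = BulletinBoard()
board.write("P1", "commitment to share_tau^1")
# V verifies that P1 indeed 'broadcast' the claimed message:
assert ("P1", "commitment to share_tau^1") in board.read()
```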

Considered on its own, adapting \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) to implement \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Prep}}\) could be achieved mostly by switching \(\mathcal {F}^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {Broadcast}}\) to its publicly verifiable version. As all messages would go through \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}\) (either in the clear or inside commitments), an external verifier could easily find \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Prep}}\)’s outputs there if no abort happened. Moreover, if there was an abort, \({\mathcal {V}}\) would find in the same place all the information necessary to run the deterministic detector algorithm \(\mathcal {D}\) (cf. Definition 3.1) and conclude the same set of corrupted parties \(\mathcal{J}\) as the honest parties do.

On the other hand, implementing \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {MPC}}\) from \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Prep}}\) requires substantially more work: the way the Abort procedure of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) identifies cheating relies on knowing the active path in the garbled circuit corresponding to the \(\varLambda _{\mathtt {in}}\) values broadcast in the Input phase. Unfortunately, determining the active path in \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) requires the private randomness of the parties in \(\mathcal {P}\) (namely, Step 3 in Fig. 9), which at that point cannot be revealed to \({\mathcal {V}}\) for running \(\mathcal {D}\), because the parties have already provided their private inputs to \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\). Having all parties announce their active path does not solve the problem, as it is unclear to \({\mathcal {V}}\) which of these paths can be trusted. We now explain how to modify both \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) and \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) in order to achieve public verifiability.

5.1 Public Active Paths

To make the active path recognisable for \({\mathcal {V}}\), we use the well-known technique of fixing the last bit of a key to be its external wire value \(\varLambda _w\), i.e., we require that each \(K_{w,0}^i\) has last bit 0 and each \(K_{w,1}^i\) has last bit 1. The latter can be achieved by requiring that the last bit of \({R}^i\) is 1. We formalize this in the ideal functionality \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Prep}}\) by requiring that the vectors generated by Sample and Init have their last bit set to 0 and 1, respectively. For the sake of formality, we denote the resulting modified functionality as \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {\widetilde{Prep}}}\).
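A minimal sketch of this sampling constraint follows, assuming keys are \(\kappa\)-bit strings modelled as integers, with \(\kappa = 128\) purely for illustration:

```python
# Keys whose last bit encodes the external value: the 0-key ends in 0 and
# the global difference R^i ends in 1, so K_{w,1}^i = K_{w,0}^i xor R^i
# automatically ends in 1.
import secrets

kappa = 128

def sample_with_last_bit(bit):
    """Sample a kappa-bit value whose least-significant bit equals `bit`."""
    return (secrets.randbits(kappa) & ~1) | bit

R_i  = sample_with_last_bit(1)   # global difference, last bit 1
K_w0 = sample_with_last_bit(0)   # 0-key on wire w, last bit 0
K_w1 = K_w0 ^ R_i                # 1-key, last bit 0 xor 1 = 1

assert K_w0 & 1 == 0 and K_w1 & 1 == 1
```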

In order to UC-implement \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {\widetilde{Prep}}}\), we first modify \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) so that all messages are sent via \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}\). Notice that \(\mathcal {F}_{\scriptstyle \mathsf {HCom}}\) and \(\mathcal {F}_{\scriptstyle \mathsf {Rand}}\) can be easily implemented with public verifiability by having their respective protocols send all communication through \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}\). Furthermore, we require that each party sets the last bit of their vectors according to the previous description. As dishonest parties might not follow this, we add a (cheap) random linear check using \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {HCom}}\) to ensure correct behaviour, which is described below:

  1. Each \(P_i \in \mathcal {P}\) randomly samples \(s\) auxiliary masking vectors \(A^i_1,\dots ,A^i_{s} \in \mathbb {F}_2^\kappa \), subject to the constraint that the last bit of each of them is zero. \(P_i\) commits to them using \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {HCom}}\) as \([A^i_1]_i,\dots ,[A^i_{s}]_i\).

  2. Denote by \([X^i_1]_i,\dots ,[X^i_r]_i\), \(r = |G | + n_{\mathtt {in}}\), \(P_i\)’s (value-zero) keys corresponding to the circuit-input wires and the output wires of AND gates, obtained using Sample. Let \([{R}^i]_i\) be the value obtained during Init. Parties in \(\mathcal {P}\) call \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Rand}}\) to generate, for every \(i\in [n]\), \(s\cdot (r+1)\) random bits \(b^i_1,\dots ,b^i_{s(r+1)}\).

  3. For \(j\in [s]\), \(P_i\) uses \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {HCom}}\) to compute and open the random linear combination \([Z_j^i]_i = [A^i_j]_i+ b^i_{j + s\cdot r}\cdot [{R}^i]_i + \sum _{k=1}^{r}b^i_{(j-1)\cdot r+k} \cdot [X^i_k]_i\) towards all parties, who check that the last bit of each opened \(Z_j^i\) is \(b^i_{s\cdot r+j}\). If that is not the case for any i or j, they enter the Abort procedure.
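The following toy sketch runs this check end-to-end for a single party, with all values modelled as \(\kappa\)-bit integers, ‘+’ realised as XOR over \(\mathbb {F}_2^\kappa\), and all parameters chosen purely for illustration; the commitments and the calls to \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {HCom}}\) and \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Rand}}\) are omitted.

```python
# One honest party's view of the random linear check: since every A_j and
# X_k ends in 0 and R ends in 1, the last bit of Z_j must equal the random
# coefficient that multiplies R.  A party committing to wrong last bits
# passes each of the s checks with probability at most 1/2.
import secrets

kappa, r, s = 128, 4, 40                      # illustrative parameters

def rand_vec(last_bit):
    return (secrets.randbits(kappa) & ~1) | last_bit

X = [rand_vec(0) for _ in range(r)]           # committed zero-keys, last bit 0
R = rand_vec(1)                               # global difference, last bit 1
A = [rand_vec(0) for _ in range(s)]           # masking vectors, last bit 0
b = [secrets.randbits(1) for _ in range(s * (r + 1))]   # public random bits

for j in range(s):                            # 0-indexed version of j in [s]
    Z = A[j] ^ (b[s * r + j] * R)             # coefficient of R
    for k in range(r):
        Z ^= b[j * r + k] * X[k]              # coefficients of the X_k
    assert Z & 1 == b[s * r + j]              # check on the opened value
```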

As the last bit of every \(A_j^i, X_k^i\) should be zero and the last bit of \(R^i\) should be one, and all those values are committed to before the b values are sampled, any corrupted \(P_i\) providing wrong values can only pass the j-th check with probability at most 1/2, for every \(j \in [s]\). On the other hand, for an honest \(P_i\), the \(A_j^i\) masks prevent any leakage on \(X_k^i, R^i\) beyond their last bits. From this and Theorem 3.6, it then follows:

Lemma 5.1

The transformation to \(\varPi _{\scriptstyle \mathsf {Prep}}^{\mathsf {ID}}\) outlined above implements \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {\widetilde{Prep}}}\) in the \(\{\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}, \mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {HCom}}, \mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Rand}}\}\)-hybrid model with security against a static, active adversary corrupting at most \(n-1\) parties.

5.2 Public Verifiability in the Online Phase

We now explain how to modify the online phase of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) in order to implement \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {MPC}}\). As mentioned before, we require that all communication is done via \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}\).

While the Init, Garble, Output and Abort phases of our publicly verifiable protocol are identical to those of \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\), we need to introduce some differences in Input and Circuit Evaluation. We outline these differences below:

  • Input: Add a third step, where the parties in \(\mathcal {P}\) additionally check whether the last bit of \(\overline{K}_{\mathtt {in},\varLambda _{\mathtt {in}}}^i\) equals \(\varLambda _{\mathtt {in}}\) for every \(i \in [n]\). If this is not the case, they broadcast \((\mathtt {Abort}, \mathcal{J})\), where \(\mathcal{J}\) contains every \(P_i\) who sent a wrong key.

  • Circuit Evaluation: In Step 3 of this subprotocol, parties in \(\mathcal {P}\) additionally check if the last bit of every \(\overline{K}_{w,\varLambda _w}^i\) is identical for every \(i \in [n]\). If that is not the case, they broadcast \((\mathtt {Conflict},g,\varLambda _u,\varLambda _v)\) and enter Abort.

Trivially, the Abort procedure still works for the parties in \(\mathcal {P}\) in the same way it did in \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) without the above modifications. For a verifier \({\mathcal {V}}\) inspecting the transcript of this procedure afterwards, things also work out: since wire keys now indicate the corresponding \(\varLambda \) value of their wire, \({\mathcal {V}}\) can be sure of having obtained the correct \(\varLambda _u, \varLambda _v\) values up to the smallest gate g in the circuit among all \(\mathtt {Conflict}\) messages. Hence, in Step 2, he can perform the same check the honest parties in \(\mathcal {P}\) did, to conclude whether the \((\mathtt {Conflict},g,\varLambda _u',\varLambda _v')\) message was correct or whether a malicious party was trying to cheat. \({\mathcal {V}}\) cannot perform Step 4, but he can obtain the resulting value of Step 5, because it is a \(\mathtt {PublicOutput}\) of \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {\widetilde{Prep}}}\) (cf. Fig. 11). The same holds for the other values revealed in that step, and thus \({\mathcal {V}}\) can perform the whole \(\mathbf{Abort} \) procedure non-interactively. From this and Theorem 4.1, we can conclude:

Lemma 5.2

The transformation to \(\varPi ^{\scriptstyle \mathsf {ID}}_{\scriptstyle \mathsf {MPC}}\) outlined above implements \(\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {MPC}}\) in the \(\{\mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {Broadcast}}, \mathcal {F}^{\scriptstyle \mathsf {PV}}_{\scriptstyle \mathsf {\widetilde{Prep}}}\}\)-hybrid model with security against a static, active adversary corrupting at most \(n-1\) parties.