1 Introduction

Secure Multi-party Computation (MPC) allows mutually suspicious parties to evaluate a function on their joint private inputs without revealing these inputs to each other. One fruitful line of investigation is concerned with the round complexity of these protocols. More specifically, we consider a model where at each round, each party is allowed to broadcast a message to everyone else. We allow the adversary to be malicious and corrupt any fraction of the parties.

If a Common Random String (CRS) is allowed, then two rounds are necessary and sufficient for secure multi-party protocols under plausible cryptographic assumptions [GGHR14, MW16].

Without relying on trusted setup, it was still known that constant-round protocols are possible [BMR90], but the exact round complexity remained open. A lower bound in the simultaneous message model was established in the recent work of Garg et al. [GMPP16], who proved that four rounds are necessary. They also showed how to perform multi-party coin flipping in four rounds, which can then be used to generate a CRS and execute the aforementioned protocols in the CRS model. That technique implied a five-round protocol based on 3-round 3-robust non-malleable commitments (Footnote 1) and indistinguishability obfuscation, and a six-round protocol based on 3-round 3-robust non-malleable commitments and LWE.

For the important special case of only two parties, it is known that two-message protocols with sequential rounds, i.e., each party talks in turn, are necessary and sufficient in the CRS model [JS07, HK07], and that five-message protocols are necessary and sufficient without setup [KO04, ORS15].

Our Results. Our work addresses the following fundamental question:

Can we obtain round-optimal multi-party computation protocols without setup?

We answer this question in the affirmative, obtaining a round-optimal multi-party computation protocol in the plain model for general functionalities in the presence of a malicious adversary. Informally, we prove the following:

Theorem 1

(Informal). Assuming the existence of adaptive commitments, as well as the sub-exponential hardness of Learning-with-Errors, there exists a four-round protocol that securely realizes any multi-party functionality against a malicious adversary in the plain model without setup.

In particular, we use a two-round adaptively secure commitment scheme (e.g., as constructed by Pandey et al. [PPV08], using Naor’s protocol [Nao91] with adaptive PRGs, a non-standard assumption). Alternatively, the recent two-round non-malleable commitment scheme of [LPS17] from “Time-Lock Puzzles” can be made adaptively secure.

To establish our result, we depart from the coin-flipping approach of [GMPP16] and instead rely on a new generalized notion of multi-key fully homomorphic encryption [LTV12], which we show how to construct based on the hardness of LWE. In a nutshell, whereas prior LWE-based constructions required trusted setup (essentially a CRS) [CM15, MW16, BP16, PS16], we show that the setup procedure can be distributed. Each party only needs to broadcast a random string (Footnote 2), and generate its public key based on the collection of strings from all other parties. We show that the resulting scheme is secure even when some of the broadcast strings are adversarial (and even when the adversary is rushing). Similarly to Mukherjee and Wichs [MW16], we can transform our multi-key FHE into an MPC protocol in the semi-malicious model (where the adversary is only allowed to corrupt parties in a way that is consistent with some random tape). Our protocol requires 3 rounds without setup (vs. 2 rounds in the CRS model), and only requires polynomial hardness of LWE with slightly super-polynomial noise ratio. Informally, we establish the following theorem:

Theorem 2

(Informal). Assuming hardness of Learning-with-Errors with super-polynomial noise ratio, there exists a three-round protocol that securely realizes any multi-party functionality against a rushing semi-malicious adversary in the plain model without setup.

Concurrent work. In a concurrent and independent work, Ananth, Choudhuri, and Jain construct a maliciously secure 4-round MPC protocol based on one-way permutations and sub-exponential hardness of DDH [ACJ17]. Their approach is very different from ours: they construct and use a “robust semi-honest” MPC protocol from DDH, while our main building block is an LWE-based 3-round protocol secure against semi-malicious adversaries.

Related work on LWE-based MPC protocols. Asharov et al. [AJL+12] first showed a three-round multi-party computation protocol in the CRS model, and a two-round multi-party computation protocol in the reusable public-key-infrastructure setup model, based on LWE. The work of Mukherjee and Wichs [MW16] and its extensions [BP16, PS16], based on multi-key FHE [LTV12, CM15], shows how to obtain optimal two-round constructions based on LWE and NIZKs in the CRS model. See Chap. 3 of [Pol16] for related work on MPC protocols based on different assumptions, both in the CRS and plain models.

2 Overview of Our Protocol

Our starting point is the multi-key FHE approach to MPC, first introduced by [LTV12]. As explained above, it was shown in [MW16] that the Clear-McGoldrick scheme [CM15] implies a two-round protocol in the semi-malicious setting in the CRS model under LWE. Furthermore, using NIZK it is possible to also achieve fully malicious security. Constructing a multi-key FHE without setup and with the necessary properties for compiling it into an MPC protocol is still an open problem, but we show that the trusted setup can be replaced by a distributed process which only adds a single round to the MPC protocol. While our final solution is quite simple, it relies on a number of insights as to the construction and use of multi-key FHE.

  1.

    While the schemes in [GSW13, CM15, MW16] rely on primal Regev-style LWE-based encryption as a basis for their FHE scheme, it is also possible to instantiate them based on the dual-Regev scheme [GPV08] (with asymptotically similar performance). However, the same form of CRS is required for this instantiation as well, so at first glance it does not seem to offer any advantages.

  2.

    The multi-key FHE schemes of [CM15, MW16] are presented as requiring trusted setup, but a deeper look reveals that this trusted setup is only needed to ensure a single property, related to the functionality: In LWE-based encryption (Regev or dual-Regev) the public key contains a matrix A and a vector \(t \cdot A\) (possibly with some additional noise, depending on the flavour of the scheme), where t is the secret key. In order to allow multi-key homomorphism between parties that each have their own \(A_i, t_i\), it is only required that the values \(b_{i,j} = t_i A_j\), for all i, j, are known to all participating parties (up to small noise). The use of a CRS in previous works comes from setting all \(A_i\) to be the same matrix A taken from the CRS, so that the public key \(b_i \approx t_i A_i = t_i A\) is the only required information.

  3.

    Lastly, we notice that dual-Regev with the appropriate parameters is leakage resilient [DGK+10]. This means that so long as a party honestly generates its \(t_i\) and \(A_i\), it can expose arbitrary (length-bounded) information on \(t_i\) without compromising the security of its own ciphertexts. Combining this with the above, we can afford to expose all \(b_{i,j}=t_i A_j\) without creating a security concern (for appropriate setting of the parameters).

Putting the above observations together, we present a multi-key FHE scheme with a distributed setup, in which each party generates a piece of the common setup, namely the matrix \(A_i\). After this step, each party can generate a public key \({\mathsf {pk}}_i\) containing all the vectors \(b_{i,j}\); the respective secret key is the vector \(t_i\). Given all the \({\mathsf {pk}}_i\)’s and the matrices \(A_i\), it is possible to perform multi-key homomorphic evaluation in a manner very similar to [CM15, MW16]. The 3-round semi-malicious protocol is therefore as follows.

  • Round 1: Distributed Setup. Every player \(P_i\) broadcasts a random matrix \(A_i\) of the appropriate dimension.

  • Round 2: Encryption. Each party generates a public/secret key-pair for the multi-key FHE, encrypts its input under these keys, and broadcasts the public key and ciphertext.

  • Round 3: Partial Decryption. Each party separately evaluates the function on the encrypted inputs, then uses its secret key to compute a decryption share of the resulting evaluated ciphertext, and broadcasts that share to everyone.

  • Epilogue: Output. Once all the decryption shares are received, each party can combine them to get the function value, which is the output of the protocol.
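To make the message pattern concrete, here is a toy Python trace of these rounds. It uses a deliberately insecure additive-masking stand-in (supporting only addition mod q) instead of the LWE-based multi-key FHE, and all parameters and party inputs are illustrative:

```python
import secrets

# Toy stand-in, NOT the LWE scheme: "encryption" is additive masking mod q,
# the only supported function is addition. This only illustrates the flow.
q = 2**16
inputs = {1: 10, 2: 20, 3: 12}          # hypothetical party inputs

# Round 1: each party broadcasts a random setup string (unused by the toy).
setup = {i: secrets.token_bytes(16) for i in inputs}

# Round 2: each party "encrypts" its input as x_i + r_i mod q and broadcasts
# the ciphertext; the mask r_i plays the role of the secret key.
sk = {i: secrets.randbelow(q) for i in inputs}
ct = {i: (inputs[i] + sk[i]) % q for i in inputs}

# Everyone evaluates the (addition-only) function on the ciphertexts.
c_hat = sum(ct.values()) % q

# Round 3: each party broadcasts its decryption share of the evaluated
# ciphertext; here the share is simply its mask r_i.
shares = {i: sk[i] for i in inputs}

# Epilogue: combining the shares recovers the function value.
output = (c_hat - sum(shares.values())) % q
assert output == sum(inputs.values()) % q
```

In the real protocol, Round 1 determines the matrices that define everyone's keys, and the decryption shares carry noise, so combining them is more than a subtraction.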

This skeleton protocol can be shown to be secure in the semi-malicious adversary model, but it is clearly insecure in the presence of a malicious adversary. Although the protocol can tolerate adversarial choice of the first-round matrices \(A_i\), the adversary can still violate privacy by sending invalid ciphertexts in Round 2 and observing the partial decryption that the honest players send in the next round. It can also violate correctness by sending the wrong decryption shares in the last round.

These two attacks can be countered by having the parties prove that they behaved correctly, namely that the public keys and ciphertexts in Round 2 were generated honestly, and that the decryption shares in Round 3 were obtained by faithful decryption. To be effective, the proof of correct encryption must complete before the parties send their decryption shares (and of course the proof of correct decryption must complete before the output can be produced). Hence, if we have a k-round input-delayed proof of correct encryption (and a \((k+1)\)-round input-delayed proof of correct decryption) then we get a \((k+1)\)-round protocol overall. Much of the technical difficulty in achieving malicious security in the current work relates to using 3-round proofs of correct encryption, resulting in a 4-round protocol.

2.1 The Maliciously-Secure Protocol

Our maliciously-secure protocol builds on the above 3-round semi-malicious protocol, and in addition it uses a two-round adaptive commitment protocol \(\mathsf {aCom}=(\mathsf {acom}_1,\mathsf {acom}_2)\), a three-round delayed-input proof of correct encryption \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}=(\mathsf {p}_1, \mathsf {p}_2, \mathsf {p}_3)\), and a four-round delayed-input proof of correct decryption \(\varPi _{\scriptscriptstyle \mathrm {FS}}=(\mathsf {fs}_1, \mathsf {fs}_2, \mathsf {fs}_3,\mathsf {fs}_4)\). (The names \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) and \(\varPi _{\scriptscriptstyle \mathrm {FS}}\) are meant to hint at the implementation of these proofs; see the discussion in the next subsection.)

  • Round 1: Distributed Setup, commitment & proof. Every party i broadcasts its setup matrix \(A_i\). It also broadcasts the first message \(\mathsf {acom}_1\) of the adaptive commitment for its randomness and input, the first message \(\mathsf {p}_1\) of the proof of correct encryption, and the first message \(\mathsf {fs}_1\) of the proof of correct decryption (both proofs with respect to the committed values).

  • Round 2: Continued commitment & proofs. Each party broadcasts \(\mathsf {acom}_2,\mathsf {p}_2,\mathsf {fs}_2\).

  • Round 3: Encryption & proofs. The parties collect all the first round matrices \(A_i\) and run the key-generation and encryption procedures of the multi-key FHE. Then, each party broadcasts its public key and encrypted input. In the same round, each party also broadcasts messages \(\mathsf {p}_3,\mathsf {fs}_3\).

  • Round 4: Verification & decryption. Each party runs the verifier algorithm for the \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) proof of correct encryption, verifying all the instances. If all of them passed then it evaluates the function on the encrypted inputs, then uses its secret key to compute a decryption share of the resulting evaluated ciphertext, and broadcasts that share to everyone. It also broadcasts the message \(\mathsf {fs}_4\) of the proof of correct decryption.

  • Epilogue: Verification & output. Once all the decryption shares and proofs are received, each party runs the verifier algorithm for the \(\varPi _{\scriptscriptstyle \mathrm {FS}}\) proof of correct decryption, again verifying all the instances. If all of them passed then it combines all the decryption shares to get the function value, which is the output of the protocol.

If any of the messages is missing or mal-formed, or if any of the verification algorithms fail, then the parties are instructed to immediately abort with no output.

As explained in the next section, subtle technicalities arise in the security proof that lead to an extended version of the above protocol description.

2.2 A Tale of Malleability and Extraction

To prove security of our protocol, we must exhibit a simulator that can somehow extract the inputs of the adversary, so that it can send these inputs to the trusted party in order to get the function output. To that end we make the three-round proof of correct encryption a Proof of Knowledge (POK), and let the simulator use the knowledge extractor to get these adversarial inputs.

At the same time, we must ensure that this proof of knowledge is non-malleable, so that the extracted adversarial inputs do not change between the real protocol (in which the honest parties prove knowledge of their true input) and the simulated protocol (in which the simulator generates proofs for the honest players without knowing their true inputs). A few subtle technicalities are discussed below.

Two-round commitment with straight-line extraction. The main technical tool that we use in our proofs is two-round adaptive commitments, that the parties use to commit to their inputs and randomness. Commitments in this scheme are marked by tags, and the scheme has the remarkable property of adaptive security: Namely, commitments with one tag are secure even in the presence of an oracle that breaks commitments for all other tags.

Some hybrid games in our proof of security are therefore staged in a mental-experiment game where such a breaking oracle exists, providing us with straight-line (rewinding-free) extraction of the values that the adversary commits to, while keeping the honest-party commitments secret. Looking ahead, straight-line extraction is used in some of our hybrids to fake the (WIPOK) zero knowledge proofs.

However, we also need our other primitives (MFHE, POK, etc.) to remain secure in the presence of a breaking oracle, and we use complexity leveraging for that purpose: We assume that these primitives are sub-exponentially secure, and set their parameters much larger than those of the commitment scheme. This way, all these primitives remain secure even against sub-exponential-time adversaries that can break the commitment by brute force. When arguing the indistinguishability of two hybrids, we reduce to the sub-exponential security of these primitives and use brute force to implement the breaking oracle in those hybrids (Footnote 3).

Delayed-input proofs. In the three-round proofs for correct encryption and in the four-round proofs for correct decryption, the statement to be proved is not defined until just before the last round of the protocols. We therefore need to use delayed-input proofs that can handle this case and squeeze rounds in order to achieve a four round protocol.

Fake proofs via Feige-Shamir. The simulator needs to fake the four-round proof of correct decryption on behalf of the honest parties, as it derives their decryption shares from the function output that it gets from the trusted party. For this purpose we use a Feige-Shamir-type four-round proof [FS90], which has a trapdoor that we extract and can use to fake these proofs.

WI-POK with a trapdoor. Some steps in our proof have hybrid games in which the commitment contains the honest parties’ true inputs while the encryption contains zeros. In such hybrids, the statement that the values committed to are consistent with the encryption is not true, so we need to fake that three-round proof as well.

For that purpose we use another Feige-Shamir-type trapdoor as follows: Each party chooses a random string R, encloses \(\hat{R}=OWF(R)\) with its first-flow message, encloses R inside the commitment \(\mathsf {aCom}\) (together with its input and randomness) and adds the statement \(\hat{R}=OWF(R)\) to the list of things that it proves in the 3-round POK protocol.

In addition, the parties execute a second commitment protocol \(\mathsf {bCom}\) (which is normally used to commit to zero in the real protocol), and we modify the POK statement to say that EITHER the original statement is true, OR the value committed in that second commitment \(\mathsf {bCom}\) is a pre-image of the \(\hat{R}\) value sent by the verifier in the first round. Letting the POK protocol be witness-indistinguishable (WI-POK), we then extract the R value from the adversary (in some hybrids), let the challenger commit to that value in the second commitment \(\mathsf {bCom}\), and use it as a trapdoor to fake the proof in the POK protocol.

We note that the second commitment \(\mathsf {bCom}\) need not be non-malleable or adaptive, but it does need to remain secure in the presence of a breaking oracle for the first commitment. Since we already assume a 2-round adaptive commitment \(\mathsf {aCom}\), then we use the same scheme also for this second commitment, and appeal to its adaptive security to argue that the second commitment remains secure in the presence of a breaking oracle for the first commitment.

Public-coin proofs. In the multi-party setting, the adversary may choose to fail the proofs with some honest parties and succeed with others. We thus need to specify what honest parties do in case one of the proofs fail. The easiest solution is to use public-coin proofs with perfect completeness, and have the parties broadcast their proofs and verify them all (not only the ones where they chose the challenge). This way we ensure that if one honest party fails the proof, then all of them do.

2.3 Roadmap

In Part I we provide our 3-round semi-malicious protocol based on multi-key FHE. In Part II we “compile” our 3-round semi-malicious protocol to a 4-round fully maliciously-secure protocol. We conclude in Sect. 7 with open problems.

3 Part I: 3-Round Semi-malicious Protocols

3.1 LWE-Based Multi-key FHE with Distributed Setup

Notations. Throughout the text we denote the security parameter by \(\kappa \). A function \(\mu :\mathbb {N}\rightarrow \mathbb {R}^{+}\) is negligible if for every positive polynomial \(p(\cdot )\) and all sufficiently large \(\kappa \)’s it holds that \(\mu (\kappa )<\frac{1}{p(\kappa )}\). We often use [n] to denote the set \(\{1,...,n\}\).

Let \(d \leftarrow \mathcal{D}\) denote the process of sampling d from the distribution \(\mathcal{D}\) or, if \(\mathcal{D}\) is a set, a uniform choice from it. For two distributions \(\mathcal{D}_1\) and \(\mathcal{D}_2\), we use \(\mathcal{D}_1 \approx _{\mathrm {s}}\mathcal{D}_2\) to denote that they are statistically close (up to negligible distance), \(\mathcal{D}_1\approx _{\mathrm {c}}\mathcal{D}_2\) denotes computational indistinguishability, and \( \mathcal{D}_1 \equiv \mathcal{D}_2\) denotes identical distributions.

3.2 Definitions

An encryption scheme is multi-key homomorphic if it can evaluate circuits on ciphertexts that are encrypted under different keys. Decrypting an evaluated ciphertext requires the secret keys of all the ciphertexts that were included in the computation. In more detail, a multi-key homomorphic encryption scheme (with trusted setup) consists of five procedures, \({\mathsf {MFHE}}= ({\mathsf {MFHE.Setup}}, \mathsf {MFHE.Keygen},{\mathsf {MFHE.Encrypt}},\mathsf {MFHE.Decrypt},{\mathsf {MFHE.Eval}})\):

  • Setup \({\mathsf {params}}\leftarrow {\mathsf {MFHE.Setup}}(1^\kappa )\) : On input the security parameter \(\kappa \) the setup algorithm outputs the system parameters \({\mathsf {params}}\).

  • Key Generation \(({\mathsf {pk}}, {\mathsf {sk}}) \leftarrow \mathsf {MFHE.Keygen}({\mathsf {params}})\) : On input \({\mathsf {params}}\) the key generation algorithm outputs a public/secret key pair \(({\mathsf {pk}},{\mathsf {sk}})\).

  • Encryption \( c \leftarrow {\mathsf {MFHE.Encrypt}}({\mathsf {pk}},x)\) : On input \({\mathsf {pk}}\) and a plaintext message \(x\in \{0,1\}^*\) output a “fresh ciphertext” c. (We assume for convenience that the ciphertext includes also the respective public key.)

  • Evaluation \({{\hat{c}}}:= {\mathsf {MFHE.Eval}}({\mathsf {params}}; \mathcal{C}; (c_1,\ldots , c_\ell ))\) : On input a (description of a) Boolean circuit \(\mathcal{C}\) and a sequence of \(\ell \) fresh ciphertexts \((c_1,\ldots ,c_\ell )\), output an “evaluated ciphertext” \({{\hat{c}}}\). (Here we assume that the evaluated ciphertext includes also all the public keys from the \(c_i\)’s.)

  • Decryption \(x := \mathsf {MFHE.Decrypt}(({\mathsf {sk}}_1,\ldots , {\mathsf {sk}}_N), {{\hat{c}}})\) : On input an evaluated ciphertext \({{\hat{c}}}\) (with N public keys) and the corresponding N secret keys \(({\mathsf {sk}}_1,\ldots ,{\mathsf {sk}}_N)\), output the message \(x\in \{0,1\}^*\).

The scheme is correct if the following holds for every circuit \(\mathcal{C}\) on N inputs and every input sequence \(x_1,\ldots ,x_N\) for \(\mathcal{C}\): if we set \({\mathsf {params}}\leftarrow {\mathsf {MFHE.Setup}}(1^\kappa )\) and then generate N key pairs and N ciphertexts \(({\mathsf {pk}}_i,{\mathsf {sk}}_i) \leftarrow \mathsf {MFHE.Keygen}({\mathsf {params}})\) and \(c_i \leftarrow {\mathsf {MFHE.Encrypt}}({\mathsf {pk}}_i,x_i)\), then we get

$$ \mathsf {MFHE.Decrypt}\big (({\mathsf {sk}}_1,\ldots , {\mathsf {sk}}_N), {\mathsf {MFHE.Eval}}({\mathsf {params}}; \mathcal{C}; (c_1,\ldots ,c_N)) \big ) = \mathcal{C}(x_1,\ldots , x_N) $$

except with negligible probability (in \(\kappa \)) taken over the randomness of all these algorithms.Footnote 4
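The five procedures can be rendered as a minimal Python interface sketch; the type aliases below are placeholders of our choosing, not part of any concrete scheme:

```python
from typing import Protocol, Sequence, Tuple

# Placeholder types; a real scheme would use structured keys and ciphertexts.
Params = bytes
PublicKey = bytes
SecretKey = bytes
Ciphertext = bytes
Circuit = object

class MFHE(Protocol):
    """Syntax of a multi-key FHE scheme with trusted setup, as in the text."""
    def setup(self, kappa: int) -> Params: ...
    def keygen(self, params: Params) -> Tuple[PublicKey, SecretKey]: ...
    def encrypt(self, pk: PublicKey, x: bytes) -> Ciphertext: ...
    def eval(self, params: Params, circuit: Circuit,
             cs: Sequence[Ciphertext]) -> Ciphertext: ...
    def decrypt(self, sks: Sequence[SecretKey], c_hat: Ciphertext) -> bytes: ...
```

The distributed-setup variant defined below replaces `setup` with a per-party procedure and threads the party index through `keygen`.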

Local decryption and simulated shares. A special property that we need of the multi-key FHE schemes from [CM15, MW16] is that the decryption procedure consists of a “local” partial-decryption procedure \(ev_i\leftarrow {\mathsf {MFHE.PartDec}}({{\hat{c}}},{\mathsf {sk}}_i)\) that only takes one of the secret keys and outputs a partial decryption share, and a public combination procedure that takes these partial shares and outputs the plaintext, \(x \leftarrow {\mathsf {MFHE.FinDec}}(ev_1,\ldots , ev_N, {{\hat{c}}})\).

Another property of these schemes that we need is the ability to simulate the decryption shares. Specifically, there exists a \({PPT}\) simulator \(\mathcal{S}^T\), that gets as input:

 – the evaluated ciphertext \({{\hat{c}}}\),

 – the output plaintext \(x:=\mathsf {MFHE.Decrypt}(({\mathsf {sk}}_1,\ldots , {\mathsf {sk}}_N), {{\hat{c}}})\),

 – a subset \(I \subset [N]\), and all secret keys except those in I, \(\{{\mathsf {sk}}_j\}_{j\in [N]\setminus I}\).

The simulator produces as output simulated partial decryption shares: \(\{\widetilde{ev_i}\}_{i\in I}\leftarrow \mathcal{S}^{T}(x,{{\hat{c}}},I,\{{\mathsf {sk}}_j\}_{j\in [N]\setminus I}).\) We want the simulated shares to be statistically close to the shares produced by the local partial-decryption procedure using the keys \(\{{\mathsf {sk}}_i\}_{i\in I}\), even conditioned on all the inputs of \(\mathcal{S}^T\).

We say that a scheme is simulatable if it has local decryption and a simulator as described here. As in [MW16], we only achieve simulatability of the basic scheme when all parties but one are corrupted (i.e., when the set I is a singleton).

Distributed Setup

The variant that we need for our protocol does not require the setup procedure to be run by a trusted entity, but rather it is run in a distributed manner by all parties in the protocol. In our definition we allow the setup to depend on the maximum number of users N. This restriction does not pose a problem for our application.

  • Distributed Setup \({\mathsf {params}}_i \leftarrow {\mathsf {MFHE.DistSetup}}(1^\kappa , 1^N,i)\) : On input the security parameter \(\kappa \) and number of users N, outputs the system parameters for the i-th player \({\mathsf {params}}_i\).

The remaining functions have the same functionality as above, where \({\mathsf {params}}=\{{\mathsf {params}}_i\}_{i\in [N]}\); the key generation takes i as an additional parameter in order to specify which entry of \({\mathsf {params}}\) it refers to.

Semantic Security and Simulatability

Semantic security for multi-key FHE is defined as the usual notion of semantic security. For the distributed setup variant, we require that semantic security for the i-th party holds even when all \(\{{\mathsf {params}}_j\}_{j \in [N]\setminus \{i\}}\) are generated adversarially and possibly depending on \({\mathsf {params}}_i\).

Namely, we consider a rushing adversary that chooses N and \(i\in [N]\), then it sees \({\mathsf {params}}_i\) and produces \({\mathsf {params}}_j\) for all \(j\in [N]\setminus \{i\}\). After this setup, the adversary is engaged in the usual semantic-security game, where it is given the public key, chooses two messages and is given the encryption of one of them, and it needs to guess which one was encrypted.

Simulatability of the decryption shares is defined as before, but now the evaluated ciphertext is produced by the honest party interacting with the same rushing adversary (and statistical closeness holds even conditioned on everything that the adversary sees).

3.3 A “Dual” LWE-Based Multi-key FHE with Distributed Setup

For our protocol we use an adaptation of the “dual” of the multi-key FHE scheme from [CM15, MW16]. Just like the “primal” version, our scheme uses the GSW FHE scheme [GSW13], and its security is based on the hardness of LWE.

Recall that the LWE problem is parametrized by integers n, m, q (with \(m>n\log q\)) and a distribution \(\chi \) over \(\mathbb {Z}\) that with high probability produces integers much smaller than q. The LWE assumption says that given a random matrix \(A\in \mathbb {Z}_q^{n\times m}\), the distribution \(sA+e\) with random \(s\in \mathbb {Z}_q^n\) and \(e\leftarrow \chi ^m\) is indistinguishable from uniform in \(\mathbb {Z}_q^m\).
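As a toy illustration of an LWE sample (parameters of our choosing, far too small for security, with bounded uniform noise as a crude stand-in for \(\chi \)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 64, 97         # toy parameters with m > n*log2(q); insecure sizes

A = rng.integers(0, q, size=(n, m))    # public random matrix in Z_q^{n x m}
s = rng.integers(0, q, size=n)         # secret vector s in Z_q^n
e = rng.integers(-2, 3, size=m)        # small noise, stand-in for chi

b = (s @ A + e) % q                    # the LWE vector sA + e, reduced mod q
```

Under the LWE assumption, `b` (together with `A`) is computationally indistinguishable from a uniform vector in \(\mathbb {Z}_q^m\).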

For the “dual” GSW scheme below, we use parameters \(n<m<w<q\) with \(m>n\log q\) and \(w>m\log q\), and two error distributions \(\chi ,\chi '\) with \(\chi '\) producing much larger errors than \(\chi \) (but still much smaller than q). Specifically, consider the distribution

$$\begin{aligned} \chi '' = \{ a\leftarrow \{0,1\}^{m}, b\leftarrow \chi ^{m}, c\leftarrow \chi ', ~\text{ output }~c-\langle a,b\rangle \}. \end{aligned}$$
(1)

We need the condition that the statistical distance between \(\chi '\) and \(\chi ''\) is negligible (in the security parameter n). This condition holds, for example, if \(\chi ,\chi '\) are discrete Gaussian distributions around zero with parameters \(p,p'\), respectively, such that \(p'/p\) is super-polynomial (in n).
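A quick numerical sanity check of this condition, using rounded continuous Gaussians as a stand-in for the discrete Gaussians \(\chi ,\chi '\) and illustrative parameters \(p=1\), \(p'=10^4\): the inner product \(\langle a,b\rangle \) adds variance roughly \(mp^2/2\), negligible next to \(p'^2\), so the empirical spread of \(\chi ''\) essentially matches that of \(\chi '\).

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, p_big, trials = 64, 1.0, 1e4, 50_000

# Rounded continuous Gaussians stand in for the discrete Gaussians chi, chi'.
a = rng.integers(0, 2, size=(trials, m))           # a <- {0,1}^m
b = np.rint(rng.normal(0, p, size=(trials, m)))    # b <- chi^m
c = np.rint(rng.normal(0, p_big, size=trials))     # c <- chi'
chi2 = c - np.einsum('ij,ij->i', a, b)             # samples of chi''

# chi'' is centered with standard deviation very close to p' = 1e4.
```

This is only an empirical illustration of the closeness of the two distributions, not a proof of negligible statistical distance.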

  • Distributed Setup \({\mathsf {params}}_i \leftarrow {\mathsf {MFHE.DistSetup}}(1^\kappa ,1^N, i)\) : Set the parameters \(q=\mathrm {poly}(N)n^{\omega (1)}\) (as needed for FHE correctness), \(m>(Nn+1)\log q+2\kappa \), and \(w=m\log q\) (Footnote 5). Sample and output a random matrix \(A_i\in \mathbb {Z}_q^{(m-1)\times n}\).

  • Key Generation \(({\mathsf {pk}}_i, {\mathsf {sk}}_i) \leftarrow \mathsf {MFHE.Keygen}({\mathsf {params}},i)\) : Recall that \({\mathsf {params}}= \{{\mathsf {params}}_i\}_{i \in [N]} = \{A_i\}_{i\in [N]}\). The public key of party i is a sequence of vectors \({\mathsf {pk}}_i= \{b_{i,j}\}_{j \in [N]}\) to be formally defined below. The corresponding secret key is a low-norm vector \(t_i\in \mathbb {Z}_q^m\).

    We will define \(b_{i,j}\) and \(t_i\) such that for \(B_{i,j}=\left( \begin{array}{c}A_j\\ \!\!-b_{i,j}\!\!\end{array}\right) \) it holds that \(t_iB_{i,j}=0\pmod q\) for all j.

    In more detail, sample a random binary vector \(s_i\leftarrow \{0,1\}^{m-1}\) and set \(b_{i,j}=s_iA_j \bmod {q}\). Denoting \(t_i=(s_i,1)\), we indeed have \(t_iB_{i,j}=s_iA_j-b_{i,j}=0\pmod q\).

  • Encryption \( C \leftarrow {\mathsf {MFHE.Encrypt}}({\mathsf {pk}}_i,\mu )\) : To encrypt a bit \(\mu \) under the public key \({\mathsf {pk}}_i\), choose a random matrix \(R\in \mathbb {Z}_q^{n\times w}\) and a low-norm error matrix \(E\in \mathbb {Z}_q^{m\times w}\), and set

    $$\begin{aligned} C := B_{i,i} R + E + \mu G\ \bmod {q}, \end{aligned}$$
    (2)

    where G is a fixed m-by-w “gadget matrix” (whose structure is not important for us here, cf. [MP12]). Furthermore, as in [CM15, MW16], encrypt all bits of R in a similar manner. For our protocol, we use more error for the last row of the error matrix E than for the top \(m-1\) rows. Namely, we choose \(\hat{E}\leftarrow \chi ^{(m-1)\times w}\) and \(e'\leftarrow {\chi '}^w\) and set \(E=\left( \begin{array}{c}\hat{E}\\ e'\end{array}\right) \).

  • Decryption \(\mu := \mathsf {MFHE.Decrypt}(({\mathsf {sk}}_1,\ldots , {\mathsf {sk}}_N), C)\) : The invariant satisfied by ciphertexts in this scheme, similarly to GSW, is that an encryption of a bit \(\mu \) relative to secret key t is a matrix C that satisfies

    $$\begin{aligned} t C = \mu \cdot t G + e \pmod {q} \end{aligned}$$
    (3)

    for a low-norm error vector e, where G is the same “gadget matrix”. The vector t is the concatenation of all \({\mathsf {sk}}_i=t_i\) for all parties i participating in the evaluation.

    This invariant holds for freshly encrypted ciphertexts since \(t_i B_{i,i}=0\pmod {q}\), and so \(t_i(B_{i,i} R +E +\mu G) = \mu \cdot t_iG + t_iE \pmod {q},\) where \(e=t_iE\) has low norm (as both \(t_i\) and E have low norm).

    To decrypt, the secret-key holders compute \(u=t\cdot C \bmod q\), outputting 1 if the result is closer to tG and 0 if it is closer to 0.

  • Evaluation \(C:= {\mathsf {MFHE.Eval}}({\mathsf {params}}; \mathcal{C}; (c_1,\ldots ,c_\ell ))\) : Since ciphertexts satisfy the same invariant as in the original GSW scheme, the homomorphic operations of GSW work just as well for this “dual” variant. Similarly, the ciphertext-extension technique from [CM15, MW16] works for this variant exactly as it does for the “primal” scheme (see below). Hence we get a multi-key FHE scheme.
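Under toy parameters of our choosing (far too small to be secure, with \(\{-1,0,1\}\) noise standing in for \(\chi \)), the two key equations of the scheme, namely \(t_iB_{i,i}=0\pmod q\) and the ciphertext invariant of Eq. (3) for a fresh ciphertext, can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, k = 4, 256, 8        # q = 2^k; toy, insecure sizes
m, w = 40, 40 * 8          # m > n*log2(q), w = m*log2(q)

# Distributed-setup piece of a single party: a random matrix A_i.
A = rng.integers(0, q, size=(m - 1, n))

# Key generation: binary s_i, secret key t_i = (s_i | 1), b_{i,i} = s_i A_i.
s = rng.integers(0, 2, size=m - 1)
b = (s @ A) % q
t = np.concatenate([s, [1]])
B = np.vstack([A, -b % q])            # B_{i,i} = (A_i ; -b_{i,i})
assert np.all((t @ B) % q == 0)       # t_i B_{i,i} = 0 (mod q)

# Gadget matrix G = I_m tensor (1, 2, ..., 2^{k-1}).
g = 2 ** np.arange(k)
G = np.kron(np.eye(m, dtype=int), g)

# Encryption of a bit mu: C = B R + E + mu*G mod q, with small noise E.
mu = 1
R = rng.integers(0, q, size=(n, w))
E = rng.integers(-1, 2, size=(m, w))
C = (B @ R + E + mu * G) % q

# Invariant (3): t C = mu * t G + e (mod q) for a low-norm e = t E.
e = ((t @ C - mu * (t @ G)) % q + q // 2) % q - q // 2   # centered mod q
```

Each entry of the recovered error `e` is at most m in absolute value, far below q/4, which is what makes decryption possible.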

Security

Security with distributed setup follows from LWE so long as \((m-1) > (N n +1) \log q + 2\kappa \). The basis for security is the following lemma, which is essentially the same argument from [DGK+10] showing that dual Regev is leakage resilient for bounded leakage.

Lemma 1

Let \(A_i\in \mathbb {Z}_q^{(m-1)\times n}\) be uniform, and let \(A_j\) for all \(j\ne i\) be chosen by a rushing adversary after seeing \(A_i\). Let \(s_i\leftarrow \{0,1\}^{m-1}\) and \(b_{i,j}=s_i A_j\). Let \(r\in \mathbb {Z}_q^{n}\) be uniform, \(e\leftarrow \chi ^{m-1}\), \(e'\leftarrow \chi '\). Then, under the LWE assumption, the vector \(c = A_i r + e\) and number \(c' =b_{i,i} r+e'\) are (jointly) pseudorandom, even given the \(b_{i,j}\)’s for all \(j \in [N]\) and the view of the adversary that generated the \(A_j\)’s.

Proof:

Consider the distribution of \(c,c'\) as in the lemma statement. We notice that \(c' = b_{i,i} r+e' = s_i A_i r +e' = s_i c - s_i e + e'\). The proof proceeds by a sequence of hybrids. Our first hybrid changes the distribution of \(c'\) to \(c' = s_i c + e'\). Noting that \(c'-s_ic\) is drawn from \(\chi ''\) before the change and from \(\chi '\) after the change (cf. Eq. (1)), we get that the statistical distance between the hybrids is negligible.

In the next hybrid, we use LWE to replace c with a uniform vector. Since c could have been sampled before \(s_i\) or any of the \(A_j\) with \(j \ne i\), LWE implies indistinguishability with the previous hybrid.

Finally, we apply the leftover hash lemma, noting that all the \(b_{i,j}\)’s only leak at most \(N n \log q\) bits of information on \(s_i\) and therefore the average min-entropy of \(s_i\) is at least \((m-1)-N n \log q > \log q+2\kappa \). Using the leftover hash lemma with c as seed and \(s_i\) as source, we have that \((c, s_i c)\) are jointly statistically indistinguishable from uniform. This implies that \((c,c')\) are jointly statistically indistinguishable from uniform, even given all \(A_j, b_{i,j}\) for all \(j \in [N]\). The lemma follows.    \(\square \)
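For reference, the proof above can be condensed into the following hybrid chain, where \(\approx _s\) denotes statistical and \(\approx _c\) computational indistinguishability, and \(u,u'\) are uniform:

$$ (c,\; c'=s_ic - s_ie + e') \;\approx _s\; (c,\; s_ic + e') \;\approx _c\; (u,\; s_iu + e') \;\approx _s\; (u,\; u'), $$

with the first step by noise flooding (cf. Eq. (1)), the second by LWE, and the third by the leftover hash lemma.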

Applying this lemma repeatedly for every column via a hybrid argument shows that the ciphertext components \(c = A_i R + \hat{E}\) and \(c' = b_{i,i} R+e'\) are also jointly pseudorandom, even given the view of the adversary, and semantic security of the scheme follows.

Multi-key Homomorphism and Simulatability

The other components of the multi-key FHE scheme from [CM15, MW16] work for our variant as well, simply because the encryption and decryption formulas are identical (except with slightly different parameter setting), namely Eqs. (2) and (3). Below we briefly sketch these components for the sake of self-containment.

The ciphertext-expansion procedure. The “gadget matrix” G used for these schemes has the property that there exists a low-norm vector u such that \(Gu=(0,0,\ldots ,0,1)\). Therefore, for every secret key \(t=(s|1)\) we have \(tGu=1 \pmod q\). It follows that if C is an encryption of \(\mu \) wrt secret key \(t=(s|1)\), then the vector \(v=Cu\) satisfies

$$ \langle t,v\rangle =tCu=(\mu tG+e)u=\mu tGu + \langle e,u\rangle = \mu + \epsilon \pmod {q} $$

where \(\epsilon \) is a small integer. In other words, given an encryption of \(\mu \) wrt t we can construct a vector v such that \(\langle t,v\rangle \approx \mu \pmod {q}\). Let \(A_1, A_2\) be public parameters for two users with secret keys \(t_1=(s_1|1)\), \(t_2=(s_2|1)\), and recall that we denote \(b_{i,j}=s_i A_j\) and \(B_{i,i}=\left( \begin{array}{c}A_i \\ -s_iA_i \end{array}\right) = \left( \begin{array}{c}A_i \\ -b_{i,i} \end{array}\right) \).
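The gadget vectors are easy to exhibit concretely. The following sketch (toy dimensions, illustration only) builds the standard powers-of-two gadget G together with low-norm 0/1 vectors u and v satisfying \(Gu=(0,\ldots ,0,1)\) and \(Gv=(0,\ldots ,0,\lceil q/2\rceil )\) — the latter being the vector \(\mathbf {v}\) used below for decryption.

```python
logq = 16
q = 1 << logq
m = 3                                 # toy dimension
w = m * logq

# gadget matrix G (m x w): row r holds 1,2,4,... in columns [r*logq, (r+1)*logq)
G = [[(1 << (c - r * logq)) if r * logq <= c < (r + 1) * logq else 0
      for c in range(w)] for r in range(m)]

def G_times(x):                       # compute G*x mod q
    return [sum(G[r][c] * x[c] for c in range(w)) % q for r in range(m)]

# u: a single 1 selecting the unit entry of the last row block
u = [0] * w
u[(m - 1) * logq] = 1
assert G_times(u) == [0] * (m - 1) + [1]

# v: binary decomposition of ceil(q/2), placed in the last block
half = (q + 1) // 2
v = [0] * w
for j in range(logq):
    v[(m - 1) * logq + j] = (half >> j) & 1
assert G_times(v) == [0] * (m - 1) + [half]
```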

Let \(C=B_{1,1}R+E+\mu G\) be a fresh encryption of \(\mu \) w.r.t \(B_{1,1}\), and suppose that we also have an encryption under \(t_1\) of the matrix R. We note that given any vector \(\delta \), we can apply homomorphic operations to the encryption of R to get an encryption of the entries of the vector \(\rho =\rho (\delta )=\delta R\). Then, using the technique above, we can compute for every entry \(\rho _i\) a vector \(x_i\) such that \(\langle t_1,x_i\rangle \approx \rho _i \pmod {q}\). Concatenating all these vectors we get a matrix \(X=X(\delta )\) such that \(t_1 X \approx \rho =\delta R\pmod {q}\).

We consider the matrix \(C'=\left( \begin{array}{cc} C &{} X\\ 0 &{} C\end{array}\right) \), where \(X=X(\delta )\) for a \(\delta \) to be determined later. We claim that for an appropriate \(\delta \) this is an encryption of the same plaintext \(\mu \) under the concatenated secret key \(t'=(t_1|t_2)\). To see this, notice that

$$t_2 C = (s_2|1) \left( \left( \begin{array}{c}A_1\\ -s_1A_1 \end{array}\right) R+E+\mu G\right) \approx (b_{2,1}-b_{1,1})R + \mu t_2 G \pmod {q}, $$

and therefore setting \(\delta = b_{1,1}-b_{2,1}\), a value that can be computed from \({\mathsf {pk}}_1, {\mathsf {pk}}_2\), we get

$$\begin{aligned} t' C'= & {} (t_1 C ~|~ t_1 X + t_2 C) \approx (\mu t_1 G ~|~ (b_{1,1}-b_{2,1})R + (b_{2,1}-b_{1,1})R + \mu t_2 G)\\= & {} \mu (t_1 G ~|~ t_2 G) ~=~ \mu (t_1|t_2) \left( \begin{array}{cc}G &{} \\ &{} G\end{array}\right) , \end{aligned}$$

as needed. As in the schemes from [CM15, MW16], this technique can be generalized to extend the ciphertext C into an encryption of the same plaintext \(\mu \) under the concatenation of any number of keys.
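The algebra of this expansion can be checked numerically. In the toy sketch below, the matrix X is fabricated directly to satisfy \(t_1X=\delta R \pmod q\) rather than derived homomorphically from an encryption of R (which is all the argument above uses), so only the concatenated-key identity is being verified; all parameters are illustrative.

```python
import random
random.seed(1)

q, n, m, logq = 2**16, 4, 10, 16
w = m * logq

def vecmat(v, M):
    return [sum(v[r] * M[r][c] for r in range(len(M))) % q for c in range(len(M[0]))]

# one shared public matrix A1, two secret keys t1 = (s1|1), t2 = (s2|1)
A1 = [[random.randrange(q) for _ in range(n)] for _ in range(m - 1)]
s1 = [random.randrange(2) for _ in range(m - 1)]
s2 = [random.randrange(2) for _ in range(m - 1)]
t1, t2 = s1 + [1], s2 + [1]
b11, b21 = vecmat(s1, A1), vecmat(s2, A1)
B11 = A1 + [[(-x) % q for x in b11]]

G = [[(1 << (c - r * logq)) if r * logq <= c < (r + 1) * logq else 0
      for c in range(w)] for r in range(m)]

# fresh encryption of mu under B11
mu = 1
R = [[random.randrange(2) for _ in range(w)] for _ in range(n)]
E = [[random.randrange(-1, 2) for _ in range(w)] for _ in range(m)]
C = [[(sum(B11[r][k] * R[k][c] for k in range(n)) + E[r][c] + mu * G[r][c]) % q
      for c in range(w)] for r in range(m)]

# delta = b11 - b21 is computable from public values; rho = delta * R
delta = [(x - y) % q for x, y in zip(b11, b21)]
rho = vecmat(delta, R)

# fabricate X with t1*X = rho exactly (stand-in for the homomorphic derivation)
X = [[random.randrange(q) for _ in range(w)] for _ in range(m - 1)]
sX = vecmat(s1, X)
X.append([(rho[c] - sX[c]) % q for c in range(w)])

# t'C' = (t1*C | t1*X + t2*C) should be close to mu*(t1*G | t2*G)
left = vecmat(t1, C)
right = [(x + y) % q for x, y in zip(vecmat(t1, X), vecmat(t2, C))]
tgt_l, tgt_r = vecmat(t1, G), vecmat(t2, G)

def noise(vec, tgt):                   # largest centered residue vs. mu*target
    return max(min(d, q - d) for d in ((a - mu * b) % q for a, b in zip(vec, tgt)))

assert noise(left, tgt_l) <= m         # only t1*E remains
assert noise(right, tgt_r) <= m        # the delta*R terms cancel; only t2*E remains
```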

Partial decryption and Simulatability. This aspect works exactly as in [MW16, Theorem 5.6]. Let \(\mathbf {v}\) be a fixed low-norm vector satisfying \(G\mathbf {v}=(0,0,\ldots ,0,\lceil {q/2}\rceil ) \pmod {q}\) (such a vector exists). Let C be an encryption of a bit \(\mu \) relative to the concatenated secret key \(t=(t_1|t_2|\ldots |t_N)\) (whose last entry is 1). Then on one hand C satisfies Eq. (3) so we have

$$ t C \mathbf {v} = \mu \underbrace{t G \mathbf {v}}_{=\lceil {q/2}\rceil } + \underbrace{\langle e,\mathbf {v}\rangle }_{=\epsilon , |\epsilon |\ll q} \approx \mu \cdot \lceil {q/2}\rceil \pmod {q}. $$

On the other hand, breaking C into N bands of m rows each (i.e., \(C=(C_1^T|C_2^T|\ldots |C_N^T)^T\) with each \(C_i\in \mathbb {Z}_q^{m\times mN}\)), we have \(tC\mathbf {v} = \sum _{i=1}^N t_i C_i\mathbf {v}\). Hence in principle we could set the partial decryption procedure as \(ev_i={\mathsf {MFHE.PartDec}}(C,t_i):=t_i C_i\mathbf {v} \bmod {q}\), and the combination procedure will just add all these \(ev_i\)’s and output 0 if it is smaller than q/4 in magnitude and 1 otherwise.

To be able to simulate (when there are \(N-1\) corruptions), we need the partial decryption to add its own noise, large enough to “drown” the noise in \(t C \mathbf {v}\) (but small enough so decryption still works). Given the ciphertext C, \(N-1\) keys \(t_j\) for all \(j\in [N]\setminus \{i\}\), and the plaintext bit \(\mu \), the simulator will sample its own noise e and output the share \(ev_i=\mu \cdot \lceil {q/2}\rceil +e-\sum _{j} t_jC_j\mathbf {v} \bmod q\).
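A minimal sketch of the share-combination rule and of the simulated share (toy modulus; the adversarial shares \(t_jC_j\mathbf {v}\) are replaced by random stand-ins):

```python
import random

q = 2**16
N = 4
half = (q + 1) // 2                     # ceil(q/2)

def fin_dec(shares):
    # combination: add the shares mod q, output 0 if the sum is within q/4
    # of 0 (in magnitude), and 1 otherwise
    u = sum(shares) % q
    return 0 if min(u, q - u) < q // 4 else 1

random.seed(2)
for mu in (0, 1):
    others = [random.randrange(q) for _ in range(N - 1)]  # stand-ins for t_j*C_j*v
    e = random.randrange(-100, 101)                       # smudging noise, |e| << q
    # the simulated share forces the N shares to sum to mu*ceil(q/2) + e
    ev_sim = (mu * half + e - sum(others)) % q
    assert fin_dec(others + [ev_sim]) == mu
```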

4 A Semi-malicious Protocol Without Setup

The semi-malicious adversary model [AJL+12] is a useful mid-point between the semi-honest and fully-malicious models. Somewhat similarly to a semi-honest adversary, a semi-malicious adversary is restricted to running the prescribed protocol, but unlike in the semi-honest model it can choose the randomness that this protocol expects arbitrarily and adaptively (as opposed to having it chosen at random). Namely, at any point in the protocol, there must exist a choice of inputs and randomness that completely explains the messages sent by the adversary, but these inputs and randomness can be chosen arbitrarily by the adversary itself. A somewhat subtle point is that the adversary must always know the inputs and randomness that explain its actions (i.e., the model requires the adversary to explicitly output these before any messages that it sends).

We still assume a rushing adversary that can choose its messages after seeing the messages of the honest parties (subject to the constraint above). Similarly to the malicious model, an adversarial party can abort the computation at any point. Security is defined in the usual way, by requiring that a real-model execution is simulatable by an adversary/simulator in the ideal model, cf. Definition 7 in Sect. 5.4.

4.1 A Semi-malicious Protocol from Multi-key FHE With Distributed Setup

Our construction of a 3-round semi-malicious protocol without setup is nearly identical to the Mukherjee-Wichs construction with a common reference string [MW16, Sect. 6], except that we use multi-key FHE with distributed setup instead of their multi-key FHE with trusted setup. We briefly describe this construction here for the sake of self-containment.

  • To compute an N-party function \(\mathbf {\mathcal {F}}:(\{0,1\}^*)^N\rightarrow \{0,1\}^*\) on input vector \(\mathbf {w}\), the parties first run the setup round and broadcast their local parameters \({\mathsf {params}}_i\).

  • Setting \({\mathsf {params}}=({\mathsf {params}}_1,\ldots ,{\mathsf {params}}_N)\), each party runs the key generation to get \(({\mathsf {pk}}_i,{\mathsf {sk}}_i)\leftarrow \mathsf {MFHE.Keygen}({\mathsf {params}},i)\) and then the encryption algorithm \(c_i\leftarrow {\mathsf {MFHE.Encrypt}}(pk_i,w_i)\), and broadcasts \(({\mathsf {pk}}_i,c_i)\).

  • Once the parties have all the public keys and ciphertexts, they each evaluate homomorphically the function \(\mathcal {F}\) and all get the same evaluated ciphertext \({{\hat{c}}}\). Each party applies its partial decryption procedure to get \(ev_i\leftarrow {\mathsf {MFHE.PartDec}}({{\hat{c}}},{\mathsf {sk}}_i)\) and broadcasts its decryption share \(ev_i\) to everyone.

  • Finally, given all the shares \(ev_i\), every party runs the combination procedure and outputs \(\mu \leftarrow {\mathsf {MFHE.FinDec}}(ev_1,\ldots , ev_N, {{\hat{c}}})\).
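The steps above can be exercised end-to-end with insecure stand-ins. The sketch below replaces the multi-key FHE by a one-time-pad toy that only supports the sum function mod q; the function names mirror the MFHE interface but are otherwise hypothetical, nothing here is hiding, and the distributed-setup round is omitted since the toy scheme needs no parameters.

```python
import random

q = 2**16

# Hypothetical, INSECURE stand-ins for the MFHE interface; F is fixed to
# summation mod q and "encryption" is a one-time pad.
def keygen(rng):            return rng.randrange(q)      # sk_i: a mask k_i
def encrypt(k, w):          return (w + k) % q           # c_i, broadcast
def eval_sum(cts):          return sum(cts) % q          # everyone computes c_hat
def part_dec(c_hat, k):     return (-k) % q              # ev_i, broadcast
def fin_dec(c_hat, shares): return (c_hat + sum(shares)) % q

rng = random.Random(3)
inputs = [7, 13, 21, 2]                           # w_i, one per party
keys = [keygen(rng) for _ in inputs]              # round: broadcast (pk_i, c_i)
cts = [encrypt(k, w) for k, w in zip(keys, inputs)]
c_hat = eval_sum(cts)                             # local homomorphic evaluation
shares = [part_dec(c_hat, k) for k in keys]       # round: broadcast ev_i
assert fin_dec(c_hat, shares) == sum(inputs) % q  # every party outputs F(w)
```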

Security. Security is argued exactly as in [MW16, Theorem 6.1]: First we use the simulatability property to replace the partial decryption by the honest parties by a simulated partial decryption (cf. [MW16, Lemma 6.2]), and once the keys of the honest parties are no longer needed we can appeal to the semantic security of the FHE scheme (cf. [MW16, Lemma 6.3]).

Exactly as in the Mukherjee-Wichs construction, here too the underlying multi-key scheme only satisfies simulatability when all but one of the parties are corrupted, and as a result also the protocol above is only secure against adversaries that corrupt all but one of the parties. Mukherjee and Wichs described in [MW16, Sect. 6.2] a transformation from a protocol secure against exactly \(N-1\) corruptions to one which is secure against any number of corruptions. Their transformation is generic and can be applied also in our context, resulting in a semi-malicious-secure protocol.

5 Part II: 4-Round Malicious Protocols

5.1 Tools and Definitions

We use tools of commitment and proofs to “compile” our semi-malicious protocol to a protocol secure in the malicious model. Below we define these tools and review the properties that we rely on.

5.2 Commitment Schemes

Commitment schemes allow a committer C to commit itself to a value while keeping it (temporarily) secret from the receiver R. Later the commitment can be “opened”, allowing the receiver to see the committed value and check that it is consistent with the earlier commitment. In this work, we consider commitment schemes that are statistically binding. This means that even an unbounded cheating committer cannot create a commitment that can be opened in two different ways. We also use tag-based commitment, which means that in addition to the secret committed value there is also a public tag associated with the commitment. The notion of hiding that we use is adaptive-security (due to Pandey et al. [PPV08]): it roughly means that the committed value relative to some tag is hidden, even in a world where the receiver has access to an oracle that breaks the commitment relative to any other tag.

Definition 1

(Adaptively-secure Commitment [PPV08]). A tag-based commitment scheme \((C,R)\) is statistically binding and adaptively hiding if it satisfies the following properties:

  • Statistical binding: For any (computationally unbounded) cheating committer \(C^*\) and auxiliary input z, it holds that the probability, after the commitment stage, that there exist two executions of the opening stage in which the receiver outputs two different values (other than \(\perp \)), is negligible.

  • Adaptive hiding: For every cheating PPT receiver \(R^*\) and every tag value \(\mathsf {tag}\), it holds that the following ensembles are computationally indistinguishable.

    1. \(\{\mathsf {view}_{\mathsf {aCom}}^{R^*(\mathsf {tag}),\mathcal {B}_\mathsf {tag}}(m_1,z)\}_{\kappa \in N,m_1, m_2 \in \{0,1\}^\kappa ,z\in \{0,1\}^*}\)

    2. \(\{\mathsf {view}_{\mathsf {aCom}}^{R^*(\mathsf {tag}),\mathcal {B}_\mathsf {tag}}(m_2,z)\}_{\kappa \in N,m_1, m_2 \in \{0,1\}^\kappa ,z\in \{0,1\}^*}\)

    where \(\mathsf {view}_{\mathsf {aCom}}^{R^*(\mathsf {tag}),\mathcal {B}_\mathsf {tag}}(m,z)\) denotes the random variable describing the output of \(R^*(\mathsf {tag})\) after receiving a commitment to m relative to \(\mathsf {tag}\) using \(\mathsf {aCom}\), while interacting with a commitment-breaking oracle \(\mathcal {B}_\mathsf {tag}\).

    The oracle \(\mathcal {B}_\mathsf {tag}\) gets as input an alleged view \(v'\) and tag \(\mathsf {tag}'\). If \(\mathsf {tag}'\ne \mathsf {tag}\) and \(v'\) is a valid transcript of a commitment to some value \(m'\) relative to \(\mathsf {tag}'\), then \(\mathcal {B}_{\mathsf {tag}}\) returns that value \(m'\). (If there is no such value, or if \(\mathsf {tag}=\mathsf {tag}'\), then \(\mathcal {B}_{\mathsf {tag}}\) returns \(\perp \). If there is more than one possible value \(m'\) then \(\mathcal {B}_{\mathsf {tag}}\) returns an arbitrary one.)

To set up notation, for a two-message commitment we let \(\mathsf {aCom}_1=\mathsf {aCom}_{\mathsf {tag}}(r)\) and \(\mathsf {aCom}_2=\mathsf {aCom}_{\mathsf {tag}}(m;\mathsf {aCom}_1 ;r')\) denote the two messages of the protocol: the first depends only on the randomness of the receiver, and the second depends on the message to be committed, the first-round message from the receiver, and the randomness of the sender.

5.3 Proof Systems

Given a pair of interactive Turing machines, P and V, we denote by \(\langle P(w), V \rangle (x)\) the random variable representing the (local) output of V, on common input x, when interacting with machine P with private input w, when the random input to each machine is uniformly and independently chosen.

Definition 2

(Interactive Proof System). A pair of interactive machines \(\langle P, V \rangle \) is called an interactive proof system for a language L if there is a negligible function \(\mu (\cdot )\) such that the following two conditions hold:

  • Completeness: For every \(x \in L\), and every \(w\in R_L(x)\), \(\Pr [\langle P(w), V \rangle (x)= 1] = 1\).

  • Soundness: For every \(x \notin L\), and every \(P^*\), \(\Pr [\langle P^*, V \rangle (x) = 1] \le \mu (\kappa )\).

In case the soundness condition is required to hold only with respect to a computationally bounded prover, the pair \(\langle P, V \rangle \) is called an interactive argument system.

Definition 3

(ZK). Let L be a language in \({\mathcal{NP}}\), \(R_L\) a witness relation for L, and \((P,V)\) an interactive proof (argument) system for L. We say that \((P,V)\) is statistical/computational ZK if for every probabilistic polynomial-time interactive machine V there exists a probabilistic algorithm \(\mathcal{S}\) whose expected running-time is polynomial in the length of its first input, such that the following ensembles are statistically close/computationally indistinguishable over L.

  • \(\{\langle P(y), V(z) \rangle (x)\}_{\kappa \in \mathbb {N}\,x\in \{0,1\}^\kappa \cap L, y \in R_L(x), z\in \{0,1\}^*}\)

  • \(\{\mathcal{S}(x,z)\}_{\kappa \in \mathbb {N}\,x\in \{0,1\}^\kappa \cap L, y \in R_L(x), z\in \{0,1\}^*}\)

where \(\langle P(y), V(z) \rangle (x)\) denotes the view of V in interaction with P on common input x and private inputs y and z, respectively.

Definition 4

(Witness-indistinguishability). Let \(\langle P, V \rangle \) be an interactive proof (or argument) system for a language \(L \in {\mathcal{NP}}\). We say that \(\langle P, V \rangle \) is witness-indistinguishable for \(R_L\), if for every probabilistic polynomial-time interactive machine \(V^*\) and for every two sequences \(\{w^1_{\kappa ,x}\}_{\kappa \in \mathbb {N},x\in L}\) and \(\{w^2_{\kappa ,x}\}_{\kappa \in \mathbb {N},x\in L}\), such that \(w^1_{\kappa ,x}, w^2_{\kappa ,x}\in R_L(x)\) for every \(x \in L\cap \{0,1\}^\kappa \), the following probability ensembles are computationally indistinguishable over \(\kappa \in \mathbb {N}\).

  • \(\{\langle P(w^1_{\kappa ,x}), V^*(z) \rangle (x) \}_{\kappa \in \mathbb {N}\,x\in \{0,1\}^\kappa \cap L, z\in \{0,1\}^*}\)

  • \(\{\langle P(w^2_{\kappa ,x}), V^*(z) \rangle (x) \}_{\kappa \in \mathbb {N}\,x\in \{0,1\}^\kappa \cap L, z\in \{0,1\}^*}\)

Definition 5

(Proof of knowledge). Let \((P,V)\) be an interactive proof system for the language L. We say that \((P,V)\) is a proof of knowledge for the witness relation \(R_L\) for the language L if there exists a probabilistic expected polynomial-time machine E, called the extractor, and a negligible function \(\mu (\cdot )\) such that for every machine \(P^*\), every statement \(x \in \{0,1\}^\kappa \), every random tape \(r \in \{0,1\}^*\), and every auxiliary input \(z \in \{0,1\}^*\),

$$Pr [ \langle P^*_r(z), V \rangle (x) = 1]\le Pr[E^{P^*_r(x,z)}(x) \in R_L(x)] + \mu (\kappa ) $$

An interactive argument system \(\langle P, V \rangle \) is an argument of knowledge if the above condition holds w.r.t. probabilistic polynomial-time provers.

Delayed-Input Witness Indistinguishability. The notion of delayed-input Witness Indistinguishability formalizes security of the prover with respect to an adversarial verifier that adaptively chooses the input statement to the proof system in the last round. Once we consider such adaptive instance selection, we also need to specify where the witnesses come from; to make the definition as general as possible, we consider an arbitrary (potentially unbounded) witness selecting machine that receives as input the views of all parties and outputs a witness w for any statement x requested by the adversary. In particular, this machine is a (randomized) Turing machine that runs in exponential time, and on input a statement x and the current view of all parties, picks a witness \(w \in R_L(x)\) as the private input of the prover.

Let \(\langle P, V \rangle \) be a 3-round Witness Indistinguishable proof system for a language \(L \in {\mathcal{NP}}\) with witness relation \(R_L\). Denote the messages exchanged by \((\mathsf {p}_1, \mathsf {p}_{2}, \mathsf {p}_{3})\) where \(\mathsf {p}_i\) denotes the message in the i-th round. For a delayed-input 3-round Witness Indistinguishable proof system, we consider the game \(\mathsf{ExpAWI}\) between a challenger \(\mathcal{C}\) and an adversary \(\mathcal{A}\) in which the instance x is chosen by \(\mathcal{A}\) after seeing the first message of the protocol played by the challenger. Then, the challenger receives as local input two witnesses \(w_0\) and \(w_1\) for x chosen adaptively by a witness-selecting machine. The challenger then continues the game by randomly selecting one of the two witnesses and by computing the third message by running the prover’s algorithm on input the instance x, the selected witness \(w_b\) and the challenge received from the adversary in the second round. The adversary wins the game if he can guess which of the two witnesses was used by the challenger.

Definition 6

(Delayed-Input Witness Indistinguishability). Let \(\mathsf{ExpAWI}_{\langle P, V \rangle }^\mathcal{A}\) be a delayed-input WI experiment parametrized by a \({PPT}\) adversary \(\mathcal{A}\) and a delayed-input 3-round Witness Indistinguishable proof system \(\langle P, V \rangle \) for a language \(L \in {\mathcal{NP}}\) with witness relation \(R_L\). The experiment has as input the security parameter \(\kappa \) and auxiliary information aux for \(\mathcal{A}\). The experiment \(\mathsf{ExpAWI}\) proceeds as follows:

  • \(\mathsf{ExpAWI}_{\langle P, V \rangle }^\mathcal{A}(\kappa , aux)\) :

    • Round-1: The challenger \(\mathcal{C}\) randomly selects coin tosses r and runs P on input \((1^\kappa ; r)\) to obtain the first message \(\mathsf {p}_1\);

    • Round-2: \(\mathcal{A}\) on input \(\mathsf {p}_1\) and aux chooses an instance x and a challenge \(\mathsf {p}_2\). The witness-selecting machine on input the statement x and the current view of all parties outputs witnesses \(w_0\) and \(w_1\) such that \((x,w_0), (x,w_1) \in R_L\). \(\mathcal{A}\) outputs \(x, w_0, w_1, \mathsf {p}_2\) and internal state \(\mathsf{state}\);

    • Round-3: \(\mathcal{C}\) randomly selects \(b\leftarrow \{0,1\}\) and runs P on input \((x,w_b, \mathsf {p}_2)\) to obtain \(\mathsf {p}_3\);

    • \(b' \leftarrow \mathcal{A}((\mathsf {p}_1, \mathsf {p}_2, \mathsf {p}_3), aux, \mathsf{state})\);

    • If \(b = b'\) then output 1 else output 0.

A 3-round Witness Indistinguishable proof system for a language \(L \in {\mathcal{NP}}\) with witness relation \(R_L\) is delayed-input if for any \({PPT}\) adversary \(\mathcal{A}\) there exists a negligible function \(\mu (\cdot )\) such that for any \(aux \in \{0,1\}^*\) it holds that

$$|Pr[\mathsf{ExpAWI}_{\langle P, V \rangle }^\mathcal{A}(\kappa , aux)=1]-1/2|\le \mu (\kappa ) $$

The most recent 3-round delayed-input WI proof system appeared in [COSV16].

Feige-Shamir ZK Proof Systems. For our construction we use the 3-round, public-coin, delayed-input witness-indistinguishable proof-of-knowledge \({ \varPi _{\scriptscriptstyle \mathrm {WIPOK}}}\) based on the work of Feige et al. [FLS99], and the 4-round zero-knowledge argument-of-knowledge protocol of Feige and Shamir \({ \varPi _{\scriptscriptstyle \mathrm {FS}}}\) [FS90].

Recall that the Feige-Shamir protocol consists of two executions of a WIPOK protocol in reverse directions. The first execution has the verifier prove something about a secret that it chooses, and the second execution has the prover proving that either the input statement is true or the prover knows the verifier’s secret. The zero-knowledge simulator then uses the knowledge extraction to extract the secret of the verifier, making it possible to complete the proof.

5.4 Secure Computation

The security of a protocol is analyzed by comparing what an adversary can do in the protocol to what it can do in an “ideal model”. A protocol is secure if any adversary interacting in the real protocol can do no more harm than if it was involved in this “ideal” computation.

Execution in the ideal model. In the “ideal model” we have an incorruptible trusted party to whom the parties send their inputs. The trusted party computes the functionality on the inputs and returns to each party its respective output. Even this model is not completely “ideal”, however, since some malicious behavior that cannot be prevented (such as early aborting) is permitted here too. An ideal execution proceeds as follows:

  • Inputs: Each party obtains an input, denoted by w.

  • Send inputs to trusted party: An honest party always sends w to the trusted party. A malicious party may, depending on w, either abort or send some \(w' \in \{0,1\}^{|w|}\) to the trusted party.

  • Trusted party answers malicious parties: The trusted party realizing the functionality \(\mathbf {\mathcal {F}}=(\mathbf {\mathcal {F}}_M,\mathbf {\mathcal {F}}_H)\) is informed of the set of malicious parties M; we denote the complementary set of honest parties by H.

    Once it has received all the inputs, the trusted party first replies to the malicious parties with \(\mathbf {\mathcal {F}}_M(\mathbf {w})\).

  • Trusted party answers honest parties: The malicious parties reply to the trusted party with either “proceed” or “abort”. If they all reply “proceed” then the trusted party sends \(\mathbf {\mathcal {F}}_H(\mathbf {w})\) to the honest parties. If any of them replies “abort” then the trusted party sends \(\bot \) to the honest parties.

  • Outputs: An honest party always outputs the message it received from the trusted party. A malicious party may output an arbitrary (probabilistic polynomial-time computable) function of its initial input and the message received from the trusted party.

    The random variable containing the joint outputs of the honest and malicious parties in this execution (including an identification of the set M of malicious parties) is denoted by \({\textsc {IDEAL}}_{\mathbf {\mathcal {F}},\mathcal{S}}(\kappa ,\mathbf {w})\), where \(\kappa \) is the security parameter, \(\mathcal{S}\) is the adversary in the ideal model, and \(\mathbf {w}\) are the inputs.

Execution in the real model. In the real model, where there is no trusted party, a malicious party may follow an arbitrary feasible strategy; that is, any strategy implementable by (non-uniform) probabilistic polynomial-time machines. In particular, the malicious party may abort the execution at any point in time (and when this happens prematurely, the other parties are left with no output). The (static) adversary chooses the set M of malicious parties before it receives any inputs to the protocol, and it can be rushing, in that in every communication round it first sees the messages from the honest parties and only then chooses the messages on behalf of the malicious parties.

Let \(\mathbf {\mathcal {F}} : (\{0,1\}^*)^N\rightarrow (\{0,1\}^*)^N\) be an N-party function, let \(\varPi \) be an N-party protocol for computing \(\mathbf {\mathcal {F}}\), and let \(\mathcal{A}\) be an adversary. The joint execution of \(\varPi \) with adversary \(\mathcal{A}\) in the real model, denoted \({\textsc {REAL}}_{\varPi ,\mathcal{A}}(\kappa ,\mathbf {w})\) (with \(\kappa \) the security parameter and \(\mathbf {w}\) the inputs), is defined as the output of the honest and malicious parties (and an identification of the set M of malicious parties), resulting from the protocol interaction.

Definition 7

(secure MPC). Let \(\mathbf {\mathcal {F}}\) and \(\varPi \) be as above. Protocol \(\varPi \) is said to securely compute \(\mathbf {\mathcal {F}}\) (in the malicious model) if for every (non-uniform) probabilistic polynomial-time adversary \(\mathcal{A}\) for the real model, there exists a (non-uniform) probabilistic expected polynomial-time adversary \(\mathcal{S}\) for the ideal model, such that:

$$ \left\{ {\textsc {IDEAL}}_{\mathbf {\mathcal {F}},\mathcal{S}}(\kappa ,\mathbf {w})\right\} _{\kappa \in \mathbb {N},\mathbf {w}\in (\{0,1\}^*)^N} {\mathop {\approx }\limits ^\mathrm{c}}\left\{ {\textsc {REAL}}_{\varPi ,\mathcal{A}}(\kappa ,\mathbf {w})\right\} _{\kappa \in \mathbb {N},\mathbf {w}\in (\{0,1\}^*)^N}. $$

Notations. For a sub-protocol \(\pi \) between two parties \(P_i\) and \(P_j\), denote by \((\mathsf{{p_1}}^{i,j},\ldots , \mathsf{{p_t}}^{i,j})\) the view of the messages in all t rounds, where the superscripts (i, j) denote that the first message of the sub-protocol is sent by \(P_i\) to \(P_j\). Likewise, superscripts (j, i) denote that the first message of the sub-protocol is sent by \(P_j\) to \(P_i\).

6 A Malicious Protocol Without Setup

Our 4-round protocol for the malicious case is obtained by “compiling” the 3-round semi-malicious protocol from Sect. 4, adding round-efficient proofs of correct behavior. The components of this protocol are:

  • The 3-round semi-malicious protocol from Sect. 4, based on the “dual”-GSW-based multi-key FHE scheme with distributed setup. We denote this multi-key FHE scheme by \({\mathsf {MFHE}}= ({\mathsf {MFHE.DistSetup}}, \mathsf {MFHE.Keygen}, {\mathsf {MFHE.Encrypt}}, {\mathsf {MFHE.Eval}}, {\mathsf {MFHE.PartDec}}, {\mathsf {MFHE.FinDec}})\).

  • Two instances of a two-round adaptively secure commitment scheme, supporting tags/identities of length \(\kappa \). We denote the first instance by \(\mathsf {aCom}=(\mathsf {acom}_1,\mathsf {acom}_2)\) and the second by \(\mathsf {bCom}=({\mathsf {bcom}}_1,{\mathsf {bcom}}_2)\).

  • A one-way function OWF.

  • A three-round public coin witness-indistinguishable proof of knowledge with delayed input, \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}=(\mathsf {p}_1, \mathsf {p}_2, \mathsf {p}_3)\), for the \({\mathcal{NP}}\)-Language \({\mathcal{L}^{\scriptscriptstyle \mathrm {WIPOK}}_{P}}\) from Fig. 2. We often refer to this protocol as “proof of correct encryption”, but what it really proves is that EITHER the encryption is consistent with the values committed in \(\mathsf {aCom}\), OR the value committed in \(\mathsf {bCom}\) is a pre-image under OWF of values sent by the other parties.

  • A four-round zero-knowledge argument of knowledge with delayed input, \(\varPi _{\scriptscriptstyle \mathrm {FS}}=(\mathsf {fs}_1, \mathsf {fs}_2, \mathsf {fs}_3,\mathsf {fs}_4)\), for the \({\mathcal{NP}}\)-Language \({{{\mathsf {\mathcal{L}}}^{{\scriptscriptstyle \mathrm {FS}}}_{P}}}\) from Fig. 2. We often refer to this protocol as “proof of correct decryption”.

The parameters for the \({\mathsf {MFHE}}\) scheme, the OWF, and the two proof systems, are chosen polynomially larger than those for the commitment schemes. Hence (assuming sub-exponential security), all these constructions remain secure even against an adversary that can break \(\mathsf {aCom},\mathsf {bCom}\) by exhaustive search.

The protocol. Let \(F: (\{0,1\}^*)^N \rightarrow \{0,1\}^*\) be a deterministic N-party function to be computed. Each party \(P_i\) holds input \(x_i \in \{0,1\}^\kappa \) and identity \(\mathsf {id}_i\). The protocol consists of four broadcast rounds, where messages \((m_t^1,\ldots , m^N_t)\) are exchanged simultaneously in the t-th round for \(t\in [4]\). The message flow is detailed in Fig. 1, and Fig. 3 depicts the messages exchanged between two parties \(P_i\) and \(P_j\). Blue messages are sub-protocols where party \(P_i\) is the prover/committer and party \(P_j\) is the verifier/receiver, red messages denote the opposite.

Fig. 1. Protocol \(\varPi _{\scriptscriptstyle \mathrm {MPC}}\) with respect to party \(P_i\).

Fig. 2. \({\mathcal{NP}}\)-Languages \({\mathcal{L}_{i,j,1}}\), \({\mathcal{L}_{i,j,2}}\), \({\mathcal{L}_{i,j,3}}\) for the \(\varPi _{\scriptscriptstyle \mathrm {FS}}\) and \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) proof systems.

Fig. 3. Messages exchanged between party \(P_i\) and \(P_j\) in \(\varPi _{\scriptscriptstyle \mathrm {MPC}}\). \((\mathsf {acom}_1,\mathsf {acom}_2)\) and \(({\mathsf {bcom}}_1,{\mathsf {bcom}}_2)\) are commitments, \((\mathsf {p}_1, \mathsf {p}_2, \mathsf {p}_3)\) belong to the 3-round \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\), \((\mathsf {fs}_1, \mathsf {fs}_2, \mathsf {fs}_3,\mathsf {fs}_4)\) belong to the 4-round \(\varPi _{\scriptscriptstyle \mathrm {FS}}\), and \(({\mathsf {params}},{\mathsf {pk}},c,ev)\) denote the \({\mathsf {MFHE}}\) messages. Blue messages are sub-protocols where party \(P_i\) is the prover/committer and party \(P_j\) is the verifier/receiver, red messages denote the opposite. (Color figure online)

6.1 Proof of Security

Theorem 3

Assuming sub-exponential hardness of \(\textsc {LWE}\), and the existence of an adaptively-secure commitment scheme, there exists a four-broadcast-round protocol for securely realizing any functionality against a malicious adversary in the plain model with no setup.

To prove Theorem 3, we note that the two assumptions listed suffice for instantiating all the components of our protocol \(\varPi _{{\scriptscriptstyle \mathrm {MPC}}}\): the commitment is used directly for \(\mathsf {aCom}\) and \(\mathsf {bCom}\), and sub-exponential LWE suffices for everything else. We also note that while we think of the protocol from Fig. 1 as a “compilation” of the 3-round protocol from Sect. 4 using zero-knowledge proofs, it is not a generic compiler, as it relies on the specifics of our semi-malicious protocol. See more discussion in Sect. 7 below.

In the following, we prove security of \(\varPi _{{\scriptscriptstyle \mathrm {MPC}}}\) by describing a simulator and proving that the simulated view is indistinguishable from the real one.

Description of the Simulator

Let \(\mathcal{P}= \{P_1,\ldots , P_N\}\) be the set of parties, let \(\mathcal{A}\) be a malicious, static adversary in the plain model, and let \(\mathcal{P}^* \subseteq \mathcal{P}\) be the set of parties corrupted by \(\mathcal{A}\). We construct a simulator \(\mathcal{S}\) (the ideal world adversary) with access to the ideal functionality \(\mathbf {\mathcal {F}}\), such that the ideal world experiment with \(\mathcal{S}\) and \(\mathbf {\mathcal {F}}\) is indistinguishable from a real execution of \(\varPi _{\scriptscriptstyle \mathrm {MPC}}\) with \(\mathcal{A}\). The simulator \(\mathcal{S}\) only generates messages on behalf of parties \(\mathcal{P}\backslash \mathcal{P}^*\), as follows:

Round 1 Messages \(\varvec{\mathcal{S}\rightarrow \mathcal{A}}\) : In the first round, \(\mathcal{S}\) generates messages on behalf of each honest party \(P_h \notin \mathcal{P}^*\), as follows:

  1. Choose randomness \(r_h=(r^{gen}_h,r^{enc}_h)\) for the \({\mathsf {MFHE}}\) scheme and an unrelated \(\kappa \)-bit randomness value \(R_h\), and set \(\hat{R}_h=OWF(R_h)\).

  2. For every j engage in a two-round commitment protocol with \(P_j\). To this end, prepare the first message \(\mathsf {acom}_1^{h,j}\) corresponding to the execution of \(\mathsf {aCom}_{\mathsf {id}_j}(x_j,r^{gen}_j,r^{enc}_j,R_j;\omega _{j})\) on behalf of \(P_h\), acting as the receiver of the commitment. Since the commitment \(\mathsf {aCom}\) is a two-round protocol, the message of the committer \(P_j\) is only sent in the second round.

  3. Prepare the first message \(\mathsf {p}^{h,j}_1\) of \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) (with \(P_h\) as Prover) for the \({\mathcal{NP}}\)-Language \({\mathcal{L}^{\scriptscriptstyle \mathrm {WIPOK}}_{P_h}}\), and the first message \(\mathsf {fs}^{h,j}_1\) of \(\varPi _{\scriptscriptstyle \mathrm {FS}}\) (with \(P_h\) as Verifier) for \({{{\mathsf {\mathcal{L}}}^{{\scriptscriptstyle \mathrm {FS}}}_{P_j}}}\).

  4. Run \({\mathsf {params}}_h \leftarrow {\mathsf {MFHE.DistSetup}}(1^\kappa ,1^N,h)\).

  5. Send the message \(m^{h,j}_1=\left( \mathsf {acom}_1^{h,j}, \mathsf {p}_1^{h,j},\mathsf {fs}_1^{h,j},\hat{R}_h,{\mathsf {params}}_h\right) \) to \(\mathcal{A}\).
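Purely as an illustration of the data carried by this first-round message, the following sketch models its shape as a small container. SHA-256 stands in for the one-way function \(OWF\), and all names here (`Round1Message`, `round1_message`) are our own, not part of the protocol specification:

```python
import os
import hashlib
from dataclasses import dataclass

@dataclass
class Round1Message:
    """Toy container mirroring m1 = (acom1, p1, fs1, R_hat, params_h)."""
    acom1: bytes   # first message of the two-round commitment aCom
    p1: bytes      # first WIPOK prover message
    fs1: bytes     # first Feige-Shamir verifier message
    R_hat: bytes   # image OWF(R_h) of the secret kappa-bit value R_h
    params: bytes  # this party's share of the distributed MFHE setup

def round1_message(kappa_bytes=32):
    # R_h is kept secret by the honest party; only its image under the OWF
    # is broadcast. (It later serves as the trapdoor for the "OR branch".)
    R_h = os.urandom(kappa_bytes)
    R_hat = hashlib.sha256(R_h).digest()
    # The protocol messages themselves are opaque placeholders in this sketch.
    msg = Round1Message(acom1=b"acom1", p1=b"p1", fs1=b"fs1",
                        R_hat=R_hat, params=b"params_h")
    return R_h, msg
```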

Round 1 Messages \(\varvec{\mathcal{A}\rightarrow \mathcal{S}}\) : Also in the first round the adversary \(\mathcal{A}\) generates the messages \(m^{j,h}_1=\left( \mathsf {acom}_1^{j,h}, \mathsf {p}_1^{j,h},\mathsf {fs}_1^{j,h},\hat{R}_j,{\mathsf {params}}_j\right) \) on behalf of corrupted parties \(j\in \mathcal{P}^*\) to honest parties \(h\notin \mathcal{P}^*\). Messages \(\{\mathsf {acom}_1^{j,h}\}\) correspond to an execution of \(\mathsf {aCom}_{\mathsf {id}_h}(\mathbf{0};\omega _{h})\).

Round 2 Messages \(\varvec{\mathcal{S}\rightarrow \mathcal{A}}\) : In the second round \(\mathcal{S}\) generates messages on behalf of each honest party \(P_h\notin \mathcal{P}^*\) as follows:

  1. Complete the commitments to the zero string by generating the second messages \(\mathsf {acom}_2^{j,h}\) corresponding to all executions of \(\mathsf {aCom}_{\mathsf {id}_h}(\mathbf{0};\omega _{h})\).

  2. Honestly prepare the second message \(\mathsf {p}_2^{j,h}\) (\(\mathsf {fs}_2^{j,h}\)) of \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) (\(\varPi _{\scriptscriptstyle \mathrm {FS}}\)) initiated by \(P_j\) acting as the prover (verifier) in the first round.

  3. Generate the first message \({\mathsf {bcom}}_1^{h,j}\) of the second commitment \(\mathsf {bCom}_{\mathsf {id}_j}(\mathbf{0};\zeta _j)\), where party \(P_h\) acts as the Receiver.

  4. Send the message \(m^{h,j}_2=(\mathsf {acom}^{j,h}_2, \mathsf {p}^{j,h}_2, \mathsf {fs}^{j,h}_2, {\mathsf {bcom}}^{h,j}_1)\) to \(\mathcal{A}\).

Round 2 Messages \(\varvec{\mathcal{A}\rightarrow \mathcal{S}}\) : In the second round the adversary \(\mathcal{A}\) generates the messages \(m^{j,h}_2:=(\mathsf {acom}^{h,j}_2, \mathsf {p}^{h,j}_2, \mathsf {fs}^{h,j}_2, {\mathsf {bcom}}^{j,h}_1)\) on behalf of corrupted parties \(j\in \mathcal{P}^*\) to honest parties \(h\notin \mathcal{P}^*\). Messages \(\{\mathsf {acom}^{h,j}_2\}\) correspond to an execution of \(\mathsf {aCom}_{\mathsf {id}_j}(x_j,r^{gen}_j,r^{enc}_j,R_j;\omega _{j})\) and messages \(\{{\mathsf {bcom}}^{j,h}_1\}\) correspond to an execution of \(\mathsf {bCom}_{\mathsf {id}_h}(\mathbf{0};\zeta _{h})\).

Round 3 Messages \(\varvec{\mathcal{S}\rightarrow \mathcal{A}}\) : In the third round \(\mathcal{S}\) generates messages on behalf of each honest party \(P_h\notin \mathcal{P}^*\) as follows:

  1. Generate the second messages \({\mathsf {bcom}}_2^{j,h}\) corresponding to all executions of \(\mathsf {bCom}_{\mathsf {id}_h}(\mathbf{0};\zeta _h)\).

  2. Set \({\mathsf {params}}=({\mathsf {params}}_1,\ldots ,{\mathsf {params}}_N)\) for the \({\mathsf {MFHE}}\) scheme and generate the keys \(({\mathsf {pk}}_h, {\mathsf {sk}}_h) = \mathsf {MFHE.Keygen}({\mathsf {params}},h;r^{gen}_h)\). Generate an encryption of zero using randomness \(r^{enc}_h\): \(c_h={\mathsf {MFHE.Encrypt}}({\mathsf {pk}}_h,\mathbf{0};r^{enc}_h)\).

  3. Honestly prepare the final message \(\mathsf {p}_3^{h,j}\) (\(\mathsf {fs}_3^{h,j}\)) of \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) (\(\varPi _{\scriptscriptstyle \mathrm {FS}}\)) initiated by \(P_h\) acting as the prover (verifier) in the first round.

  4. Send the message \(m^{h,j}_3=({\mathsf {pk}}_h, c_h, \mathsf {p}^{h,j}_3,\mathsf {fs}^{h,j}_3,{\mathsf {bcom}}^{j,h}_2)\) to \(\mathcal{A}\).

Round 3 Messages \(\varvec{\mathcal{A}\rightarrow \mathcal{S}}\) : \(\mathcal{S}\) receives \(m^{j,h}_3=({\mathsf {pk}}_j,c_j, \mathsf {p}^{j,h}_3, \mathsf {fs}^{j,h}_3, {\mathsf {bcom}}^{h,j}_2)\) from \(\mathcal{A}\), where messages \(\{{\mathsf {bcom}}^{h,j}_2\}\) correspond to an execution of \(\mathsf {bCom}_{\mathsf {id}_j}(\mathbf{0};\zeta _{j})\).

Then, \(\mathcal{S}\) proceeds to extract the witness corresponding to each proof-of-knowledge \((\mathsf {p}^{j,h}_1,\mathsf {p}^{j,h}_2,\mathsf {p}^{j,h}_3)\) completed in the first three rounds, using rewinding.

To this end, \(\mathcal{S}\) applies the knowledge extractor of \(\varPi _{WIPOK}\) to obtain the “witnesses” which consist of the inputs and secret keys of the corrupted parties \((x_j,r_j)\) Footnote 7. \(\mathcal{S}\) also uses the zero-knowledge simulator of \(\varPi _{FS}\) to obtain the “trapdoors” associated with that protocol. (Note that here we rely on the specific structure of Feige-Shamir proofs, where the zero-knowledge simulator extracts a “verifier secret” after the 3rd round, that makes it possible to simulate the last round.)

Next \(\mathcal{S}\) sends the extracted inputs \(\{x_j\}_{j\in \mathcal{P}^*}\) to the ideal functionality \(\mathbf {\mathcal {F}}\), which responds by sending back y such that \(y = F(\{x_j\}_{j \in [N]})\).

Round 4 Messages \(\varvec{\mathcal{S}\rightarrow \mathcal{A}}\) : In the fourth round \(\mathcal{S}\) generates messages on behalf of each honest party \(P_h\notin \mathcal{P}^*\) as follows:

  1. Generate the evaluated ciphertext \({{\hat{c}}}:= {\mathsf {MFHE.Eval}}({\mathsf {params}}; F; (c_1,\ldots , c_N))\).

  2. \(\mathcal{S}\) reconstructs all the secret keys \(\{{\mathsf {sk}}_j\}_{j\in \mathcal{P}^*}\) from the witnesses \(\{r^{gen}_j\}_{j\in \mathcal{P}^*}\), and computes the simulated decryption shares \(\{ev_h\}_{h\notin \mathcal{P}^*}\leftarrow \mathcal{S}^T(y, {{\hat{c}}},h,\{{\mathsf {sk}}_j\}{_{j\in \mathcal{P}^*}})\). (The simulator \(\mathcal{S}^T\) is the one provided by [MW16, Sect. 6.2].Footnote 8)

  3. Simulate the final message \(\mathsf {fs}^{j,h}_4\) of the \(\varPi _{\scriptscriptstyle \mathrm {FS}}\) protocol using the extracted trapdoor. \(\mathcal{S}\) sends the message \(m^{h,j}_4=(ev_h, \mathsf {fs}^{j,h}_4)\) on behalf of \(P_h\).

Round 4 Messages \(\varvec{\mathcal{A}\rightarrow \mathcal{S}}\) : In the last round the adversary \(\mathcal{A}\) generates the messages on behalf of corrupted parties in \(\mathcal{P}^*\). For each party \(j\in \mathcal{P}^*\) our simulator receives messages \({m}^{j,h}_4=({ev}_j, \mathsf {fs}^{h,j}_4)\) from \(\mathcal{A}\).

This completes the description of the simulator.

Proof of Indistinguishability

Overview. We need to prove that for any malicious (static) adversary \(\mathcal{A}\), the view generated by the simulator \(\mathcal{S}\) above is indistinguishable from the real view, namely:

$$ \left\{ {\textsc {IDEAL}}_{\mathbf {\mathcal {F}},\mathcal{S}}(\kappa ,\cdot )\right\} _{\kappa } {\mathop {\approx }\limits ^\mathrm{c}}\left\{ {\textsc {REAL}}_{\varPi ,\mathcal{A}}(\kappa ,\cdot )\right\} _{\kappa } $$

To prove indistinguishability, we consider a sequence of hybrid experiments. Let \(H_0\) be the hybrid describing the real-world execution of the protocol, and we modify it in steps:

\(H_1\) :

Use the zero-knowledge simulator to generate the proof in the 4-round \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\); indistinguishability follows by the ZK property of \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\).

\(H_2\) :

Starting in this hybrid, the challenger is given access to a breaking oracle \(\mathcal {B}_{\mathsf {tag}}\) (with \(\mathsf {tag}=(\mathsf {id}_h,\star )\) where h is one of the honest parties). Here the challenger uses the breaking oracle to extract the values committed to by the adversary in \(\mathsf {acom}_2^{h,\mathcal{A}}\) (in the second round), then commits to these same values in \({\mathsf {bcom}}_2^{\mathcal{A},h}\) on behalf of the honest party (in the third round). Indistinguishability follows by the adaptive-hiding of \(\mathsf {bCom}\).

\(H_3\) :

Change the proof in \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) to use the “OR branch”. Indistinguishability follows by the WI property of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) (which must hold even in the presence of the breaking-oracle \(\mathcal {B}_{\mathsf {tag}}\)).

\(H_4\) :

Here the challenger also has access to the ideal-world functionality that gives it the output of the function. Having extracted the secret keys using \(\mathcal {B}_{\mathsf {tag}}\), the challenger simulates the decryption shares of the honest parties rather than using the decryption procedure. Indistinguishability follows since the FHE scheme is simulatable.

\(H_5\) :

Encrypt \(\mathbf 0\)’s rather than the true inputs. Indistinguishability follows due to the semantic security of the encryption scheme.

\(H_6\) :

Commit to 0’s in \(\mathsf {acom}_2^{\mathcal{A},h}\), rather than to the real inputs. Indistinguishability follows from the adaptive-hiding of \(\mathsf {aCom}\).

\(H_7\) :

Revert the change in \(H_3\): make the proof in \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) use the normal branch rather than the “OR branch”. Indistinguishability follows by the WI property of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\).

\(H_8\) :

Revert the change in \(H_2\) and thus commit to zero in \({\mathsf {bcom}}_2^{\mathcal{A},h}\) (instead of committing to the extracted values). Indistinguishability follows by the adaptive-hiding of \(\mathsf {bCom}\).

\(H_9\) :

Here the challenger no longer has access to a breaking oracle, and instead it uses the POK extractor to get the randomness and inputs (witnesses) from \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\). Indistinguishability follows from the extraction property of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\), combined with the one-wayness of OWF.

As \(H_9\) no longer uses the inputs of the honest parties, the view of this hybrid can be simulated. (We also note that the simulator does not use a breaking oracle; rather, it is a traditional rewinding simulator.)

Security in the presence of a breaking oracle: Note that some of our indistinguishability arguments must hold in worlds with a breaking oracle \(\mathcal {B}_{\mathsf {tag}}\). In particular, we require that \(\mathsf {aCom}\) is still hiding, that LWE still holds, and that \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) is still witness-indistinguishable in the presence of the oracle. The hiding property of \(\mathsf {aCom}\) follows directly from its adaptive-hiding property. As for LWE and \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\), security in the presence of \(\mathcal {B}_{\mathsf {tag}}\) follows from sub-exponential hardness and complexity leveraging. Namely, in the relevant reductions we can implement \(\mathcal {B}_{\mathsf {tag}}\) ourselves in subexponential time, while still relying on the hardness of LWE or the witness indistinguishability of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\).
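To make the complexity leveraging concrete, the following toy parameter calculation shows how the scales can be separated. The numbers and the hardness exponent are illustrative assumptions, not values taken from the paper:

```python
# Toy complexity-leveraging arithmetic: we want the cost of brute-forcing the
# commitment (i.e., implementing the breaking oracle B_tag ourselves),
# T_break ~ 2^k_com, to be far below the assumed LWE security 2^(k_lwe^eps).
k_com = 128   # security parameter of aCom/bCom (hypothetical)
eps = 0.5     # assumed subexponential-hardness exponent for LWE (hypothetical)

# Choose k_lwe so that k_lwe^eps is much larger than k_com,
# e.g. k_lwe = (2*k_com)^(1/eps):
k_lwe = int((2 * k_com) ** (1 / eps))   # = 256^2 = 65536
T_break = 2 ** k_com                    # reduction's cost to implement B_tag
lwe_security = 2 ** int(k_lwe ** eps)   # = 2^256 with these numbers

# Leave slack of 2^64 for the rest of the (subexponential-time) reduction:
assert T_break * (2 ** 64) < lwe_security
```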

Another point to note is that using the zero-knowledge simulator (in hybrids \(H_2\)–\(H_9\)) requires rewinding, which may be problematic when doing other reductions. As we explain below, we are able to handle rewinding by introducing many sub-hybrids, essentially cutting the distinguishing advantage by a factor equal to the number of rewinding operations. We now proceed to give more details.

\(\mathbf{H}_\mathbf{0}\) : This hybrid is the real execution. In particular, \(H_0\) starts the execution of \(\mathcal{A}\) providing it fresh randomness and input \(\{x_j\}_{P_j \in \mathcal{P}^* }\), and interacts with it honestly by performing all actions of the honest parties with uniform randomness and input. The output consists of \(\mathcal{A}\)’s view.

\(\mathbf{H}_\mathbf{1}\) : In this hybrid the challenger uses the zero-knowledge simulator of \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\) to generate the proofs on behalf of each honest party \(P_h\), rather than the honest prover strategy used in \(\mathbf{H_0}\). We note that the challenger in this hybrid needs to rewind the adversary \(\mathcal{A}\) (up to the second round), as needed for the Feige-Shamir ZK simulator. Since in these two hybrids the protocol \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\) is used to prove the same true statement, the simulated proofs are indistinguishable from the real ones, so we get:

Lemma 61

\(\mathrm {\mathrm {H}_{0}} \approx _{\mathrm {s}}\mathrm {\mathrm {H}_{1}}\).

\(\mathbf{H}_\mathbf{2}\) : In this “mental-experiment hybrid” the challenger is given access to a breaking oracle \(\mathcal {B}_{\mathsf {id}_h}\), with the tag being the identity of an arbitrary honest party (\(h\notin \mathcal{P}^*\)). The challenger begins as in the real execution for the first two rounds, but then it uses \(\mathcal {B}_{\mathsf {tag}}\) to extract the values \((x_j,r_j,R_j)\) of all the adversarial players \(j\in \mathcal{P}^*\) from \(\mathsf {acom}_2^{h,j}\).

Then the challenger changes the commitments \({\mathsf {bcom}}_2^{j,h}\) on behalf of the honest party \(P_h\), committing to the values \(R_j\) that were extracted from \(\mathsf {acom}_2^{h,j}\) (and thus making the language \({\mathcal{L}_{h,j,2}}\), the “OR branch” in \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\), a true statement).Footnote 9

Lemma 62

\(\mathrm {\mathrm {H}_{1}} \approx _{\mathrm {c}}\mathrm {\mathrm {H}_{2}}\).

Proof:

Since the only difference between these hybrids is the values committed to in \({\mathsf {bcom}}^{j,h}\), indistinguishability follows from the adaptive-hiding of the commitment scheme \(\mathsf {bCom}\) (as the challenger never queries its breaking oracle with any tag containing the identity \(\mathsf {id}_h\) of the honest party).

One subtle point here is that in both \(\mathrm {\mathrm {H}_{1}}\) and \(\mathrm {\mathrm {H}_{2}}\) we use the rewinding Feige-Shamir ZK simulator, so we need to explain how the single value \({\mathsf {bcom}}_2^{j,h}\) provided by the committer in the reduction (which is a commitment to either 0 or \(R_j\)) is used in all these transcripts. To that end, let M be some polynomial upper bound on the number of rewinding operations needed by the zero-knowledge simulator. The reduction to the security of \(\mathsf {bCom}\) chooses at random \(t\in [1,M]\) and uses the \(\mathsf {bCom}\) committer that it interacts with to commit to a value only in the t’th rewinding, committing to 0 in all rewindings \(i<t\) and to the value \(R_j\) (which it has from the breaking oracle) in all rewindings \(i>t\).

By a standard argument, if we can distinguish between \(\mathrm {\mathrm {H}_{1}}\) and \(\mathrm {\mathrm {H}_{2}}\) with advantage \(\epsilon \), then the reduction algorithm can distinguish commitments to 0 and \(R_j\) with advantage \(\epsilon /M\).    \(\square \)
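The embedding trick in this reduction can be sketched in a few lines (a toy illustration; the function names are ours, and the committed values are opaque byte strings):

```python
import random

def embedded_commit_sequence(M, t, zero, R_j, external_challenge):
    """Values committed in bcom across the M rewindings of the ZK simulator:
    0 before the embedded slot t, the single commitment obtained from the
    external bCom challenger at slot t, and the extracted R_j after it."""
    return [zero if i < t else external_challenge if i == t else R_j
            for i in range(1, M + 1)]

def pick_embedding_slot(M):
    # The reduction picks the slot uniformly; a distinguishing advantage eps
    # between H1 and H2 then translates to advantage eps/M against bCom.
    return random.randrange(1, M + 1)
```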

\(\mathbf{H}_\mathbf{3}\) : In this hybrid, we change the witness used in \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) on behalf of each honest party \(P_h\). In particular, all \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) executions use the “OR branch” \({\mathcal{L}_{h,j,2}}\).

Lemma 63

\(\mathrm {\mathrm {H}_{2}} \approx _{\mathrm {c}}\mathrm {\mathrm {H}_{3}}\).

Proof:

We make sub-hybrids that change one honest party at a time, and show that a distinguisher D that distinguishes two such sub-hybrids can be used by another distinguisher \(D'\) to distinguish between the two witnesses of \(\varPi _{WIPOK}\) (as per Definition 6).

Description of \(D'\): \(D'\) plays the role of both the challenger and the adversary in the two hybrids, except that the prover messages of \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) (on behalf of \(P_h\)) are obtained from the external prover that the WI-distinguisher \(D'\) has access to.

At the third round of the protocol, \(D'\) has the statement that \(P_h\) needs to prove, and it gets the two witnesses for that statement from the witness-selecting machine in Definition 6. Sending the statement and witnesses to its external prover, \(D'\) obtains the relevant \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) message (for one of them). \(D'\) also uses these witnesses to complete the other flows of the protocol (e.g., the commitments \({\mathsf {bcom}}_2^{j,h}\) that include some of these witnesses). Once the protocol run is finished, it gives the transcript to D and outputs whatever D outputs.

As above, we still need to support rewinding by the Feige-Shamir ZK simulator while having access to only a single interaction with the external prover; we do so via sub-sub-hybrids, embedding this interaction in a random rewinding t and producing all the other proofs as the \(H_2\) challenger does (for \(i<t\)) or as the \(H_3\) challenger does (for \(i>t\)). It is clear that the advantage of \(D'\) is a \(1/M\) fraction of the advantage of D. \(\square \)

We note that \(D'\) above still uses the breaking oracle \(\mathcal {B}_{\mathsf {tag}}\) (to extract the \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\) secrets), so we need to assume that delayed-input WI holds even in a world with the breaking oracle. As explained above, we rely on complexity leveraging for that purpose. That is, we let \(D'\) run in subexponential time (so it can implement \(\mathcal {B}_{\mathsf {tag}}\) itself), and set the parameters of \(\varPi _{\scriptscriptstyle \mathrm {WIPOK}}\) large enough that we can assume witness indistinguishability even against such a strong \(D'\). (We can implement a subexponentially secure WI protocol from subexponential LWE.)

\(\mathbf{H}_\mathbf{4}\) : The difference from \(H_3\) is that in \(H_4\) we simulate the decryption shares of the honest parties. More specifically, the challenger in \(H_4\) has access also to the ideal functionality, and it proceeds as follows:

  1. It completes the first three broadcast rounds exactly as in \(H_3\).

  2. Having extracted the inputs of all the corrupted parties, the challenger sends these inputs to the ideal functionality \(\mathbf {\mathcal {F}}\) and receives back the output \(y = F(\{x_j\}_{j \in [N]})\).

  3. Having extracted also all the secret keys of the corrupted parties, the challenger has everything that it needs to compute the simulated decryption shares of the honest parties, \(\{ev_h\}_{h\notin \mathcal{P}^*}\leftarrow \mathcal{S}^T(y, {{\hat{c}}},h,\{{\mathsf {sk}}_j\}{_{j\in \mathcal{P}^*}})\).

  4. The challenger also computes the last message of \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\) (using the simulator as before), and sends it together with the decryption shares \(\{ev_h\}_h\) in the last round.

Lemma 64

\(\mathrm {\mathrm {H}_{3}} \approx _{\mathrm {s}}\mathrm {\mathrm {H}_{4}}\).

Proof:

The only change between these two experiments is that the partial decryption shares of the honest parties are no longer generated by the partial decryption procedure; instead they are generated via the threshold simulator \(\mathcal{S}^T\) of the \({\mathsf {MFHE}}\) scheme. By the simulatability of threshold decryption, the partial decryption shares are statistically indistinguishable.    \(\square \)

\(\mathbf{H}_\mathbf{5}\) : We change \(H_4\) by making \(\mathcal{S}\) broadcast encryptions of \(\mathbf{0}\) on behalf of the honest parties in the third round, instead of encrypting the real inputs.

Lemma 65

\(\mathrm {\mathrm {H}_{4}} \approx _{\mathrm {c}}\mathrm {\mathrm {H}_{5}}\).

Proof:

The proof follows directly from semantic security, which in our case follows from LWE. As in the previous hybrid, here too we need this assumption to hold even in the presence of a breaking oracle, and we lose a factor of M in the distinguishing probability due to rewinding.    \(\square \)

\(\mathbf{H}_\mathbf{6}\) : In this hybrid, we get rid of the honest parties’ inputs \(\{(x_h,r_h)\}_h\) (that are present in the values of \(\mathsf {acom}_2^{j,h}\)). Formally, \(H_6\) is identical to \(H_5\) except that in the first round it sets \(x_h=\mathbf{0}\) for all \(h\notin \mathcal{P}^*\).

Lemma 66

\(\mathrm {\mathrm {H}_{5}} \approx _{\mathrm {c}}\mathrm {\mathrm {H}_{6}}\).

Proof:

This proof is very similar to the proof of \(\mathrm {\mathrm {H}_{1}} \approx _{\mathrm {c}}\mathrm {\mathrm {H}_{2}}\), and indistinguishability follows from the adaptive-hiding of \(\mathsf {aCom}\). Since the challenger never asks its breaking oracle \(\mathcal {B}_{\mathsf {tag}}\) to break commitments relative to the honest party’s tags (and since these committed values are no longer used by the challenger for anything else), having the honest parties commit to \(x_h\) is indistinguishable from having them commit to \(\mathbf{0}\).    \(\square \)

\(\mathbf{H}_\mathbf{7}\) : In this hybrid we essentially reverse the change that was made in going from \(H_2\) to \(H_3\). Namely, since now both the encryption and the commitment at each honest party are for the value \(\mathbf{0}\), there is no need to use the “OR branch” in \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\). Hence we revert to using the honest prover strategy there, relative to the input \(x_h=\mathbf 0\). As in Lemma 63, indistinguishability follows by the WI property of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\).

\(\mathbf{H}_\mathbf{8}\) : Revert the change that was made in going from \(H_1\) to \(H_2\), and thus commit to \(\mathbf{0}\) in \({\mathsf {bcom}}_2^{j,h}\) (instead of committing to the extracted values). Indistinguishability follows by the adaptive-hiding of \(\mathsf {bCom}\), just like in Lemma 62.

\(\mathbf{H}_\mathbf{9}\) : In this hybrid the challenger no longer has access to the breaking oracle \(\mathcal {B}_{\mathsf {tag}}\). Instead, it uses the knowledge extractor of \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) to get the input and secret keys of the corrupted parties, and the “standard” zero-knowledge simulator to get the proof in \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\).

Lemma 67

\(\mathrm {\mathrm {H}_{8}} \approx _{\mathrm {s}}\mathrm {\mathrm {H}_{9}}\).

Proof:

The only difference between these hybrids is the method used by the challenger to extract the adversary's secrets. Two technical points need to be addressed here:

  • This hybrid requires rewinding by both the FS ZK simulator and the FLS knowledge extractor, so we need to argue that after polynomially many trials they will both succeed on the same transcript. This is a rather standard argument (which essentially boils down to viewing the knowledge-extractor inside \(\varPi _{{\scriptscriptstyle \mathrm {FS}}}\) and the one used explicitly in \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) as extracting knowledge for an AND language).

  • We also need to argue that the value extracted from the adversary by the \(\varPi _{{\scriptscriptstyle \mathrm {WIPOK}}}\) extractor in \(\mathrm {\mathrm {H}_{9}}\) is a witness for \({\mathcal{L}_{i,j,1}}\) and not for \({\mathcal{L}_{i,j,2}}\). This is done by appealing to the one-wayness of OWF: if there is a noticeable probability of extracting an \({\mathcal{L}_{i,j,2}}\) witness in \(H_9\), then we get an inverter for this one-way function.
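The reduction in the second bullet can be sketched in a few lines. SHA-256 stands in for the one-way function, and the names are ours; the point is only that an “OR branch” witness contains a preimage of the broadcast value \(\hat{R}\), so outputting it inverts the OWF:

```python
import hashlib
import os

def owf(x):
    # SHA-256 as an illustrative stand-in for the one-way function OWF.
    return hashlib.sha256(x).digest()

def inverter_from_extractor(y, extract):
    # If the WIPOK extractor returns an "OR branch" witness for image y,
    # that witness contains a preimage R with OWF(R) = y; outputting it
    # inverts the one-way function, contradicting one-wayness.
    witness = extract(y)
    if witness is not None and owf(witness) == y:
        return witness
    return None  # extractor produced a normal-branch witness (no preimage)
```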

We conclude that in both \(\mathrm {\mathrm {H}_{8}}\) and \(\mathrm {\mathrm {H}_{9}}\) we succeed in extraction with about the same probability, and moreover extract the very same values, so (statistical) indistinguishability follows.    \(\square \)

Observing that the hybrid \(H_9\) is identical to the ideal-world game with the simulator completes the proof of security.    \(\square \)

7 Discussion and Open Problems

Compiling semi-malicious to malicious protocols. Our protocol and its proof can be viewed as starting from a 3-round semi-malicious protocol and “compiling” it into a 4-round malicious protocol using commitments and zero-knowledge proofs. However, our construction is not a generic compiler from semi-malicious to malicious protocols; rather, it relies on the specifics of our 3-round semi-malicious protocol from Sect. 4. At the very least, our construction needs the following two properties of the underlying semi-malicious protocol:

  • Public-coin 1st round. In our protocol we must send the second-round messages of the underlying protocol no later than the 3rd round of the compiled protocol. We thus have at most two rounds to prove that the first-round messages are valid, before we must send the second-round messages, severely limiting the type of proofs that we can use.

    This is not a problem in our case, since the first round of the semi-malicious protocol is public coin, i.e., the parties just send each other random bits. Hence there is nothing to prove about these messages, and the semi-malicious protocol can withstand arbitrary messages sent by the adversary.

  • Committing 2nd round. We also use the fact that the second round of the semi-malicious protocol is fully committing to the input, since our simulator extracts the inputs after these rounds.

We remark that in some sense every 3-round semi-malicious protocol with a public-coin first round and fully-committing second round can be thought of as a multi-key homomorphic encryption scheme with distributed setup, by viewing the random coins sent in the first round as the \({\mathsf {params}}\) and the second-round messages as encryptions of the inputs.

Adaptive commitments. Although the intuitive property that we need from the commitment component of our protocol is non-malleability, our actual proof relies heavily on the stronger notion of adaptive security, which lets us use straight-line extraction from the adversary's commitments. Oversimplifying, the use of adaptive commitments with straight-line extraction, together with the WI proofs, lets us construct a 3-round non-malleable zero-knowledge proof-of-knowledge system.

While it is plausible that our 3-round semi-malicious protocol can be “compiled” using only non-malleable commitments, avoiding complexity leveraging, we were not able to do so; this question remains open.Footnote 10