On the Computational Overhead of MPC with Dishonest Majority
Abstract
We consider the situation where a large number n of players want to securely compute a large function f with security against an adaptive, malicious adversary which might corrupt \(t < c n\) of the parties for some given \(c \in [0,1)\). In other words, only some arbitrarily small constant fraction of the parties are assumed to be honest. For any fixed c, we consider the asymptotic complexity as n and the size of f grow. We are in particular interested in the computational overhead, defined as the total computational complexity of all parties divided by the size of f. We show that it is possible to achieve polylogarithmic computational overhead for all \(c < 1\). Prior to our result it was only known how to get polylogarithmic overhead for \(c < \frac{1}{2}\). We therefore significantly extend the area where we can do secure multiparty computation with polylogarithmic overhead. Since we allow that more than half the parties are corrupted, we can only get security with abort, i.e., the adversary might make the protocol abort before all parties learn their outputs. We can, however, for all c make a protocol for which there exists \(d > 0\) such that if at most dn parties are actually corrupted in a given execution, then the protocol will not abort. Our result is solely of theoretical interest. In its current form, it has no practical implications whatsoever.
Keywords
Secret sharing · Secret-sharing scheme · Ideal functionality · Expander graph · Virtual server

1 Introduction
We consider the situation where a large number n of players want to securely compute a large function f with security against an adaptive, malicious adversary which might corrupt \(t < c n\) of the parties for some given constant \(c \in [0,1)\). In other words, only some arbitrarily small constant fraction of parties are assumed to be honest. We also require that there exists \(d > 0\) such that if at most dn parties are actually corrupted in a given execution, then the protocol will not abort. We call this the setting with constant honesty and constant termination guarantee.
For any fixed c, we consider the asymptotic complexity as n and the size of f grow. We are in particular interested in the computational overhead, defined by summing the total computational complexity of all parties and dividing by the size of f. We show that it is possible to achieve polylogarithmic computational overhead for all \(c < 1\). Prior to our result, it was only known how to get polylogarithmic overhead for settings with constant honesty and constant termination guarantee for \(c < \frac{1}{2}\) (cf. [DIK+08, CDD+15, BSFO12, BCP15, CDI+13]). We therefore significantly extend the area where we can do secure multiparty computation with polylogarithmic overhead. Let us state up front that our result is only meant as an asymptotic feasibility result. The constants hidden by the asymptotic analysis are so huge that the protocol has no practical implications.
Our protocol is based on standard assumptions. It can be built in a white-box manner from essentially any multiparty computation protocol secure against any number of corrupted parties and a secure multiparty computation protocol which has polylogarithmic overhead and which is secure when at most a quarter of the parties are corrupt. Both protocols should be secure against a malicious, adaptive adversary. We give an information-theoretically secure protocol in the hybrid model with oblivious transfer and a small number of initial broadcasts. We also give a computationally secure protocol in the hybrid model with a CRS and a PKI.
We note that approaches based on selecting a small committee and having the committee run the computation are doomed to failure in our model. This is because any small committee can be corrupted by the adaptive adversary. The protocol from [CPS14] is insecure in our model precisely for this reason. We also note that our protocol, in contrast to the low-overhead protocols of [DPSZ12, DZ13, DKL+13], does not rely on preprocessing. The IPS compiler [IPS08] is also a generic protocol with low computational overhead, but it has a quadratic overhead in the number of players, so it does not have a low computational overhead in the sense that we consider here. Finally, notice that an approach based on fully homomorphic encryption, where the n parties send their encrypted inputs to one party and let this party do the computation, can have a polylogarithmic computational overhead. However, it does not have a constant termination guarantee. To ensure this, it seems one would still need some constant fraction of the parties to do all the computation, suffering a blow-up in the overhead of a factor \(\varTheta (n)\).
2 Technical Overview
Our protocol follows the same high-level approach as [DIK+08], which is based on the work of [Bra87]. Our protocol is also inspired by the IPS compiler from [IPS08] and the player virtualization technique from [HM00]. The main idea is that we will run an honest-majority protocol with polylogarithmic overhead. Following [IPS08], we call this the outer protocol. Each of the parties \(\mathsf {P}_i\) in the outer protocol will be emulated by a constant number of the parties running a protocol with security against any number of corrupted parties. The set of parties that run \(\mathsf {P}_i\) is called committee number i. The protocol that committees run is called the inner protocol.
We use an expander graph to set up the committees so that except with negligible probability, a vast majority of committees will contain at least one honest player as long as at most cn of the real parties are corrupted. We call a committee consisting of only honest parties an honest committee. We call a committee containing at least one honest party and at least one corrupted party a crashable committee. We call a committee consisting of only corrupted parties a corrupted committee. Since the inner protocol is secure against any number of corrupted parties, an honest committee corresponds to an honest party in the outer protocol and a corrupted committee corresponds to a corrupted party in the outer protocol. Since the inner protocol only guarantees termination when all parties in the committee are honest, a crashable committee corresponds to a party in the outer protocol which is running correctly and which has a private state, but which might crash—if a corrupted committee member makes the inner protocol abort.
We need the outer protocol to tolerate a large constant fraction of malicious corruptions (one quarter) along with any number of fail-stop errors. At the same time, we need it to guarantee termination if there is a large enough fraction of honest parties. On top of that, the protocol needs to have polylogarithmic overhead. Prior to our work, there was no such protocol in the literature. We show how to build such a protocol in a white-box manner from off-the-shelf protocols with polylogarithmic overhead.
There are many additional complications along the way. Most honest-majority protocols rely on private and authenticated channels. Since an adversary can corrupt players so that all committees contain a corrupted member,^{1} we need a way to allow the inner protocols emulated by different sets of parties to communicate securely while hiding the messages from the committee members. We should also prevent corrupted committee members from attacking the delivery or authentication of the transmitted messages. In addition, when a user sends his input to a party of the outer protocol emulated by a committee that may have only a single honest party, we should still be able to guarantee that he can securely send a message to the inner protocol. This is necessary to ensure that an honest party cannot be prevented from giving input. To solve these problems, we employ a multitude of new techniques described in the following technical sections, which include player elimination and the use of a tamper-resilient secret-sharing scheme.
Although the basic approach is the same, there are important technical differences between this work and [DIK+08]. In the following, we describe the most important ones. The work of [DIK+08] employs Verifiable Secret Sharing (VSS) to solve the problem of secure message transmission between parties. It also uses VSS to allow real parties to provide their inputs to the emulated parties of the outer protocol. The work of [DIK+08] can employ VSS because it can set up committees so that it is guaranteed that most committees have an honest majority. In contrast, since it could be that a majority of players are corrupt, we cannot ensure that any committee has an honest majority and therefore we cannot employ VSS. Another difference is that we need an explicit bipartite expander graph with constant left degree and constant right degree. Since we could not find such a construction in the literature, we constructed such an expander using standard techniques.
3 Setting the Stage
We use \(\lambda \) to denote the security parameter. We consider a setting with n players \(\mathsf {P}_1, \ldots , \mathsf {P}_n\). Here \(\mathsf {P}_i\) is just a distinct name for each of the participating players. We use \(\mathcal {P}= \{ \mathsf {P}_1, \ldots , \mathsf {P}_n \}\) to denote the set of players. We assume that all players agree on \(\mathcal {P}\). We assume that n is a function of \(\lambda \) and that \(n(\lambda ) \ge \lambda \). We often write n instead of \(n(\lambda )\).
We also assume that the parties agree on a circuit C to be computed. We assume that all parties have a fixed size input in C. We use \(s = {\text {size}}_{\textsc {Bool}}(C)\) to denote the size of C.
By a protocol \(\pi \), we mean a generic protocol which has an instantiation \(\pi (C,\lambda ,n)\) for each circuit C, value of the security parameter \(\lambda \) and number n of parties. We assume that there exists a uniform poly-time Turing machine which produces circuits computing \(\pi (C,\lambda ,n)\) given input \((C,1^\lambda ,1^n)\). We do not consider the production of \(\pi (C,\lambda ,n)\) as part of the complexity of \(\pi \). We use \({\text {comp}}(\pi (C,n,\lambda ))\) to denote the expected total work done by all parties in \(\pi (C,n,\lambda )\), where the work is measured as local computation, counting the sending and receiving of one bit as 1 towards the work. Note that \({\text {comp}}(\pi (C,n,\lambda ))\) in particular is an upper bound on the communication of the protocol.
We prove security in the UC model assuming a synchronous model, point-to-point channels and broadcast. The UC model is originally best geared towards modeling asynchronous computation, but it can be adapted to model synchronous computation.
3.1 UC and Synchronous Computation
Our study is cast in the synchronous model. Still, we would like to prove security in the UC model which by design considers asynchronous computation. The reason why we would like to use the UC model is to give modular proofs. We need to consider reactive secure computations for which the UC model is the de facto standard. One can cast synchronous computation in the UC model by using the techniques of [KMTZ13]. The model from [KMTZ13] is, however, much more refined and detailed than what we need, so we have decided to go for a simpler model that we present below.
We are going to assume that synchrony is ensured by the environment giving the parties \(\mathsf {P}_i\) special inputs \(\textsc {tick}\) modeling that the time has increased by one tick for \(\mathsf {P}_i\). The parties can then simply count what time it is. We then simply require that the environment keeps the clocks of two honest parties at most one tick apart. To make sure that all parts of a composed protocol and all ideal functionalities know what time it is, we require that all parties which receive an input \(\textsc {tick}\) pass it on to all their sub-protocols and ideal functionalities.
In a bit more detail, synchrony is defined via a synchrony contract that all entities must follow for as long as all other entities do so. We describe the contract now for the different entities of the UC framework. In doing so, we describe the behaviour that the entity must show, assuming that all other parties followed the contract until that point. If an entity A observes another entity B breaking the contract, then A is allowed to exhibit arbitrary behaviour after that point.

Synchronous Environment. A round is defined by all parties having received the input \(\textsc {tick}\) from the environment. The environment might in each round give additional input \(x_i\) to a party \(\mathsf {P}_i\) by inputting \((\textsc {tick}, x_i)\). We use \(r_i\) to denote the round in which \(\mathsf {P}_i\) is: we say that party \(\mathsf {P}_i\) is in round \(r_i\) if it has received the input \(\textsc {tick}\) exactly \(r_i\) times from the environment. The environment must ensure that \(r_i \le r_j + 1\) for all honest parties \(\mathsf {P}_i\) and \(\mathsf {P}_j\). Furthermore, when the environment sends \(\textsc {tick}\) to an honest party \(\mathsf {P}_i\), it cannot send another \(\textsc {tick}\) to that party until it has received an output from \(\mathsf {P}_i\).

Synchronous Parties. If a party \(\mathsf {P}_i\) gets input \(\textsc {tick}\) from its environment, it must, after this input, send \(\textsc {tick}\) exactly once to each of its ideal functionalities. Note that the caller might be a super-protocol instead of an environment and that \(\mathsf {P}_i\) might be calling a sub-protocol instead of an ideal functionality. This is transparent to \(\mathsf {P}_i\), and we will use environment to denote the entity calling \(\mathsf {P}_i\) and ideal functionality to denote the entity being called by \(\mathsf {P}_i\). When an honest party has received back an output from all the sub-entities to which it input \(\textsc {tick}\), it must deliver an output to its environment as the next thing.
Notice that if we compose a synchronous environment with a synchronous protocol to get a new environment, then we again have a synchronous environment, which is the main observation needed to lift the UC composition theorem to the synchronous setting.
In the following we will ignore the inputs \(\textsc {tick}\) as they are only used to define which round we are in. We will say that \(\mathsf {P}_i\) gets input \(x_i\) in round \(r_i\) if it gets input \((\textsc {tick}, x_i)\) in that round. We will say that \(\mathsf {P}_i\) gets no input in round \(r_i\) if it gets input \((\textsc {tick})\) in that round.
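The tick bookkeeping above can be sketched in a few lines of Python (the class and method names are ours, purely illustrative): each entity counts the ticks it has received, forwards each tick exactly once to every sub-entity, and answers its caller only after all sub-entities have answered.

```python
# Illustrative sketch (our own naming) of the tick-based synchrony contract:
# an entity forwards each TICK to all of its sub-entities, and only reports
# back to its caller once every sub-entity has answered.

class Entity:
    def __init__(self, name, subentities=()):
        self.name = name
        self.subentities = list(subentities)
        self.round = 0          # r_i: number of TICKs received so far
        self.pending_input = None

    def tick(self, extra_input=None):
        """Receive (TICK, x) from the caller; return an output when done."""
        self.round += 1
        self.pending_input = extra_input
        # Pass TICK on exactly once to every sub-protocol / ideal functionality.
        for sub in self.subentities:
            sub.tick()
        # All sub-entities have answered, so deliver an output to the caller.
        return (self.name, self.round)

leaf = Entity("F_comm")                 # an ideal functionality
party = Entity("P_1", [leaf])           # a party using it
out = party.tick(extra_input="x_1")     # round 1, with input x_1
```

Note how composing a caller with such an entity yields another entity obeying the same contract, which mirrors the composition observation above.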
We will be using an ideal functionality for synchronous communication. In each evaluation, party \(\mathsf {P}_i\) has input \((x_{i,1}, \ldots , x_{i,n})\) and receives the output \((x_{1,i}, \ldots , x_{n,i})\), i.e., in each round each party can send a message to each other party. We will not write this ideal functionality explicitly in our formal statements. We consider it the ground model of communication, i.e., it is present in all our (hybrid) models. The round complexity of each evaluation is 1.
We will be using an ideal functionality for broadcast between a set of parties \(\mathsf {P}_1, \ldots , \mathsf {P}_n\). In each evaluation each party has input \(x_i\) and receives the output \((x_1, \ldots , x_n)\). The round complexity is the same in each evaluation but might depend on the number of parties. For n parties, we use \(\mathsf {R}_{\textsc {broadcast}}(n)\) to denote the round complexity of one evaluation of broadcast among n parties.
3.2 Broadcast
4 The Outer Protocol
The outer protocol \(\pi _{\textsc {out}}\) involves n users \(\mathsf {U}_1, \ldots , \mathsf {U}_n\) and m servers \(\mathsf {S}_1,\ldots , \mathsf {S}_m\). Only the users have inputs and outputs. The protocol computes an n-party function \(f: D^n \rightarrow E^n\) given by circuit C. We assume that \(D = \{0,1\}^k\) and that \(E = D \cup \{ \bot \}\), but the protocol obviously generalises to differently structured inputs and outputs. We use \(\bot \) to signal that a user did not get an output.
We assume that f is fixed and that the protocol runs in some fixed number of rounds, which we denote by \(\mathsf {R}_{\textsc {out}}\).
We assume that the only interactions involving users are in the first round, where all users send a message to all servers (we call these the input messages), and in some given round \(\mathsf {R}_{\textsc {out}}\), where all servers send a message to all users (we call these the output messages). We use \(\mathsf {In}_{\textsc {out}}\) to denote the randomized function used to compute input messages from an input and we use \(\mathsf {Out}_{\textsc {out}}\) to denote the deterministic function used to compute the output from output messages.
Definition 1
We say that \(\pi _{\textsc {out}}\) is a \((\mathsf {t}_\textsc {out}^\textsc {mal},\mathsf {t}_\textsc {out}^{\textsc {term}})\)-suitable outer protocol if it UC realises the corresponding \(\mathcal {F}_\textsc {out}^{\mathsf {t}_\textsc {out}^{\textsc {term}}}\) against a proportion \(\mathsf {t}_\textsc {out}^\textsc {mal}\) of adaptive, active corruptions and any number of adaptive crash-stop corruptions.
Theorem 1
There exists a suitable outer protocol \(\pi \) for all \(C \in \mathsf {NC}\) with \({\text {OH}}= {\text {polylog}}(n) \cdot \log ({\text {size}}(C))\).
Proof
(sketch). We only sketch a proof of the theorem, as the desired protocol can be built in a fairly straightforward manner from off-the-shelf techniques.
Starting from [DIK10] we get a protocol \(\pi \) for m servers and a circuit C which is perfectly secure against m / 4 adaptive, active corruptions. We can extend this to the client-server setting by having each \(\mathsf {U}_i\) secret-share its input among \(\mathsf {S}_1, \ldots , \mathsf {S}_m\) and then computing the function \(f'\) which first reconstructs the secret sharings, computes f, and outputs secret sharings of the results. We denote the resulting protocol by \(\pi '_f\). It runs the protocol \(\pi _{f'}\), i.e., the protocol \(\pi \) from [DIK10] for the function \(f'\).
The secret-sharing scheme used for the inputs and outputs should have the following properties. First, that given m shares of which at most \(\smash {\frac{1}{4}}m\) are incorrect, one can efficiently reconstruct the message. Furthermore, the secret-sharing scheme should also have the property that given at most m / 4 shares, one gets no information on the secret. Finally, when secret sharing a message x, the secret-sharing scheme should produce a secret sharing of size \({O(\vert x \vert + m)}\) and it should be possible to share and reconstruct in time \(O(\vert x \vert + m)\). The secret-sharing scheme from [CDD+15] meets these criteria.
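As an illustration of the threshold requirements (though not of the linear-size sharings or the efficient error correction that the scheme from [CDD+15] additionally provides), here is a minimal Shamir-style sketch in Python; all names are ours:

```python
import random

# Minimal Shamir-style sketch over a prime field, illustrating only the
# thresholds (privacy against m/4 shares; reconstruction once enough correct
# shares are available). The scheme actually needed here ([CDD+15]) also has
# linear-size sharings and efficient error correction, which we do not model.

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(secret, m, t):
    """Split `secret` into m shares; any t shares reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from (index, value) pairs."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

m = 16
t = m // 4                                   # privacy threshold m/4
shares = share(42, m, t)
assert reconstruct(shares[: t + 1]) == 42    # any t+1 correct shares suffice
```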
We now do a generic, white-box transformation of the protocol \(\pi \) into a protocol \(\pi '_f\) which can tolerate crash errors. Each server \(\mathsf {S}_j\) will run exactly as in \(\pi \) except that it keeps a counter \(c_j\) which is initialized to 0 and which is increased whenever \(\mathsf {S}_j\) sees a party send \(\textsc {crashed}\). There is a threshold \(t = m/8\), and when \(c_j \ge t\), server \(\mathsf {S}_j\) will crash itself, i.e., it sends \(\textsc {crashed}\) to all parties and stops sending messages. If at the end of the computation of \(f'\), a server is not crashed, it sends its share of the output of \(\mathsf {U}_i\) to \(\mathsf {U}_i\).
The intuition behind \(\pi '_f\) is that we try to keep the number of malicious servers plus the number of crashed servers within the threshold m / 4 of \(\pi \). We will use m / 8 of the budget for crashes and have m / 8 left for tolerating some additional malicious corruptions. If we see too many crashed servers, then all servers will shut down the execution of \(\pi \) by self-crashing. We say that \(\pi \) was shut down if all correct servers did a self-crash.
We are going to reduce the security of \(\pi '_f\) to that of \(\pi \) by considering all parties which deviate from \(\pi '_f\) as actively corrupted. Notice that in \(\pi '_f\) there are three types of parties which deviate from the underlying protocol \(\pi _{f'}\): (1) the servers \(\mathsf {S}_j\) which are actively corrupted in \(\pi '_f\); (2) the servers \(\mathsf {S}_j\) which are crash-stop corrupted in \(\pi '_{f}\); (3) the correct servers \(\mathsf {S}_j\) which did a self-crash and hence stopped running \(\pi _{f'}\). At any point in the execution, let \(d_i\) denote the number of servers which deviated from \(\pi \) and are of type i and let \(d = d_1 + d_2 + d_3\). We are going to show that at any point before the shutdown point, it holds that \(d < m/4\). This means that up to the shutdown point, we can perfectly emulate an attack on \(\pi '_f\) by an attack on \(\pi _{f'}\) using \(< m/4\) active corruptions. This also holds after the shutdown point, since all honest parties have self-crashed and therefore there is no more communication from honest parties to simulate.
What remains is to show that if \(d \ge m/4\), then the shutdown point has been reached. Assume that \(d \ge m/4\). If in a given round there are \((d_1, d_2, d_3)\) deviators of the various types, then at the beginning of the next round all correct servers have seen \(d_2 + d_3\) messages \(\textsc {crashed}\), as both crashed and self-crashed parties sent out \(\textsc {crashed}\) to all parties. Hence before the next round begins it will hold for all correct \(\mathsf {S}_j\) that \(c_j \ge d_2 + d_3 = d - d_1 \ge m/4 - d_1 \ge m/4 - m/8 = m/8 = t\). Hence the shutdown point has been reached.
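The counting argument above can be checked mechanically on toy numbers (the helper and the particular splits of deviators into the three types are ours):

```python
# A toy check (with made-up numbers) of the shutdown argument: once the total
# number of deviators d = d1 + d2 + d3 reaches m/4, every correct server has
# seen at least d2 + d3 >= m/8 = t CRASHED messages and self-crashes.

m = 64
t = m // 8                      # self-crash threshold

def correct_servers_shut_down(d1, d2, d3):
    d = d1 + d2 + d3
    assert d >= m // 4          # premise of the claim
    assert d1 <= m // 8         # active corruptions stay within their budget
    crashed_msgs_seen = d2 + d3 # every crashed/self-crashed server broadcast CRASHED
    return crashed_msgs_seen >= t

# d1 actively corrupted, d2 crash-stopped, d3 self-crashed (hypothetical splits)
assert correct_servers_shut_down(d1=8, d2=5, d3=3)
assert correct_servers_shut_down(d1=0, d2=16, d3=0)
```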
We then show that if any party gets an output then all honest users have their inputs considered correctly and all honest parties who get an output get the correct output. If the shutdown point is reached, then clearly no party gets an output, so assume that the shutdown point was not reached. Then the attack can be emulated given m / 4 active corruptions. This means that at most m / 4 of the shares of the honest parties are missing or modified. Therefore each honest \(\mathsf {U}_i\) will correctly reconstruct \(z_i\).
This ends the proof that the protocol is secure. We now address when the protocol is guaranteed to terminate.
It is clear that as long as \(d_1 + d_2 < t\), we will have that \(d_3 = 0\), as all \(c_j \le d_1 + d_2\) until the first self-crash. This shows that as long as \(d_1 + d_2 < t\) we will have \(d < t = m/8\) and therefore guaranteed termination of \(\pi '_{f}\). Furthermore, if \(d_1 + d_2 < m/8\) then at least \(m - d \ge \smash {\frac{7}{8}}m\) shares of the secret-shared inputs are correct and at most \(d_1 + d_2 \le m/8\) are incorrect. Hence, all honest parties will have their inputs \(x_i\) reconstructed inside \(f'\). It will similarly be guaranteed that each \(\mathsf {U}_i\) receives at least \(m - d \ge \smash {\frac{7}{8}}m\) shares of the secret-shared output and that at most m / 8 of these are incorrect. Hence \(\mathsf {U}_i\) can compute the correct \(z_i\).
We then address the complexity of the functions. By the assumptions on the secret sharing scheme and on the size of f, we have that \(\vert f' \vert = O(\vert f \vert + n \cdot m)\). Assuming that \(m = O(n)\), this is of the form \(\vert f' \vert = O(\vert f \vert ) + {\text {poly}}(n)\), so for the sake of computing the overhead, we can assume that \(\vert f' \vert = O(\vert f \vert )\). When \(f \in \mathsf {NC}\), then the protocol from [DIK10] has \({\text {OH}}= {\text {polylog}}(n) \cdot \log (\vert f' \vert )\).
5 The Inner Protocol
Definition 2
We say that \(\pi _{\textsc {in}}\) is a suitable inner protocol for \((\mathsf {Tr}_{\textsc {in}},\mathsf {R}_{\textsc {in}})\) if it UC realises \(\mathcal {F}_\textsc {in}^{\mathsf {Tr}_{\textsc {in}},\mathsf {R}_{\textsc {in}}}\) against adaptive, active corruption of any number of parties.
Theorem 2
For all c and poly-sized \(\mathsf {Tr}_{\textsc {in}}\) there exist \(\mathsf {R}_{\textsc {in}}\) and \(\pi _{\textsc {in}}\) such that \(\pi _{\textsc {in}}\) is a suitable inner protocol for \((\mathsf {Tr}_{\textsc {in}},\mathsf {R}_{\textsc {in}})\) in the OT-hybrid model with statistical security and complexity \(O({\text {poly}}(c) \vert \mathsf {Tr}_{\textsc {in}}\vert )\), where in the complexity the calls to the OT functionality are counted as the size of the inputs and outputs.
Proof
One can use the protocol from [IPS08]. One can in particular note that once the circuit to be computed is fixed, [IPS08] has a fixed round complexity.
Theorem 3
For all c and poly-sized \(\mathsf {Tr}_{\textsc {in}}\) there exist \(\mathsf {R}_{\textsc {in}}\) and \(\pi _{\textsc {in}}\) such that \(\pi _{\textsc {in}}\) is a suitable inner protocol for \((\mathsf {Tr}_{\textsc {in}},\mathsf {R}_{\textsc {in}})\) in the CRS model with computational complexity \(O({\text {poly}}(c) \vert \mathsf {Tr}_{\textsc {in}}\vert \lambda )\).
6 Combining the Inner Protocol and the Outer Protocol
In this section, we describe how to combine the inner and outer protocols into the protocol that we call the combined protocol. This is a new instance of a black-box protocol transformation defined by [IKP+16]. First, we will describe the tools that we will need. The first tool is called an expander graph. The second tool, called authentic secret sharing, is a secret-sharing scheme that allows an honest party that receives shares to detect tampering. Our third tool is called Authenticated One-Time Encryption, which is an information-theoretic authenticated encryption scheme analogous to the one-time pad. Finally, we will describe how to run the combined protocol. We will describe what to do when an emulated server crashes, how emulated servers can exchange keys with other parties even when their committee has only a single honest party, and how the servers can then use those keys to communicate securely. We will then describe our final protocol and prove that it has polylogarithmic overhead and some termination guarantees.
6.1 More Tools
Threshold Bipartite Expander Graph. A threshold bipartite expander graph is a bipartite graph with n left nodes and m right nodes which guarantees that for any set of left nodes of size greater than or equal to \(\alpha n\), the size of the neighborhood of that set is greater than or equal to \(\beta m\). Recall that given a graph \(G=(V,E)\) and some subset \(S \subseteq V\), the neighbourhood of S, denoted by N(S), is the set of nodes that are adjacent to a node in S, i.e., \(N(S) := \{v \in V \mid \exists \; u \in S : (u,v) \in E\}\). As usual, a bipartite graph is a graph \(G = (L \cup R,E)\) where \(L \cap R = \emptyset \), \(N(L) \subseteq R\) and \(N(R) \subseteq L\). The left (right) degree is the maximal degree of a node in L (R).
Definition 3
An \((n,m,\alpha ,\beta )\)-threshold bipartite expander is a bipartite graph \(G = (L \cup R,E)\) with \(\vert L \vert = n\) and \(\vert R \vert = m\) such that if \(S \subseteq L\) and \(\vert S \vert \ge \alpha n\) then \(\vert N(S) \vert \ge \beta m\).
We show that for all constants \(0< \alpha < 1\) and \(0< \beta < 1\) there exist \(m = O(n)\) and an \((n,m,\alpha ,\beta )\)-threshold bipartite expander where the left degree is O(1) and the right degree is O(1).
We describe a simple construction of a bipartite threshold expander graph, inspired by [SS96]. We will show that for any \(\alpha > 0\) there exists a degree d such that for every n and \(\beta \) there exists an explicit way of constructing graphs such that the resulting graph is an \((n,n,\alpha ,\beta )\)-threshold bipartite expander graph except with probability negligible in n. In addition, the degree of the graph is at most d and each right node has at least one edge. We denote the binary entropy function by \(\mathbb {H}\).
Theorem 4
For any \(0 < \alpha ,\beta < 1\), let \(d = \left\lceil - \frac{ \mathbb {H}(\alpha )+ \mathbb {H}(\beta )}{\alpha \log \beta } \right\rceil + 1\). Then for any \(n \in \mathbb {N}\) the previous construction results in a bipartite \((n,n,\alpha ,\beta )\)-threshold expander except with probability smaller than \(2^{\alpha n \log \beta }\).
We note that the number of left sets of size \(\alpha n\) is equal to \(\binom{n}{\alpha n}\), and the number of right sets of size \((1-\beta ) n\) is equal to \(\binom{n}{\beta n} = \binom{n}{(1-\beta ) n}\).
We will now upper bound the probability that the neighborhood of a fixed set of \(\alpha n\) left nodes does not intersect a fixed set of \((1-\beta ) n\) right nodes. For each permutation and each element in the left set, the probability that the element is not mapped to an element in the right set is at most \(\beta \). Therefore the probability is upper bounded by \(\beta ^{\alpha n d}\).
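For concreteness, one can compute the degree and the failure-probability exponent promised by Theorem 4 with a few lines of Python (the helper names are ours):

```python
import math

# Plugging numbers into Theorem 4 (our own helpers): the degree d needed for
# an (n, n, alpha, beta)-threshold expander and the corresponding failure
# bound 2^(alpha * n * log2(beta)), whose exponent is negative.

def H(p):  # binary entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expander_degree(alpha, beta):
    return math.ceil(-(H(alpha) + H(beta)) / (alpha * math.log2(beta))) + 1

def failure_prob_exponent(alpha, beta, n):
    # log2 of the failure probability bound; log2(beta) < 0, so this is
    # linearly decreasing in n, i.e., the bound is negligible in n
    return alpha * n * math.log2(beta)

d = expander_degree(0.1, 0.5)          # a constant degree
assert d >= 1
assert failure_prob_exponent(0.1, 0.5, 1000) < 0
```

Note that d depends only on the constants \(\alpha\) and \(\beta\), not on n, matching the claim that the left and right degrees are O(1).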
Lemma 1
For the construction above, the degree of the graph is at most d.
This follows since there are d permutations in \(\Pi \) and each node gains at most one edge per permutation.
Lemma 2
For the construction above, each right node has at least one edge.
This follows since each permutation assigns each right node to a left node.
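The construction referenced by the two lemmas, taking the union of d uniformly random permutations of [n] as the edge set, can be sketched as follows (the code is ours and checks only the two lemmas, not the expansion property itself):

```python
import random

# A sketch of the construction referenced by the lemmas: take d uniformly
# random permutations pi_1, ..., pi_d of [n] and connect left node i to right
# node pi_k(i) for every k. Each permutation adds at most one edge per node,
# so every degree is at most d (Lemma 1), and each permutation assigns every
# right node a left neighbour, so each right node has an edge (Lemma 2).

def build_graph(n, d, rng=random):
    edges = set()
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        for left, right in enumerate(perm):
            edges.add((left, right))
    return edges

n, d = 100, 5
edges = build_graph(n, d)
left_deg = [sum(1 for (l, r) in edges if l == i) for i in range(n)]
right_deg = [sum(1 for (l, r) in edges if r == i) for i in range(n)]
assert max(left_deg) <= d and max(right_deg) <= d
assert min(right_deg) >= 1
```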
Authentic Secret Sharing. Let \(\mathbb {F}\) be a finite field and n be an integer. A secret sharing scheme consists of two algorithms \({\mathsf {{share}}}\) and \({\mathsf {{rec}}}\). For every \(s \in \mathbb {F}\), \({\mathsf {{share}}}(s)\) outputs a randomized set of shares \((s_1, \ldots , s_n)\). We use \({\mathsf {{share}}}(s)\) to denote the distribution on \((s_1, \ldots , s_n)\) when the input is s. The algorithm \({\mathsf {{rec}}}\) takes as input \((s_1', \ldots , s_n')\) and gives an output in \(\mathbb {F}\cup \{ \bot \}\) where \(\bot \) signals error.
For any \(i \in [n]\) we let \((s_1, \ldots , s_n)_{-i} = (s_1, \ldots , s_{i-1}, s_{i+1}, \ldots , s_n)\). For any \((s_1, \ldots , s_n)_{-i}\) and any s we let \(((s_1, \ldots , s_n)_{-i}, s) = (s_1, \ldots , s_{i-1}, s, s_{i+1}, \ldots , s_n)\). For all i, all \(s \in \mathbb {F}\) and all unbounded adversaries A taking as input \((s_1, \ldots , s_n)_{-i}\) and giving output \((s_1', \ldots , s_n')_{-i}\), consider the game where we sample \((s_1, \ldots , s_n) \leftarrow {\mathsf {{share}}}(s)\) and compute \(s' = {\mathsf {{rec}}}(A((s_1, \ldots , s_n)_{-i}), s_i)\). Note that it might be the case that \(s' = \bot \). We use \(A_{-i}(s)\) to denote the distribution of \(s'\), i.e., the result of reconstructing with the \(n-1\) possibly modified shares. Let \(\delta (\bot ) = \bot \) and \(\delta (x) = \top \) for \(x \ne \bot \). We use \(\hat{A}_{-i}(s)\) to denote \((\delta (s'), (s_1, \ldots , s_n)_{-i})\), i.e., the shares seen by the adversary plus the information whether reconstructing with the wrong shares gave an error or not.
Definition 4
(authentic secret sharing). Let \(({\mathsf {{share}}}, {\mathsf {{rec}}})\) be a secret sharing scheme. We call \(({\mathsf {{share}}}, {\mathsf {{rec}}})\) an authentic secret sharing scheme if the following conditions hold.

Reconstruction. For all \(s \in \mathbb {F}\) it holds that \(\Pr [{\mathsf {{rec}}}({\mathsf {{share}}}(s))=s]=1\).

Sound. For all \(s \in \mathbb {F}\), all \(i \in [n]\) and all unbounded adversaries A it holds that \(\Pr [A_{-i}(s) \in \{ s, \bot \}]=1\).

Privacy. For all \(s, \bar{s} \in \mathbb {F}\), all \(i \in [n]\) and all unbounded adversaries A it holds that \(\hat{A}_{-i}(s)\) and \(\hat{A}_{-i}(\bar{s})\) are statistically close.
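To make the definition concrete, here is a toy scheme (ours, not the one used later in the paper): additive sharing over a prime field, where every share additionally carries, for each other share, an information-theoretic MAC tag on that share's additive piece under a key stored locally. An honest party's unknown keys then catch tampering of the remaining shares.

```python
import random

# Toy authentic secret sharing (ours, illustrative only): additive shares over
# a prime field; share i holds a MAC key for every other share j, and share j
# carries the corresponding tag on its additive piece. An adversary holding
# all shares except i cannot re-tag a modified piece consistently with i's
# (uniformly random, unseen) keys, except with probability 1/P.

P = 2**61 - 1  # prime field (illustrative)

def mac(key, msg):
    x, y = key
    return (x * msg + y) % P

def share(s, n, rng=random):
    pieces = [rng.randrange(P) for _ in range(n - 1)]
    pieces.append((s - sum(pieces)) % P)
    keys = [[None if i == j else (rng.randrange(P), rng.randrange(P))
             for j in range(n)] for i in range(n)]
    # Share j carries its additive piece, its own row of keys, and tags on its
    # piece under every other party's key.
    return [{"piece": pieces[j],
             "keys": keys[j],
             "tags": [None if i == j else mac(keys[i][j], pieces[j])
                      for i in range(n)]}
            for j in range(n)]

def rec(shares):
    n = len(shares)
    for i in range(n):
        for j in range(n):
            if i != j and mac(shares[i]["keys"][j],
                              shares[j]["piece"]) != shares[j]["tags"][i]:
                return None  # the error symbol (bot)
    return sum(sh["piece"] for sh in shares) % P
```

Reconstruction returns the secret on honest shares, and tampering with any piece trips a check under some honest key with overwhelming probability, matching the soundness condition above.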
Authenticated One-Time Encryption. An Authenticated One-Time Encryption scheme is given by a key space, an encryption algorithm and a decryption algorithm \((\mathcal {K}, {\mathsf {{Enc}}}, {\mathsf {{Dec}}})\). For each message length m and value \(\lambda \) of the security parameter we have a key space \(\mathcal {K}_{m,\lambda }\). Given \(K \in \mathcal {K}_{m,\lambda }\), \(\lambda \) and message \(x \in \{0,1\}^m\) the encryption algorithm outputs a ciphertext \(A = {\mathsf {{Enc}}}_{K,\lambda }(x)\). Given \(K \in \mathcal {K}_{m,\lambda }\), \(\lambda \), m and ciphertext A the decryption algorithm outputs message \(x = {\mathsf {{Dec}}}_{K,\lambda ,m}(A)\).

Correctness. For all m and all \(x \in \{0,1\}^m\) it holds with probability 1 for a random key \({K \leftarrow \mathcal {K}_{m,\lambda }}\) that \({\mathsf {{Dec}}}_{K,\lambda ,m}({\mathsf {{Enc}}}_{K,\lambda }(x))=x\).

Security. Let \(\mathcal {A}\) be a computationally unbounded algorithm. Input \(\lambda \) to \(\mathcal {A}\) and run it to get m and \(x_0, x_1 \in \{0,1\}^m\). Sample a uniformly random bit \(b \leftarrow \{0,1\}\). Sample \(K \leftarrow \mathcal {K}_{m,\lambda }\) and \(A \leftarrow {\mathsf {{Enc}}}_{K,\lambda }(x_b)\). Let \(\mathcal {O}_{m,K,A}\) be the oracle which on input \(B \ne A\) returns \({\mathsf {{Dec}}}_{K,\lambda ,m}(B)\). Compute \(g \leftarrow \mathcal {A}^{\mathcal {O}_{m,K,A}}(A)\) for \(g \in \{0,1\}\). The advantage of \(\mathcal {A}\) is given by \(\mathsf {Adv}_{\mathcal {A}}(\lambda ) = \vert \Pr [g = b] - \frac{1}{2}\vert \). We say that \((\mathcal {K}, {\mathsf {{Enc}}}, {\mathsf {{Dec}}})\) is secure if \(\mathsf {Adv}_{\mathcal {A}}(\lambda ) \in \mathsf {negl}(\lambda )\) for all \(\mathcal {A}\) which make at most a polynomial number of queries to their oracle.

Authenticity. Let \(\mathcal {A}\) be a computationally unbounded algorithm. Input \(\lambda \) to \(\mathcal {A}\) and run it to get m. Sample \(K \leftarrow \mathcal {K}_{m,\lambda }\). Let \(\mathcal {O}_{m,K}\) be the oracle which on input B returns \({\mathsf {{Dec}}}_{K,\lambda ,m}(B)\). Compute \(c \leftarrow \mathcal {A}^{\mathcal {O}_{m,K}}()\). We say that \((\mathcal {K}, {\mathsf {{Enc}}}, {\mathsf {{Dec}}})\) has authenticity if \(\Pr [{\mathsf {{Dec}}}_{K,\lambda ,m}(c) \ne \bot ] \in \mathsf {negl}(\lambda )\) for all \(\mathcal {A}\) which make at most a polynomial number of queries to their oracle.
We say that an Authenticated One-Time Encryption scheme has overhead O(1) if it holds for all messages x and all \(K \in \mathcal {K}_{\vert x \vert ,\lambda }\) that \(\vert K \vert + \vert {\mathsf {{Enc}}}_K(x) \vert \in O(\vert x \vert + {\text {poly}}(\lambda ))\) for a polynomial independent of \(\vert x \vert \), and if we can encrypt and decrypt in time \(O(\vert K \vert + \vert {\mathsf {{Enc}}}_K(x) \vert + \vert x \vert )\). This means that for large enough x we have that \(\vert K \vert + \vert {\mathsf {{Enc}}}_K(x) \vert \in O(\vert x \vert )\) and that we can encrypt and decrypt in time \(O(\vert x \vert )\). We can construct such a scheme off-the-shelf. Let \(\mathsf {MAC}\) be an information-theoretic MAC which can handle messages of length \(O(\lambda )\) using keys of length \(O(\lambda )\) and which can be computed in time \({\text {poly}}(\lambda )\). Such a scheme is described for instance in [WC81]. Let \(\mathcal {H}\) be a family of almost-universal hash functions which can be computed in linear time, see for instance [IKOS07]. For messages of length m, the key for the encryption scheme consists of (L, H, P), where L is a random key for the MAC, \(H \leftarrow \mathcal {H}\) is a random hash function from the family and P is uniformly random in \(\{0,1\}^m\). To encrypt, compute \(C = x \oplus P\), \(M = H(C)\) and \(A = \mathsf {MAC}_L(M)\) and send (C, A). To decrypt (C, A), if \(\vert C \vert \ne m\), output \(\bot \). Otherwise, compute \(M = H(C)\) and \(A' = \mathsf {MAC}_L(M)\). If \(A' \ne A\), output \(\bot \). Otherwise, output \(C \oplus P\). The complexity is as claimed and the security follows from [WC81].
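As an illustration, the following Python sketch follows the pad-then-hash-then-MAC construction just described, with a simple polynomial hash standing in for a linear-time almost-universal family and the one-time MAC \(\mathsf{MAC}_{(\alpha ,\beta )}(M) = \alpha M + \beta \) over a prime field. All concrete choices (the modulus, the 8-byte block size) are illustrative and not taken from [WC81] or [IKOS07].

```python
import secrets

P = 2**127 - 1  # prime modulus for the hash/MAC field

def keygen(m_bytes):
    # key = (L, H, P) as in the text: one-time MAC key L, hash seed H, pad
    L = (secrets.randbelow(P), secrets.randbelow(P))
    H = secrets.randbelow(P)
    pad = secrets.token_bytes(m_bytes)
    return L, H, pad

def _hash(seed, data):
    # polynomial ("almost-universal") hash over 8-byte blocks
    h = 0
    for i in range(0, len(data), 8):
        block = int.from_bytes(data[i:i+8], "big")
        h = (h * seed + block + 1) % P   # +1 so trailing zero blocks matter
    return h

def enc(key, x):
    L, H, pad = key
    c = bytes(a ^ b for a, b in zip(x, pad))   # one-time pad: C = x XOR P
    alpha, beta = L
    tag = (alpha * _hash(H, c) + beta) % P     # A = MAC_L(H(C))
    return c, tag

def dec(key, ct, m_bytes):
    L, H, pad = key
    c, tag = ct
    if len(c) != m_bytes:
        return None                            # wrong length: output bot
    alpha, beta = L
    if (alpha * _hash(H, c) + beta) % P != tag:
        return None                            # authenticity failure: bot
    return bytes(a ^ b for a, b in zip(c, pad))
```

Key and ciphertext are both O(|x|) bits plus O(1) field elements, matching the overhead-O(1) requirement above.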
6.2 The Combined Protocol
We will present our result in a hybrid model with ideal functionalities for the inner protocol. We call this the inner-hybrid model. For each \(j = 1, \ldots , m\), we are going to have an ideal functionality \(\mathcal {F}_j\) of the form given in Fig. 5 with \(c = \vert \mathcal {C}_j \vert \) and the parties being \(\mathcal {C}_j\). We call \(\mathcal {F}_j\) virtual server j and we specify the behaviour of \(\mathcal {F}_j\) later.
We set up some notation. Let \(\mathcal {P}= \{ \mathsf {P}_1, \ldots , \mathsf {P}_n \}\). At any point in the execution, \(\mathcal {P}_\textsc {honest}\subset \mathcal {P}\) denotes the set of parties which are honest in the combined protocol and we let \(\mathcal {P}_\textsc {mal}\) be the set of maliciously corrupted parties.
By the way we have set the parameters of the threshold expander graph it follows that if \(\vert \mathcal {P}_\textsc {mal}\vert \le n \cdot \mathsf {t}_{\textsc {mal},\textsc {comb}}\) then \({\vert \mathcal {S}_\textsc {mal}\vert \le m \cdot \mathsf {t}_{\textsc {mal},\textsc {out}}}\). We have therefore almost perfectly emulated the entities \(\mathsf {U}_1, \ldots , \mathsf {U}_n, \mathsf {S}_1, \ldots , \mathsf {S}_m\) of the outer protocol with the needed adversary structure as long as \({\vert \mathcal {P}_\textsc {mal}\vert \le n \cdot \mathsf {t}_{\textsc {mal},\textsc {comb}}}\). The only significant difference between \(\mathsf {U}_1, \ldots , \mathsf {U}_n, \mathsf {S}_1, \ldots , \mathsf {S}_m\) and \(\mathsf {P}_1, \ldots , \mathsf {P}_n\), \(\mathcal {F}_1, \ldots , \mathcal {F}_m\) is the fact that in the outer protocol, the entities \(\mathsf {U}_1, \ldots , \mathsf {U}_n, \mathsf {S}_1, \ldots , \mathsf {S}_m\) can send private messages to each other, whereas most of the entities \(\mathsf {P}_1, \ldots , \mathsf {P}_n\), \(\mathcal {F}_1, \ldots , \mathcal {F}_m\) cannot send private messages to each other. This is going to give us the so-called secret communication problems when we emulate the protocol in Fig. 3.

Distribution of Input Shares. To give inputs, each \(\mathsf {U}_i\) sends a share to \(\mathsf {S}_j\). In the emulated outer protocol this corresponds to \(\mathsf {P}_i\) inputting a message to \(\mathcal {F}_j\). If \(\mathsf {P}_i \not \in \mathcal {C}_j\), this is not allowed.

Server Communication. As part of the evaluation, each \(\mathsf {S}_j\) sends a message to \(\mathsf {S}_k\). In the emulated outer protocol, this corresponds to \(\mathcal {F}_j\) sending a message to \(\mathcal {F}_k\). This is not allowed since ideal functionalities cannot communicate.

Distribution of Output Shares. To give outputs, \(\mathsf {S}_j\) sends a share to \(\mathsf {U}_i\). In the emulated outer protocol this corresponds to \(\mathcal {F}_j\) outputting a message to \(\mathsf {P}_i\). If \(\mathsf {P}_i \not \in \mathcal {C}_j\), this is not allowed.
We handle all three secret communication problems by letting the entities that need to communicate share a secret key which is used to encrypt the given message using an Authenticated One-Time Encryption scheme. Then the authenticated ciphertext c can be sent in cleartext between the two involved entities. This solves the problem as an ideal functionality \(\mathcal {F}_j\) for instance can output c to all members of \(\mathcal {C}_j\), who can then all send c to all the members of \(\mathcal {C}_k\), who will input it to \(\mathcal {F}_k\). Our way of solving the secret communication problems is significantly more complicated than the approach in [IPS08] and other player emulation protocols. The reason is that previous techniques incur an overhead of at least n. For instance, in [IPS08] each message is secret shared among all parties, which means that messages become a factor n longer. We need constant overhead. This is not an issue for [IPS08] as they consider n to be a constant. Also, the technique in [IPS08] does not guarantee termination if there is just one corrupted party.
To have servers and players share keys, we use a subprotocol. This introduces another problem: it is possible that every committee contains at least one corrupt player. When a player generates a key with a committee, the generation may fail, either because the player is corrupt or because the committee contains a dishonest member. It is imperative that a server with at least one honest member either gets the key from each honest user or aborts; otherwise, the corrupt parties could prevent honest parties from giving inputs. We employ player-elimination techniques [HMP00] to solve this problem.

Distribution of Input Shares. When \(\mathsf {U}_i\) needs to send a message m to \(\mathsf {S}_j\) and they are both honest, there will exist a random secret key \(K^i_j\) which is held by \(\mathsf {P}_i\) and which is inside \(\mathcal {F}_j\). Then \(\mathsf {P}_i\) computes \(c = {\mathsf {{Enc}}}_{K^i_j}(m)\) and sends it to each \(\mathsf {P}_k \in \mathcal {C}_j\). Then each \(\mathsf {P}_k\) inputs c to \(\mathcal {F}_j\). Let \(c_k\) denote the value input by \(\mathsf {P}_k\). Then the virtual server \(\mathcal {F}_j\) computes \(m_k = {\mathsf {{Dec}}}_{K^i_j}(c_k)\). If \(\vert \{ m_k \}_{k \in \mathcal {C}_j} \setminus \{ \bot \} \vert = 1\), then let m be the unique value in \(\{ m_k \}_{k \in \mathcal {C}_j} \setminus \{ \bot \}\). Otherwise, let \(m = \bot \). Notice that if \(\mathsf {P}_i\) is honest and there is at least one honest \(\mathsf {P}_k \in \mathcal {C}_j\), then the correct ciphertext will be input to the virtual server and therefore \(m \in \{ m_k \}_{k \in \mathcal {C}_j}\). Furthermore, no corrupted committee member can input another valid ciphertext. In particular, when the correct message is not received, either \(\mathsf {P}_i\) is corrupted or \(j \in \mathcal {S}_\textsc {mal}\).
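The virtual server's decision rule (keep the unique non-\(\bot \) decryption if exactly one distinct value survives, otherwise output \(\bot \)) can be sketched as a small Python helper; the function name is ours:

```python
def combine(decryptions):
    """Keep the unique non-bot (None) decryption; otherwise return bot.

    Mirrors the rule in the text: if |{m_k} \ {bot}| = 1, output that
    value, else output bot. Duplicates of the same valid message are fine,
    since honest committee members all forward the same ciphertext."""
    values = {m for m in decryptions if m is not None}
    return values.pop() if len(values) == 1 else None
```

The same rule is reused below for server-to-server and output-share delivery.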

Server Communication. When \(\mathsf {S}_i\) needs to send a message m to \(\mathsf {S}_j\) and they are both honest, there will exist a random secret key \(K_{i,j}\) which is inside \(\mathcal {F}_i\) and \(\mathcal {F}_j\). Then \(\mathcal {F}_i\) computes \(c = {\mathsf {{Enc}}}_{K_{i,j}}(m)\) and outputs it to all \(\mathsf {P}_k \in \mathcal {C}_i\). Then all \(\mathsf {P}_k \in \mathcal {C}_i\) send c to all \(\mathsf {P}_l \in \mathcal {C}_j\), who all input all the ciphertexts they received. The virtual server decrypts all ciphertexts and sets m to be the unique message different from \(\bot \) if it exists and \(\bot \) otherwise. If \(\mathcal {C}_i\) crashes, the message is also set to \(\bot \). Assume that \(\mathcal {C}_i\) is not crashed and \(\mathcal {C}_j\) is not corrupted. Then all the honest parties \(\mathsf {P}_k \in \mathcal {C}_i\) sent the correct ciphertext and no party knows the secret key, so the only valid ciphertext input to the virtual server is the correct one. Hence m arrives correctly. Assume then that \(\mathcal {C}_i\) is crashed and \(\mathcal {C}_j\) is not corrupt. Then the message is set to \(\bot \) as it should be.

Distribution of Output Shares. When \(\mathsf {S}_j\) needs to send a message m to \(\mathsf {U}_i\) and they are both honest, there will exist a random secret key \(K^i_j\) which is held by \(\mathsf {P}_i\) and which is inside \(\mathcal {F}_j\). Then \(\mathcal {F}_j\) computes \(c = {\mathsf {{Enc}}}_{K^i_j}(m)\) and outputs it to all parties \(\mathsf {P}_k \in \mathcal {C}_j\), who all forward it to \(\mathsf {P}_i\). The party \(\mathsf {P}_i\) decrypts all ciphertexts and sets m as above. Assume that \(\mathcal {C}_j\) is not crashed or corrupted and that \(\mathsf {P}_i\) is honest. Then all the honest parties \(\mathsf {P}_k \in \mathcal {C}_j\) sent the correct ciphertext and no party knows the secret key, so the only valid ciphertext sent to \(\mathsf {P}_i\) is the correct one. Hence m arrives correctly. Assume then that \(\mathcal {C}_j\) is crashed and that \(\mathsf {P}_i\) is honest. Then the message is set to \(\bot \) as it should be. Assume that \(\mathcal {C}_j\) is corrupted and that \(\mathsf {P}_i\) is honest. Then any message might arrive, but this is allowed as it corresponds to \(\mathsf {S}_j\) being corrupted. Similarly if \(\mathsf {P}_i\) is corrupted.
We describe how to handle the case when reconstruction fails. First \(\mathcal {F}_j\) will output to all parties in \(\mathcal {C}_j\) all the shares \(\{ \hat{K}_{i,k}^j \}_{k \in \mathcal {C}_j}\) that were input. If this does not crash \(\mathcal {C}_j\), then all parties \(\mathsf {P}_k \in \mathcal {C}_j\) will broadcast \(\{ \hat{K}_{i,k}^j \}_{k \in \mathcal {C}_j}\) to \(\mathcal {C}_j \cup \{\mathsf {P}_i \}\). If the parties \(\mathsf {P}_k \in \mathcal {C}_j\) do not all broadcast the same values, then all honest parties \(\mathsf {P}_k \in \mathcal {C}_j\) will crash \(\mathcal {C}_j\) by sending \(\textsc {crash}\) to all parties and \(\mathcal {F}_j\). Otherwise, \(\mathsf {P}_i\) will identify the indices k such that \(\hat{K}_{i,k}^j \ne K_{i,k}^j\) and will broadcast the indices to \(\mathcal {C}_j \cup \{\mathsf {P}_i \}\). If \(\mathsf {P}_i\) does not do so, then \(\mathsf {P}_i\) is corrupted and the parties in \(\mathcal {C}_j\) will ignore all future messages from \(\mathsf {P}_i\). If any parties were excluded, then the above procedure is repeated, but now \(\mathsf {P}_i\) secret shares only among the committee members that were not excluded. Notice that only corrupted parties are excluded. Therefore, if \(\mathcal {C}_j\) is not corrupted, the procedure will terminate before all committee members are excluded, at which point \(K^i_j\) has been added to \(\mathcal {F}_j\). If eventually all committee members are excluded, \(\mathsf {P}_i\) will consider \(\mathcal {C}_j\) corrupted (Fig. 7).
Generating Committee Keys. The basic idea behind generating \(K_{i,j}\) is to let \(\mathcal {F}_i\) generate \(K_{i,j}\), sample an authentic secret sharing \(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i} \leftarrow {\mathsf {{share}}}(K_{i,j})\) and output \(K_{i,j,k}\) to \(\mathsf {P}_k\). Then \(\mathsf {P}_k\) inputs \(K_{i,j,k}\) to \(\mathcal {F}_j\) using the method for when \(\mathsf {U}_k\) gives input to \(\mathcal {F}_j\). Recall that when \(\mathsf {U}_k\) gives input to \(\mathcal {F}_j\), it will succeed unless \(\mathcal {F}_j\) crashes or \(\mathcal {C}_j\) detects \(\mathsf {U}_k\) as being corrupted. If \(\mathcal {F}_j\) crashes there is no need to generate a key. If \(\mathcal {C}_j\) detects \(\mathsf {P}_k\) as corrupted, they all broadcast this to \(\mathcal {C}_i \cup \mathcal {C}_j\) and \(\mathsf {P}_k\). Notice that if this happens, then either \(\mathsf {P}_k\) is corrupted, and it is secure to exclude it, or \(\mathcal {C}_j\) is corrupt, which corresponds to \(\mathsf {S}_j\) being corrupt, and hence there is no reason to keep the key secret, so again it is secure to exclude \(\mathsf {P}_k\). Let \(\mathcal {C}_i' \subseteq \mathcal {C}_i\) be the parties that were not excluded. If any parties were excluded, then \(\mathcal {F}_i\) generates a new key \(K_{i,j}\), samples an authentic secret sharing \(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i'} \leftarrow {\mathsf {{share}}}(K_{i,j})\) and outputs \(K_{i,j,k}\) to \(\mathsf {P}_k\). The procedure is repeated until \(\mathcal {C}_i' = \emptyset \), \(\mathcal {C}_i\) crashed, \(\mathcal {C}_j\) crashed, or in some attempt all keys \(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i'}\) were successfully input to \(\mathcal {F}_j\). In the first three cases, either \(\mathcal {C}_i\) or \(\mathcal {C}_j\) is corrupted and there is no need for a key. In the last case, \(\mathcal {F}_j\) computes \(K_{i,j} \leftarrow {\mathsf {{rec}}}(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i'})\).
If \(K_{i,j} \ne \bot \), then the key is the same as the one generated by \(\mathcal {F}_i\) unless the security of the secret sharing scheme was broken. Assume then that \(K_{i,j} = \bot \). Since we are in a situation which might correspond to both \(\mathsf {S}_i\) and \(\mathsf {S}_j\) being honest (if for instance \(\mathcal {C}_i\) and \(\mathcal {C}_j\) are crashable but not crashed) we have to handle \(K_{i,j} = \bot \) by trying again. When \(K_{i,j} = \bot \), the virtual server \(\mathcal {F}_j\) will output this to \(\mathcal {C}_j\) along with \(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i'}\), and all parties in \(\mathcal {C}_j\) will broadcast the shares to \(\mathcal {C}_i \cup \mathcal {C}_j\). If they do not all broadcast the same value, then the honest parties in \(\mathcal {C}_j\) will crash \(\mathcal {C}_j\), which is safe as there must be a corrupted party in \(\mathcal {C}_j\). If they all broadcast the same value, denote this value by \(\{ K_{i,j,k} \}_{k \in \mathcal {C}_i'}\). Then all parties in \(\mathcal {C}_i\) will give these values to \(\mathcal {F}_i\). Again, if they do not all give the same values, then \(\mathcal {F}_i\) will crash. Otherwise \(\mathcal {F}_i\) will find the indices k for which the wrong shares arrived at \(\mathcal {F}_j\). This only happens if \(\mathsf {P}_k\) is corrupted, so it is safe to remove \(\mathsf {P}_k\) from the set of parties among which the secret sharing is done and try again. The code is given in Figs. 9 and 10.
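The retry-and-exclude loop for generating \(K_{i,j}\) can be sketched as follows. This is a plain simulation with hypothetical callback names (`share_fn`, `rec_fn`, `misbehaves`), not the actual protocol code of Figs. 9 and 10:

```python
import secrets

def generate_committee_key(committee, misbehaves, share_fn, rec_fn):
    """Sketch of the retry loop from the text (all names hypothetical):
    re-share a fresh key among the not-yet-excluded members until either
    every remaining member delivers a correct share or nobody is left."""
    active = list(committee)
    while active:
        key = secrets.randbelow(2**128)          # F_i samples a fresh key
        shares = share_fn(key, active)
        # None models a share that failed the authenticity check at F_j
        delivered = {p: (None if misbehaves(p) else shares[p]) for p in active}
        bad = [p for p in active if delivered[p] is None]
        if not bad:                               # every share arrived intact
            return rec_fn(delivered, active)      # F_j reconstructs K_{i,j}
        active = [p for p in active if p not in bad]  # exclude corrupt parties
    return None                                   # committee treated as corrupted
```

Since only detected-corrupt parties are ever excluded and the committee has constant size, the loop runs a constant number of times.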
Termination. To analyze termination, we use that each party is in at most \(d = O(1)\) committees and that \(n = O(m)\). Let \(\delta = m/(8nd)\). If fewer than \(\delta n\) parties are corrupted, at most \(d \delta n = m/8\) committees even contain a corrupted member. Therefore the total number of corrupted committees plus crashable committees is at most m/8. Since the outer protocol is secure (including the termination guarantee) against m/8 malicious corruptions, it follows that the combined protocol guarantees termination against \(\delta n\) malicious corruptions.
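The parameter choice can be checked with a line of arithmetic; the concrete values of n, m and d below are illustrative, not from the paper:

```python
# With each party in at most d committees and delta = m / (8*n*d),
# fewer than delta*n corruptions touch at most d*delta*n = m/8 committees.
n, m, d = 1024, 512, 8
delta = m / (8 * n * d)
touched = d * delta * n   # worst case: every corruption hits d distinct committees
assert touched == m / 8
```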
Complexity. We now address the complexity of the combined protocol when run in the inner-hybrid model. We count one computational step by some \(\mathcal {F}_j\) as 1 towards the complexity, and likewise one computational step by some \(\mathsf {P}_i\) as 1. We count the sending of a message x by some \(\mathsf {P}_i\) as \(\vert x \vert \) towards the complexity, and the broadcast of one bit to \(\log (n)\) parties as \({\text {polylog}}(n)\). Notice that throughout the protocol we only ever broadcast to sets of parties of constant size, as all committees have constant size. Let c denote the complexity of running the outer protocol as a plain protocol. We want to compute the complexity of running the combined protocol and show that it is of the form \(O(c \cdot {\text {polylog}}(n)) + {\text {poly}}(n,\lambda )\). This would show that the overhead of the outer protocol is \({\text {OH}}= {\text {polylog}}(n)\). The emulation of the computation of the outer protocol clearly introduces no overhead other than the abort handling, key generation and the encryption of messages. Crash handling sends at most \(O(n^2)\) messages of constant size, which can be swallowed by the \({\text {poly}}(n,\lambda )\) term. One attempt at generating a key of length k has complexity O(k), as secret sharing is done among a constant number of parties, secret sharing and reconstruction are linear, and we broadcast a constant number of messages per attempt. Since \(\mathcal {C}_i'\) initially has constant size, each failed attempt decreases the size of \(\mathcal {C}_i'\) by at least 1, and the procedure stops when \(\mathcal {C}_i' = \emptyset \), it follows that key generation has complexity O(k). By assumption the overall complexity of generating a key and sending a message is therefore \(O(k) + {\text {poly}}(\lambda )\).
There are on the order of \(2 n + m^2 \mathsf {R}_{\textsc {out}}\) messages, so the total complexity of sending the encrypted messages of total length M will be \(O(M) + (2n+m^2) \cdot {\text {poly}}(\lambda )\). The total length M of the messages is already counted as part of the complexity c and can therefore be swallowed by the \(O(c \cdot {\text {polylog}}(n))\) term. The remaining \((2n+m^2) \cdot {\text {poly}}(\lambda )\) can be swallowed by \({\text {poly}}(n,\lambda )\) as \(m = O(n)\).
Theorem 5
For all \(c \in [0,1)\) there exists a protocol \(\pi \) for the inner-hybrid model for all \(f \in \mathsf {NC}\) secure against malicious, adaptive corruption of up to cn parties and with termination guarantee against a nonzero constant fraction of corruptions with \({\text {OH}}= {\text {polylog}}(n) \cdot \log ({\text {size}}(f))\).
We can use the UC theorem to replace each \(\mathcal {F}_j\) by a suitable inner protocol. Using the fact that all \(\mathcal {C}_j\) have constant size along with Theorem 2, this gives a combined protocol for the model with OT and a broadcast between sets of parties of constant size with an overhead of \({\text {OH}}= {\text {polylog}}(n) \cdot \log ({\text {size}}(f))\).
When we want to tolerate that more than half the parties are corrupted, there is no way to implement the broadcast from scratch. We can, however, weaken the assumption on broadcast to only having access to \({\text {poly}}(n)\) broadcasts which are all performed prior to the protocol being run. They might even be performed prior to knowing f. The broadcasts can either be used to set up a public-key infrastructure and then rely on signatures, or to run the setup phase of the protocol from [PW92], which can then be used to implement an unbounded number of broadcasts in the online phase.
The protocol from [PW92] has information-theoretic security and complexity \({\text {poly}}(c) \cdot \lambda \) to broadcast one bit among c parties. Since we broadcast \({\text {poly}}(n,\lambda )\) bits among log-size sets, this will in total contribute a complexity of \({\text {poly}}(n,\lambda )\), which does not affect the overhead.
Corollary 1
For all \(c \in [0,1)\) there exists an informationtheoretically secure protocol \(\pi \) secure against adaptive, malicious corruption of up to cn parties and with termination guarantee against a nonzero constant fraction of corruptions for the hybrid model with initial broadcast and oblivious transfer for all \(f \in \mathsf {NC}\) with \({\text {OH}}= {\text {polylog}}(n) \cdot \log ({\text {size}}(f))\).
We can similarly use Theorem 3 to get a protocol for the CRS model and initial broadcast between log-size sets of parties. In this case we only get computational security, and we might therefore as well go for the weaker model where we assume a PKI instead of initial broadcasts. Given a PKI we can implement the broadcasts using for instance the protocol in [DS83].
Corollary 2
For all \(c \in [0,1)\) there exists a protocol \(\pi \) secure against adaptive, malicious corruption of up to cn parties and with termination guarantee against a nonzero constant fraction of corruptions for the (PKI, CRS)hybrid model for all \(f \in \mathsf {NC}\) with \({\text {OH}}_{\textsc {Arith}}= \lambda \cdot {\text {polylog}}(n) \cdot \log ({\text {size}}( f))\).
Footnotes
 1.
If we, for instance, start out with a setting where 99 out of every 100 parties are corrupted and we start forming random committees, we should of course expect all or essentially all committees to get a corrupted member.
Acknowledgments
This work is supported by European Research Council Starting Grant 279447. Samuel Ranellucci is supported by NSF grants #1564088 and #1563722. This work is partially supported by the H2020LEITICT project SODA, project number 731583. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
 [BCP15] Boyle, E., Chung, K.-M., Pass, R.: Large-scale secure computation: multi-party computation for (parallel) RAM programs. In: Gennaro, R., Robshaw, M. (eds.) CRYPTO 2015. LNCS, vol. 9216, pp. 742–762. Springer, Heidelberg (2015). doi:10.1007/978-3-662-48000-7_36
 [Bra87] Bracha, G.: An O(log n) expected rounds randomized Byzantine generals protocol. J. ACM 34(4), 910–920 (1987)
 [BSFO12] Ben-Sasson, E., Fehr, S., Ostrovsky, R.: Near-linear unconditionally-secure multiparty computation with a dishonest minority. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 663–680. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32009-5_39
 [CDD+15] Cramer, R., Damgård, I.B., Döttling, N., Fehr, S., Spini, G.: Linear secret sharing schemes from error correcting codes and universal hash functions. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 313–336. Springer, Heidelberg (2015). doi:10.1007/978-3-662-46803-6_11
 [CDI+13] Cohen, G., Damgård, I.B., Ishai, Y., Kölker, J., Miltersen, P.B., Raz, R., Rothblum, R.D.: Efficient multiparty protocols via log-depth threshold formulae. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8043, pp. 185–202. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40084-1_11
 [CPS14] Choudhury, A., Patra, A., Smart, N.P.: Reducing the overhead of MPC over a large population. In: Abdalla, M., Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 197–217. Springer, Heidelberg (2014). doi:10.1007/978-3-319-10879-7_12
 [DIK+08] Damgård, I., Ishai, Y., Krøigaard, M., Nielsen, J.B., Smith, A.: Scalable multiparty computation with nearly optimal work and resilience. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 241–261. Springer, Heidelberg (2008). doi:10.1007/978-3-540-85174-5_14
 [DIK10] Damgård, I., Ishai, Y., Krøigaard, M.: Perfectly secure multiparty computation and the computational overhead of cryptography. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 445–465. Springer, Heidelberg (2010). doi:10.1007/978-3-642-13190-5_23
 [DKL+13] Damgård, I., Keller, M., Larraia, E., Pastro, V., Scholl, P., Smart, N.P.: Practical covertly secure MPC for dishonest majority – or: breaking the SPDZ limits. In: Crampton, J., Jajodia, S., Mayes, K. (eds.) ESORICS 2013. LNCS, vol. 8134, pp. 1–18. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40203-6_1
 [DPSZ12] Damgård, I., Pastro, V., Smart, N., Zakarias, S.: Multiparty computation from somewhat homomorphic encryption. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 643–662. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32009-5_38
 [DS83] Dolev, D., Strong, H.R.: Authenticated algorithms for Byzantine agreement. SIAM J. Comput. 12(4), 656–666 (1983)
 [DZ13] Damgård, I., Zakarias, S.: Constant-overhead secure computation of boolean circuits using preprocessing. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 621–641. Springer, Heidelberg (2013). doi:10.1007/978-3-642-36594-2_35
 [GWZ09] Garay, J.A., Wichs, D., Zhou, H.-S.: Somewhat non-committing encryption and efficient adaptively secure oblivious transfer. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 505–523. Springer, Heidelberg (2009). doi:10.1007/978-3-642-03356-8_30
 [HM00] Hirt, M., Maurer, U.: Player simulation and general adversary structures in perfect multiparty computation. J. Cryptol. 13(1), 31–60 (2000)
 [HMP00] Hirt, M., Maurer, U., Przydatek, B.: Efficient secure multi-party computation. In: Okamoto, T. (ed.) ASIACRYPT 2000. LNCS, vol. 1976, pp. 143–161. Springer, Heidelberg (2000). doi:10.1007/3-540-44448-3_12
 [IKOS07] Ishai, Y., Kushilevitz, E., Ostrovsky, R., Sahai, A.: Zero-knowledge from secure multiparty computation. In: Johnson, D.S., Feige, U. (eds.) Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, 11–13 June, pp. 21–30. ACM (2007)
 [IKP+16] Ishai, Y., Kushilevitz, E., Prabhakaran, M., Sahai, A., Yu, C.-H.: Secure protocol transformations. In: Robshaw, M., Katz, J. (eds.) CRYPTO 2016. LNCS, vol. 9815, pp. 430–458. Springer, Heidelberg (2016). doi:10.1007/978-3-662-53008-5_15
 [IPS08] Ishai, Y., Prabhakaran, M., Sahai, A.: Founding cryptography on oblivious transfer – efficiently. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 572–591. Springer, Heidelberg (2008). doi:10.1007/978-3-540-85174-5_32
 [KMTZ13] Katz, J., Maurer, U., Tackmann, B., Zikas, V.: Universally composable synchronous computation. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 477–498. Springer, Heidelberg (2013). doi:10.1007/978-3-642-36594-2_27
 [PW92] Pfitzmann, B., Waidner, M.: Unconditional Byzantine agreement for any number of faulty processors. In: Finkel, A., Jantzen, M. (eds.) STACS 1992. LNCS, vol. 577, pp. 337–350. Springer, Heidelberg (1992). doi:10.1007/3-540-55210-3_195
 [SS96] Sipser, M., Spielman, D.A.: Expander codes. IEEE Trans. Inf. Theory 42(6), 1710–1722 (1996)
 [WC81] Wegman, M.N., Carter, L.: New hash functions and their use in authentication and set equality. J. Comput. Syst. Sci. 22(3), 265–279 (1981)