
1 Introduction

Secure multiparty computation allows a set of mutually distrustful parties to perform a computational task while guaranteeing that certain security properties hold. Examples of desirable security properties of a secure protocol are correctness, privacy, and fairness (roughly, the requirement that either all parties receive their respective outputs, or none do). When a strict majority of honest parties can be guaranteed, protocols for secure computation (see, e.g., [9, 19]) provide full security, i.e., they provide all the security properties mentioned above (and others), including fairness. When there is no honest majority, however, this is no longer the case, and full security (specifically, full fairness) is not achievable in general. As was shown by Cleve [14], this is already evident for the elementary (no input) task of coin-tossing.

The coin-tossing functionality, introduced by Blum [12], allows a set of parties to agree on a uniformly chosen bit. Cleve [14] showed that this functionality cannot be computed with complete fairness without a strict honest majority. He proved that for any r-round two-party coin-tossing protocol, there exists an (efficient) adversary that can bias the output of the honest party by \(\varOmega (1/r)\). Cleve’s impossibility naturally generalizes to the multiparty setting with no honest majority and has ramifications for general secure computation, implying that any function that implies coin-tossing (e.g., the XOR function) cannot be computed with full fairness without an honest majority. The question of optimal fairness for the coin-tossing functionality seems to be crucial towards understanding general secure and fair multiparty computation.

On the positive end, Averbuch et al. [6] and Cleve [14] showed how to compute the coin-tossing functionality with partial fairness, limiting the bias of any adversary to \(O\left( 1/\sqrt{r} \right) \). For over two decades, these constructions were believed to be optimal. This belief was supported by the work of Cleve and Impagliazzo [15], showing that in a model where commitments are available only as a black box (and no other assumptions are made), the bias of any coin-tossing protocol is \(\varOmega (1/\sqrt{r})\). In a breakthrough result, Moran, Naor, and Segev [30] showed that the \(\varOmega (1/r)\)-bias lowerbound of Cleve is tight for the case of two-party coin-tossing. They constructed an \(r\)-round two-party coin-tossing protocol with bias \(O(1/r)\). The protocol of Moran et al. follows the special-round paradigm, previously appearing in [22, 27].

Beimel, Omri, and Orlov [8] constructed (via the special-round paradigm) an optimal \(O(1/r)\)-bias protocol for any constant number of parties, whenever strictly less than a 2/3-fraction of the parties are malicious. More accurately, for their construction to yield an \(O(1/r)\) bound on the bias of their protocol, it suffices that the gap between the number of corrupted parties and the number of honest parties is constant (rather than the total number of parties).

Still, the question of whether optimal \(O(1/r)\)-bias coin-tossing is possible when the set of malicious parties may consist of two-thirds or more of the parties remained open. Specifically, even the case of three-party optimally-fair coin-tossing, where two of the parties may be corrupted, remained unsettled. Answering the question for the three-party case seemed to require new techniques and a novel understanding of coin-tossing protocols. In another breakthrough result, Haitner and Tsfadia [24] constructed an \(O\left( \log ^3(r)/r \right) \)-fair (almost optimal) three-party coin-tossing protocol. Their work, indeed, offers some profound insight into the difficulties of constructing coin-tossing protocols, and brings forth a combination of novel techniques for coping with these difficulties. However, while it may be tempting to expect that the solution for the three-party case (and, specifically, that of [24]) will soon lead to a solution for fair coin-tossing for any (constant) number of parties, this has not been the case so far.

1.1 Our Results

Our main contribution is a multiparty coin-tossing protocol that has small bias whenever the number of parties is constant and fewer than 3/4 of them are corrupted.

Theorem 1 (informal)

Assume that oblivious transfer protocols exist. Let \({m}\) and \({t}\) be constants (in the security parameter \({n}\)) such that \({m}/2 \le {t}< 3{m}/4\), and let \(r= r({n})\) be an integer. There exists an \(r\)-round \({m}\)-party coin-tossing protocol tolerating up to \({t}\) corrupted parties that has bias \(O({2^{2^{{m}}}}\log ^3(r)/r)\).

The formal statements and proofs implying Theorem 1 are given in Sect. 3; a warm-up construction illustrating the ideas behind the general construction is given in Sect. 1.4. The \(2^{2^m}\) factor in the upperbound on the bias of our construction is due to the fact that in each round, the adversary sees defense values for many corrupted subsets. For this reason, we require m to be constant.

1.2 Additional Related Work

Partially fair coin-tossing is an example of 1/p-secure computation. Informally, a protocol is 1/p-secure if it emulates the ideal functionality within 1/p distance. The formal definition of 1/p-secure computation appears in Sect. 2.3.1. 1/p-security with abort was suggested by Katz [27]. Gordon and Katz [21] defined 1/p-security and constructed two-party 1/p-secure protocols for every functionality where the size of either the domain or the range is polynomial (in the security parameter). Beimel et al. [7] studied multiparty 1/p-secure protocols for general functionalities. The main result of [7] is a construction of 1/p-secure protocols that are resilient against any number of corrupted parties, provided that the number of parties is constant and that the size of the range of the functionality is at most polynomial in the security parameter \({n}\). The bias of the coin-tossing protocol resulting from [7] is \(O(1/\sqrt{r})\).

The impossibility result of Cleve [14] made many researchers believe that no interesting functions can be computed with full fairness without an honest majority. A surprising result by Gordon et al. [22] showed that there are functions, even ones containing an embedded XOR, that can be computed with complete fairness. This led to a line of works investigating complete fairness in secure multiparty computation without an honest majority [2, 3, 29]. Recently, Asharov et al. [4] gave a full characterization of fair secure two-party computation of Boolean functions.

Coin-tossing is an interesting and useful task even in weaker models, e.g., secure-with-abort coin-tossing – where honest parties are not required to output a bit upon a premature abort by the adversary, and weak coin-tossing – where each party has an a priori desire for the output bit. Indeed, the latter type of coin-tossing was the one formulated by Blum [11], who suggested a fully secure weak (and actually, secure-with-abort) coin-tossing protocol based on the existence of one-way functions ([25, 31]). His protocol is also a 1/4-secure implementation of the fair coin-tossing functionality. Conversely, the existence of secure-with-abort protocols implies the existence of one-way functions [10, 23, 28]. As for the cryptographic complexity of optimally-fair coin-tossing, [16, 17] gave some evidence that one-way functions may not suffice.

1.3 Our Techniques

Towards explaining the ideas behind our protocol, we give a brief overview of the constructions of [8, 24, 30]. We restrict our discussion to the fail-stop model, where corrupted parties follow the prescribed protocol, unless choosing to prematurely abort at some point in the execution. Indeed, the core difficulties in constructing fair coin-tossing protocols arise in this model as well. Specifically, an r-round multiparty coin-tossing protocol in the fail-stop model can be adapted to the malicious setting by adding signatures to each message (or by applying the GMW compiler [19]).

1.3.1 The Protocol of Moran et al. [30].

The protocol of Moran, Naor, and Segev [30] is a two-party r-round coin-tossing protocol with optimal bias 1/4r. That is, their protocol matches the lowerbound of Cleve [14] (up to a factor of 2). The basic idea of the protocol is that in each round i, each of the parties is given an independently chosen uniform bit, which will be its output in case the other party aborts. This is done until some special round \(i^{*}\). From round \(i^{*}\) and on, both parties get the same bit c. Finally, \(i^{*}\) is chosen uniformly from [r] and is kept secret from the parties. The security of the protocol relies on the inability of the adversary to guess the value of \(i^{*}\) with probability higher than 1/r. We next give a slightly more detailed overview of the MNS protocol restricted to fail-stop adversaries.

A skeleton for two-party coin-tossing protocols. We start by describing the skeleton for the two-party protocol of [30]. Indeed, this is a more generic skeleton and can be used to describe any two-party coin-tossing protocol \(\left( \mathsf {A},\mathsf {B} \right) \).

The preliminary phase of the protocol. In this phase, the parties jointly compute defense values for each of the r rounds of interaction. Denote the defense value assigned to \(\mathsf {A}\) for round \(i\in [r]\) by \(a_i\) and the value assigned to \(\mathsf {B}\) for round i by \(b_i\) (in the MNS protocol, these defense values are actually bits). At the end of this preliminary phase, the parties do not learn these defense values, but rather hold a share in a 2-out-of-2 secret sharing scheme (separately, for each defense value). Denote by \(a_i[{\text {P}}]\) and \(b_i[{\text {P}}]\) the shares of \(a_i\) and \(b_i\) (respectively) held by party \({\text {P}}\).

Interaction rounds. In round i, party \(\mathsf {A}\) reveals \(b_i[\mathsf {A}]\) and party \(\mathsf {B}\) reveals \(a_i[\mathsf {B}]\). Specifically, in round i, party \(\mathsf {A}\) learns \(a_i\) and party \(\mathsf {B}\) learns \(b_i\). The role of these defense values is to define the output of an honest party, upon a premature abort of the other party. For example, if party \(\mathsf {A}\) aborts in round i (not allowing \(\mathsf {B}\) to learn \(b_i\)), then \(\mathsf {B}\) halts and outputs \(b_{i-1}\). If an abort never occurs, then parties output \(a_r = b_r\).

The MNS instantiation of the two-party skeleton. We now specify how the defense values are selected in the protocol of [30]. The parties jointly select a special round number \(i^{*}\in \left\{ 1, \ldots , r\right\} \), uniformly at random, and select bits \(a_1, \ldots ,a_{i^*-1}, b_1, \ldots ,b_{i^*-1}\), independently, uniformly at random. Then, they uniformly select a bit \({w}\in \left\{ 0,1\right\} \) and set \(a_i = b_i = {w}\) for all \(i^* \le i \le r\).

The security of the protocol follows from the fact that, unless the adversary aborts in round \(i^*\), it cannot bias the output of the protocol. This is true since, before round \(i^{*}\), the view of the adversary is independent of the prescribed output bit \({w}\); hence, given that the adversary aborts before round \(i^{*}\), the output of the honest party is a uniform bit. On the other hand, after round \(i^{*}\) is completed, the output of the honest party is fixed. Aborting in any round after \(i^{*}\) is therefore equivalent to never aborting at all; thus, given that the adversary aborts after round \(i^{*}\), the output of the honest party is again a uniform bit. Finally, the view of any of the parties up to round \(i \le i^{*}\) is independent of the value of \(i^{*}\); hence, any adversary corrupting a single party can guess \(i^{*}\) with probability at most \(1/r\).
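To make the special-round structure concrete, here is a minimal Python sketch of the defense-value selection and the honest party's fallback rule described above. This is our own illustrative code for the fail-stop setting only; all names are ours, and the secret sharing and commitment layers of the real protocol are omitted.

```python
import random

def mns_defenses(r):
    """Select the special round i* and the per-round defense bits, as in
    the MNS protocol sketched above (fail-stop setting)."""
    i_star = random.randint(1, r)   # special round, kept secret from both parties
    w = random.randint(0, 1)        # prescribed common output bit
    # Before round i*: independent uniform bits; from i* on: the common bit w.
    a = [random.randint(0, 1) if i < i_star else w for i in range(1, r + 1)]
    b = [random.randint(0, 1) if i < i_star else w for i in range(1, r + 1)]
    return i_star, a, b

def honest_output_b(b, abort_round=None):
    """Output of an honest B: b_{i-1} if A aborts in round i (a fresh uniform
    bit if i = 1, since no defense exists yet), and b_r if no abort occurs."""
    if abort_round is None:
        return b[-1]
    if abort_round == 1:
        return random.randint(0, 1)
    return b[abort_round - 2]
```

An adversary corrupting one party only learns whether \(i^{*}\) has passed when it sees two consecutive equal defense bits, which happens with probability 1/2 in every round anyway; this is the intuition behind the \(1/r\) guessing bound.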

1.3.2 The Protocols of Haitner and Tsfadia [24].

Haitner and Tsfadia [24] constructed a three-party r-round coin-tossing protocol with close to optimal bias \(O\left( \log ^3 r/r \right) \). Towards achieving this goal, Haitner and Tsfadia [24] first constructed several new two-party fair coin-tossing protocols with bias \(O\left( \log ^3 r/r \right) \). Evidently, the bias of these protocols does not match the Cleve [14] lowerbound (as the MNS protocol does); however, the techniques and insight introduced in these constructions make them interesting even before considering the final three-party construction, for which they serve as a building block. In fact, most of the techniques that enable the three-party construction of [24] come up already in their two-party protocols.

Before describing the protocols of [24], let us first highlight some of the ideas underlying them. We stress that none of their protocols follows the special-round paradigm. Instead, their protocols have the value of the game (i.e., the expected outcome in an honest continuation of the current state) gradually shift from being 1/2 (or some other \(\alpha \in [0,1]\), for that matter) to being either 0 or 1. This is done by having the parties run in the background – jointly and hidden from each of them – a protocol with a gradually shifting and publicly known game value (in this case, a weighted variant of the majority protocol of [6, 14]). Let \(O_i\) be the game value in round i.

One of the core observations underlying all the constructions of Haitner and Tsfadia [24] is that letting the defense value \(a_i\) be a bit sampled according to \(O_i\) fully protects \(\mathsf {A}\) in case of an abort by \(\mathsf {B}\) in round i. More importantly, if the gap between \(O_i\) and \(O_{i-1}\) is typically \(O\left( 1/\sqrt{r} \right) \), then \(a_i\) does not reveal too much information about the current value of \(O_i\) to \(\mathsf {A}\). Finally, Haitner and Tsfadia [24] show that \(a_i\) can be instantiated, not only as a bit, but also as a description of a full execution of a two-party protocol with output (and defense values) sampled according to \(O_i\) (where this form of \(a_i\) still does not reveal too much information about the current value of \(O_i\) to \(\mathsf {A}\)). Going from here to their construction of a three-party coin-tossing protocol is fairly natural.

We next describe the two-party protocols of Haitner and Tsfadia [24]. We do so using the skeleton for two-party protocols described in Sect. 1.3.1. That is, we explain how the defense values \(a_i, b_i\) for each round i are selected. We note that Haitner and Tsfadia [24] did not present their protocols in this exact manner, but rather divided each interaction round i into two steps. The first step is exactly the one described in the above skeleton, i.e., where \(\mathsf {A}\) learns \(a_i\) and \(\mathsf {B}\) learns \(b_i\). In the second step of round i, the parties reconstruct a value \(x_i\) that describes the expected value of the game \(O_i\). This extra step is not necessary for the correctness of the protocol, and omitting it does not affect the security analysis (since any attack on the protocol without \(x_i\) can also be mounted on the protocol that reveals \(x_i\)).

The basic two-party protocol of [24]. We now specify how the defense values are selected in the basic two-party protocol of [24] (parametrized by \(\alpha \in [0,1]\)), such that the common output bit is 1 with probability \(\alpha \). The basic idea is to sample \(O(r^2)\) bits (i.e., elements from \(\left\{ -1,1\right\} \)) i.i.d., such that the sum of all bits is positive with probability \(\alpha \). The prescribed output of the protocol is 1 if the sum of all bits is positive, and 0 otherwise. Towards revealing this output (gradually, in r rounds), let \(\delta _i\) be the value of the game, conditioned on the value of the first \(\sum _{k=r-i+1}^{r}k\) bits. Note that \(\delta _0 = \alpha \) and that in each round i, the value of \(\delta _i\) is computed conditioned on fewer and fewer new bits (i.e., bits that were not used to compute \(\delta _{i-1}\)). The defense value given to each of the parties in round i is simply a bit sampled according to \(\delta _i\).

Slightly more formally, let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \) be such that the sum of \(r(r+1)/2\) elements from \(\left\{ -1,1\right\} \) is positive with probability \(\alpha \), where each element is 1 with probability \(1/2+\varepsilon \). Let \(x_i\) be the sum of \(r-i+1\) elements from \(\left\{ -1,1\right\} \), where each element takes the value of 1 with probability \(1/2+\varepsilon \). Let \(\delta _i\) be the expected game value in round i, that is, \(\delta _i\) is the probability that the sum of \(\sum _{k=1}^{r-i}k\) elements from \(\left\{ -1,1\right\} \), is at least \(\sum _{k=1}^i x_k\). The bits \(a_i\) and \(b_i\) are independently sampled according to \(\delta _i\), i.e., \(a_i=1\) (and \(b_i=1\)) w.p. \(\delta _i\).
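The quantity \(\delta _i\) can be estimated directly from the description above. The following Python sketch (the names and the Monte Carlo approach are ours, not from [24]) reads the definition literally: it estimates the probability that the sum of the remaining \(\sum _{k=1}^{r-i}k\) elements is at least \(\sum _{k=1}^{i}x_k\).

```python
import random

def sample_pm1_sum(n, eps):
    """Sum of n i.i.d. {-1,1} elements, each equal to 1 w.p. 1/2 + eps."""
    return sum(1 if random.random() < 0.5 + eps else -1 for _ in range(n))

def estimate_delta(i, r, eps, xs, trials=10_000):
    """Monte Carlo estimate of delta_i: the probability that the sum of the
    remaining sum_{k=1}^{r-i} k elements is at least x_1 + ... + x_i."""
    remaining = sum(range(1, r - i + 1))  # elements not yet revealed
    target = sum(xs[:i])                  # partial sums x_1, ..., x_i
    hits = sum(sample_pm1_sum(remaining, eps) >= target for _ in range(trials))
    return hits / trials
```

For \(\varepsilon =0\) and \(i=0\) the estimate should be close to 1/2, matching \(\delta _0 = \alpha \) for an unbiased instance.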

For some intuition on the security of the protocol, consider the case where party \(\mathsf {A}\), wishing to bias the output of party \(\mathsf {B}\), receives a defense value \(a_i\) before party \(\mathsf {B}\) receives its defense value \(b_i\). If \(\mathsf {A}\) chooses to abort, then \(\mathsf {B}\) is instructed to output \(b_{i-1}\), which was sampled according to \(\delta _{i-1}\). Indeed, if \(\mathsf {A}\) could see \(\delta _i\) before deciding whether to abort or not, it could bias the output of \(\mathsf {B}\) by \(\varOmega (1/\sqrt{r})\). The crux of the analysis is to show that this is not the case when \(\mathsf {A}\) only receives a sample from \(\delta _i\). Towards this end, Haitner and Tsfadia [24] bound, in expectation, the gap between \(\delta _{i-1}\) and \(\widehat{\delta _{i-1}}\), defined to be the value of the game, conditioned on the value of the first \(\sum _{k=r-i+1}^{r}k\) bits and on the value of \(a_i\).

The three-party protocol of [24]. The construction of [24] for three parties follows a very similar rationale to the above protocol. That is, in each round i, every single party, as well as every pair of parties, obtains a defense value that should behave as a sample from \(\delta _i\). A pair of parties cannot simply be given a single bit, since one of them may be corrupt. Rather, they should be given a two-party protocol similar to the above, with their defenses set with parameter \(\alpha = \delta _i\). A problem arises here, since the simple application of the above idea would require giving the adversary information based on \(\varOmega (r^3)\) bits sampled according to the appropriate \(\varepsilon \) value. This would be devastating to the security of the protocol, as it would allow the adversary to learn \(\delta _i\). To tackle this problem, [24] came up with a derandomized version of the above two-party protocol. They were then able to show that sending the shares for this protocol as the defense values for pairs of parties does not reveal too much about \(\delta _i\) to the adversary. We next describe the derandomized two-party protocol of Haitner and Tsfadia [24].

The two-party derandomized protocol of [24]. We now specify how the defense values are selected in the derandomized version of the protocol of [24], such that the common output bit is 1 with probability \(\alpha \). Let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \) be such that the sum of \(r(r+1)/2\) elements from \(\left\{ -1,1\right\} \) is positive with probability \(\alpha \), where each element is 1 with probability \(1/2+\varepsilon \). For \(j\in \left\{ a,b\right\} \), let \(S^j\) be a set of size \(r(r+1)\), over \(\left\{ -1,1\right\} \), where each element takes the value of 1 with probability \(1/2+\varepsilon \). Let \(x_i\) be the sum of \(r-i+1\) elements from \(\left\{ -1,1\right\} \), where each element takes the value of 1 with probability \(1/2+\varepsilon \). Let \(\delta ^j_i\) be the expected game value in round i, according to the set \(S^j\), that is, \(\delta ^j_i\) is the probability that the sum of the elements in a randomly chosen subset of \(S^j\), of size \(\sum _{k=1}^{r-i}k\), is at least \(\sum _{k=1}^i x_k\). The bit \(a_i\) (respectively \(b_i\)) is sampled according to \(\delta ^a_i\) (respectively \(\delta ^b_i\)), i.e., \(a_i=1\) (respectively \(b_i=1\)) with probability \(\delta ^a_i\) (respectively \(\delta ^b_i\)).
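The derandomized variant replaces fresh i.i.d. elements by random subsets of the fixed multiset \(S^j\), so the relevant distribution is hypergeometric rather than binomial. Below is a small Monte Carlo sketch of this game value; the code and all names are our own illustration, not the paper's.

```python
import random

def make_pool(size, eps):
    """The multiset S^j: `size` elements of {-1,1}, each 1 w.p. 1/2 + eps."""
    return [1 if random.random() < 0.5 + eps else -1 for _ in range(size)]

def estimate_delta_pool(pool, subset_size, target, trials=10_000):
    """Probability that a uniformly random subset of `pool` of the given size
    sums to at least `target` (a hypergeometric-style event), by Monte Carlo."""
    hits = sum(sum(random.sample(pool, subset_size)) >= target
               for _ in range(trials))
    return hits / trials
```

Note that once the pool \(S^j\) is sampled, all randomness in \(\delta ^j_i\) comes from the choice of the subset; this is what limits how much the adversary learns from seeing shares derived from the pool.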

The security of the various constructions of [24] is proved via a series of bounds on weighted Binomial games. In Sect. 2, we recall these results, and in Sect. 3 we use them to prove the security of our construction.

1.3.3 Reducing Many-Party Coin-Tossing to Few-Party Coin-Tossing.

Reducing multiparty coin-tossing protocols for the setting without an honest majority to two-party protocols is quite straightforward. Indeed, the impossibility of [14] is generalized from the two-party setting to the multiparty setting via such a reduction. In this section, we show that sometimes the other direction is also possible.

The Protocol of Beimel et al. [8]. The protocol of Beimel, Omri, and Orlov [8] extends the results of [30] to the multiparty model, where fewer than 2/3 of the parties are corrupted. The bias of their protocol is proportional to \(1/r\) and doubly exponential in the gap between the number of corrupted parties t and the number of honest parties h in the protocol (\(m=h+t\)). In particular, for a constant number of parties m, where fewer than 2m/3 are corrupted, [8] present an \(r\)-round \({m}\)-party coin-tossing protocol with an optimal bias of \(O(1/r)\). Interestingly, their protocol has an \(O(1/r)\)-bias even when the number of parties \({m}\) is non-constant, as long as \(t-h\) is constant. In the following description, however, we present a simplified version of the protocol of [8], which requires t (rather than \(t-h\)) to be constant in order to achieve an \(O(1/r)\)-bias.

While not presented this way, the result of Beimel et al. [8] is achieved via a generic reduction to (a certain type of) two-party protocols. They use a few layers of secret sharing schemes to allow for each subset J of parties, containing an honest majority (i.e., \(h\le \left| J\right| <2h\), hence if all the parties outside of J abort the execution, then there is an honest majority in J) to obtain a defense value, i.e., a bit \(d^J_i\). For each round i and for each such J, the value of \(d^J_i\) is shared in an inner secret sharing scheme with threshold h-out-of-\(\left| J\right| \). The idea is that the shares of this inner secret sharing scheme (of \(d^J_i\)) should be revealed to the parties of J at round i of the execution. Namely, each party in J should get one of the (inner scheme) shares of \(d^J_i\) in round i.

To make sure that the above shares are not revealed to any subset before round i, and at the same time, that the execution of the protocol proceeds as long as the set of remaining active parties does not contain an honest majority, the shares (of the inner scheme) for round i are shared in an outer secret sharing scheme with threshold \((t+1)\)-out-of-m. As a result, the adversary can never learn anything about the shares of the i’th inner scheme without the help of honest parties. In addition, to halt the computation in round i, the adversary must instruct at least h parties to abort the computation.
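The outer scheme described above is an ordinary threshold secret sharing. A standard Shamir instantiation over a prime field can serve as a sketch (the field, function names, and parameters are our illustrative choices; the paper does not fix an implementation):

```python
import random

P = 2_147_483_647  # prime modulus for the field (illustrative choice)

def shamir_share(secret, threshold, n):
    """Shamir threshold sharing over GF(P): any `threshold` of the n shares
    reconstruct the secret; fewer shares reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {j: f(j) for j in range(1, n + 1)}

def shamir_reconstruct(shares):
    """Lagrange interpolation at 0 from a dict {party_id: share}."""
    secret = 0
    for j, y in shares.items():
        num, den = 1, 1
        for k in shares:
            if k != j:
                num = num * (-k) % P
                den = den * (j - k) % P
        secret = (secret + y * num * pow(den, -1, P)) % P
    return secret
```

For instance, with \(m=7\) and \(t=5\), the outer threshold is \(t+1=6\): any six active parties can reconstruct the round's shares, while the five corrupted parties alone learn nothing.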

Now, given a two-party protocol according to the above skeleton, with the additional property that \(a_i\) and \(b_i\) are sampled from the same distribution \(D_i\), and that it is possible to draw many such samples, the reduction is completed by selecting the defense values \(d^J_i\) from the distribution \(D_i\).

If the following extra property holds, then the resulting multiparty protocol is \(\alpha \)-fair as long as \(t<2m/3\). The extra property that we need to require is that if the adversary in the two-party protocol is given \(2^{2^m}\) defense values sampled from \(D_i\) (while the honest party gets a single one), it is not able to bias the two-party protocol by more than \(\alpha \).

1.3.4 Applying the Reduction of Beimel et al. to the Protocols of Haitner and Tsfadia.

In this work, we use secret sharing schemes, in a manner similar to [8], to reduce m-party coin-tossing with \(t< 3m/4\) malicious parties to the 3-party construction of [24]. We do so in two steps. First, we apply the above (simplified version of the) reduction of [8] to the (derandomized) two-party protocol of [24] to obtain an auxiliary \(\hat{m}\)-party coin-tossing protocol, tolerating \(\hat{t}<2\hat{m}/3\) corruptions. Then, we use the auxiliary protocol as a building block in the construction of the final m-party protocol that tolerates \(t<3m/4\) corruptions. More specifically, the auxiliary protocol, parametrized by some \(\varepsilon \in [0,1]\), is used to provide defense values for subsets of parties for the case that at least m/4 corrupted parties abort the execution of the final protocol.

We next give an overview of both constructions. In Sect. 1.4, we exemplify the constructions for the case that \(m=7\) and \(t=5\); in Sect. 1.4.1, we instantiate the auxiliary protocol for the case of five parties with up to three corruptions, and in Sect. 1.4.2, we use this construction to instantiate the final protocol for the case of seven parties with up to five corruptions. In the following, let \(\hat{h} = \hat{m} - \hat{t}\) and \({h} = {m} - {t}\) be lowerbounds on the number of honest parties in the respective protocols. In our discussion the auxiliary protocol will be used with \(\hat{m}\) being the number of active parties remaining after some corrupted parties have prematurely aborted the execution of the final m-party protocol. Specifically, we will have \(\hat{h} = h\), since honest parties never prematurely abort the computation.

Both the basic and the final protocols use two layers of (threshold) secret sharing schemes. For each round i and for each protected subset of parties J (we specify below which subsets are called protected for each construction), the defense value for the set J in round i is \(d_i^J\). This defense value is shared among the parties of J in an appropriate secret sharing scheme (actual parameters for each construction are specified below). This is called the inner secret sharing scheme. For each round i, all the shares of all parties in the inner secret sharing schemes for round i are shared in an \((\tilde{t}+1)\)-out-of-\(\tilde{m}\) threshold secret sharing scheme, where \(\tilde{m}\) and \(\tilde{t}\) are the number of parties and the bound on the number of corruptions in the respective construction. This is called the outer secret sharing scheme.

The idea behind the outer secret sharing scheme is to provide two guarantees. First, the adversary is never able to reconstruct the secrets without the participation of honest parties (which will only participate in the appropriate round). Second, the adversary is only able to prevent the reconstruction of the secret of the outer scheme (for round i) by instructing at least \(\tilde{h}= \tilde{m}-\tilde{t}\) corrupted parties to abort before completing the reconstruction. Hence, the protocol proceeds normally as long as more than \(\tilde{t}\) parties are active. We stress that the adversary is indeed able to instruct \(\tilde{h}\) parties to abort in the process of reconstructing the secret of the outer secret sharing scheme, thereby seeing all the shares of corrupted parties for round i while not allowing honest parties to see their shares of the inner scheme. Furthermore, since the adversary is rushing, it can actually decide whether to do so or not – after seeing the shares of all honest parties.

In addition to the above, assume that \(\tilde{t}< \frac{b\tilde{m}}{b+1}\) for some natural \(b>1\), and assume that at least \(\tilde{h}\) corrupted parties aborted (which is the case if the secret of the outer scheme cannot be reconstructed). Let J be the set of the remaining parties and let \(t_J\) be the number of corrupted parties in J. Since the number of honest parties in J remains the same as before, i.e., at least \(\tilde{h} > \frac{\tilde{m}}{b+1}\), it follows that \(t_J < |J| - \frac{\tilde{m}}{b+1}\). Since, by assumption, \(|J|\le \tilde{t}<\frac{b\tilde{m}}{b+1}\), it follows that \(t_J<|J| - \frac{\tilde{m}}{b+1}< \frac{(b-1)\cdot \left| J\right| }{b}\). Thus, if h parties abort the execution of the final construction, then fewer than 2/3 of the remaining parties are corrupted, and if \(\hat{h}\) parties abort the execution of the auxiliary construction, then most of the remaining parties are honest.
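The counting argument above is easy to verify mechanically for concrete parameters. The following sanity check is our own code; `m_t` and `t_t` stand for \(\tilde{m}\) and \(\tilde{t}\).

```python
from fractions import Fraction

def check_corruption_bound(m_t, t_t, b):
    """Check the counting argument above for concrete parameters: given
    t~ < b*m~/(b+1), after any number of corrupted aborts between
    h~ = m~ - t~ and t~, the corrupted fraction among the remaining
    active parties stays below (b-1)/b."""
    assert Fraction(t_t) < Fraction(b * m_t, b + 1)
    h = m_t - t_t
    for aborted in range(h, t_t + 1):
        active = m_t - aborted           # |J|
        corrupted_left = t_t - aborted   # t_J (worst case: all aborters corrupt)
        assert Fraction(corrupted_left, active) < Fraction(b - 1, b)
    return True
```

Here `check_corruption_bound(7, 5, 3)` corresponds to the final construction (\(t<3m/4\), corrupted fraction after aborts below 2/3) and `check_corruption_bound(5, 3, 2)` to the auxiliary one (fraction below 1/2, i.e., an honest majority).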

We now explain what protected subsets are and how the parameters for the inner secret sharing schemes are chosen for each of the two constructions. We begin with the final construction. Protected subsets of parties are subsets J that are assigned a defense value \(d_i^J\) in each round i. These should include all subsets that are liable to become the set of active parties after a premature abort by at least h parties. Since the number of aborting (corrupted) parties may be anything between h and t, we let the protected subsets be all subsets of parties J such that \(h\le \left| J\right| \le t\).

To determine the parameters for the inner secret sharing scheme, consider the case that \(a\ge h\) corrupted parties have aborted in round i; hence, the set of active parties J is of size \(m-a\). Let \(t_J\) be the number of corrupted parties in J; then \(t_J \le t-a\). Therefore, using a \((t-m+\left| J\right| +1)\)-out-of-\(\left| J\right| \) secret sharing scheme, we require at least \(t-a+1 = t-m+\left| J\right| +1\) parties of J for the reconstruction of \(d_i^J\). This ensures that the adversary was never able to reconstruct \(d_{i-1}^J\) (which is the defense value that the parties in J will use). Very similar reasoning is used for the auxiliary construction, where a subset of parties is protected if it is of any size between \(\hat{h}\) and \(2\hat{h}-1\), and the threshold of the inner secret sharing scheme is set to \(\hat{h}\)-out-of-\(\left| J\right| \).

It remains to specify the defense values \(d_i^J\), which are the secrets shared in the inner secret sharing schemes. Roughly speaking, these values are selected in the auxiliary and in the final constructions in a very similar manner to that of the derandomized two-party and the three-party protocols of [24] (respectively). In a bit more detail, in these protocols there is a value \(\delta _i\) representing the expected value of the game, and the defense values for all protected subsets describe a way to sample a bit according to \(\delta _i\).

In the final protocol, a defense value is an instantiation of the auxiliary protocol, such that the output bit is 1 with probability \(\delta _i\). To be more precise, \(d_i^J\) is the set of shares in the outer secret sharing of the instantiation of the auxiliary protocol to be executed by the parties of J, in case all other parties abort the computation. The exact same information can also be encapsulated into a set of \(O(r^2)\) elements from \(\{-1,1\}\), each taking the value 1 with probability \(1/2+\varepsilon \), where \(\varepsilon =\varepsilon (\delta _i)\in \left[ -\frac{1}{2},\frac{1}{2}\right] \) is such that the sum of \(r(r+1)/2\) elements from \(\left\{ -1,1\right\} \) is positive with probability \(\delta _i\), whenever each element is 1 with probability \(1/2+\varepsilon \). Indeed, this fact will allow us to use the vector game lemma of [24] (see Lemma 2) to bound the bias that the adversary can inflict by seeing the defense values of all corrupted protected sets. The proof of security of the final protocol is obtained by combining the above bound with a bound on the bias of the auxiliary protocol.

We now specify how the defense values are selected in the auxiliary protocol. Let J be a protected subset of parties; the parties of J jointly hold a set \(S^J\) of size \(r(r+1)\), over \(\left\{ -1,1\right\} \), where each element takes the value of 1 with probability \(1/2+\varepsilon \). Recall that \(x_i\) is the sum of \(r-i+1\) elements from \(\left\{ -1,1\right\} \), where each element takes the value of 1 with probability \(1/2+\varepsilon \). Let \(\delta ^J_i\) be the expected game value in round i, according to the set \(S^J\); that is, \(\delta ^J_i\) is the probability that the sum of the elements in a randomly chosen subset of \(S^J\), of size \(\sum _{k=1}^{r-i}k\), is at least \(\sum _{k=1}^i x_k\). The bit \(b^J_i\) is sampled according to \(\delta ^J_i\), i.e., \(b^J_i=1\) with probability \(\delta ^J_i\). To prove the security of this protocol, we introduce an extended version of the hypergeometric game (Lemma 3), presented in [24]. More specifically, we show that even when the adversary sees a (constant) number of independent samples, each from a different set, it cannot bias the output by much.

1.4 A Warm-Up Construction – A Seven-Party Protocol Tolerating up to Five Corrupted Parties

Following the overview of our constructions, given in Sect. 1.3.4, in this section we show how to instantiate our final construction for the case of 7 parties, where at most 5 are corrupted. In Sect. 1.4.1, we instantiate the auxiliary protocol for 5 parties with at most 3 corruptions, and in Sect. 1.4.2 we use it to instantiate the final protocol for 7 parties with at most 5 corruptions. In the following, let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \), and for \(i\in \left\{ 0,\ldots ,r\right\} \) let \(s_i=\sum _{k=1}^{r-i} k\).

1.4.1 A Five-Party Protocol Tolerating up to Three Corrupted Parties.

We now describe the algorithm \(HG (\varepsilon ,5,3)\), generating shares for 5 parties, at most 3 of which are corrupted. This is a specific instantiation of the more general functionality described in Algorithm 5. Let \(\mathcal {B}in_{n,\varepsilon }\) denote the binomial distribution over \(\left\{ -1,1\right\} \) (i.e., the distribution of a sum of n samples from \(\left\{ -1,1\right\} \), each taking the value of 1 with probability \(\frac{1}{2}+\varepsilon \)).

Selecting defenses:

1.

    For every \(J\subset [5]\) of size 3, let \(S^J\) be a set with \(2s_0\) elements from \(\left\{ -1,1\right\} \), each taking the value of 1 with probability \(\frac{1}{2}+\varepsilon \).

2.

    For every \(i\in [r]\) let \(\hat{x}_i\leftarrow \mathcal {B}in_{r-i+1,\varepsilon }\).

3.

    For every \(i\in \left\{ 0,\ldots ,r\right\} \) and every \(J\subseteq [5]\) of size 3:

(a)

      Let \(A_i^{J}\) be a random subset of \(S^J\) of size \(s_i\).

(b)

      Let \(\hat{d}^J_i\) be 1 if \(\sum \limits _{k=1}^i \hat{x}_k+\sum \limits _{a\in A_i^J}a\ge 0\), and 0 otherwise.
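The defense-selection steps above can be sketched in code as follows (a minimal Python sketch under our own naming, e.g., `hg_defenses`; the actual functionality is the one described in Algorithm 5):

```python
import random
from itertools import combinations

def bin_sample(n, eps, rng):
    """Sum of n values in {-1,1}, each equal to 1 with probability 1/2+eps."""
    return sum(1 if rng.random() < 0.5 + eps else -1 for _ in range(n))

def hg_defenses(eps, r, rng=random):
    """Defense bits hat{d}_i^J for each 3-subset J of [5], as in steps 1-3."""
    s = lambda i: sum(range(1, r - i + 1))          # s_i = sum_{k=1}^{r-i} k
    sets = {J: [1 if rng.random() < 0.5 + eps else -1
                for _ in range(2 * s(0))]
            for J in combinations(range(1, 6), 3)}  # step 1: the sets S^J
    x = [bin_sample(r - i + 1, eps, rng) for i in range(1, r + 1)]  # step 2
    d = {}
    for i in range(r + 1):                          # step 3
        for J, SJ in sets.items():
            A = rng.sample(SJ, s(i))                # (a): random subset A_i^J
            d[(i, J)] = 1 if sum(x[:i]) + sum(A) >= 0 else 0  # (b)
    return d
```

Note that for \(i=r\) the subset \(A_r^J\) is empty (\(s_r=0\)), so the last defense bit depends only on \(\hat{x}_1,\ldots ,\hat{x}_r\).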

Sharing the values:

  • For every \(i\in \left\{ 0,1,\ldots ,r\right\} \), \(J\subset [5]\) of size 3, and \(j\in J\), let \(d_i^J[j]\) be the share of party \(P_j\) of the secret \(d_i^J\), in a 2-out-of-3 secret sharing.

  • For every \(i\in [r]\), \(J\subset [5]\) of size 3, and for every \(j'\in J\), let \(d_i^J[j',j]\) be the share of party \(P_j\) of the secret \(d_i^J[j']\), in a 4-out-of-5 secret sharing, such that party \(P_{j'}\) is required in order to recover \(d_i^J[j']\) (See Construction 4).

Interaction rounds. The interaction of the parties proceeds in r rounds. In round \(i\in [r]\), party \(P_j\) broadcasts \(d_i^J[j',j]\), for every \(J\subset [5]\) of size 3, and for every \(j'\in J\).

If a single party aborts the execution, then the remaining 4 parties can continue with the protocol. If two or three parties abort the execution, then the remaining parties reconstruct \(d^J_{i'}\), where J is the lexicographically first set of size 3 that contains all the indices of the active parties, and \(i'\) is the maximum i for which the parties have enough shares to reconstruct. The honest parties output that bit.

If after r rounds, there are at least 4 active parties, then the parties reconstruct the last joint defense for the lexicographically first subset of them, and the honest parties output that bit.

Security. By the properties of the two layers of secret sharing, in each round the adversary learns a constant number of defense values, which are sampled according to the appropriate Hypergeometric distribution. Roughly speaking, the security of the above protocol is reduced to an extended version of the Hypergeometric game considered by [24], with a constant number of samples. The proof of security of the general construction, as well as the proof of the bound for the extended Hypergeometric game, are given in the full version of the paper [1].

1.4.2 The Seven-Party Protocol.

We are now ready to describe our 7-party protocol. We first describe the share generator. Given \(x_1,\ldots ,x_i\), for some \(i\in [r]\), we let \(\delta _i(x_1,\ldots ,x_i)\) be the probability that the sum of \(s_{i}\) uniform \(\left\{ -1,1\right\} \) bits is at least \(-\sum _{k=1}^i x_k\). We call \(\delta _i\) the expected outcome of the protocol in round i. In the following we let \(\mathcal {B}in_{n}:=\mathcal {B}in_{n,0}\).

Selecting defenses:

1.

    For every \(i\in [r]\), let \(x_i\leftarrow \mathcal {B}in_{r-i+1}\).

2.

    Let \(\varepsilon _i\in \left[ -\frac{1}{2},\frac{1}{2}\right] \) be such that the expected outcome of an honest execution, with parameter \(\varepsilon =\varepsilon _i\), of the 5-party protocol from Sect. 1.4.1 is \(\delta _i(x_1,\ldots ,x_i)\).

3.

    For every \(J\subset [7]\), such that \(4\le |J|\le 5\), let \(d_i^J\leftarrow HG (\varepsilon _i,|J|,|J|-2)\).

4.

    For every \(J\subset [7]\), such that \(2\le |J|\le 3\), let \(d_i^J\) be a bit that equals 1 with probability \(\delta _i(x_1,\ldots ,x_i)\).
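The quantity \(\delta _i(x_1,\ldots ,x_i)\) used in steps 2 and 4 is an exact binomial tail and can be computed directly; the following Python sketch (function names are ours) illustrates this. Calibrating \(\varepsilon _i\) in step 2 can then be done numerically, e.g., by a binary search over \(\varepsilon \), which we omit here.

```python
from math import comb

def bin_tail(n, k):
    """Pr[ sum of n uniform {-1,1} values >= k ], computed exactly.
    The sum equals 2*ones - n, so sum >= k iff ones >= ceil((n+k)/2)."""
    thresh = max(0, -(-(n + k) // 2))   # ceil((n+k)/2), floored at 0
    return sum(comb(n, j) for j in range(min(thresh, n + 1), n + 1)) / 2 ** n

def delta_i(x, i, r):
    """Expected outcome delta_i(x_1..x_i): the probability that the sum of
    s_i uniform {-1,1} bits is at least -sum_{k<=i} x_k."""
    s_i = sum(range(1, r - i + 1))      # s_i = sum_{k=1}^{r-i} k
    return bin_tail(s_i, -sum(x[:i]))
```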

Sharing the values: 

  • For every \(i\in [r]\) and \(J\subset [7]\), such that \(4\le |J|\le 5\), let \(d_i^J[j]\) be the share of party \(P_j\) of the secret \(d_i^J\), in a \((|J|-1)\)-out-of-|J| secret sharing.

  • For every \(i\in [r]\), \(J\subset [7]\), such that \(4\le |J|\le 5\), and for every \(j'\in J\), let \(d_i^J[j',j]\) be the share of party \(P_j\) of the secret \(d_i^J[j']\), in a 6-out-of-7 secret sharing, such that party \(P_{j'}\) is required in order to recover \(d_i^J[j']\) (See Construction 4).

  • For every \(i\in [r]\) and \(J\subset [7]\), such that \(2\le |J|\le 3\), let \(d_i^J[j]\) be the share of party \(P_j\) of the secret \(d_i^J\), in a 2-out-of-|J| secret sharing.

Interaction rounds. The interaction of the parties proceeds in r rounds. In round \(i\in [r]\) party \(P_j\) broadcasts \(d_i^J[j',j]\), for every \(J\subset [7]\), such that \(3\le |J|\le 5\), and for every \(j'\in J\).

If a single party aborts the execution, then the remaining 6 parties can continue with the protocol (they can do so by the properties of the 6-out-of-7 secret sharing scheme). If more parties abort the execution, then the remaining active parties reconstruct \(d^J_{i'}\), where J is the lexicographically first set containing all their indices, and \(i'\) is the maximum i for which the parties have enough shares to reconstruct. If more than three parties remain, then they execute the five-party protocol from Sect. 1.4.1. Otherwise, there is an honest majority, and hence, the remaining parties reconstruct \(d^J_{i'}\), which is a bit.

If after r rounds, there are at least 5 active parties, then each pair reconstructs its last common defense (note that either all of these defenses are equal to 1 or all of them are equal to 0).

Security. In each round \(i\in [r]\), the adversary learns \(O\left( r^2 \right) \) bits sampled according to \(\varepsilon _i\). If only one party aborts the execution, then the remaining parties can still continue, as the secret sharing is 6-out-of-7. Hence, the adversary must instruct at least two parties to abort. In case at least two parties abort at round i, the remaining active parties can reconstruct the defense from round \(i-1\). They then execute the protocol described in Sect. 1.4.1. As this is the Vector game considered by [24], the adversary does not gain much advantage from aborting after seeing the above \(O\left( r^2 \right) \) bit samples (assuming that the remaining parties run the defense protocol honestly). Of course, we cannot assume that they do; however, combining the above with the security of the 5-party protocol, we get that in total, the adversary’s gain remains small.

1.5 Organization

In Sect. 2, we provide some notations and definitions that we use in this work, and recall some bounds on online Binomial games from [24]. In Sect. 3 we present our main construction and provide a proof for Theorem 1.

2 Preliminaries

2.1 Notation

We use calligraphic letters to denote sets, uppercase for random variables, and lowercase for values. All logarithms considered here are in base two. For \(n\in {\mathbb {N}}\), let \([n]=\{1,2,\ldots ,n\}\). Given a random variable (or a distribution) X, we write \(x\leftarrow X\) to indicate that x is selected according to X. The support of a distribution D over a finite set S, denoted \({\text {Supp}}(D)\), is defined as \(\left\{ s\in S\;|\;D(s)>0\right\} \). For a random variable X and a natural number n we let \(X^n=\left( X^{(1)},X^{(2)},\ldots ,X^{(n)} \right) \), where the \(X^{(i)}\)’s are i.i.d. copies of X.

Let \(n\in {\mathbb {N}}\) and \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \). Let \(\mathcal {B}er(\varepsilon )\) be the Bernoulli distribution over \(\left\{ -1,1\right\} \), taking 1 with probability \(\tfrac{1}{2}+\varepsilon \). Define the Binomial distribution \(\mathcal {B}in_{n,\varepsilon }\), by \(\mathcal {B}in_{n,\varepsilon }(k)=\Pr \left[ \sum _{i=1}^n x_i=k\right] \) where the \(x_i\) are i.i.d. according to \(\mathcal {B}er(\varepsilon )\). Let \(\widehat{\mathcal {B}in}_{n,\varepsilon }(k)=\Pr _{x\leftarrow \mathcal {B}in_{n,\varepsilon }}[x\ge k]=\sum _{t\ge k}\mathcal {B}in_{n,\varepsilon }(t)\). For \(\varepsilon =0\) we will simply write \(\mathcal {B}in_{n}\) and \(\widehat{\mathcal {B}in}_{n}\).
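For concreteness, the tail \(\widehat{\mathcal {B}in}_{n,\varepsilon }(k)\) can be evaluated exactly by summing the underlying probability mass (a small Python sketch; `bin_tail_eps` is our name):

```python
from math import comb

def bin_tail_eps(n, eps, k):
    """widehat{Bin}_{n,eps}(k) = Pr[ sum of n i.i.d. Ber(eps) values in
    {-1,1} is >= k ].  With `ones` the number of +1's, the sum equals
    2*ones - n, so sum >= k iff ones >= ceil((n+k)/2)."""
    p = 0.5 + eps
    thresh = max(0, -(-(n + k) // 2))   # ceil((n+k)/2), floored at 0
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(min(thresh, n + 1), n + 1))
```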

Define the Hypergeometric distribution \(\mathcal {H}G_{n,w,m}\), by \(\mathcal {H}G_{n,w,m}(k)=\Pr _{S\subseteq \mathcal {S},|S|=m}\left[ \sum _{s\in S} s=k\right] \), where S is chosen uniformly, \(\mathcal {S}\) is a set of size n, whose members are from \(\left\{ -1,1\right\} \), and it holds that \(\sum _{s\in \mathcal {S}}s=w\). Let \(\widehat{\mathcal {H}G}_{n,w,m}(k)=\Pr _{x\leftarrow \mathcal {H}G_{n,w,m}}[x\ge k]=\sum _{t\ge k}\mathcal {H}G_{n,w,m}(t)\). For \(i\in \left\{ 0,1,\ldots n\right\} \) let \(s_{i}(n)=\sum _{k=1}^{n-i} k=\frac{(n-i+1)(n-i)}{2}\). When n is clear from the context we write \(s_{i}\). For a set S we let \(w\left( S \right) =\sum _{s\in S}s\).
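Similarly, \(\widehat{\mathcal {H}G}_{n,w,m}(k)\) admits a direct evaluation: a set of size n over \(\left\{ -1,1\right\} \) with total weight w contains \((n+w)/2\) ones, and the number of ones in a uniform m-subset is hypergeometric (a Python sketch, our naming; we assume \(\left|w \right|\le n\) and \(n+w\) even, as in the definition):

```python
from math import comb

def hg_tail(n, w, m, k):
    """widehat{HG}_{n,w,m}(k): draw m elements without replacement from a
    {-1,1} set of size n with total weight w; Pr[ subset sum >= k ]."""
    ones = (n + w) // 2                 # number of +1 entries in the set
    total = comb(n, m)
    prob = 0.0
    for j in range(m + 1):              # j = number of +1's drawn
        if 2 * j - m >= k:              # subset sum = 2j - m
            prob += comb(ones, j) * comb(n - ones, m - j) / total
    return prob
```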

We make use of the following facts.

Fact 2

(Hoeffding’s inequality for \(\left\{ -1,1\right\} \) ). Let \(n,t\in {\mathbb {N}}\) and let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \). Then

$$\begin{aligned} {\mathop {\Pr }\limits _{x\leftarrow \mathcal {B}in_{n,\varepsilon }}}[\left|x-2\varepsilon n \right|\ge t]\le 2e^{-\frac{t^2}{2n}}. \end{aligned}$$

Fact 3

(Hoeffding’s inequality for the hypergeometric distribution). Let \(m\le n\in {\mathbb {N}}\) and let \(w\in {\mathbb Z}\) satisfying \(\left|w \right|\le n\). Then

$$\begin{aligned} {\mathop {\Pr }\limits _{x\leftarrow \mathcal {H}G_{n,w,m}}}[\left|x-\mu \right|\ge t]\le e^{-\frac{t^2}{2m}}, \end{aligned}$$

where \(\mu =\mathop {{\text {E}}}\limits _{x\leftarrow \mathcal {H}G_{n,w,m}}\left[ x\right] =\frac{mw}{n}\).

2.2 Coin-Tossing Protocols

A multiparty coin-tossing protocol with \({m}\) parties is defined using \({m}\) probabilistic polynomial-time Turing machines \(p_1,\ldots ,p_{m}\) having the security parameter \(1^{n}\) as their only input. The coin-tossing computation proceeds in rounds; in each round, the parties broadcast and receive messages on a broadcast channel. The number of rounds in the protocol is typically expressed as some polynomially-bounded function \(r\) of the security parameter. At the end of the protocol, the (honest) parties should hold a common bit \({w}\). We denote by \({\text {CoinToss}}_{\varepsilon }()\) the ideal functionality that gives the honest parties the same bit \({w}\), distributed according to \(\varepsilon \), that is, \(\Pr [{w}=1]=1/2+\varepsilon \) and \(\Pr [{w}=0]=1/2-\varepsilon \). We let \({\text {CoinToss}}()\) be \({\text {CoinToss}}_{0}()\).

In this work we consider a malicious static computationally-bounded adversary, i.e., a non-uniform adversary that runs in polynomial time. The adversary is allowed to corrupt some subset of the parties. That is, before the beginning of the protocol, the adversary corrupts a subset of the parties that may deviate arbitrarily from the protocol, and thereafter the adversary sees the messages sent to the corrupted parties and controls the messages sent by the corrupted parties. Still, for most of the technical discussion in this paper, we only discuss fail-stop adversaries. A fail-stop adversary acts completely honestly (i.e., as required by the prescribed protocol), with the only difference that it can abort the computation at any point in the execution of the protocol. We then use standard techniques ([8, 19]) to turn a coin-tossing protocol in the fail-stop model into a coin-tossing protocol (with the same fairness and round-complexity) in the malicious model. The honest parties follow the instructions of the protocol.

The parties communicate in a synchronous network, using only a broadcast channel. The adversary is rushing, that is, in each round the adversary hears the messages sent by the honest parties before broadcasting the messages of the corrupted parties for this round (thus, the messages broadcast by corrupted parties can depend on the messages of the honest parties broadcast in this round).

2.3 Security Definitions for Multiparty Protocols

The security of multiparty computation protocols is defined using the real vs. ideal paradigm. In this paradigm, we consider the real-world model, in which protocols are executed. We then formulate an ideal model for executing the task at hand. This ideal model involves a trusted party whose functionality captures the security requirements of the task. Finally, we show that the real-world protocol “emulates” the ideal-world protocol: For any real-life adversary \(\mathcal {A}\) there should exist an ideal-model adversary \(\mathcal {S}\) (also called simulator) such that the global output of an execution of the protocol with \(\mathcal {A}\) in the real-world model is distributed similarly to the global output of running \(\mathcal {S}\) in the ideal model. In the coin-tossing protocol, the parties do not have inputs. Thus, to simplify the definitions, we define secure computation without inputs (except for the security parameters).

The Real Model. Let \(\varPi \) be an \({m}\)-party protocol computing \(\mathcal{F}\). Let \(\mathcal {A}\) be a non-uniform probabilistic polynomial time adversary with auxiliary input \(\mathrm{aux}\), corrupting a subset \(\mathcal {C}\) of the parties. Let \(REAL _{\varPi ,\mathcal {A}(\mathrm{aux})}(1^{n})\) be the random variable consisting of the view of the adversary (i.e., its random input and the messages it got) and the output of the honest parties, following an execution of \(\varPi \), where each party \(p_j\) begins by holding the input \(1^{n}\).

The Ideal Model. The basic ideal model we consider is a model without abort. Specifically, there are parties \(\left\{ p_1, \ldots , p_{m}\right\} \), and an adversary \(\mathcal {S}\) who has corrupted a subset I of them. An ideal execution for the computing \(\mathcal{F}\) proceeds as follows:

  • Inputs: Party \(p_j\) holds a security parameter \(1^{n}\). The adversary \(\mathcal {S}\) has some auxiliary input \(\mathrm{aux}\).

  • Trusted party sends outputs: The trusted party computes \(\mathcal{F}(1^{n})\) with uniformly random coins and sends the appropriate outputs to the parties.

  • Outputs: The honest parties output whatever they received from the trusted party, the corrupted parties output nothing, and \(\mathcal {S}\) outputs an arbitrary probabilistic polynomial-time computable function of its view.

Let \(IDEAL _{\mathcal{F},\mathcal {S}(\mathrm{aux})}(1^{n})\) be the random variable consisting of the output of the adversary \(\mathcal {S}\) in this ideal world execution and the output of the honest parties in the execution.

In this work we consider a few formulations of the ideal world, and we consider the composition of a few protocols, all executed in the same real world, where each is secure with respect to a different ideal world. We prove the security of the resulting protocol using the hybrid-model techniques of Canetti [13].

2.3.1 1/p-Indistinguishability and 1/p-Secure Computation

As explained in the introduction, the ideal functionality \({\text {CoinToss}}()\) cannot be implemented when there is no honest majority. We use 1 / p-secure computation, defined by [20, 27], to capture the divergence from the ideal world. This notion applies to general secure computation. We start with some notation.

A function \(\mu (\cdot )\) is negligible if for every positive polynomial \(q(\cdot )\) and all sufficiently large n it holds that \(\mu (n) < 1/q(n)\). A distribution ensemble \(X = \left\{ X_{a,n}\right\} _{a\in \left\{ 0,1\right\} ^*, n\in {\mathbb {N}}}\) is an infinite sequence of random variables indexed by \(a\in \left\{ 0,1\right\} ^*\) and \(n \in {\mathbb {N}}\).

Definition 1

(Statistical Distance and \(\mathbf{1}/{\varvec{p}}\)-indistinguishability). We define the statistical distance between two random variables A and B as the function

$$\begin{aligned} \mathsf {\textsc {SD}}(A,B) = \frac{1}{2}\sum _{\alpha }{\Big | \Pr \left[ A = \alpha \right] - \Pr \left[ B = \alpha \right] \Big |}. \end{aligned}$$

For a function p(n), two distribution ensembles \(X = \{X_{a,n}\}_{a\in \left\{ 0,1\right\} ^*, n\in {\mathbb {N}}}\) and \(Y = \{Y_{a,n}\}_{a\in \left\{ 0,1\right\} ^*, n\in {\mathbb {N}}}\) are computationally 1 / p-indistinguishable, denoted \(X \mathop {{\approx }}\limits ^{{ \tiny 1/p}}Y\), if for every non-uniform polynomial-time algorithm D there exists a negligible function \(\mu (\cdot )\) such that for every n and every \(a\in \left\{ 0,1\right\} ^*\),

$$\begin{aligned} \Big | \Pr \left[ D(X_{a,n}) = 1\right] - \Pr \left[ D(Y_{a,n}) = 1\right] \Big | \le \frac{1}{p(n)} + \mu (n). \end{aligned}$$

Two distribution ensembles are computationally indistinguishable, denoted \(X \mathop {{\equiv }}\limits ^{\mathrm{\tiny C}}Y\), if for every \(c \in {\mathbb {N}}\) they are computationally \(\frac{1}{n^c}\)-indistinguishable.
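For finite distributions given explicitly, the statistical distance of Definition 1 is straightforward to compute (a Python sketch; we assume, for illustration, that distributions are dictionaries mapping outcomes to probabilities):

```python
def statistical_distance(p, q):
    """SD(A,B) = 1/2 * sum_alpha |Pr[A=alpha] - Pr[B=alpha]|,
    for distributions given as dicts mapping outcome -> probability."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)
```

For example, a fair coin and a coin with bias \(\varepsilon =1/4\) are at statistical distance exactly 1/4.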

We next define the notion of 1 / p-secure computation [7, 20, 27]. The definition uses the standard real/ideal paradigm [13, 18], except that we consider a completely fair ideal model (as typically considered in the setting of honest majority), and require only 1 / p-indistinguishability rather than indistinguishability.

Definition 2

(perfect \(\mathbf{1}/{{\varvec{p}}}\)-secure computation). An \({m}\)-party protocol \(\varPi \) is said to perfectly (t, 1 / p)-securely compute a functionality \(\mathcal{F}\) if for every non-uniform adversary \(\mathcal {A}\) in the real model, corrupting up to t of the parties, there exists a polynomial-time adversary \(\mathcal {S}\) in the ideal model, corrupting the same parties as \(\mathcal {A}\), such that for every \({n}\in {\mathbb {N}}\) and for every \(\mathrm{aux} \in \left\{ 0,1\right\} ^*\)

$$\begin{aligned} \mathsf {\textsc {SD}}(IDEAL _{\mathcal{F},\mathcal {S}(\mathrm{aux})}(1^{n}), REAL _{\varPi ,\mathcal {A}(\mathrm{aux})}(1^{n}))\le \frac{1}{p(n)}. \end{aligned}$$

Definition 3

( \(\mathbf 1 /{\varvec{p}}\)-secure computation [7, 20, 27]). Let \(p = p(n)\) be a function. An \({m}\)-party protocol \(\varPi \) is said to (t, 1 / p)-securely compute a functionality \(\mathcal{F}\) if for every non-uniform probabilistic polynomial-time adversary \(\mathcal {A}\) in the real model, corrupting up to t of the parties, there exists a non-uniform probabilistic polynomial-time adversary \(\mathcal {S}\) in the ideal model, corrupting the same parties as \(\mathcal {A}\), such that the following two distribution ensembles are computationally 1 / p(n)-indistinguishable

$$\begin{aligned} \left\{ IDEAL _{\mathcal{F},\mathcal {S}(\mathrm{aux})}(1^{n})\right\} _{\mathrm{aux} \in \left\{ 0,1\right\} ^*,{n}\in {\mathbb {N}}} \quad \mathop {{\approx }}\limits ^{{ \tiny 1/p}}\quad \left\{ REAL _{\varPi ,\mathcal {A}(\mathrm{aux})}(1^{n})\right\} _{\mathrm{aux} \in \left\{ 0,1\right\} ^*,{n}\in {\mathbb {N}}}. \end{aligned}$$

We next define the notion of secure computation and notion of bias of a coin-tossing protocol by using the previous definition.

Definition 4

(secure computation). An \({m}\)-party protocol \(\varPi \) t-securely computes a functionality \(\mathcal{F}\), if for every \(c \in {\mathbb {N}}\), the protocol \(\varPi \) \((t,1/n^c)\)-securely computes the functionality \(\mathcal{F}\).

Definition 5

( \(\varepsilon \)-coin-toss). We say that a protocol is an \(\varepsilon \)-coin-toss protocol with bias 1 / p, tolerating up to t corruptions, if it is a (t, 1 / p)-secure protocol for the functionality \({\text {CoinToss}}_{\varepsilon }()\).

Definition 6

(coin tossing). We say that a protocol is a coin-tossing protocol with bias 1 / p, tolerating up to t corruptions, if it is a (t, 1 / p)-secure protocol for the functionality \({\text {CoinToss}}()\).

2.4 Security with Identifiable Abort

We use here a variant of secure computation with abort, where upon abort, at least one cheating party is identified to all honest parties. This definition was first formally stated by Aumann and Lindell [5], and was also considered in [7, 8, 26] (in the first two, it was called security with abort and cheat detection).

Roughly speaking, our definition requires that one of two events is possible: If at least one party deviates from the prescribed protocol, then the adversary obtains the outputs of these parties (but nothing else), and all honest parties are notified by the protocol that these parties have aborted. Otherwise, the protocol terminates normally, and all parties receive their outputs. Again, we consider the restricted case where parties hold no private inputs. The formal definition is omitted for lack of space, and will appear in the full version of the paper [1].

2.5 Cryptographic Tools

We next informally describe two cryptographic tools that we use in our protocols.

Signature Schemes. A signature on a message proves that the message was created by its presumed sender, and its content was not altered. A signature scheme is a triple \(\left( {\text {Gen}},{\text {Sign}},{\text {Ver}} \right) \) containing the key generation algorithm \({\text {Gen}}\), which gets as input a security parameter \(1^{n}\) and outputs a pair of keys, the signing key \(K_{S}\) and the verification key \(K_{v}\), the signing algorithm \({\text {Sign}}\), and the verifying algorithm \({\text {Ver}}\). We assume that it is infeasible to produce signatures without holding the signing key.

Secret-Sharing Schemes. An \(\alpha \)-out-of-\({m}\) secret-sharing scheme is a mechanism for sharing data among a set of parties such that every set of parties of size \(\alpha \) can reconstruct the secret, while any smaller set knows nothing about the secret. In this paper, we use Shamir’s \(\alpha \)-out-of-\({m}\) secret-sharing scheme [33]. In this scheme, the shares of any \(\alpha -1\) parties are uniformly distributed and independent of the secret. Furthermore, given at most \(\alpha -1\) such shares and a secret s, one can efficiently complete them to \({m}\) shares of the secret s. Using this scheme, [8] presented a way to construct a secret sharing scheme with respect to a certain party. We use that in our construction as well.

Construction 4

Let s be some secret taken from some finite field \({\mathbb F}\). We share s among \({m}\) parties with respect to a special party \(p_j\) in an \(\alpha \)-out-of-\({m}\) secret-sharing scheme as follows:

1.

    Choose shares \(\left( s^{(1)},s^{(2)} \right) \) of the secret s in a two-out-of-two secret-sharing scheme, that is, select \(s^{(1)}\in {\mathbb F}\) uniformly at random and compute \(s^{(2)} = s-s^{(1)}\). Denote these shares by \({\text {mask}}_{j}{(s)}\) and \({\text {comp}}{(s)}\), respectively.

2.

    Generate shares \(\left( \lambda ^{(1)},\ldots ,\lambda ^{(j-1)},\lambda ^{(j+1)},\ldots ,\lambda ^{({m})} \right) \) of the secret \({\text {comp}}{(s)}\) in an \((\alpha -1)\)-out-of-\(({m}-1)\) Shamir’s secret-sharing scheme. For each \(\ell \ne j\), denote \({\text {comp}}_{\ell }{(s)} = \lambda ^{(\ell )}\).

Output:

  • The share of party \(p_j\) is \({\text {mask}}_{j}{(s)}\). We call this share, \(p_j\)’s masking share.

  • The share of each party \(p_{\ell }\), where \(\ell \ne j\), is \({\text {comp}}_{\ell }{(s)}\). We call this share, \(p_\ell \)’s complement share.

In the above, the secret s is shared among the parties in P in a secret-sharing scheme such that any set of size at least \(\alpha \) that contains \(p_j\) can reconstruct the secret. In addition, similarly to the Shamir secret-sharing scheme, the following property holds: for any set of \(\beta <\alpha \) parties (regardless of whether the set contains \(p_j\)), the shares of these parties are uniformly distributed and independent of the secret. Furthermore, given such \(\beta <\alpha \) shares and a secret s, one can efficiently complete them to \({m}\) shares of the secret s and efficiently select uniformly at random one vector of shares completing the \(\beta \) shares to \({m}\) shares of the secret s.
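A possible instantiation of Construction 4 over a prime field can be sketched as follows (a Python sketch; the modulus and helper names are ours, and no attempt is made at constant-time arithmetic):

```python
import random

P = 2 ** 31 - 1  # a Mersenne prime, used here as an illustrative field size

def shamir_share(secret, alpha, m, rng):
    """alpha-out-of-m Shamir sharing over F_P; the share of party i is f(i),
    for a random degree-(alpha-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(alpha - 1)]
    return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
            for i in range(1, m + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over F_P; points = [(x, y), ...]."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xk, _ in points:
            if xk != xi:
                num = num * (-xk) % P
                den = den * (xi - xk) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def share_with_special_party(secret, alpha, m, j, rng=random):
    """Construction 4: share `secret` so that reconstruction requires alpha
    parties *including* p_j.  Step 1: additive 2-out-of-2 split into
    (mask, comp); step 2: share comp in (alpha-1)-out-of-(m-1) Shamir among
    the parties other than p_j.  Returns (p_j's masking share, the
    complement shares of the other parties)."""
    mask = rng.randrange(P)
    comp = (secret - mask) % P
    return mask, shamir_share(comp, alpha - 1, m - 1, rng)
```

Any \(\alpha \) parties that include \(p_j\) can recover s: the \(\alpha -1\) complement shares determine \({\text {comp}}{(s)}\), and adding \(p_j\)’s masking share yields s.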

2.6 Claims and Definitions from [24]

The following definitions and propositions are taken verbatim from [24] and they will serve us as well. Given a partial view of a fail-stop adversary, we are interested in the expected outcome of the parties, conditioned on this view and the adversary making no further aborts.

Definition 7 (view value)

Let \(\pi \) be a protocol in which the honest parties always output the same bit value. For a partial view v of the parties in a fail-stop execution of \(\pi \), let \(C_{\pi }(v)\) denote the parties’ full view in an honest execution of \(\pi \) conditioned on v (i.e., all parties that do not abort in v act honestly in \(C_{\pi }(v)\)). Let \(\varDelta _{\pi }(v)=E_{v'\leftarrow C_{\pi }(v)} [out(v')]\), where \(out(v')\) is the common output of the non-aborting parties in \(v'\).

A protocol is unbiased, if no fail-stop adversary can bias the common output of the honest parties by too much.

Definition 8

( \((t,\alpha )\) -unbiased protocol). Let \(\pi \) be an m-party, r-round protocol, in which the honest parties always output the same bit value. We say that \(\pi \) is \((t,\alpha )\)-unbiased, if the following holds for every fail-stop adversary \(\mathcal {A}\) controlling the parties indexed by a subset \(\mathcal {C}\subset [m]\) of size at most t. Let V be \(\mathcal {A}\)’s view in a random execution of \(\pi \), and let \(I_j\) be the index of the j’th round in which \(\mathcal {A}\) sent an abort message (set to \(r+1\) if no abort occurred). Let \(V_i\) be the prefix of V at the end of the i’th round, letting \(V_0\) be the view consisting of only the random coins of \(\mathcal {A}\), and let \(V_i^-\) be the prefix of \(V_i\) with the i’th round abort message (if any) removed. Then,

$$\begin{aligned} {\mathop {\mathrm {E}}\limits _{V}}\left[ {\left|\sum \limits _{j\in |\mathcal C|} \left( \varDelta (V_{I_j})-\varDelta (V_{I_j}^-) \right) \right|}\right] \le \alpha \end{aligned}$$

where \(\varDelta =\varDelta _{\pi }\) according to Definition 7.

The following is an alternative characterization of fair coin-tossing protocols (against fail-stop adversaries).

Lemma 1

([24, Lemma 2.18]). Let \(n\in {\mathbb {N}}\) be a security parameter and let \(\pi \) be a \((t,\alpha )\)-unbiased coin-tossing protocol with \(\alpha (n)\le \frac{1}{2}-\frac{1}{p(n)}\), for some polynomial p. Then \(\pi \) is a \((t,\alpha (n)+neg(n))\)-secure coin tossing protocol against fail-stop adversaries.

The following lemmata and propositions assume that the protocol is of a specific form. More concretely, let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \), f be a randomized function (that may depend on \(\varepsilon \)), and let \(\pi _{\varepsilon ,f}\) be an r-round m-party coin-tossing protocol, such that, before any interaction takes place, every party learns \(D_0\), which is sampled according to the current game value, and for every round \(i\in [r]\), every party first learns a defense \(D_i=f(i,Y_i)\), and then the coin \(X_i\), where \(X_i\leftarrow \mathcal {B}in_{r-i+1,\varepsilon }\), \(Y_i=\sum \limits _{k=1}^i X_k\). We let \(V_{\pi _{\varepsilon ,f}}\) denote the adversary’s view in a random execution of \(\pi _{\varepsilon ,f}\). We further assume that the adversary never aborts after seeing \(X_i\).
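The game form \(\pi _{\varepsilon ,f}\) can be pictured as the following online process (a Python sketch, our naming; we record the pairs \((D_i,X_i)\) in the order of the rounds, and take the common output to be 1 iff the final sum \(Y_r\) is non-negative, matching the use of \(\Pr [Y_r\ge 0]\) in Proposition 2):

```python
import random

def run_game(r, eps, f, rng=random):
    """One honest execution of pi_{eps,f}: in round i the coin
    X_i ~ Bin_{r-i+1,eps} is tossed, the defense D_i = f(i, Y_i) is a
    function of the running sum Y_i (revealed to the parties before X_i)."""
    y, view = 0, []
    for i in range(1, r + 1):
        x = sum(1 if rng.random() < 0.5 + eps else -1
                for _ in range(r - i + 1))
        y += x
        view.append((f(i, y), x))   # (D_i, X_i) of round i
    return (1 if y >= 0 else 0), view
```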

Lemma 2

(Vector Game [24, Lemma 4.5]). Let \(c\in {\mathbb {N}}\) and let \(r\in {\mathbb {N}}\) be the number of rounds. Let \(f:[r]\times {\mathbb Z}\rightarrow \left\{ -1,1\right\} ^{c\cdot r^2}\) be a randomized function that on input \((i,y)\) outputs \(c\cdot r^2\) elements from \(\left\{ -1,1\right\} \), each sampled according to \(\mathcal {B}er(\varepsilon )\), where \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \) satisfies \(\widehat{\mathcal {B}in}_{s_{0},\varepsilon }(0)=\widehat{\mathcal {B}in}_{s_{i}}(-y)\). Then:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{0,f}}}\left[ \left|\varDelta \left( V_{\pi _{0,f}} \right) -\varDelta \left( V_{\pi _{0,f}}^- \right) \right|\right] =O\left( \frac{\log ^3 r}{r} \right) . \end{aligned}$$

Lemma 3

(Hypergeometric Game [24, Lemma 4.4]). Let \(w\in {\mathbb Z}\), \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \) and let \(r\in {\mathbb {N}}\) be the number of rounds. Let \(f:[r]\times {\mathbb Z}\rightarrow \{0,1\}\) be a randomized function that on input \((i,y)\) outputs 1 with probability \(\widehat{\mathcal {H}G}_{2s_0,w,s_i}(-y)\) and 0 otherwise. Assuming that \(\left|w \right|\le c\cdot \sqrt{\log r\cdot s_0}\), for some constant c, then:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,f}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,f}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,f}}^- \right) \right|\right] =O\left( \frac{\log ^3 r}{r} \right) . \end{aligned}$$

Lemma 4

(Ratio Lemma [24, Lemma 4.10]). Let \(r\in {\mathbb {N}}\) be the number of rounds, and let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \). In the following we let \(Y_0=0\). Let

$$\begin{aligned} \mathcal {X}_i:=\left\{ x\in {\text {Supp}}(X_i):\left|x \right|\le 4\sqrt{\log r\cdot (r-i+1)}\right\} \end{aligned}$$

and

$$\begin{aligned} \mathcal {Y}_i:=\left\{ y'\in {\text {Supp}}(Y_{i-1}):\left|y'+2\varepsilon \cdot s_{i-1} \right|\le 4\sqrt{\log r\cdot s_{i-1}}\right\} . \end{aligned}$$

Assume \(\left|\varepsilon \right|\le 2\sqrt{\frac{\log r}{s_0}}\) and that for every \(i\in [r-\left\lfloor \log ^{2.5} r \right\rfloor ]\) and \(y\in \mathcal {Y}_i\), there exists a set \(\mathcal {D}_{i,y}\) such that for every \(x\in \mathcal {X}_i\), and every \(d\in \mathcal {D}_{i,y}\cap {\text {Supp}}(f(i,y+X_i)\;|\;Y_{i-1}=y,X_i\in \mathcal {X}_i)\), it holds that:

$$\begin{aligned} \Pr [f(i,y+X_i)\notin \mathcal {D}_{i,y}\;|\;Y_{i-1}=y]\le \frac{1}{r^2} \end{aligned}$$

and

$$\begin{aligned} \left|1-\frac{\Pr [f(i,y+X_i)=d\;|\;Y_{i-1}=y\wedge X_i=x]}{\Pr [f(i,y+X_i)=d\;|\;Y_{i-1}=y\wedge X_i\in \mathcal {X}_i]} \right|\le c\cdot \sqrt{\frac{\log r}{r-i}}\cdot \left( 1+\frac{|x|}{\sqrt{r-i+1}}\right) , \end{aligned}$$

for some constant c. Then:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,f}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,f}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,f}}^- \right) \right|\right] =O\left( \frac{\log ^3 r}{r} \right) . \end{aligned}$$

Proposition 1

([24, Proposition 4.6]). For every pair of randomized functions \(f,g\), and for every \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \), it holds that

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,g\circ f}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,g\circ f}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,g\circ f}}^- \right) \right|\right] \le \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,f}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,f}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,f}}^- \right) \right|\right] . \end{aligned}$$

Proposition 2

([24, Proposition 4.7]). Let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \) and f be some randomized function. If \(\Pr [Y_r\ge 0]\notin \left[ \frac{1}{r^2},1-\frac{1}{r^2}\right] \), where \(r\in {\mathbb {N}}\) is the number of rounds, then

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,f}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,f}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,f}}^- \right) \right|\right] \le \frac{2}{r}. \end{aligned}$$

2.7 An Extension of the Hypergeometric Game

In this section we introduce an extended version of the Hypergeometric game (Lemma 3), presented in [24]. More specifically, we let the adversary see a constant number of independent samples, each from a different set. Furthermore, we augment the view of the adversary with all of these sets.

Lemma 5

Let \(\xi \in {\mathbb {N}}\) be some constant, let \(\mathbf w =\left( w_1,\ldots ,w_{\xi } \right) \in {\mathbb Z}^{\xi }\), let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \), and let \(r\in {\mathbb {N}}\) be the number of rounds. For \(k\in [\xi ]\), let \(h_k:[r]\times {\mathbb Z}\rightarrow \{0,1\}\) be a randomized function that on input (i, y) outputs 1 with probability \(\widehat{\mathcal {H}G}_{2s_0,w_k,s_i}(-y)\) and 0 otherwise. Assume that for every \(k\in [\xi ]\) it holds that \(\left|w_k \right|\le c\sqrt{\log r\cdot s_0}\), for some constant c. Then:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{\pi _{\varepsilon ,h}}}\left[ \left|\varDelta \left( V_{\pi _{\varepsilon ,h}} \right) -\varDelta \left( V_{\pi _{\varepsilon ,h}}^- \right) \right|\right] =O\left( {2^{\xi }\cdot }\frac{\log ^3 r}{r} \right) , \end{aligned}$$

where \(h(i,y)=\left( h_1(i,y),\ldots ,h_{\xi }(i,y) \right) \).

The proof of Lemma 5 is deferred to the full version of this paper [1].
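To make the game concrete, the following is a minimal Python sketch of the functions \(h_k\). It assumes (our convention, not necessarily the paper's) that \(\widehat{\mathcal {H}G}_{N,w,s}(y)\) denotes the probability that a uniformly random size-s subset of a \(\{-1,1\}\)-population of size N with total sum w has sum at least y; all function names are illustrative.

```python
from fractions import Fraction
from math import comb
import random

def hg_hat(N, w, s, y):
    """Assumed semantics: Pr[sum of a uniform size-s sample >= y], where the
    population has N elements from {-1, 1} summing to w (N + w assumed even,
    so there are (N + w) // 2 plus-ones and (N - w) // 2 minus-ones)."""
    n_plus, n_minus = (N + w) // 2, (N - w) // 2
    # The sample sum is 2k - s when the sample contains k plus-ones.
    k_min = max((s + y + 1) // 2, s - n_minus, 0)
    total = sum(comb(n_plus, k) * comb(n_minus, s - k)
                for k in range(k_min, min(s, n_plus) + 1))
    return Fraction(total, comb(N, s))

def h(i, y, s, w):
    """h(i, y) = (h_1(i, y), ..., h_xi(i, y)): one biased bit per w_k,
    with s[i] playing the role of s_i and s[0] of s_0."""
    return tuple(int(random.random() < hg_hat(2 * s[0], wk, s[i], -y))
                 for wk in w)
```

For example, with a population of two plus-ones and two minus-ones, a size-2 sample has nonnegative sum with probability 5/6.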

3 The Multiparty Protocol

In this section, we describe our construction and prove Theorem 1. This result is formally restated in Sect. 3.3 (as Corollary 1) and proved therein.

In Sect. 3.1, we describe a construction of an m-party coin-tossing protocol tolerating \(t<2m/3\) corrupted parties. In Sect. 3.2, we describe the main construction of an m-party almost optimally fair coin-tossing protocol tolerating \(t<3m/4\) corrupted parties.

3.1 A Coin-Tossing Protocol for \(t<2m/3\)

The following algorithm extends the two-party share generator of [24] to the multiparty case.

Algorithm 5

(MultipartyShareGen \(_{<2/3}\), denoted \(HG (\varepsilon ,m,t)\)). Let \(r\in {\mathbb {N}}\) be the number of rounds.

  • Input: Number of rounds r, \(\varepsilon =\varepsilon (n)\in \left[ -\frac{1}{2},\frac{1}{2}\right] \), the number of parties m, and an upper bound t on the number of corrupted parties. Denote \(h=m-t\). Observe that any subset \(J\subset [m]\) of size \(2h-1\) that contains all honest parties has an honest majority.

  • Selecting coins and defenses: 

    1. For every \(J\subset [m]\) of size \(2h-1\):

       (a) Let \(S^J\) be a set of \(2s_{0}\) elements from \(\left\{ -1,1\right\} \), where each element is sampled according to \(\mathcal {B}er(\varepsilon )\).

       (b) Let \(A_0^J\) be a random subset of \(S^J\) of size \(s_{0}\).

       (c) Let \(d_0^J\) be 1 if \(\sum \limits _{a\in A_0^J}a\ge 0\), and 0 otherwise.

    2. For \(i=1\) to r:

       (a) Sample \(x_i\leftarrow \mathcal {B}in_{r-i+1,\varepsilon }\).

       (b) For every \(J\subset [m]\) of size \(2h-1\), let \(A_i^J\) be a random subset of \(S^J\) of size \(s_{i}\).

       (c) For every \(J\subset [m]\) of size \(2h-1\), let \(d_i^J\) be 1 if \(\sum \limits _{k=1}^i x_k+\sum \limits _{a\in A_i^J}a\ge 0\), and 0 otherwise.

  • Sharing the values: 

    1. For \(i\in [r]\), let \(x_i[j]\) be a share of \(x_i\) in a \((t+1)\)-out-of-m secret sharing.

    2. For \(i\in \left\{ 0,\ldots ,r\right\} \), \(j\in [m]\), and \(J\subset [m]\) of size \(2h-1\), let \(d_i^J[j]\) be a share of \(d_i^J\) in an h-out-of-\((2h-1)\) secret sharing.

    3. For \(i\in [r]\), \(j\in [m]\), \(J\subset [m]\) of size \(2h-1\), and \(j'\in J\), let \(d_i^J[j',j]\) be a share of \(d_i^J[j']\) in a \((t+1)\)-out-of-m secret sharing, such that party \(P_{j'}\) is required in order to recover \(d_i^J[j']\). This can be done with Construction 4.

  • Output: Party \(P_j\) receives \(d_i^{J'}[j',j]\), \(d_0^J[j]\), and \(x_i[j]\) for all \(i\in [r]\), all \(J,J'\subset [m]\) of size \(2h-1\) with \(j\in J\), and all \(j'\in J'\).
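The "Selecting coins and defenses" phase (steps 1 and 2, before any sharing takes place) can be sketched as follows. We assume the conventions that \(\mathcal {B}er(\varepsilon )\) outputs 1 with probability \(\frac{1+\varepsilon }{2}\) and \(-1\) otherwise, and that \(\mathcal {B}in_{n,\varepsilon }\) is the sum of n such samples; the sizes \(s_i\) are passed in as a list, and all names are illustrative.

```python
import random

def ber(eps):
    # Assumed convention: +1 with probability (1 + eps) / 2, else -1.
    return 1 if random.random() < (1 + eps) / 2 else -1

def bin_sum(n, eps):
    # Assumed convention for Bin_{n,eps}: the sum of n Ber(eps) samples.
    return sum(ber(eps) for _ in range(n))

def select_coins_and_defenses(eps, r, s, subsets):
    """Steps 1-2 of the algorithm: a pool S^J per subset J, coins x_i,
    and defense bits d_i^J; s[i] plays the role of s_i (len(s) = r + 1),
    and subsets lists the J's (as tuples of party indices)."""
    S = {J: [ber(eps) for _ in range(2 * s[0])] for J in subsets}
    # Step 1: the round-0 defense d_0^J from a random subset A_0^J.
    d = {J: {0: int(sum(random.sample(S[J], s[0])) >= 0)} for J in subsets}
    x, prefix = [], 0
    for i in range(1, r + 1):
        x.append(bin_sum(r - i + 1, eps))            # step 2(a)
        prefix += x[-1]
        for J in subsets:
            A = random.sample(S[J], s[i])            # step 2(b)
            d[J][i] = int(prefix + sum(A) >= 0)      # step 2(c)
    return x, d
```

The returned coins and defense bits would then be handed to the secret-sharing phase.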

Protocol 6

(Multiparty \(_{<2/3}\) Coin-Toss). Let \(r\in {\mathbb {N}}\) be the number of rounds. Let \(\hat{m}\) and \(\hat{t}\) be two constants, where \(\hat{m}\) denotes the number of parties and \(\hat{t}\) is an upper bound on the number of corrupted parties.

  • Common input: Number of rounds r and output distribution parameter \(\varepsilon \) (jointly reconstructable, possibly unknown to parties).

  • Private inputs: The private inputs of the parties are given to them by an oracle computing \(HG (\varepsilon ,\hat{m},\hat{t})\), as defined in Algorithm 5. The input of party \(P_j\), for \(j\in [\hat{m}]\), is \(({\varvec{x}}_j,{\varvec{d}}_j)\), where

    $$\begin{aligned}{\varvec{x}}_j=\left( x_1[j],\ldots ,x_r[j] \right) \text { and }{\varvec{d}}_j=\left( D_0[j],D_1[j],\ldots ,D_r[j] \right) ,\end{aligned}$$

    where

    $$\begin{aligned} D_i[j]=\left\{ d^J_i[j',j]\;|\;J\subset [\hat{m}]\wedge |J|=2h-1 \wedge j'\in J\right\} \text {, for }i\in [r] \end{aligned}$$

    and

    $$\begin{aligned} D_0[j]=\left\{ d^J_0[j]\;|\;J\subset [\hat{m}]\wedge |J|=2h-1 \wedge j\in J\right\} . \end{aligned}$$
  • Interaction rounds: For \(i=1\) to r:

    (a) Each party \(P_j\) sends \(d^J_i[j',j]\) to \(P_{j'}\) for every \(j'\ne j\) and every \(J\subset [\hat{m}]\) of size \(2h-1\) such that \(j'\in J\).

    (b) The parties reconstruct \(x_i\).

  • Output: The honest parties output 1 if \(\sum \limits _{i=1}^r x_i\ge 0\), and output 0 otherwise.

  • In case of abort: Let \(J\subset [\hat{m}]\) be the set of remaining parties. If \(|J|\ge \hat{t}+1\), then the parties in J go on with the execution of the protocol. Otherwise, they reconstruct and output \(d_i^{J'}\), for the lexicographically first \(J' \subset [\hat{m}]\) of size \(2h-1\), such that \(J\subseteq J'\), and for the largest i for which they have all of the corresponding shares (for the parties of J).
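The abort rule above can be sketched as a small selection procedure that picks which defense \(d_i^{J'}\) the remaining parties reconstruct; the share bookkeeping is abstracted into a map from candidate subsets to the rounds for which all shares are available, and all names are illustrative.

```python
from itertools import combinations

def pick_defense(m_hat, h, J, complete_rounds):
    """Abort rule: choose the lexicographically first J' of size 2h - 1
    with J a subset of J', then the largest round i for which the parties
    in J hold all the corresponding shares of d_i^{J'}.
    complete_rounds maps a candidate J' (tuple) to the set of such rounds."""
    J = set(J)
    # combinations() over range(m_hat) yields subsets in lexicographic order.
    for Jp in combinations(range(m_hat), 2 * h - 1):
        if J <= set(Jp) and complete_rounds.get(Jp):
            return Jp, max(complete_rounds[Jp])
    return None
```

For instance, with \(\hat{m}=4\) and \(h=2\), the remaining parties \(\{1,3\}\) would settle on the first size-3 superset (0, 1, 3) and the latest fully shared round.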

3.2 A Coin-Tossing Protocol for \(t<3m/4\)

Algorithm 7

(MultipartyShareGen \(_{<3/4}\)). Let \(r\in {\mathbb {N}}\) be the number of rounds, let m be a constant denoting the number of parties, and let t be a constant bounding the number of corrupted parties. We denote \(h=m-t\) (i.e., a lower bound on the number of honest parties). In the following, we call a subset \(J\subset [m]\) protected if \(2h-1\le |J|\le t\).

  • Input: Number of rounds r.

  • Selecting coins and defenses: For \(i=1\) to r:

    1. Sample \(x_i\leftarrow \mathcal {B}in_{r-i+1}\).

    2. Let \(\varepsilon _i\in \left[ -\frac{1}{2},\frac{1}{2}\right] \) be such that \(\widehat{\mathcal {B}in}_{s_{i},\varepsilon }\left( -\sum \limits _{k=1}^i x_k \right) =\widehat{\mathcal {B}in}_{s_{0},\varepsilon _i}(0)\).

    3. For every protected \(J\subset [m]\), sample \(d_i^J\leftarrow HG (\varepsilon _i,\left| J\right| ,t-m+\left| J\right| )\).

  • Sharing the values: 

    1. For \(i\in [r]\), let \(x_i[j]\) be a share of \(x_i\) in a \((t+1)\)-out-of-m secret sharing.

    2. For \(i\in [r]\), \(j\in [m]\), and a protected \(J\subset [m]\), let \(d_i^J[j]\) be a share of \(d_i^J\) in a \((t-m+|J|+1)\)-out-of-|J| secret sharing.

    3. For \(i\in [r]\), \(j\in [m]\), a protected \(J\subset [m]\), and \(j'\in J\), let \(d_i^J[j',j]\) be a share of \(d_i^J[j']\) in a \((t+1)\)-out-of-m secret sharing, such that party \(P_{j'}\) is required in order to recover \(d_i^J[j']\). This can be done with Construction 4.

  • Output: Party \(P_j\) receives \(d_i^J[j',j]\) and \(x_i[j]\) for every \(i\in [r]\), every protected \(J\subset [m]\), and every \(j'\in J\).
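Item 2 of the "Selecting coins and defenses" phase requires solving \(\widehat{\mathcal {B}in}_{s_{i},\varepsilon }\left( -\sum _{k=1}^i x_k \right) =\widehat{\mathcal {B}in}_{s_{0},\varepsilon _i}(0)\) for \(\varepsilon _i\). Under the assumption (our convention, not necessarily the paper's) that \(\widehat{\mathcal {B}in}_{s,\varepsilon }(y)\) denotes \(\Pr [y+\sum _{j=1}^{s}b_j\ge 0]\) for independent \(b_j\in \{-1,1\}\) that equal 1 with probability \(\frac{1+\varepsilon }{2}\), this quantity is monotone in \(\varepsilon \), so \(\varepsilon _i\) can be approximated by bisection:

```python
from math import comb

def bin_hat(s, eps, y):
    # Assumed: Pr[y + sum of s {-1,+1} steps >= 0], each step +1 w.p. (1+eps)/2.
    p = (1 + eps) / 2
    k_min = max(0, (s - y + 1) // 2)  # need y + 2k - s >= 0 for k plus-steps
    return sum(comb(s, k) * p**k * (1 - p)**(s - k)
               for k in range(k_min, s + 1))

def solve_eps_i(s0, si, eps, x_sum, iters=60):
    # Bisection for eps_i with bin_hat(s0, eps_i, 0) = bin_hat(si, eps, -x_sum);
    # bin_hat is monotone increasing in its eps argument.
    target = bin_hat(si, eps, -x_sum)
    lo, hi = -0.5, 0.5
    for _ in range(iters):
        mid = (lo + hi) / 2
        if bin_hat(s0, mid, 0) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With odd \(s_0\) and \(s_i\), an unbiased walk at \(\sum _k x_k=0\) yields the target value \(\frac{1}{2}\), so the solver returns \(\varepsilon _i\approx 0\), as expected.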

We are now ready to describe the actual multiparty coin-tossing protocol. We remark that the protocol is defined in the fail-stop model, where corrupted parties must follow the prescribed protocol, unless they decide to prematurely abort the execution at some point. This is done for simplicity of presentation; compiling the following protocols to tolerate arbitrary malicious behavior is done via standard techniques, using signatures.

Protocol 8

(Multiparty \(_{<3/4}\) Coin-Toss).

  • Common input: Number of rounds r.

  • Preprocessing: Parties run a secure with identifiable abort implementation of Algorithm 7 to obtain their respective outputs. If an abort occurred during the execution, then the remaining parties restart the protocol without the aborting parties.

  • Interaction rounds: For \(i=1\) to r:

    (a) Each party \(P_j\) sends \(d^J_i[j',j]\) to \(P_{j'}\) for every \(j'\ne j\) and every protected \(J\subset [m]\) such that \(j'\in J\).

    (b) The parties reconstruct \(x_i\).

  • Output: The honest parties output 1 if \(\sum \limits _{i=1}^r x_i\ge 0\), and output 0 otherwise.

  • In case of abort: Let \(J\subset [m]\) be the set of remaining active parties. If \(|J|\ge t+1\), then the parties in J continue with the execution of the protocol. Assume that \(|J|\le t\). If the abort happened before the execution of Algorithm 7, then the parties run a secure with identifiable abort implementation of Algorithm 5 to obtain their respective outputs, and they execute Protocol 6. If the abort happened during the interaction rounds, then the parties execute Protocol 6 with \(d^{J'}_i[j]\) as the private input for \(P_j\), for the lexicographically first \(J'\subset [m]\) such that \(J\subseteq J'\), and for the largest i for which they have all of the corresponding shares.Footnote 4

3.3 Stating the Main Results

Theorem 9

Let m and t be two constants such that \(t<3m/4\). Assuming OT exists, then for every \(r\in {\mathbb {N}}\), Protocol 8 is an r-round m-party \(O\left( {2^{2^m}}\cdot \frac{\log ^3 r}{r} \right) \)-secure coin-tossing protocol tolerating any fail-stop adversary that corrupts up to t parties, in the \(\left( {\text {MultipartyShareGen}}_{<3/4},{\text {MultipartyShareGen}}_{<2/3}\right) \)-hybrid model (guaranteeing security with identifiable abort).

Corollary 1

Let n be the security parameter, and let m and t be two constants, such that \(t<3m/4\). Assuming OT exists, then for every polynomial \(r=r(n)\), there exists an r-round m-party \(O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) \)-secure coin-tossing protocol, against any PPT adversary corrupting up to t parties.

In order to prove Theorem 9, we first need to show that Protocol 6 is secure. The security of Protocol 6 by itself does not suffice, since in Protocol 8, after an abort, the adversary's view contains some additional information. The following lemma states that this additional information does not help the adversary bias the outcome.

Lemma 6

Let \(\varepsilon \in \left[ -\frac{1}{2},\frac{1}{2}\right] \), and let \(\hat{m}\) and \(\hat{t}\) be two constants, such that \(\hat{t}<2\hat{m}/3\). Then for every \(r\in {\mathbb {N}}\), Protocol 6 is an r-round \(\hat{m}\)-party \(\left( \hat{t},O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) \right) \)-unbiased \(\varepsilon \)-coin-toss protocol tolerating any fail-stop adversary, corrupting up to \(\hat{t}\) parties. Moreover, the above holds even when the adversary gets \(\varepsilon \) as an auxiliary input.

The proof of Lemma 6 is deferred to the final version of this paper [1]. We now use it, in combination with the results of [24], to prove Theorem 9.

Proof of (Theorem 9)

Assume without loss of generality that \(r\equiv 1\mod 4\) (otherwise, we set the number of rounds to be the largest \(r'<r\) such that \(r'\equiv 1\mod 4\)). Hence, \(s_{i}(r)\) is odd, and the output of the parties in an honest execution (without aborts) is a uniform bit. We also assume that r is larger than some constant, which will be determined by the analysis, as otherwise the theorem holds trivially.

Let \(\mathcal {A}\) be a fail-stop adversary and let \(\mathcal {C}\subset [m]\) be the set of parties that \(\mathcal {A}\) corrupts. By assumption, it holds that \(|\mathcal {C}|<3m/4\). Let V be the view of the adversary \(\mathcal {A}\) in a random execution of Protocol 8. For a round \(I\in [r]\times \left\{ (a),(b)\right\} \) in the outer protocol, let \(V_I\) be the view of the adversary in round I and let \(V^-_I\) be its view without the abort (if one happened). We show that the protocol is \(\left( t,O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) \right) \)-unbiased, i.e., we show that:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V}\left[ \left|\varDelta \left( V \right) -\varDelta \left( V^- \right) \right|\right] =O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) . \end{aligned}$$
(1)

Applying Lemma 1 to Eq. (1) yields that the protocol is \(\left( |\mathcal {C}|, O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) +neg(n)\right) \)-secure. We next prove the correctness of Eq. (1).

We need to analyze the gain of the adversary by prematurely aborting the execution of the protocol. Recall that to prematurely abort the execution of the outer protocol, the adversary needs to instruct at least \({m-t}\) parties to abort. Otherwise, the remaining active parties are instructed to go on as usual, and indeed, by the properties of the secret sharing scheme, they are able to go through with reconstructing their appropriate secrets. Namely, upon receiving (in Step a of round i) shares \(d^J_i[j,j']\) from at least t parties \(P_{j'}\), party \(P_j\) is able to reconstruct \(d^J_i[j]\) (using its own share of it). Similarly, upon receiving (in Step b of round i) shares \(x_i[j']\) from at least t parties \(P_{j'}\), party \(P_j\) is able to reconstruct \(x_i\).

Assume an abort occurred before the interaction rounds. Moreover, we assume that at most t parties remain active. Then by the description of the protocol, the parties are instructed to run a secure with identifiable abort implementation of Protocol 6, and there is no bias in the samples. Then \(\varDelta (V)=\varDelta (V^-)=\frac{1}{2}\), which yields no advantage to the adversary.

Assume an abort occurred during the interaction rounds. Let \(I=(i,(\cdot ))\) be the first round for which there is an abort and there are at most t active parties remaining. We define two adversaries \(\mathcal {A}_{(a)}\) and \(\mathcal {A}_{(b)}\) as follows: \(\mathcal {A}_{(a)}\) and \(\mathcal {A}_{(b)}\) act exactly as does \(\mathcal {A}\), until round I, in which \(\mathcal {A}\) decided to abort. If \(I=(i,(a))\), then \(\mathcal {A}_{(a)}\) aborts at (i, (a)), and \(\mathcal {A}_{(b)}\) completes the execution honestly without aborting. If \(I=(i,(b))\), then \(\mathcal {A}_{(a)}\) completes the execution honestly without aborting, and \(\mathcal {A}_{(b)}\) aborts at (i, (b)). Let \(V_I^{(a)}\) and \(V_I^{(b)}\) be the view of \(\mathcal {A}_{(a)}\) and \(\mathcal {A}_{(b)}\), respectively.

Assume that \(I=(i,(a))\) : 

The view of the adversary \(\mathcal {A}_{(a)}\) consists of:

$$\begin{aligned} \left\{ x_1,x_2,\ldots ,x_{i-1}\right\} \text { and }D^{\mathcal {C}}_i, \end{aligned}$$

where

$$\begin{aligned} D^{\mathcal {C}}_i = \left\{ d^{\mathcal {C}'}_{k}: |\mathcal {C}'\cap \mathcal {C}|>t-m+|\mathcal {C}'|\wedge k\le i\right\} , \end{aligned}$$

is the set of all the defenses that the adversary can see up to and including round i. In addition, the adversary \(\mathcal {A}_{(a)}\) sees many shares that are useless to it. Specifically, the adversary \(\mathcal {A}_{(a)}\) holds shares of two different types. The first type consists of shares of the elements in its view that \(\mathcal {A}_{(a)}\) completely reconstructed (i.e., those specified above). These shares are useless to \(\mathcal {A}_{(a)}\), as they were chosen independently of all other information. The second type consists of shares of the defense values of other sets that \(\mathcal {A}_{(a)}\) cannot reconstruct (since it sees at most t such shares). These shares are useless to \(\mathcal {A}_{(a)}\) by the properties of secret sharing schemes. We thus disregard both types of shares, and continue with the analysis as if the view of \(\mathcal {A}_{(a)}\) consists only of the random coins and of \(D^{\mathcal {C}}_i\). Formally, the view of the adversary \(\mathcal {A}_{(a)}\) may contain only part of \(D^{\mathcal {C}}_i\); however, an adversary with more information can always emulate one with less information by simply disregarding parts of its view.

Each \(d^{\mathcal {C}'}_k\) is a vector consisting of \(O\left( r^2 \right) \) elements from \(\left\{ -1,1\right\} \), each sampled according to \(\mathcal {B}er(\varepsilon _k)\), where \(\varepsilon _k\) satisfies \(\widehat{\mathcal {B}in}_{s_{0},\varepsilon _k}(0)=\widehat{\mathcal {B}in}_{s_{k}}\left( -\sum \limits _{l=1}^k x_l \right) \). As \(D^{\mathcal {C}}_i\) contains \(O\left( r^2 \right) \) bits in total, Lemma 2 tells us that:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V^{(a)}_I}\left[ \left|\varDelta \left( V^{(a)}_I \right) -\varDelta \left( V^{(a)-}_I \right) \right|\right] =O\left( \frac{\log ^3 r}{r} \right) . \end{aligned}$$
(2)

Assume that \(I=(i,(b))\) : 

The view of the adversary \(\mathcal {A}_{(b)}\) consists of:

$$\begin{aligned} \left\{ x_1,x_2,\ldots ,x_i\right\} \text { and }{D}^{\mathcal {C}}_i. \end{aligned}$$

As in the previous case, we disregard the other shares that \(\mathcal {A}_{(b)}\) sees. Since the defenses are sampled independently given \(x_i\), and since the expectation of each \(d^{\mathcal {C}'}_i\) is exactly the game value given \(x_1,\ldots ,x_i\), the adversary gains nothing by aborting in this round.

Combining the two cases yields the bound on the maximum bias \(\mathcal {A}\) can achieve in round I:

$$\begin{aligned}&\mathop {{\text {E}}}\limits _{V_I}\left[ \left|\varDelta \left( V_I \right) -\varDelta \left( V_I^- \right) \right|\right] \\&\quad = \mathop {{\text {E}}}\limits _{V^{(a)}_I}\left[ \left|\varDelta \left( V^{(a)}_I \right) -\varDelta \left( V^{(a)-}_I \right) \right|\right] +\mathop {{\text {E}}}\limits _{V^{(b)}_I}\left[ \left|\varDelta \left( V^{(b)}_I \right) -\varDelta \left( V^{(b)-}_I \right) \right|\right] \\&\quad = O\left( \frac{\log ^3 r}{r} \right) . \end{aligned}$$

To conclude the proof of security, we need to show that the remaining corrupted parties cannot bias the outcome by more than \(O\left( \frac{\log ^3 r}{r} \right) \). Let \(J\subset [m]\) be the set of the remaining parties, let \(\hat{m}=|J|\), and let \(h=m-t\) be a lower bound on the number of honest parties. Since at least h parties aborted, it follows that there are at most \(\hat{t}:={\hat{m}-h}\) corrupted parties in J. By assumption, \(t<\frac{3m}{4}\), and hence, \(\hat{t}<\hat{m}-\frac{m}{4}\). Since \(\hat{m}\le t<\frac{3m}{4}\), it holds that \(\hat{t}<\hat{m}-\frac{\hat{m}}{3}=\frac{2\hat{m}}{3}\). Therefore, by Lemma 6, it holds that:

$$\begin{aligned} \mathop {{\text {E}}}\limits _{V_{{\text {inner}}}}\left[ \left|\varDelta \left( V_{{\text {inner}}} \right) -\varDelta \left( V_{{\text {inner}}}^- \right) \right|\right] =O\left( {2^{2^m}\cdot }\frac{\log ^3 r}{r} \right) , \end{aligned}$$
(3)

where \(V_{{\text {inner}}}\) is the view of \(\mathcal {A}\) in Protocol 6, with \(\varepsilon _i\) included. Note that Lemma 6 assumes that the adversary's view contains only \(\varepsilon _i\) as auxiliary input. However, Eq. (3) still holds, as the rest of the view is independent of \(V_I\), and gives no information to the adversary. \(\square \)

3.3.1 Proof of Corollary 1

We next sketch the proof of Corollary 1.

Proof Sketch of Corollary 1

We adjust Protocols 6 and 8, so that each message that any of the parties ever needs to send is signed, and all other parties verify this signature upon receiving the message. If at some point in the execution, party P broadcasts a message that is not properly signed, then all parties treat this as if P has aborted the computation and is no longer active. This is done similarly to the way presented in [8]. Towards this end, Algorithm 7 is changed so that for every round i, every two parties \(P_j,P_{j'}\), and every appropriate subset J, both \(x_i[j]\) and \(d_i^J[j]\) are signed. In addition, letting \(\sigma (i,J,j')\) be the signature attributed to \(d_i^J[j']\), the share \(d_i^J[j',j]\) is redefined to be a share of \(\left( d_i^J[j'],\sigma (i,J,j') \right) \) in a \((t+1)\)-out-of-m secret sharing, such that party \(P_{j'}\) is required in order to recover \(d_i^J[j']\). Finally, \(d_i^J[j',j]\) is also signed.
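The "party \(P_{j'}\) is required" property can plausibly be realized by masking the secret with a value given only to \(P_{j'}\) and Shamir-sharing the masked value with threshold \(t+1\). The sketch below, over a small prime field, is our illustration of this idea and not necessarily Construction 4 itself.

```python
import random

P = 2**61 - 1  # a prime field large enough for the shared payload

def shamir_share(secret, t, m):
    # (t+1)-out-of-m Shamir sharing over GF(P): a random degree-t polynomial
    # with constant term `secret`, evaluated at points 1..m.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {j: sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P
            for j in range(1, m + 1)}

def shamir_reconstruct(shares):
    # Lagrange interpolation at 0 from any t+1 points {j: f(j)}.
    secret = 0
    for j, y in shares.items():
        num = den = 1
        for k in shares:
            if k != j:
                num = num * (-k) % P
                den = den * (j - k) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

def share_with_required_party(secret, t, m):
    # Two-level sharing: the mask goes only to the required party j', while
    # the masked secret is (t+1)-out-of-m shared among all parties, so even
    # t+1 shares reveal nothing without the mask held by P_{j'}.
    mask = random.randrange(P)
    return mask, shamir_share((secret - mask) % P, t, m)
```

Recovering the secret then requires both the mask from \(P_{j'}\) and any \(t+1\) of the Shamir shares.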

We further modify Algorithm 7 so that for every \(i\in [r]\), the computation of \(\varepsilon _i\) (see Item 2) can be done efficiently, similarly to the way done in [24]. Observe that \(\varepsilon _i\) is only used to sample \(O\left( r^2 \right) \) independent bits; hence, it can be efficiently estimated by some \(\tilde{\varepsilon }_i\), such that the statistical difference between the samples is bounded by \(\frac{1}{r^2}\). It follows that the adjusted Protocol 8 is an r-round, m-party \(O\left( 2^{2^m}\cdot \frac{\log ^3 r}{r}+\frac{r}{r^2} \right) \)-secure coin-tossing protocol against any PPT adversary.

Finally, similarly to [8], the modified (efficient) functionality is replaced by a secure with identifiable abort protocol that runs in a constant number of rounds. As explained in [8], this can be done using (a variation on) the protocol of [32].