1 Introduction

A coin-flipping protocol allows mutually distrustful parties to generate a common unbiased random bit. Such a protocol should satisfy two properties. First, when all parties are honest and follow the instructions of the protocol, their common output is a uniformly distributed bit. Second, even if some of the parties collude and deviate from the protocol’s instructions, they should not be able to significantly bias the common output of the honest parties.

When a majority of the parties are honest, efficient and completely fair coin-flipping protocols are known as a special case of secure multiparty computation with an honest majority [9] (assuming a broadcast channel). When an honest majority is not available, and in particular when there are only two parties, the situation is more complex. Blum’s two-party coin-flipping protocol [12] guarantees that the output of the honest party is unbiased only if the malicious party does not abort prematurely (note that the malicious party can decide to abort after learning the result of the coin flip). This satisfies a rather weak notion of fairness in which once the malicious party is labeled as a “cheater” the honest party is allowed to halt without outputting any value. Blum’s protocol relies on any one-way function [26, 34], and Impagliazzo and Luby [28] showed that one-way functions are in fact essential even for such a seemingly weak notion. While this notion suffices for some applications, in many cases fairness is required to hold even if one of the parties aborts prematurely (consider, for example, an adversary that controls the communication channel and can prevent communication between the parties). In this paper, we consider a stronger notion: Even when the malicious party is labeled as a cheater, we require that the honest party outputs a bit.

Cleve’s Impossibility Result The latter notion of fairness turns out to be impossible to achieve in general. Specifically, Cleve [14] showed that for any two-party \(r\)-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by \(\varOmega (1/r)\). Cleve’s lower bound holds even under arbitrary computational assumptions: The adversary only needs to simulate an honest party and decide whether or not to abort early depending on the output of the simulation. However, the best previously known protocol (with respect to bias) only guaranteed \(O(1 / \sqrt{r})\) bias [5, 14], and the question of whether Cleve’s bound was tight has remained open for over 20 years.

Fairness in Secure Computation The bias of coin-flipping protocols can be viewed as a particular case of the more general framework of fairness in secure computation. Typically, the security of protocols is formalized by comparing their execution in the real model to an execution in an ideal model where a trusted party receives the inputs of the parties, performs the computation on their behalf, and then sends all parties their respective outputs. Executions in the ideal model guarantee complete fairness: Either all parties learn the output, or neither party does. Cleve’s result, however, shows that without an honest majority complete fairness is generally impossible to achieve, and therefore, the formulation of secure computation (see [19]) weakens the ideal model to one in which fairness is not guaranteed. Informally, a protocol is “secure-with-abort” if its execution in the real model is indistinguishable from an execution in the ideal model allowing the ideal-model adversary to choose whether the honest parties receive their outputs (this is the notion of security satisfied by Blum’s coin-flipping protocol).

Recently, Katz [29] suggested an alternate relaxation: keep the ideal model unchanged (i.e., all parties always receive their outputs), but relax the notion of indistinguishability by asking that the real model and ideal model are distinguishable with probability at most \(1/p(n) + \nu (n)\), for a polynomial \(p(n)\) and a negligible function \(\nu (n)\) (we refer the reader to Sect. 2 for a formal definition). Protocols satisfying this requirement are said to be \(1/p\)-secure, and intuitively, such protocols guarantee complete fairness in the real model except for probability \(1/p\). In the context of coin-flipping protocols, any \(1/p\)-secure protocol has bias at most \(1/p\). However, the definition of \(1/p\)-security is more general and applies to a larger class of functionalities.

1.1 Our Contributions

In this paper, we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. We prove the following theorem:

Theorem 1.1

Assuming the existence of oblivious transfer, for any polynomial \(r = r(n)\) there exists an \(r\)-round two-party coin-flipping protocol that is \(1/(4r - c)\)-secure, for some constant \(c > 0\).

We prove the security of our protocol under the simulation-based definition of \(1/p\)-security,Footnote 1 which for coin-flipping protocols implies, in particular, that the bias is at most \(1/p\). We note that our result not only identifies the optimal trade-off asymptotically, but almost pins down the exact leading constant: Cleve showed that any \(r\)-round two-party coin-flipping protocol has bias at least \(1/(8r + 2)\), and we manage to achieve bias of at most \(1/(4r - c)\) for some constant \(c > 0\).

Our approach holds in fact for a larger class of functionalities. We consider the more general task of sampling from a distribution \({{\mathcal {D}}} = ({{\mathcal {D}}}_1, {{\mathcal {D}}}_2)\): Party \(P_1\) receives a sample from \({{\mathcal {D}}}_1\), and party \(P_2\) receives a correlated sample from \({{\mathcal {D}}}_2\) (in coin flipping, for example, the joint distribution \({\mathcal {D}}\) produces the values \((0,0)\) and \((1,1)\) each with probability \(1/2\)). Before stating our result in this setting, we introduce a standard notation: We denote by \(\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)\) the statistical distance between the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\) and the direct product of the two marginal distributions \({\mathcal {D}}_1\) and \({\mathcal {D}}_2\). We prove the following theorem which generalizes Theorem 1.1:

Theorem 1.2

Assuming the existence of oblivious transfer, for any polynomially sampleable distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\) and polynomial \(r = r(n)\) there exists an \(r\)-round two-party protocol for sampling from \({\mathcal {D}}\) that is \(\frac{\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)}{2r - c}\)-secure, for some constant \(c > 0\).
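
To see how Theorem 1.2 subsumes Theorem 1.1, consider the coin-flipping distribution \({\mathcal {D}}\) that assigns probability \(1/2\) to each of \((0,0)\) and \((1,1)\). The product \({\mathcal {D}}_1 \otimes {\mathcal {D}}_2\) of its marginals is uniform over \(\{0,1\}^2\), and therefore

$$\begin{aligned} \mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2) = \frac{1}{2} \sum _{(x,y) \in \{0,1\}^2} \left| \mathrm{Pr}_{{\mathcal {D}}} \left[ (x,y) \right] - \frac{1}{4} \right| = \frac{1}{2} \cdot 4 \cdot \frac{1}{4} = \frac{1}{2}, \end{aligned}$$

so the bound of Theorem 1.2 specializes to \(\frac{1/2}{2r - c} = \frac{1}{4r - 2c}\), which matches Theorem 1.1 up to the value of the constant.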

Our approach raises several open questions that are fundamental to the understanding of coin-flipping protocols. These questions include identifying the minimal computational assumptions that are essential for reaching the optimal trade-off (i.e., one-way functions vs. oblivious transfer), extending our approach to the multiparty setting, and constructing a more efficient variant of our protocol that can result in a practical implementation. We elaborate on these questions in Sect. 5 and hope that our approach and the questions it raises can make progress toward resolving the complexity of coin-flipping protocols.

1.2 Related Work

Coin-Flipping Protocols When security with abort is sufficient, simple variations of Blum’s protocol are the most commonly used coin-flipping protocols. For example, an \(r\)-round protocol with bias \(O(1/\sqrt{r})\) can be constructed by sequentially executing Blum’s protocol \(O(r)\) times and outputting the majority of the intermediate output values [5, 14]. We note that in this protocol an adversary can indeed bias the output by \(\varOmega (1/\sqrt{r})\) by aborting prematurely. One of the most significant results on the bias of coin-flipping protocols gave reason to believe that the optimal trade-off between the round complexity and the bias is in fact \(\Theta (1 / \sqrt{r})\) (as provided by the latter variant of Blum’s protocol): Cleve and Impagliazzo [15] showed that in the fail-stop model, any two-party \(r\)-round coin-flipping protocol has bias \(\varOmega (1 / \sqrt{r})\). In the fail-stop model, adversaries are computationally unbounded, but they must follow the instructions of the protocol except for being allowed to abort prematurely.

Coin-flipping protocols were also studied in a variety of other models. Among those are collective coin flipping in the “perfect information model” in which parties are computationally unbounded and all communication is public [2, 10, 18, 35, 36], and protocols based on physical assumptions, such as quantum computation [1, 3, 4] and tamper-evident seals [33].

Fair Computation The techniques underlying our protocols were directly inspired by a recent line of research devoted to achieving various forms of fairness in secure computation. Specifically, the technique of choosing a secret “threshold round,” before which no information is learned and after which aborting the protocol is essentially useless, was suggested by Moran and Naor [33] as part of a coin-flipping protocol based on tamper-evident seals, by Katz [29] for partially fair protocols using a simultaneous broadcast channel, and by Gordon et al. [20] for completely fair protocols for a restricted (but yet rather surprising) class of functionalities.

Various techniques for hiding a meaningful round in game-theoretic settings were suggested by Halpern and Teague [25], Gordon and Katz [21], and Kol and Naor [30]. Katz [29] also introduced the technique of distributing shares to the parties in an initial setup phase (which is only secure-with-abort), and these shares are then exchanged by the parties in each round of the protocol.

1.3 Subsequent Work

Multiparty Coin Flipping Beimel et al. [7] generalized our two-party protocol to the multiparty case where fewer than 2/3 of the parties are corrupt. They showed an \(m\)-party \(r\)-round protocol that can tolerate \(t\) corrupt parties (where \(m/2\le t \le 2m/3\)) with bias \(O(2^{2^{k+1}}/r')\), where \(r'=r-O(k)\) and \(k=2t-m\). In particular, when \(t\) and \(m\) are constant (but \(r\) grows with \(n\)), this gives a protocol with \(O(1/r)\) bias.

Recently, Haitner and Tsfadia [24] managed to break the \(2/3\) barrier with the construction of a protocol for three parties with bias \(O(\log ^2 r/r)\). Their protocol requires a new technique: rather than relying on a “special round” at which the value of the game changes abruptly for the parties (i.e., the expected output of the game is \(1/2\) before this round, and either \(0\) or \(1\) afterward), the new protocol “smoothly” changes the value of the game. Haitner and Tsfadia show that this is necessary when there are more than two parties and arbitrary coalitions may be corrupted. Their techniques, however, currently seem limited to three parties; dealing with more than three parties and arbitrary corruptions remains an open problem (Haitner and Tsfadia conjecture that their technique can be extended to a logarithmic number of parties).

\(1/p\)-Secure Computation Our results were recently extended by Gordon and Katz [22] to deal with the more general case of randomized functions, and not only distributions. Gordon and Katz showed that any efficiently computable randomized function \(f : X \times Y \rightarrow Z\) where at least one of \(X\) and \(Y\) is of polynomial size has an \(r\)-round protocol that is \(O \left( \frac{\min \{|X|, |Y|\}}{r} \right) \)-secure. In addition, they showed that even if both domains are of super-polynomial size but the range \(Z\) is of polynomial size, then \(f\) has an \(r\)-round protocol that is \(O \left( \frac{|Z|}{\sqrt{r}} \right) \)-secure. Gordon and Katz also showed a specific function \(f : X \times Y \rightarrow Z\), where \(X\), \(Y\), and \(Z\) are all of super-polynomial size, that cannot be \(1/p\)-securely computed for any \(p > 2\), assuming the existence of exponentially hard one-way functions.

These results were further extended by Beimel et al. [6] to the multiparty setting. For a constant number of parties and any function \(f\) with range of size polynomial in the security parameter, Beimel et al. show that there exists a protocol for \({1/p}\)-secure computation of \(f\) tolerating any number of corrupt parties. Moreover, if fewer than \(2/3\) of the parties are corrupted, they construct a protocol that can handle a super-constant number of parties (up to \(\log n\)), as long as the domain of \(f\) is of constant size. In the other direction, Beimel et al. show that no protocols exist for \(1/p\)-secure computation of functions with polynomial-size domain if the number of parties is super-constant.

Coin Flipping Versus One-Way Functions Our work shows how to construct an optimally fair coin-flipping protocol based on oblivious transfer; however, it leaves open the question of identifying the minimal assumptions necessary for optimally fair coin flipping. The work of Dachman-Soled et al. [16] took a step toward answering this question by showing that any black-box construction of an optimally fair coin-flipping protocol based on a one-way function with \(n\)-bit input and output needs \(\varOmega (n/ \log n)\) rounds. Subsequently, Dachman-Soled et al. [17] took another step toward understanding the complexity of optimally fair coin flipping by showing that this task (with an arbitrary number of rounds) cannot be based on one-way functions in a black-box way, as long as the protocol is “oblivious” to the implementation of the one-way function (we refer the reader to [17] for more details on their notion of obliviousness).

Surprisingly, an even more fundamental question is still open: Are one-way functions necessary at all for constructing fair coin-flipping protocols? While Impagliazzo and Luby [28] gave a positive answer for protocols with negligible bias, only recently has progress been made for protocols that guarantee only non-negligible bias. Maji et al. [32] showed that one-way functions are required for constant-round protocols with bias \(1/2 - o(1)\). Haitner and Omri [23] showed that strong coin-flipping protocols with bias \(\frac{\sqrt{2}-1}{2}\) and any number of rounds imply one-way functions. Berman et al. [11] subsequently improved this result, proving that coin-flipping protocols with any constant bias (and any number of rounds) imply one-way functions (the stronger result also applies to weak coin-flipping protocols). The question of whether protocols with non-constant bias and non-constant rounds can be constructed without one-way functions is still open, to the best of our knowledge.

1.4 Paper Organization

The remainder of this paper is organized as follows. In Sect. 2, we review several notions and definitions that are used in the paper (most notably, the definition of \(1/p\)-secure computation). In Sect. 3, we describe a simplified variant of our protocol and prove its security. In Sect. 4, we describe a more refined and general variant of our protocol which settles Theorems 1.1 and 1.2. Finally, in Sect. 5, we discuss several open problems.

2 Preliminaries

In this section, we review the definitions of coin-flipping protocols, \(1/p\)-indistinguishability and \(1/p\)-secure computation (taken almost verbatim from [22, 29]), security with abort, and one-time message authentication.

2.1 Coin-Flipping Protocols

A two-party coin-flipping protocol is defined via two probabilistic polynomial-time Turing machines \((P_1, P_2)\), referred to as parties, that receive as input a security parameter \(1^n\). The parties exchange messages in a sequence of rounds, where in every round each party both sends and receives a message (i.e., a round consists of two moves). At the end of the protocol, \(P_1\) and \(P_2\) produce output bits \(c_1\) and \(c_2\), respectively. We denote by \((c_1 | c_2) \leftarrow \langle P_1(1^n), P_2(1^n) \rangle \) the experiment in which \(P_1\) and \(P_2\) interact (using uniformly chosen random coins), and then \(P_1\) outputs \(c_1\) and \(P_2\) outputs \(c_2\). It is required that for all sufficiently large \(n\), and every possible pair \((c_1, c_2)\) that may be output by \(\langle P_1(1^n), P_2(1^n) \rangle \), it holds that \(c_1 = c_2\) (i.e., \(P_1\) and \(P_2\) agree on a common value). This requirement can be relaxed by asking that the parties agree on a common value with sufficiently high probability.Footnote 2

The security requirement of a coin-flipping protocol is that even if one of \(P_1\) and \(P_2\) is corrupted and arbitrarily deviates from the protocol’s instructions, the bias of the honest party’s output remains bounded. Specifically, we emphasize that a malicious party is allowed to abort prematurely, and in this case, it is assumed that the honest party is notified of the early termination of the protocol. In addition, we emphasize that even when the malicious party is labeled as a cheater, the honest party must output a bit. For simplicity, the following definition considers only the case in which \(P_1\) is corrupted, and an analogous definition holds for the case that \(P_2\) is corrupted:

Definition 2.1

A coin-flipping protocol \((P_1, P_2)\) has bias at most \(\epsilon (n)\) if for every probabilistic polynomial-time Turing machine \(P^*_1\) it holds that

$$\begin{aligned} \left| \mathrm{Pr} \left[ (c_1 | c_2) \leftarrow \langle P^*_1(1^n), P_2(1^n) \rangle : c_2 = 1 \right] - \frac{1}{2} \right| \le \epsilon (n) + \nu (n), \end{aligned}$$

for some negligible function \(\nu (n)\) and for all sufficiently large \(n\).

2.2 1/p-Indistinguishability and 1/p-Secure Computation

\(1/p\)-Indistinguishability A distribution ensemble \(X = \{ X(a, n) \}_{a \in {\mathcal {D}}_n, n \in \mathbb {N}}\) is an infinite sequence of random variables indexed by \(a \in {\mathcal {D}}_n\) and \(n \in {\mathbb {N}}\), where \({\mathcal {D}}_n\) is a set that may depend on \(n\). For a fixed polynomial \(p(n)\), two distribution ensembles \(X = \{ X(a, n) \}_{a \in {\mathcal {D}}_n, n \in {\mathbb {N}}}\) and \(Y = \{ Y(a, n) \}_{a \in {\mathcal {D}}_n, n \in {\mathbb {N}}}\) are computationally \(1/p\)-indistinguishable, denoted \(X \mathop {\approx }\limits ^{1/p} Y\), if for every non-uniform polynomial-time algorithm \(D\) there exists a negligible function \(\nu (n)\) such that for all sufficiently large \(n \in {\mathbb {N}}\) and for all \(a \in {\mathcal {D}}_n\) it holds that

$$\begin{aligned} \left| \mathrm{Pr} \left[ D(X(a,n)) = 1 \right] - \mathrm{Pr} \left[ D(Y(a,n)) = 1 \right] \right| \le \frac{1}{p(n)} + \nu (n). \end{aligned}$$

\(1/p\)-Secure Computation A two-party protocol for computing a functionality \({\mathcal {F}} = \{ (f^1, f^2)\}\) is a protocol running in polynomial time and satisfying the following functional requirement: If party \(P_1\) holds input \((1^n, x)\), and party \(P_2\) holds input \((1^n, y)\), then the joint distribution of the outputs of the parties is statistically close to \((f^1(x, y), f^2(x, y))\). In what follows we define the notion of \(1/p\)-secure computation [22, 29]. The definition uses the real/ideal paradigm (following [13, 19, 27]) where we consider a completely fair ideal model (as typically considered in the setting of honest majority) and require only \(1/p\)-indistinguishability rather than indistinguishability (we note that, in general, the notions of \(1/p\)-security and security with abort are incomparable). We consider active adversaries, who may deviate from the protocol in an arbitrary manner, and static corruptions.

Security of Protocols (Informal) The security of a protocol is analyzed by comparing what an adversary can do in a real protocol execution to what it can do in an ideal scenario that is secure by definition. This is formalized by considering an ideal computation involving an incorruptible trusted party to whom the parties send their inputs. The trusted party computes the functionality on the inputs and returns to each party its respective output. Loosely speaking, a protocol is secure if any adversary interacting in the real protocol (where no trusted party exists) can do no more harm than if it was involved in the above-described ideal computation.

Execution in the Ideal Model The parties are \(P_1\) and \(P_2\), and there is an adversary \({\mathcal {A}}\) who has corrupted one of them. An ideal execution for the computation of \({\mathcal {F}} = \{ f_n \}\) proceeds as follows:

Inputs::

\(P_1\) and \(P_2\) hold the security parameter \(1^n\) and inputs \(x \in X_n\) and \(y \in Y_n\), respectively. The adversary \({\mathcal {A}}\) receives an auxiliary input \(\mathsf{aux}\).

Send inputs to trusted party::

The honest party sends its input to the trusted party. The corrupted party may send an arbitrary value (chosen by \({\mathcal {A}}\)) to the trusted party. Denote the pair of inputs sent to the trusted party by \((x', y')\).

Trusted party sends outputs::

If \(x' \notin X_n\), the trusted party sets \(x'\) to some default element \(x_0 \in X_n\) (and likewise if \(y' \notin Y_n\)). Then, the trusted party chooses \(r\) uniformly at random and sends \(f^1_n(x', y'; r)\) to \(P_1\) and \(f^2_n(x', y'; r)\) to \(P_2\).

Outputs::

The honest party outputs whatever it was sent by the trusted party, the corrupted party outputs nothing, and \({\mathcal {A}}\) outputs any arbitrary (probabilistic polynomial-time computable) function of its view.

We let \(\mathsf{IDEAL}_{{\mathcal {F}}, {\mathcal {A}}(\mathsf{aux})}(x, y, n)\) be the random variable consisting of the view of the adversary and the output of the honest party following an execution in the ideal model as described above.

Execution in the Real Model We now consider the real model in which a two-party protocol \(\pi \) is executed by \(P_1\) and \(P_2\) (and there is no trusted party). The protocol execution is divided into rounds; in each round, one of the parties sends a message. The honest party computes its messages as specified by \(\pi \). The messages sent by the corrupted party are chosen by the adversary, \({\mathcal {A}}\), and can be an arbitrary (polynomial-time) function of the corrupted party’s inputs, random coins, and the messages received from the honest party in previous rounds. If the corrupted party aborts in one of the protocol rounds, the honest party behaves as if it had received a special \(\bot \) symbol in that round.

Let \(\pi \) be a two-party protocol computing \({\mathcal {F}}\). Let \({\mathcal {A}}\) be a non-uniform probabilistic polynomial-time machine with auxiliary input \(\mathsf{aux}\). We let \(\mathsf{REAL}_{\pi , {\mathcal {A}}(\mathsf{aux})}(x, y, n)\) be the random variable consisting of the view of the adversary and the output of the honest party, following an execution of \(\pi \) where \(P_1\) begins by holding input \((1^n, x)\) and \(P_2\) begins by holding input \((1^n, y)\).

Security as Emulation of an Ideal Execution in the Real Model Having defined the ideal and real models, we can now define security of a protocol. Loosely speaking, the definition asserts that a secure protocol (in the real model) emulates the ideal model (in which a trusted party exists). This is formulated as follows:

Definition 2.2

(\(1/p\)-secure computation) Let \({\mathcal {F}}\) and \(\pi \) be as above and fix a function \(p = p(n)\). Protocol \(\pi \) is said to \(1/p\)-securely compute \({\mathcal {F}}\) if for every non-uniform probabilistic polynomial-time adversary \({\mathcal {A}}\) in the real model, there exists a non-uniform probabilistic polynomial-time adversary \({\mathcal {S}}\) in the ideal model such that

$$\begin{aligned}&\left\{ \mathsf{IDEAL}_{{\mathcal {F}}, {\mathcal {S}}(\mathsf{aux})}(x, y, n) \right\} _{(x,y) \in X \times Y, \mathsf{aux} \in \{0,1\}^*}\\&\quad \mathop {\approx }\limits ^{1/p} \left\{ \mathsf{REAL}_{\pi , {\mathcal {A}}(\mathsf{aux})}(x, y, n) \right\} _{(x,y) \in X \times Y, \mathsf{aux} \in \{0,1\}^*} \end{aligned}$$

and the same party is corrupted in both the real and ideal models.

2.3 Security With Abort

In what follows we use the standard notion of computational indistinguishability. That is, two distribution ensembles \(X = \{ X(a, n) \}_{a \in {\mathcal {D}}_n, n \in \mathbb {N}}\) and \(Y = \{ Y(a, n) \}_{a \in {\mathcal {D}}_n, n \in {\mathbb {N}}}\) are computationally indistinguishable, denoted \(X \mathop {=}\limits ^{c} Y\), if for every non-uniform polynomial-time algorithm \(D\) there exists a negligible function \(\nu (n)\) such that for all sufficiently large \(n \in {\mathbb {N}}\) and for all \(a \in {\mathcal {D}}_n\) it holds that

$$\begin{aligned} \left| \mathrm{Pr} \left[ D(X(a,n)) = 1 \right] - \mathrm{Pr} \left[ D(Y(a,n)) = 1 \right] \right| \le \nu (n). \end{aligned}$$

Security with abort is the standard notion for secure computation where an honest majority is not available. The definition is similar to the definition of \(1/p\)-security presented in Sect. 2.2, with the following two exceptions: (1) The ideal-model adversary is allowed to choose whether the honest parties receive their outputs (i.e., fairness is not guaranteed) and (2) the ideal model and real model are required to be computationally indistinguishable. Specifically, the execution in the real model is as described in Sect. 2.2, and the execution in the ideal model is modified as follows:

Inputs::

\(P_1\) and \(P_2\) hold the security parameter \(1^n\) and inputs \(x \in X_n\) and \(y \in Y_n\), respectively. The adversary \({\mathcal {A}}\) receives an auxiliary input \(\mathsf{aux}\).

Send inputs to trusted party::

The honest party sends its input to the trusted party. The corrupted party controlled by \({\mathcal {A}}\) may send any value of its choice. Denote the pair of inputs sent to the trusted party by \((x', y')\).

Trusted party sends output to corrupted party::

If \(x' \notin X_n\), the trusted party sets \(x'\) to some default element \(x_0 \in X_n\) (and likewise if \(y' \notin Y_n\)). Then, the trusted party chooses \(r\) uniformly at random, computes \(z_1 = f^1_n(x', y'; r)\) and \(z_2 = f^2_n(x', y'; r)\), and sends \(z_i\) to the corrupted party \(P_i\) (i.e., to the adversary \({\mathcal {A}}\)).

Adversary decides whether to abort::

After receiving its output, the adversary sends either “abort” or “continue” to the trusted party. In the former case, the trusted party sends \(\bot \) to the honest party \(P_j\), and in the latter case, the trusted party sends \(z_j\) to \(P_j\).

Outputs::

The honest party outputs whatever it was sent by the trusted party, the corrupted party outputs nothing, and \({\mathcal {A}}\) outputs any arbitrary (probabilistic polynomial-time computable) function of its view.

We let \(\mathsf{IDEAL}^\mathsf{abort}_{{\mathcal {F}}, {\mathcal {A}}(\mathsf{aux})}(x, y, n)\) be the random variable consisting of the view of the adversary and the output of the honest party following an execution in the ideal model as described above.

Definition 2.3

(Security with abort) Let \({\mathcal {F}}\) and \(\pi \) be as above. Protocol \(\pi \) is said to securely compute \({\mathcal {F}}\) with abort if for every non-uniform probabilistic polynomial-time adversary \({\mathcal {A}}\) in the real model, there exists a non-uniform probabilistic polynomial-time adversary \({\mathcal {S}}\) in the ideal model such that

$$\begin{aligned}&\left\{ \mathsf{IDEAL}^\mathsf{abort}_{{\mathcal {F}}, {\mathcal {S}}(\mathsf{aux})}(x, y, n) \right\} _{(x,y) \in X \times Y, \mathsf{aux} \in \{0,1\}^*}\\&\quad \mathop {=}\limits ^{c} \left\{ \mathsf{REAL}_{\pi , {\mathcal {A}}(\mathsf{aux})}(x, y, n) \right\} _{(x,y) \in X \times Y, \mathsf{aux} \in \{0,1\}^*}. \end{aligned}$$

2.4 One-Time Message Authentication

Message authentication codes provide assurance to the receiver of a message that it was sent by a specified legitimate sender, even in the presence of an active adversary who controls the communication channel. A message authentication code is defined via a triplet \((\mathsf{Gen}, \mathsf{Mac}, \mathsf{Vrfy})\) of probabilistic polynomial-time Turing machines such that:

  1. 1.

    The key generation algorithm \(\mathsf{Gen}\) receives as input a security parameter \(1^n\) and outputs an authentication key \(k\).

  2. 2.

    The authentication algorithm \(\mathsf{Mac}\) receives as input an authentication key \(k\) and a message \(m\), and outputs a tag \(t\).

  3. 3.

    The verification algorithm \(\mathsf{Vrfy}\) receives as input an authentication key \(k\), a message \(m\), and a tag \(t\), and outputs a bit \(b \in \{0,1\}\).

The functionality guarantee of a message authentication code is that for any message \(m\) it holds that \(\mathsf{Vrfy} (k, m, \mathsf{Mac}(k, m)) = 1\) with overwhelming probability over the internal coin tosses of \(\mathsf{Gen},\,\mathsf{Mac}\), and \(\mathsf{Vrfy}\). In this paper, we rely on message authentication codes that are one-time secure. That is, an authentication key is used to authenticate a single message. We consider an adversary that queries the authentication algorithm on a single message \(m\) of her choice and then outputs a pair \((m', t')\). We say that the adversary forges an authentication tag if \(m' \ne m\) and \(\mathsf{Vrfy} (k, m', t') = 1\). Message authentication codes that are one-time secure exist in the information-theoretic setting; that is, even an unbounded adversary has only a negligible forgery probability. Constructions of such codes can be based, for example, on pair-wise independent hash functions [37].
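
To make the primitive concrete, the following is a minimal sketch of one standard instantiation: a one-time MAC obtained from a pairwise-independent (affine) hash family over a prime field. The function names, the choice of prime, and the integer message encoding are illustrative only; our protocol relies solely on the abstract interface \((\mathsf{Gen}, \mathsf{Mac}, \mathsf{Vrfy})\) described above.

```python
import secrets

# A fixed prime defining the field; messages are assumed to be encoded as
# integers in {0, ..., P - 1} (longer messages would first be hashed or split).
P = 2**127 - 1  # a Mersenne prime

def gen():
    """Gen: output a uniformly random key (a, b) in Z_P x Z_P."""
    return (secrets.randbelow(P), secrets.randbelow(P))

def mac(key, m):
    """Mac: the tag is the affine pairwise-independent hash a*m + b (mod P)."""
    a, b = key
    return (a * m + b) % P

def vrfy(key, m, t):
    """Vrfy: recompute the tag and compare."""
    return mac(key, m) == t

# One-time security: after seeing a single pair (m, t), the coefficient a is
# still uniform over P possibilities, and for any m' != m exactly one of them
# makes a forged pair (m', t') verify, so a forgery succeeds with probability 1/P.
if __name__ == "__main__":
    k = gen()
    tag = mac(k, 42)
    assert vrfy(k, 42, tag)
    assert not vrfy(k, 43, tag)  # fails except with probability 1/P
```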

3 A Simplified Protocol

In order to demonstrate the main ideas underlying our approach, in this section we present a simplified protocol. The simplification is twofold: First, we consider the specific coin-flipping functionality (as in Theorem 1.1), and not the more general functionality of sampling from an arbitrary distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\) (as in Theorem 1.2). Second, the coin-flipping protocol will only be \(1/(2r)\)-secure and not \(1/(4r)\)-secure.

We describe the protocol in a sequence of refinements. We first informally describe the protocol assuming the existence of a trusted third party. The trusted third party acts as a “dealer” in a preprocessing phase, sending each party an input that it uses in the protocol. In the protocol, we make no assumptions about the computational power of the parties. We then eliminate the need for the trusted third party by having the parties execute a secure-with-abort protocol that implements its functionality (this can be done in a constant number of rounds).

The Protocol The joint input of the parties, \(P_1\) and \(P_2\), is the security parameter \(1^n\) and a polynomial \(r = r(n)\) indicating the number of rounds in the protocol. In the preprocessing phase, the trusted third party chooses uniformly at random a value \(i^* \in \{ 1, \ldots , r \}\) that corresponds to the round in which the parties learn their outputs. In every round \(i \in \{1, \ldots , r\}\), each party learns one bit of information: \(P_1\) learns a bit \(a_i\), and \(P_2\) learns a bit \(b_i\). In every round \(i \in \{1, \ldots , i^* - 1\}\) (these are the “dummy” rounds), the values \(a_i\) and \(b_i\) are independently and uniformly chosen. In every round \(i \in \{i^*, \ldots , r\}\), the parties learn the same uniformly distributed bit \(c = a_i = b_i\) which is their output in the protocol. If the parties complete all \(r\) rounds of the protocol, then \(P_1\) and \(P_2\) output \(a_r\) and \(b_r\), respectively.Footnote 3 Otherwise, if a party aborts prematurely, the other party outputs the value of the previous round and halts. That is, if \(P_1\) aborts in round \(i \in \{1, \ldots , r\}\), then \(P_2\) outputs the value \(b_{i-1}\) and halts. Similarly, if \(P_2\) aborts in round \(i\), then \(P_1\) outputs the value \(a_{i-1}\) and halts.

More specifically, in the preprocessing phase the trusted third party chooses \(i^* \in \{1, \ldots , r\}\) uniformly at random and defines \(a_1, \ldots , a_r\) and \(b_1, \ldots , b_r\) as follows: First, it chooses \(a_1, \ldots , a_{i^* - 1} \in \{0,1\}\) and \(b_1, \ldots , b_{i^* - 1} \in \{0,1\}\) independently and uniformly at random. Then, it chooses \(c \in \{0,1\}\) uniformly at random and lets \(a_{i^*} = \cdots = a_r = b_{i^*} = \cdots = b_r = c\). The trusted third party creates secret shares of the values \(a_1,\ldots ,a_r\) and \(b_1,\ldots ,b_r\) using an information-theoretically secure \(2\)-out-of-\(2\) secret-sharing scheme, and these shares are given to the parties. For concreteness, we use the specific secret-sharing scheme that splits a bit \(x\) into \((x^{(1)}, x^{(2)})\) by choosing \(x^{(1)} \in \{0,1\}\) uniformly at random and letting \(x^{(2)} = x \oplus x^{(1)}\). In every round \(i \in \{1, \ldots , r\}\), the parties exchange their shares for the current round, which enables \(P_1\) to reconstruct \(a_i\) and \(P_2\) to reconstruct \(b_i\). Clearly, when both parties are honest, the parties produce the same output bit which is uniformly distributed.
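
The following is a minimal sketch (in Python, with illustrative names) of the dealer's computation and of the XOR-based \(2\)-out-of-\(2\) secret sharing described above. Authentication keys and tags are omitted here; they are added when the dealer is replaced by the functionality \(\mathsf{ShareGen}_r\) of Fig. 1.

```python
import secrets

def share_bit(x):
    """2-out-of-2 XOR sharing of a bit x: x1 is uniform and x2 = x XOR x1."""
    x1 = secrets.randbelow(2)
    return x1, x ^ x1

def dealer(r):
    """Preprocessing of the simplified protocol (authentication omitted).

    Chooses the threshold round i*, independent dummy values before it, and a
    common uniform bit c from round i* on, and returns the two packages of
    shares: P1 holds the first share of every a_i and of every b_i, and P2
    holds the second share of each.
    """
    i_star = 1 + secrets.randbelow(r)                 # i* uniform in {1, ..., r}
    c = secrets.randbelow(2)                          # the eventual common output
    a = [secrets.randbelow(2) if i < i_star else c for i in range(1, r + 1)]
    b = [secrets.randbelow(2) if i < i_star else c for i in range(1, r + 1)]

    a_shares = [share_bit(x) for x in a]              # P1 reconstructs a_i in round i
    b_shares = [share_bit(x) for x in b]              # P2 reconstructs b_i in round i

    p1_package = {"a1": [s[0] for s in a_shares], "b1": [s[0] for s in b_shares]}
    p2_package = {"a2": [s[1] for s in a_shares], "b2": [s[1] for s in b_shares]}
    return p1_package, p2_package

# In round i, P2 sends a2[i-1] so that P1 recovers a_i = a1[i-1] ^ a2[i-1],
# and P1 sends b1[i-1] so that P2 recovers b_i = b1[i-1] ^ b2[i-1].
```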

Eliminating the Trusted Third Party We eliminate the need for the trusted third party by relying on a possibly unfair sub-protocol that securely computes with abort the functionality \(\mathsf{ShareGen}_r\), formally described in Fig. 1. Such a protocol with a constant number of rounds can be constructed assuming the existence of oblivious transfer (see, for example, [31]). In addition, our protocol also relies on a one-time message authentication code \((\mathsf{Gen}, \mathsf{Mac}, \mathsf{Vrfy})\) that is information-theoretically secure. The functionality \(\mathsf{ShareGen}_r\) provides the parties with authentication keys and authentication tags, so each party can verify that the shares received from the other party were the ones generated by \(\mathsf{ShareGen}_r\) in the preprocessing phase. A formal description of the protocol is provided in Fig. 2.

Fig. 1 The ideal functionality \(\mathsf{ShareGen}_r\)

Fig. 2 The coin-flipping protocol \(\mathsf{CoinFlip}_r\)

Proof of Security The following theorem states that the protocol is \(1/(2r)\)-secure. We then conclude the section by showing our analysis is in fact tight: There exists an efficient adversary that can bias the output of the honest party by essentially \(1/(2r)\).

Theorem 3.1

For any polynomial \(r = r(n)\), if protocol \(\pi \) securely computes \(\mathsf{ShareGen}_r\) with abort, then protocol \(\mathsf{CoinFlip}_r\) is \(1/(2r)\)-secure.

Proof

We prove the \(1/(2r)\)-security of protocol \(\mathsf{CoinFlip}_r\) in a hybrid model where a trusted party for computing \(\mathsf{ShareGen}_r\) with abort is available. Using standard techniques (see [13]), it then follows that when the trusted party computing \(\mathsf{ShareGen}_r\) is replaced by a sub-protocol that securely computes \(\mathsf{ShareGen}_r\) with abort, the resulting protocol is \(1/(2r)\)-secure.

Specifically, for every polynomial-time hybrid-model adversary \({\mathcal {A}}\) corrupting \(P_1\) and running \(\mathsf{CoinFlip}_r\) in the hybrid model, we show that there exists a polynomial-time ideal-model adversary \({\mathcal {S}}\) corrupting \(P_1\) in the ideal model with access to a trusted party computing the coin-flipping functionality such that the statistical distance between these two executions is at most \(1/(2r) + \nu (n)\), for some negligible function \(\nu (n)\). For simplicity, in the remainder of the proof we ignore the aspect of message authentication in the protocol and assume that the only malicious behavior of the adversary \({\mathcal {A}}\) is early abort. This does not result in any loss of generality, since there is only a negligible probability of forging an authentication tag.

On input \((1^n, \mathsf{aux})\), the ideal-model adversary \({\mathcal {S}}\) invokes the hybrid-model adversary \({\mathcal {A}}\) on \((1^n, \mathsf{aux})\) and queries the trusted party computing the coin-flipping functionality to obtain a bit \(c\). The ideal-model adversary \({\mathcal {S}}\) proceeds as follows:

  1. 1.

    \({\mathcal {S}}\) simulates the trusted party computing the \(\mathsf{ShareGen}_r\) functionality by sending \({\mathcal {A}}\) independently and uniformly chosen shares \(a^{(1)}_1,\ldots , a^{(1)}_r, b^{(1)}_1,\ldots , b^{(1)}_r\). If \({\mathcal {A}}\) aborts (i.e., if \({\mathcal {A}}\) sends \(\mathsf{abort}\) to the simulated \(\mathsf{ShareGen}_r\) after receiving the shares), then \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts.

  2. 2.

    \({\mathcal {S}}\) chooses \(i^* \in \{1, \ldots , r\}\) uniformly at random.

  3. 3.

    In every round \(i \in \{1, \ldots , i^* - 1\},\,{\mathcal {S}}\) chooses a random bit \(a_i\) and sends \({\mathcal {A}}\) the share \(a^{(2)}_i = a^{(1)}_i \oplus a_i\). If \({\mathcal {A}}\) aborts, then \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts.

  4. 4.

    In every round \(i \in \{i^*, \ldots , r\},\,{\mathcal {S}}\) sends \({\mathcal {A}}\) the share \(a^{(2)}_{i} = a^{(1)}_{i} \oplus c\) (recall that \(c\) is the value received from the trusted party computing the coin-flipping functionality). If \({\mathcal {A}}\) aborts, then \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts.

  5. 5.

    At the end of the protocol, \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts.
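
The simulator can be summarized by the following sketch (illustrative Python, ignoring message authentication as above). The adversary interface (the methods receive_setup and receive_share, each returning whether the adversary aborts, and the attribute output) and the callback trusted_party are modeling conveniences and not part of the protocol itself.

```python
import secrets

def simulator(adversary, r, trusted_party):
    """The ideal-model adversary S for a corrupted P1 (authentication ignored).

    `trusted_party()` returns the bit c of the ideal coin-flipping
    functionality; `adversary` exposes receive_setup(...) and
    receive_share(...), each returning True if A aborts at that point,
    and an attribute `output` holding A's final output.
    """
    c = trusted_party()

    # Step 1: simulate ShareGen_r by handing A uniformly random shares.
    setup = {
        "a1": [secrets.randbelow(2) for _ in range(r)],
        "b1": [secrets.randbelow(2) for _ in range(r)],
    }
    if adversary.receive_setup(setup):
        return adversary.output

    # Step 2: choose the threshold round i*.
    i_star = 1 + secrets.randbelow(r)

    # Steps 3-4: before round i* reveal fresh random bits; from round i* on
    # reveal the bit c received from the trusted party.
    for i in range(1, r + 1):
        a_i = secrets.randbelow(2) if i < i_star else c
        if adversary.receive_share(setup["a1"][i - 1] ^ a_i):  # a_i^(2) = a_i^(1) XOR a_i
            return adversary.output

    # Step 5: output whatever A outputs.
    return adversary.output
```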

We now consider the joint distribution of \({\mathcal {A}}\)’s view and the output of the honest party \(P_2\) in the ideal model and in the hybrid model. There are three cases to consider:

  1. 1.

    \({\mathcal {A}}\) aborts before round \(i^*\). In this case, the distributions are identical: In both models, the view of the adversary is the sequence of shares and the sequence of messages up to the round in which \({\mathcal {A}}\) aborted, and the output of \(P_2\) is a uniformly distributed bit which is independent of \({\mathcal {A}}\)’s view.

  2. 2.

    \({\mathcal {A}}\) aborts in round \(i^*\). In this case, \({\mathcal {A}}\)’s view is identical in both models, but the distributions of \(P_2\)’s output given \({\mathcal {A}}\)’s view are not identical. In the ideal model, \(P_2\) outputs the random bit \(c\) that was revealed to \({\mathcal {A}}\) by \({\mathcal {S}}\) in round \(i^*\) (recall that \(c\) is the bit received from the trusted party computing the coin-flipping functionality). In the hybrid model, however, the output of \(P_2\) is the value \(b_{i^* - 1}\) which is a random bit that is independent of \({\mathcal {A}}\)’s view. Thus, in this case the statistical distance between the two distributions is \(1/2\). However, this case occurs with probability at most \(1/r\) since in both models \(i^*\) is independent of \({\mathcal {A}}\)’s view until this round (that is, the probability that \({\mathcal {A}}\) aborts in round \(i^*\) is at most \(1/r\)).

  3. 3.

    \({\mathcal {A}}\) aborts after round \(i^*\) or does not abort. In this case, the distributions are identical: The output of \(P_2\) is the same random bit that was revealed to \({\mathcal {A}}\) in round \(i^*\).

Note that \({\mathcal {A}}\)’s view in the hybrid and ideal models is always identically distributed (no matter what strategy \({\mathcal {A}}\) uses to decide when to abort) and that the only difference is in the distribution of the honest party’s output conditioned on \({\mathcal {A}}\)’s view. Specifically, for any particular round \(i\) the event in which the adversary aborts in round \(i\) occurs with exactly the same probability in both models.Footnote 4 Thus, the above three cases imply that the statistical distance between the two distributions is at most \(1/(2r)\). \(\square \)

Claim 3.2

In protocol \(\mathsf{CoinFlip}_r\) there exists an efficient adversarial party \(P^*_1\) that can bias the output of \(P_2\) by \(\frac{1 - 2^{-r}}{2r}\).

Proof

Consider the adversarial party \(P^*_1\) that completes the preprocessing phase and then halts in the first round \(i \in \{1, \ldots , r\}\) for which \(a_i = 0\). We denote by \(\mathsf{Abort}\) the random variable corresponding to the round in which \(P^*_1\) aborts, where \(\mathsf{Abort} = \bot \) if \(P^*_1\) does not abort. In addition, we denote by \(c_2\) the random variable corresponding to the output bit of \(P_2\). Notice that if \(P^*_1\) aborts in round \(j \le i^*\), then \(P_2\) outputs a random bit, and if \(P^*_1\) does not abort, then \(P_2\) always outputs \(1\). Therefore, for every \(i \in \{1, \ldots , r\}\) it holds that

$$\begin{aligned} \mathrm{Pr} \left[ c_2 = 1 | i^* = i \right]&= \sum _{j = 1}^{i} \mathrm{Pr} \left[ \mathsf{Abort} = j | i^* = i \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = j \wedge i^* = i \right] \\&\quad +\,\mathrm{Pr} \left[ \mathsf{Abort} = \bot | i^* = i \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = \bot \wedge i^* = i \right] \\&= \sum _{j = 1}^{i} \mathrm{Pr} \left[ a_1 = \cdots = a_{j-1} = 1, a_j = 0 \right] \\&\mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = j \wedge i^* = i \right] \\&\quad +\,\mathrm{Pr} \left[ a_1 = \cdots = a_i = 1 \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = \bot \wedge i^* = i \right] \\&= \sum _{j = 1}^{i} \frac{1}{2^j} \cdot \frac{1}{2} + \frac{1}{2^{i}} \cdot 1 \\&= \frac{1}{2} + \frac{1}{2^{i+1}}. \end{aligned}$$

This implies that

$$\begin{aligned} \mathrm{Pr} \left[ c_2 = 1 \right]&= \sum _{i = 1}^r \mathrm{Pr} \left[ i^* = i \right] \mathrm{Pr} \left[ c_2 = 1 | i^* = i \right] \\&= \sum _{i = 1}^r \frac{1}{r} \left( \frac{1}{2} + \frac{1}{2^{i+1}} \right) \\&= \frac{1}{2} + \frac{1 - 2^{-r}}{2r}. \end{aligned}$$

\(\square \)
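
The bias achieved by this adversary can also be checked empirically. The following sketch (illustrative Python) simulates the simplified protocol against the aborting adversary \(P^*_1\) and estimates \(\mathrm{Pr}[c_2 = 1]\), which should be close to \(1/2 + (1 - 2^{-r})/(2r)\); shares and authentication are elided since only the reconstructed values \(a_i, b_i\) matter for the bias, and for an abort in the very first round the simulation lets \(P_2\) output a fresh uniform bit.

```python
import random  # non-cryptographic randomness suffices for a Monte Carlo estimate

def run_once(r):
    """One execution against the adversary P1* that aborts in the first round i
    with a_i = 0; returns the output bit of the honest party P2."""
    i_star = random.randrange(1, r + 1)
    c = random.randrange(2)
    a = [random.randrange(2) if i < i_star else c for i in range(1, r + 1)]
    b = [random.randrange(2) if i < i_star else c for i in range(1, r + 1)]

    for i in range(1, r + 1):
        if a[i - 1] == 0:  # P1* aborts in round i; P2 outputs b_{i-1}
            # (a fresh uniform bit if no value has been reconstructed yet)
            return b[i - 2] if i >= 2 else random.randrange(2)
    return b[r - 1]        # no abort: P2 outputs b_r

if __name__ == "__main__":
    r, trials = 5, 200_000
    estimate = sum(run_once(r) for _ in range(trials)) / trials
    print("empirical Pr[c_2 = 1]:", estimate)
    print("claimed   Pr[c_2 = 1]:", 0.5 + (1 - 2**-r) / (2 * r))
```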

4 The Generalized Protocol

In this section, we present a more refined and generalized protocol that settles Theorems 1.1 and 1.2. The improvements over the protocol presented in Sect. 3 are as follows:

Improved security guarantee::

In the simplified protocol, party \(P_1\) can bias the output of party \(P_2\) (by aborting in round \(i^*\)), but party \(P_2\) cannot bias the output of party \(P_1\). This is due to the fact that party \(P_1\) always learns the output before party \(P_2\) does. In the generalized protocol, the party that learns the output before the other party is chosen uniformly at random (i.e., party \(P_1\) learns the output before party \(P_2\) with probability \(1/2\)). This is achieved by having the parties exchange a sequence of \(2r\) values \((a_1, b_1), \ldots , (a_{2r}, b_{2r})\) (using the same secret-sharing exchange technique as in the simplified protocol) with the following property: For odd values of \(i\), party \(P_1\) learns \(a_i\) before party \(P_2\) learns \(b_i\), and for even values of \(i\), party \(P_2\) learns \(b_i\) before party \(P_1\) learns \(a_i\). Thus, party \(P_1\) can bias the result only when \(i^*\) is odd, and party \(P_2\) can bias the result only when \(i^*\) is even. The key point is that the parties can exchange the sequence of \(2r\) shares in only \(r+1\) rounds by combining some of their messages.Footnote 5 Figure 3 gives a graphic overview of the protocol.

We note that modifying the original protocol by just having \(\mathsf{ShareGen}\) randomly choose which party starts would also halve the bias (since with probability \(1/2\) the adversary chooses a party that cannot bias the outcome at all). However, this is vulnerable to a trivial dynamic attack: The adversary decides which party to corrupt after seeing which party was chosen to start.

A larger class of functionalities::

We consider the more general task of sampling from a distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\): Party \(P_1\) receives a sample from \({\mathcal {D}}_1\), and party \(P_2\) receives a correlated sample from \({\mathcal {D}}_2\) (in coin flipping, for example, the joint distribution \({\mathcal {D}}\) produces the values \((0,0)\) and \((1,1)\) each with probability \(1/2\)). Our generalized protocol can handle any polynomially sampleable distribution \({\mathcal {D}}\).

Fig. 3 Overview of the generalized protocol

In the following, we describe the generalized protocol \(\mathsf{Sampling}_r\) (Sect. 4.1) and then prove its security (Sect. 4.2).

4.1 Description of the Protocol

Joint Input Security parameter \(1^n\) and a polynomially sampleable distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\).

Fig. 4 The ideal functionality \(\mathsf{ShareGen}_r\) of the generalized two-party protocol

Preliminary phase:

  1. 1.

    Parties \(P_1\) and \(P_2\) run protocol \(\pi \) for computing \(\mathsf{ShareGen}_r(1^n, {\mathcal {D}})\) (see Fig. 4).

  2. 2.

    If \(P_1\) receives \(\bot \) from the above computation, it outputs a random sample from \({\mathcal {D}}_1\) and halts. Likewise, if \(P_2\) receives \(\bot \), it outputs a random sample from \({\mathcal {D}}_2 \) and halts. Otherwise, the parties proceed.

  3. 3.

    Denote the output of \(P_1\) from \(\pi \) by \(a^{(1)}_1, \ldots , a^{(1)}_{2r},\,\left( b^{(1)}_1, t^b_1 \right) , \ldots , \left( b^{(1)}_{2r}, t^b_{2r} \right) \) and \(k^a_1, \ldots , k^a_{2r}\).

  4. 4.

    Denote the output of \(P_2\) from \(\pi \) by \(\left( a^{(2)}_1, t^a_1 \right) , \ldots , \left( a^{(2)}_{2r}, t^a_{2r} \right) ,\,b^{(2)}_1, \ldots , b^{(2)}_{2r}\) and \(k^b_1, \ldots , k^b_{2r}\).

In round \(\varvec{1}\) do:

  1. 1.

    \(P_2\) sends a share to \(P_1\):

    1. (a)

      \(P_2\) sends \(\left( a^{(2)}_{1}, t^{a}_{1} \right) \) to \(P_1\).

    2. (b)

      \(P_1\) receives \(\left( \hat{a}^{(2)}_{1}, \hat{t}^{a}_{1} \right) \) from \(P_2\). If \(\mathsf{Vrfy}_{k^a_{1}} \left( 1 || \hat{a}^{(2)}_{1}, \hat{t}^{a}_{1} \right) = 0\), then \(P_1\) outputs a random sample from \({\mathcal {D}}_1\) and halts. Otherwise, \(P_1\) reconstructs \(a_{1}\) using the shares \(a^{(1)}_{1}\) and \(\hat{a}^{(2)}_{1}\).

  2. 2.

    \(P_1\) sends a pair of shares to \(P_2\):

    1. (a)

      \(P_1\) sends \(\left( b^{(1)}_{1}, t^{b}_{1} \right) \) and \(\left( b^{(1)}_{2}, t^{b}_{2} \right) \) to \(P_2\).

    2. (b)

      \(P_2\) receives \(\left( \hat{b}^{(1)}_{1}, \hat{t}^{b}_{1} \right) \) and \(\left( \hat{b}^{(1)}_{2}, \hat{t}^{b}_{2} \right) \) from \(P_1\). If \(\mathsf{Vrfy}_{k^b_{1}} \left( 1 || \hat{b}^{(1)}_{1}, \hat{t}^{b}_{1} \right) = 0\), then \(P_2\) outputs a random sample from \({\mathcal {D}}_2\) and halts. Otherwise, \(P_2\) reconstructs \(b_{1}\) using the shares \(\hat{b}^{(1)}_{1}\) and \(b^{(2)}_{1}\).

    3. (c)

      If \(\mathsf{Vrfy}_{k^b_{2}} \left( 2 || \hat{b}^{(1)}_{2}, \hat{t}^{b}_{2} \right) = 0\), then \(P_2\) outputs \(b_{1}\) and halts. Otherwise, \(P_2\) reconstructs \(b_{2}\) using the shares \(\hat{b}^{(1)}_{2}\) and \(b^{(2)}_{2}\).

In each round \(\varvec{j = 2, \ldots , r}\) do:

  1. 1.

    \(P_2\) sends a pair of shares to \(P_1\):

    1. (a)

      \(P_2\) sends \(\left( a^{(2)}_{2j - 2}, t^{a}_{2j - 2} \right) \) and \(\left( a^{(2)}_{2j - 1}, t^{a}_{2j - 1} \right) \) to \(P_1\).

    2. (b)

      \(P_1\) receives \(\left( \hat{a}^{(2)}_{2j-2}, \hat{t}^{a}_{2j-2} \right) \) and \(\left( \hat{a}^{(2)}_{2j-1}, \hat{t}^{a}_{2j-1} \right) \) from \(P_2\). If \(\mathsf{Vrfy}_{k^a_{2j-2}} \left( 2j-2 || \hat{a}^{(2)}_{2j-2}, \hat{t}^{a}_{2j-2} \right) = 0\), then \(P_1\) outputs \(a_{2j-3}\) and halts. Otherwise, \(P_1\) reconstructs \(a_{2j-2}\) using the shares \(a^{(1)}_{2j-2}\) and \(\hat{a}^{(2)}_{2j-2}\).

    3. (c)

      If \(\mathsf{Vrfy}_{k^a_{2j-1}} \left( 2j-1 || \hat{a}^{(2)}_{2j-1}, \hat{t}^{a}_{2j-1} \right) = 0\), then \(P_1\) outputs \(a_{2j-2}\) and halts. Otherwise, \(P_1\) reconstructs \(a_{2j-1}\) using the shares \(a^{(1)}_{2j-1}\) and \(\hat{a}^{(2)}_{2j-1}\).

  2. 2.

    \(P_1\) sends a pair of shares to \(P_2\):

    1. (a)

      \(P_1\) sends \(\left( b^{(1)}_{2j-1}, t^{b}_{2j-1} \right) \) and \(\left( b^{(1)}_{2j}, t^{b}_{2j} \right) \) to \(P_2\).

    2. (b)

      \(P_2\) receives \(\left( \hat{b}^{(1)}_{2j-1}, \hat{t}^{b}_{2j-1} \right) \) and \(\left( \hat{b}^{(1)}_{2j}, \hat{t}^{b}_{2j} \right) \) from \(P_1\). If \(\mathsf{Vrfy}_{k^b_{2j-1}} \left( 2j-1 || \hat{b}^{(1)}_{2j-1}, \hat{t}^{b}_{2j-1} \right) = 0\), then \(P_2\) outputs \(b_{2j-2}\) and halts. Otherwise, \(P_2\) reconstructs \(b_{2j-1}\) using the shares \(\hat{b}^{(1)}_{2j-1}\) and \(b^{(2)}_{2j-1}\).

    3. (c)

      If \(\mathsf{Vrfy}_{k^b_{2j}} \left( 2j || \hat{b}^{(1)}_{2j}, \hat{t}^{b}_{2j} \right) = 0\), then \(P_2\) outputs \(b_{2j-1}\) and halts. Otherwise, \(P_2\) reconstructs \(b_{2j}\) using the shares \(\hat{b}^{(1)}_{2j}\) and \(b^{(2)}_{2j}\).

In round \(\varvec{r + 1}\) do:

  1. 1.

    \(P_2\) sends a share to \(P_1\):

    1. (a)

      \(P_2\) sends \(\left( a^{(2)}_{2r}, t^{a}_{2r} \right) \) to \(P_1\).

    2. (b)

      \(P_1\) receives \(\left( \hat{a}^{(2)}_{2r}, \hat{t}^{a}_{2r} \right) \) from \(P_2\). If \(\mathsf{Vrfy}_{k^a_{2r}} \left( 2r || \hat{a}^{(2)}_{2r}, \hat{t}^{a}_{2r} \right) = 0\), then \(P_1\) outputs \(a_{2r-1}\) and halts. Otherwise, \(P_1\) reconstructs \(a_{2r}\) using the shares \(a^{(1)}_{2r}\) and \(\hat{a}^{(2)}_{2r}\).

  2. 2.

    \(P_1\) and \(P_2\) output the values \(a_{2r}\) and \(b_{2r}\), respectively, and halt.
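
The interleaving described above can be summarized by the following sketch (illustrative Python), which lists, for each of the \(r + 1\) rounds, the indices of the values reconstructed by each party. It shows how the \(2r\) values are exchanged in \(r + 1\) rounds: since \(P_2\) speaks first in every round, \(P_1\) learns \(a_i\) before \(P_2\) learns \(b_i\) for odd \(i\), and the order is reversed for even \(i\).

```python
def schedule(r):
    """Indices reconstructed in each round of Sampling_r, following Sect. 4.1.

    Returns a list of triples (round, indices P1 reconstructs, indices P2
    reconstructs); within a round, P1's reconstructions happen first.
    """
    rounds = [(1, [1], [1, 2])]                                   # round 1
    for j in range(2, r + 1):                                     # rounds 2, ..., r
        rounds.append((j, [2 * j - 2, 2 * j - 1], [2 * j - 1, 2 * j]))
    rounds.append((r + 1, [2 * r], []))                           # round r + 1
    return rounds

if __name__ == "__main__":
    for rnd, p1, p2 in schedule(3):
        print(f"round {rnd}: P1 reconstructs a_i for i in {p1}, "
              f"P2 reconstructs b_i for i in {p2}")
```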

4.2 Proof of Security

We remind the reader that \(\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)\) denotes the statistical distance between the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\) and the direct product of the two marginal distributions \({\mathcal {D}}_1\) and \({\mathcal {D}}_2\). We prove the following theorem which implies Theorems 1.1 and 1.2:

Theorem 4.1

For any polynomially sampleable distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\) and polynomial \(r = r(n)\), if protocol \(\pi \) securely computes \(\mathsf{ShareGen}_r\) with abort, then \(\mathsf{Sampling}_r\) is \(\frac{\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)}{2r}\)-secure.

Proof

As in the proof of Theorem 3.1, we prove the security of the protocol in a hybrid model where a trusted party for computing \(\mathsf{ShareGen}_r\) with abort is available. For every polynomial-time hybrid-model adversary \({\mathcal {A}}\) corrupting \(P_1\) and running \(\mathsf{Sampling}_r\) in the hybrid model, we show that there exists a polynomial-time ideal-model adversary \({\mathcal {S}}\) corrupting \(P_1\) in the ideal model with access to a trusted party that samples from \({\mathcal {D}}\) such that the statistical distance between these two executions is at most \(\frac{\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)}{2r} + \nu (n)\), for some negligible function \(\nu (n)\). The proof for the case that \(P_2\) is corrupted is essentially identical and therefore is omitted. For simplicity, we ignore the aspect of message authentication and assume that the only malicious behavior of \({\mathcal {A}}\) is early abort. This does not result in any loss of generality, since there is only a negligible probability of forging an authentication tag.

On input \((1^n, \mathsf{aux})\), the ideal-model adversary \({\mathcal {S}}\) invokes the hybrid-model adversary \({\mathcal {A}}\) on \((1^n, \mathsf{aux})\) and queries the trusted party, which sends the parties a sample \((c_1, c_2)\) drawn from the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\). At this point, \({\mathcal {S}}\) receives the value \(c_1\) which was sent to the corrupted \(P_1\). \({\mathcal {S}}\) simulates the trusted party computing the \(\mathsf{ShareGen}_r\) functionality by sending \({\mathcal {A}}\) independently and uniformly chosen shares \(a^{(1)}_1, \ldots , a^{(1)}_{2r}, b^{(1)}_1, \ldots , b^{(1)}_{2r}\). If \({\mathcal {A}}\) aborts at this point, then \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts. Otherwise, \({\mathcal {S}}\) chooses \(i^* \in \{1, \ldots , 2r\}\) uniformly at random and proceeds by sending \({\mathcal {A}}\) shares \(a^{(2)}_1, \ldots , a^{(2)}_{2r}\) (in the order defined by the rounds of the protocol), where the shares are defined as follows:

  1. 1.

    For every \(i \in \{1, \ldots , i^* - 1\},\,{\mathcal {S}}\) samples a random \(a_i \leftarrow {\mathcal {D}}_1\) and sets \(a^{(2)}_i = a^{(1)}_i \oplus a_i\).

  2. 2.

    For every \(i \in \{i^*, \ldots , 2r\},\,{\mathcal {S}}\) sets \(a^{(2)}_{i} = a^{(1)}_{i} \oplus c_1\), where \(c_1\) is the value received from the trusted party.

If at some point during the simulation \({\mathcal {A}}\) aborts, then \({\mathcal {S}}\) outputs \({\mathcal {A}}\)’s output and halts.

We now consider the joint distribution of the adversary’s view and the output of the honest party \(P_2\) in the ideal model and in the hybrid model, and show that the statistical distance between the two distributions is at most \(\frac{\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)}{2r}\). As in the proof of Theorem 3.1, note that the adversary’s view is always identically distributed in both models, and therefore, we only need to consider the distribution of \(P_2\)’s output given the adversary’s view. There are two cases to consider, depending on whether \(i^*\) is even or odd.

Case 1: \(\varvec{i^* = 2j^*}\) for some \(\varvec{j^* \in \{1, \ldots , r\}}\). In this case, \(P_2\) learns its output in round \(j^*\) and \(P_1\) learns its output in round \(j^* + 1\), and we show that the two distributions are identical. There are two cases to consider:

  1. 1.

    \({\mathcal {A}}\) aborts before round \(j^* + 1\). In both models, if \({\mathcal {A}}\) aborts before round \(j^* + 1\), then it does not receive the share \(a^{(2)}_{i^*} = a^{(2)}_{2j^*}\), since this share is sent by \(P_2\) only in round \(j^* + 1\). Therefore, \({\mathcal {A}}\)’s view is independent of \(P_2\)’s output.

  2. 2.

    \({\mathcal {A}}\) aborts in round \(j^* + 1\) or in a later round, or does not abort. In this case, in both models \({\mathcal {A}}\) learns \(c_1\) and \(P_2\) outputs \(c_2\), where \((c_1, c_2)\) are sampled from the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\).

Case 2: \(\varvec{i^* = 2j^* - 1}\) for some \(\varvec{j^* \in \{1, \ldots , r\}}\). In this case, both parties learn their outputs in round \(j^*\), but \(P_1\) learns its output first. Informally, \(P_1\) can bias \(P_2\)’s output only by aborting in round \(j^*\) after receiving \(P_2\)’s message for this round. More formally, there are three cases to consider:

  1. 1.

    \({\mathcal {A}}\) aborts before round \(j^*\). In this case, the distributions are identical: In both models, the view of the adversary is the sequence of shares and the sequence of messages up to the round in which \({\mathcal {A}}\) aborted, and the output of \(P_2\) is a random sample from \({\mathcal {D}}_2\) that is independent of \({\mathcal {A}}\)’s view.

  2. 2.

    \({\mathcal {A}}\) aborts in round \(j^*\). In this case, \({\mathcal {A}}\)’s view is identical in both models, but the distributions of \(P_2\)’s output given \({\mathcal {A}}\)’s view are not identical. In the ideal model, \(P_2\) outputs the value \(c_2\) that is correlated with the value \(c_1\) that was revealed to \({\mathcal {A}}\) by \({\mathcal {S}}\) in round \(j^*\) (i.e., \((c_1, c_2)\) is sampled from the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\)). In the hybrid model, however, the output of \(P_2\) is the value \(b_{i^* - 1}\) which is a random sample from \({\mathcal {D}}_2\) that is independent of \({\mathcal {A}}\)’s view. Thus, in this case, the statistical distance between the two distributions is \(\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)\). However, this case occurs with probability at most \(1/(2r)\) since in both models \(i^*\) is odd with probability exactly \(1/2\) and is independent of \({\mathcal {A}}\)’s view until this round (that is, the probability that \({\mathcal {A}}\) aborts in round \(j^*\) is at most \(1/r\)).

  3. 3.

    \({\mathcal {A}}\) aborts after round \(j^*\) or does not abort. In this case, the distributions are identical: In both models, \({\mathcal {A}}\) learns \(c_1\) and \(P_2\) outputs \(c_2\), where \((c_1, c_2)\) are sampled from the joint distribution \({\mathcal {D}} = ({\mathcal {D}}_1, {\mathcal {D}}_2)\).

This implies that the statistical distance between the two distributions is at most \(\frac{\mathrm{SD}({\mathcal {D}}, {\mathcal {D}}_1 \otimes {\mathcal {D}}_2)}{2r}\) and concludes the proof of the theorem. \(\square \)

We conclude this section by showing that Theorem 4.1 is tight for the coin-flipping functionality: There exists an efficient adversary that can bias the output of the honest party by essentially \(1/(4r)\). This adversary is a natural generalization of the adversary presented at the end of Sect. 3.

Claim 4.2

In protocol \(\mathsf{Sampling}_r\) instantiated with the distribution \({\mathcal {D}}\) that outputs the values \((0,0)\) and \((1,1)\) each with probability \(1/2\), there exists an efficient adversarial party \(P^*_1\) that can bias the output of \(P_2\) by \(\frac{1 - 2^{-r}}{4r}\).

Proof

Consider the adversarial party \(P^*_1\) that completes the preprocessing phase and then halts in the first round \(j \in \{1, \ldots , r\}\) for which \(a_{2j - 1} = 0\). We denote by \(\mathsf{Abort}\) the random variable corresponding to the round in which \(P^*_1\) aborts, where \(\mathsf{Abort} = \bot \) if \(P^*_1\) does not abort. In addition, we denote by \(c_2\) the random variable corresponding to the output bit of \(P_2\). Notice that if \(i^*\) is even, then \(P_2\) outputs \(1\) with probability \(1/2\). Now suppose that \(i^* = 2j^* - 1\) for some \(j^* \in \{1, \ldots , r\}\); then there are two cases to consider:

  • If \(P^*_1\) aborts in round \(j \le j^*\), then \(P_2\) outputs a random bit.

  • If \(P^*_1\) does not abort, then \(P_2\) always outputs \(1\).

Therefore, for every \(j^* \in \{1, \ldots , r\}\), it holds that

$$\begin{aligned}&\mathrm{Pr} \left[ c_2 = 1 | i^* = 2j^* - 1 \right] \\&\quad = \sum _{j = 1}^{j^*} \mathrm{Pr} \left[ \mathsf{Abort} = j | i^* = 2j^* - 1 \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = j \wedge i^* = 2j^* - 1 \right] \\&\quad \quad +\,\mathrm{Pr} \left[ \mathsf{Abort} = \bot | i^* = 2j^* - 1 \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = \bot \wedge i^* = 2j^* - 1 \right] \\&\quad = \sum _{j = 1}^{j^*} \mathrm{Pr} \left[ a_1 = a_3 = \cdots = a_{2j-3} = 1, a_{2j - 1} = 0 \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = j \wedge i^* = 2j^* - 1 \right] \\&\quad \quad +\,\mathrm{Pr} \left[ a_1 = a_3 = \cdots = a_{2j^* - 3} = a_{2j^* - 1} = 1 \right] \mathrm{Pr} \left[ c_2 = 1 | \mathsf{Abort} = \bot \wedge i^* = 2j^* - 1 \right] \\&\quad = \sum _{j = 1}^{j^*} \frac{1}{2^j} \cdot \frac{1}{2} + \frac{1}{2^{j^*}} \cdot 1 \\&\quad = \frac{1}{2} + \frac{1}{2^{j^*+1}}. \end{aligned}$$

This implies that

$$\begin{aligned} \mathrm{Pr} \left[ c_2 = 1 \right]&= \mathrm{Pr} \left[ i^* \text{ is } \text{ even } \right] \mathrm{Pr} \left[ c_2 = 1 | i^* \text{ is } \text{ even } \right] \\&\quad +\,\sum _{j^* = 1}^r \mathrm{Pr} \left[ i^* = 2j^* - 1 \right] \mathrm{Pr} \left[ c_2 = 1 | i^* = 2j^* - 1 \right] \\&= \frac{1}{2} \cdot \frac{1}{2} + \sum _{j^* = 1}^r \frac{1}{2r} \left( \frac{1}{2} + \frac{1}{2^{j^*+1}} \right) \\&= \frac{1}{2} + \frac{1 - 2^{-r}}{4r}. \end{aligned}$$

\(\square \)

5 Open Problems

Identifying the Minimal Computational Assumptions Blum’s coin-flipping protocol, as well as its generalization that guarantees bias of \(O(1 / \sqrt{r})\), can rely on the existence of any one-way function. We showed that the optimal trade-off between the round complexity and the bias can be achieved assuming the existence of oblivious transfer, a complete primitive for secure computation. A challenging problem is to either achieve the optimal bias based on seemingly weaker assumptions (e.g., one-way functions) or demonstrate that oblivious transfer is in fact essential. Although meaningful progress was recently made in this direction for restricted black-box constructions by Dachman-Soled et al. [16, 17] (see Sect. 1.3), the general problem is still left open.

Identifying the Exact Trade-Off The bias of our protocol almost exactly matches Cleve’s lower bound: Cleve showed that any \(r\)-round protocol has bias at least \(1/(8r + 2)\), and we manage to achieve bias of at most \(1/(4r - c)\) for some constant \(c > 0\). It will be interesting to eliminate the multiplicative gap of \(1/2\) by either improving Cleve’s lower bound or improving our upper bound. We note, however, that this cannot be resolved by improving the security analysis of our protocol since there exists an efficient adversary that can bias our protocol by essentially \(1/(4r)\) (see Sect. 4), and therefore, our analysis is tight.

Efficient Implementation Our protocol uses a general secure computation step in the preprocessing phase. Although asymptotically optimal, the techniques used in general secure computation often have a large overhead. Hence, it would be helpful to find an efficient sub-protocol to compute the \(\mathsf{ShareGen}_r\) functionality that can lead to a practical implementation.

The Multiparty Setting Blum’s coin-flipping protocol can be extended to an \(m\)-party \(r\)-round protocol that has bias \(O(m / \sqrt{r})\), and several exciting results for restricted cases of the multiparty setting were recently achieved by Beimel et al. [7] and by Haitner and Tsfadia [24] (see Sect. 1.3). However, the general problem of identifying the optimal trade-off between the number of parties, the round complexity, and the bias is still left open.