On the (Im)Possibility of Extending Coin Toss
Abstract
We consider the task of extending a given coin toss. By this, we mean the two-party task of using a single instance of a given coin toss protocol in order to interactively generate more random coins. A bit more formally, our goal is to generate n common random coins from a single use of an ideal functionality that gives \(m<n\) common random coins to both parties. In the framework of universal composability, we show the impossibility of securely extending a coin toss for statistical and perfect security. On the other hand, for computational security, the existence of a protocol for coin toss extension depends on the number m of random coins that can be obtained “for free.” For the case of standalone security, i.e., a simulation-based security definition without an environment, we present a protocol for statistically secure coin toss extension. Our protocol works for superlogarithmic m, which is optimal as we show the impossibility of statistically secure coin toss extension for smaller m. Combining our results with already known results, we obtain a (nearly) complete characterization under which circumstances coin toss extension is possible.
Keywords
Coin toss · Universal composability · Reactive simulatability · Cryptographic protocols

1 Introduction
Blum showed in [6] how to flip a coin over the telephone line. His protocol guarantees that even if one party does not follow the protocol, the other party still gets a uniformly distributed coin toss outcome. This general concept of generating common randomness in a way such that no dishonest party can dictate the outcome proved very useful in cryptography, for example, in the construction of protocols for general secure multiparty computation.
Here, we are interested in the task of extending a given coin toss. That is, suppose that two parties already have the possibility of making a single \(m\)-bit coin toss. Is it possible for them to get \(n>m\) bits of common randomness? The answer we come up with is basically: “It depends on the security model and on the length of the coin toss used as seed.”
The first thing the extensibility of a given coin toss depends on is the required kind of security. In this work, we will consider simulation-based security notions, in which a protocol is secure if and only if it “imitates” an ideal functionality. For the case of coin toss, this ideal functionality will act as a trusted host that simply equips both parties with common random coins. However, we stress that we model an interactive coin toss protocol. Hence, the coin toss ideal functionality first expects an “activation signal” from both parties before handing out the random coins. This is quite different from a “common random string” (CRS) functionality that does not require such activation signals. (In fact, in this work, we will also investigate CRSs and CRS extension protocols, with somewhat different results compared to the coin toss case.)
A little more technically, one specific kind of security requirement (which we call “standalone simulatability” here) is that the protocol imitates the ideal coin toss functionality in the sense of [14], where a simulator has to invent a realistic protocol run after learning the outcome of the ideal coin toss. A stronger type of requirement is to demand universal composability, which basically means that the protocol imitates an ideal coin toss functionality even in arbitrary protocol environments. Security in the latter sense can conveniently be captured in a simulatability framework like the universal composability framework [7] (see also [19]) or the reactive simulatability model [3, 24].
Orthogonal to this, one can vary the level of fulfillment of each of these security requirements. For example, one can demand standalone simulatability of the protocol with respect to polynomial-time adversaries in the sense that real protocol and ideal functionality are only computationally indistinguishable. This specific requirement is already fulfilled by the protocol of Blum. Alternatively, one can demand, for example, universal composability of the protocol with respect to unbounded adversaries. This would then yield statistical or even perfect security. We show that whether such a protocol exists depends on the asymptotic behavior of m.
Finally, we clarify that in this paper, we consider coin toss protocols that do not necessarily guarantee output in case one party is corrupted. (We only require that when both parties are honest and all messages are delivered, both parties will give the same output.) Our definition aligns in a natural way with other simulation-based definitions (and in particular, universal composability) that usually do not guarantee output. Yet, our definition is weaker than, for example, the ones considered by Cleve [11] and Moran et al. [23] (which guarantee a uniformly distributed output in any case). Specifically, in our case, a dishonest party could abort the protocol (and potentially cause the other party not to output anything) once it learns that the result would have been unfavorable. We consider this weaker notion for both the assumed and the achieved coin toss in a coin toss extension protocol. Hence, it is not clear to what extent even our negative results also hold for the stronger notion of coin toss. We note, however, that at least for standalone statistical security, we give a positive result (i.e., a coin toss extension protocol) that does guarantee output.
Summary of our results on coin toss extension
Security model              Computational   Statistical   Perfect
Standalone simulatability   Yes             Depends       No
Universal composability     Depends         No            No
1.1 Known Results in the Perfect and Statistical Case
A folklore theorem states that (perfectly nontrivial) statistically secure coin toss is impossible from scratch (even in very lenient security models). Kitaev extended this result to protocols using quantum communication (cf. [1]). The task of extending a given coin toss was first investigated by Bellare et al. [4]. They presented a statistically secure protocol for extending a given coin toss (pre-shared among many parties using a verifiable secret sharing scheme), if less than \(\frac{1}{6}\) of all parties are corrupted. Their result does not apply to the two-party case.
1.2 Our Results in the Perfect and Statistical Case
Our results in the perfect case are most easily explained. For the perfect case, we show impossibility of any coin toss extension, no matter how (in)efficient. We show this for standalone simulatability (Corollary 11) and for universal composability (Corollary 16).
We first observe (and abstract in a helper lemma) that we may assume that any (not necessarily efficient) coin toss extension protocol has a certain outer form, both in the standalone and UC security settings. Most interestingly, we may assume that the protocol partners run the \(m\)-bit coin toss only at the end of the protocol, after all party-to-party communication. A little more formally, we show that every coin toss extension protocol can be transformed into an inefficient one in which parties do not communicate any more after initiating the \(m\)-bit coin toss. Our transformation runs \(2^m\) instances of the original protocol in parallel, one for each possible seed (i.e., outcome of the \(m\)-bit coin toss). Note that these instances can all be run without knowledge of the actual seed. The \(m\)-bit coin toss is only initiated after all instances have terminated, and the resulting seed selects the protocol instance whose outcome is returned. We will show in the proof of the helper lemma that this modified protocol provides a secure coin toss, assuming the original protocol is secure.
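As a toy illustration of the seed-deferring transformation (the `run_protocol` abstraction and all names are ours, not from the paper; a real protocol is interactive and randomized, which this sketch glosses over):

```python
import secrets

def transformed_protocol(run_protocol, m):
    """Toy sketch of the seed-deferring transformation: run one copy
    of the original protocol for every possible m-bit seed, then
    invoke the m-bit coin toss last and use it only for selection."""
    # Run 2^m instances in parallel; none of them needs the seed.
    outcomes = [run_protocol(seed) for seed in range(2 ** m)]
    # Only now invoke the m-bit coin toss (modeled as a fair choice).
    seed = secrets.randbelow(2 ** m)
    return outcomes[seed]

# A toy instance: the "protocol" maps a seed s to the pair (s, s + 1),
# purely for illustration.
result = transformed_protocol(lambda s: (s, s + 1), m=3)
```

Note that the transformation trades efficiency (a factor of \(2^m\)) for the structural property that no communication happens after the seed is requested.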
We now outline our argument for the standalone case. The impossibility of perfectly secure coin toss extension in the case of universal composability then follows directly from that in the case of standalone simulatability because universal composability implies standalone simulatability.
So assume a coin toss extension protocol that extends an \(m\)-bit coin toss to an \(n\)-bit outcome in the perfect standalone setting. By the transformation described above, we may assume without loss of generality that the protocol has the outer form just discussed: the \(m\)-bit coin toss is run only at the end of the protocol, and the parties do not communicate any more after initiating it.
Now a run of a protocol (of the form above) up to the point where the \(m\)-bit coin toss is started yields a set of \(2^m\) possible outcomes, each with probability \(2^{-m}\) (corresponding to the probability of each single possible seed). This protocol run without the last step (i.e., without the \(m\)-bit coin toss) can hence be interpreted as a finite game with complete information. At the end of that game, there are at most \(2^m\) possible candidates for the final outcome.
The goal of the game for a corrupted Alice is to end in a state in which the all-zero string has a probability greater than zero (and thus at least \(2^{-m}\)), whereas a corrupted Bob will try to end in a state in which the all-zero string has probability 0. In a finite game like this, one of the two players has a winning strategy: either Alice can make the probability of the all-zero string nonzero (and thus \(\ge 2^{-m}>2^{-n}\)), or Bob can make the probability of the all-zero string equal to zero. In either case, we have a contradiction to the perfect security of the coin toss extension (in which the probability of an all-zero outcome of the whole protocol is exactly \(2^{-n}\)).
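The winner of such a finite game can be determined by backward induction, which can be illustrated on a toy game tree (the encoding of protocol states as nested lists and leaves as sets of still-possible outcomes is our own simplification):

```python
def alice_can_force_zero_possible(node, alice_moves):
    """Backward induction on a finite game tree. A leaf is a
    frozenset of candidate outcomes (at most 2^m of them); an inner
    node is a list of child nodes reachable by the moving player.
    Alice "wins" iff the all-zero string can remain possible at the
    reached leaf; Bob tries to exclude it."""
    if isinstance(node, frozenset):          # leaf: candidate set
        return "00" in node
    children = (alice_can_force_zero_possible(c, not alice_moves)
                for c in node)
    # Alice needs one good move; against Bob, all his moves must be good.
    return any(children) if alice_moves else all(children)

# Tiny example: Alice moves first, then Bob picks a leaf.
leafA = frozenset({"00", "01"})
leafB = frozenset({"10", "11"})
tree = [[leafA, leafB],    # left branch: Bob can still dodge to leafB
        [leafB, leafB]]    # right branch: "00" already excluded
print(alice_can_force_zero_possible(tree, alice_moves=True))  # → False
```

In this toy tree Bob has the winning strategy; by determinacy of finite games with complete information, exactly one of the two cases from the text always occurs.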
Now for the statistical case. When demanding only standalone simulatability, the situation depends on the number of already available common coins. Namely, we give an efficient protocol to extend m common coins to any polynomial number (in the security parameter), provided m is superlogarithmic. The basic idea of the protocol is to have Alice and Bob each provide a bit string. The final outcome of the coin toss is then computed by applying a randomness extractor to (the concatenation of) both bit strings. The outcome of the given \(m\)-bit coin toss is used as the seed for the randomness extraction. (See Theorem 14 and its proof for details.)
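A minimal sketch of this protocol shape, using a Carter–Wegman universal hash as a stand-in for the extractor of Theorem 14 (all names, the field size, and the way the seed is modeled are illustrative assumptions of ours; only the honest execution is shown):

```python
import secrets

# A large Mersenne prime, so the hash range comfortably covers n bits.
PRIME = (1 << 521) - 1

def universal_hash(seed_a, seed_b, x, n_bits):
    """((a*x + b) mod p) mod 2^n: a universal hash family, used here
    as a toy stand-in for the seeded randomness extractor."""
    return ((seed_a * x + seed_b) % PRIME) % (2 ** n_bits)

def extend_coin_toss(n_bits):
    # Each party contributes a random string (honest execution).
    alice = secrets.randbelow(PRIME)
    bob = secrets.randbelow(PRIME)
    # The given m-bit coin toss supplies the extractor seed, and is
    # invoked only after both contributions are fixed (modeled here
    # as two fresh field elements).
    seed_a = secrets.randbelow(PRIME - 1) + 1
    seed_b = secrets.randbelow(PRIME)
    # Outcome: extractor applied to the combined contributions.
    return universal_hash(seed_a, seed_b, alice ^ bob, n_bits)

coins = extend_coin_toss(n_bits=128)
```

The essential point mirrored here is the ordering: the seed is revealed only after both parties are committed to their contributions.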
A standalone coin toss extension even from m to \(m+1\) bits is impossible in the statistical case if the seed is too short, i.e., not superlogarithmic (Corollary 11). To sketch our argument, assume (for contradiction) a coin toss extension protocol that achieves statistical security for \(m\) that is not superlogarithmic. Without loss of generality, we may again assume that the \(m\)-bit coin toss used as seed is only queried after all party-to-party communication. We then show that there is a reachable protocol state p for which at least one of the following holds:
(1) Starting from p, an optimally playing Alice will “win,” in the sense that in the last protocol step before the seed is chosen, the all-zero string has a probability not equal to zero. This nonzero probability is taken only over the value of the seed and hence must be noticeable (because the seed is short). In particular, the probability for an all-zero outcome is noticeably different from an ideal coin toss.

(2) Starting from p, the protocol will not abort, and an optimally playing Bob “wins,” in the sense that the probability of the all-zero string is zero. Since \(m\) (and thus also \(m+1\)) is not superlogarithmic, this also constitutes a noticeable difference to an ideal coin toss.

(3) Starting from p, the protocol will abort with a nonzero probability even if both parties are uncorrupted.
In the statistical universal composability setting, the situation is clearer: We show that there is no protocol with polynomially many rounds that extends \(m\) to \(m+1\) coins, no matter how large m is (Theorem 15). (Note, however, that our result does not exclude the existence of coin toss extension protocols that run in a superpolynomial number of rounds.)
As above, we may assume that the protocol obtains the seed in the last protocol step. The core of our proof rests on the following observation: Given the communication between the environment and the adversary up to the point when the seed is chosen, at least half of the strings from \(\{0,1\}^n\) are no longer possible outcomes. The protocol proceeds in polynomially many rounds, and at the end (before the seed is chosen), a superpolynomial number of strings has become impossible. Hence, there must have been a single “critical message” excluding a superpolynomial number of strings. The environment lets the adversary corrupt a party that sends such a critical message. Then, the environment chooses at random whether the critical message is sent or replaced by a different message. Replacing the critical message then has a noticeable effect on the probability distribution of the final outcome; this effect cannot be mimicked by the simulator in the ideal model.
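The counting step behind the “critical message” is a simple pigeonhole bound and can be checked with concrete numbers (the function name and the toy parameters are ours):

```python
def min_worst_message_exclusions(n, rounds):
    """Pigeonhole bound: if at least half of the 2^n candidate
    outcomes must be excluded over `rounds` messages in total, then
    some single message excludes at least this many strings."""
    total_excluded = 2 ** (n - 1)
    return -(-total_excluded // rounds)   # ceiling division

# Toy numbers: n = 80 output bits, 1000 protocol messages.
bound = min_worst_message_exclusions(80, 1000)
```

For any polynomial number of rounds, this bound stays superpolynomial in the security parameter, which is exactly what the argument needs.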
1.3 Known Results in the Computational Case
In [6], Blum gave a computationally secure coin toss protocol. In [14, Proposition 7.4.8], this protocol is shown to be standalone simulatable, and together with the sequential composition theorem [14, Proposition 7.4.3] for standalone simulatable protocols, this gives a computationally standalone simulatable protocol for tossing polynomially many coins. This makes coin toss extension trivial in that setting: one simply ignores the \(m\)-bit coin toss and tosses n bits from scratch.
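For illustration, Blum's protocol can be sketched as follows, with a hash-based commitment as a toy stand-in for the commitment scheme it assumes (this is our own simplification, and only the honest execution is shown):

```python
import hashlib
import secrets

def commit(bit):
    """Toy commitment: hash of the bit with fresh randomness.
    (A stand-in; Blum's protocol assumes a secure bit commitment.)"""
    r = secrets.token_bytes(16)
    c = hashlib.sha256(bytes([bit]) + r).hexdigest()
    return c, (bit, r)

def verify(c, opening):
    bit, r = opening
    return hashlib.sha256(bytes([bit]) + r).hexdigest() == c

def blum_coin_toss():
    a = secrets.randbelow(2)       # Alice picks her bit
    c, opening = commit(a)         # ... and sends a commitment to it
    b = secrets.randbelow(2)       # Bob replies with his bit in the clear
    assert verify(c, opening)      # Alice opens; Bob checks the opening
    return a ^ b                   # joint coin: XOR of both bits

coin = blum_coin_toss()
```

Since Alice is bound to her bit before seeing Bob's, and Bob must answer before the commitment is opened, neither party can bias the XOR.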
In the computational universal composability setting, it has been shown in [8] that coin toss cannot be achieved from scratch. However, the same work showed that a sufficiently large common random string (CRS) implies (an arbitrary number of) ideal bit commitments. Such ideal bit commitments allow the implementation of a coin toss of arbitrary length (e.g., using Blum’s protocol [6]). Thus, a sufficiently large CRS (and therefore also a sufficiently large coin toss) can be extended to any polynomial length. However, it was unclear what minimum size is required of the CRS or the coin toss.
Note that there is a subtle difference between the notion of a CRS and that of a coin toss. A CRS is randomness that is available to all parties at the beginning of the protocol, while with a coin toss, the randomness is only generated when all parties agree to run the coin toss. This makes the coin toss the stronger primitive, since in some situations it is necessary to guarantee that not even corrupted parties learn the outcome of the coin toss prior to a given protocol step.
In [11], the task of coin toss is considered in a scenario slightly different from ours.^{1} Cleve [11] shows that in that setting, coin toss is generally not possible even against computationally limited adversaries. However, to the best of our knowledge, the extension of a given coin toss has not previously been considered in the computational setting.
1.4 Our Results in the Computational Case
We show that under suitable (and plausible) computational assumptions, it is possible to extend any coin toss of superlogarithmic length \(m\). (Recall that if \(m\) is not superlogarithmic, we show, unconditionally, that coin toss extension is not possible. Hence, this positive result complements our negative result from Theorem 8, albeit under certain computational assumptions.) More specifically, the authors of [9] show that when assuming the existence of (polynomially secure) dense pseudorandom permutations, an \(m\)-bit coin toss (for linear \(m\)) can be used to implement (arbitrarily many) bit commitments. These bit commitments can then be used in the coin-tossing protocol of Blum [6] to derive arbitrarily many fresh random coins. We show (in Theorem 7) that by suitably scaling the security parameter of this construction, an \(m\)-bit coin toss can be extended for any superlogarithmic \(m\), assuming exponentially strong dense pseudorandom permutations. We leave it as an open problem to find coin toss extension protocols under weaker assumptions.
1.5 CRS Extension
A common random string (CRS) can be considered to be cryptographically weaker than a coin toss functionality. The random string of a coin toss functionality is chosen only after both parties have initiated the functionality. A CRS does not give such a guarantee and the adversary may know the value of the CRS from the start. Our coin toss extension protocol from Theorem 14 strongly depends on this guarantee; it is vital that the adversary cannot make his choices dependent on the seed before both parties have initiated the choice of the seed. Hence, the results for coin toss extension do not immediately apply to the task of CRS extension.
Summary of our results on CRS extension
Security type               Computational   Statistical   Perfect
Standalone simulatability   Yes             No            No
Universal composability     Depends         No            No
Differences to the case of coin toss extension are printed in boldface. A CRS extension is impossible in the case of statistical standalone simulatability even for long seeds. To show the impossibility, we look at the protocol after the CRS has been chosen. Given a concrete CRS s to extend, some bits of the protocol outcome should be undetermined at the start of the protocol. Otherwise, the resulting extended CRS would have at most the entropy of s. However, given a concrete CRS s, the situation for the undetermined bits is similar to a coin toss from scratch. These bits can be biased by Alice, or they can be biased by Bob. This proof illustrates the difference between a coin toss and a CRS. Recall that in the coin toss case, a given \(m\)-bit coin toss can be moved to the end of a coin toss extension protocol without loss of generality. We stress, however, that the choice of a CRS always happens at the beginning of the protocol.
For the computational case, the results correspond to the findings for coin toss extension (with roughly the same proofs).
1.6 Notation

- A function f is negligible, if for any \(c>0\), \(f(k)\le k^{-c}\) for sufficiently large k.
- A function f is non-negligible, if it is not negligible, i.e., if there is a \(c>0\) such that \(f(k)>k^{-c}\) for infinitely many \(k\) (not to be confused with noticeable).
- f is noticeable, if for some \(c>0\), \(f(k)\ge k^{-c}\) for sufficiently large k. Note that functions exist that are neither negligible nor noticeable.
- f is exponentially small, if there exists a \(c>0\), such that \(f(k)\le 2^{-k^c}\) for sufficiently large k.
- f is overwhelming, if \(1-f\) is negligible.
- f is polynomially bounded, if for some \(c>0\), \(f(k)\le k^c\) for sufficiently large k.
- f is polynomially large, if there is a \(c>0\) such that \(f(k)^c\ge k\) for sufficiently large k.
- f is superpolynomial, if for any \(c>0\), \(f(k)>k^c\) for sufficiently large k.
- f is superlogarithmic, if \(f/\log k\rightarrow \infty \) (i.e., \(f\in \omega (\log k)\)). It is easy to see that f is superlogarithmic if and only if \(2^{-f}\) is negligible.
- f is superpolylogarithmic, if for any \(c>0\), \(f(k)>(\log k)^c\) for sufficiently large k.
- f is subexponential, if for any \(c>0\), \(f(k)<2^{k^c}\) for sufficiently large k.
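For intuition on the superlogarithmic case, the equivalence with \(2^{-f}\) being negligible can be probed numerically: for \(f(k)=(\log_2 k)^2\) we have \(2^{-f(k)}=k^{-\log_2 k}\), which eventually undercuts every fixed inverse polynomial (a numeric sanity check of ours, not a proof):

```python
import math

def two_to_minus_f(k):
    f = math.log2(k) ** 2        # a superlogarithmic function of k
    return 2.0 ** (-f)           # equals k ** (-log2(k))

# At k = 2^10, log2(k) = 10, so 2^{-f(k)} = k^{-10}, which is already
# below the fixed inverse polynomial k^{-9}.
k = 2 ** 10
assert two_to_minus_f(k) < k ** -9
```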
2 Security Definitions
In this section, we roughly sketch the security definitions used throughout this paper. We distinguish between two notions: standalone simulatability as defined in [14]^{2} and universal composability (UC) as defined in [7].
2.1 Standalone Simulatability
In [14], a definition for the security of twoparty secure function evaluations is given (called security in the malicious model). We will give a sketch; for more details, we refer to [14].
A protocol consists of two parties that alternately send messages to each other. The parties may also invoke an ideal functionality as an oracle. In our case, the parties invoke a smaller coin toss to realize a larger one. We remark that the ideal functionality can only be invoked once; thus, in our case, the parties only have access to a single smaller coin toss.

The real protocol execution. This consists of the view of the corrupted parties upon inputs \(x_1\) and \(x_2\) for the parties and the auxiliary input z for the adversary, together with the outputs I of the parties.

The ideal protocol execution. Here, the simulator first learns the auxiliary input z and possibly the input of the corrupted party (the simulator must corrupt the same party as the adversary). Then, he can choose the input of the corrupted party for the probabilistic function f, while the other inputs are chosen honestly (i.e., the first input is \(x_1\) if the first party is uncorrupted, and the second input is \(x_2\) if the second party is). Then, the simulator learns the output I of f (we assume the output to be equal for all parties). He may now generate a fake view v of the corrupted parties. The ideal protocol execution then consists of v and I.
What we have sketched above is what we call computational standalone simulatability. We further define statistical standalone simulatability and perfect standalone simulatability. In these cases, we do not consider efficient adversaries and simulators, but computationally unbounded ones. In the case of statistical standalone simulatability, we require the real and ideal protocol execution to be statistically (and not only computationally) indistinguishable, and in the perfect case, we even require these distributions to be identical.
2.2 Universal Composability
In contrast to standalone simulatability, universal composability [7] is a much stricter security notion. The main difference is the existence of an environment that may interact with protocol and adversary (or with ideal functionality and simulator) and try to distinguish between real and ideal protocol. This additional strictness brings the advantage of a versatile composition theorem (the UC composition theorem [7]). We only sketch the model here and refer to [7] for details.
A protocol consists of several machines that may (a) get input from the environment (also during the execution of the protocol), (b) give output to the environment (also during the execution of the protocol), and (c) send messages to each other.
The real protocol execution consists of a protocol \(\pi \), an adversary \(\mathcal {A}\), and an environment \(\mathcal {Z}\). Here, the environment may freely communicate with the adversary, and the latter has full control over the network, i.e., it may deliver, delay, or drop messages sent between parties. We assume the authenticated model in this paper, so the adversary learns the content of the messages but may not modify it. When \(\mathcal {Z}\) terminates, it gives an output. The adversary may choose to corrupt parties at any point in time.^{3}
The ideal protocol execution is defined analogously, but instead of a protocol \(\pi \), there is an ideal functionality \(\mathcal {F}\), and instead of the adversary, there is a simulator \(\mathcal {S}\). The simulator can also corrupt parties, but does not see any inputs/outputs exchanged between uncorrupted parties and the ideal functionality. If the simulator corrupts a party, the simulator can choose all inputs from that party into the functionality and get the corresponding outputs to that party. Uncorrupted parties simply act as relays (or “dummy parties”) who forward inputs/outputs between \(\mathcal {Z}\) and \(\mathcal {F}\).
The hybrid protocol execution is defined like the real protocol execution, except that parties also have access to an ideal functionality (also called hybrid functionality in this context), in addition to their ability to communicate over the network. The adversary \(\mathcal {A}\) controls the network in the same way as in the real protocol execution, but cannot control communication to/from the hybrid functionality (like in the ideal protocol execution). We remark that while Canetti [7] allows parties access to an unbounded number of instances of hybrid functionalities, here we are only interested in protocols that invoke and access at most one instance.
We say a protocol \(\pi \) universally composably (UC)implements an ideal functionality \(\mathcal {F}\) (or, if \(\mathcal {F}\) is clear from the context: That \(\pi \) is universally composable), if for any efficient adversary \(\mathcal {A}\), there is an efficient simulator \(\mathcal {S}\), such that for all efficient environments \(\mathcal {Z}\) and all auxiliary inputs z for \(\mathcal {Z}\), the distributions of the output of \(\mathcal {Z}\) in the real^{4} and the ideal protocol executions are computationally indistinguishable.^{5}
What has been sketched above is called computational UC. We further define statistical and perfect UC. In these notions, we allow adversary, simulator, and environment to be computationally unbounded machines, and in the statistical case, we require the distributions of the output of \(\mathcal {Z}\) in the real/hybrid and ideal protocol executions to be statistically indistinguishable. In the case of perfect UC, we require these distributions to be identical.
2.3 The Ideal Functionality for Coin Toss
To describe the task of implementing a universally composable coin toss, we have to define the ideal functionality of n-bit coin toss. In the following, let n denote a positive integer-valued function.
Below is an informal description of our ideal functionality for an n-bit coin toss. First, the functionality waits for initialization inputs from both parties \(P_1\) and \(P_2\).^{6} As soon as both parties have thus signaled their willingness to start, the functionality selects n coins in the form of an n-bit string \(\kappa \) uniformly at random and sends this \(\kappa \) to the adversary; the parties obtain \(\kappa \) once the adversary schedules delivery. (Note that a coin toss does not guarantee secrecy of any kind.)^{7}
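The informal description above can be mirrored in a toy model (our own sketch; adversarial scheduling of output delivery is simplified away, and the coins are returned directly):

```python
import secrets

class CoinTossFunctionality:
    """Toy model of the ideal n-bit coin toss CT_n: it waits for an
    initialization signal from both parties and only then samples
    the n-bit string kappa."""
    def __init__(self, n):
        self.n = n
        self.initialized = set()
        self.kappa = None

    def init(self, party):
        assert party in (1, 2)
        self.initialized.add(party)
        if self.initialized == {1, 2} and self.kappa is None:
            self.kappa = secrets.randbits(self.n)
        return self.kappa        # None until both parties signaled

ct = CoinTossFunctionality(8)
first = ct.init(1)               # only one party has signaled so far
kappa = ct.init(2)               # now both: the coins are sampled
```

The point of the activation signals is visible here: no randomness exists before both parties have agreed to run the coin toss.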
We will consider protocols that implement \(\mathsf {CT}_n\) (either in the sense of standalone or UC security). Unfortunately, the trivial protocol (that never generates any output) implements any functionality. (The corresponding simulator simply never delivers any outputs.) Hence, we require the following definition (see also [9] and [2, Sect. 5.1]):
Definition 1
A two-party protocol \(\pi \) is nontrivial if the probability is overwhelming that both parties generate identical outputs in a setting in which both parties are honest and all messages are delivered. \(\pi \) is perfectly nontrivial if that probability equals 1.
Using \(\mathsf {CT}_n\), we can also formally express what we mean by extending a coin toss. Namely:
Definition 2
Let \(n=n(k)\) and \(m=m(k)\) be positive, polynomially bounded, and computable functions such that \(m(k)<n(k)\) for all k. Then, a protocol is a universally composable \((m\rightarrow n)\)-coin toss extension protocol if it is nontrivial and securely implements \(\mathsf {CT}_n\) by having access only to a single instance of \(\mathsf {CT}_m\). This security can be computational, statistical, or perfect.
2.4 The Ideal Functionality for Common Random Strings
Definition 3
Let \(n=n(k)\) and \(m=m(k)\) be positive, polynomially bounded, and computable functions such that \(m(k)<n(k)\) for all k. Then, a protocol is a universally composable \((m\rightarrow n)\)-CRS extension protocol if it securely and nontrivially implements \(\mathsf {CRS}_n\) by having access only to \(\mathsf {CRS}_m\). This security can be computational, statistical, or perfect.
2.5 On Unbounded Simulators
Following [3], we have modeled statistical and perfect standalone and UC security using computationally unbounded simulators. Another approach is to require the simulators to be polynomial in the running time of the adversary. Our results also hold in such a model. For the impossibility results, this is straightforward, since the security notion gets stricter when the simulators become more restricted. The only possibility result for statistical/perfect security is given in Theorem 14. There, the simulator we construct is in fact polynomial in the runtime of the adversary.
In the following sections, we investigate the existence of such coin toss extension protocols, depending on the desired security level (i.e., computational / statistical / perfect security) and the parameters n and m.
3 Coin Toss Extension: The Computational Case
3.1 Universal Composability
In this section, we present two positive results (combined in Theorem 7) and a negative result (Theorem 8). Our positive results state that as long as \(m\) is superlogarithmic, we can achieve coin toss extension (under a computational assumption, whose strength depends on \(m\)). Our negative result states that for nonsuperlogarithmic \(m\), no coin toss extension is possible (unconditionally).
In the following, we start with our positive results and first need to introduce the corresponding computational assumption. Specifically, we need the assumption of (doubly) enhanced trapdoor permutations with pseudorandom public keys (called ETDs henceforth). Roughly, these are trapdoor permutations with the additional properties that (i) one can sample an image of the permutation in an oblivious fashion, i.e., even given the coins used for sampling of the image, it is infeasible to invert the function, (ii) one can sample a uniform preimage along with the random coins needed to sample the corresponding image, and (iii) the public keys are computationally indistinguishable from random strings.
We inherit the assumption that ETDs exist from [9] (they use it for the case of a uniform CRS). Although we are not aware of any concrete candidates for ETDs, that assumption seems plausible.
We will show that when ETDs exist, then so do efficient protocols for extending a suitably long coin toss. The idea is simple: First, a suitably long coin toss trivially implements a common random string (CRS), from which we can bootstrap a UCsecure multiuse bit commitment protocol using the techniques of [9]. (This is captured in Lemmas 5 and 6.) Next, the UCsecure bit commitment protocol can be used to implement any polynomially long coin toss, using Blum’s coin toss protocol [6]. (This is detailed in Theorem 7.)
We start off with the definition of ETDs, which follows that of doubly enhanced trapdoor permutations in [15]; only the requirement of pseudorandom public keys has been added.
Definition 4
(Doubly enhanced trapdoor permutations with pseudorandom public keys)
A system of doubly enhanced^{8} trapdoor permutations with pseudorandom public keys (ETD) consists of the following efficient algorithms: a key generation algorithm I that (given security parameter k) generates public keys \( pk \) and corresponding trapdoors \( td \) (we treat \( pk \) and \( td \) as efficiently computable functions to facilitate notation) and a domain sampling algorithm S that given \( pk \) outputs an element in the domain of \( pk \). Additionally, the ETD defines a set of valid public keys. I, S must satisfy the following conditions:

Permutations Open image in new window , and any valid public key is a permutation.

Almost uniform sampling. For any valid public key \( pk \) in the range of \(I(\mathtt 1^k)\), the statistical distance between the output of \(S( pk )\) and randomly chosen elements in the domain (=range) of \( pk \) is bounded by \(\mu (k)\).
 Enhanced hardness For all Open image in new window Here, r denotes the randomness used by S.

Doubly enhanced There exists an efficient algorithm that, on input a valid public key \( pk \), outputs (x, r) with \( pk (x)=S( pk )\), where r denotes the randomness used by S and is distributed uniformly. In other words, it is efficiently possible to sample a preimage x along with the random coins needed to choose the image \( pk (x)\).
Pseudorandom public keys There is a polynomially bounded, efficiently computable function s (not depending on A) such that for every polynomial-time algorithm A, the advantage \(\bigl |\Pr [A(\mathtt 1^k, pk )=1]-\Pr [A(\mathtt 1^k,U_{s(k)})=1]\bigr |\) is negligible in k, where \( pk \) is generated by \(I(\mathtt 1^k)\) and \(U_{s(k)}\) denotes a uniformly random s(k)-bit string.
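To make the interface of Definition 4 concrete, the following toy instantiation (ours, and deliberately insecure: modular addition satisfies the structural conditions but clearly not enhanced hardness) shows the syntax of I, S, and doubly enhanced sampling:

```python
import os

class ToyETD:
    """Toy stand-in for an ETD: pk(x) = x + pk mod 2^k.

    This matches the *syntax* of Definition 4 only: public keys are uniform
    k-bit strings (hence trivially pseudorandom), every key describes a
    permutation, and doubly enhanced sampling is possible. Enhanced
    hardness fails completely, since the permutation is easy to invert.
    """

    def __init__(self, k):
        self.k = k          # security parameter (a multiple of 8 here)

    def keygen(self):       # algorithm I: here pk equals the trapdoor td
        a = int.from_bytes(os.urandom(self.k // 8), "big")
        return a, a

    def evaluate(self, pk, x):          # the permutation pk(.)
        return (x + pk) % (1 << self.k)

    def invert(self, td, y):            # inversion via the trapdoor
        return (y - td) % (1 << self.k)

    def sample(self, pk, r):            # sampler S(pk; r): simply r itself
        return r % (1 << self.k)

    def enhanced_sample(self, pk):
        """Output (x, r) with pk(x) = S(pk; r), as in the 'doubly
        enhanced' condition. (Possible here without the trapdoor only
        because the toy permutation is easy to invert.)"""
        r = int.from_bytes(os.urandom(self.k // 8), "big")
        return self.invert(pk, self.sample(pk, r)), r

etd = ToyETD(64)
pk, td = etd.keygen()
x, r = etd.enhanced_sample(pk)
assert etd.evaluate(pk, x) == etd.sample(pk, r)
```

A real ETD would additionally make `invert` infeasible without `td`; the class is only meant to make the four conditions of the definition tangible.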
Lemma 5
There is a constant \(d\in \mathbb {N}\) such that the following holds for all polynomially bounded functions s computable in time polynomial in k:
Assume that ETD exists such that the size of the circuits describing the ETD is bounded by s(k) for security parameter k.^{9}
Then, there is a protocol \(\pi \) using a uniform common random string (CRS) of length \(s(k)^d\) such that \(\pi \) securely UC-realizes a bit commitment that can be used polynomially many times.
Proof
The main work (i.e., finding the protocol and proving its security) has been done in [9]. It is left to show that for their construction, a CRS of length \( poly (s)\) is sufficient. By \( poly (s)\), we mean a polynomially bounded function in s that is independent of the chosen ETD. (In [9], it is only shown that a CRS of length p(k) is sufficient, where k is the security parameter and p a polynomial depending on the ETD.)
In [9], there is a protocol \(\textsf {UAHC}\) that, assuming a CRS and the existence of ETD, implements multiple commitments.^{10} The CRS is assumed to contain the following: (i) a random image under a one-way function \(f_k\) (that depends on the security parameter k), (ii) a public key for a semantically secure cryptosystem E, and (iii) a public key for a CCA2-secure cryptosystem \(E_\mathrm{cca}\). We discuss how to instantiate \(f_k,E,E_\mathrm{cca}\) so that we get a CRS that is indistinguishable from uniform.
The one-way function f may be constructed from the ETD as follows: f interprets its input r as randomness to be used in the ETD key generation algorithm and outputs the resulting public key. Then, for security parameter k, the images of f have length \(s_1\le s\) (since they are public keys). Further, since the public keys are indistinguishable from uniform randomness by definition of the ETD, random images of f are computationally indistinguishable from \(s_1\)-bit randomness.
Second, a semantically secure cryptosystem E can be constructed from the ETD using the construction from [16, 17]. Then, the public key for E is just a public key for the ETD. It follows that the length of the public keys is \(s_1(k)\), and random public keys are indistinguishable from \(s_1\)-bit randomness.
The construction of \(E_\mathrm{cca}\) is more involved but still standard. Specifically, we use the construction by Dolev et al. [12]. For this, we first need a non-interactive zero-knowledge proof system (NIZK) to prove consistency of a ciphertext. For instance, [13, Constructions 4.10.4 and 4.10.7] together with the additional remarks in [15] present a suitable scheme, based on doubly enhanced trapdoor permutations. We will now examine the size of the CRS needed for that protocol. To prove a statement that is described by a circuit of size \(s_2\), the CRS consists—for one iteration of the proof—of \( poly (s_2)\) commitments to random bits using a trapdoor permutation. The length of each commitment is O(s) since s bounds the size of the circuits describing the trapdoor permutation scheme. To guarantee soundness, \( poly (s_2)\cdot m\) parallel executions of the scheme are necessary (using the same trapdoor permutation, see [13, Construction 4.10.4]), where m is a superlogarithmic function in the security parameter. So if we choose \(m:=s\), the length of the CRS used by the NIZK scheme is bounded by \( poly (s(k)+s_2(k))\). The CRS consists of images of uniformly random preimages under a permutation; thus, it is uniformly random.
Another ingredient we need is a universal family of one-way hash functions. In [25] (see also [18, 22]), a scheme is presented that converts a one-way function f into a universal family of one-way hash functions. Here, the image of the hash function has length \(s_3\in poly (s_4)\), where \(s_4\) is the length of the images of f. And if the one-way function is keyless, so is the hash function. If we use the keyless one-way function f constructed above, then \(s_4\le s\), and the one-way hash function is keyless.
Now, we come back to the construction of \(E_\mathrm{cca}\). In this construction, the public key consists of (i) a hash function h from the abovementioned family (\(s_3\) bits), (ii) \(2s_4\) public keys for a semantically secure encryption scheme, say the scheme E constructed above (\(2s_4s_1\) bits), and (iii) a CRS for the NIZK scheme above to show a statement that can be described by a circuit of size polynomial in \(2s_4\) and the size of the circuits describing the trapdoor permutation scheme (which is bounded by s). So this CRS has a length of at most \( poly (s+s_4)\) bits. Putting this together, and noting that \(s_4\le s\), we see that the public key of \(E_\mathrm{cca}\) has a length in \(s_3+2s_4s_1+ poly (s+s_4)= poly (s)\).
Since the key of the hash function we constructed is a zero bit string (the hash function is keyless), and the public key of E as well as the CRS of the NIZK scheme is indistinguishable from uniform, the public key of \(E_\mathrm{cca}\) is also indistinguishable from uniform.
Finally, the protocol \(\textsf {UAHC}\) from [9] uses a CRS consisting of a public key for E, a public key for \(E_\mathrm{cca}\), and an image of f. By our calculations above, the total length of that CRS lies in \( poly (s)\), and the CRS is indistinguishable from uniform.
Let \(\pi \) be the protocol that results from \(\mathsf {UAHC}\) by using a uniformly random string of length \( poly (s)\) as its CRS. Since the new CRS is indistinguishable from the old CRS, and since \(\mathsf {UAHC}\) is a UC-secure commitment protocol, \(\pi \) is also a UC-secure commitment protocol with a uniform CRS. \(\square \)
Lemma 6

Let s(k) be a polynomially bounded function that is computable in time polynomial in k. Then there is an ETD such that the size of the circuits describing the ETD is bounded by s(k) for security parameter k, provided that one of the following two conditions holds:

ETD exists and s is a polynomially large function.

Exponentially hard ETD exists and s is a superpolylogarithmic function.
Proof
This is shown by scaling the security parameter of the original ETD. Let I be the key generation algorithm and S be the sampling algorithm of the ETD.
Since I and S are efficient algorithms, there is a \(c\in \mathbb {N}\) such that the size of the circuits of (I, S) is bounded by \(k^c\). Then, set \({\tilde{s}}(k):=\lfloor s(k)^{1/c}\rfloor \). Obviously, if s is superpolylogarithmic or polynomially large, respectively, then so is \({\tilde{s}}\). We now construct a new scheme \((I',S')\) as follows: \(I'(k'):=I({\tilde{s}}(k'))\) and \(S':=S\). Then, for security parameter \(k'\), the circuits of \((I',S')\) have size at most \({\tilde{s}}(k')^c\le s(k')\), as required. It is left to show that \((I',S')\) is a system of ETD.
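The rescaling used here is a plain reparametrization and can be sketched as follows (our own illustration; `keygen` is a placeholder for the algorithm I):

```python
import math

def make_rescaled_keygen(keygen, s, c):
    """Build I'(k') := I(s~(k')) with s~(k') = floor(s(k')^(1/c)).

    If the circuits of I at parameter k have size at most k^c, then the
    circuits of I' at parameter k' have size s~(k')^c <= s(k').
    """
    def s_tilde(k_prime):
        return math.floor(s(k_prime) ** (1.0 / c))

    def keygen_prime(k_prime):
        return keygen(s_tilde(k_prime))

    return keygen_prime, s_tilde

# Toy usage with a placeholder keygen, s(k) = k, and c = 3.
keygen_prime, s_tilde = make_rescaled_keygen(lambda k: ("pk", k), lambda k: k, 3)
assert s_tilde(1000) ** 3 <= 1000                  # circuit-size bound holds
assert keygen_prime(1000) == ("pk", s_tilde(1000))
```

The point of the construction is visible in the first assertion: the rescaled scheme's circuit-size bound \({\tilde{s}}(k')^c\le s(k')\) holds by definition of \({\tilde{s}}\).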
We will use the following notation: When talking about the original ETD (I, S), we will use the names from Definition 4 (e.g., A, k, \(\mu \)). When talking about \((I',S')\), we will add a prime (e.g., \(A'\), \(k'\), \(\mu '\)).
Let a polynomial-time algorithm \(A'\) be given. We then construct a machine A as follows: Upon input \(\mathtt 1^k\), A chooses \(k'\) uniformly from the set \({\tilde{s}}^{-1}(\{k\})\).
After \(k'\) is chosen, A runs \(A'(\mathtt 1^{k'})\).
First, we show that A runs in polynomial (resp. subexponential) time in k. Since A simulates \(A'\) in time polynomial in \({\tilde{k}}:=\max {\tilde{s}}^{-1}(\{k\})\), it is sufficient to show that \({\tilde{k}}\) is polynomially bounded (resp. subexponential) in k. We distinguish two cases. Case 1: If \( {\tilde{s}}\) is polynomially large, then there is a d such that \( {\tilde{s}}(k')^{d}\ge k'\) for almost all \(k'\). Then, we have \( {\tilde{s}}(k')\ge k'^{1/d}\) and thus \({\tilde{k}}=\max {\tilde{s}}^{-1}(\{k\})\le k^{d}\) for almost all k.
Case 2, \({\tilde{s}}\) is superpolylogarithmic: Let \(d\in \mathbb {N}\) be arbitrary. Since \({\tilde{s}}\) is superpolylogarithmic, there exists \(K_d\in \mathbb {N}\) with \({\tilde{s}}(k')\ge (\log k')^d\) for all \(k'\ge K_d\). Now let \(k\in \mathbb {N}\) be arbitrary and \({\tilde{k}}:=\max {\tilde{s}}^{-1}(\{k\})\). By definition of \(K_d\), we must have \((\log {\tilde{k}})^d\le {\tilde{s}}({\tilde{k}})=k\) or \({\tilde{k}}<K_d\). If \((\log {\tilde{k}})^d\le k\), then \({\tilde{k}}\le 2^{k^{1/d}}\). Thus, \({\tilde{k}}\le \max \{2^{k^{1/d}},K_d\}\), and so \({\tilde{k}}\le 2^{k^{1/d}}\) for sufficiently large k. Since \(d\) was arbitrary, this shows that \({\tilde{k}}\) is subexponential in k.
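As a numeric sanity check of Case 2 (our own numerics, not from the paper), take the concrete superpolylogarithmic function \({\tilde{s}}(k')=\lfloor (\log _2 k')^2\rfloor \); the largest preimage \({\tilde{k}}=\max {\tilde{s}}^{-1}(\{k\})\) indeed stays below \(2^{\sqrt{k+1}}\) (the \(+1\) absorbs the floor):

```python
import math

def s_tilde(kp):
    """A concrete superpolylogarithmic function."""
    return math.floor(math.log2(kp) ** 2)

def max_preimage(k, limit):
    """max of s_tilde^{-1}({k}), searching k' < limit (None if empty)."""
    best = None
    for kp in range(2, limit):
        if s_tilde(kp) == k:
            best = kp
    return best

# s_tilde(k') = k forces log2(k') < sqrt(k + 1), i.e. k' < 2^sqrt(k + 1):
# the largest preimage is subexponential in k, as the proof claims.
for k in (9, 16, 25):
    kt = max_preimage(k, 1 << 12)
    assert kt is not None and kt <= 2 ** math.sqrt(k + 1)
```

For instance, \({\tilde{s}}^{-1}(\{25\})=\{32,\dots ,34\}\), whose maximum 34 is well below \(2^{\sqrt{26}}\approx 34.3\).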
Theorem 7

Let \(n>m\) be polynomially bounded, efficiently computable functions. Then there is a nontrivial polynomial-time computationally universally composable protocol for \((m\rightarrow n)\)-coin toss extension, provided that one of the following conditions holds:

m is polynomially large and ETD exists, or

m is superpolylogarithmic and exponentially hard ETD exists.
Proof
Let d be as in Lemma 5. If m is polynomially large or superpolylogarithmic, then \(s:=m^{1/d}\) is polynomially large or superpolylogarithmic, respectively. So, by Lemma 6, there is an ETD such that the size of the circuits describing the ETD is bounded by \(s=m^{1/d}\). Then, by Lemma 5, there is a UC-secure protocol for implementing n-bit commitments using an \((m^{1/d})^d=m\)-bit CRS.
Now consider the coin toss extension protocol from Fig. 1. It is easy to see that this protocol UC-realizes an n-bit coin toss. We sketch the simulator \(\mathcal {S}\): As soon as all uncorrupted parties have received input \(( init )\), \(\mathcal {S}\) learns the value r chosen by the ideal n-bit coin toss. When \(P_1\) is or gets corrupted, \(\mathcal {S}\) learns the value \(r_1\) as soon as \(P_1\) commits, so the simulated \(r_2\) can be chosen as \(r_1\oplus r\). When \(P_2\) is or gets corrupted, but \(P_1\) is uncorrupted at least during the commitment to \(r_1\), the simulator \(\mathcal {S}\) unveils the commitment to the value \(r_1:=r_2\oplus r\). In the case that both parties get corrupted, the environment does not learn the value from the ideal coin toss, so the simulator can simply choose it to be \(r_1\oplus r_2\).
Furthermore, an m-bit CRS can be trivially implemented using an m-bit coin toss. Using the UC composition theorem [7], we can put the above constructions together and get a protocol that UC-realizes an n-bit coin toss using an m-bit coin toss. \(\square \)
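The protocol of Fig. 1 is, in essence, Blum's commit-then-open coin toss run on n-bit strings. A minimal stand-alone sketch (ours, for illustration only): it replaces the UC commitment from Lemma 5 by a hash-based commitment, so it does not give UC security, and it omits the m-bit coin toss, which in the real construction serves as the CRS of the commitment scheme.

```python
import hashlib
import os

def commit(value: bytes):
    """Hash-based commitment: binding/hiding only heuristically, NOT UC-secure."""
    opening = os.urandom(32)
    return hashlib.sha256(opening + value).digest(), opening

def verify_opening(com: bytes, opening: bytes, value: bytes) -> bool:
    return hashlib.sha256(opening + value).digest() == com

def coin_toss(n_bytes: int) -> bytes:
    # P1 commits to a random contribution r1 and sends only the commitment.
    r1 = os.urandom(n_bytes)
    com, opening = commit(r1)
    # P2, having seen only com, replies with its own contribution r2.
    r2 = os.urandom(n_bytes)
    # P1 opens the commitment; P2 verifies before accepting.
    assert verify_opening(com, opening, r1)
    # Both parties output the XOR of the two contributions.
    return bytes(a ^ b for a, b in zip(r1, r2))

coins = coin_toss(16)
assert len(coins) == 16
```

The XOR at the end is what makes the simulation above work: whichever contribution the simulator controls can be chosen as the ideal outcome XORed with the other party's contribution.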
Note that given stronger, but possibly unrealistic assumptions, the lower bound for m in Theorem 7 can be decreased. If we assume that for any superlogarithmic m, there is ETD such that the size of their circuits is bounded by \(m^{1/d}\) (where d is the constant from Lemma 5), we get coin toss extension even for superlogarithmic m (using the same proof as for Theorem 7, except that instead of Lemma 6, we use the stronger assumption).
However, we cannot expect an even better lower bound for m, as the following theorem shows:
Theorem 8
Let \(n=n(k)\) and \(m=m(k)\) be functions with \(n(k)>m(k)\ge 0\) for all k, and assume that m is not superlogarithmic (i.e., \(2^{-m}\) is nonnegligible). Then, there is no nontrivial polynomial-time computationally universally composable protocol for \((m\rightarrow n)\)-coin toss extension.
We first give a proof sketch. We note that our proof generalizes a similar result from [8] (which shows Theorem 8 for \(m=0\)). Canetti [8] argues that a hypothetical simulator for coin toss would have to be able to “convince” the other party of an arbitrary outcome of the coin toss. We show that a similar property holds even when an ideal (but short) m-bit coin toss is available.
More specifically, first, we recall how the impossibility of a universally composable coin toss is shown in the case that we have no seed (i.e., without the functionality \(\mathsf {CT}_m\)). Assume for contradiction that a protocol \(\pi \) with parties \(P_1\) and \(P_2\) exists such that \(\pi \) implements \(\mathsf {CT}_n\). (Here, n is as in the theorem.) Then, assume an adversary \(\mathcal {A}_1\) that corrupts \(P_1\) and simply reroutes all communication with \(P_1\) to the environment. (E.g., messages sent by \(P_2\) are forwarded to the environment; cf. also the lefthand side of Fig. 2.) Assume an environment \(\mathcal {Z}_1\) that internally simulates an instance \({\overline{P}}_1\) of \(P_1\) and instructs \(\mathcal {A}_1\) to forward the messages produced by the simulated \({\overline{P}}_1\). The outputs made by \({\overline{P}}_1\) and \(P_2\) we call \({\overline{\kappa }}_1\) and \(\kappa _2\), respectively. Since the network consisting of \(\mathcal {Z}_1\), \(\mathcal {A}_1\), and \(P_2\) essentially is an honest execution of \(\pi \), we have that with overwhelming probability, \({\overline{\kappa }}_1\) and \(\kappa _2\) are nbit strings and \({\overline{\kappa }}_1=\kappa _2\). \(\mathcal {Z}_1\) outputs 1 iff \({\overline{\kappa }}_1=\kappa _2\); thus, \(\mathcal {Z}_1\) outputs 1 with overwhelming probability when running with \(\pi \) and \(\mathcal {A}_1\) as above.
Since we assume that \(\pi \) is universally composable, there is a simulator \(\mathcal {S}_1\) that simulates the adversary \(\mathcal {A}_1\). That is, \(\mathcal {Z}_1\) cannot distinguish between \(\mathcal {A}_1\) with \(P_2\) (the real model) and \(\mathcal {S}_1\) with \(\mathsf {CT}_n\) (the ideal model). Since \(\mathcal {A}_1\) just forwards messages from \(P_2\), the simulator \(\mathcal {S}_1\) effectively produces a simulation of \(P_2\)’s messages. Furthermore, in the ideal model, \(\mathcal {Z}_1\) gets the n-bit string \(\kappa _2\) from \(\mathsf {CT}_n\). Since \(\mathcal {Z}_1\) cannot distinguish between the real and the ideal models, we have that \({\overline{\kappa }}_1=\kappa _2\) with overwhelming probability also in the ideal model. This implies that \(\mathcal {S}_1\) is a machine that manages to make \({\overline{P}}_1\) (which is identical to the honest party \(P_1\)) output an externally given n-bit string \(\kappa _2\). This, however, violates the assumption that \(P_1\) is part of a secure coin toss protocol: in a secure coin toss protocol, \(\mathcal {S}_1\) would succeed only with probability \(2^{-n}\) (up to a negligible error) in making \(P_1\) output \(\kappa _2\). Thus, our assumption that \(\pi \) was a universally composable coin toss protocol is false.
Now consider the case where we additionally have an m-bit seed \(\omega \) given by the ideal functionality \(\mathsf {CT}_m\) used in the real model by \(P_1\) and \(P_2\). In this case, the simulator \(\mathcal {S}_1\) is allowed to simulate the value \(\omega \). Thus, \(\mathcal {S}_1\) now is a machine that can make the honest party \(P_1\) output an externally given n-bit string \(\kappa _2\) if \(\mathcal {S}_1\) is allowed to choose the seed \(\omega \). If \(\mathcal {S}_1\) may not choose the seed \(\omega \), it will only succeed if the seed \(\omega \) accidentally is the one that \(\mathcal {S}_1\) would have chosen. This happens with probability \(2^{-m}\). Thus, \(\mathcal {S}_1\) manages to make \(P_1\) output an externally given value \(\kappa \) with probability \(2^{-m}\) (up to a negligible error). However, since \(\pi \) is a secure coin toss protocol, \(\mathcal {S}_1\) should succeed with probability at most \(2^{-n}\) (up to a negligible error). Since the difference between \(2^{-n}\) and \(2^{-m}\) is nonnegligible (as m is not superlogarithmic), it follows that \(\mathcal {Z}_1\) can distinguish between the real model and the ideal model with \(\mathcal {S}_1\). Thus, \(\pi \) is not universally composable.
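The gap exploited in the last step can be made explicit (a one-line computation of ours): since \(n\ge m+1\),

```latex
2^{-m} - 2^{-n} \;=\; 2^{-m}\bigl(1 - 2^{m-n}\bigr) \;\ge\; \tfrac{1}{2}\,2^{-m} \;=\; 2^{-m-1},
```

which is nonnegligible exactly when \(2^{-m}\) is, i.e., when m is not superlogarithmic.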
We proceed with the full proof.
Proof (of Theorem 8)
We use the notation from the proof sketch. So assume for contradiction that \(\pi \), using \(\mathsf {CT}_{m}\), implements \(\mathsf {CT}_{n}\). We start with a network \(C_1\) of machines as in a real protocol run with corrupted \(P_1\). More specifically, \(C_1\) consists of a party \(P_2\), a helping coin toss functionality \(\mathsf {CT}_{m}\), an adversary \(\mathcal {A}_1\) that takes the role of a corrupted \(P_1\), and an environment \(\mathcal {Z}_1\). Note that the corrupted party \(P_1\) has been removed, since it is taken over by the adversary.
Our first claim is that in runs of this network \(C_1\), eventually identical \(\overline{\kappa _1}\) and \(\kappa _2\) are observed by \(\mathcal {Z}_1\) with overwhelming probability. Indeed, by definition of \(\mathsf {CT}_{n}\), in an ideal protocol run with no corruptions, the outputs \(\kappa _1\) and \(\kappa _2\) must be identical if both are output. Since \(\pi \) UC-implements \(\mathsf {CT}_n\), this must also hold with overwhelming probability in runs of the real protocol without corruptions. Since the protocol \(\pi \) is nontrivial, output is guaranteed in such a case, and we thus have \(\kappa _1=\kappa _2\) with overwhelming probability. This carries over to \(C_1\), since \(C_1\) is formed from an uncorrupted real protocol simply by relaying some messages through \(\mathcal {A}_1\) and by regrouping machines. So in \(C_1\), \(\mathcal {Z}_1\) gives output 1 with overwhelming probability.
Now since \(\pi \) UC-implements \(\mathsf {CT}_n\), there must be a simulator \(\mathcal {S}_1\) in the ideal setting with \(\mathsf {CT}_{n}\) that simulates attacks carried out by \(\mathcal {A}_1\). In our situation (depicted in Fig. 2), this simulator must in particular achieve that \(\overline{\kappa _1}=\kappa _2\) with overwhelming probability. In other words, \(\mathcal {S}_1\) must “convince” the simulation of \(P_1\) to output the \(\kappa _1\) that was chosen by the ideal \(\mathsf {CT}_{n}\). To this end, \(\mathcal {S}_1\) may make up an initial seed \(\omega _1\) from a machine \(\mathsf {CT}_{m}\) that is actually not present in the ideal model. Also, \(\mathcal {S}_1\) may make up suitable responses from a faked party \(P_2\) (that is also not present in the ideal model) in communication with \(\overline{P_1}\). Call this network (consisting of \(\mathcal {S}_1\), \(\mathsf {CT}_{n}\), and \(\mathcal {Z}_1\)) \(C_2\). Since the probability that \(\mathcal {Z}_1\) gave output 1 was overwhelming in \(C_1\), the same holds for \(C_2\) by the definition of UC security.
Now modify \(C_2\) as follows: instead of letting \(\mathcal {S}_1\) make up the seed \(\omega _1\), an actual instance \(\overline{\mathsf {CT}_{m}}\) of the m-bit coin toss functionality (connected through a dummy machine \(*\)) supplies a seed \(\overline{\omega _1}\). Call the resulting network \(C_3\). The networks \(C_2\) and \(C_3\) provide completely identical views for \(P_1\) when \(\omega _1=\overline{\omega _1}\) in \(C_3\). This happens with probability \(2^{-m}\) by definition. Since in \(C_2\), the environment \(\mathcal {Z}_1\) gave output 1 with some overwhelming probability p, it follows that in \(C_3\) the probability is at least \(2^{-m}-(1-p)=2^{-m}-\mu \) for some negligible \(\mu \).
Now comes the crucial part: We combine \(\mathcal {Z}_1\), \(\mathcal {S}_1\), \(\mathsf {CT}_{n}\), and the dummy machine \(*\) (that is to say, all machines but \(P_1\) and \(\overline{\mathsf {CT}_{m}}\)) into a protocol environment \(\mathcal {Z}_2\). A new real adversary \(\mathcal {A}_2\) is added that only relays the connection between \(\mathcal {S}_1\) and \(P_1\) and the connection between \(*\) and \(\overline{\mathsf {CT}_{m}}\). Call the resulting network \(C_4\).
Now since \(\pi \) UC-implements \(\mathsf {CT}_n\), there must be a simulator \(\mathcal {S}_2\) that in an ideal setting with \(\overline{\mathsf {CT}_{n}}\) simulates the situation from network \(C_4\). (Here, we use the different name \(\overline{\mathsf {CT}_n}\) only to avoid conflicting names with the \(\mathsf {CT}_{n}\)-instance inside \(\mathcal {Z}_2\).) This simulator simulates attacks carried out by \(\mathcal {A}_2\) on the real protocol. The network consisting of \(\mathcal {S}_2\), \(\mathcal {Z}_2\), and \(\overline{\mathsf {CT}_n}\) we call \(C_5\); see Fig. 4.
4 Coin Toss Extension: The Statistical and the Perfect Case
4.1 A Technical Lemma
We first show that we can make certain simplifying assumptions about the protocols we consider.
Lemma 9

Let \(\pi \) be a protocol for \((m\rightarrow n)\)-coin toss extension. Then there is a protocol that implements \(\pi \) (statistically in the standalone sense, resp. universally composably) and satisfies the following:

in the honest case, both parties either output the same bit string \(z\in \{\mathtt {0},\mathtt {1}\}^n\), or both parties output nothing (in which case, we write \(z=\bot \)),

this output \(z\) (for the honest parties) is a deterministic function of the messages sent and the value s of the m-bit coin toss,

each party sends in each protocol run at most one message to \(\mathsf {CT}_{m}\), and this is always an “\(\mathtt {init}\)” message,

the internal state of each of the two parties consists only of the messages exchanged (with the other party and \(\mathsf {CT}_m\)) so far,^{11} and

after \(P_i\) sends “\(\mathtt {init}\)” to \(\mathsf {CT}_{m}\), it does not further communicate with \(P_{3-i}\) (for \(i=1,2\) and in case of no corruptions).
Proof
First, we modify a given protocol as follows to enforce the first two requirements: Each party sends a confirmation message at the end, in which it tells the other party what it is going to output. If these exchanged values do not match, both parties output nothing. The same modification also achieves that the outcome is a deterministic function of the protocol transcript and the used m-bit coin toss \(s\). This modification only suppresses outputs in certain cases and thus can be simulated perfectly, without any simulation error.
Next, straightforward syntactic modifications show that we can assume that each party sends at most one message to \(\mathsf {CT}_{m}\) in each run, and that this is always an “\(\mathtt {init}\)” message. (Other messages to \(\mathsf {CT}_m\) would be ignored anyway.) An application of Lemma 21 further shows that we can assume that the internal state of each party consists only of the messages exchanged so far with the other party and \(\mathsf {CT}_m\). The remaining transformation modifies \(\pi \) such that no further communication between \(P_1\) and \(P_2\) is necessary after \(\mathsf {CT}_m\) has been invoked.
First, we change each \(P_i\) (for \(i\in \{1,2\}\)) so as to signal the other party \(P_{3-i}\) before it sends an “\(\mathtt {init}\)” message to \(\mathsf {CT}_{m}\). Then, \(P_i\) proceeds to send “\(\mathtt {init}\)” to \(\mathsf {CT}_{m}\) only after it has received an acknowledgement message from \(P_{3-i}\). We call the modified protocol \(\pi _1\). It is easy to see that \(\pi _1\) statistically implements (resp., UC-implements) the original protocol \(\pi \). The simulator only has to produce the additional message “\(\mathtt {init}\)”; it can do so because the functionality \(\mathsf {CT}_{m}\) informs it when it is invoked.
Second, each \(P_i\) is modified to wait for the \(\mathsf {CT}_{m}\)-output as soon as \(P_i\) itself has sent “\(\mathtt {init}\)” to \(\mathsf {CT}_{m}\) and \(P_{3-i}\) has also signaled to do so. All messages from \(P_{3-i}\) are buffered and processed by \(P_i\) only when that \(\mathsf {CT}_{m}\)-output arrives. This protocol \(\pi _2\) implements (resp., UC-implements) \(\pi _1\) (and by transitivity also \(\pi \)), since the modified behavior of the \(\pi _2\)-parties can be simulated by a simulator in \(\pi _1\) simply by delaying message delivery in \(\pi _1\).
Now comes the interesting part: We modify each \(P_i\) so as to postpone the “\(\mathtt {init}\)” message to \(\mathsf {CT}_{m}\) to the end of the protocol run. Instead, \(P_i\) carries on with \(\pi _2\) as if it had sent “\(\mathtt {init}\).” When it goes into the waiting state (for the \(\mathsf {CT}_{m}\)-output \(\omega \), which will now certainly not arrive), it immediately leaves that waiting state. Then, \(P_i\) makes \(2^{m}\) copies of its current internal state and carries on with \(2^{m}\) parallel executions of \(\pi _2\). In execution number j (\(0\le j<2^{m}\)), \(P_i\) behaves as if it had gotten the seed \(\omega =j\) from \(\mathsf {CT}_{m}\). At the end of the protocol run, when all the parallel executions have fixed their output, \(P_i\) then queries \(\mathsf {CT}_{m}\) with an “\(\mathtt {init}\)” message and waits for a seed \(\omega \) to arrive. Finally, \(P_i\) outputs whatever the \(\omega \)th execution of the parallelized protocol would have output.^{12} Call the protocol with these modified parties \(\pi _3\).
This protocol obviously fulfills the requirements in the lemma statement, and it only remains to show that \(\pi _3\) implements (resp., UC-implements) \(\pi _2\) (and thus \(\pi \)) and hence is a standalone secure (resp., universally composable) protocol for coin toss extension. We sketch a simulator \(\mathcal {S}\) that simulates attacks (performed by an adversary \(\mathcal {A}\)) on \(\pi _3\) in the setting of \(\pi _2\). Recall that \(\pi _3\) and \(\pi _2\) proceed identically until \(\mathsf {CT}_m\) is queried (which causes \(\mathcal {S}\) to be notified by \(\mathsf {CT}_m\)). Hence, \(\mathcal {S}\) can proceed like \(\mathcal {A}\) until then.
Once a party queries \(\mathsf {CT}_m\) in \(\pi _3\), however, that party internally forks into \(2^m\) parallel executions, one for each possible \(\mathsf {CT}_m\)-output (as described above). In interacting with \(\pi _2\), \(\mathcal {S}\) will only see one of those protocol executions, namely the one for the actual \(\mathsf {CT}_m\)-output. Hence, in order to simulate \(\pi _3\), \(\mathcal {S}\) will have to simulate an additional \(2^m-1\) instances of (the remaining part of) \(\pi _2\). However, \(\mathcal {S}\) can easily start and maintain such simulations, since the state of the corresponding parties (which only consists of the exchanged messages and the hypothetical \(\mathsf {CT}_m\)-output for that instance) is known. \(\square \)
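The forking step of the last transformation can be sketched as follows (our own illustration; `run_remaining` stands for the remaining protocol logic of \(P_i\) after the point where it would have queried \(\mathsf {CT}_m\)):

```python
def fork_and_select(state, run_remaining, m, query_ct):
    """Postpone the single CT_m query to the end: run 2^m parallel
    continuations, one per hypothetical seed omega, then query CT_m
    once and output the result of the omega-th continuation."""
    outputs = []
    for omega in range(2 ** m):
        # Each copy continues from a copy of the same state, pretending
        # the seed is omega.
        outputs.append(run_remaining(dict(state), omega))
    omega = query_ct()      # the single real "init" query, at the very end
    return outputs[omega]

# Toy usage: the 'remaining protocol' XORs the seed into a stored value,
# and the (stubbed) m-bit coin toss returns seed 3.
result = fork_and_select({"r": 10}, lambda st, omega: st["r"] ^ omega,
                         m=2, query_ct=lambda: 3)
assert result == 10 ^ 3
```

Note the \(2^m\) blow-up: this transformation is only available in the statistical setting with unbounded parties (or for small m), which is exactly where the lemma is applied.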
4.2 Standalone Simulatability
4.3 Negative Results
Theorem 10

Let \(n=n(k)>m=m(k)\ge 0\), and assume that m is not superlogarithmic. Then there is no nontrivial protocol for \((m\rightarrow n)\)-coin toss extension using \(\mathsf {CT}_{m}\) with the following property:

For any (possibly unbounded) adversary corrupting one of the parties, there is a negligible function \(\mu \) such that for every security parameter k and every \(c\in \{\mathtt 0,\mathtt 1\}^n\), the probability for protocol output c is at most \(2^{-n}+\mu (k)\).

Furthermore, for arbitrary \(m<n\), there is no perfectly nontrivial protocol with this property for \(\mu =0\).
Note that the notion of security used in this theorem is intentionally very weak. For example, if the first bit of the outcome is 0, and all other bits are uniformly random (and n is superlogarithmic), this notion of security is satisfied. Since the theorem is an impossibility result, using a weaker security notion strengthens the theorem. In Corollary 11, we will instead use the familiar simulationbased security notions.
We start with a proof sketch for the first statement (for the nonperfect case with nonsuperlogarithmic \(m\)). By Lemma 9, we may make the following simplifying assumptions:

the available m-bit coin toss is only used at the end of the protocol,

in the honest case, the parties never output distinct or invalid values, and

\(n=m+1\).

We distinguish three sets of complete protocol transcripts:

the set \(\mathfrak A\) of transcripts having nonzero probability for the protocol output \(\mathtt 0^n\),

the set \(\mathfrak B\) of transcripts having zero probability of output \(\mathtt 0^n\) and zero probability that the protocol gives no output,

and the set \(\mathfrak C\) of transcripts having nonzero probability of giving no output.
For any partial transcript p (i.e., a situation during the run of the protocol), we define three values \(\alpha \), \(\beta \), and \(\gamma \). The value \(\alpha \) denotes the probability with which a corrupted Alice can enforce a transcript in \(\mathfrak A\) starting from p, the value \(\beta \) denotes the probability with which a corrupted Bob can enforce a transcript in \(\mathfrak B\), and the value \(\gamma \) denotes the probability that the complete protocol transcript will lie in \(\mathfrak C\) if no one is corrupted. We show inductively that for any partial transcript p, we have \((1-\alpha )(1-\beta )\le \gamma \). In particular, this holds for the beginning of the protocol. For simplicity, we assume that \(2^{-m}\) is not only nonnegligible, but noticeable (in the full proof, the general case is considered). Since a transcript in \(\mathfrak C\) gives no output with probability at least \(2^{-m}\), the probability that the protocol generates no output (in the uncorrupted case) is at least \(2^{-m}\gamma \). By the nontriviality condition, this probability is negligible, so \(\gamma \) must be negligible, too. Hence, \((1-\alpha )(1-\beta )\) is negligible, and therefore \(\min {\{1-\alpha ,1-\beta \}}\) must be negligible. For now, we assume that \(1-\alpha \) is negligible or \(1-\beta \) is negligible (for the general case, see the full proof).
If \(1-\alpha \) is negligible, \(\alpha \) is overwhelming. The probability for output \(\mathtt 0^n\) is at least \(2^{-m}\alpha \). Since \(\alpha \) is overwhelming and \(2^{-m}\) noticeable, this exceeds \(2^{-n}=\frac{1}{2}\,2^{-m}\) by a noticeable amount, which contradicts the security property.
If \(1-\beta \) is negligible, Bob can ensure an output in \(\{\mathtt 0,\mathtt 1\}^n\setminus \{\mathtt 0^n\}\) with overwhelming probability \(\beta \). By the security property, however, such an output should occur at most with probability \((2^n-1)2^{-n}\) plus a negligible amount. But \((2^n-1)2^{-n}=1-2^{-n}=1-2^{-m}/2\) is not overwhelming, since m is not superlogarithmic by assumption, so we have a contradiction.
The perfect case is proven similarly.
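The inductive invariant \((1-\alpha )(1-\beta )\le \gamma \) can be checked mechanically on toy protocol trees (our own illustration; leaves carry the subset of \(\{\mathfrak A,\mathfrak B,\mathfrak C\}\) they belong to, inner nodes name the party that sends next, and the honest sender is modeled as choosing uniformly):

```python
def evaluate(node):
    """Return (alpha, beta, gamma) for a node of a toy protocol tree.

    Leaves are sets of labels among {"A", "B", "C"} (membership in the
    transcript sets); inner nodes are ("alice", children) or
    ("bob", children), and the honest sender picks uniformly.
    """
    if isinstance(node, (set, frozenset)):
        return (1.0 if "A" in node else 0.0,
                1.0 if "B" in node else 0.0,
                1.0 if "C" in node else 0.0)
    who, children = node
    vals = [evaluate(c) for c in children]
    avg = lambda i: sum(v[i] for v in vals) / len(vals)
    if who == "alice":
        # A corrupted Alice picks the best next message for reaching A;
        # she cannot help a corrupted Bob, and honest (gamma) runs
        # average over her uniform choice.
        return (max(v[0] for v in vals), avg(1), avg(2))
    return (avg(0), max(v[1] for v in vals), avg(2))

tree = ("alice", [("bob", [{"A"}, {"B"}]), ("bob", [{"C"}, {"B", "C"}])])
alpha, beta, gamma = evaluate(tree)
assert (1 - alpha) * (1 - beta) <= gamma + 1e-12
```

The recursion mirrors the induction in the proof: at each inner node, the maximizing party's value dominates each child, so \((1-\alpha _p)(1-\beta _p)\) is bounded by the average of the children's products, which by the induction hypothesis is at most \(\gamma _p\).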
We proceed with the full proof.
Proof (of Theorem 10)

By Lemma 9, and assuming \(n=m+1\) as in the proof sketch, we may take the protocol \(\pi \) to satisfy the following:
 (i)
If no party is corrupted, both parties always give the same (or no) output, and this output is a deterministic function of the sent messages and the value of the used m-bit coin toss.
 (ii)
No messages are sent after invoking the m-bit coin toss.
 (iii)
The honest parties maintain no internal state except for the list of the messages sent so far.
We call the parties Alice and Bob.
In the following, by a complete transcript t, we mean (the sequence of) all messages sent during a run of the protocol \(\pi \), excluding the value s of the m-bit coin toss. The protocol outcome (of the honest parties) is then \(f(t,s)\in \{\mathtt 0,\mathtt 1\}^n\cup \{\bot \}\) for some deterministic function f. By a partial transcript \(p\), we mean a prefix of a complete transcript. We write \(p\le p'\) to denote that partial transcript \(p\) is a prefix of partial transcript \(p'\), and we write \(p<_1p'\) to denote that \(p\) is the immediate prefix of \(p'\) (i.e., the maximal \(p\le p'\) with \(p\ne p'\)). Finally, let \(\mathtt {last}(p)\) denote the last message of a nonempty partial transcript \(p\).
Claim 1
\((1-\alpha _p)(1-\beta _p)\le \gamma _p\) for every partial transcript \(p\).
Proof of Claim 1
First, let t be a complete transcript. Then, \(\alpha _t,\beta _t,\gamma _t\in \{0,1\}\). Furthermore, since \(\mathfrak A\cup \mathfrak B\cup \mathfrak C\) contains all complete transcripts, at least one of \(\alpha _t,\beta _t,\gamma _t\) is not 0. So, for every complete transcript t, it holds that \((1-\alpha _t)(1-\beta _t)\le \gamma _t\).
Now let p be a partial transcript for which it is Alice's turn to send a message, and assume the claim holds for all immediate extensions \(p'\) of p. A corrupted Alice can choose her next message so as to maximize \(\alpha \); hence \(\alpha _p=\max _{p<_1p'}\alpha _{p'}\). A corrupted Bob, on the other hand, cannot influence Alice's next message, so \(\beta _p\) and \(\gamma _p\) are the expectations of \(\beta _{p'}\) and \(\gamma _{p'}\), respectively, over the honest Alice's choice of the next message. Therefore, \((1-\alpha _p)(1-\beta _p)=\mathbf {E}[(1-\alpha _p)(1-\beta _{p'})]\le \mathbf {E}[(1-\alpha _{p'})(1-\beta _{p'})]\le \mathbf {E}[\gamma _{p'}]=\gamma _p\). Analogous reasoning can be applied when it is Bob’s turn to send a message.
By induction, we therefore get \((1-\alpha _p)(1-\beta _p)\le \gamma _p\) for any partial transcript p. This concludes the proof of Claim 1.
Now let \(\emptyset \) denote the empty partial transcript, i.e., the beginning of the protocol. Then, for \(\alpha :=\alpha _\emptyset ,\beta :=\beta _\emptyset ,\gamma :=\gamma _\emptyset \), Claim 1 implies \((1-\alpha )(1-\beta )\le \gamma \). We will construct a contradiction to the nontriviality and security properties of the protocol, which will finish the proof.
Claim 2
\(1-\alpha \) or \(1-\beta \) is negligible on an infinite subset \(K'\) of security parameters.
Proof of Claim 2
If a protocol reaches a complete transcript in \(\mathfrak C\), it will output \(\bot \) with probability at least \(2^{-m}\), so the probability that \(\pi \) outputs \(\bot \) is at least \(2^{-m}\gamma \). On the other hand, since \(\pi \) is nontrivial, the probability that the protocol gives output \(\bot \) in the uncorrupted case is negligible. Hence, \(2^{-m}\gamma \) is negligible. Since \(2^{-m}\) is nonnegligible by assumption, there exists an infinite set K of security parameters k such that \(2^{-m}\) is noticeable on K. If \(\gamma \) were nonnegligible on K, \(2^{-m}\gamma \) would be nonnegligible on K. So \(\gamma \) must be negligible on K. Since \((1-\alpha )(1-\beta )\le \gamma \) for each \(k\in K\), one of \(1-\alpha \) and \(1-\beta \) is bounded by \(\sqrt{\gamma }\), which is negligible on K. So there is an infinite set \(K'\subseteq K\), such that \(1-\alpha \) is negligible on \(K'\) or \(1-\beta \) is negligible on \(K'\). This shows Claim 2.
We can now finish the proof by showing a contradiction to the security of the protocol in either case of Claim 2. Consider the first case, i.e., \(\alpha \) is overwhelming on \(K'\). By assumption, the probability P for protocol output \(\mathtt 0^n\) (with corrupted Alice) is bounded from above by \(2^{-n}+\mu \) for negligible \(\mu \). But since a complete transcript in \(\mathfrak A\) has probability at least \(2^{-m}\) of giving output \(\mathtt 0^n\), we have \(P\ge 2^{-m}\alpha =2^{-n}+(\alpha -\frac{1}{2})2^{-m}\) (note \(n=m+1\)), so \(\mu \ge (\alpha -\frac{1}{2})2^{-m}\). Since \(\alpha \) is overwhelming and \(2^{-m}\) noticeable on \(K'\), \(\mu \) is not negligible, which concludes the proof in this case. In the second case, \(\beta \) is overwhelming on \(K'\), so a corrupted Bob can enforce an output in \(\{\mathtt 0,\mathtt 1\}^n\setminus \{\mathtt 0^n\}\) with overwhelming probability. By the security property, however, such an output occurs with probability at most \((2^n-1)(2^{-n}+\mu )\); on \(K'\), \(2^{n}=2/2^{-m}\) is polynomially bounded, so this is \(1-2^{-m}/2\) plus a negligible amount and hence not overwhelming, again a contradiction.
For the perfect case, the proof of \((1-\alpha )(1-\beta )\le \gamma \) is performed identically (since we did not use the nontriviality and the security of \(\pi \) in that part of the proof). By the perfect nontriviality, we get \(\gamma =0\), so for every k, at least one of \(\alpha ,\beta \) is 1. If \(\alpha =1\), the probability for an output of \(\mathtt 0^n\) is (for a suitable adversary) \(\ge 2^{-m}>2^{-n}\). If \(\beta =1\), the probability for an output in \(\{\mathtt 0,\mathtt 1\}^n\setminus \{\mathtt 0^n\}\) is \(1>(2^n-1)2^{-n}\). Both cases contradict the security property. \(\square \)
Corollary 11
Let m be not superlogarithmic and \(n>m\). Then, there is no nontrivial (in the sense of Definition 1) protocol realizing n-bit coin toss using an m-bit coin toss in the sense of statistical standalone simulatability.
Let m be any function (possibly superlogarithmic) and \(n>m\). Then, there is no perfectly nontrivial protocol realizing n-bit coin toss using an m-bit coin toss in the sense of perfect standalone simulatability.
4.4 Positive Results
Now we will prove that there exists a protocol for coin toss extension from m to n bits that is statistically standalone simulatably secure. The basic idea is to have the parties \(P_1\) and \(P_2\) contribute random strings to generate one string with sufficiently large min-entropy (the min-entropy of a random variable X is defined as \(H_\infty (X)=-\log _2\max \nolimits _x\Pr [X=x]\)). The randomness from this string is then extracted using a randomness extractor. The amount of perfect randomness (i.e., the size of the m-bit coin toss) one needs to invest is smaller than the amount extracted. This makes coin toss extension possible.
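To illustrate the definition (a hypothetical helper, not part of the protocol), the min-entropy of a finite distribution can be computed directly:

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]) for a finite distribution,
    given as a mapping from outcomes to their probabilities."""
    return -math.log2(max(dist.values()))

# A uniform 2-bit string has min-entropy 2 ...
uniform = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
# ... while any bias lowers the min-entropy.
biased = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
```

Note that min-entropy depends only on the single most probable outcome, which is why it is the right measure for extraction.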
For our protocol, we need a family of strong randomness extractors with suitable parameters. The following lemma states the existence of these extractors.
Lemma 12
For every m, there exists a function \(h_m:\{0,1\}^m\times \{0,1\}^{m-1} \rightarrow \{0,1\}, (s,x)\mapsto r\) such that for a uniformly distributed s and for an x with min-entropy at least t, the statistical distance between \(s\Vert h_m(s,x)\) and the uniform distribution on \(\{0,1\}^{m+1}\) is at most \(2^{-t/2}/\sqrt{2}\). The functions \(h_m\) are efficiently computable.
Proof
Let \(h_m(s,x) := \langle s_1\dots s_{m-1},x\rangle \oplus s_{m}\). Here, \(\langle \cdot ,\cdot \rangle \) denotes the inner product and \(\oplus \) addition over \(\mathrm {GF}(2)\). It is easy to verify that \(h_m(s,\cdot )\) constitutes a family of universal hash functions [10], where s is the index selecting from that family. Therefore, the Leftover Hash Lemma [20, 26] guarantees that the statistical distance between \(s\Vert h_m(s,x)\) and the uniform distribution on \(\{0,1\}^{m+1}\) is bounded by \(\frac{1}{2} \sqrt{2\cdot 2^{-t}}=2^{-t/2}/\sqrt{2}\).\(\square \)
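Concretely, the extractor of Lemma 12 is a single inner product over \(\mathrm{GF}(2)\). A minimal sketch, with bit strings represented as Python lists of 0/1:

```python
def h(s, x):
    """h_m(s, x) = <s_1 ... s_{m-1}, x> XOR s_m over GF(2).
    s is the m-bit seed, x the (m-1)-bit input."""
    assert len(s) == len(x) + 1
    ip = 0
    for s_i, x_i in zip(s[:-1], x):  # inner product <s_1..s_{m-1}, x>
        ip ^= s_i & x_i
    return ip ^ s[-1]                # XOR with the last seed bit s_m
```

For example, with \(m=3\): \(h([1,0,1],[1,1]) = (1\cdot 1\oplus 0\cdot 1)\oplus 1 = 0\).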
With this family of functions \(h_m\), a simple protocol is possible that extends m(k) coin tosses to \(m(k)+1\) if the function m(k) is superlogarithmic.
Theorem 13
Let m(k) be a superlogarithmic function. Then, there exists a constant-round statistically standalone simulatable protocol with efficient simulator that realizes an \((m+1)\)-bit coin toss using an m-bit coin toss.
Proof
 1.
\(P_1\) uniformly chooses \(a\in \{0,1\}^{\lfloor \frac{m-1}{2}\rfloor }\) and sends a to \(P_2\).
 2.
\(P_2\) uniformly chooses \(b\in \{0,1\}^{\lceil \frac{m-1}{2}\rceil }\) and sends b to \(P_1\).
 3.
If one party fails to send a string of appropriate length or aborts, then this string is assumed by the other party to be an all-zero string of the appropriate length.
 4.
\(P_1\) and \(P_2\) invoke the m-bit coin toss functionality and obtain a uniformly distributed \(s\in \{0,1\}^m\). If one party \(P_i\) fails to invoke the coin toss functionality or aborts, then the other party chooses s at random.
 5.
Both \(P_1\) and \(P_2\) compute \(s\Vert h_m(s,a\Vert b)\) and output this string.
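In the uncorrupted case, steps 1–5 amount to the following computation (a sketch under the obvious encoding; `random_bits` models the parties' local coins, and the seed s models the ideal m-bit coin toss):

```python
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def extractor(s, x):
    # h_m(s, x) = <s_1..s_{m-1}, x> XOR s_m over GF(2), as in Lemma 12
    ip = 0
    for s_i, x_i in zip(s[:-1], x):
        ip ^= s_i & x_i
    return ip ^ s[-1]

def one_bit_extension(m):
    """Honest execution of the (m -> m+1)-bit extension protocol."""
    a = random_bits((m - 1) // 2)      # step 1: P1's random string
    b = random_bits(m - 1 - len(a))    # step 2: P2's random string (the ceil half)
    s = random_bits(m)                 # step 4: the m-bit coin toss
    return s + [extractor(s, a + b)]   # step 5: output s || h_m(s, a||b)
```

The point of the ordering is that a and b are fixed before s is tossed, so a corrupted party cannot correlate its contribution with the seed.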
Now for a specific adversary \(\mathcal A\) with fixed random tape corrupting \(P_2\), the output distribution of the real protocol (i.e., view and output) is completely described by the following game: Choose \(a\in \{0,1\}^{\lfloor \frac{m-1}{2}\rfloor }\) uniformly, let \(b\leftarrow f_\mathcal{A}(a)\), choose \(s\in \{0,1\}^m\) uniformly, let \(r\leftarrow s\Vert h_m(s,a\Vert b)\), and return ((a, b, s), r).
We now describe the simulator. To distinguish the random variables in the ideal model from their real counterparts, we decorate them with a \(\sim \), e.g., \({\tilde{a}},{\tilde{b}},{\tilde{s}}\). The simulator in the ideal model obtains a string \({\tilde{r}}\in \{0,1\}^{m+1}\) from the ideal \((m+1)\)-bit coin toss functionality and sets \({\tilde{s}}={\tilde{r}}_1\dots {\tilde{r}}_{m}\). Then, the simulator chooses \({\tilde{a}}\in \{0,1\}^{\lfloor \frac{m-1}{2}\rfloor }\) uniformly and computes \({\tilde{b}}=f_\mathcal{A}({\tilde{a}})\) by giving \({\tilde{a}}\) to a simulated copy of the real adversary. If \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}}) = {\tilde{r}}_{m+1}\), the simulator gives \({\tilde{s}}\) to the simulated real adversary expecting the coin toss. Then, the simulator outputs the view \(({\tilde{a}},{\tilde{b}},{\tilde{s}})\). If, however, \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}}) \ne {\tilde{r}}_{m+1}\), then the simulator rewinds the adversary, i.e., the simulator chooses a fresh \({\tilde{a}}\) and again computes \({\tilde{b}}=f_\mathcal{A}({\tilde{a}})\). If now \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}})={\tilde{r}}_{m+1}\), the simulator outputs \(({\tilde{a}},{\tilde{b}},{\tilde{s}})\). If again \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}}) \ne {\tilde{r}}_{m+1}\), then the simulator rewinds the adversary again. If after k invocations of the adversary no triple \(({\tilde{a}},{\tilde{b}},{\tilde{s}})\) has been output, the simulator aborts and outputs \( fail \).
To show that the simulator is correct, we have to show that the following two distributions are statistically indistinguishable: ((a, b, s), r) as defined in the real model, and \((({\tilde{a}},\tilde{b},{\tilde{s}}),{\tilde{r}})\).
By construction of the simulator, it is obvious that the two distributions are identical under the condition that \(r_{m+1}=0\), \({\tilde{r}}_{m+1}=0\) and that the simulator does not fail. The same holds given \(r_{m+1}=1\), \({\tilde{r}}_{m+1}=1\) and that the simulator does not fail. Therefore, it is sufficient to show two things: (i) The statistical distance between r and the uniform distribution on \(m+1\) bits is negligible, and (ii) the probability that the simulator fails is negligible. Property (i) is shown using the properties of the randomness extractor \(h_m\). Since a is chosen at random, the min-entropy of a is at least \(\lfloor \frac{m-1}{2}\rfloor \ge \frac{m}{2}-1\), so the min-entropy of \(a\Vert b\) is also at least \(\frac{m}{2}-1\). Since s is uniformly distributed, it follows by Lemma 12 that the statistical distance between \(r=s\Vert h_m(s,a\Vert b)\) and \({\tilde{r}}\) is bounded by \(2^{-(m/2-1)/2}/\sqrt{2}=(2^{-m})^{1/4}\). Since \(2^{-m}\) is negligible for superlogarithmic m, this statistical distance is negligible.
Property (ii) is then easily shown: From (i), we see that after each invocation of the adversary, the distribution of \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}})\) is negligibly far from uniform. So the probability that \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}})\ne {\tilde{r}}_{m+1}\) is at most negligibly higher than \(\frac{1}{2}\). Since the values \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}})\) in the different invocations of the adversary are independent, the probability that \(h_m({\tilde{s}},{\tilde{a}}\Vert {\tilde{b}})\ne {\tilde{r}}_{m+1}\) in all k invocations is negligibly far from \(2^{-k}\). So the simulator fails only with negligible probability.
It follows that the real and the ideal protocol executions are indistinguishable, and the protocol standalone simulatably implements an \((m+1)\)-bit coin toss.\(\square \)
The idea of the one-bit extension protocol can be extended by using an extractor that extracts a larger amount of randomness. This yields constant-round coin toss extension protocols. However, the simulator needed for such a protocol does not seem to be efficient, even if the real adversary is. To get a protocol that fulfills both computational standalone simulatability and statistical standalone simulatability, we need a simulator that is efficient whenever the adversary is.
Below, we give such a coin toss extension protocol for superlogarithmic m(k). This protocol is statistically and computationally secure, i.e., the simulator for polynomialtime adversaries is polynomially bounded, too. The basic idea here is to extract one bit at a time in polynomially many rounds.
Theorem 14
Let m(k) be superlogarithmic and p(k) a positive polynomially bounded function. Then, there exists a statistically and computationally standalone simulatable protocol with efficient simulator that realizes an \((m+p)\)-bit coin toss using an m-bit coin toss.
Proof
 1.\(\mathtt{for}\ i=1\ \mathtt{to}\ p(k)\) \(\mathtt{do}\)
 (a)
\(P_1\) uniformly chooses \(a_i\in \{0,1\}^{\lfloor \frac{m-1}{2}\rfloor }\) and sends \(a_i\) to \(P_2\).
 (b)
\(P_2\) uniformly chooses \(b_i\in \{0,1\}^{\lceil \frac{m-1}{2}\rceil }\) and sends \(b_i\) to \(P_1\).
 (c)
If one party fails to send a string of appropriate length or aborts, then this string is assumed by the other party to be an all-zero string of the appropriate length.
 2.
\(P_1\) and \(P_2\) invoke the m-bit coin toss functionality and obtain a uniformly distributed \({s\in \{0,1\}^m}\). If one party \(P_i\) fails to invoke the coin toss functionality or aborts, then the other party chooses s at random.
 3.
\(P_1\) and \(P_2\) compute \(s\Vert h_m(s,a_1\Vert b_1)\Vert \dots \Vert h_m(s,a_{p(k)}\Vert b_{p(k)})\) and output this string.
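As with the one-bit protocol, the honest execution can be sketched in a few lines (names are illustrative; note that the single seed s is reused to extract one bit from each of the p rounds of contributions):

```python
import secrets

def extractor(s, x):
    # h_m(s, x) = <s_1..s_{m-1}, x> XOR s_m over GF(2), as in Lemma 12
    ip = 0
    for s_i, x_i in zip(s[:-1], x):
        ip ^= s_i & x_i
    return ip ^ s[-1]

def multi_bit_extension(m, p):
    """Honest execution of the (m -> m+p)-bit extension protocol."""
    rounds = [[secrets.randbelow(2) for _ in range(m - 1)]  # a_i || b_i per round
              for _ in range(p)]
    s = [secrets.randbelow(2) for _ in range(m)]            # the m-bit coin toss
    return s + [extractor(s, ab) for ab in rounds]          # s || h(s,a_1||b_1) || ...
```

Extracting one bit per round is what makes the simulator below efficient: each round's bit can be matched to the target by rewinding only that round.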
To show security, we have to construct a polynomial-time simulator that, after obtaining a random \({\tilde{r}}\in \{0,1\}^{m+p}\) from the ideal \((m+p)\)-bit coin toss functionality, outputs \(({\tilde{a}}_1,{\tilde{b}}_1,\dots ,{\tilde{a}}_p,{\tilde{b}}_p,{\tilde{s}})\) such that \((a_1,b_1,\dots ,a_p,b_p,s,r)\) and \(({\tilde{a}}_1,{\tilde{b}}_1,\dots ,{\tilde{a}}_p,{\tilde{b}}_p,{\tilde{s}},{\tilde{r}})\) are statistically indistinguishable.
To construct the simulator, we first construct auxiliary algorithms \(S_i\): Given a seed \({\tilde{s}}\), values \(a_1,\dots ,a_{i-1}\), and a bit \({\tilde{r}}_i\), \(S_i({\tilde{s}}, a_1,\dots ,a_{i-1}, {\tilde{r}}_i)\) picks a random \({\tilde{a}}_i\in \{0,1\}^{\lfloor \frac{m-1}{2}\rfloor }\), sets \({\tilde{b}}_i:=f_i(a_1,\dots ,a_{i-1},{\tilde{a}}_i)\), and checks whether \(h_m({\tilde{s}}, {\tilde{a}}_i\Vert {\tilde{b}}_i)={\tilde{r}}_i\). If so, \(S_i\) returns \({\tilde{a}}_i,{\tilde{b}}_i\). Otherwise, \(S_i\) tries again (picking a new \({\tilde{a}}_i\)). \(S_i\) performs up to k tries. (k is the security parameter.)
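The rewinding loop of \(S_i\) can be sketched as follows (`f_i` models the deterministic next-message function of the simulated adversary; all names here are placeholders, not from the paper):

```python
import secrets

def rewind_sample(h, s, prev_a, r_i, f_i, k):
    """Sketch of S_i: try up to k fresh choices of a_i until the
    extracted bit h(s, a_i || b_i) equals the target bit r_i.
    Returns (a_i, b_i) on success, or None if all k tries fail."""
    for _ in range(k):
        a_i = [secrets.randbelow(2) for _ in range((len(s) - 1) // 2)]
        b_i = f_i(prev_a + [a_i])      # adversary's reply to the history so far
        if h(s, a_i + b_i) == r_i:     # extracted bit matches the target bit
            return a_i, b_i
    return None                        # simulator outputs "fail"
```

Since each try succeeds with probability negligibly far from 1/2, all k tries fail only with probability close to \(2^{-k}\), and the expected number of tries is constant, so the simulator stays polynomial-time.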
This shows security in the case of corrupted \(P_2\). \(\square \)
4.5 Universal Composability (Statistical/Perfect Case)
In contrast to the standalone case, in the UC setting, statistically secure coin toss extension protocols are impossible. Intuitively, the reason for this difference is that our positive result for standalone security (Theorem 14) rewinds an adversary in a simulation, while this is not possible for UC security.
More precisely, we show that there is no protocol that runs a polynomial number of rounds, uses an m-bit coin toss functionality as a seed, and statistically UC-implements the n-bit coin toss functionality for \(n>m\).
The proof of this statement is done by contradiction. Invoking Lemma 9, we can assume that a protocol for statistically universally composable coin toss extension has a certain outer form. Then, we show that any such protocol (of this particular outer form) is insecure.
More concretely, our plan of action will be as follows. For contradiction, assume a statistically universally composable \((m\rightarrow n)\)-coin toss extension protocol. We may assume that the m-bit seed coin toss is only invoked at the end of the extension protocol.
Also, slightly simplifying things, we can think of the produced nbit coin toss as a deterministic function f(c, s) of the protocol transcript c (i.e., the transcript of all messages exchanged between the parties) and the mbit coin toss s. Now for every transcript c, the set \(\{f(c,s)\mid s\in \{0,1\}^m\}\) of possible (valid) protocol outputs after transcript c is at most half the size of \(\{0,1\}^n\). On the other hand, initially, almost all outputs of \(\{0,1\}^n\) should be roughly equally probable. Hence, a full transcript c “cuts away” about half of all possible protocol outputs.
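The counting step can be checked concretely (toy parameters and an arbitrary toy output function f, not from the paper): for any fixed transcript c, at most \(2^m\) outputs are reachable, which is at most half of \(\{0,1\}^n\) when \(n>m\):

```python
from itertools import product

def reachable_outputs(f, c, m):
    """All possible protocol outputs f(c, s) over the 2^m seeds s."""
    return {f(c, s) for s in product((0, 1), repeat=m)}

# Toy f for n = 3, m = 2: for any fixed transcript c, at most
# 2^2 = 4 of the 2^3 = 8 possible outputs remain reachable.
f = lambda c, s: (c ^ s[0], s[0] & s[1], s[1])
outs = reachable_outputs(f, 1, 2)
```
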
By assumption, the transcript c is generated interactively from scratch, without using the mbit coin toss s. Also, every party contributes only polynomially many messages to c. Hence, there is a single message that “cuts away” a nonnegligible fraction of all possible outputs. Call such a message “critical.” Our adversary \(\mathcal {A}\) will corrupt one party passively and detect the first such critical message. When encountering such a message, \(\mathcal {A}\) will then internally toss a coin. If heads comes out, \(\mathcal {A}\) will continue the protocol run, and let the corrupted party send that critical message. If tails comes out, \(\mathcal {A}\) will rewind the party and let it send a different message. We will show that this decision (whether to let the party send the critical message) has a nonnegligible impact on the protocol’s output distribution. More concretely, the probability that the protocol output lies in exactly that subset of possible outputs that would have been “cut away” by the critical message is highly correlated with the outcome of \(\mathcal {A}\)’s coin flip.
We will now proceed to formalize this proof outline. This will require some preparations.
For the following statements, we always assume that \(m=m(k)\), \(n=n(k)\) are arbitrary functions, only satisfying \(0\le m(k)<n(k)\) for all k. We also restrict to protocols that proceed in a polynomial number of rounds. That is, in the following, a “protocol” is one in which each party halts after at most p(k) activations, where p(k) is a polynomial that depends only on the protocol. (We do not, however, require the parties to be computationally limited.) We stress that a protocol in which the honest parties run in polynomial time automatically has a polynomial number of rounds; the restriction to a polynomial number of rounds is thus a very weak one.
Theorem 15
There is no nontrivial statistically or perfectly universally composable protocol for \((m\rightarrow n)\)-coin toss extension that proceeds in a polynomial number of rounds.
Proof
Assume for contradiction that \(\pi \), using \(\mathsf {CT}_{m}\), is a statistically universally composable implementation of \(\mathsf {CT}_{n}\). By Lemma 9, we may also assume that \(\pi \) satisfies the requirements from that lemma.
Note that the parties have, apart from their communication \(\mathrm{com}\), only the seed \(\omega \in \{0,1\}^{m}\) provided by \(\mathsf {CT}_{m}\) for computing their final output \(\kappa \). So we may assume that there is a deterministic function f for which \(\kappa _1=\kappa _2=f(\mathrm{com},\omega )\) with overwhelming probability.
We call a message m critical if it satisfies (6) for \(c:=\mathrm{com}\). (Remember that \(\mathrm{com}\) is the random variable describing the communication in an execution of the real protocol.)
Setting \(D_j\) Note that in any execution, a critical m is sent by at least one party. So there is a \(j\in \{1,2\}\) such that for infinitely many k, party \(P_j\) sends a critical m with probability at least 1/2. We describe a modification \(D_j\) of setting \(D_0\). In setting \(D_j\), party \(P_j\) is corrupted and simulated (honestly) inside \(\mathcal {Z}_j\). Furthermore, adversary \(\mathcal {A}_j\) simply relays all communication between this simulation inside \(\mathcal {Z}_j\) and the external machines \(P_{3-j}\) and \(\mathsf {CT}_{m}\). For supplying inputs to the simulation of \(P_j\) and to the uncorrupted \(P_{3-j}\), a simulation of \(\mathcal {Z}_0\) is employed inside \(\mathcal {Z}_j\). The situation (for \(j=1\)) is depicted in Fig. 5.
Since \(D_j\) is basically only a regrouping of \(D_0\), the random variables \(\mathrm{com}\), \(\omega \), and \(\kappa _i\) are distributed exactly as in \(D_0\), so we simply identify them with the corresponding random variables in \(D_0\). In particular, in \(D_j\), for infinitely many k, a critical message is sent by \(P_j\).
If (7) holds at some point for the first time, then \(\mathcal {Z}_j'\) tosses a coin b uniformly at random and proceeds as follows: If \(b=0\), then \(\mathcal {Z}_j'\) keeps going just as \(\mathcal {Z}_j\) would have. In particular, \(\mathcal {Z}_j'\) then lets \(P_j\) send m to \(P_{3-j}\). However, if \(b=1\), then \(\mathcal {Z}_j'\) rewinds the simulation of \(P_j\) to the point before that activation and activates \(P_j\) again with fresh randomness, thereby letting \(P_j\) send a possibly different message \(m'\). In the further proof, \(\overline{c}\), m, and M refer to these values for which (7) holds.
In any case, after having tossed the coin b once, \(\mathcal {Z}_j'\) remembers the set M from (7) and does not check (7) again. After the protocol finishes, \(\mathcal {Z}_j'\) outputs \((b,\beta )\). Here, b is as above, and \(\beta :=1\) iff \(\kappa _1=\kappa _2\in M\) and \(\beta :=0\) otherwise. (\(b,\beta :=\bot \) if (7) was never fulfilled.)
Now by our choice of j, and since a critical m fulfills (7), the coin b is tossed (i.e., \(b\ne \bot \)) with probability at least \(\frac{1}{2}\) for infinitely many k.
Also, Lemma 9 guarantees that the internal state of the parties at the time of tossing b consists only of \(\overline{c}\). So, when \(\mathcal {Z}_j'\) has chosen \(b=1\) and rewound the simulated \(P_j\), the probability that at the end of the protocol \(\kappa _1=\kappa _2\in M\) holds is the same as the probability of that event in the setting \(D_j\) under the condition that the communication \(\mathrm{com}\) begins with \({\bar{c}}\). This probability again is exactly \({\mathsf {E}}(M,{\bar{c}})\) by definition.
Similarly, when \(\mathcal {Z}_j'\) has chosen \(b=0\), the probability that at the end of the protocol \(\kappa _1=\kappa _2\in M\) holds is the same as the probability of that event in the setting \(D_j\) under the condition that the communication \(\mathrm{com}\) begins with \({\bar{c}}m\), i.e., \({\mathsf {E}}(M,{\bar{c}}m)\).
Therefore, just before \(\mathcal {Z}_j'\) chooses b (i.e., when \({\bar{c}}\) and M are already determined), the probability that at the end we will have \(\beta =1\wedge b=1\) is \(\tfrac{1}{2}{\mathsf {E}}(M,{\bar{c}})\), and the probability of \(\beta =1\wedge b=0\) is \(\tfrac{1}{2}{\mathsf {E}}(M,{\bar{c}}m)\). Therefore, the difference between these probabilities is at least \(\tfrac{1}{2}\bigl ({\mathsf {E}}(M,{\bar{c}})-{\mathsf {E}}(M,{\bar{c}}m)\bigr )\ge \frac{1}{6p(k)}\).
The contradiction We show that no simulator \(\mathcal {S}_j\) can achieve property (8) in the ideal model, where \(\mathcal {Z}_j'\) runs with \(\mathsf {CT}_{n}\) and \(\mathcal {S}_j\). To distinguish random variables during a run of \(\mathcal {Z}_j'\) in the ideal model from those in the real model, we add a tilde to a random variable in a run of \(\mathcal {Z}_j'\) in the ideal model, for example, \({\tilde{b}}\), \({\tilde{\beta }}\).
Since the protocol \(\pi \) is nontrivial, for any \(\mathcal {S}_j\) achieving indistinguishability of real and ideal model, we can assume without loss of generality that \(\mathcal {S}_j\) always delivers the outputs \({\tilde{\kappa }}_1={\tilde{\kappa }}_2=:{\tilde{\kappa }}\).
Actually, in the case of perfect security, impossibility holds even for protocols with arbitrarily many rounds. In the proof of Theorem 15, we used the assumption that the protocol has only polynomially many rounds in just one place: namely, to obtain in (6) that one party sends a message that has nonnegligible impact on the probability that \(\kappa \in M\). For perfect security, we only need that one party has some nonzero impact on that probability, so we can drop the requirement of a polynomial number of protocol rounds in the perfect case. The reasoning in the proof stays exactly the same, except that we end up with the left-hand side of (8) being nonzero instead of nonnegligible. This suffices to show that the considered protocol is not perfectly secure and thus:
Corollary 16
There is no nontrivial perfectly universally composable protocol for \((m\rightarrow n)\)-coin toss extension (the number of rounds does not matter here).
However, we do not know whether or not there is a protocol for the statistical case that proceeds in a superpolynomial number of rounds.
Note that all discussions above assume that statistical security means security with respect to computationally unbounded adversaries, simulators, and environments, i.e., machines that can implement any probabilistic function, even, for example, the halting problem or similar. Often, however, statistical security is instead defined with respect to (computationally unbounded) Turing machines, i.e., machines that can only implement computable functions. To show the above results for this case, one could try and check whether all constructions given in the proof above are indeed computable or can be replaced by computable approximations. Fortunately, however, there is an easier way, using results from [27].
Corollary 17
Say a protocol is bounded time if there is a (not necessarily small or computable) bound on the execution time of that protocol (e.g., all efficient protocols are bounded time). Let further n, m be computable functions with \(n>m\).
Then, there is no nontrivial bounded-time protocol for \((m\rightarrow n)\)-coin toss extension that proceeds in a polynomial number of rounds and that is statistically universally composable with respect to adversaries/environments/simulators that are computationally unbounded Turing machines.
Proof
[27] shows that a bounded-time protocol universally composably implements a bounded-time functionality with respect to computationally unbounded adversaries/environments/simulators if and only if it universally composably implements that functionality with respect to computationally unbounded Turing adversaries/environments/simulators. Since the n-bit and m-bit coin toss functionalities are bounded time, too (n(k) can be evaluated in finite time), a protocol contradicting this corollary would also contradict Theorem 15.\(\square \)
Similar reasoning applies to the perfect case, and we omit the details here.
5 CRS Extension
The following table summarizes for which security notions CRS extension is possible:

Security type             | Computational          | Statistical | Perfect
Standalone simulatability | Yes                    | No          | No
Universal composability   | Depends\(^\mathrm{a}\) | No          | No
5.1 The Computational Case
As already mentioned in the Introduction, [14, Proposition 7.4.8] and [14, Proposition 7.4.3] show the existence of an n-bit coin toss protocol \(\pi \) for any polynomially bounded, efficiently computable n. This makes \((m\rightarrow n)\)-CRS extension trivial: One can ignore the m-bit seed and use the protocol \(\pi \) to produce an n-bit random string, which is then used as the CRS.
In the setting of computational universal composability, the results from Sect. 3.1 carry over directly. To state these results, we first have to specify the ideal functionality \(\mathsf {CRS}\).
The following corollary shows that CRS extension is possible in the computational UC setting given sufficiently long seeds.
Corollary 18

There exists a nontrivial computationally universally composable protocol for \((m\rightarrow n)\)-CRS extension, provided that

m is polynomially large and ETD exists, or

m is superpolylogarithmic and exponentially hard ETD exists.
Proof
The proof of Theorem 7 actually shows that an n-bit coin toss can be realized from an m-bit CRS. Furthermore, from an n-bit coin toss, we can trivially realize an n-bit CRS. Thus, from an m-bit CRS, we can realize an n-bit CRS. \(\square \)
The following corollary shows that extending coin toss is impossible in the computational UC setting for short seeds.
Corollary 19
Let \(n=n(k)\) and \(m=m(k)\) be functions with \(n(k)>m(k)\ge 0\) for all k and assume that m is not superlogarithmic (i.e., \(2^{-m}\) is nonnegligible). Then, there is no nontrivial polynomial-time computationally universally composable protocol for \((m\rightarrow n)\)-CRS extension.
The proof is identical to that of Theorem 8. (Except, of course, that we have to replace the mentions of the functionality \(\mathsf {CT}\) by \(\mathsf {CRS}\) and that the environment \(\mathcal {Z}\) sends \(\mathtt {getcrs}\) instead of \(\mathtt {init}\).)
5.2 The Statistical and the Perfect Case
For superlogarithmic m and \(n>m\), Theorem 13 states that an (\(m\rightarrow n\))coin toss extension is possible with respect to statistical standalone simulatability. This is not true, however, for CRS extension. We will show that CRS extension is impossible for any length m of the seed, both for statistical and perfect security, and both for standalone simulatability and UC.
Theorem 20

There is a negligible function \(\mu \) in the security parameter such that for any (possibly unbounded) adversary corrupting one of the parties, for every security parameter k, and for every set \(M\subseteq \{0,1\}^n\), we have that the probability that the output of the honest party lies in M is at most \(2^{-n}|M|+\mu \).\(^{13}\)
We begin with a proof sketch. For contradiction, assume a protocol \(\pi \) with bounds \(\mu \) and \(\nu \) as in the statement of the theorem. Let S denote the value of the seed (the initial CRS), and let R denote the outcome of the protocol (the extended CRS).
First, we find that there is an index i such that the ith bit \(R_i\) of R is not completely determined by S (up to some negligible error). If there was no such index, each bit of R would be determined by S, and hence R could only take \(2^m\ll 2^n\) different values.
Furthermore, for a fixed value s of S, let \(\alpha _s\) denote the maximum probability that a corrupted Alice can achieve \(R_i=0\). Similarly, \(\beta _s\) denotes the maximum probability that a corrupted Bob can achieve \(R_i=1\). For any fixed s, we are in the same situation as in a coin toss protocol that has to pick a random bit \(R_i\) without using any seed at all. In this case, either Alice can enforce outcome 0 or Bob can enforce outcome 1 (Theorem 10). Thus, for all s, \(\alpha _s\approx 1\) or \(\beta _s\approx 1\). Let \(V_\alpha :=\{s:\alpha _s\approx 1\}\), i.e., \(V_\alpha \) is the set of all seeds for which Alice can enforce outcome 0. \(V_\beta \) is defined analogously.
Let \(\varDelta _\alpha \) denote the probability that in an honest execution, \(S\in V_\alpha \) and \(R_i\ne 0\). Let \(\varDelta _\beta \) denote the probability that in an honest execution, \(S\in V_\beta \) and \(R_i\ne 1\). If both \(\varDelta _\alpha \approx 0\) and \(\varDelta _\beta \approx 0\), then the value of \(R_i\) would be determined by whether \(S\in V_\alpha \) holds. But this contradicts the fact that \(R_i\) is not determined by S. Thus, \(\varDelta _\alpha \not \approx 0\) or \(\varDelta _\beta \not \approx 0\). Without loss of generality, assume \(\varDelta _\alpha \not \approx 0\), i.e., with noticeable probability, in an honest execution we have \(R_i\ne 0\), but the seed S is such that a corrupted Alice could have enforced \(R_i=0\).
Thus, a corrupted Alice can increase the bias toward 0 by the noticeable amount \(\varDelta _\alpha \) compared to the honest case. But since in the honest case the bias toward 0 is \(\frac{1}{2}\), Alice can enforce \(R_i=0\) with probability \(\frac{1}{2}+\varDelta _\alpha \). This violates the security of the protocol \(\pi \).
Thus, we have led our initial assumption to a contradiction; hence, Theorem 20 holds.
We now proceed with the full proof.
Proof
Assume for contradiction that a protocol \(\pi \) with negligible bounds \(\mu \) and \(\nu \) as in the statement of the theorem exists.
Without loss of generality, we may assume that honest parties always give output in \(\{0,1\}^{n}\) or no output. We also assume that if both parties are honest (and all messages are delivered), with probability 1, both parties give the same output or both parties give no output. The latter can be achieved by adding two additional messages at the end of the protocol where the parties compare their outputs (and give no output in the case of disagreement).
For the remainder of the proof, we fix the security parameter k.
We first make a number of simple definitions. Denote by the random variable \(S\in \{0,1\}^m\) the initial seed that is available to both protocol parties (the CRS). Let the random variable \(R\in \{0,1\}^n\cup \{\bot \}\) denote the protocol outcome, i.e., the extended CRS, in an honest protocol execution. We write \(R=\bot \) for the case that no output is given. Let \(R_i\) be the ith bit of R, with \(R_i:=\bot \) if \(R=\bot \).
Let \(b_i(s)\in \{0,1\}\) with \(b_i(s):=1\) iff \(\Pr [R_i=1\mid S=s]\ge \Pr [R_i=0\mid S=s]\). (Intuitively, \(b_i(S)\) is the most probable value of \(R_i\) given S.)
Claim 1
First, assume for contradiction that (10) does not hold for any i. Let \(f(s):=b_1(s)\Vert \dots \Vert b_n(s)\) for \(s\in \{0,1\}^{m}\), i.e., f(S) is the value of R resulting from predicting R bitwise.
But by the security of \(\pi \), we have that \(\Pr [R\in M]\le 2^{-n}|M|+\mu \le 2^{-n}2^m+\mu \le \frac{1}{2}+\mu \). Thus, we have a contradiction; hence, (10) holds and Claim 1 is shown.
Claim 2
Note the implications of Claim 2: Intuitively it states that (if \(\mu \) is small) there is a bit of the protocol output that is (to a certain extent) undetermined at the start of the protocol, even when knowing the seed S.
In the following, let i be as in Claims 1 and 2.
Claim 3
The case that \(\varDelta _\beta \ge \frac{1-2\mu }{4n}\) is handled analogously, except that now \(A^*\) corrupts Bob and achieves \(R_i=1\) with probability close to 1 for all \(s\in V_\beta \). This shows Claim 3.
From Claim 3, we immediately get Theorem 20, since no pair of negligible functions \(\mu ,\nu \) can fulfill (13). \(\square \)
From Theorem 20, we can directly derive the impossibility of statistical and perfect coin toss extension for both standalone simulatability and UC.
Footnotes
 1.
In [11], parties may not abort protocol execution without generating output. In contrast, in our setting, a party may abort at any time, for example, when detecting a cheating other party, or when it becomes clear that the overall output may be undesirable. We note that in this setting without aborts, any secure coin toss must be “fair,” in the sense that a party is guaranteed to obtain a random output.
 2.
In fact, Goldreich [14] does not use the name standalone simulatability but simply speaks about security in the malicious model. We adopt the name standalone simulatability for this paper to be able to better distinguish the different notions.
 3.
If the adversary always only corrupts parties before the start of the protocol, it is called static. Otherwise, the adversary is called adaptive. The results in this paper hold for both types of adversaries, resp., corruptions.
 4.
Or, if \(\pi \) uses a hybrid functionality, in the hybrid execution.
 5.
 6.
Here, our formalization of the coin toss functionality differs from that of [7]. They define a coin toss as a uniformly distributed common random string. In particular, their functionality does not wait for both parties to initialize the coin toss.
 7.
If the functionality now sent \(\kappa \) directly and without delay to the parties, this behavior would not be implementable by any protocol (this would basically mean that the protocol output is immediately available, even without interaction). So the functionality lets the adversary decide when to deliver \(\kappa \) to each party. Note, however, that the adversary may not in any way influence the \(\kappa \) that is delivered.
 8.
We require doubly enhanced trapdoor permutations because they are actually required by current constructions of NIZK proofs, see [15].
 9.
By the size of the circuits, we mean the total size of the circuits describing both the key generation and the domain sampling algorithm. Note that then the size of the resulting keys and the amount of randomness used by the domain sampling algorithm are also trivially bounded by s(k).
 10.
Canetti et al. [9] state their result with respect to enhanced, not doubly enhanced, trapdoor permutations. This is because, at the time, it was believed that enhanced trapdoor permutations suffice for constructing non-interactive zero-knowledge proofs. It was, however, pointed out in [15] (following an observation by Jonathan Katz) that this is not the case. In particular, we stress that [9] also actually requires the existence of doubly enhanced trapdoor permutations. In the case of a uniform CRS, in fact ETDs according to Definition 4 are required.
 11.
In particular, the randomness required to produce a message is always chosen directly before sending that message.
 12.
Note that it is crucial here that the machines do not have any secret internal state, since otherwise some protocol instances might reveal secrets that make the other instances insecure. This fact is used in the construction of the simulator below.
 13.
We bound this probability from above only since we want to allow that an honest party gives no output with high probability.
 14.
By having the same behavior, we mean that given a fixed sequence of inputs, the outputs of M and \(M'\) have the same probability distribution.
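As an aside, the delayed-delivery mechanism described in footnote 7 can be sketched in code. The following is a minimal illustration, not the paper's exact formalization: the class name and interface are invented for this sketch. The point it demonstrates is that the adversary controls only *when* \(\kappa \) reaches each party, never its value, and that \(\kappa \) is fixed once both parties have initialized.

```python
import secrets

class IdealCoinToss:
    """Illustrative sketch of an ideal m-bit coin toss functionality
    with adversarially scheduled delivery (hypothetical interface).

    Once both parties have initialized, the coins kappa are sampled
    internally and fixed. Each party only receives kappa when the
    adversary issues a deliver command for that party; the adversary
    cannot influence the value of kappa itself.
    """

    def __init__(self, m: int):
        self.m = m                  # number of common random coins
        self.initialized = set()    # parties that have initialized
        self.kappa = None           # the common coins, fixed once sampled
        self.delivered = {}         # per-party delivered output

    def init(self, party: str):
        """A party initializes the coin toss; kappa is sampled once
        both parties have done so."""
        self.initialized.add(party)
        if self.initialized == {"P1", "P2"} and self.kappa is None:
            self.kappa = secrets.randbits(self.m)

    def deliver(self, party: str):
        """Adversary's only power: scheduling when a party gets kappa.
        Returns the party's output, or None if nothing was delivered yet."""
        if self.kappa is not None:
            self.delivered[party] = self.kappa
        return self.delivered.get(party)
```

Note that `deliver` before both parties initialized returns nothing, mirroring the fact that the functionality waits for both parties, and that both parties necessarily receive the same \(\kappa \).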
Acknowledgements
This work was partially supported by the projects ProSecCo (IST-2001-39227) and SECOQC of the European Commission, by the Cluster of Excellence “Multimodal Computing and Interaction,” and by the institutional research funding IUT2-1 of the Estonian Ministry of Education and Research. Part of this work was done while the first and third authors were with the IAKS, University of Karlsruhe. Further, we thank the anonymous referees for valuable comments.
References
 1. A. Ambainis, H. Buhrman, Y. Dodis, H. Röhrig, Multiparty quantum coin flipping, in 19th Annual IEEE Conference on Computational Complexity, Proceedings of CCC ’04 (IEEE Computer Society, 2004), pp. 250–259. Online available at arXiv:quant-ph/0304112
 2. M. Backes, D. Hofheinz, J. Müller-Quade, D. Unruh, On fairness in simulatability-based cryptographic systems, in R. Küsters, J. Mitchell, editors, Proceedings of the 2005 ACM Workshop on Formal Methods in Security Engineering (ACM Press, 2005), pp. 13–22. Full version as IACR ePrint 2005/294
 3. M. Backes, B. Pfitzmann, M. Waidner, Secure asynchronous reactive systems. IACR ePrint Archive (2004). Online available at http://eprint.iacr.org/2004/082
 4. M. Bellare, J.A. Garay, T. Rabin, Distributed pseudo-random bit generators—a new way to speed-up shared coin tossing, in Fifteenth Annual ACM Symposium on Principles of Distributed Computing, Proceedings of PODC 1996 (ACM Press, 1996), pp. 191–200. Online available at http://wwwcse.ucsd.edu/users/mihir/papers/dprg.pdf
 5. M. Bellare, O. Goldreich, E. Petrank, Uniform generation of NP-witnesses using an NP-oracle. Inf. Comput. 163(2), 510–526 (2000)
 6. M. Blum, Coin flipping by telephone, in A. Gersho, editor, Advances in Cryptology: A Report on CRYPTO 81 (U.C. Santa Barbara Department of Electrical and Computer Engineering, 1981), pp. 11–15. Online available at http://www2.cs.cmu.edu/~mblum/research/pdf/coin/
 7. R. Canetti, Universally composable security: a new paradigm for cryptographic protocols, in 42nd Annual Symposium on Foundations of Computer Science, Proceedings of FOCS 2001 (IEEE Computer Society, 2001), pp. 136–145. Full version online available at http://www.eccc.unitrier.de/ecccreports/2001/TR01016/revisn01.ps
 8. R. Canetti, M. Fischlin, Universally composable commitments, in J. Kilian, editor, Advances in Cryptology, Proceedings of CRYPTO ’01, vol. 2139 of Lecture Notes in Computer Science (Springer-Verlag, 2001), pp. 19–40. Full version online available at http://eprint.iacr.org/2001/055.ps
 9. R. Canetti, Y. Lindell, R. Ostrovsky, A. Sahai, Universally composable two-party and multi-party secure computation, in 34th Annual ACM Symposium on Theory of Computing, Proceedings of STOC 2002 (ACM Press, 2002), pp. 494–503. Extended abstract; full version online available at http://eprint.iacr.org/2002/140.ps
 10. J.L. Carter, M.N. Wegman, Universal classes of hash functions. J. Comput. Syst. Sci. 18(2), 143–154 (1979)
 11. R. Cleve, Limits on the security of coin flips when half the processors are faulty, in Eighteenth Annual ACM Symposium on Theory of Computing, Proceedings of STOC 1986 (ACM Press, 1986), pp. 364–369. Online available at https://doi.org/10.1145/12130.12168
 12. D. Dolev, C. Dwork, M. Naor, Non-malleable cryptography, in Twenty-Third Annual ACM Symposium on Theory of Computing, Proceedings of STOC 1991 (ACM Press, 1991), pp. 542–552. Extended abstract; full version online available at http://www.wisdom.weizmann.ac.il/~naor/PAPERS/nmc.ps
 13. O. Goldreich, Foundations of Cryptography—Volume 1 (Basic Tools) (Cambridge University Press, 2001). Previous version online available at http://www.wisdom.weizmann.ac.il/~oded/frag.html
 14. O. Goldreich, Foundations of Cryptography—Volume 2 (Basic Applications) (Cambridge University Press, May 2004). Previous version online available at http://www.wisdom.weizmann.ac.il/~oded/frag.html
 15. O. Goldreich, Basing non-interactive zero-knowledge on (enhanced) trapdoor permutations: the state of the art (Oct. 2009). Online available at http://www.wisdom.weizmann.ac.il/~oded/PSBookFrag/nizktdp.ps
 16. O. Goldreich, L.A. Levin, A hard-core predicate for all one-way functions, in STOC ’89: Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (ACM Press, New York, NY, USA, 1989), pp. 25–32
 17. S. Goldwasser, S. Micali, Probabilistic encryption. J. Comput. Syst. Sci. 28(2), 270–299 (1984)
 18. I. Haitner, T. Holenstein, O. Reingold, S.P. Vadhan, H. Wee, Universal one-way hash functions via inaccessible entropy, in Advances in Cryptology, Proceedings of EUROCRYPT 2010 (2010), pp. 616–637
 19. D. Hofheinz, V. Shoup, GNUC: a new universal composability framework. J. Cryptology 28(3), 423–508 (2015)
 20. R. Impagliazzo, L.A. Levin, M. Luby, Pseudo-random generation from one-way functions, in Twenty-First Annual ACM Symposium on Theory of Computing, Proceedings of STOC 1989 (ACM Press, 1989), pp. 12–24. Online available at https://doi.org/10.1145/73007.73009
 21. M. Jerrum, L.G. Valiant, V.V. Vazirani, Random generation of combinatorial structures from a uniform distribution. Theor. Comput. Sci. 43, 169–188 (1986)
 22. J. Katz, C. Koo, On constructing universal one-way hash functions from arbitrary one-way functions. IACR Cryptology ePrint Archive, Report 2005/328 (2005)
 23. T. Moran, M. Naor, G. Segev, An optimally fair coin toss. J. Cryptology 29(3), 491–513 (2016)
 24. B. Pfitzmann, M. Waidner, A model for asynchronous reactive systems and its application to secure message transmission, in IEEE Symposium on Security and Privacy, Proceedings of SSP ’01 (IEEE Computer Society, 2001), pp. 184–200. Full version online available at http://eprint.iacr.org/2000/066.ps
 25. J. Rompel, One-way functions are necessary and sufficient for secure signatures, in Twenty-Second Annual ACM Symposium on Theory of Computing, Proceedings of STOC 1990 (ACM Press, 1990), pp. 387–394
 26. D.R. Stinson, Universal hash families and the leftover hash lemma, and applications to cryptography and computing. J. Combin. Math. Combin. Comput. 42, 3–31 (2002). Online available at http://www.cacr.math.uwaterloo.ca/~dstinson/papers/leftoverhash.ps
 27. D. Unruh, Relations among statistical security notions or why exponential adversaries are unlimited (2006). Available as IACR ePrint 2005/406
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.