1 Introduction

In 1986, Fiat and Shamir [FS86] proposed a general method for converting any three-round identification (ID) scheme into a signature scheme. This method quickly gained popularity both in theory and in practice, since known ID schemes (in which a sender interactively identifies himself to a receiver) are significantly simpler and more efficient than known signature schemes, and thus this heuristic gives an efficient and easy way to implement digital signature schemes.

The Fiat-Shamir method is both simple and intuitive: The public key of the signature scheme consists of a pair (pk, H), where pk is a public key corresponding to the underlying ID scheme, and H is a hash function chosen at random from a hash family. To sign a message m, compute a triplet \((\alpha ,\beta ,\gamma )\), such that \(\beta =H(\alpha ,m)\) and \((\alpha ,\beta ,\gamma )\) is an accepting transcript of the ID scheme with respect to pk.
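As a concrete illustration of this signing procedure, here is a minimal Python sketch. The `id_prover`/`id_verifier` objects and their `commit`, `respond` and `accepts` methods are hypothetical stand-ins for an arbitrary 3-round public-coin ID scheme, and SHA-256 merely stands in for the hash function H published in the public key; none of these interfaces are taken from the paper itself.

```python
import hashlib

def fs_sign(sk, message: bytes, id_prover, H=None):
    """Fiat-Shamir signing: derive the challenge by hashing (alpha, m)."""
    H = H or (lambda data: hashlib.sha256(data).digest())
    alpha, state = id_prover.commit(sk)      # prover's first ID-scheme message
    beta = H(alpha + message)                # challenge beta = H(alpha, m)
    gamma = id_prover.respond(state, beta)   # prover's answer to the challenge
    return (alpha, beta, gamma)

def fs_verify(pk, message: bytes, sig, id_verifier, H=None):
    """Accept iff beta = H(alpha, m) and (alpha, beta, gamma) is an accepting
    transcript of the ID scheme with respect to pk."""
    H = H or (lambda data: hashlib.sha256(data).digest())
    alpha, beta, gamma = sig
    return beta == H(alpha + message) and id_verifier.accepts(pk, alpha, beta, gamma)
```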

The main question is:

[Boxed question: For which hash function families is the Fiat-Shamir heuristic secure?]

Namely, for what hash function families is the signature scheme, obtained by applying the Fiat-Shamir heuristic to a secure ID scheme, secure against adaptive chosen message attacks?

The intuition for why the heuristic may be sound is that if H looks like a truly random function, and if all the adversary (i.e., impersonator) can do is use H in a black-box manner, then interacting with H is similar to interacting with the real verifier. This intuition was formalized by Pointcheval and Stern [PS96], and by follow-up works [OO98, AABN02], who proved that the Fiat-Shamir heuristic is sound in the so-called Random Oracle Model (ROM) – when the hash function is modeled by a random oracle [BR93], assuming the underlying ID scheme is sound against passive impersonation attacks.

This led to the belief that if a 2-round protocol, obtained by applying the Fiat-Shamir paradigm, is insecure, then it must be the case that the hash family used is not “secure enough”, and the hope was that there exists another hash family that is sufficiently secure. These positive results (in the ROM), together with the popularity and importance of the Fiat-Shamir heuristic, led many researchers to try to prove the security of this paradigm in the plain model (without resorting to random oracles). Unfortunately, these attempts led mainly to negative results.

Goldwasser and Kalai [GK03] proved a negative result, by constructing a (contrived) 3-round public-coin ID scheme, for which the resulting signature scheme obtained by applying the Fiat-Shamir heuristic, is insecure, no matter which hash family is used.

Extending the Fiat-Shamir Heuristic. The Fiat-Shamir heuristic can be used outside the regime of ID and signature schemes. It can be used to convert any constant-round public-coin proof system into a two-round protocol, as follows: In the first round, the verifier sends a hash function H, where H is chosen at random from a hash family; in the second round, the prover sends the entire transcript of the interactive protocol, where the verifier’s messages are computed by applying H to the communication so far.

The first work to extend the Fiat-Shamir paradigm to this regime, was the work of Micali [Mic94] on CS-proofs. We note that in this regime, the importance of the Fiat-Shamir heuristic stems from the fact that latency, caused by sending messages back and forth, is often a bottleneck in running cryptographic protocols [MNPS04, BDNP08].

The main question about this (extended) heuristic is therefore:

[Boxed question: Is there an explicit hash family for which this (extended) Fiat-Shamir heuristic is sound?]

Namely, does there exist an explicit hash family for which it is infeasible for a (computationally bounded) cheating prover, given an input outside the language and a random function H from the family, to generate an accepting transcript for the original interactive protocol (where each verifier message is computed by applying H to the communication so far)?

Barak [Bar01] gave the first negative result in the “plain model”, by constructing a constant-round public-coin protocol, such that for any hash family \(\mathcal{H}\), the resulting 2-round protocol, obtained by applying the Fiat-Shamir heuristic to this interactive protocol with respect to \(\mathcal{H}\), is not sound.Footnote 1 However, the interactive protocol constructed in [Bar01] has only computational soundness, and thus is an argument system (as opposed to a proof). This gave rise to the following question:

[Boxed question: Is the Fiat-Shamir heuristic sound when applied to interactive proofs (with statistical soundness)?]

Namely, does there exist an explicit hash family for which the transformation, when applied to an information-theoretically sound interactive proof, produces a (computationally) sound two-message argument system?

In this work, we give a positive answer to this final question (under strong computational assumptions). Before we present our results in detail, we describe previous works which attempted to answer this question.

Barak, Lindell and Vadhan [BLV06] presented a security property for the Fiat-Shamir hash function which, if realized, would imply the soundness of the Fiat-Shamir paradigm applied to any constant-round public-coin interactive proof system.Footnote 2 However, they left open the problem of realizing this security definition under standard hardness assumptions (or under any assumption beyond simply assuming that the definition holds for a given hash function).

Dodis, Ristenpart and Vadhan [DRV12] showed that under specific assumptions regarding the existence of robust randomness condensers for seed-dependent sources, the definitions of [BLV06] can be realized. However, the question of constructing such suitable robust randomness condensers was left open by [DRV12].

On the other hand, Bitansky et al. [BDG+13] gave a negative result. They showed that soundness of the Fiat-Shamir paradigm, even when applied to interactive proofs, cannot be proved via a black-box reduction to any so-called falsifiable assumption, a notion defined by Naor [Nao03].Footnote 3, Footnote 4

Correlation Intractable Hash Functions. Our results can be cast more generally in the language of correlation intractability, a notion defined in the seminal work of Canetti, Goldreich and Halevi [CGH04].

Roughly speaking, a correlation intractable function family is one for which it is infeasible to find input-output pairs that satisfy some “rare” relation. More precisely, a binary relation R is said to be evasive if for every value x only a negligible fraction of the y values satisfy \((x,y)\in R\). A function family \(F=\{f_s\}\) is correlation intractable if for every evasive relation R it is computationally hard, given a description of a random function \(f_s\in F\), to find a value x such that \((x,f_s(x))\in R\).
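For concreteness, the security experiment implicit in this definition can be written out as follows; `sample_seed`, `f`, `adversary` and `relation` are abstract placeholders (with the relation assumed to be evasive), not concrete objects from the paper.

```python
def correlation_intractability_game(sample_seed, f, adversary, relation):
    """One run of the correlation-intractability experiment (illustrative).

    The adversary receives the seed s, i.e. a full description of f_s,
    and wins if it outputs an x whose image satisfies the evasive relation.
    """
    s = sample_seed()                 # random member f_s of the family F
    x = adversary(s)                  # adversary picks an input, seeing s
    return relation(x, f(s, x))       # win iff (x, f_s(x)) is in R
```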

It was shown in [CGH04] that there does not exist a correlation intractable hash family whose seeds are shorter than the input length. The question of whether there exists a correlation intractable function family whose seeds are longer than the input remained open. Very recently, [CCR15] constructed a function family that is correlation intractable with respect to all relations that are computable in a-priori bounded polynomial complexity (under computational assumptions).

In this work, we construct a correlation intractable hash family with respect to all relations (under computational assumptions). We provide a more detailed comparison between our work and that of [CCR15] after we present our result more formally, below.

1.1 Our Results

In this work, we construct a hash family, and prove that the Fiat-Shamir paradigm is sound w.r.t. this hash family, when applied to interactive proofs (as opposed to arguments). We also show that the family is correlation intractable. Both results are shown under the following three cryptographic assumptions:

1. The existence of \(2^n\)-secure indistinguishability obfuscation \(\mathsf{iO}\), where \(2^n\) is the domain size of the functions being obfuscated.Footnote 5

    Recently, several constructions of \(\mathsf{iO}\) obfuscation were proposed, starting with the work of Garg et al. [GGH+13]. However, to date, none of these constructions are known to be provably secure under what is known as a complexity assumption [GK16] or more generally a falsifiable assumption [Nao03]. We mention that [GLSW14] provided a construction and proved its security under the subgroup elimination assumption, which is a complexity assumption (and in particular is a falsifiable assumption). However, this assumption has been refuted in all candidate multi-linear groups.

2. The existence of a \(2^n\)-secure puncturable pseudo-random function (PRF) family \(\mathcal{F}\), where \(2^n\) is the domain size.

    Puncturable PRFs were defined in [BW13, BGI14, KPTZ13]. The PRF family of [GGM86] is a puncturable PRF family, and thus \(2^n\)-secure puncturable PRFs can be constructed from any sub-exponentially secure one-way function.

3. The existence of an exponentially secure input-hiding obfuscation \(\mathsf{hideO}\) for the class of multi-bit point functions \(\{\mathcal{I}_{n,k}\}\).

    The class \(\{\mathcal{I}_{n,k}\}\) consists of functions of the form \(I_{\alpha ,\beta }\) where \(|\alpha |=n\) and \(|\beta |=k\), and where \(I_{\alpha ,\beta }(x)=\beta \) for \(x=\alpha \) and \(I_{\alpha ,\beta }(x)=0\) otherwise. An obfuscation for this class is said to be input-hiding with T-security if any poly-size adversary that is given an obfuscation of a random function \(I_{\alpha ,\beta }\) in this family, guesses \(\alpha \) with probability at most \(T^{-1}\). Note that we assume hardness for a distribution where the value \(\beta \) may be correlated with \(\alpha \) and furthermore, it may be computationally difficult to find \(\beta \) from \(\alpha \).

For our construction we require T that is roughly equal to \(2^n \cdot \mu \), where \(\mu \) is the soundness error of the underlying proof system. For example, if we start off with an interactive proof with soundness error \(2^{-n^\epsilon }\) (where n is an upper bound on the length of prover messages), then we require roughly \(T=2^{n-n^\epsilon }\). For constructing correlation intractable functions, \(\mu \) is the “evasiveness” of the relation R. That is, for every value x, the fraction of y’s satisfying \((x,y) \in R\) is at most \(\mu \).

    This assumption was considered in [CD08, BC14], who also provided a candidate construction based on a strong variant of the DDH assumption (we elaborate on this in Sect. 2.4).Footnote 6 See further discussion on various notions of point function obfuscation in [BS16].

We emphasize that we do not assume security of the multi-bit point function obfuscation with auxiliary input. Indeed, security with auxiliary input is known to be problematic, and, as was shown by Brzuska and Mittelbach [BM14], if \(\mathsf{iO}\) obfuscation exists then multi-bit point function obfuscation with auxiliary inputs does not exist. We do not allow auxiliary information, and we only assume input-hiding (against exponential-time adversaries) for a random function from the family (rather than black-box worst-case security).

Theorem 1

(Informally Stated, see Theorem 4 ). Under the assumptions above, for any constant-round public-coin interactive proof \(\varPi \), the resulting 2-message argument \(\varPi ^{\mathsf {FS}}\), obtained by applying the Fiat-Shamir paradigm to \(\varPi \) with the function family \(\mathsf{iO}(\mathcal F)\), is sound.

This theorem provides a general-purpose transformation for reducing interaction in interactive proof systems. Beyond our primary motivation of studying the security of the Fiat-Shamir transformation (and its implications to zero-knowledge proofs), the secure transformation can also serve as an avenue for obtaining new public-coin 2-message argument systems (often referred to as publicly-verifiable non-interactive arguments). For example, it can be applied to the interactive proofs of [RRR16] to obtain arguments for bounded-space polynomial-time computations, with small communication and almost-linear-time verification. We note, however, that prior works [BGL+15] have shown how to construct such arguments for general polynomial-time computations using subexponential \(\mathsf{iO}\) and one-way functions (without the need for multi-bit point function obfuscation). Nonetheless, one advantage of Theorem 1 is that it can be applied to any interactive proof, which may give more efficient arguments for specific languages in P and for languages outside of P.

Cast in the language of correlation intractability, we prove:

Theorem 2

(Informally Stated). Under the assumptions above, the function family \(\mathsf{iO}(\mathcal F)\) is correlation intractable.

Here and throughout this work \(\mathsf{iO}(\mathcal {F})\) refers to an \(\mathsf{iO}\) obfuscation of a program that computes the PRF, using a hardwired random seed.
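In code, sampling a single hash function from the family \(\mathsf{iO}(\mathcal {F})\) would look roughly as follows; `iO`, `prf_keygen` and `prf_eval` are abstract placeholders for the assumed obfuscator and puncturable PRF, not concrete library calls.

```python
def make_fs_hash(iO, prf_keygen, prf_eval, security_param):
    """Sample a hash function from iO(F): obfuscate the circuit that
    evaluates the PRF with a freshly drawn, hardwired random seed."""
    s = prf_keygen(security_param)      # random PRF seed, hardwired below

    def prf_circuit(x):
        return prf_eval(s, x)           # the circuit C_s(x) = f_s(x)

    # The verifier publishes h = iO(C_s); only the obfuscation leaves this scope.
    return iO(prf_circuit)
```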

Remark 1

Although outside the scope of this paper, we note that this transformation from interactive proofs to 2-message arguments preserves some secrecy guarantees.

In particular, it is easy to see that the Fiat-Shamir paradigm always preserves witness indistinguishability. Namely, if the underlying interactive proof is witness indistinguishable then the resulting 2-message argument, obtained by applying the Fiat-Shamir method with respect to any function family, is also witness indistinguishable. Loosely speaking, this follows from the fact that witness indistinguishability is defined to hold with respect to any cheating (poly-size) verifier.

Moreover, we claim that the Fiat-Shamir paradigm, applied with our function family \(\mathsf{iO}(\mathcal {F})\), preserves honest-verifier zero-knowledge. Loosely speaking, this (non-trivial) claim follows from the following argument: To simulate the 2-message argument with respect to some input x, first use the simulator for the interactive proof to obtain a simulated transcript \((m_1,r_1,\ldots ,m_c,r_c,m_{c+1})\). Note that this transcript may not be consistent with any hash function from the family. To obtain a simulated transcript for the 2-message argument, we simulate the verifier as sending the \(\mathsf{iO}\) of a randomly chosen PRF function \(f_s\leftarrow \mathcal {F}\), punctured at the points \(m_1, (m_1,r_1,m_2),\ldots , (m_1,r_1,\dots ,m_{c-1},r_{c-1},m_c)\), with the values \(r_1,r_2,\ldots ,r_c\) hardwired for these points (respectively). Standard \(\mathsf{iO}\) techniques can be used to argue that this obfuscated circuit is indistinguishable from \(\mathsf{iO}(f_s)\).

As we discuss next, Theorem 1 settles a long-standing open problem about zero-knowledge proofs.

Impossibility of Constant-Round Public-Coin Zero-Knowledge. Hada and Tanaka [HT98] and Dwork et al. [DNRS99] observed an intriguing connection between the security of the Fiat-Shamir paradigm and the existence of certain zero-knowledge protocols. In particular, if there exists a constant-round public-coin zero-knowledge proof for a language outside \(\mathsf {BPP}\), then the Fiat-Shamir paradigm is not secure when applied to this zero-knowledge proof.Footnote 7 Intuitively, this follows from the following observation: Consider the cheating verifier that behaves exactly like the Fiat-Shamir hash function. The fact that the protocol is zero-knowledge implies that there exists a simulator who can simulate the view in an indistinguishable manner. Thus, for elements in the language the simulator generates accepting transcripts. The simulator cannot distinguish between elements in the language and elements outside the language (since the simulator runs in poly-time and the language is outside of \(\mathsf {BPP}\)). In addition, the protocol is public-coin, which implies that the simulator knows whether the transcript is accepted or not. Hence, it must be the case that the simulator also generates accepting transcripts for elements that are not in the language, which implies that the Fiat-Shamir paradigm is not secure.

Thus, Theorem 1, combined with [DNRS99, Theorem 5.4] implies the following corollary.

Corollary 1

Under the assumptions above, there does not exist a constant-round public-coin zero-knowledge proof with negligible soundness for languages outside \(\mathsf {BPP}\).

We emphasize that the above negative result not only rules out black-box simulation, but also rules out non-black-box simulation. Moreover, as pointed out by [DNRS99], this negative result actually rules out even extremely weak notions of zero-knowledge, which they call ultra weak zero knowledge (see [DNRS99, Sect. 5]).

In particular, this corollary implies that (under the assumptions above) parallel repetition of Blum’s Hamiltonicity protocol for \(\mathsf {NP}\) [Blu87] is not zero-knowledge. Previously it was not known whether (in general) parallel repetition preserves zero-knowledge. Our result shows that it does not (under the assumptions above).

The existence of constant-round public-coin zero-knowledge proofs has been a long-standing open question (see, e.g., [GO94, GK96, KPR98, Ros00, CKPR02, BLV06, BGGL01, BL04, Rey01]). For black-box zero-knowledge proofs (which means that the simulator only uses the verifier as a black-box), the work of Goldreich and Krawczyk [GK96] ruled out constant-round public-coin protocols (for languages outside of \(\mathsf {BPP}\)). It is known, however, that non black-box techniques can be quite powerful in the context of zero-knowledge [Bar01]. Under the assumptions stated above, our work rules out any constant-round public-coin zero knowledge proof (even non black-box ones).

We note that even for those who are skeptical about the obfuscation assumptions we make, this corollary implies that finding a constant-round public-coin zero-knowledge proof requires overcoming technical barriers, and in particular requires disproving the existence of sub-exponentially secure \(\mathsf{iO}\) obfuscation, or the existence of exponentially secure input-hiding obfuscation for the class of multi-bit point functions (or, less likely, disproving the existence of sub-exponentially secure one-way functions).

Comparison to Concurrent Works

Comparison to [CCR15]. As mentioned above, in a concurrent and independent work, Canetti et al. [CCR15] construct a correlation intractable function family that withstands all relations computable in a-priori bounded polynomial complexity. More specifically, they construct a function family that is correlation intractable with respect to all evasive relations that can be computed in time p, for any a priori polynomial p, where the size of the functions in the family grows with p.

We note that this result does not have any implications to the security of the Fiat-Shamir paradigm, since to prove the security of this paradigm we need a correlation intractable ensemble for relations that cannot be computed in polynomial time. Moreover, we note that since the size of the functions grows with p, complexity-leveraging techniques do not seem to apply here.

As mentioned above, our result on the security of the Fiat-Shamir paradigm can be cast more generally in the language of correlation intractability. In particular, the hash family that we construct, and with which we prove the security of the Fiat-Shamir paradigm, is correlation intractable (with respect to all relations) under our assumption stated above.

In terms of the assumptions used, [CCR15] assume the existence of sub-exponentially secure indistinguishability obfuscation, the existence of a sub-exponentially secure puncturable PRF family, and the existence of input-hiding obfuscation for the class of evasive functions [BBC+14]. Comparing to the assumptions we make in this work, we also make the first two assumptions. However, we assume input-hiding obfuscation only for multi-bit point functions (a significantly smaller family compared to general evasive functions). On the other hand, we require an exponentially secure input-hiding obfuscation, whereas their work only needs polynomial-time hardness of the input-hiding obfuscation.

Comparison with [MV16]. In an additional independent and concurrent work, Mittelbach and Venturi [MV16] showed a hash function for which the Fiat-Shamir transformation is secure for a very particular class of protocols. The class of protocols that they consider does not, in itself, include any previously-studied protocols. However, [MV16] show an additional transformation for 3-message protocols (on top of Fiat-Shamir) that works when the first message in the underlying 3-message protocol is independent (as a function) of the input. Mittelbach and Venturi also show that their transformation, which is based on indistinguishability obfuscation, maintains zero-knowledge, and can be used to obtain signature schemes and \(\textsf {NIZK}\)s.

In contrast to [MV16], our primary motivation and goal is showing that the Fiat-Shamir transformation can be used to reduce interaction while preserving soundness. Reducing the interaction in cryptographic protocols and particularly showing that the Fiat-Shamir transform can be proved sound has been a central and widely-studied question in the cryptographic literature. We emphasize that the [MV16] result does not yield a method for reducing rounds while preserving soundness.Footnote 8

1.2 Overview

Throughout this overview we focus on proving the security of the Fiat-Shamir paradigm, when applied to 3-round public-coin interactive proofs. The more general case, of any constant numberFootnote 9 of rounds, is then proved by induction on the number of rounds (we refer the reader to Sect. 4 for details). Consider any 3-round proof \(\varPi \) for a language L. Denote the transcript by \((\alpha ,\beta ,\gamma )\) where \(\alpha \) is the first message sent by the prover, \(\beta \) is the random message sent by the verifier, and \(\gamma \) is the final message sent by the prover. Fix any \(x\notin L\). The fact that \(\varPi \) is a sound proof means that for every \(\alpha \), for most of the verifier’s messages \(\beta \), there does not exist \(\gamma \) that makes the verifier accept.

The basic idea stems from the original intuition for why the Fiat-Shamir is secure, which is that if we use a hash function H that looks like a truly random function, then all the prover can do is use H in a black-box manner, in which case interacting with H is similar to interacting with the real verifier, and hence security follows.

The first idea that comes to mind is to choose the hash function randomly from a pseudo-random function (PRF) family. However, the security guarantee of a PRF is that given only black-box access to a random function f in the PRF family, one cannot distinguish it from a truly random function. No guarantees are given if the adversary is given a succinct circuit for computing f.

Obfuscation to the Rescue. A natural next step is to try to obfuscate f, in the hope that whatever can be learned given the obfuscation of f can also be learned from black-box access to f. However, this requires virtual-black-box (VBB) security, and VBB obfuscation is known not to exist [BGI+12]. Moreover, there are specific PRF families for which VBB obfuscation is impossible [BGI+12]. Further obstacles to VBB obfuscation of PRFs and, more generally, functions with high pseudo-entropy (w.r.t. auxiliary input) are given in [GK05, BCC+14]. Given these obstacles to achieving VBB obfuscation, could we hope to prove security using relaxed notions of obfuscation, such as \(\mathsf{iO}\) obfuscation? The question is:

[Boxed question: Is \(\mathsf{iO}\) obfuscation strong enough to prove the security of the Fiat-Shamir paradigm?]

It is well known that \(\mathsf{iO}\) obfuscation is not strong enough to prove the security of the Fiat-Shamir paradigm when applied to computationally sound interactive arguments. Indeed, the Fiat-Shamir paradigm is known to be insecure when applied to arguments as opposed to proofs.Footnote 10 In contrast, we show that \(\mathsf{iO}\) obfuscation (together with additional assumptions) is strong enough to prove security when the Fiat-Shamir paradigm is applied to interactive proofs (rather than arguments).

For proving security of the Fiat-Shamir paradigm for proofs, consider a cheating prover for the transformed protocol \(\varPi ^{\mathsf {FS}}\), who receives the obfuscation \(\mathsf{iO}(f_s)\) of a pseudo-random function \(f_s\). Since \(f_s\) is a PRF, we know that there will only be a small set \(\mathsf {Bad}_s\) of inputs \(\alpha \) (corresponding to the prover’s first message in the proof \(\varPi \)), for which the communication prefix \((\alpha ,f_s(\alpha ))\) can lead the verifier in the interactive proof to accept (i.e. \(\alpha \)’s for which there exists \(\gamma \) s.t. \((\alpha ,f_s(\alpha ),\gamma )\) is an accepting transcript).

To show the security of the resulting protocol, we now want to claim that the obfuscation hides this (small) set \(\mathsf {Bad}_s\) of inputs, and that a cheating prover \(P^*\) cannot find any input \(\alpha \in \mathsf {Bad}_s\). Note, however, that \(\mathsf{iO}\) obfuscation only guarantees that one cannot distinguish between the obfuscation of two functionally equivalent circuits of the same size, and it does not give any hiding guarantees.

Puncturable PRFs to the Rescue? As mentioned above, \(\mathsf{iO}\) obfuscation does not immediately seem to give any hiding guarantees. Nonetheless, starting with the beautiful work of Sahai and Waters [SW14], \(\mathsf{iO}\) has proved remarkably powerful in the construction of a huge variety of cryptographic primitives. A basic technique used in order to get a hiding guarantee from \(\mathsf{iO}\) obfuscation, as pioneered in [SW14], is to use it with a puncturable PRF family.

A puncturable PRF family is a PRF family that allows the “puncturing” of the seed at any point \(\alpha \) in the domain of f. Namely, for any point \(\alpha \) in the domain, and for any seed s of the PRF, one can generate a “punctured” seed, denoted by \(s\{\alpha \}\). This seed allows the computation of \(f_s\) anywhere in the domain, except at point \(\alpha \), with the security guarantee that for a random seed s chosen independently of \(\alpha \), the element \(f_s(\alpha )\) looks (computationally) random given \((s\{\alpha \},\alpha )\). The security of \(\mathsf{iO}\) obfuscation guarantees that one cannot distinguish between \(\mathsf{iO}(s)\) and \(\mathsf{iO}(s\{\alpha \},\alpha ,f_s(\alpha ))\),Footnote 11 which together with the security of the puncturable PRF, implies that one cannot distinguish between \(\mathsf{iO}(s)\) and \(\mathsf{iO}(s\{\alpha \},\alpha ,u)\) for a truly random output u. Thus, we managed to use \(\mathsf{iO}\), together with the puncturing technique, to generate a circuit for computing \(f_s\) that hides the value of \(f_s(\alpha )\). We emphasize that this technique crucially relies on the fact that the punctured point \(\alpha \) is independent of the seed s, and hence as a result \(f_s(\alpha )\) is computationally random.
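The “punctured program” used in this technique is easy to write down explicitly. In the following illustrative sketch, `eval_punctured` stands in for the \(\mathsf {eval}\) procedure of the puncturable PRF (defined formally in Sect. 2.2); the circuit is functionally identical to \(f_s\) when `value` equals \(f_s(\alpha )\).

```python
def punctured_program(punctured_key, alpha, value, eval_punctured):
    """Circuit built from (s{alpha}, alpha, value): agrees with f_s everywhere
    when value = f_s(alpha), yet can be described without the full seed s."""
    def circuit(x):
        if x == alpha:
            return value                           # hardwired output at the punctured point
        return eval_punctured(punctured_key, x)    # f_s(x) for every x != alpha
    return circuit
```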

It is natural to try and use obfuscated puncturable PRFs to show security of the Fiat-Shamir paradigm. Consider the following naive (and flawed) analysis, which loosely speaking proceeds in three steps: Suppose that there exists a poly-size cheating prover \(P^*\) that convinces the verifier to accept \(x\notin L\). Recall that we denote transcripts by \((\alpha ,\beta ,\gamma )\). The (statistical) soundness of \(\varPi \) implies that for every \(\alpha \), for most of the verifier’s messages \(\beta \), there does not exist \(\gamma \) that makes the verifier accept. Consider the (evasive) relation \(R=\{(\alpha ,\beta ): \exists \gamma \text{ s.t. } V(x,\alpha ,\beta ,\gamma )=1\}\). Suppose that the cheating prover \(P^*\), given \(\mathsf{iO}(s)\), outputs \(\alpha \) such that \((\alpha ,f_s(\alpha ))\in R\), with non-negligible probability.

1. Puncture the PRF at a random point \(\alpha ^*\) s.t. \(\alpha ^* \in \mathsf {Bad}_s\), and send the obfuscation \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,f_s(\alpha ^*))\) to the cheating prover \(P^*\). Note that this does not change the functionality.

    Therefore, we can use the (sub-exponential) security of \(\mathsf{iO}\) to argue that the cheating prover \(P^*\) cannot tell where we punctured the PRF, and still succeeds with non-negligible probability. In particular, taking M to be the expected number of \(\alpha \)’s such that \((\alpha ,f_s(\alpha ))\in R\), we have that \(P^*\) outputs \(\alpha ^*\) with probability \({\approx }1/M\) (up to \(\text {poly}(n)\) factors).Footnote 12

2. Next, we want to use the (sub-exponential) security of the puncturable PRF to argue that the cheating prover \(P^*\) cannot distinguish between \((s\{\alpha ^*\},\alpha ^*,f_s(\alpha ^*))\) and \((s\{\alpha ^*\},\alpha ^*,\beta ^*)\) where \((\alpha ^*,\beta ^*)\) is random in R. Thus, given \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,\beta ^*)\) the cheating prover \(P^*\) still outputs \(\alpha ^*\) with probability \({\approx }1/M\) (up to \(\text {poly}(n)\) factors).

3. In the final step, we argue that \(\alpha ^*\) is close to uniform (for an appropriate modification of the original protocol) and independent of s. Thus, given \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,\beta ^*)\), the cheating prover \(P^*\) outputs \(\alpha ^*\) with probability \({\approx }1/M\) (up to \(\text {poly}(n)\) factors), where \(\alpha ^*\) is close to truly random. We want to argue that this contradicts the (sub-exponential) security of \(\mathsf{iO}\).

Unfortunately, the argument sketched above is doubly-flawed. In particular, the arguments in Step (2) and Step (3) are simply false. In Step (2) we start with a distribution where \(f_s\) is punctured at a point \(\alpha ^*\) for which \((\alpha ^*,f_s(\alpha ^*))\) is not (computationally) random, and in fact the choice of \(\alpha ^*\) depends on the seed s. We want to argue that this is indistinguishable from the case where we pick \((\alpha ^*,\beta ^*)\) randomly in R, and then puncture at \(\alpha ^*\). It is not a-priori clear why the puncturable PRF or \(\mathsf{iO}\) would guarantee this indistinguishability. Indeed, the functions generated by these two distributions can be distinguished with some advantage by simply counting the number of input-output pairs that are in R.

Nevertheless, in our analysis (see Lemma 1) we manage to argue that the cheating prover \(P^*\), given \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,\beta ^*)\) where \((\alpha ^*,\beta ^*)\) is random in R, still outputs \(\alpha ^*\) with probability significantly higher than \(1/2^n\) (i.e., significantly higher than guessing). Indeed, \(P^*\) still outputs \(\alpha ^*\) with probability \({\approx }1/M\) (up to \(\text {poly}(n)\) factors).

We next move to the flaw in Step (3). The problem here is that puncturing at the point \(\alpha ^*\) does not at all hide \(\alpha ^*\). It is also not clear whether the \(\mathsf{iO}\) obfuscation of the punctured seed hides \(\alpha ^*\).

Input-Hiding Obfuscation to the Rescue. We overcome this hurdle by using an exponentially secure input-hiding obfuscation to hide the punctured point.

Namely, we replace \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,\beta ^*)\) with \(\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,\beta ^*))\), where \(\mathsf{hideO}\) is an exponentially secure input hiding obfuscator, and where we did not change the functionality of the circuit; i.e. the circuit on input x first runs \(\mathsf{hideO}(\alpha ^*,\beta ^*)\) to check if \(x=\alpha ^*\); if so it outputs \( \beta ^*\) and otherwise it outputs \(f_s(x)\). The security of \(\mathsf{iO}\) implies that \(P^*(\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,\beta ^*)))\) outputs \(\alpha ^*\) with probability 1 / M (up to \(\text {poly}(n)\) factors).
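A minimal sketch of this hybrid circuit, for illustration only: `seed`, `hidden_point` and `prf_eval` stand in for the PRF seed s, the obfuscation \(\mathsf{hideO}(\alpha ^*,\beta ^*)\), and PRF evaluation, respectively, and the degenerate case \(\beta ^*=0\) is ignored.

```python
def hybrid_circuit(seed, hidden_point, prf_eval):
    """The circuit obfuscated in iO(s, hideO(alpha*, beta*)): it carries the
    full seed s together with an input-hiding obfuscation of I_{alpha*, beta*}."""
    def circuit(x):
        y = hidden_point(x)        # equals beta* if x == alpha*, and 0 otherwise
        if y != 0:
            return y               # answer with the hardwired value at alpha*
        return prf_eval(seed, x)   # everywhere else, behave exactly like f_s
    return circuit
```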

It remains to note that s is independent of \((\alpha ^*,\beta ^*)\), and hence we conclude that there exists a poly-size adversary that given \(\mathsf{hideO}(\alpha ^*,\beta ^*)\) outputs \(\alpha ^*\) with probability 1 / M (up to \(\text {poly}(n)\) factors). In the last step we replace the distribution of \((\alpha ^*,\beta ^*)\) with a distribution where \(\alpha ^*\) is chosen uniformly at random from \(\{0,1\}^n\) and \(\beta ^*\) is chosen at random such that \((\alpha ^*,\beta ^*)\in R\) and prove that still there exists a poly-size adversary that given \(\mathsf{hideO}(\alpha ^*,\beta ^*)\) (where \((\alpha ^*,\beta ^*)\) is according to the new distribution) outputs \(\alpha ^*\) with probability 1 / M (up to \(\text {poly}(n)\) factors). This contradicts the exponential security of the input-hiding obfuscator \(\mathsf{hideO}\).

Remark 2

We note that the input-hiding obfuscator was only used in the security analysis. It plays no role in the construction itself. This is similar to some other recent uses of indistinguishability obfuscation in the literature.

We note that the idea of using input-hiding obfuscation to hide the punctured point, was also used in [BM14]. However, as opposed to this work, they relied on the obfuscation being secure against auxiliary inputs.

2 Preliminaries

2.1 Indistinguishability

Definition 1

For any function \(T:\mathbb {N}\rightarrow \mathbb {N}\) and for any function \(\mu :\mathbb {N}\rightarrow [0,1]\), we say that \(\mu =\text {negl}(T)\) if for every constant \(c>0\) there exists \(K\in \mathbb {N}\) such that for every \(k\ge K\),

$$ \mu (k)\le T(k)^{-c}. $$

Definition 2

Two distribution families \(\mathcal {X}=\{\mathcal {X}_\kappa \}_{\kappa \in \mathbb {N}}\) and \(\mathcal {Y}=\{\mathcal {Y}_\kappa \}_{\kappa \in \mathbb {N}}\) are said to be T-indistinguishable (denoted by \(\mathcal {X}\mathop {\approx }\limits ^{T}\mathcal {Y}\)) if for every circuit family \(D=\{D_\kappa \}_{\kappa \in \mathbb {N}}\) of size \(\text {poly}(T(\kappa ))\),

$$\text {Adv}_{D}^{\mathcal {X}, \mathcal {Y}}(T) \mathop {=}\limits ^{\mathsf {def}} \left| \Pr [D(x) =1] - \Pr [D(y) =1] \right| = \text {negl}(T(\kappa )),$$

where the probabilities are over \(x\leftarrow \mathcal {X}_\kappa \) and over \(y\leftarrow \mathcal {Y}_\kappa \).

2.2 Puncturable PRFs

Our construction uses a puncturable pseudo-random function (PRF) family [BW13, BGI14, KPTZ13, SW14] that is \(2^n\)-secure (where n is the input length); see the definitions below.

Definition 3

(T-Secure PRF [GGM86]). Let \(m=m(\kappa )\), \(n=n(\kappa )\) and \(k=k(\kappa )\) be functions of the security parameter \(\kappa \). A PRF family is an ensemble \(\mathcal {F}= \{ \mathcal {F}_{\kappa } \}_{\kappa \in \mathbb {N}}\) of function families, where \(\mathcal {F}_{\kappa } = \{f_s : \{0,1\}^n \rightarrow \{0,1\}^k\}_{s \in \{0,1\}^m}\). The PRF \(\mathcal {F}\) is T-secure, for \(T=T(\kappa )\), if for every \(\text {poly}(T)\)-size (non-uniform) adversary \(\text {Adv}\):

$$\left| \Pr [\text {Adv}^{f_s}(1^\kappa )=1] - \Pr [\text {Adv}^f(1^{\kappa })=1] \right| = \text {negl}(T(\kappa )),$$

where \(f_s\) is a random function in \(\mathcal {F}_{\kappa }\), generated using a uniformly random seed \(s \in \{0,1\}^{m(\kappa )}\), and f is a truly random function with domain \(\{0,1\}^n\) and range \(\{0,1\}^k\).

We use \(2^{n}\)-secure PRF families in our construction (for \(k=\text {poly}(n)\)). We can construct such PRFs assuming subexponentially hard one-way functions by taking the seed length m to be a sufficiently large polynomial in n. Observe that, since the entire truth table of the function can be constructed in time \(\text {poly}(n) \cdot 2^{n}\), we get that \(2^n\)-security implies that the entire truth table of a PRF \(f_s\) is indistinguishable from a uniformly random truth table.Footnote 13

Definition 4

(T-Secure Puncturable PRF [SW14]). A T-secure family of PRFs (as in Definition 3) is puncturable if there exist \(\mathrm {PPT}\) procedures \(\mathsf {puncture}\) and \(\mathsf {eval}\) such that

1. Puncturing a PRF key \(s \in \{0,1\}^m\) at a point \(r \in \{0,1\}^n\) gives a punctured key \(s\{r\}\) that can still be used to evaluate the PRF at any point \(r' \ne r\):

    $$\forall r \in \{0,1\}^n,r'\ne r: \mathop {\Pr }\limits _{s,s\{r\} \leftarrow \mathsf {puncture}(s,r)} \left[ \mathsf {eval}(s\{r\},r') = f_s(r') \right] =1$$
2. For any fixed \(r \in \{0,1\}^n\), given a punctured key \(s\{r\}\), the value \(f_s(r)\) is pseudorandom:

    $$ (s\{r\},r,f_s(r)) \mathop {\approx }\limits ^{T(\kappa )} (s\{r\},r,u),$$

    where \(s\{r\}\) is obtained by puncturing a random seed \(s \in \{0,1\}^{m(\kappa )}\) at the point r, and u is uniformly random in \(\{0,1\}^k\).

We note that the GGM-based construction of PRFs gives a construction of \(2^{n}\)-secure puncturable PRFs from any subexponentially hard one-way function [GGM86, HILL99].
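To make the GGM-based construction concrete, here is a toy Python sketch of the tree-based PRF together with puncturing; SHA-256 is only a stand-in for a length-doubling PRG, inputs are bit-strings, and no claim is made about the concrete security of this toy instantiation.

```python
import hashlib

def prg_halves(seed: bytes):
    """Length-doubling PRG G(s) = (G_0(s), G_1(s)); SHA-256 is a stand-in only."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_eval(seed: bytes, x: str) -> bytes:
    """GGM PRF: walk down the binary tree following the bits of x."""
    node = seed
    for bit in x:
        node = prg_halves(node)[int(bit)]
    return node

def ggm_puncture(seed: bytes, alpha: str):
    """Punctured key s{alpha}: the sibling of every node on the path to alpha.
    It determines f_s everywhere except at alpha itself."""
    key, node = [], seed
    for i, bit in enumerate(alpha):
        left, right = prg_halves(node)
        other = "1" if bit == "0" else "0"
        key.append((alpha[:i] + other, right if bit == "0" else left))
        node = left if bit == "0" else right
    return key

def ggm_eval_punctured(punctured_key, x: str) -> bytes:
    """Evaluate f_s(x) from s{alpha}, for any x != alpha."""
    for prefix, node in punctured_key:
        if x.startswith(prefix):                   # x diverges from alpha at this prefix
            for bit in x[len(prefix):]:
                node = prg_halves(node)[int(bit)]
            return node
    raise ValueError("x equals the punctured point alpha")
```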

2.3 Indistinguishability Obfuscation

Our construction uses an indistinguishability obfuscator \(\mathsf{iO}\) with \(2^{-n}\) security. A candidate construction was first given in the work of Garg et al. [GGH+13].

Definition 5

(T-secure Indistinguishability Obfuscator [BGI+12]).

Let \(T: \mathbb {N} \rightarrow \mathbb {N}\) be a function. Let \(\mathbb {C}=\{\mathbb {C}_{n}\}_{n\in \mathbb {N}}\) be a family of polynomial-size circuits, where \(\mathbb {C}_{n}\) is a set of boolean circuits operating on inputs of length n. Let \(\mathsf{iO}\) be a \(\mathrm {PPT}\) algorithm, which takes as input a circuit \(C \in \mathbb {C}_n\) and a security parameter \(\kappa \in \mathbb {N}\), and outputs a boolean circuit \(\mathsf{iO}(C)\) (not necessarily in \(\mathbb {C}\)).

\(\mathsf{iO}\) is a T-secure indistinguishability obfuscator for \(\mathbb {C}\) if it satisfies the following properties:

1. Preserving Functionality: For every \(n,\kappa \in \mathbb {N}\), \(C \in \mathbb {C}_n\), \(x \in \{0,1\}^n\):

    $$(\mathsf{iO}(C,1^\kappa ))(x) = C(x).$$
2. Indistinguishable Obfuscation: For every two sequences of circuits \(\{C^1_n\}_{n\in \mathbb {N}}\) and \(\{C^2_n\}_{n\in \mathbb {N}}\), such that for every \(n\in \mathbb {N}\), \(|C^1_n|=|C^2_n|\), \(C^1_n\equiv C^2_n\), and \(C^1_n,C^2_n\in \mathbb {C}_n\), and for every polynomially-bounded function \(m:\mathbb {N} \rightarrow \mathbb {N}\) it holds that:

    $$\left( 1^{\kappa },\mathsf{iO}(C^1_{m(\kappa )},1^{\kappa }) \right) \mathop {\approx }\limits ^{T(\kappa )} \left( 1^{\kappa }, \mathsf{iO}(C^2_{m(\kappa )},1^{\kappa }) \right) .$$

2.4 Input-Hiding Obfuscation

An input-hiding obfuscator for a class of circuits \(\mathbb {C}\), as defined by Barak et al. [BBC+14], has the security guarantee that given an obfuscation of a randomly drawn circuit in the family \(\mathbb {C}\), it is hard for an adversary to find an accepting input. In our work, we consider input-hiding obfuscation for the class of multi-bit point functions. A multi-bit point function \(I_{x,y}\) is defined by an input \(x \in \{0,1\}^n\), and an output \(y \in \{0,1\}^k\). \(I_{x,y}\) outputs y on input x, and 0 on all other inputs. Informally, we assume that given the obfuscation of \(I_{x,y}\) for a uniformly random x and an arbitrary y, it is hard for an adversary to recover x.

Definition 6

(T-secure Input-Hiding Obfuscator [BBC+14]). Let \(T: \mathbb {N} \rightarrow \mathbb {N}\) be a function, and let \(\mathbb {C}=\{\mathbb {C}_{n}\}_{n\in \mathbb {N}}\) be a family of poly-size circuits, where \(\mathbb {C}_{n}\) is a set of boolean circuits operating on inputs of length n. A \(\mathrm {PPT}\) obfuscator \(\mathsf{hideO}\) is a T-secure input-hiding obfuscator for \(\mathbb {C}\), if it satisfies the preserving functionality requirement of Definition 5, as well as the following security requirement. For every poly-size (non-uniform) adversary \(\text {Adv}\) and all sufficiently large n,

$$\mathop {\Pr }\limits _{C \leftarrow \mathbb {C}_{n},\mathsf{hideO}} \left[ C(\text {Adv}(\mathsf{hideO}(C))) \ne 0 \right] \le T(n)^{-1}.$$

We emphasize that (unlike other notions of T-security used in this work), we only allow the adversary for a T-secure input hiding obfuscation to run in polynomial time. Nevertheless, depending on the function T, the definition of T-secure input hiding is quite strong. In particular, for the typical case of proof-systems with soundness error \(2^{-n^\epsilon }\) (where \(\epsilon >0\) is a constant) we will assume input-hiding obfuscation for \(T = 2^{n-n^\epsilon }\), which means that a polynomial-time adversary can only do sub-exponentially better than the trivial attack that picks random inputs until it finds an accepting input (this attack succeeds with probability \(\text {poly}(n)/2^n\)). This is also why we do not separate the security parameter from the input length (the adversary can always succeed with probability \(2^{-n}\), assuming there exists an accepting input).

We assume input-hiding obfuscation for the class of multi-bit point functions (see above), where the point x is drawn uniformly at random, and the output y is arbitrary. In particular, we do not assume that the collection \(\mathbb {C}\) of pairs (x, y) can be sampled efficiently, only that its marginal distribution on x is uniform.

Assumption 3

(T-secure Input-Hiding for Multi-Bit Point Functions). Let \(T,k: \mathbb {N} \rightarrow \mathbb {N}\) be functions. An obfuscator \(\mathsf{hideO}\) is a T-secure input-hiding obfuscator for (n, k)-multi-bit point functions if for every collection \(\mathbb {C}\) as below, \(\mathsf{hideO}\) is a T-secure input-hiding obfuscator for \(\mathbb {C}\). In the collection \(\mathbb {C}\), for every \(n\in \mathbb {N}\), every function \(I_{x,y} \in \mathbb {C}_{n}\) has \(x \in \{0,1\}^{n}, y \in \{0,1\}^{k(n)}\), and the marginal distribution of a random draw from \(\mathbb {C}_{n}\) on x is uniform.

The assumption is strong in that we do not assume that a random function in \({\mathbb {C}}\) can be sampled efficiently, or that the output y is an efficient function of the input x. This assumption was studied in [CD08, BC14]. A candidate construction (in the standard model) was provided in [CD08]. Loosely speaking, their construction is an extension of the point function obfuscation of Canetti [Can97], where the obfuscation of \(I_{x,y}\) consists of a pair of the form \((r,r^x)\), together with k pairs of the form \((r_i,r_i^{\alpha _i})\) where \(\alpha _i = x\) if \(y_i=1\) and is uniformly random otherwise. It was proved in [BC14] that this construction is secure in the generic group model, where the inversion probability is at most \(\text {poly}(n) \cdot 2^{-n}\).
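To illustrate the flavor of this candidate, here is a toy sketch of the [CD08]-style obfuscator in a tiny group (p = 23, q = 11); the parameters are purely illustrative, and a real instantiation needs a large group in which the strong DDH-type assumption of [CD08, BC14] is plausible.

```python
import secrets

# Toy parameters: p = 2q + 1 with p, q prime; the group is the order-q
# subgroup of Z_p^* (the squares mod p).  Illustration only.
P, Q = 23, 11

def rand_group_elt():
    """Uniformly random non-identity element of the order-Q subgroup."""
    g = 1
    while g == 1:
        g = pow(secrets.randbelow(P - 1) + 1, 2, P)
    return g

def obfuscate_point(x: int, y_bits):
    """hideO(I_{x,y}) in the style of [CD08]: (r, r^x) plus, for each bit y_i,
    a pair (r_i, r_i^{a_i}) with a_i = x when y_i = 1 and a_i random otherwise.
    Here x is a nonzero exponent modulo Q."""
    r = rand_group_elt()
    obf = [(r, pow(r, x, P))]
    for bit in y_bits:
        r_i = rand_group_elt()
        a_i = x if bit == 1 else (secrets.randbelow(Q - 1) + 1)
        obf.append((r_i, pow(r_i, a_i, P)))
    return obf

def eval_obfuscated(obf, z: int):
    """Returns y on input z = x, and None (standing for 0) otherwise.
    In the toy group a zero bit can be misread with probability 1/(Q-1);
    a large group makes this error negligible."""
    (r, rx), rest = obf[0], obf[1:]
    if pow(r, z, P) != rx:
        return None
    return [1 if pow(r_i, z, P) == t_i else 0 for (r_i, t_i) in rest]
```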

2.5 Interactive Proofs and Arguments

An interactive proof, as introduced by Goldwasser, Micali and Rackoff [GMR89], is a protocol between two parties, a computationally unbounded prover and a polynomial-time verifier. Both parties have access to an input x and the prover tries to convince the verifier that \(x \in L\). Formally an interactive proof is defined as follows:

Definition 7

(Interactive Proof [GMR89]). An r-message interactive proof for the language L is an r-message protocol between the verifier \(V\), which is polynomial-time, and a prover \(P\), which is computationally unbounded. We require that the following two conditions hold:

  • \(\mathbf {Completeness{:}}\) For every \(x \in L\), if \(V\) interacts with \(P\) on common input x, then \(V\) accepts with probability at least 2 / 3.

  • \(\mathbf {Soundness{:}}\) For every \(x \notin L\) and every (computationally unbounded) cheating prover strategy \(\tilde{P}\), the verifier \(V\) accepts when interacting with \(\tilde{P}\) with probability at most 1 / 3.

We say that an interactive proof is public-coin if all messages sent from \(V\) to \(P\) consist of fresh random coin tosses. Also, recall that the constants 1 / 3 and 2 / 3 are arbitrary and can be amplified by (e.g., parallel) repetition.

Interactive Arguments. An interactive argument is defined similarly to an interactive proof except that the soundness condition is only required to hold for cheating provers that run in polynomial time. We also require that the honest prover run in polynomial-time, given the witness as an auxiliary input.

Definition 8

(Interactive Argument). An r-message argument for the language \(L \in \mathsf {NP}\) is an r-message protocol between a verifier \(V\) and a prover \(P\), both of which are polynomial-time algorithms. We require that the following two conditions hold:

  • \(\mathbf {Completeness{:}}\) There exists a negligible function \(\text {negl}\) such that for every \(x \in L\), if \(V\) interacts with \(P\) on common input x, where \(P\) is given in addition an \(\mathsf {NP}\) witness w for \(x \in L\), then \(V\) accepts with probability at least \(1-\text {negl}(|x|)\).

  • \({\mathbf {Soundness{:}}}\) For every polynomial-size cheating prover strategy \(\tilde{P}\) and for every \(x \notin L\), the verifier \(V\) accepts when interacting with \(\tilde{P}\) on common input x, with probability at most \(\text {negl}(|x|)\).

We remark that in contrast to Definition 7, here we require negligible completeness and soundness errors. This is because parallel repetition does not necessarily decrease the soundness error for interactive arguments [BIN97]. We further remark that it is common to add a security parameter to the definition of argument systems so as to allow obtaining strong security guarantees even for short inputs. For simplicity of notation, however, we refrain from introducing a security parameter and note that better security guarantees for short inputs can be simply obtained by padding the input.

2.6 The Fiat-Shamir Paradigm

In this section, we recall the Fiat-Shamir paradigm. For the sake of simplicity of notation, we describe this paradigm when applied to 3-round (as opposed to arbitrary constant round) public-coin protocols. Let \(\varPi = (P, V)\) be a 3-round public-coin proof system for an \(\mathsf {NP}\) language L. We denote its transcripts by \((\alpha , \beta , \gamma )\), where \(\beta \) is the message sent by the verifier, and \(\alpha ,\gamma \) are the messages sent by the prover. We denote by n the length of \(\alpha \) (i.e., \(\alpha \in \{0,1\}^n\)), and we denote by k the length of \(\beta \) (i.e., \(\beta \in \{0,1\}^k\)). We assume that \(k\le \text {poly}(n)\) (since otherwise we can just pad).

Let \(\{\mathcal{H}_n\}_{n \in \mathbb {N}}\) be an ensemble of hash functions, such that for every \(n\in \mathbb {N}\) and for every \(h\in \mathcal{H}_n\),

$$h:\{0,1\}^n\rightarrow \{0,1\}^{k}. $$

We define \(\varPi ^{\mathsf {FS}}\), with respect to the hash family \(\mathcal{H}\) to be the 2-round protocol obtained by applying the Fiat-Shamir transformation to \(\varPi \) using \(\mathcal{H}\). A formal presentation of the “collapsed” protocol \(\varPi ^{\mathsf {FS}} = (P^{\mathsf {FS}}, V^{\mathsf {FS}})\) is in Fig. 1.

Remark 3

We emphasize that our main result is that the Fiat-Shamir paradigm in its original formulation (as presented in Fig. 1) is secure when applied to interactive proofs and when using a particular hash function (based on the assumption mentioned above).

Fig. 1. Collapsing a 3-round protocol \(\varPi = (P, V)\) into a 2-round protocol \(\varPi ^{\mathsf {FS}} = (P^{\mathsf {FS}}, V^{\mathsf {FS}})\) using \(\mathcal{H}\)
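Since the figure itself is not reproduced here, the following Python sketch captures the collapsed protocol described in the caption; `P.first`, `P.third` and `V.accepts` are hypothetical interfaces for the underlying 3-round proof, and `h` is the hash function (in our instantiation, \(\mathsf{iO}(f_s)\)) sent by the verifier in its single message.

```python
def fs_prover(x, witness, P, h):
    """P^FS: run the interactive prover, answering the verifier's (absent)
    challenge with h applied to the communication so far."""
    alpha, state = P.first(x, witness)
    beta = h(alpha)                    # the verifier's coins are replaced by h(alpha)
    gamma = P.third(state, beta)
    return (alpha, gamma)              # the prover's single message in Pi^FS

def fs_verifier(x, h, message, V):
    """V^FS: recompute beta = h(alpha) and run the original verifier's test."""
    alpha, gamma = message
    beta = h(alpha)
    return V.accepts(x, alpha, beta, gamma)
```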

3 Security of Fiat-Shamir for 3-Message Proofs

We show an instantiation of the Fiat-Shamir paradigm that is sound when it is applied to interactive proofs (as opposed to arguments). Taking n to be a bound on the message lengths of the prover in \(\varPi \), our instantiation assumes the existence of a \(2^n\)-secure indistinguishability obfuscation scheme \(\mathsf{iO}\), a \(2^{n}\)-secure puncturable PRF family \(\mathcal{F}\), and a \(2^{n}\)-secure input-hiding obfuscation for the class of multi-bit point functions \(\mathcal{I}_{n,k}\).

For clarity of exposition, we first show that our instantiation is secure for 3-round public-coin interactive proofs. This is the regime for which the Fiat-Shamir paradigm was originally suggested. We then build on the proof for the 3-message case (or rather the 4-message case, see below), and prove security for any constant number of rounds.

Theorem 4

(Fiat-Shamir for 3-message Proofs). Let \(\varPi \) be a public-coin 3-message interactive proof system with negligible soundness error. Let n be an upper bound on the input length and the length of the prover’s messages and let \(k\le \text {poly}(n)\) be an upper bound on the length of the verifier’s messages.

Assume the existence of a \(2^{n}\)-secure puncturable PRF family \(\mathcal{F}\), the existence of a \(2^{n}\)-secure Indistinguishability Obfuscation \(\mathsf{iO}\), and the existence of a secure input-hiding obfuscation for the class of multi-bit point functions \(\{\mathcal{I}_{n,k}\}\) with security \(T=2^n \cdot \text {negl}(n)\).

Then, the resulting 2-round argument \(\varPi ^{\mathsf {FS}}\), obtained by applying the Fiat-Shamir paradigm (see Fig. 1) to \(\varPi \) with the function family \(\mathsf{iO}(\mathcal F)\), is secure.

(Recall that we defined \(\mathsf{iO}(\mathcal {F})\) as the \(\mathsf{iO}\) obfuscation of a program that computes the \(\mathrm {PRF}\), using a hardwired random seed.)

In Sect. 4 we prove the security of the Fiat-Shamir paradigm when applied to any constant round interactive proof. To prove the general (constant round) case, we need to rely on a more general (and more technical) variation of Theorem 4. First, we rely on the security of the Fiat-Shamir paradigm for any 4-round interactive proof \(\varPi \) where the first message is sent by the verifier. In the transformed protocol \(\varPi ^{\mathsf {FS}}\), the first message of the verifier consists of the first message as in \(\varPi \), along with a Fiat-Shamir hash function, which will be applied to the prover’s first message. In addition, in the generalized theorem we allow the verifier in the original protocol \(\varPi \) to run in time \(2^{O(n)}\).

We state the generalized theorem below.

Theorem 5

(Theorem 4, more General Statement). Let \(\varPi \) be a 4-message public-coin interactive proof system, where the first message is sent by the verifier. Let n be an upper bound on the input lengthFootnote 14 and the lengths of the prover’s messages, let \(k\le \text {poly}(n)\) be a bound on the lengths of the verifier’s messages, let \(\mu (n) = \text {negl}(n)\) be the soundness error,Footnote 15 and assume that the verifier runs in time at most \(2^{O(n)}\).

Assume the existence of a \(2^{n}\)-secure puncturable PRF family \(\mathcal{F}\), the existence of a \(2^{n}\)-secure Indistinguishability Obfuscation \(\mathsf{iO}\), and the existence of an input-hiding obfuscation for the class of multi-bit point functions \(\{\mathcal{I}_{n,k}\}\) that is T-secure for every \(T = 2^n \cdot \mu /\nu \), where \(\nu \) is any non-negligible function.

Then the resulting 2-round argument \(\varPi ^{\mathsf {FS}}\), obtained by applying the Fiat-Shamir paradigmFootnote 16 to \(\varPi \) with the function family \(\mathsf{iO}(\mathcal F)\), is secure.

We remark that \(\mu \cdot 2^n \cdot \text {poly}(n)\) is a shorthand for a function T such that for every \(c>0\) and all sufficiently large \(n \in \mathbb {N}\) it holds that \(T(n) \ge \mu (n) \cdot 2^n \cdot n^c\).

Proof

(Proof of Theorem 5 ). Fix any 4-round interactive proof \(\varPi = (P,V)\) as claimed in the theorem statement. Let \(\mu = \text {negl}(n)\) be the soundness error of \(\varPi \).

Suppose for the sake of contradiction that there exists a poly-size cheating prover \(P^*\) who breaks the soundness of the protocol \(\varPi ^{\mathsf {FS}}\) with respect to some \(x^*\notin L\) with non-negligible probability \(\nu \). We will use \(P^*\) to eventually break the security of the input-hiding obfuscation, while using along the way the soundness of \(\varPi \) as well as the security of the PRF \(\mathcal{F}\) and Indistinguishability Obfuscator \(\mathsf{iO}\).

There must exist a choice for the verifier’s first message \(\tau \) in \(\varPi \), such that the following two conditions hold: (i) Even conditioned on the first part of the first message in \(\varPi ^{\mathsf {FS}}\) being \(\tau \), the cheating prover \(P^*\) still breaks the soundness of the protocol \(\varPi ^{\mathsf {FS}}\) on \(x^*\) with probability at least \((\nu /2)\), and (ii) even conditioned on the first message in \(\varPi \) being \(\tau \), the original protocol \(\varPi \) still has soundness error at most \((2\mu / \nu )\). Such a \(\tau \) must exist because at least a \((\nu /2)\)-fraction of the messages must satisfy condition (i) (otherwise \(P^*\) cannot break \(\varPi ^{\mathsf {FS}}\) with total probability \(\nu \)), and the fraction that do not satisfy condition (ii) must be smaller than \((\nu /2)\) (otherwise the soundness error of \(\varPi \) would exceed \(\mu \)).

Fix the verifier’s first message to always be \(\tau \) (both in the original and in the transformed protocols). We have that:

$$\begin{aligned} \mathop {\Pr }\limits _{s,\mathsf{iO}} \Big [ P^*(\tau ,\mathsf{iO}(s))=(\alpha ,\gamma ) \text{ s.t. } V(x^*,\tau ,\alpha , f_s(\alpha ),\gamma )=1 \Big ]\ge \nu /2, \end{aligned}$$
(3.1)

where \(\mathsf{iO}(s)\) refers to the \(\mathsf{iO}\) obfuscation of a random function \(f_s\) from the family \(\mathcal{F}\).

The relaxed verifier and its properties. To obtain a contradiction, we analyze a relaxed verifier \(V'\) (which is only used in the security analysis). The relaxed verifier accepts a transcript \((\alpha ,\beta ,\gamma )\) if the original verifier V would accept, or if the first \(\lceil \log (\nu /(2\mu )) \rceil \) bits of \(\beta \) are all 0 (where recall that \(\mu \) is the soundness error of \(\varPi \)).Footnote 17 In particular, whenever \(V\) accepts, the relaxed verifier \(V'\) also accepts, and so:

$$\begin{aligned} \mathop {\Pr }\limits _{s,\mathsf{iO}} \Big [ P^*(\tau ,\mathsf{iO}(s))=(\alpha ,\gamma ) \text{ s.t. } V'(x^*,\tau ,\alpha , f_s(\alpha ),\gamma )=1 \Big ]\ge \nu /2. \end{aligned}$$
(3.2)

We take \(\mu '\) to be the soundness error of the interactive proof \((P,V')\) (after \(\tau \) is fixed), which runs the relaxed verifier. Observe that by a union bound

$$\mu ' \le (2\mu /\nu ) + 2^{-\lceil {\log (\nu /(2\mu )) \rceil }} \le 4\mu /\nu ,$$

(in particular if \(\mu \) is negligible, then so is \(\mu '\)).

We define:

$$\begin{aligned} \mathsf{ACC} = \big \{(\alpha ,\beta ) : \exists \gamma \text{ s.t. } V'(x^*,\tau ,\alpha ,\beta ,\gamma )=1\big \} \end{aligned}$$

Observe that membership in \(\mathsf{ACC}\) can be computed in time \(2^{n} \cdot \text {poly}(n) = 2^{O(n)}\) by enumerating over all \(\gamma \)’s and running \(V'\). Equation (3.2) implies that there exists a poly-size adversary \(\mathcal{A}\) (that just outputs the first part of \(P^*\)’s output, with \(\tau \) hardwired) such that:

$$\begin{aligned} \mathop {\Pr }\limits _{s,\mathsf{iO}} \Big [ \Big (\mathcal{A}(\mathsf{iO}(s)),\, f_s\big (\mathcal{A}(\mathsf{iO}(s))\big ) \Big ) \in \mathsf{ACC}\Big ] \ge \nu /2. \end{aligned}$$
(3.3)

Using Eq. (3.3) we prove our main lemma.

Lemma 1

$$ \mathop {\Pr }\limits _{s,\alpha ^*,u^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*) \big )=\alpha ^* \;\Big \vert \; \big ( \alpha ^*,u^* \big ) \in \mathsf{ACC}\Big ] \ge 2^{-n-2} \cdot \nu / \mu ' $$

where \(\alpha ^*\) and \(u^*\) are uniformly distributed (in \(\{0,1\}^n\) and \(\{0,1\}^k\), respectively) and \(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*)\) refers to an \(\mathsf{iO}\) obfuscation of the program that contains the seed s punctured at the point \(\alpha ^*\), and on input \(\alpha \) first checks if \(\alpha =\alpha ^*\) and if so outputs \(u^*\) and otherwise outputs \(f_s(\alpha )\).

Proof

We prove the lemma by analyzing the probability that the event

$$\Big ( \mathcal{A}(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*))=\alpha ^*\Big ) \;\wedge \; \Big (\big ( \alpha ^*,u^* \big ) \in \mathsf{ACC}\Big ) $$

occurs.

By the exponential hardness of the puncturable PRF, and the fact that membership in \(\mathsf{ACC}\) is computable in \(2^{O(n)}\) time, we have that

$$\begin{aligned}&\mathop {\Pr }\limits _{s,\alpha ^*,u^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*)\big )=\alpha ^* \;\wedge \; (\alpha ^*,u^*) \in \mathsf{ACC}\Big ] \\&\quad \ge \mathop {\Pr }\limits _{s,\alpha ^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,f_s(\alpha ^*))\big )=\alpha ^* \;\wedge \; (\alpha ^*,f_s(\alpha ^*)) \in \mathsf{ACC}\Big ] - \text {negl}(2^n). \end{aligned}$$
(3.4)

Further applying the exponential hardness of the \(\mathsf{iO}\) scheme (and the fact that membership in \(\mathsf{ACC}\) can be decided in \(2^{O(n)}\) time), we get that:

$$\begin{aligned}&\mathop {\Pr }\limits _{s,\alpha ^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,f_s(\alpha ^*))\big )=\alpha ^* \;\wedge \; (\alpha ^*,f_s(\alpha ^*)) \in \mathsf{ACC}\Big ] \\&\quad \ge \mathop {\Pr }\limits _{s,\alpha ^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s)\big )=\alpha ^* \;\wedge \; (\alpha ^*,f_s(\alpha ^*)) \in \mathsf{ACC}\Big ] - \text {negl}(2^n). \end{aligned}$$
(3.5)

Using elementary probability theory, we have that:

$$\begin{aligned} \mathop {\Pr }\limits _{s,\alpha ^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s)\big )=\alpha ^* \;\wedge \; (\alpha ^*,f_s(\alpha ^*)) \in \mathsf{ACC}\Big ] = 2^{-n} \cdot \mathop {\Pr }\limits _{s,\mathsf{iO}} \Big [ \Big (\mathcal{A}(\mathsf{iO}(s)),\, f_s\big (\mathcal{A}(\mathsf{iO}(s))\big ) \Big ) \in \mathsf{ACC}\Big ] \ge 2^{-n} \cdot \nu /2, \end{aligned}$$

where the first equality uses the fact that \(\alpha ^*\) is uniform and independent of \((s,\mathsf{iO})\), and the last inequality is by Eq. (3.3). Thus, we have that:

$$\begin{aligned} \mathop {\Pr }\limits _{s,\alpha ^*,u^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*)\big )=\alpha ^* \;\wedge \; (\alpha ^*,u^*) \in \mathsf{ACC}\Big ] \ge 2^{-n} \cdot \nu /2 - \text {negl}(2^n) \ge 2^{-n} \cdot \nu /4. \end{aligned}$$

By the soundness of the underlying proof-system, it holds that \(\Pr _{\alpha ^*,u^*}[ (\alpha ^*,u^*) \in \mathsf{ACC}] \le \mu '\) (since otherwise a cheating prover could violate soundness by just sending a random \(\alpha ^*\)).Footnote 18

Let \(\zeta = \Pr _{s,\alpha ^*,u^*,\mathsf{iO}} \Big [ \mathcal{A}(\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*))=\alpha ^* \;\Big \vert \big ( \alpha ^*,u^* \big ) \in \mathsf{ACC}\Big ]\).

Then, by definition of conditional probability we have that

$$ \zeta = \frac{\mathop {\Pr }\limits _{s,\alpha ^*,u^*,\mathsf{iO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s\{\alpha ^*\},\alpha ^*,u^*)\big )=\alpha ^* \;\wedge \; (\alpha ^*,u^*) \in \mathsf{ACC}\Big ]}{\mathop {\Pr }\limits _{\alpha ^*,u^*}\big [ (\alpha ^*,u^*) \in \mathsf{ACC}\big ]} \ge \frac{2^{-n} \cdot \nu /4}{\mu '} = 2^{-n-2} \cdot \nu /\mu ', $$

and the lemma follows.

We are now ready to use (and break) our input-hiding obfuscator \(\mathsf{hideO}\). Lemma 1, together with the \(2^n\)-security of the \(\mathsf{iO}\) implies that

$$\begin{aligned} \mathop {\Pr }\limits _{s,\alpha ^*,u^*,\mathsf{iO},\mathsf{hideO}} \Big [ \mathcal{A}\big (\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,u^*)) \big )=\alpha ^* \;\Big \vert \; \big ( \alpha ^*,u^* \big ) \in \mathsf{ACC}\Big ] \ge \frac{1}{8} \cdot \frac{\nu }{\mu ' \cdot 2^{n}}, \end{aligned}$$
(3.6)

where \(\alpha ^*\) and \(u^*\) are uniformly distributed and \(\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,u^*))\) refers to the \(\mathsf{iO}\) obfuscation of the program that contains a seed s for a PRF (in its entirety), and the input-hiding obfuscation \(\mathsf{hideO}(\alpha ^*,u^*)\) of a multi-bit point function that on input \(\alpha ^*\) outputs \(u^*\). The program uses the input-hiding obfuscation to check if its input equals \(\alpha ^*\), and if so outputs the same value as \(\mathsf{hideO}(\alpha ^*,u^*)\). Otherwise the program behaves like the PRF.
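As a sanity check on this description, the combined program can be sketched as follows. This is a minimal sketch; hide_obf stands for the obfuscated point function \(\mathsf{hideO}(\alpha ^*,u^*)\), assumed here to return a failure symbol (modeled as None) on inputs other than \(\alpha ^*\), and eval_prf is a hypothetical interface for evaluating \(f_s\).

```python
def combined_program(s, hide_obf, eval_prf):
    """Return the circuit underlying iO(s, hideO(alpha*, u*))."""
    def program(alpha):
        out = hide_obf(alpha)        # u* if alpha == alpha*, failure symbol otherwise
        if out is not None:
            return out               # agree with hideO(alpha*, u*) on alpha*
        return eval_prf(s, alpha)    # behave like the PRF f_s everywhere else
    return program
```

Since the program uses \(\mathsf{hideO}(\alpha ^*,u^*)\) only as a black box (and contains the seed \(s\) in the clear), an adversary that is given only \(\mathsf{hideO}(\alpha ^*,u^*)\) can sample \(s\) itself and reproduce \(\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,u^*))\); this observation is used below when breaking \(\mathsf{hideO}\).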

Equation (3.6) is almost what we want: it yields an adversary that, given access to \(\mathsf{hideO}(\alpha ^*,u^*)\), produces \(\alpha ^*\) with probability \(\omega (\text {poly}(n)/2^n)\) (since \(\nu \) is non-negligible and \(\mu \) is a negligible function). The only remaining problem is that the distribution of \((\alpha ^*,u^*)\) is not quite what we need. More specifically, in Eq. (3.6) the pair \((\alpha ^*,u^*)\) is distributed uniformly conditioned on \((\alpha ^*,u^*) \in \mathsf{ACC}\), whereas we need the marginal distribution of \(\alpha ^*\) to be uniform in order to break the \(\mathsf{hideO}\) obfuscation. Using the properties of the relaxed verifier, we show that these two distributions are actually closely related.

We define the following two distributions. The distribution \(\mathcal{T}_1\) is obtained by picking a pair \((\alpha ,\beta )\) uniformly from \(\mathsf{ACC}\) (this is the distribution from which \((\alpha ^*,u^*)\) is sampled in Eq. (3.6)). The distribution \(\mathcal{T}_2\) is obtained by picking a uniformly random \(\alpha \in \{0,1\}^n\) and then a random \(\beta \) conditioned on \((\alpha ,\beta ) \in \mathsf{ACC}\) (i.e., the marginal distribution on \(\alpha \) is uniform). For \(\alpha ^* \in \{0,1\}^n\) and \(\beta ^* \in \{0,1\}^k\), we use \(\mathcal{T}_1[\alpha ^*,\beta ^*]\) and \(\mathcal{T}_2[\alpha ^*,\beta ^*]\) to denote the probability of the pair \((\alpha ^*,\beta ^*)\) under \(\mathcal{T}_1\) and under \(\mathcal{T}_2\), respectively.
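Concretely, writing \(S_\alpha = \{\beta \in \{0,1\}^k : (\alpha ,\beta ) \in \mathsf{ACC}\}\) (as in the proof of Proposition 1 below), for every pair \((\alpha ,\beta ) \in \mathsf{ACC}\):

$$\mathcal{T}_1[\alpha ,\beta ] \;=\; \frac{1}{|\mathsf{ACC}|} \;=\; \frac{1}{\sum _{\alpha ' \in \{0,1\}^n} |S_{\alpha '}|}, \qquad \mathcal{T}_2[\alpha ,\beta ] \;=\; \frac{1}{2^n} \cdot \frac{1}{|S_{\alpha }|},$$

and both probabilities are 0 for pairs outside \(\mathsf{ACC}\). (Note that \(S_\alpha \) is never empty, since \(V'\) accepts whenever the first \(\lceil \log (\nu /(2\mu )) \rceil \) bits of \(\beta \) are 0, so \(\mathcal{T}_2\) is well defined.)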

Proposition 1

For any \(\alpha ^* \in \{0,1\}^n\) and \(\beta ^* \in \{0,1\}^k\):

$$\mathcal{T}_2[\alpha ^*,\beta ^*] \ge \frac{1}{4} \mathcal{T}_1[\alpha ^*,\beta ^*]$$

Proof

For every \(\alpha ^*\), define:

$$S_{\alpha ^*} = \big \{\beta ^* \in \{0,1\}^k : (\alpha ^*,\beta ^*) \in \mathsf{ACC}\big \}.$$

By construction of the relaxed verifier \(V'\), we know that for every \(\alpha \in \{0,1\}^n\) it holds that

$$\frac{\mu }{\nu } \le \frac{ |S_{\alpha }|}{ 2^k} \le \frac{4\mu }{\nu }.$$

In particular, for any \(\alpha ,\alpha ^* \in \{0,1\}^n\):

$$|S_{\alpha }| \ge \frac{1}{4} |S_{\alpha ^*}|. $$
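The lower bound holds because \(V'\) accepts any transcript in which the first \(\lceil \log (\nu /(2\mu )) \rceil \) bits of \(\beta \) are 0; combining it with the upper bound, applied to \(\alpha ^*\), gives the displayed consequence:

$$|S_{\alpha }| \;\ge \; 2^{-\lceil \log (\nu /(2\mu )) \rceil } \cdot 2^k \;\ge \; \frac{\mu }{\nu } \cdot 2^k \;=\; \frac{1}{4} \cdot \frac{4\mu }{\nu } \cdot 2^k \;\ge \; \frac{1}{4} \, |S_{\alpha ^*}|.$$

The upper bound holds for essentially the same reason as the bound \(\Pr _{\alpha ^*,u^*}[(\alpha ^*,u^*)\in \mathsf{ACC}]\le \mu '\) above: at most a \(\mu \) fraction of \(\beta \)'s admit an accepting \(\gamma \) for the original verifier (otherwise a cheating prover could send this \(\alpha \) and violate soundness), and \(V'\) accepts at most an additional \(2^{-\lceil \log (\nu /(2\mu )) \rceil } \le 2\mu /\nu \) fraction, for a total of at most \(\mu + 2\mu /\nu \le 4\mu /\nu \).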

Now, for \((\alpha ^*,\beta ^*) \in \mathsf{ACC}\) (the case \((\alpha ^*,\beta ^*) \notin \mathsf{ACC}\) is trivial, since then \(\mathcal{T}_1[\alpha ^*,\beta ^*]=0\)), we have that:

$$\begin{aligned} \mathcal{T}_1[\alpha ^*,\beta ^*] = \frac{1}{\sum _{\alpha \in \{0,1\}^n} |S_{\alpha }|} \le \frac{4}{\sum _{\alpha \in \{0,1\}^n} |S_{\alpha ^*}|} = \frac{4}{2^n \cdot |S_{\alpha ^*}|} = 4 \cdot \mathcal{T}_2[\alpha ^*,\beta ^*] \end{aligned}$$
(3.7)

In particular, drawing \((\alpha ^*,u^*)\) from \(\mathcal{T}_2\) rather than \(\mathcal{T}_1\) can decrease the success probability of \(\mathcal{A}\) by at most a multiplicative factor of 4. Moreover, when drawing from \(\mathcal{T}_2\), the marginal distribution of \(\alpha ^*\) is uniform. Thus, Proposition 1 and Eq. (3.6) imply that there exists a poly-size adversary \(\mathcal{A}'\) (which, given \(\mathsf{hideO}(\alpha ^*,u^*)\), samples a seed \(s\) on its own, computes \(\mathsf{iO}(s,\mathsf{hideO}(\alpha ^*,u^*))\), and runs \(\mathcal{A}\) on the result), such that

$$ \mathop {\Pr }\limits _{(\alpha ^*,u^*) \leftarrow \mathcal{T}_2,\mathsf{hideO}}\big [\mathcal{A}'(\mathsf{hideO}(\alpha ^*,u^*))=\alpha ^*\big ]\ge \frac{1}{32} \cdot \frac{\nu }{\mu ' \cdot 2^{n}}, $$

where, as noted above, the marginal distribution of \(\alpha ^*\) under \(\mathcal{T}_2\) is uniform. Since \(\nu \) is a non-negligible function and \(\mu ' = O(\mu /\nu )\), this contradicts the security of the input-hiding obfuscation \(\mathsf{hideO}\).
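To spell out the parameters of the contradiction: since \(\mu ' \le 4\mu /\nu \), the success probability above satisfies

$$\frac{1}{32} \cdot \frac{\nu }{\mu ' \cdot 2^{n}} \;\ge \; \frac{1}{32} \cdot \frac{\nu }{(4\mu /\nu ) \cdot 2^{n}} \;=\; \frac{1}{128} \cdot \frac{\nu ^{2}}{\mu \cdot 2^{n}} \;=\; \frac{\nu }{128} \cdot \frac{\nu }{\mu \cdot 2^{n}},$$

and since \(\mu \) is negligible while \(\nu \) is non-negligible, \(\nu ^2/\mu \) exceeds every polynomial in \(n\) (for infinitely many \(n\)); in particular the success probability is \(\omega (\text {poly}(n)/2^n)\), as anticipated in the discussion following Eq. (3.6), and it is a non-negligible multiple of \(\nu /(\mu \cdot 2^n) = 1/T\) for \(T = 2^n \cdot \mu /\nu \), matching the choice of \(T\) in the theorem statement.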

4 Security of Fiat-Shamir for Multi-round Proofs

In this section we show a secure instantiation of the Fiat-Shamir methodology for transforming any constant-round interactive proof into a 2-round computationally-sound argument. We assume for the sake of simplicity, and without loss of generality, that the verifier always sends the first message, and thus consider interactive protocols with an even number of rounds. Namely, for any constant \(c \ge 2\), we consider a 2c-round interactive proof \(\varPi = (P, V)\). We assume without loss of generality that all of the prover's messages are of the same length, and denote this length by n (i.e. \(\forall i, \alpha _i \in \{0,1\}^n\)). Similarly, we assume without loss of generality that all of the verifier's messages are of the same length, and denote this length by k (i.e. \(\forall i, \beta _i \in \{0,1\}^k\)). We assume without loss of generality that \(k\le n\). All of these assumptions are made only for simplicity of notation, and can easily be achieved by padding.

For every \(i\in [c-1]\), let \(\{\mathcal{F}^{(i)}_n\}_{n \in \mathbb {N}}\) be an ensemble of hash functions, such that for every \(n\in \mathbb {N}\) and for every \(f^{(i)}\in \mathcal{F}^{(i)}_n\),

$$ f^{(i)}:\{0,1\}^{i\cdot (n+k)}\rightarrow \{0,1\}^{k}. $$

We assume without loss of generality that there exists a polynomial p such that for every \(i\in [c-1]\) and for every \(n\in \mathbb {N}\),

$$ \mathcal{F}^{(i)}_{n}=\{f^{(i)}_s\}_{s\in \{0,1\}^{p(n)}}. $$

We define \(\varPi ^{\mathsf {FS}}\) to be the 2-round protocol obtained by applying the multi-round Fiat-Shamir transformation to \(\varPi \) using \((\mathsf{iO}(f^{(1)}_{s_1}),\ldots ,\mathsf{iO}(f^{(c-1)}_{s_{c-1}}))\), where \(f^{(i)}_{s_i} \leftarrow \mathcal{F}^{(i)}_n\) for every \(i \in [c-1]\). The security of \(\varPi ^{\mathsf {FS}}\) is shown in Theorem 6 below.
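To illustrate the transformation just described, here is a minimal sketch of how the verifier of \(\varPi ^{\mathsf {FS}}\) could check a transcript. It is only an illustration: the names are hypothetical, the hash inputs are assumed (consistently with the domain \(\{0,1\}^{i\cdot (n+k)}\)) to be the communication \((\beta _0,\alpha _1,\ldots ,\beta _{i-1},\alpha _i)\), and the verifier re-derives the \(\beta _i\)'s itself rather than checking the ones sent by the prover.

```python
def fs_verify(x, beta_0, alphas, hashed_fns, V):
    """Verify a Fiat-Shamir transcript for a 2c-round interactive proof.

    alphas     -- the prover messages alpha_1, ..., alpha_c (bytes of length n)
    hashed_fns -- callables playing the role of iO(f^(1)_{s_1}), ..., iO(f^(c-1)_{s_{c-1}})
    V          -- the original verifier's acceptance predicate on a full transcript
    """
    c = len(alphas)
    betas = [beta_0]
    for i in range(1, c):
        # beta_i is derived by hashing the communication so far:
        # (beta_0, alpha_1, beta_1, ..., beta_{i-1}, alpha_i), of length i*(n+k)
        prefix = b"".join(betas[j] + alphas[j] for j in range(i))
        betas.append(hashed_fns[i - 1](prefix))
    # assemble the full transcript (beta_0, alpha_1, beta_1, ..., alpha_c)
    transcript = [beta_0]
    for i in range(c):
        transcript.append(alphas[i])
        if i + 1 < c:
            transcript.append(betas[i + 1])
    return V(x, transcript) == 1
```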

Theorem 6

(Fiat-Shamir Transform for Multi-Round Interactive Proofs). Let \(\mu : \mathbb {N}\rightarrow [0,1]\) be a function. Assume the existence of a \(2^n\)-secure puncturable PRF family \(\mathcal{F}\), of a \(2^n\)-secure indistinguishability obfuscator \(\mathsf{iO}\), and of an input-hiding obfuscator for the class of multi-bit point functions \(\{\mathcal{I}_{n,k}\}\) that is \(T\)-secure for \(T= 2^{n} \cdot \mu /\nu \), for every non-negligible function \(\nu \).

Then for any constant \(c\in \mathbb {N}\) such that \(c\ge 2\), and any 2c-round interactive proof \(\varPi \) with soundness error \(\mu \), the resulting 2-round argument \(\varPi ^{\mathsf {FS}}\), obtained by applying the multi-round Fiat-Shamir transformation to \(\varPi \) with the function family \(\mathsf{iO}(\mathcal F)\), is secure.

Proof

The proof is by induction on \(c\in \mathbb {N}\), for \(c\ge 2\). The base case \(c=2\) follows immediately from Theorem 4. Suppose the theorem statement holds for all constants smaller than c (i.e., for interactive proofs with fewer than 2c rounds); we prove that it holds for c.

To this end, fix any 2c-round interactive proof \(\varPi \) for proving membership in a language L. Suppose for the sake of contradiction that \(\varPi ^{\mathsf {FS}}\) is not secure. Namely, there exists a poly-size cheating prover \(P^*\) and there exists \(x^*\notin L\) such that \(P^*\) succeeds in convincing the verifier of \(\varPi ^{\mathsf {FS}}\) that \(x^*\in L\) with non-negligible probability. We assume without loss of generality that \(P^*\) is deterministic.

Consider the following protocol \(\varPsi \) for proving membership in L, which consists of \(2c-2\) rounds: In the first round, the verifier chooses the first message that it would have sent in \(\varPi \), which we denote by \(\beta _0\). In addition, it chooses a random seed \(s_1\leftarrow \{0,1\}^{p(n)}\), and sends to the prover the pair \((\beta _0,\mathsf{iO}(f^{(1)}_{s_1}))\). Then, the prover chooses \((\alpha _1,\beta _1,\alpha _2)\) such that \(\beta _1=f^{(1)}_{s_1}(\alpha _1)\), where \(\alpha _1\) and \(\alpha _2\) are computed as in \(\varPi \). It sends \((\alpha _1,\beta _1,\alpha _2)\) to the verifier. Then the prover and verifier continue to execute the protocol \(\varPi \) interactively, conditioned on \((\beta _0,\alpha _1,\beta _1,\alpha _2)\). Finally, the verifier accepts if and only if the verifier of \(\varPi \) would have accepted the resulting transcript and \(\beta _1=f^{(1)}_{s_1}(\alpha _1)\).

Consider the protocol \(\varPsi _{P^*}\), in which we fix the first message from the prover in \(\varPsi \) to be the message \((\alpha _1,\beta _1,\alpha _2)\) generated by \(P^*\) in \(\varPi ^{\mathsf {FS}}\). If \(\varPsi _{P^*}\) is a sound proof then, by our induction hypothesis, \(({\varPsi _{P^*}})^{\mathsf {FS}}\) is sound. However, note that \(P^*\) can be trivially converted into a cheating prover that breaks the soundness of \(({\varPsi _{P^*}})^{\mathsf {FS}}\), contradicting our induction hypothesis that the Fiat-Shamir transformation is sound for interactive proofs with \(2(c-1)\) rounds (with the function family \(\mathsf{iO}(\mathcal F)\)). Thus, it must be the case that \({\varPsi _{P^*}}\) is not a sound proof. Namely, there exists a (possibly inefficient) cheating prover \(P^{**}\), an element \(x^*\notin L\), and a polynomial q, such that \(P^{**}\) convinces the verifier of \(\varPsi _{P^*}\) to accept \(x^*\) with probability \(\ge 1/q(\kappa )\) for infinitely many \(\kappa \in \mathbb {N}\).

Consider the 4-round protocol \(\varPhi \), which consists of the first 4 rounds of \(\varPi \), denoted by \((\beta _0,\alpha _1,\beta _1,\alpha _2)\). Given a transcript \((\beta _0,\alpha _1,\beta _1,\alpha _2)\), the verifier of \(\varPhi \) accepts if and only if there exists a strategy of the (cheating) prover of \(\varPi \) that causes the verifier of \(\varPi \) to accept with probability \(\ge 1/q(\kappa )\) conditioned on the first 4 rounds of \(\varPi \) being \((\beta _0,\alpha _1,\beta _1,\alpha _2)\). Note that the verifier of \(\varPhi \) runs in time \(\text {poly}(2^{c(n+k)})=2^{O(n)}\) (since c is a constant and \(k\le n\)). The statistical soundness of \(\varPi \) implies that \(\varPhi \) is also statistically sound. Note however that \(\varPhi ^{\mathsf {FS}}\) is not computationally sound. To see this, consider a poly-size cheating prover for \(\varPhi ^{\mathsf {FS}}\) that sends the message \((\alpha _1,\beta _1,\alpha _2)\) that \(P^{*}\) sends in \(\varPi ^{\mathsf {FS}}\). By the fact that \(\varPsi _{P^*}\) is not sound (since \(P^{**}\) breaks its soundness), the verifier of \(\varPhi ^{\mathsf {FS}}\) will accept \(x^* \notin L\) with non-negligible probability. This is in contradiction to Theorem 5 (here we use the fact that Theorem 5 holds even for verifiers running in time \(2^{O(n)}\)).