Abstract
RSA Full Domain Hash (RSAFDH) is a digital signature scheme that is secure against chosen message attacks in the random oracle model. The best known security reduction from the RSA assumption is nontight, i.e., it loses a factor of \(q_s\), where \(q_s\) is the number of signature queries made by the adversary. It was furthermore proven by Coron (Advances in cryptology—EUROCRYPT 2002, Lecture notes in computer science, vol 2332. Springer, Berlin, pp 272–287, 2002) that a security loss of \(q_s\) is optimal and cannot possibly be improved. In this work, we uncover a subtle flaw in Coron’s impossibility result. Concretely, we show that it only holds if the underlying trapdoor permutation is certified. Since it is well known that the RSA trapdoor permutation is (for all practical parameters) not certified, this renders Coron’s impossibility result moot for RSAFDH. Motivated by this, we revisit the question whether there is a tight security proof for RSAFDH. Concretely, we give a new tight security reduction from a stronger assumption, the \(\varPhi \)Hiding assumption introduced by Cachin et al. (Advances in Cryptology—EUROCRYPT’99. Lecture notes in computer science, vol 1592. Springer, Berlin, pp 402–414, 1999). This justifies the choice of smaller parameters in RSAFDH, as commonly used in practice. All of our results (positive and negative) extend to the probabilistic signature scheme PSS (with message recovery).
Introduction
Among all digital signatures schemes based on the RSA problem, arguably among the most important ones is RSA Full Domain Hash (RSAFDH) by Bellare and Rogaway [4]. It is extensively used in a wide variety of applications and serves as the basis of several existing standards such as PKCS #1 [29]. It has been demonstrated by means of a security reduction that, in the random oracle model [3], breaking the security of RSAFDH (in the sense of existential unforgeability against chosen message attacks) is asymptotically at least as hard as inverting the RSA function.
The seminal work by Bellare and Rogaway introduced the concept of concrete security [4] and highlights the importance of considering the tightness of a security reduction. A security reduction is tight if an adversary breaking the scheme yields another adversary breaking the underlying hardness assumption with roughly the same success probability and running time. The current state of RSAFDH is as follows. Coron’s reduction [12] (which improves on earlier results by Bellare and Rogaway [4]) bounds the probability \(\varepsilon \) of breaking RSAFDH in time t by \(\varepsilon ' \cdot q_s\), where \(\varepsilon '\) is the probability of inverting RSA in time \(t' \approx t\) and \(q_s\) is the number of signature queries by the forger. In other words, the security reduction for RSAFDH is loose (it loses a factor of \(q_s\)), which can have great negative impact on the practical parameter choices of the scheme. As a numerical example, for 80 bits of security and assuming that an adversary can make up to \(q_s=2^{30}\) signature queries [4], one should use a large enough RSA modulus N such that inverting the RSA function cannot be done in fewer than \(2^{110} = 2^{30} \cdot 2^{80}\) operations. Concretely, using the recommended key sizes from [31], this leads to a modulus N of about 2432 bits, compared to 1248 bits if RSAFDH had a tight reduction. We further refer to [23] for a recent discussion on the practical impact of nontight security reductions in cryptography.
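The parameter arithmetic in this numerical example can be checked directly. A minimal sketch (the 2432- and 1248-bit modulus figures are the key-size recommendations quoted from [31], not derived here):

```python
# Security-level arithmetic for a reduction losing a factor q_s: the
# underlying problem must be harder by exactly that factor.
security_bits = 80                        # target security level
q_s = 2 ** 30                             # assumed bound on signature queries

required_ops = 2 ** security_bits * q_s   # hardness needed under a q_s-loss
assert required_ops == 2 ** 110           # matches the figure in the text
```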
It is an interesting question of great practical impact whether or not there is a tight security reduction for general FDH signatures (based on any trapdoor permutation TDP) and, in particular, for RSAFDH. Unfortunately, this question was already answered in the negative exactly 10 years ago by Coron [13, 14], who showed that the above nontight security reduction is essentially optimal. That is, every security reduction from inverting the TDP (i.e., RSA in the case of RSAFDH) to breaking FDH signatures will inevitably lose a factor of \(q_s\). Consequently, for RSAFDH a large RSA modulus seems unavoidable to obtain a meaningful security proof.
An Overview of Our Results
Revisiting Coron’s Impossibility Result. We uncover a gap in Coron’s result about the impossibility of a tight security reduction for FDH signatures [14]. As acknowledged by the author of [14], his impossibility result only holds if the underlying trapdoor permutation (i.e., RSA in the case of RSAFDH) is a certified trapdoor permutation. A trapdoor permutation is certified [6, 25] if one can publicly verify that it actually defines a permutation. However, in the case of the RSA trapdoor permutation, it is only known to be certified if the exponent e is a prime number larger than \(N^{1/4}\) [21], and therefore, the impossibility result does not apply to all instances of RSAFDH. In particular, it is common practice to use either \(e=3\) or \(e=2^{16}+1\), both of which are less than \(N^{1/4}\).
A Tight Security Reduction for FDH Signatures. In light of the above, we revisit the question whether there exists a tight security reduction for FDH signatures. We are not able to give such a tight security reduction from the assumption that the TDP is oneway; instead, we give one from a stronger (yet still noninteractive) assumption, namely that the TDP is lossy (in the sense of Peikert and Waters [28]). Our main result (Theorem 3) shows that there is a tight security reduction from the lossiness of the TDP to breaking security of FDH, in the random oracle model. Our results also extend to the probabilistic signature scheme (PSS) (with message recovery) [4]. We obtain a tight reduction for TDPPSS with arbitrary (possibly zero) size random seed from the assumption that the TDP is lossy.
Applications to RSAFDH and RSAPSS. Recently, Kiltz et al. [22] showed that the RSA trapdoor permutation is lossy under the \(\varPhi \)Hiding Assumption. The \(\varPhi \)Hiding Assumption was introduced by Cachin et al. [10] and it states that, roughly, (N, e) with \(\gcd (\varphi (N), e)=1\) and \(e < N^{1/4}\) is computationally indistinguishable from \((N',e')\) with \(e' \mid \varphi (N')\). (Here, \(\varphi (N)\) is Euler’s totient function.) This gives a tight security reduction for RSAFDH from the \(\varPhi \)Hiding Assumption. We remark that the \(\varPhi \)Hiding Assumption (or, more generally, the assumption that RSA is lossy) is a stronger assumption than the assumption that RSA is oneway. However, it dates back to 1999 [10] and has ever since been used in a number of cryptographic applications (e.g., [9, 15, 17, 19, 22, 26]). It has been cryptanalyzed (e.g., [9, 10, 30]), and for the parameters of interest there is no known algorithm that breaks it without first factoring the modulus \(N=pq\). The common interpretation is that the \(\varPhi \)Hiding Assumption can in practice be viewed as being as hard as factoring and hence gives a theoretical justification as to why RSAFDH with a small modulus N is secure in practice.
We also obtain a tight reduction for RSAPSS (with message recovery) from the \(\varPhi \)Hiding Assumption for arbitrary (possibly zero) size random seed. Assuming again that the \(\varPhi \)Hiding Assumption is as hard as factoring, we get a signature scheme (with message recovery) with only 160 bits of overhead for 80 bits security.
Full Domain Hash and Coron’s Impossibility Result
Recall that an FDH signature on a message m is \(\sigma = f^{-1}(H(m))\), where f is the public description of the TDP and H is a hash function modeled as a random oracle.
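As a concrete illustration of the scheme, the following toy sketch instantiates FDH with textbook RSA over small fixed primes, using SHA-256 reduced mod N as a stand-in for a full-domain hash. All parameters are illustrative only; real instantiations require full-size moduli and a hash that covers the whole domain.

```python
import hashlib

# Toy RSA parameters (illustrative only).
p, q_ = 1009, 1013
N = p * q_
phi = (p - 1) * (q_ - 1)
e = 17                               # gcd(e, phi) = 1, so f is a permutation
d = pow(e, -1, phi)                  # trapdoor: d = e^{-1} mod phi(N)

def H(m: bytes) -> int:
    """Toy 'full domain' hash into Z_N (SHA-256 reduced mod N)."""
    h = int.from_bytes(hashlib.sha256(m).digest(), "big") % N
    return h if h != 0 else 1

def sign(m: bytes) -> int:
    return pow(H(m), d, N)           # sigma = f^{-1}(H(m))

def verify(m: bytes, sigma: int) -> bool:
    return pow(sigma, e, N) == H(m)  # accept iff f(sigma) = H(m)

assert verify(b"hello", sign(b"hello"))
```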
A reduction \(\mathcal {R}\) that reduces inverting the TDP to breaking FDH inputs a challenge instance \((f, y=f(x))\) of the TDP and generates a public key for FDH that is passed to a forger \(\mathcal {F}\) attacking FDH signatures. Next, \(\mathcal {F}\) makes a number of signature queries (which are answered by \(\mathcal {R}\)) and finally outputs a forgery. Finally, \(\mathcal {R}\) uses the gathered information to invert the TDP, i.e., to compute \(x=f^{-1}(y)\). Reduction \(\mathcal {R}\) is tight if the success probability of \(\mathcal {R}\) is roughly the same as the one of \(\mathcal {F}\).
Coron’s impossibility result shows that any reduction \(\mathcal {R}\) from inverting the TDP f to breaking FDH which is tight (i.e., does not lose more than a factor of \(q_s\)) can be turned into an efficient inverting algorithm \(\mathcal {I}\) for the TDP f (that works without forger \(\mathcal {F}\)). In a nutshell, the argument is as follows. Given an instance of the TDP, the inverter \(\mathcal {I}\) runs reduction \(\mathcal {R}\), providing it with a simulated forger \(\mathcal {F}\) by making a number of hash queries and then signature queries to \(\mathcal {R}\). Next, \(\mathcal {I}\) rewinds reduction \(\mathcal {R}\) to an earlier state (after the hash queries) and uses one of the message/signature pairs [say \((m^*,\sigma ^*)\)] obtained before the rewind as its forgery. To \(\mathcal {R}\), this counts as a valid forgery since, after the rewind, \(\mathcal {I}\) did not make a signing query on \(m^*\). The central argument is as follows: Consider a real forger that is provided with the same view as the simulated forger and outputs a forgery \(\sigma '\) on the same message \(m^*\). FDH has unique signatures^{Footnote 1}, and hence, we can argue that \(\sigma ^*\) (provided by \(\mathcal {R}\) before the rewind) equals \(\sigma '\) (provided by a real forger). Hence, \(\mathcal {R}\) is convinced that it interacts with a real forger and outputs a solution to the TDP instance.
Consequently, from \(\mathcal {R}\), we were able to construct an algorithm \(\mathcal {I}\) that inverts the TDP without using any forger. It is shown by a combinatorial argument that the success probability of \(\mathcal {I}\) is nonnegative as long as the reduction \(\mathcal {R}\) does not lose more than a factor of \(q_s\), the number of signature queries.
The Gap in the Proof. During the proof of [13, Th. 5] it is silently assumed that the public key \( pk \) generated by reduction \(\mathcal {R}\) is a real public key, honestly generated by the keygeneration algorithm of FDH, i.e., it contains f which describes a permutation.^{Footnote 2} However, that does not necessarily hold, and the public key generated by \(\mathcal {R}\) could be anything. In fact, it is possible that the public key generated by the reduction \(\mathcal {R}\) is fake in the sense that FDH signatures are no longer unique relative to this fake \( pk \). Once signatures are no longer unique (with respect to the fake \( pk \)), it is possible that a real forger outputs a forgery \(\sigma '\) on \(m^*\) which is different from \(\sigma ^*\), the one provided by reduction \(\mathcal {R}\) before the rewind. In fact, a forgery \(\sigma ' \ne \sigma ^*\) may no longer be useful for \(\mathcal {R}\) in order to solve the RSA instance after the rewind, and hence, the impossibility result breaks down.
In Sect. 3, we restate (and prove) a corrected version of Coron’s impossibility result. Fortunately, it turns out that Coron’s argument can be salvaged by requiring the trapdoor permutation in FDH to be certified. Note that in case of a certified trapdoor permutation it is not possible for the reduction \(\mathcal {R}\) to generate a fake public key without detection, and hence, signatures are guaranteed to be unique.
A Tight Security Reduction for FDH Signatures
It is precisely the nonuniqueness of FDH signatures with respect to a fake public key that will allow us to prove a tight security reduction from the lossiness of the TDP [i.e., the \(\varPhi \)Hiding Assumption in the case of RSAFDH]. Our proof is surprisingly simple and is sketched as follows. In a first step, we substitute the trapdoor permutation in the public key with a lossy one. We use the programmability of the random oracle to show that this remains unnoticed by the adversary, assuming lossiness of the TDP. Note that once the TDP is lossy, FDH signatures (i.e., \(\sigma \) with \(f(\sigma ) = H(m)\)) are no longer unique since, by the definition of lossiness, each H(m) has many preimages under a lossy f. In the second step, we show that any successful forger will be able to find a collision in the TDP, i.e., two values \(x \ne \hat{x}\) with \(f(x) = f(\hat{x})\), which is again hard assuming lossiness. The full proof is given in Sect. 3.
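The nonuniqueness phenomenon can be illustrated on a toy lossy RSA instance (assumed illustrative parameters: small primes p, q and an exponent e dividing \(p-1\), which makes \(x \mapsto x^e \bmod N\) a regular e-to-1 map on \({\mathbb {Z}}_N^*\); this is only a sketch of the counting, not the computational statement used in the proof):

```python
from math import gcd

# Toy lossy RSA: e divides p - 1, so x -> x^e mod N is e-to-1 on Z_N^*.
p, q_ = 61, 53
N = p * q_
e = 3                                   # 3 | p - 1 = 60: a lossy exponent

units = [x for x in range(1, N) if gcd(x, N) == 1]
image = {pow(x, e, N) for x in units}
assert len(units) == e * len(image)     # image is exactly e times smaller

# Find two distinct preimages of the same value: with respect to a lossy
# public key, "signatures" are no longer unique.
seen = {}
for x in units:
    y = pow(x, e, N)
    if y in seen:
        x_hat = seen[y]
        break
    seen[y] = x

assert x != x_hat and pow(x, e, N) == pow(x_hat, e, N)
```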
For the important case of RSAFDH, this gives a tight security reduction from the \(\varPhi \)Hiding Assumption, in the random oracle model. The \(\varPhi \)Hiding Assumption is believed to be true for sufficiently small public RSA exponents \(e < N^{1/4 - \varepsilon }\) [10]. This in particular includes the important low-exponent cases \(e=3\) and \(e=2^{16}+1\), which allow efficient verification of RSAFDH signatures.^{Footnote 3}
It is interesting to remark that, at a conceptual level, FDH is the first signature scheme with unique signatures and a tight security reduction (from a noninteractive assumption).^{Footnote 4} Previously, only tight security reductions for randomized signatures were known (e.g., [4, 7, 16, 18]).
The Probabilistic Signature Scheme PSS
Our observations can also be applied to the probabilistic signature scheme (PSS) [4], which is contained in IEEE P1363a [20], ISO/IEC 9796-2, and PKCS#1 v2.1 [29]. Improving an earlier result by Bellare and Rogaway [4], Coron proved that, if at least \(\log _2(q_s)\) bits of random salt are used in PSS, then there is a tight security reduction from the onewayness of the TDP [13, 14]. Furthermore, Coron also proved that \(\log _2(q_s)\) bits of random salt are essentially optimal for a tight security reduction. Our results for PSS are similar to the ones for FDH. We note that Coron’s impossibility proof for PSS contains the same gap as the one for FDH, i.e., it is only correct if the underlying trapdoor permutation is certified. We show a tight security proof from lossiness for the security of PSS, with random salt of arbitrary (possibly zero) length.
Our results also apply to PSS with message recovery (PSSR), where the signature encodes (parts of) the message. We show that PSSR with arbitrarylength salt is tightly secure assuming lossiness of the TDP. Concretely, our security reduction shows that PSSR with zerolength salt has an overhead (signature length minus message length) of only 2k bits, where k is the security parameter. Interestingly, this matches the overhead of BLS short signatures [8] over bilinear maps.
Related Work
There is a large body of work on FDH and tightly secure signature schemes; we summarize the parts most relevant to this work.
Tight Security Reduction for RSAFDH from an Interactive Assumption. Koblitz and Menezes [24, Sec. 3] show a tight reduction from an interactive assumption they call the RSA1 assumption (which is related to the onemoreRSA assumption RSACTI [2]): Given N, e, and a set of \(q_s+q_h\) values \(y_i\) chosen uniformly from \({\mathbb {Z}}_N\), the adversary is permitted to adaptively select up to \(q_s\) of those \(y_i\) for which he is given solutions \(x_i\) to \(x_i^e = y_i \bmod N\). The adversary wins if he produces a solution \(x_i^e = y_i \bmod N\) for one of the remaining \(y_i\). Even though the RSA1 assumption looks plausible, it is an interactive assumption and almost a tautology for expressing that RSAFDH signatures are secure in the random oracle model. In fact, our tight security proof for RSAFDH also serves to show a tight reduction from \(\varPhi \)Hiding to RSA1.
Nonunique Signatures with Tight Reductions. There exist several previous works that build digital signature schemes with a tight security reduction. We stress that all of them have, in contrast to FDH, a randomized signing algorithm, i.e., signatures are not unique. Goh et al. [18] show that adding a single bit of random salt to the hash function of FDH allows one to prove a tight security reduction from the RSA assumption. Bernstein [7] shows a tight security reduction for (a certain randomized variant of) the Rabin–Williams signature scheme from the factoring assumption. More generally, Gentry et al. [16] introduce the concept of preimage samplable trapdoor functions, which are noninjective trapdoor functions with an efficient preimage sampling algorithm. They further propose a probabilistic variant of FDH and prove it tightly secure. In fact, their proof technique is reminiscent of the second step in our proof of FDH from lossiness, but FDH cannot be viewed as an instance of their probabilistic FDH variant.
RSAOAEP. Recently, [22] used the \(\varPhi \)Hiding Assumption to show that the RSA function is lossy and used this fact to prove positive instantiability results of RSAOAEP in the standard model.
Open Problems
On the one hand, the \(\varPhi \)Hiding Assumption is believed to be true for public exponents \(e \le N^{1/4-\varepsilon }\), and hence, for these values, we get a tight security reduction for RSAFDH. On the other hand, Coron’s impossibility result holds for prime e with \(e > N^{1/4}\). This leaves the interesting open problem whether for public exponents \(N^{1/4-\varepsilon } \le e \le N^{1/4}\) there exists a tight security reduction for RSAFDH (under a reasonable assumption).
Definitions
Notations and Conventions
We denote our security parameter as k. For all \(n \in \mathbb {N}\), we denote by \(1^n\) the n-bit string of all ones. For any element x in a set S, we use \(x \in _{_R}S\) to indicate that we choose x uniformly at random from S. All algorithms may be randomized. For any algorithm A, we define \(x \leftarrow _{{\tiny {\$}}}A(a_1,\ldots ,a_n)\) as the execution of A with inputs \(a_1,\ldots ,a_n\) and fresh randomness, and then assigning the output to x. We denote the set of prime numbers by \(\mathbb {P}\) and the subset of k-bit primes by \(\mathbb {P}[k]\). Similarly, we denote the set of integers by \(\mathbb {Z}\) and the subset of k-bit integers by \(\mathbb {Z}[k]\). We denote by \(\mathbb {Z}_N^*\) the multiplicative group modulo \(N \in \mathbb {Z}\).
Games
A game (such as in Fig. 1) is defined as a collection of procedures, as per the model of [5]. There is an Initialize procedure and a Finalize procedure, as well as a procedure for each separate oracle. Executing a game \(\mathsf {G}\) with an adversary \(\mathcal {A}\) means running the adversary and using the procedures to answer any oracle queries. The adversary must first make one query to Initialize. Then, it may query the oracles as many times as allowed by the definition of the game. After this, the adversary must make one query to Finalize, which is the final procedure call of the game. The output of Finalize is denoted by \(\mathsf {G}^{\mathcal {A}}\). In particular, we write \(\mathsf {G}^{\mathcal {A}} \Rightarrow 0\) and \(\mathsf {G}^{\mathcal {A}} \Rightarrow 1\) when Finalize outputs 0 or 1, respectively. Where the Finalize procedure simply returns the output of the adversary, we omit it. We use a strongly typed pseudocode with implicit initialization, meaning that all variables maintain their type throughout the execution of the games and are implicitly declared and initialized. Boolean flags are initialized to \(\mathsf {False}\), numerical types are initialized to 0, and sets are initialized to \(\emptyset \).
Signature Schemes
A digital signature is a message-dependent bit string \(\sigma \) which can only be generated by the signer, using a secret signing key \( sk \), and is transmitted with the message. The signature can then be verified by the receiver using a public verification key \( pk \). A digital signature scheme is defined as a triple of probabilistic algorithms \(\mathsf {SIG}= (\mathsf {KeyGen}, \mathsf {Sign}, \mathsf {Verify})\), which we describe below:

1.
\(\mathsf {KeyGen}\) takes as an input the unary representation of our security parameter (\(1^k\)) and outputs a signing key sk and verification key pk.

2.
\(\mathsf {Sign}\) takes as input a signing key sk and a message m, and outputs a signature \(\sigma \).

3.
\(\mathsf {Verify}\) is a deterministic algorithm, which on input of a public key and a messagesignature pair \((m,\sigma )\) outputs 1 (accept) or 0 (reject).
We say that \(\mathsf {SIG}\) is correct if for all public key and secret key pairs generated by \(\mathsf {KeyGen}\) and all messages m, we have \(\mathsf {Verify}( pk , (m, \mathsf {Sign}( sk ,m))) = 1\).
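As an illustration of the \((\mathsf {KeyGen}, \mathsf {Sign}, \mathsf {Verify})\) interface and the correctness condition, here is a minimal sketch using a Lamport one-time signature (our own illustrative choice for brevity; it is not a scheme discussed in this paper):

```python
import hashlib
import secrets

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    """KeyGen: 256 random secret pairs, one per bit of the message digest."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, m: bytes):
    """Sign: reveal the secret matching each bit of H(m)."""
    bits = int.from_bytes(_h(m), "big")
    return [sk[i][(bits >> i) & 1] for i in range(256)]

def verify(pk, m: bytes, sig) -> bool:
    """Verify: each revealed secret must hash to the committed value."""
    bits = int.from_bytes(_h(m), "big")
    return all(_h(sig[i]) == pk[i][(bits >> i) & 1] for i in range(256))

# Correctness: Verify(pk, (m, Sign(sk, m))) = 1 for all m.
sk, pk = keygen()
assert verify(pk, b"example message", sign(sk, b"example message"))
```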
We now define \(\mathsf {UF}\hbox {}\mathsf {CMA}\) (unforgeability under chosen message attacks) assuming the signature scheme \(\mathsf {SIG}\) contains a hash function \(h : \{0,1\}^* \rightarrow \mathsf {Dom}\) which is modeled as a random oracle.
We say a signature scheme \(\mathsf {SIG}\) is \((t,\varepsilon ,q_h, q_s)\)\(\mathsf {UF}\hbox {}\mathsf {CMA}\) secure in the random oracle model, if all adversaries \(\mathcal {A}\) running in time up to t, making at most \(q_h\) hash and \(q_s\) signing oracle queries, have an advantage of at most \(\varepsilon \), where the advantage of \(\mathcal {A}\) is defined as \(\Pr [\mathsf {UF}\hbox {}\mathsf {CMA}^{\mathcal {A}} \Rightarrow 1]\) (cf. Fig. 1).
Here, we assume wlog that \(\mathcal {A}\) always makes a query to \(\mathsf {Hash}(m)\) before calling \(\mathsf {Sign}(m)\) or \(\mathsf {Finalize}(m, \cdot )\). In particular, we always have \(q_h > 0\), as at least one hash query must be made before calling \(\mathsf {Finalize}\). Furthermore, we have \(q_h \ge q_s + 1\), as the adversary must call \(\mathsf {Finalize}(\cdot ,\cdot )\) with a value not previously submitted to \(\mathsf {Sign}(\cdot )\).
Trapdoor Permutations
We recall the definition of trapdoor permutation families.
Definition 1
A family of trapdoor permutations \(\mathsf {TDP}=(\mathsf {Gen},\mathsf {Eval},\mathsf {Invert})\) consists of the following three polynomialtime algorithms.

1.
The probabilistic algorithm \(\mathsf {Gen}\), which on input \(1^k\) outputs a public description \( pub \) (which includes an efficiently sampleable domain \(\mathsf {Dom}_ pub \)) and a trapdoor \( td \).

2.
The deterministic algorithm \(\mathsf {Eval}\), which on input \( pub \) and \(x \in \mathsf {Dom}_ pub \), outputs \(y \in \mathsf {Dom}_ pub \). We write \(f(x) = \mathsf {Eval}( pub ,x)\).

3.
The deterministic algorithm \(\mathsf {Invert}\), which on input \( td \) and \(y \in \mathsf {Dom}_ pub \), outputs \(x \in \mathsf {Dom}_ pub \). We write \(f^{-1}(y) = \mathsf {Invert}( td ,y)\).
We require that for all \(k \in {\mathbb {N}}\) and all \(( pub , td )\) output by \(\mathsf {Gen}(1^k), f(\cdot )=\mathsf {Eval}( pub ,\cdot )\) defines a permutation over \(\mathsf {Dom}_ pub \) and that for all \(x \in \mathsf {Dom}_ pub , \mathsf {Invert}( td ,\mathsf {Eval}( pub ,x))=x\).
We want to point out that \(f(\cdot )=\mathsf {Eval}( pub ,\cdot )\) is only required to be a permutation for correctly generated \( pub \), but not every bit string \( pub \) necessarily yields a permutation. A family of trapdoor permutations \(\mathsf {TDP}\) is said to be certified [6] if the fact that it is a permutation can be verified in polynomial time given \( pub \).
Definition 2
A family of trapdoor permutations \(\mathsf {TDP}\) is called certified if there exists a deterministic polynomialtime algorithm \(\mathsf {Certify}\) that, on input of \(1^k\) and an arbitrary (polynomially bounded) bit string \( pub \) (potentially not generated by \(\mathsf {Gen}\)), returns 1 iff \(f(\cdot )=\mathsf {Eval}( pub ,\cdot )\) defines a permutation over \(\mathsf {Dom}_ pub \).
We now recall the security notions for trapdoor permutations. A trapdoor permutation \(\mathsf {TDP}\) is hard to invert (oneway) if, given \( pub \) and f(x) for uniform \(x \in \mathsf {Dom}_ pub \), it is hard to compute x. More formally, it is \((t,\varepsilon )\)-hard to invert if for all adversaries \(\mathcal {A}\) running in time \(t, \Pr [\mathcal {A}( pub ,\mathsf {Eval}( pub ,x)) = x] \le \varepsilon \), where the probability is taken over \(( pub , td ) \leftarrow \mathsf {Gen}(1^k), x \in _{_R}\mathsf {Dom}_ pub \) and the random coin tosses of \(\mathcal {A}\). The following security notion, lossiness [28], is a stronger requirement than onewayness.
Definition 3
Let \(l \ge 2\). A trapdoor permutation \(\mathsf {TDP}\) is a \((l, t,\varepsilon )\) lossy trapdoor permutation if the following two conditions hold.^{Footnote 5}

1.
There exists a probabilistic polynomial-time algorithm \(\mathsf {LossyGen}\), which on input \(1^k\) outputs \( pub '\) such that the range of \(f'(\cdot ) := \mathsf {Eval}( pub ', \cdot )\) is at least a factor of l smaller than the domain \(\mathsf {Dom}_{ pub '}\): \(|\mathsf {Dom}_{ pub '}|/|f'(\mathsf {Dom}_{ pub '})| \ge l\). (Note that we measure the lossiness in its absolute value l, i.e., the function has \(\lceil \log _2 l \rceil \) bits of lossiness.)

2.
All distinguishers \(\mathcal {D}\) running in time at most t have an advantage \(\mathbf{Adv }^{\mathsf {L}}_\mathsf {TDP}(\mathcal {D})\) of at most \(\varepsilon \) (cf. Fig. 2), where:
$$\begin{aligned} \mathbf{Adv }^{\mathsf {L}}_\mathsf {TDP}(\mathcal {D}) = \Pr [\mathsf {L}_1^\mathcal {D}\Rightarrow 1] - \Pr [\mathsf {L}_0^\mathcal {D}\Rightarrow 1] . \end{aligned}$$
We say \(\mathsf {TDP}\) is regular \((l, t,\varepsilon )\) lossy if \(\mathsf {TDP}\) is \((l, t,\varepsilon )\) lossy and all functions \(f'(\cdot )=\mathsf {Eval}( pub ',\cdot )\) generated by \(\mathsf {LossyGen}\) are l-to-1 on \(\mathsf {Dom}_{ pub '}\).
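A toy sketch contrasting an injective and a regular lossy instantiation of the RSA map, by exhaustive counting on a small modulus (illustrative only: real lossiness is about the computational indistinguishability of the two public descriptions, which exhaustive counting ignores):

```python
from math import gcd

# Toy modulus (illustrative parameters).
p, q_ = 61, 53
N = p * q_
units = [x for x in range(1, N) if gcd(x, N) == 1]

def image_size(e: int) -> int:
    """Size of the image {x^e mod N : x in Z_N^*}."""
    return len({pow(x, e, N) for x in units})

# Injective case (Gen): gcd(e, phi(N)) = 1, so the map is a permutation.
assert image_size(7) == len(units)

# Lossy case (LossyGen): e = 5 divides p - 1 = 60, so the map is
# regular 5-to-1, i.e., the image is exactly 5 times smaller.
assert len(units) == 5 * image_size(5)
```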
The RSA Trapdoor Permutation
We define the RSA trapdoor permutation \(\mathsf {RSA}= (\mathsf {RSAGen},\mathsf {RSAEval},\mathsf {RSAInv})\) as follows. The RSA instance generator \(\mathsf {RSAGen}(1^k)\) outputs \( pub =(N,e)\) and \( td =d\), where \(N = pq\) is the product of two random k/2-bit primes, e is chosen randomly from the set \(\{e : \gcd (e,\varphi (N))=1\}\), and \(d=e^{-1} \bmod \varphi (N)\). The domain is \(\mathsf {Dom}_ pub = {\mathbb {Z}}_N^*\). The evaluation algorithm \(\mathsf {RSAEval}( pub ,x)\) returns \(f(x) = x^e \bmod N\), and the inversion algorithm \(\mathsf {RSAInv}( td ,y)\) returns \(f^{-1}_{ pub }(y) = y^d \bmod N\). The standard assumption is that \(\mathsf {RSA}\) is hard to invert. We will review the (regular) lossiness of \(\mathsf {RSA}\) in Sect. 4.
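A minimal sketch of this triple, with fixed small primes in place of random k/2-bit ones (illustrative only):

```python
from math import gcd

def rsa_gen():
    """RSAGen: output pub = (N, e) and td = d (toy fixed primes)."""
    p, q = 1009, 1013                   # stand-ins for random k/2-bit primes
    N, phi = p * q, (p - 1) * (q - 1)
    e = 17
    assert gcd(e, phi) == 1             # e must be invertible mod phi(N)
    d = pow(e, -1, phi)                 # trapdoor: d = e^{-1} mod phi(N)
    return (N, e), d

def rsa_eval(pub, x):
    """RSAEval: f(x) = x^e mod N."""
    N, e = pub
    return pow(x, e, N)

def rsa_inv(pub, td, y):
    """RSAInv: f^{-1}(y) = y^d mod N (N taken from pub for convenience)."""
    return pow(y, td, pub[0])

pub, td = rsa_gen()
x = 123456 % pub[0]
assert rsa_inv(pub, td, rsa_eval(pub, x)) == x
```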
Full Domain Hash Signatures
The Scheme
For a family of trapdoor permutations \(\mathsf {TDP}=(\mathsf {Gen},\mathsf {Eval},\mathsf {Invert})\), we define the Full Domain Hash (\(\mathsf {TDP}\hbox {}\mathsf {FDH}\)) signature scheme [4] in Fig. 3.
Classical Security Results of TDPFDH
The original reduction by Bellare and Rogaway from onewayness of \(\mathsf {TDP}\) loses a factor of \((q_h + q_s)\) [4], which was later improved by Coron to a factor of \(q_s\) [12] for the case of the \(\mathsf {RSA}\) trapdoor permutation.
Theorem 1
(Coron [12]) Assume the trapdoor permutation \(\mathsf {RSA}\) is \((t',\varepsilon ')\)hard to invert. Then, for any \((q_h,q_s), \mathsf {RSA}\hbox {}\mathsf {FDH}\) is \((t, \varepsilon , q_h, q_s)\)\(\mathsf {UF}\hbox {}\mathsf {CMA}\) secure in the Random Oracle Model, where
A Corrected Version of Coron’s Optimality Result
Coron showed that a security loss of a factor \(q_s\) (times some constant) is essentially optimal for \(\mathsf {TDP}\hbox {}\mathsf {FDH}\) [13, 14]. To state a corrected version of Coron’s impossibility result, we first recall the following definitions [13].
Definition 4
We say a reduction \(\mathcal {R}\) \((t_{\mathcal {F}}, t_{\mathcal {R}}, q_h, q_s, \varepsilon _{\mathcal {F}}, \varepsilon _{\mathcal {R}})\)-reduces inverting a trapdoor permutation to breaking \(\mathsf {SIG}= (\mathsf {KeyGen}, \mathsf {Sign}, \mathsf {Verify})\) if, on input a challenge \(( pub , y)\) and after running, in a blackbox manner, a forger \(\mathcal {F}\) that \((t_{\mathcal {F}},q_h,q_s,\varepsilon _{\mathcal {F}})\)-breaks \(\mathsf {SIG}\), the reduction outputs a preimage of y with probability at least \(\varepsilon _{\mathcal {R}}\) and running time at most \(t_{\mathcal {R}}\).
We now state the corrected version of Coron’s impossibility result which we prove in Sect. 5.
Theorem 2
Suppose \(\mathsf {TDP}\) is a certified trapdoor permutation. Let \(\mathcal {R}\) be a reduction that \((t_{\mathcal {F}} , t_{\mathcal {R}},q_h,q_s,\varepsilon _{\mathcal {F}},\varepsilon _{\mathcal {R}})\)reduces breaking onewayness of \(\mathsf {TDP}\) to breaking \(\mathsf {UF}\hbox {}\mathsf {CMA}\) security of \(\mathsf {TDP}\hbox {}\mathsf {FDH}\). If \(\mathcal {R}\) runs the forger only once, then we can build an inverter \(\mathcal {I}\) which \((t_{\mathcal {I}},\varepsilon _{\mathcal {I}})\)breaks onewayness of \(\mathsf {TDP}\) with:
Hence, given a security reduction from onewayness to the unforgeability of \(\mathsf {TDP}\hbox {}\mathsf {FDH}\) which loses less than a factor of \(q_s\), one obtains an efficient inverter \(\mathcal {I}\) for \(\mathsf {TDP}\) (with nonnegative success probability \(\varepsilon _\mathcal {I}\)). It is worth noting that the success probability of a forger is defined by taking the probability over valid public keys \( pk \) only.
A Tight Security Proof for TDPFDH
The impossibility result of Theorem 2 only holds for \(\mathsf {TDP}\hbox {}\mathsf {FDH}\) if \(\mathsf {TDP}\) is a certified trapdoor permutation. However, if \(\mathsf {TDP}\) is not certified, this leaves room for a tight proof for \(\mathsf {TDP}\hbox {}\mathsf {FDH}\). We now state our main result, namely that \(\mathsf {TDP}\hbox {}\mathsf {FDH}\) is tightly secure assuming \(\mathsf {TDP}\) is regular lossy.
Theorem 3
Assume \(\mathsf {TDP}= (\mathsf {Gen}, \mathsf {Eval}, \mathsf {Invert})\) is a regular \((l, t',\varepsilon ')\)-lossy trapdoor permutation for \(l \ge 2\). Then, for any \((q_h,q_s), \mathsf {TDP}\hbox {}\mathsf {FDH}\) is \((t,\varepsilon ,q_h,q_s)\)\(\mathsf {UF}\hbox {}\mathsf {CMA}\) secure in the Random Oracle Model, where
$$\begin{aligned} \varepsilon \le \frac{2l-1}{l-1} \cdot \varepsilon ' , \quad t = t' - (q_h + q_s) \cdot T_\mathsf {TDP}, \end{aligned}$$
and \(T_\mathsf {TDP}\) is the time to evaluate \(\mathsf {TDP}\).
Proof
We prove our theorem using a series of games. The description of these games is found in Fig. 4. Let \(\mathcal {A}\) be an adversary that runs in time t against \(\mathsf {TDP}\hbox {}\mathsf {FDH}\) executed in the UFCMA experiment described in \(\mathsf {G}_0\) in Fig. 1 with \( \varepsilon =\Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1]\). Recall that we assume wlog that \(\mathcal {A}\) always makes a query to \(\mathsf {Hash}(m)\) before calling \(\mathsf {Sign}(m)\) or \(\mathsf {Finalize}(m, \cdot )\).
Lemma 1
\(\Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1] = \Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1]\).
Proof
In \(\mathsf {G}_0\), the hash function is modeled as a random oracle. In \(\mathsf {G}_1\), we modify the random oracle and the signing queries. On a query m, the random oracle now picks a random element \(\sigma _m \in \mathsf {Dom}_{ pub }\) and returns \(y_m = \mathsf {Eval}( pub , \sigma _m)\). We then modify the signing oracle to return this element \(\sigma _m\). Note that signing no longer requires the trapdoor \( td \). All our signatures verify due to the fact that \(\mathsf {Eval}( pub , \sigma _m) = y_m\) for all m; thus, our simulation of the signatures is correct. Since \(\mathsf {TDP}\) is a permutation, the distribution of our hash queries in \(\mathsf {G}_1\) is identical to the distribution in \(\mathsf {G}_0\). Thus, we have \(\Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1] = \Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1]\).\(\square \)
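The simulation in \(\mathsf {G}_1\) can be sketched as follows (toy RSA parameters of our own choosing; the table of values \(\sigma _m\) plays the role of the programmed random oracle, and the sign oracle answers without the trapdoor):

```python
import random

# Toy injective RSA instance (illustrative parameters).
p, q_ = 1009, 1013
N = p * q_
e = 17                                  # gcd(e, phi(N)) = 1: a permutation

sigma_table = {}                        # m -> sigma_m (programmed RO state)

def hash_oracle(m):
    """Hash(m) := Eval(pub, sigma_m) for a fresh random sigma_m."""
    if m not in sigma_table:
        # sampled from [1, N-1] for simplicity of the sketch
        sigma_table[m] = random.randrange(1, N)
    return pow(sigma_table[m], e, N)    # y_m = Eval(pub, sigma_m)

def sign_oracle(m):
    """Sign(m) := sigma_m; note that no trapdoor is used."""
    hash_oracle(m)                      # wlog Hash is queried before Sign
    return sigma_table[m]

# Every simulated signature verifies: Eval(pub, sigma_m) = Hash(m).
for m in (b"a", b"b", b"c"):
    s = sign_oracle(m)
    assert pow(s, e, N) == hash_oracle(m)
```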
Lemma 2
There exists a distinguisher \(\mathcal {D}_1\) against the lossiness of \(\mathsf {TDP}\), which runs in time \(t = t_{\mathcal {A}} + (q_h+q_s)\cdot T_\mathsf {TDP}\) and such that \(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1)\).
Proof
From \(\mathsf {G}_1\) to \(\mathsf {G}_2\), we change the key generation from a normal permutation to a lossy permutation; the oracles are identical in both games. We now build a distinguisher \(\mathcal {D}_1\) against the lossiness of \(\mathsf {TDP}\), using these games. The distinguisher runs \(\mathcal {A}\) and simulates the oracles \(\mathsf {Sign}(\cdot ), \mathsf {Hash}(\cdot )\) as described in games \(\mathsf {G}_1\) and \(\mathsf {G}_2\), for which it requires time \((q_h+q_s)\cdot T_\mathsf {TDP}\). Note that \(\mathcal {D}_1\) does not require the trapdoor \( td \) to simulate the oracles. After \(\mathcal {A}\) calls \(\mathsf {Finalize}\), \(\mathcal {D}_1\) returns the complement of the output of \(\mathsf {Finalize}\) (the output is flipped so that the advantage matches the sign convention of Definition 3). Thus, we can see that \(\Pr [\text {L}_0^{\mathcal {D}_1} \Rightarrow 1] = 1 - \Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1]\). Similarly, we have \(\Pr [\text {L}_1^{\mathcal {D}_1} \Rightarrow 1] = 1 - \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\). Hence, we have \(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_1} \Rightarrow 1] - \Pr [\text {L}_0^{\mathcal {D}_1} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1)\).\(\square \)
Lemma 3
\(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = \left( \frac{l-1}{l}\right) \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\).
Proof
In \(\mathsf {G}_3\), we introduce a new rule, which sets BAD to true if the forgery \(\sigma ^*\) provided by \(\mathcal {A}\) equals the simulated signature \(\sigma _{m^*}\) for the target message \(m^*\). If this is the case, the adversary loses the game, i.e., \(\mathsf {G}_3\) outputs 0. \(\sigma _{m^*}\) is independent of \(\mathcal {A}\)’s view and is uniformly distributed in the set of preimages of \(y_{m^*}\). Due to the l-regular lossiness of \(\mathsf {TDP}\), the probability of such a collision is exactly 1 / l. Thus, the BAD rule reduces the probability of the adversary winning the game by a factor of \((1 - \frac{1}{l})\), hence \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = (1 - \frac{1}{l})\Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \left( \frac{l-1}{l}\right) \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\). \(\square \)
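The 1 / l collision probability can be checked empirically on a toy l-regular lossy map. The sketch below uses \(x \mapsto x^3 \bmod 91\) (with 3 dividing both \(p-1\) and \(q-1\) for \(N = 7\cdot 13\)), which is 9-regular; all parameters and names are illustrative.

```python
# Empirical check of Lemma 3's collision bound: for an l-regular lossy map, an
# independent uniform preimage of y equals the simulated signature sigma_m with
# probability exactly 1/l. Here l = 9 for the toy lossy map x -> x^3 mod 91.
from math import gcd
from collections import defaultdict
import random

N, e = 7 * 13, 3                              # 3 | p-1 and 3 | q-1: a 9-to-1 map
preimages = defaultdict(list)
for x in range(1, N):
    if gcd(x, N) == 1:
        preimages[pow(x, e, N)].append(x)
l = len(next(iter(preimages.values())))       # regularity: l = 9 here

rng = random.Random(1)                        # fixed seed: deterministic demo
trials, hits = 20000, 0
ys = list(preimages)
for _ in range(trials):
    y = rng.choice(ys)
    sigma_m = rng.choice(preimages[y])        # simulated signature for y
    sigma_star = rng.choice(preimages[y])     # independent uniform preimage
    hits += (sigma_m == sigma_star)
assert abs(hits / trials - 1 / l) < 0.02      # empirical rate close to 1/l
```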
Lemma 4
There exists a distinguisher \(\mathcal {D}_2\) against the lossiness of \(\mathsf {TDP}\) which runs in time \(t = t_{\mathcal {A}} + (q_h+q_s)\cdot T_\mathsf {TDP}\) and satisfies \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2)\).
Proof
From \(\mathsf {G}_3\) to \(\mathsf {G}_4\), we change the key generation from a lossy permutation back to a normal permutation; the oracles are identical in both games. We now build a distinguisher \(\mathcal {D}_2\) against the lossiness of \(\mathsf {TDP}\) using these games. The distinguisher acts as the challenger to \(\mathcal {A}\) and simulates the oracles as described in games \(\mathsf {G}_3\) and \(\mathsf {G}_4\), for which it requires time \((q_h+q_s)\cdot T_\mathsf {TDP}\). After \(\mathcal {A}\) calls \(\mathsf {Finalize}\), \(\mathcal {D}_2\) returns the output of \(\mathsf {Finalize}\). We can see that \(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_0^{\mathcal {D}_2} \Rightarrow 1]\). Similarly, we have \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_2} \Rightarrow 1]\). Hence, we have \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_2} \Rightarrow 1] - \Pr [\text {L}_0^{\mathcal {D}_2} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2)\).\(\square \)
Lemma 5
\(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = 0\).
Proof
In \(\mathsf {G}_4\) we again use the original \(\mathsf {KeyGen}\), such that \(\mathsf {Eval}( pub ,\cdot )\) defines a permutation. This means that every hash value has a unique preimage, so any valid forgery \(\sigma ^*\) on \(m^*\) must equal the simulated signature \(\sigma _{m^*}\). Therefore, whenever the adversary is able to produce a forgery, the game outputs 0 due to the BAD rule; whenever it is unable to produce a forgery, the game also outputs 0. Thus, in all cases the game outputs 0, hence \(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = 0\). \(\square \)
We combine Lemmas 1 to 5 to get:
$$\begin{aligned} \varepsilon = \Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1) + \left( \frac{l}{l-1}\right) \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2), \end{aligned}$$
where l is the lossiness of \(\mathsf {TDP}\). Because both distinguishers run in the same time \(t = t_{\mathcal {A}} + (q_h+q_s)\cdot T_\mathsf {TDP}\), each can have an advantage of at most \(\varepsilon '\), giving us:
$$\begin{aligned} \varepsilon \le \varepsilon ' + \left( \frac{l}{l-1}\right) \varepsilon ' = \left( \frac{2l-1}{l-1}\right) \varepsilon '. \end{aligned}$$
This completes the proof. \(\square \)
Lossiness of RSA from the \(\varPhi \)Hiding Assumption
Lossiness of RSA
The lossiness of RSA for a number of specific instance generators \(\mathsf {RSAGen}\) was first considered in [22]. We now recall (and extend) some of the results from [22].
First, we recall some definitions from [22]. We denote by \({\mathcal {RSA}}_k:= \{(N,p,q)\; \mid \; N = pq, p,q \in \mathbb {P}_{k/2}, p \ne q\}\) the set of all tuples (N, p, q) such that \(N = pq\) is the product of two distinct k / 2-bit primes. Such an N is called an RSA modulus. By \((N,p,q) \in _{_R}{\mathcal {RSA}}_k\) we mean that (N, p, q) is sampled according to the uniform distribution on \({\mathcal {RSA}}_k\). Let R be some relation on p and q. By \({\mathcal {RSA}}_k[R]\), we denote the subset of \({\mathcal {RSA}}_k\) such that the relation R holds on p and q. For example, let e be a prime. Then \({\mathcal {RSA}}_k[p = 1 \bmod e]\) is the set of all (N, p, q), where \(N = pq\) is the product of two distinct k / 2-bit primes p, q and \(p = 1 \bmod e\). That is, the relation R(p, q) is true if \(p = 1 \bmod e\) and q is arbitrary. By \((N, p, q) \in _{_R}{\mathcal {RSA}}_k[R]\) we mean that (N, p, q) is sampled according to the uniform distribution on \({\mathcal {RSA}}_k[R]\).
\(\alpha \)-\(\varPhi \)-Hiding Assumption. We recall a variant of the \(\varPhi \)-Hiding Assumption introduced by Cachin, Micali and Stadler [10], where we build on a formalization by Kiltz, O’Neill and Smith [22]. The main statement of the assumption is that given a k-bit RSA modulus \(N=pq\) and a random \(\alpha \cdot k\)-bit prime e (where \(0< \alpha < \frac{1}{4}\) is a public constant such that \(\alpha \cdot k \in \mathbb {Z}\)), it is difficult to decide whether \(e \mid \varphi (N)\) or \(\gcd (e,\varphi (N)) = 1\). We note that if \(e \mid \varphi (N)\) with \(e \ge N^{1/4}\), then N can be factored using Coppersmith’s attacks [11]; see [10] for details. Hence, for the \(\alpha \)-\(\varPhi \)-Hiding Assumption to hold, the bit length of e must not exceed one-fourth of the bit length of N.
Consider a distinguisher \(\mathcal {D}\) which plays one of the games \(\mathsf {P}_0\) or \(\mathsf {P}_1\) defined in Fig. 5. The advantage of \(\mathcal {D}\) is defined as:
$$\begin{aligned} \mathbf{Adv }(\mathcal {D}) = \Pr [\mathsf {P}_0^{\mathcal {D}} \Rightarrow 1] - \Pr [\mathsf {P}_1^{\mathcal {D}} \Rightarrow 1]. \end{aligned}$$
We say that the \(\alpha \)-\(\varPhi \)-Hiding Problem is \((t, \varepsilon )\)-hard if all distinguishers \(\mathcal {D}\) running in time at most t have an advantage of at most \(\varepsilon \).
Define an RSA instance generator \(\mathsf {RSAGen}\) as an algorithm that returns (N, e, p, q) sampled as \(e\in _{_R}\mathbb {P}_{\alpha k}\) and \((N,p,q) \in _{_R}{\mathcal {RSA}}_k[\gcd (e,\varphi (N)) = 1, N \bmod e \ne 1]\). (See [22] for details on the sampling algorithm.)
Lemma 6
If the \(\alpha \)-\(\varPhi \)-Hiding Problem is \((t, \varepsilon )\)-hard, then \(\mathsf {RSA}=(\mathsf {RSAGen}, \mathsf {RSAEval},\mathsf {RSAInv})\) defines a \((2^{\alpha k - 1}, t,\varepsilon )\)-lossy trapdoor permutation.
Proof
If (N, e) is sampled using \(\mathsf {RSAGen}\), then \(\gcd (e,\varphi (N))=1\) and (N, e) defines a permutation \(\mathsf {RSA}(x)=x^e \bmod N\) over \({\mathbb {Z}}_N^*\). We define \(\mathsf {LossyGen}\) to be an algorithm that returns (N, e) sampled as \(e\in _{_R}\mathbb {P}_{\alpha k}\) and \((N,p,q) \in _{_R}{\mathcal {RSA}}_k[p = 1 \bmod e, p \ne 1 \bmod e^2, q \ne 1 \bmod e]\). If (N, e) is sampled using \(\mathsf {LossyGen}\), then \(e \mid \varphi (N)\), and hence, the RSA function is exactly e-to-1 on the domain \(\mathsf {Dom}_ pub = {\mathbb {Z}}_N^*\). We note in particular that \(e^2 \nmid (p-1)\), and thus, we have exactly an e-to-1 function. By definition, the outputs of \(\mathsf {RSAGen}\) and \(\mathsf {LossyGen}\) are indistinguishable if the \(\alpha \)-\(\varPhi \)-Hiding Problem is hard. \(\square \)
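The e-to-1 behaviour in the lossy branch can be verified directly on toy parameters (illustrative only): with \(e = 5\), \(p = 11 \equiv 1 \bmod 5\) (but \(\not \equiv 1 \bmod 25\)) and \(q = 13 \not \equiv 1 \bmod 5\), the RSA map is exactly 5-to-1 on \({\mathbb {Z}}_N^*\).

```python
# Toy check of Lemma 6's lossy branch: p = 1 mod e, p != 1 mod e^2, q != 1 mod e
# makes x -> x^e mod N exactly e-to-1 on Z_N^*. Parameters are illustrative.
from math import gcd
from collections import Counter

p, q, e = 11, 13, 5      # 5 | p-1, but 25 does not divide p-1 and 5 does not divide q-1
N = p * q
units = [x for x in range(1, N) if gcd(x, N) == 1]
images = Counter(pow(x, e, N) for x in units)

assert len(images) == len(units) // e     # the image shrinks by a factor of e
assert set(images.values()) == {e}        # every image point has exactly e preimages
```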
Remark 1
We use the classical definition of lossiness here: since the public exponent e is sampled at random rather than fixed, we cannot give the exact regular lossiness of the RSA trapdoor permutation.
Fixed-Prime \(\varPhi \)-Hiding Assumption. In practice, e is chosen to be small and is generally fixed to a specific value, such as \(e = 3\) or \(e = 2^{16} + 1\), which allows for fast exponentiation. We now present a minor variant of the \(\alpha \)-\(\varPhi \)-Hiding Assumption for fixed primes e, where our formalization relies on discussions from [10] and [22, Footnote 9].
First, we discuss the special case of \(e = 3\). We define our RSA instance generator \({\mathsf {RSAGen}}_3\) as an algorithm that samples (N, p, q) uniformly from \({\mathcal {RSA}}_k[p = 2 \bmod 3, q = 2 \bmod 3]\), which is equivalent to \({\mathcal {RSA}}_k[\gcd (3,\varphi (N))=1]\). We note that \(N \bmod 3\) is then always 1. This means that for the lossy case, we must also ensure that \(N \bmod 3 = 1\), as otherwise there would be a simple distinguisher. The only way to ensure this is to have 3 divide both \(p-1\) and \(q-1\). Thus, our lossy keys are sampled from \({\mathcal {RSA}}_k[p, q = 1 \bmod 3, p \ne 1 \bmod 9, q \ne 1 \bmod 9]\).
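Both branches of the \(e = 3\) case can be checked on toy parameters (illustrative only): injective keys with \(p \equiv q \equiv 2 \bmod 3\) and lossy keys with \(p \equiv q \equiv 1 \bmod 3\) (neither \(\equiv 1 \bmod 9\)) both yield \(N \equiv 1 \bmod 3\), so N itself does not reveal which branch was sampled, while the lossy map is 9-to-1.

```python
# Toy check of the e = 3 case: injective keys (p = q = 2 mod 3) and lossy keys
# (p = q = 1 mod 3, neither = 1 mod 9) both give N = 1 mod 3; the lossy map
# x -> x^3 mod N is exactly 9-to-1 on Z_N^*. Parameters are illustrative.
from math import gcd
from collections import Counter

def profile(p, q, e=3):
    N = p * q
    units = [x for x in range(1, N) if gcd(x, N) == 1]
    images = Counter(pow(x, e, N) for x in units)
    return N % 3, len(units) // len(images)   # (N mod 3, regularity of the map)

assert profile(5, 11) == (1, 1)   # injective branch: a permutation
assert profile(7, 13) == (1, 9)   # lossy branch: exactly 9-to-1
```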
Consider a distinguisher \(\mathcal {D}\) which plays one of the games in Fig. 6. The advantage of \(\mathcal {D}\) is defined as
We say that the Fixed-Prime \(\varPhi \)-Hiding Problem, with \(e=3\), is \((t, \varepsilon )\)-hard if all distinguishers running in time at most t have an advantage of at most \(\varepsilon \).
Lemma 7
If the Fixed-Prime \(\varPhi \)-Hiding Problem, with \(e=3\), is \((t, \varepsilon )\)-hard, then \({\mathsf {RSA}}_3=({\mathsf {RSAGen}}_3,\mathsf {RSAEval},\mathsf {RSAInv})\) defines a regular \((9, t,\varepsilon )\)-lossy trapdoor permutation.
Proof
If \((N,p,q) \in {\mathcal {RSA}}_k[\gcd (3,\varphi (N))=1]\), then (N, 3) clearly makes the RSA function a permutation. If \((N,p,q) \in {\mathcal {RSA}}_k[p, q = 1 \bmod 3, p \ne 1 \bmod 9, q \ne 1 \bmod 9]\), then \(9 \mid \varphi (N)\) and, hence, the RSA function is 9-to-1 on the domain \(\mathsf {Dom}_ pub = {\mathbb {Z}}_N^*\). \(\square \)
We now consider the general case of fixed \(e > 3\). For this case, we define our RSA instance generator \({\mathsf {RSAGen}}_e\) as an algorithm that samples (N, p, q) from \({\mathcal {RSA}}_k[\gcd (e,\varphi (N))=1, N \bmod e \ne 1]\). We note that \(N \bmod e\) will be some value between 2 and \(e-1\). This means that for the lossy case, we require e to divide \(p-1\) but not \(q-1\), as otherwise we would have a simple distinguisher. Our lossy keys are sampled from \({\mathcal {RSA}}_k[p = 1 \bmod e, p \ne 1 \bmod e^2, q \ne 1 \bmod e]\).
Consider a distinguisher \(\mathcal {D}\) which plays one of the games in Fig. 7. The advantage of \(\mathcal {D}\) is defined as
We say that the Fixed-Prime \(\varPhi \)-Hiding Problem, with \(e>3\), is \((t, \varepsilon )\)-hard if all distinguishers running in time at most t have an advantage of at most \(\varepsilon \).
Lemma 8
If the Fixed-Prime \(\varPhi \)-Hiding Problem, with \(e>3\), is \((t, \varepsilon )\)-hard, then \({\mathsf {RSA}}_e=({\mathsf {RSAGen}}_e,\mathsf {RSAEval},\mathsf {RSAInv})\) defines a regular \((e, t,\varepsilon )\)-lossy trapdoor permutation.
Proof
If \((N,p,q) \in {\mathcal {RSA}}_k[\gcd (e,\varphi (N))=1, N \bmod e \ne 1]\), then (N, e) clearly defines a permutation. If \((N,p,q) \in {\mathcal {RSA}}_k[p = 1 \bmod e, p \ne 1 \bmod e^2, q \ne 1 \bmod e]\), then \(e \mid \varphi (N)\), and hence, the RSA function is e-to-1 on the domain \(\mathsf {Dom}_ pub = {\mathbb {Z}}_N^*\). We note in particular that \(e^2 \nmid (p-1)\); thus, we have exactly an e-to-1 function. \(\square \)
Proof of Coron’s Impossibility Results (Corrected)
Meta-reductions
We now proceed to present a corrected version of Coron’s proofs. These proofs use a technique known as a meta-reduction, in which we perform a proof by reduction on an existing reduction. In a normal setting, a reduction \(\mathcal {R}\) would interact with an adversary \(\mathcal {A}\) in a somewhat black-box manner. While \(\mathcal {R}\) is unable to modify or even view the internal state of \(\mathcal {A}\), it is still able to rewind \(\mathcal {A}\) to a previous state. In particular, this means that \(\mathcal {R}\) can rewind the randomness tape of \(\mathcal {A}\).
On the other hand, the reduction \(\mathcal {R}\) normally interacts with a challenger \(\mathcal {C}\) for some (noninteractive) hard problem. After having received an input from \(\mathcal {C}\) and possibly some precomputations, \(\mathcal {R}\) begins to interact with \(\mathcal {A}\). At some point, \(\mathcal {A}\) gives some final output to \(\mathcal {R}\), which \(\mathcal {R}\) uses to compute a solution for \(\mathcal {C}\). The reduction then sends this solution to \(\mathcal {C}\). We represent this pictorially in Fig. 8.
The meta-reduction \({\mathcal {M}}\)’s interaction with \(\mathcal {R}\) will be twofold. In the first instance, \({\mathcal {M}}\) will simulate an adversary \(\mathcal {A}'\) for \(\mathcal {R}\) to interact with as described above. This means that \(\mathcal {A}'\) will simulate all the queries of \(\mathcal {A}\) and will behave in an expected manner when having its state and/or randomness tape rewound. In the second instance, \({\mathcal {M}}\) will not only simulate a challenger \(\mathcal {C}'\), but will also interact with \(\mathcal {R}\) in the same way that \(\mathcal {R}\) interacted with \(\mathcal {A}\). That is to say \({\mathcal {M}}\) can rewind \(\mathcal {R}\) to a previous state, which includes rewinding \(\mathcal {R}\)’s randomness tape. We represent this pictorially in Fig. 9.
It is worth noting at this juncture that the running times of \(\mathcal {A}, \mathcal {R}\) and \({\mathcal {M}}\) do not include the running time of the other algorithms with which they interact. That is, we assume that a query to another algorithm is answered instantly, i.e., in 1 time unit. We do this in order to eliminate any “waiting time”, wherein an algorithm is waiting for a response from another algorithm, which is superfluous for our analysis.
Proof of Theorem 2
Proof
Assume \(\mathcal {R}\) is a reduction that \((t_{\mathcal {F}}, t_{\mathcal {R}}, q_h, q_s, \varepsilon _{\mathcal {R}}, \varepsilon _{\mathcal {F}})\)-reduces inverting \(\mathsf {TDP}\) to breaking \(\mathsf {TDP}\hbox {-}\mathsf {FDH}\). We will now build an adversary \(\mathcal {I}\) against the one-wayness of \(\mathsf {TDP}\). \(\mathcal {I}\) receives \( pub \) and \(y = \mathsf {Eval}( pub ,x) \in \mathsf {Dom}_{ pub }\) for unknown \(x \in _{_R}\mathsf {Dom}_{ pub }\). The goal of \(\mathcal {I}\) is to compute x.

1.
Adversary \(\mathcal {I}\) runs the reduction \(\mathcal {R}\), providing it with \(( pub ,y)\), and in reply receives a public key \( pk = pub '\) for \(\mathsf {TDP}\hbox {-}\mathsf {FDH}\). Note that \( pub '\) provided by \(\mathcal {R}\) may be different from \( pub \). \(\mathcal {I}\) verifies that \( pub '\) defines a permutation on \(\mathsf {Dom}_{ pub '}\) by running \(\mathsf {Certify}(1^k, pub ')\). If not, then \(\mathcal {I}\) outputs \(\bot \) to \(\mathcal {R}\); recall that the success probability of a forger is defined for valid \( pk \) only, hence we abort if we receive an invalid \( pk \). Otherwise, \(\mathcal {I}\) continues by simulating a forger \({\mathcal {F}}_\mathrm{sim}\) for \(\mathcal {R}\). (We note that it may be the case that \( pub '\not = pub \); in particular, we do not know if \( pub '\) was correctly generated, but this does not matter for what follows.)

2.
Let \(q = \max (q_h,2 \cdot q_s)\). The adversary \(\mathcal {I}\) picks a set \(\mathcal {X}\) of q arbitrary messages (e.g., at random or the lexicographically first). \(\mathcal {I}\) then picks \(i \in _{_R}\{1, \ldots , q_s\}, m^* \in _R \mathcal {X}\) and \((m_1, \ldots , m_{q_s}) \in _R (\mathcal {X}{\setminus } \{m^*\})^{q_s}\). This defines the following two sequences of messages:
$$\begin{aligned} {\mathcal {M}}^{\text {rw}}= (m_1, \ldots , m_{q_s}), \quad {{\mathcal {M}}}^{\text {first}}= (m_1, \ldots , m_{i-1}, m^*). \end{aligned}$$ 
3.
Adversary \(\mathcal {I}\) queries the signing oracle, with implicit hash queries, on the messages in \({{\mathcal {M}}}^{\text {first}}\) (i.e., \((m_1, \ldots , m_{i-1}, m^*)\)) and, if the reduction does not abort, receives the response list \({\mathcal {S}}^{\text {first}}= (\sigma _1, \ldots , \sigma _{i-1}, \sigma ^*)\). We stress that these must all be valid signatures, which we can check using the public verification key. If the reduction does not abort but any of the signatures is invalid, then we abort.

4.
The reduction \(\mathcal {R}\) is then rewound back to its initial state.^{Footnote 6} Now \(\mathcal {I}\) queries the signing oracle on the messages \({\mathcal {M}}^{\text {rw}}\) (i.e., \((m_1, \ldots , m_{q_s})\)) and, if the reduction does not abort, receives \({\mathcal {S}}^{\text {rw}}= (\sigma _1,\ldots ,\sigma _{q_s})\). We stress that these must all be valid signatures, which we can check using the public verification key. If the reduction does not abort but any of the signatures is invalid, then we abort.

5.
Adversary \(\mathcal {I}\) then makes a hash query on \(m^*\) and on \(q_h - q_s - 1\) additional messages from the set \(\mathcal {X}\). This means that \(\mathcal {I}\) has made exactly \(q_h\) hash queries, including the implicit hash queries from signing, so that it matches what the reduction expects.

6.
Adversary \(\mathcal {I}\) then tosses a biased coin \(\tau \) with probability \(\varepsilon _{\mathcal {F}}\) of returning 1, and if \(\tau = 0\), then \(\mathcal {I}\) sends \(\bot \) to \(\mathcal {R}\). If \(\tau = 1\), then \(\mathcal {I}\) submits \((m^*,\sigma ^*)\) as a forgery. Note that this is done in time \(t_{\mathcal {F}}\) in order to correctly simulate a forger.

7.
Because the reduction was rewound, this constitutes a valid forgery, as \(m^*\) was not queried to the signature oracle and \(\sigma ^*\) is indeed a valid signature on \(m^*\). \(\mathcal {R}\) will then return x (with probability \(\varepsilon _{\mathcal {R}}\)), which \(\mathcal {I}\) submits as its solution to the one-wayness experiment.
We now analyze \(\mathcal {I}\)’s success probability in breaking the one-wayness of \(\mathsf {TDP}\). To this end, we define \(\mathcal {Q}\) as the set of all sequences of indices such that the corresponding signature queries are correctly answered by \(\mathcal {R}\), after the hash queries (in time less than \(t_{\mathcal {R}}\)). If a sequence of signature queries is correctly answered by \(\mathcal {R}\), then the same sequence of signature queries without the last message is also correctly answered by \(\mathcal {R}\); so for any sequence \((m_1, \ldots , m_j) \in \mathcal {Q}\), we have \((m_1, \ldots , m_{j-1}) \in \mathcal {Q}\).
Consider (as a thought-experiment) a (possibly inefficient) real forger \({\mathcal {F}}_\mathrm{real}\) who inputs arbitrary \( pub '\) such that \(\mathsf {Certify}(1^k, pub ')=1\), makes hash queries on the messages from \(\mathcal {X}\) (receiving answers \(\mathcal {H}\)), signature queries on the messages from the sequence \({\mathcal {M}}^{\text {rw}}\) (receiving answers \({\mathcal {S}}^{\text {rw}}\)) and outputs a valid forgery \(\sigma ^*\) on the message \(m^*\) with probability \(\varepsilon _{{\mathcal {F}}_\mathrm{real}}\). This is a valid forger, so by the assertion of the theorem, the reduction \(\mathcal {R}\) (interacting with \({\mathcal {F}}_\mathrm{real}\)) outputs \(x=\mathsf {Invert}( pub ,y)\) with probability at least \(\varepsilon _{\mathcal {R}}\). After the rewind, \(\mathcal {R}\) (interacting with \(\mathcal {I}\)) sees exactly the same transcript as it would when interacting with \({\mathcal {F}}_\mathrm{real}\), except if \({{\mathcal {M}}}^{\text {first}}\not \in \mathcal {Q}\) (\(\mathcal {R}\) does not answer all the signature queries before the rewind) and \({\mathcal {M}}^{\text {rw}}\in \mathcal {Q}\) (\(\mathcal {R}\) answers all the signature queries after the rewind). In that case the forger \({\mathcal {F}}_\mathrm{real}\) would output a valid forgery (with probability \(\varepsilon _{{\mathcal {F}}_\mathrm{real}}\)) but our simulated forger does not. Here, we rely on the fact that \(\mathsf {Eval}( pub ',\cdot )\) is a permutation, and hence, the forgery \(\sigma ^*\) on \(m^*\) output by \({\mathcal {F}}_\mathrm{real}\) is the same as the simulated one. (Otherwise it could be the case that the simulated forgery is useless for the reduction.)
Let \(\mathcal {R}^{{\mathcal {F}}_\mathrm{real}}\) denote the execution of \(\mathcal {R}\) with the above real forger and let \(\mathcal {R}^{{\mathcal {F}}_\mathrm{sim}}\) denote the execution of \(\mathcal {R}\) with the forger \({\mathcal {F}}_\mathrm{sim}\) simulated by \(\mathcal {I}\). The two games \(\mathcal {R}^{{\mathcal {F}}_\mathrm{real}}\) and \(\mathcal {R}^{{\mathcal {F}}_\mathrm{sim}}\) are identical unless \({\mathcal {M}}^{\text {rw}}\in \mathcal {Q}\) and \({{\mathcal {M}}}^{\text {first}}\not \in \mathcal {Q}\), and \(\tau =1\), where all three “bad events” are defined in the game involving the inverter \(\mathcal {I}\). By the above we get
$$\begin{aligned} \Pr [\mathcal {R}^{{\mathcal {F}}_\mathrm{real}} \Rightarrow x] - \Pr [\mathcal {R}^{{\mathcal {F}}_\mathrm{sim}} \Rightarrow x] \le \varepsilon _{\mathcal {F}}\cdot \Pr [{\mathcal {M}}^{\text {rw}}\in \mathcal {Q}\wedge {{\mathcal {M}}}^{\text {first}}\not \in \mathcal {Q}]. \end{aligned}$$
We need the following combinatorial lemma due to Coron [13, Appendix D].
Lemma 9
Let \(\mathcal {Q}\) be a set of sequences of at most \(q_s\) messages from \(\mathcal {X}\), such that for any sequence \((m_1,\ldots ,m_j) \in \mathcal {Q}\), we have \((m_1,\ldots ,m_{j-1}) \in \mathcal {Q}\). Then:
By Lemma 9 we have
(We note that our \(m_i\) are drawn from \(\mathcal {X}{\setminus } \{m^*\}\) instead of \(\mathcal {X}\), as stated in the Lemma. This leads to the term \(\left( 1 - \frac{q_s}{|\mathcal {X}|} \right) ^{-1}\), where \(1 - \frac{q_s}{|\mathcal {X}|}\) is a lower bound on the probability that \(m_i \ne m^*\) for all \(i \in \{1,\ldots ,q_s\}\), which then allows us to use the Lemma.) Overall, we obtain for the success probability \(\varepsilon _\mathcal {I}\) of \(\mathcal {I}\):
The last inequality is due to the fact that \(q \ge 2 \cdot q_s\), which gives \(\left( 1 - \frac{q_s}{q} \right) ^{-1} \le 2\). The time bound comes from running the reduction once, plus the rewind and the additional time for at most \(q_s\) additional signing queries. This is essentially equal to running the reduction twice and simulating the forger twice. \(\square \)
Impossibility for Any Certified Unique Signature Scheme
Coron also showed that an analogue of this result holds in the standard model [13]. That result gives essentially the same bound, with the number of hash queries \(q_h\) replaced by the size of the message space \(2^\ell \). Above and beyond moving to the standard model, we extend Theorem 2 in two directions simultaneously. Firstly, we extend it to reductions from any noninteractive hard problem \(\varPi \) to forging any certified unique signature scheme \(\mathsf {SIG}\). Secondly, we allow the reduction to run the forger \(\mathcal {F}\) multiple times.
We first define a unique signature scheme and then we define a certified unique signature scheme.
Definition 5
A signature scheme \(\mathsf {SIG}\) is said to be a unique signature scheme if for all \(( pk , sk ) \leftarrow _{{\tiny {\$}}}\mathsf {KeyGen}\) and any message m, there exists exactly one signature \(\sigma \) such that \(\mathsf {Verify}( pk ,m,\sigma ) = 1\).
Definition 6
A signature scheme \(\mathsf {SIG}\) is said to be a certified unique signature scheme if it is a unique signature scheme and there exists a polynomial-time algorithm \(\mathsf {Certify}\) that, on input \(1^k\) and a public key \( pk \) (which may have been generated adversarially), outputs 1 iff \( pk \) defines a unique signature scheme.
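For intuition, here is a hypothetical \(\mathsf {Certify}\) sketch for RSA keys based on one trivial sufficient condition: if e is prime and \(e > N \ge \varphi (N)\), then e cannot divide \(\varphi (N)\), so \(\gcd (e,\varphi (N))=1\) and (N, e) defines a permutation. This is only an illustration under that assumption; certification for practical parameters relies on finer criteria.

```python
# A hypothetical Certify sketch for RSA public keys (N, e): a prime e > N
# cannot divide phi(N) < N, so gcd(e, phi(N)) = 1 and x -> x^e mod N is a
# permutation of Z_N^*. Illustrative only; trial division suffices for toys.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def certify(N, e):
    return is_prime(e) and e > N   # sufficient, not necessary, for a permutation

assert certify(55, 59)             # 59 is prime and exceeds N = 55
assert not certify(55, 3)          # a small exponent is not certified by this test
```

Note that small exponents such as \(e = 3\) are rejected even when they do define a permutation; this mirrors the fact that RSA with small e is not certified for practical parameters.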
We will now recall the definition of a noninteractive problem instance generator due to Abe et al. [1]. This definition is very generic and encompasses both search and decisional problems.
Definition 7
A noninteractive problem instance generator consists of a triple of PPT algorithms \(\varPi = (\mathsf {Gen}, \mathsf {Verify}, \mathsf {U})\) such that:

\(\mathsf {Gen}(1^k)\) outputs an instance I and a witness w.

\(\mathsf {Verify}(I, S, w)\) takes an instance I, a candidate solution S and a witness w, and outputs 1 or 0, representing acceptance or rejection, respectively.

\(\mathsf {U}(I)\) outputs a candidate solution S.
\(\mathsf {U}\) can be seen as a trivial guessing algorithm against which the advantage of an adversary \(\mathcal {A}\) is measured. Concretely, we define the advantage of an adversary \(\mathcal {A}\) as:
$$\begin{aligned} \mathbf{Adv}^\mathsf {NIP}_{\varPi }(\mathcal {A}) = \Pr [\mathsf {Verify}(I, \mathcal {A}(I), w) = 1] - \Pr [\mathsf {Verify}(I, \mathsf {U}(I), w) = 1], \end{aligned}$$
where the probabilities are over \((I,w) \leftarrow _{{\tiny {\$}}}\mathsf {Gen}(1^k)\) and the coins of \(\mathcal {A}\) and \(\mathsf {U}\), respectively.
We say that \(\varPi \) is \((t,\varepsilon )\)-hard if all PPT adversaries \(\mathcal {A}\) running in time at most t have \(\mathbf{Adv}^\mathsf {NIP}_{\varPi }(\mathcal {A}) \le \varepsilon \).
Examples of Noninteractive Problems. We now give a description of some known problems, both search and decisional, in terms of noninteractive problems. Firstly, we have the RSA problem, which states that it is hard to invert the RSA trapdoor permutation. An instance of the RSA problem is given as \(I = (N,e,y=x^e \bmod N)\), where (N, e) are taken from the output of \(\mathsf {RSAGen}\), as defined in Sect. 4.1, \(x \in _{_R}\mathbb {Z}_N\), and the witness is \(w = x\). If the RSA problem is \((t,\varepsilon )\)-hard, then this defines a \((t,\varepsilon )\)-hard noninteractive problem, where the trivial guessing algorithm \(\mathsf {U}\) simply picks a random element in \(\mathbb {Z}_N^*\). Secondly, we consider the \(\varPhi \)-Hiding Problem, as described in Sect. 4.1. An instance of this problem is given as \(I = (N,e)\), and the witness is \(w=1\) if \(\gcd (e, \varphi (N))=1\) and \(w=0\) if \(e \mid \varphi (N)\). The \(\mathsf {Verify}\) algorithm takes as input (I, S, w) and outputs 1 if \(S=w\), else 0. Hence, if the \(\varPhi \)-Hiding Problem is \((t,\varepsilon )\)-hard, then this defines a \((t,\varepsilon )\)-hard problem, with \(\mathsf {U}\) picking a random bit. Note that we can also define the Computational and Decisional Diffie–Hellman problems in terms of hard noninteractive problems. Furthermore, this definition also covers problems with non-unique solutions, such as computing modular square roots or approximate-SVP. It is due to these types of problems that we require the \(\mathsf {Verify}\) function: in these cases, we may not be able to store all the possible solutions, but we can verify a solution using the witness.
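The RSA example above can be sketched directly in the \((\mathsf {Gen}, \mathsf {Verify}, \mathsf {U})\) interface of Definition 7. The toy modulus and all names below are illustrative; note that in the RSA case a candidate solution can be checked without the witness.

```python
# Illustrative encoding of the RSA problem as a noninteractive problem
# (Gen, Verify, U) in the sense of Definition 7. Toy parameters only.
import secrets

def Gen():
    p, q, e = 1009, 1013, 17                # fixed toy modulus for brevity
    N = p * q
    x = secrets.randbelow(N - 2) + 2        # hidden preimage
    I = (N, e, pow(x, e, N))                # instance I = (N, e, y = x^e mod N)
    return I, x                             # witness w = x

def Verify(I, S, w):
    N, e, y = I                             # accept any e-th root of y; here the
    return pow(S, e, N) == y                # witness is not even needed to check

def U(I):
    N, e, y = I                             # trivial guesser: a random element
    return secrets.randbelow(N - 1) + 1
```

Since (N, e) defines a permutation here, the witness is the unique solution, and the trivial guesser \(\mathsf {U}\) succeeds only with probability about 1 / N.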
We are now ready to state our main impossibility result which generalizes Theorem 2.
Theorem 4
Suppose \(\mathsf {SIG}\) is a certified unique signature scheme, where the size of the message space is at least \(2^{\ell }\). Let \(\mathcal {R}\) be a reduction that \((t_{\mathcal {F}}, t_{\mathcal {R}}, q_s, \varepsilon _{\mathcal {F}}, \varepsilon _{\mathcal {R}}, r)\)reduces breaking a noninteractive problem \(\varPi \) to breaking UFCMA security of \(\mathsf {SIG}\), where \(\mathcal {R}\) can run the forger at most r times sequentially. Then we can build an adversary \(\mathcal {A}\) which \((t_{\mathcal {A}},\varepsilon _{\mathcal {A}})\)breaks \(\varPi \) with
Proof
The proof of this theorem is very similar to that of Theorem 2. For \(1 \le n \le r\), we say that the reduction \(\mathcal {R}\) is in the nth round when it has already run the forger \(n-1\) times. We use this reduction to construct an adversary \(\mathcal {A}\) against the problem \(\varPi \). The reduction is initialized by the adversary \(\mathcal {A}\) sending it an instance I of the problem \(\varPi \). We consider the nth round of the reduction below:

1.
The reduction \(\mathcal {R}\) provides the adversary \(\mathcal {A}\) with a public key \( pk _n\). Note that this public key may be different from the public keys of the previous rounds. \(\mathcal {A}\) verifies that \( pk _n\) defines a unique signature scheme by running \(\mathsf {SIG}.\mathsf {Certify}(1^k, pk _n)\). If not, then \(\mathcal {A}\) outputs \(\bot \) to \(\mathcal {R}\); recall that the success probability of a forger is defined for valid \( pk \) only, hence we abort if we receive an invalid \( pk \). Otherwise, \(\mathcal {A}\) continues.

2.
Adversary \(\mathcal {A}\) picks \(2^{\ell }\) messages \(\mu _{n,1}, \ldots , \mu _{n,2^{\ell }}\) from the message space \({\mathcal {M}}\) (e.g., at random or the lexicographically first) and defines the universe \(\mathcal {U}_n = (\mu _{n,1}, \ldots , \mu _{n,2^{\ell }})\).

3.
\(\mathcal {A}\) picks \(i_n \in _{_R}\{1, \ldots , q_s\}, m^*_n \in _R \{\mu _{n,1},\ldots , \mu _{n,2^{\ell }}\}\) and \((m_{n,1}, \ldots , m_{n,q_s}) \in _R (\{\mu _{n,1},\ldots , \mu _{n,2^{\ell }}\} {\setminus } \{m^*_n\})^{q_s}\). This defines the following two sequences of messages:
$$\begin{aligned} {\mathcal {M}}^{\text {rw}}_n = (m_{n,1}, \ldots , m_{n,q_s}), \quad {{\mathcal {M}}}^{\text {first}}_n = (m_{n,1}, \ldots , m_{n,i_{n}-1}, m^*_n). \end{aligned}$$ 
4.
Adversary \(\mathcal {A}\) then queries the signing oracle on the messages \({{\mathcal {M}}}^{\text {first}}_n\) (i.e., \((m_{n,1}, \ldots , m_{n,i_{n}-1}, m^*_n )\)) and, if the reduction does not abort, receives the response \({\mathcal {S}}^{\text {first}}_n\) containing the signatures, with the last one, \(\sigma _n^*\), being the signature on \(m^*_n\). We stress that these must all be valid signatures, which we can check using the public verification key. If the reduction does not abort but any of the signatures is invalid, then we abort.

5.
The reduction \(\mathcal {R}\) is then rewound back to before it answered the signature queries. Now \(\mathcal {A}\) queries the signing oracle on the messages \({\mathcal {M}}^{\text {rw}}_n\) (i.e., \((m_{n,1}, \ldots , m_{n,q_s})\)) and, if the reduction does not abort, receives \({\mathcal {S}}^{\text {rw}}_n\), which contains the signatures on the messages in \({\mathcal {M}}^{\text {rw}}_n\). We stress that these must all be valid signatures, which we can check using the public verification key. If the reduction does not abort but any of the signatures is invalid, then we abort.

6.
Adversary \(\mathcal {A}\) then tosses a biased coin \(\tau _n\) with probability \(\varepsilon _{\mathcal {F}}\) of returning 1, and if \(\tau _n = 0\), then \(\mathcal {A}\) sends \(\bot \) to \(\mathcal {R}\). If \(\tau _n = 1\), then \(\mathcal {A}\) submits \((m^*_n,\sigma _n^*)\) as a forgery. Note that this is done in time \(t_{\mathcal {F}}\) in order to correctly simulate a forger.

7.
Because the reduction was rewound, this constitutes a valid forgery, as \(m_n^*\) was not queried to the signature oracle and \(\sigma _n^*\) is indeed a valid signature on \(m_n^*\).
This process is repeated for each round, with \(n = 1\) to r. We note that in each round, we need to draw a new set of messages for the universe \(\mathcal {U}_n\), as the message space may have changed, due to the public key being different. After at most r rounds, the reduction \(\mathcal {R}\) outputs a solution \(\widehat{S}\) with probability \(\varepsilon _{\mathcal {R}}\), which \(\mathcal {A}\) will use as its solution to I.
We now analyze the success probability of \(\mathcal {A}\) in breaking the problem \(\varPi \). For the nth round, we define \(\mathcal {Q}_n\) as the set of sequences of indices such that the corresponding signature queries are correctly answered by \(\mathcal {R}\) in the nth round. We can see that if a sequence is correctly answered by \(\mathcal {R}\) in the nth round, then the same sequence without the last query is also correctly answered by \(\mathcal {R}\) in the nth round. That is to say, for any sequence \((m_{n,1}, \ldots , m_{n,j}) \in \mathcal {Q}_n\), we have \((m_{n,1}, \ldots , m_{n,j-1}) \in \mathcal {Q}_n\).
Consider now a real forger \({\mathcal {F}}_\mathrm{real}\), which is run r times and in the nth round receives the public key \( pk _n\), makes the signature queries \({\mathcal {M}}^{\text {rw}}_n\), receives \({\mathcal {S}}^{\text {rw}}_n\) as an answer, and then submits \((m^*_n,\sigma ^*_n)\) as a forgery with probability \(\varepsilon _{{\mathcal {F}}_\mathrm{real}}\). By definition, after at most the rth round, the reduction will output a candidate solution S with probability \(\varepsilon _{\mathcal {R}}\). After each nth rewind, the reduction \(\mathcal {R}\) sees the same transcript in the nth round when interacting with \({\mathcal {F}}_\mathrm{sim}\) as it does when it interacts with \({\mathcal {F}}_\mathrm{real}\), unless \({{\mathcal {M}}}^{\text {first}}_n \not \in \mathcal {Q}_n\) and \({\mathcal {M}}^{\text {rw}}_n \in \mathcal {Q}_n\). In this case, the real forger \({\mathcal {F}}_\mathrm{real}\) will output a forgery with probability \(\varepsilon _{{\mathcal {F}}_\mathrm{real}}\); however, the forger simulated by \(\mathcal {A}\) will be unable to output a forgery. This relies on the fact that \( pk _n\) defines a unique signature scheme, and hence, the forgery output by the forger \({\mathcal {F}}_\mathrm{real}\) would be the same as the simulated one.
We denote the execution of the reduction with the real forger as \(\mathcal {R}^{{\mathcal {F}}_\mathrm{real}}\) and the execution of the reduction with the forger simulated by \(\mathcal {A}\) as \(\mathcal {R}^{{\mathcal {F}}_\mathrm{sim}}\). From above, we get:
By Lemma 9 we have
Overall, we obtain for the success probability \(\varepsilon _\mathcal {A}\) of \(\mathcal {A}\):
Now we consider the case where the reduction can rewind the forger to some state St. This is equivalent to restarting the forger with the same random coins and giving it the same input as before until it reaches the required state St. If the reduction rewinds the forger in the \((n+1)\)th round and sends the same public key to the forger, the forger will make the same signature queries and submit the same forgery as in the previous round. This can be simulated by \(\mathcal {A}\) by setting \(\mathcal {U}_{n+1} = \mathcal {U}_n, m^*_{n+1} = m^*_n, {\mathcal {M}}^{\text {rw}}_{n+1} = {\mathcal {M}}^{\text {rw}}_n,{{\mathcal {M}}}^{\text {first}}_{n+1} ={{\mathcal {M}}}^{\text {first}}_n\) and proceeding as before. However, in the case where the reduction sends a new public key, then the forger will make different queries and submit a different forgery. Adversary \(\mathcal {A}\) can simulate this by picking new messages and new values for \(m^*_{n+1},{\mathcal {M}}^{\text {rw}}_{n+1},{{\mathcal {M}}}^{\text {first}}_{n+1}\), as it normally does, and proceeding as before. We see that in both cases the transcript the reduction sees in the \((n+1)\)th round when interacting with \(\mathcal {A}\) is the same as when interacting with a real forger \({\mathcal {F}}_\mathrm{real}\), except when \({\mathcal {M}}^{\text {rw}}_{n+1} \in \mathcal {Q}_{n+1} \wedge {{\mathcal {M}}}^{\text {first}}_{n+1} \not \in \mathcal {Q}_{n+1}\). By a similar argument as above, we get the same bound.
We see that the adversary runs each round of the reduction twice and simulates the forger twice, thus giving us our time bound. \(\square \)
Remark 2
We need the forgers to be run sequentially; otherwise, the proof might not go through. When the reduction is allowed to run the forgers in parallel, it may interleave their executions and make the replies to one forger dependent on the queries of the others. In this case, a real forger would have no problem producing a forgery; however, the simulated forger described above may not be able to do so. Because the reduction may change its responses based on the queries, the simulated forgers may never be able to obtain a forgery to submit.
As an example, consider the following execution of the adversary described above, with only 2 parallel forgers. The adversary sends the instance I to the reduction. The reduction in reply sends a public key \( pk _1\) for the first forger \(\mathcal {F}_1\). The first forger defines the universe \(\mathcal {U}_1\) and sequences \({\mathcal {M}}^{\text {rw}}_1, {{\mathcal {M}}}^{\text {first}}_1\) as defined above. It then proceeds to make signing queries on messages indexed \({{\mathcal {M}}}^{\text {first}}_1\). Based on these queries, \(\mathcal {R}\) computes a public key \( pk _2\) for a second forger \(\mathcal {F}_2\). The second forger begins by defining its universe \(\mathcal {U}_2\) and sequences \({\mathcal {M}}^{\text {rw}}_2,{{\mathcal {M}}}^{\text {first}}_2\). Forger \(\mathcal {F}_2\) then proceeds to make signing queries on the messages indexed by \({{\mathcal {M}}}^{\text {first}}_2\). Based on this the reduction \(\mathcal {R}\) computes the responses \({\mathcal {S}}^{\text {first}}_1\) and sends it to forger \(\mathcal {F}_1\). Having received this, adversary \(\mathcal {A}\) rewinds \(\mathcal {R}\) to before it answered the signature queries. At this point forger \(\mathcal {F}_1\) will make signature queries to the messages indexed by \({\mathcal {M}}^{\text {rw}}_1\). Using these new queries, the reduction \(\mathcal {R}\) will compute a new public key \(\widehat{ pk }_2\) and use it to initialize forger \(\mathcal {F}_2\). Due to the fact that \(\widehat{pk}_2 \ne pk_2\), forger \(\mathcal {F}_2\) will have to define a new universe \(\widehat{\mathcal {U}}_2\) and new sequences \(\widehat{{{\mathcal {M}}}^{\text {first}}_2}, \widehat{{\mathcal {M}}^{\text {rw}}_2}\). At this point forger \(\mathcal {F}_2\) will make signature queries on the messages indexed by \(\widehat{{{\mathcal {M}}}^{\text {first}}_2}\). Based on these queries, the reduction \(\mathcal {R}\) will compute the responses to forger \(\mathcal {F}_1\). 
At this point it is not clear how to proceed, as the internal state of the reduction has changed. In particular the set \(\mathcal {Q}'_1\) may now exclude \({\mathcal {M}}^{\text {rw}}_1\), which would mean that we can no longer apply Lemma 9. Extending Theorem 4 to parallel forgers remains an open problem.
The Probabilistic Signature Scheme
The Scheme
Let \(\mathsf {TDP}=(\mathsf {Gen},\mathsf {Eval},\mathsf {Invert})\) be a trapdoor permutation. For simplicity we assume that \(\mathsf {Dom}_ pub = \{0,1\}^k\) for all \( pub \) output by \(\mathsf {Gen}\). (If that is not the case and we have, e.g., \(\mathsf {Dom}_ pub ={\mathbb {Z}}^*_N\), the scheme can be adapted accordingly [4].) We now recall the Probabilistic Signature Scheme (PSS) [4]. PSS is parametrized not only by the security parameter, but also by two additional integers \(k_0\) and \(k_1\). The first parameter \(k_0\) defines the size of the randomness in bits and the second defines the domains of the two hash functions. PSS uses two hash functions \(\mathsf {H}: \{0,1\}^*\rightarrow \{0,1\}^{k_1}\) and \(\mathsf {F}: \{0,1\}^{k_1} \rightarrow \{0,1\}^{k-k_1}\). We also require two additional functions \(\mathsf {F}_1\) and \(\mathsf {F}_2\): \(\mathsf {F}_1(\omega )\) returns the first \(k_0\) bits of \(\mathsf {F}(\omega )\) and \(\mathsf {F}_2(\omega )\) returns the remaining \((k-k_1)-k_0\) bits of \(\mathsf {F}(\omega )\). The scheme \(\mathsf {TDP}\hbox {-}\mathsf {PSS}[k_0,k_1]\) is defined in Fig. 10.
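To make the padding concrete, the following sketch (our own illustration, not the paper's: the SHA-256-based stand-ins for \(\mathsf {H}\) and \(\mathsf {F}\) and the toy parameters are assumptions) builds the k-bit padded value \(\omega \Vert (\mathsf {F}_1(\omega )\oplus r)\Vert \mathsf {F}_2(\omega )\) and checks it on the verification side. A real signature would additionally apply \(\mathsf {Invert}( td ,\cdot )\) to the padded value, and verification would first apply \(\mathsf {Eval}( pub ,\cdot )\).

```python
import hashlib

# Toy bit lengths (hypothetical; in practice k is the length of the TDP domain).
k, k0, k1 = 256, 64, 128  # |padded value| = k, |r| = k0, |omega| = k1

def _hash_bits(tag: bytes, data: bytes, nbits: int) -> int:
    """Stand-in random oracle: expand SHA-256 and truncate to nbits."""
    out, ctr = b"", 0
    while len(out) * 8 < nbits:
        out += hashlib.sha256(tag + ctr.to_bytes(4, "big") + data).digest()
        ctr += 1
    return int.from_bytes(out, "big") >> (len(out) * 8 - nbits)

def H(m: bytes, r: int) -> int:          # H: {0,1}* -> {0,1}^{k1}
    return _hash_bits(b"H", m + r.to_bytes(k0 // 8, "big"), k1)

def F(omega: int) -> int:                # F: {0,1}^{k1} -> {0,1}^{k-k1}
    return _hash_bits(b"F", omega.to_bytes(k1 // 8, "big"), k - k1)

def F1(omega: int) -> int:               # first k0 bits of F(omega)
    return F(omega) >> (k - k1 - k0)

def F2(omega: int) -> int:               # remaining (k-k1)-k0 bits of F(omega)
    return F(omega) & ((1 << (k - k1 - k0)) - 1)

def pss_pad(m: bytes, r: int) -> int:
    """PSS padding omega || (F1(omega) XOR r) || F2(omega); the signature
    would be Invert(td, pss_pad(m, r))."""
    omega = H(m, r)
    rstar = F1(omega) ^ r
    return (omega << (k - k1)) | (rstar << (k - k1 - k0)) | F2(omega)

def pss_check(m: bytes, y: int) -> bool:
    """Verification side: y = Eval(pub, sigma); parse and re-check."""
    omega = y >> (k - k1)
    rstar = (y >> (k - k1 - k0)) & ((1 << k0) - 1)
    gamma = y & ((1 << (k - k1 - k0)) - 1)
    r = F1(omega) ^ rstar                # recover the randomness
    return omega == H(m, r) and gamma == F2(omega)
```

Note how the randomness r is recoverable from the padded value itself, which is what makes the message-recovery variant below possible.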
Classical Security Results of TDP-PSS
The original reduction by Bellare and Rogaway from the one-wayness of \(\mathsf {TDP}\), with a security loss of \(q_h\) [4], was later improved by Coron to a factor of \(q_s\) [14] for the case of the \(\mathsf {RSA}\) trapdoor permutation.
Theorem 5
(Coron [14]) Assume the trapdoor permutation \(\mathsf {RSA}\) is \((t',\varepsilon ')\)-hard to invert. Then for any \((q_h,q_s)\), \(\mathsf {RSA}\hbox {-}\mathsf {PSS}[k_0,k_1]\) is \((q_h,q_s,t,\varepsilon )\)-\(\mathsf {UF}\hbox {-}\mathsf {CMA}\) secure in the Random Oracle Model, where
A Tight Security Proof for TDP-PSS
We now present a tight reduction which reduces breaking \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\) to the lossiness of the underlying trapdoor permutation \(\mathsf {TDP}\).
Theorem 6
Assume \(\mathsf {TDP}= (\mathsf {Gen}, \mathsf {Eval}, \mathsf {Invert})\) is a regular \((l, t',\varepsilon ')\)-lossy trapdoor permutation for \(l \ge 2\). Then, for any \((q_h,q_s,k_0,k_1)\), \(\mathsf {TDP}\hbox {-}\mathsf {PSS}[k_0,k_1]\) is \((t,\varepsilon ,q_h,q_s)\)-\(\mathsf {UF}\hbox {-}\mathsf {CMA}\) secure in the Random Oracle Model, where
\[ \varepsilon \le \frac{2l-1}{l-1}\cdot \varepsilon ' + \frac{(q_h+q_s+1)^2}{2^{k_1}}, \qquad t = t' - (q_h+q_s)\cdot T_\mathsf {TDP}, \]
and \(T_\mathsf {TDP}\) is the time to evaluate \(\mathsf {TDP}\).
Remark 3
We stress that the security reduction of Theorem 6 is independent of the size of the randomness \(k_0\). In particular, this allows us to set \(k_0 = 0\).
Proof
We prove our theorem using a series of games. The description of these games is found in Fig. 11. Let \(\mathcal {A}\) be an adversary that runs in time t against \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\) executed in the \(\mathsf {UF}\hbox {-}\mathsf {CMA}\) experiment described in \(\mathsf {G}_0\) in Fig. 11, with \(\varepsilon =\Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1] = \Pr [\mathsf {UF}\hbox {-}\mathsf {CMA}^\mathcal {A}\Rightarrow 1]\).
Now consider Game \(\mathsf {G}_1\) from Fig. 11.
Lemma 10
\(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] \ge \Pr \left[ \mathsf {G}_0^{\mathcal {A}} \Rightarrow 1\right] - \frac{(q_h + q_s + 1)^2}{2^{k_1}}\).
Proof
In \(\mathsf {G}_0\), we modeled the hash functions as random oracles. In \(\mathsf {G}_1\), we modify the \(\mathsf {H}\) oracle and the signing oracle. On any (m, r), the \(\mathsf {H}\) oracle now works by evaluating the permutation on a random element \(\sigma _{(m,r)} \in \mathsf {Dom}_{ pub }\). The result is then parsed as \(\omega _{(m,r)}\Vert r^*_{(m,r)}\Vert \gamma _{(m,r)}\), and we check whether \(\omega _{(m,r)}\) has previously been queried to the \(\mathsf {F}\) oracle. If it has, then we abort; otherwise, we store the randomness and the values \(\omega _{(m,r)}\) and \(\sigma _{(m,r)}\). Then, we compute the output of \(\mathsf {F}(\omega _{(m,r)})\) and store it. The signing oracle is then modified to pick a random value r, for which we have already computed \(\mathsf {H}(m\Vert r)\), and return the element \(\sigma _{(m,r)}\). Note that signing no longer requires the trapdoor \( td \). All our signatures verify due to the manner in which we compute the responses to queries to \(\mathsf {H}\) and \(\mathsf {F}\); thus, our simulation of the signatures is correct. Since \(\mathsf {TDP}\) is a permutation, the distribution of our \(\mathsf {H}\)-oracle responses in \(\mathsf {G}_1\) is the same as in \(\mathsf {G}_0\), except when the reduction aborts due to a collision in the \(\mathsf {F}\)-oracle. The probability of a collision is at most \(\frac{(q_h + q_s+1)^2}{2^{k_1}}\), since the adversary makes at most \(q_h + q_s + 1\) queries, implicit or explicit, to the \(\mathsf {F}\)-oracle, giving at most \((q_h + q_s + 1)^2\) possible collisions from a total of \(2^{k_1}\) possible choices. Furthermore, we do not abort if the colliding value matches the value already set; hence, given a collision, we abort with probability \(1-\frac{1}{2^{k-k_1}}\). The total probability that we abort is therefore \(\left( 1-\frac{1}{2^{k-k_1}} \right) \left( \frac{(q_h + q_s + 1)^2}{2^{k_1}} \right) \).
Thus, we have \(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] \ge \Pr [\mathsf {G}_0^{\mathcal {A}} \Rightarrow 1] - \frac{(q_h + q_s + 1)^2}{2^{k_1}}\). \(\square \)
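The oracle simulation of \(\mathsf {G}_1\) can be sketched as follows (our own toy illustration, not the paper's code: a shuffled table over a 16-bit domain stands in for \(\mathsf {TDP}\), and all sizes are hypothetical). \(\mathsf {H}(m,r)\) is programmed by sampling a preimage \(\sigma \) and parsing \(\mathsf {Eval}(\sigma )\), and \(\mathsf {F}\) is programmed so that \(\sigma \) verifies; signing then needs no trapdoor:

```python
import random

# Toy bit lengths (hypothetical, for illustration only).
k, k1, k0 = 16, 6, 4

rng = random.Random(0)
perm = list(range(2 ** k))
rng.shuffle(perm)

def Eval(x: int) -> int:
    """Public permutation over {0,1}^k; the trapdoor would be its inverse."""
    return perm[x]

h_table = {}  # (m, r) -> (omega, sigma): programmed H-values and preimages
f_table = {}  # omega -> programmed F(omega)

def H(m: int, r: int) -> int:
    """G1-style H-oracle: sample sigma at random and parse
    Eval(sigma) as omega || r* || gamma."""
    if (m, r) not in h_table:
        sigma = rng.randrange(2 ** k)
        y = Eval(sigma)
        omega = y >> (k - k1)
        rstar = (y >> (k - k1 - k0)) & ((1 << k0) - 1)
        gamma = y & ((1 << (k - k1 - k0)) - 1)
        if omega in f_table:
            raise RuntimeError("abort: omega was already queried to F")
        # Program F(omega) so that F1(omega) = r* XOR r and F2(omega) = gamma.
        f_table[omega] = ((rstar ^ r) << (k - k1 - k0)) | gamma
        h_table[(m, r)] = (omega, sigma)
    return h_table[(m, r)][0]

def F(omega: int) -> int:
    if omega not in f_table:
        f_table[omega] = rng.randrange(2 ** (k - k1))
    return f_table[omega]

def Sign(m: int) -> int:
    """Signing WITHOUT the trapdoor: return the stored preimage sigma."""
    r = rng.randrange(2 ** k0)
    H(m, r)  # ensures (omega, sigma) for (m, r) exist
    return h_table[(m, r)][1]

def Verify(m: int, sigma: int) -> bool:
    y = Eval(sigma)
    omega = y >> (k - k1)
    rstar = (y >> (k - k1 - k0)) & ((1 << k0) - 1)
    gamma = y & ((1 << (k - k1 - k0)) - 1)
    fo = F(omega)
    r = (fo >> (k - k1 - k0)) ^ rstar    # recover the randomness
    return omega == H(m, r) and gamma == (fo & ((1 << (k - k1 - k0)) - 1))
```

Note that Sign never consults the inverse permutation, exactly as in \(\mathsf {G}_1\).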
Lemma 11
There exists a distinguisher \(\mathcal {D}_1\) against the lossiness of \(\mathsf {TDP}\) which runs in time \(t = t_{\mathcal {A}} + (q_h+q_s)\cdot T_\mathsf {TDP}\) and satisfies \(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1)\).
Proof
From \(\mathsf {G}_1\) to \(\mathsf {G}_2\), we change the key generation from a normal permutation to a lossy permutation; the oracles, however, are identical in both games. We now build a distinguisher \(\mathcal {D}_1\) against the lossiness of \(\mathsf {TDP}\), using these games. The distinguisher runs \(\mathcal {A}\) and simulates the oracles \(\mathsf {Sign}(\cdot ), \mathsf {H}(\cdot ,\cdot ),\mathsf {F}(\cdot )\) as described in games \(\mathsf {G}_1\) and \(\mathsf {G}_2\), for which it requires time \((q_h+q_s)\cdot T_\mathsf {TDP}\). Note that \(\mathcal {D}_1\) does not require the trapdoor \( td \) to simulate the oracles. After \(\mathcal {A}\) calls \(\mathsf {Finalize}\), \(\mathcal {D}_1\) returns the output of \(\mathsf {Finalize}\). Thus, we can see that \(\Pr [\text {L}_0^{\mathcal {D}_1} \Rightarrow 1] = \Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1]\). Similarly, we have \(\Pr [\text {L}_1^{\mathcal {D}_1} \Rightarrow 1] = \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\). Hence, we have \(\Pr [\mathsf {G}_1^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_1} \Rightarrow 1] - \Pr [\text {L}_0^{\mathcal {D}_1} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1)\). \(\square \)
Lemma 12
\(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = \left( \frac{l-1}{l}\right) \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\).
Proof
In \(\mathsf {G}_3\), we introduce a new rule, which sets BAD to true if the forgery \(\sigma ^*\) provided by \(\mathcal {A}\) is the same as the simulated signature \(\sigma _{(m^*,r)}\) for the target message \(m^*\) and randomness r. If this is the case, the adversary loses the game, i.e., \(\mathsf {G}_3\) outputs 0. \(\sigma _{(m^*,r)}\) is independent of \(\mathcal {A}\)'s view and is uniformly distributed in the set of preimages of \(y_{(m^*,r)}\). Due to the l-regular lossiness of \(\mathsf {TDP}\), the probability of a collision is exactly 1/l. Thus, we see that the BAD rule reduces the probability of the adversary winning the game by 1/l, hence \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = \left( 1 - \frac{1}{l}\right) \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1] = \left( \frac{l-1}{l}\right) \Pr [\mathsf {G}_2^{\mathcal {A}} \Rightarrow 1]\). \(\square \)
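The 1/l collision probability can be checked empirically. The sketch below is our own toy model (an l-to-1 map on integers stands in for a regular lossy permutation; all parameters are assumptions): it draws a uniformly random preimage of \(y\) and counts how often it hits the adversary's value.

```python
import random

rng = random.Random(1)
l, trials = 4, 100_000  # l-regular lossiness; toy domain of size 1000*l

hits = 0
for _ in range(trials):
    sigma_star = rng.randrange(1000 * l)   # the forgery sigma*
    y = sigma_star // l                    # l-regular lossy "evaluation"
    sigma = y * l + rng.randrange(l)       # the reduction's hidden, uniform
                                           # preimage of y
    hits += (sigma == sigma_star)

collision_rate = hits / trials             # should be close to 1/l = 0.25
```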
Lemma 13
There exists a distinguisher \(\mathcal {D}_2\) against the lossiness of \(\mathsf {TDP}\) which runs in time \(t = t_{\mathcal {A}} + (q_h+q_s)\cdot T_\mathsf {TDP}\) and satisfies \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2)\).
Proof
From \(\mathsf {G}_3\) to \(\mathsf {G}_4\), we change the key generation from a lossy permutation back to a normal permutation; the oracles, however, are identical in both games. We now build a distinguisher \(\mathcal {D}_2\) against the lossiness of \(\mathsf {TDP}\), using these games. The distinguisher acts as the challenger to \(\mathcal {A}\). It simulates the oracles as described in games \(\mathsf {G}_3\) and \(\mathsf {G}_4\), for which it requires time \((q_h+q_s)\cdot T_\mathsf {TDP}\). After \(\mathcal {A}\) calls \(\mathsf {Finalize}\), \(\mathcal {D}_2\) returns the output of \(\mathsf {Finalize}\). We can see that \(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_0^{\mathcal {D}_2} \Rightarrow 1]\). Similarly, we have \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_2} \Rightarrow 1]\). Hence, we have \(\Pr [\mathsf {G}_3^{\mathcal {A}} \Rightarrow 1] - \Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = \Pr [\text {L}_1^{\mathcal {D}_2} \Rightarrow 1] - \Pr [\text {L}_0^{\mathcal {D}_2} \Rightarrow 1] = \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2)\). \(\square \)
Lemma 14
\(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = 0\).
Proof
In \(\mathsf {G}_4\), we again use the original \(\mathsf {KeyGen}\), such that \(\mathsf {Eval}( pub ,\cdot )\) defines a permutation. This means that our signing function is now a permutation, and thus any forgery implies a collision. Therefore, whenever the adversary is able to produce a forgery, the game outputs 0 due to the BAD rule; whenever it is unable to produce a forgery, the game also outputs 0. Thus, in all cases the game outputs 0, hence \(\Pr [\mathsf {G}_4^{\mathcal {A}} \Rightarrow 1] = 0\). \(\square \)
We combine Lemmas 10 to 14 to get:
\[ \varepsilon \le \frac{(q_h+q_s+1)^2}{2^{k_1}} + \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_1) + \frac{l}{l-1}\cdot \mathbf{Adv }^{\mathsf {L}}_{\mathsf {TDP}}(\mathcal {D}_2), \]
where l is the lossiness of \(\mathsf {TDP}\). Because both distinguishers run in time at most \(t'\), each can have an advantage of at most \(\varepsilon '\), giving us:
\[ \varepsilon \le \frac{2l-1}{l-1}\cdot \varepsilon ' + \frac{(q_h+q_s+1)^2}{2^{k_1}}. \]
This completes the proof. \(\square \)
PSS with Message Recovery
We now recall the PSS with message recovery (PSSR) scheme, which is a digital signature scheme with message recovery based on PSS.
Definition 8
A digital signature scheme with message recovery is a triple of probabilistic algorithms \(\mathsf {SIG}\hbox {-}\mathsf {R}= (\mathsf {KeyGen}, \mathsf {Sign}, \mathsf {Recover})\), where:

1.
\(\mathsf {KeyGen}\) takes as input the unary representation of the security parameter \(1^k\) and outputs a signing key sk and a verification key pk.

2.
\(\mathsf {Sign}\) takes as input a signing key sk and a message m, and outputs an “enhanced signature” \(\sigma \).

3.
\(\mathsf {Recover}\) is a deterministic algorithm, which on input of a public key and an “enhanced signature” \(\sigma \) outputs either the corresponding message m (accept) or the special symbol \(\bot \) (reject).
We say that \(\mathsf {SIG}\hbox {-}\mathsf {R}\) is correct if for all public key and secret key pairs generated by \(\mathsf {KeyGen}\) and all messages m, we have \(\mathsf {Recover}( pk , \mathsf {Sign}( sk ,m)) = m\).
We describe \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\hbox {-}\mathsf {R}\) in Fig. 12, using the same notation as in Sect. 6. The overhead of a signature scheme with message recovery is defined as the difference in size between the message and the signature. With PSSR, we can sign and recover messages of at most \(n = k - k_0 - k_1\) bits with a k-bit signature. Thus, we have a total overhead of \(k_0 + k_1\) bits.
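As a quick sanity check of the sizes above (the helper and the concrete 2048-bit figures are our own example, not from the text):

```python
def pssr_sizes(k: int, k0: int, k1: int):
    """Capacity and overhead of TDP-PSS-R for a k-bit signature:
    n = k - k0 - k1 recoverable message bits, overhead k0 + k1 bits."""
    n = k - k0 - k1
    return n, k0 + k1

# e.g. a hypothetical 2048-bit instantiation with k0 = 0 (as permitted by
# Theorem 7) and k1 = 128 recovers 1920 message bits per signature.
capacity, overhead = pssr_sizes(2048, 0, 128)
```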
Theorem 7
Assume \(\mathsf {TDP}= (\mathsf {Gen}, \mathsf {Eval}, \mathsf {Invert})\) is a regular \((l, t',\varepsilon ')\)-lossy trapdoor permutation for \(l \ge 2\). Then, for any \((q_h,q_s,k_0,k_1)\), \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\hbox {-}\mathsf {R}[k_0,k_1]\) is \((t,\varepsilon ,q_h,q_s)\)-\(\mathsf {UF}\hbox {-}\mathsf {CMA}\) secure in the Random Oracle Model, where
and \(T_\mathsf {TDP}\) is the time to evaluate \(\mathsf {TDP}\).
The proof of security for PSSR is similar to that of PSS, and hence, we do not present it.
We now discuss the choice of parameters \(k_0,k_1\). First, we notice that the security of \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\hbox {-}\mathsf {R}\) does not depend at all on the size of the randomness \(k_0\). Hence, we can simply pick \(k_0 = 0\). We note that this does not contradict the result of Coron [14], which states that if we use \(k_0 = 0\), then our reduction must lose a factor of \(q_s\). This is due to the fact that this result only holds if \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\) is instantiated with a certified trapdoor permutation.
For the choice of \(k_1\), we need the notion of time-to-success ratios, sometimes also referred to as the “work factor”, whose definition we now recall.
Definition 9
The work factor \(\mathsf {WF}\) of an adversary \(\mathcal {A}\) with success probability \(\varepsilon _\mathcal {A}\) and running time \(t_\mathcal {A}\) is given by \(\mathsf {WF}(\mathcal {A}) = t_\mathcal {A}/\varepsilon _\mathcal {A}\).
We say that a scheme achieves \(\kappa \)-bit security if for all adversaries \(\mathcal {A}\) we have \(\mathsf {WF}(\mathcal {A}) \ge 2^{\kappa }\); that is to say, the adversary must perform at least \(2^{\kappa }\) expected operations to win the security game. We remark that if we assume a problem is \((t,\varepsilon )\)-hard with \(t/\varepsilon \ge 2^{\kappa }\), then for all adversaries \(\mathcal {A}\) against the problem we have \(\mathsf {WF}(\mathcal {A}) \ge 2^{\kappa }\).
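The work-factor definition can be stated as a small helper (the function names and the concrete numbers are our own illustration):

```python
def work_factor(t: float, eps: float) -> float:
    """Time-to-success ratio WF = t / eps."""
    return t / eps

def has_kappa_bit_security(t: float, eps: float, kappa: int) -> bool:
    """WF >= 2^kappa: the adversary needs ~2^kappa expected operations."""
    return work_factor(t, eps) >= 2 ** kappa

# e.g. an adversary running 2^40 steps with success probability 2^-50
# has work factor 2^90, consistent with 80-bit security.
```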
The condition \(\mathsf {WF}\ge 2^{\kappa }\) determines our choice of parameters for a given security level \(\kappa \). Using this and the formulae from Theorem 7, we can compute the value of \(k_1\) such that \(\mathsf {TDP}\hbox {-}\mathsf {PSS}\) has \(\kappa \)-bit security. Using the values from Theorem 7, we get:
If we rearrange this expression, we see that in fact we need:
The first requirement is satisfied by our assumption that we have a regular \((l,t',\varepsilon ')\)-lossy trapdoor permutation, which means \(t'/\varepsilon ' \ge 2^{\kappa }\). If we look at the second inequality, we see that we require \(k_1 \ge \kappa + 2 \log (q_h+q_s) - \log t\). However, we also have \(t \ge q_s + q_h\), hence giving us \(k_1 \ge \kappa + \log (q_h+q_s)\).
We can perform similar calculations for the proofs of Bellare–Rogaway and Coron, but we omit them here. We compute concrete figures for the overhead imposed by each security proof and present them in Table 1, taking \(\kappa = 80, q_h = 2^{80}, q_s = 2^{30}\).
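For instance, the bound \(k_1 \ge \kappa + \log (q_h+q_s)\) derived above can be evaluated for these parameters (the helper is our own; logarithms are base 2 and rounded up via integer arithmetic):

```python
def min_k1(kappa: int, q_h: int, q_s: int) -> int:
    """Smallest k1 satisfying k1 >= kappa + log2(q_h + q_s)."""
    ceil_log = (q_h + q_s - 1).bit_length()  # ceil(log2(q_h + q_s))
    return kappa + ceil_log

# With kappa = 80, q_h = 2^80, q_s = 2^30 as in the text:
k1 = min_k1(80, 2 ** 80, 2 ** 30)  # -> 161
```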
Notes
 1.
A signature scheme has unique signatures if for each message there exists exactly one signature that verifies w.r.t. a given (honestly generated) public key.
 2.
Such restricted reductions were called key-preserving reductions in [27].
 3.
We stress that our tight proof technically does not give a counterexample to Coron's impossibility result, since our reduction is from the \(\varPhi \)-Hiding Assumption, not the RSA assumption. However, as a corollary, the impossibility result would exclude any (even non-tight) equivalence between the \(\varPhi \)-Hiding and RSA assumptions.
 4.
Here, we do not count tight security proofs from “tautological assumptions”, which essentially assume that the signature scheme is secure.
 5.
We deviate in two ways from the original definition of lossy trapdoor functions by Peikert and Waters [28]. First, we define the permutation over arbitrary domains \(\mathsf {Dom}\), rather than \(\{0,1\}^k\); second, we measure the absolute lossiness l, rather than the bits of lossiness \(\ell =\log _2(l)\).
 6.
More formally, \(\mathcal {R}\) is run again with the same random tape. Hence, up to this point \(\mathcal {R}\) behaves exactly the same as in its first execution.
References
 1.
M. Abe, J. Groth, M. Ohkubo, Separating short structure-preserving signatures from non-interactive assumptions, in Advances in Cryptology—ASIACRYPT 2011. Lecture Notes in Computer Science (Springer, Berlin, 2011), pp. 628–646
 2.
M. Bellare, C. Namprempre, D. Pointcheval, M. Semanko, The one-more-RSA-inversion problems and the security of Chaum’s blind signature scheme. J. Cryptol. 16(3), 185–215 (2003)
 3.
M. Bellare, P. Rogaway, Random oracles are practical: a paradigm for designing efficient protocols, in V. Ashby (ed.) ACM CCS 93: 1st Conference on Computer and Communications Security (ACM Press, New York, 1993), pp. 62–73
 4.
M. Bellare, P. Rogaway, The exact security of digital signatures: how to sign with RSA and Rabin, in U.M. Maurer (ed) Advances in Cryptology—EUROCRYPT’96. Lecture Notes in Computer Science, vol. 1070 (Springer, Berlin, 1996), pp. 399–416
 5.
M. Bellare, P. Rogaway, The security of triple encryption and a framework for code-based game-playing proofs, in S. Vaudenay (ed) Advances in Cryptology—EUROCRYPT 2006. Lecture Notes in Computer Science, vol. 4004 (Springer, Berlin, 2006), pp. 409–426
 6.
M. Bellare, M. Yung, Certifying permutations: non-interactive zero-knowledge based on any trapdoor permutation. J. Cryptol. 9(3), 149–166 (1996)
 7.
D.J. Bernstein, Proving tight security for Rabin–Williams signatures. In N.P. Smart (ed) Advances in Cryptology—EUROCRYPT 2008. Lecture Notes in Computer Science, vol. 4965 (Springer, Berlin, 2008), pp. 70–87
 8.
D. Boneh, B. Lynn, H. Shacham, Short signatures from the Weil pairing, in C. Boyd (ed) Advances in Cryptology—ASIACRYPT 2001. Lecture Notes in Computer Science, vol. 2248 (Springer, Berlin, 2001), pp. 514–532
 9.
C. Cachin. Efficient private bidding and auctions with an oblivious third party, in ACM CCS 99: 6th Conference on Computer and Communications Security (ACM Press, New York, 1999), pp. 120–127
 10.
C. Cachin, S. Micali, M. Stadler, Computationally private information retrieval with polylogarithmic communication, in J. Stern (ed) Advances in Cryptology—EUROCRYPT’99. Lecture Notes in Computer Science, vol. 1592 (Springer, Berlin, 1999), pp. 402–414
 11.
D. Coppersmith, Finding a small root of a univariate modular equation, in U.M. Maurer (ed) Advances in Cryptology—EUROCRYPT’96. Lecture Notes in Computer Science, vol. 1070 (Springer, Berlin, 1996), pp. 155–165
 12.
J.S. Coron, On the exact security of full domain hash, in M. Bellare (ed) Advances in Cryptology—CRYPTO 2000. Lecture Notes in Computer Science, vol. 1880 (Springer, Berlin, 2000), pp. 229–235
 13.
J.S. Coron, Optimal security proofs for PSS and other signature schemes. Cryptology ePrint Archive, Report 2001/062, 2001. http://eprint.iacr.org/2001/062.
 14.
J.S. Coron, Optimal security proofs for PSS and other signature schemes, in L.R. Knudsen (ed) Advances in Cryptology—EUROCRYPT 2002. Lecture Notes in Computer Science, vol. 2332 (Springer, Berlin, 2002), pp. 272–287
 15.
C. Gentry, P.D. Mackenzie, Z. Ramzan, Password authenticated key exchange using hidden smooth subgroups, in V. Atluri, C. Meadows, A. Juels (eds) ACM CCS Conference on Computer and Communications Security (ACM Press, New York, 2005), pp. 299–309
 16.
C. Gentry, C. Peikert, V. Vaikuntanathan, Trapdoors for hard lattices and new cryptographic constructions, in R.E. Ladner, C. Dwork (eds) 40th Annual ACM Symposium on Theory of Computing (ACM Press, New York 2008), pp. 197–206
 17.
C. Gentry, Z. Ramzan, Single-database private information retrieval with constant communication rate, in L. Caires, G.F. Italiano, L. Monteiro, C. Palamidessi, M. Yung (eds) ICALP 2005: 32nd International Colloquium on Automata, Languages and Programming. Lecture Notes in Computer Science, vol. 3580 (Springer, Berlin, 2005), pp. 803–815
 18.
E.J. Goh, S. Jarecki, J. Katz, N. Wang, Efficient signature schemes with tight reductions to the Diffie–Hellman problems. J. Cryptol. 20(4), 493–514 (2007)
 19.
B. Hemenway, R. Ostrovsky, Public-key locally-decodable codes, in D. Wagner (ed) Advances in Cryptology—CRYPTO 2008. Lecture Notes in Computer Science, vol. 5157 (Springer, Berlin, 2008), pp. 126–143
 20.
IEEE P1363a Committee. IEEE P1363a / D9 — standard specifications for public key cryptography: Additional techniques. http://grouper.ieee.org/groups/1363/index.html/, June 2001. Draft Version 9.
 21.
S.A. Kakvi, E. Kiltz, A. May, Certifying RSA, in Advances in Cryptology—ASIACRYPT 2012. Lecture Notes in Computer Science (Springer, Berlin, 2012), pp. 404–414
 22.
E. Kiltz, A. O’Neill, A. Smith, Instantiability of RSA-OAEP under chosen-plaintext attack, in T. Rabin (ed) Advances in Cryptology—CRYPTO 2010. Lecture Notes in Computer Science, vol. 6223 (Springer, Berlin, 2010), pp. 295–313
 23.
N. Koblitz, A. Menezes, Another look at security definitions. Cryptology ePrint Archive, Report 2011/343, 2011. http://eprint.iacr.org/2011/343.
 24.
N. Koblitz, A.J. Menezes, Another look at “provable security”. J. Cryptol. 20(1), 3–37 (2007)
 25.
A. Lysyanskaya, S. Micali, L. Reyzin, H. Shacham, Sequential aggregate signatures from trapdoor permutations, in C. Cachin, J. Camenisch (eds) Advances in Cryptology—EUROCRYPT 2004. Lecture Notes in Computer Science, vol. 3027 (Springer, Berlin, 2004), pp. 74–90
 26.
S. Micali, Computationally sound proofs. SIAM J. Comput. 30(4), 1253–1298 (2000)
 27.
P. Paillier, J.L. Villar, Trading one-wayness against chosen-ciphertext security in factoring-based encryption, in X. Lai, K. Chen (eds) Advances in Cryptology—ASIACRYPT 2006. Lecture Notes in Computer Science, vol. 4284 (Springer, Berlin, 2006), pp. 252–266
 28.
C. Peikert, B. Waters, Lossy trapdoor functions and their applications, in R.E. Ladner, C. Dwork (eds) 40th Annual ACM Symposium on Theory of Computing (ACM Press, New York, 2008), pp. 187–196
 29.
PKCS #1: RSA cryptography standard. RSA Data Security, Inc., Sept. 1998. Version 2.0
 30.
C. Schridde, B. Freisleben, On the validity of the phi-hiding assumption in cryptographic protocols, in J. Pieprzyk (ed) Advances in Cryptology—ASIACRYPT 2008. Lecture Notes in Computer Science, vol. 5350 (Springer, Berlin, 2008), pp. 344–354
 31.
N. Smart, ECRYPT II yearly report on algorithms and keysizes (2009–2010) (2010)
Acknowledgements
We thank Mihir Bellare, Dennis Hofheinz, and Bertram Poettering for valuable comments on an earlier draft. We also thank the Journal of Cryptology reviewers for their insightful comments. This work was done while Saqib A. Kakvi was employed at Ruhr-University Bochum. The authors were supported by a Sofja Kovalevskaja Award of the Alexander von Humboldt Foundation, funded by the German Federal Ministry for Education and Research.
Communicated by Rogaway.
Kakvi, S.A., Kiltz, E. Optimal Security Proofs for Full Domain Hash, Revisited. J Cryptol 31, 276–306 (2018). https://doi.org/10.1007/s00145-017-9257-9
Keywords
 Digital signatures
 Full domain hash
 Lossiness
 Security reduction