Digital signatures [6] are one of the central primitives of modern cryptography, both as an application and as a building block for other primitives. A signature scheme consists of three algorithms: a setup algorithm that outputs a secret signing key and a public verification key, a signing algorithm that computes signatures on messages using the signing key, and a verification algorithm that uses the verification key to check whether a signature on a message is valid. Clearly, the setup algorithm must be probabilistic. The signing algorithm can be made deterministic by using a pseudorandom function (PRF): a PRF key is included as part of the secret key, and to sign a message, this PRF key is used to generate the randomness for the signing algorithm. However, such a transformation cannot be applied to the verification algorithm, since verification is public.
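The PRF-based derandomization of signing described above can be sketched as follows. This is a minimal illustrative sketch, not a real scheme: the names, the HMAC-as-PRF instantiation, and the toy "signature" are our own assumptions; only the derandomization pattern is the point.

```python
import hashlib
import hmac
import os

def setup():
    """Probabilistic setup: returns (secret key, verification key).

    The PRF key is folded into the secret key, as described above.
    The verification key here is a placeholder.
    """
    sk = os.urandom(32)               # underlying signing key
    prf_key = os.urandom(32)          # PRF key, part of the secret key
    vk = hashlib.sha256(sk).digest()  # toy public key
    return (sk, prf_key), vk

def sign(secret, message: bytes) -> bytes:
    sk, prf_key = secret
    # Derive the signing randomness from the message with the PRF, so
    # signing the same message twice yields the same signature.
    r = hmac.new(prf_key, message, hashlib.sha256).digest()
    # Toy "signature" computed with the derived randomness r.
    return hmac.new(sk, message + r, hashlib.sha256).digest()
```

Because the randomness is derived deterministically from the message, the resulting signing algorithm is deterministic while remaining indistinguishable from the randomized one under the PRF assumption.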
Indeed, as noted by [8], “no generic transformation outside the random oracle model is known that makes the verification deterministic”. As a result, when describing or using signatures, the verification algorithm is often implicitly or explicitly assumed to be deterministic. This is especially true for works which use signatures as part of a larger protocol. Moreover, in certain applications [8, 9], it is important for the verification algorithm to be deterministic. This assumption regarding the determinism of the verification algorithm is often justified by the fact that most existing schemes have deterministic verification.
At the same time, there are circumstances where a signature construction with randomized verification naturally arises. The most popular one is the transformation from identity based encryption (IBE) schemes to digital signatures suggested by Naor (sketched in [2]). This transformation, popularly referred to as the Naor transformation, can be described as follows. The signing key consists of the IBE master secret key and the verification key is the IBE public key. To sign a message m, the signing algorithm generates an IBE secret key corresponding to the identity m. This signature can be verified by first choosing a random message x from the IBE message space, encrypting x for identity m, and checking if the secret key decrypts this ciphertext correctly. We would like to point out that the random coins used by the verification algorithm consist of the IBE message x and the randomness required to encrypt x.
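The shape of the Naor transformation can be sketched over a hypothetical IBE interface. The toy IBE below (tuples standing in for keys and ciphertexts) is entirely our own illustrative assumption; only the structure of the transformation, and the fact that verification is randomized, is the point.

```python
import os
import random

# --- A toy stand-in for an IBE scheme (not a real construction). ---
def toy_ibe_setup():
    msk = os.urandom(16)              # master secret key
    mpk = b"public-params"            # public parameters
    return msk, mpk

def toy_ibe_keygen(msk, identity):
    return ("key", identity)          # toy secret key for `identity`

def toy_ibe_encrypt(mpk, identity, msg):
    return ("ct", identity, msg)      # toy ciphertext for `identity`

def toy_ibe_decrypt(sk_id, ct):
    _, identity = sk_id
    _, ct_id, msg = ct
    return msg if identity == ct_id else None

# --- Naor transformation: signatures from IBE. ---
def sig_setup():
    msk, mpk = toy_ibe_setup()
    return msk, mpk                   # signing key = msk, verification key = mpk

def sig_sign(msk, m):
    return toy_ibe_keygen(msk, m)     # signature = IBE key for identity m

def sig_verify(mpk, m, sigma, rng=random):
    # Randomized verification: the coins are the random plaintext x
    # (plus the encryption randomness, omitted in this toy sketch).
    x = rng.randrange(2**32)
    ct = toy_ibe_encrypt(mpk, m, x)
    return toy_ibe_decrypt(sigma, ct) == x
```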
The signature schemes derived from IBE schemes have some nice properties (for instance, shorter signatures) compared to traditional signature schemes. Fortunately, for many IBE schemes, there are naturally derived signature schemes where the verification is deterministic. Boneh, Lynn and Shacham showed how to modify the Boneh-Franklin IBE scheme to get a signature scheme with deterministic verification [3]. Waters, in [10], gave an IBE scheme in the standard model, and then showed how it can be modified to get a signature scheme with deterministic verification. However, such counterparts with deterministic verification do not exist for all IBE schemes. For example, the dual system based Waters IBE scheme [11] did not yield an immediate signature with deterministic verification, and the generic Naor transformation was the only known approach. More generally, in the future, there could be signature schemes with desirable properties whose verification algorithm is randomized.
Our Goals and Contributions. With this motivation, we explore a broad notion of signature schemes with randomized verification. We note that a few different works have considered randomized verification with varying degrees of formality (e.g., [7, 8, 11]). The main contributions of our work can be summarized as follows. First, we propose a formal security definition for signature schemes with randomized verification. This security notion, called \(\chi \)-Existential Unforgeability under Chosen Message Attack (\(\chi \)-EUFCMA) security, captures a broad class of signature schemes with randomized verification. Next, we give a formal proof of Naor’s IBE-to-signatures transformation. The proof of the Naor transformation also illustrates the importance of our proposed security definition. Third, we show how to amplify the security of a \(\chi \)-EUFCMA secure scheme. Finally, we show generic transformations from a signature scheme with randomized verification to one with deterministic verification. Each of these contributions is described in more detail below.
Defining a broad notion of security for signature schemes with randomized verification. Our first goal is to define a broad notion of security for signature schemes with randomized verification. Intuitively, a signature scheme is said to be secure if no polynomial time adversary can produce a forgery that gets verified with non-negligible probability. For deterministic verification, this can be easily captured via a security game [6] between a challenger and an adversary, where the adversary can query for polynomially many signatures, and eventually, must output a forgery \(\sigma ^{*}\) on a fresh (unqueried) message \(m^{*}\). The adversary wins if the verification algorithm accepts \(\sigma ^{*}\) as a signature on message \(m^{*}\), and we want that no polynomial time adversary wins with non-negligible probability.
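The security game described above can be sketched as a small harness. The scheme interface (`setup`/`sign`/`verify`) and the adversary's calling convention are our own illustrative assumptions, not part of the formal definition.

```python
# Sketch of the EUF-CMA game: the challenger runs setup, answers
# signature queries, and checks whether the final forgery is on a
# fresh message and verifies.
def eufcma_game(scheme, adversary):
    sk, vk = scheme.setup()
    queried = set()

    def sign_oracle(m):
        queried.add(m)
        return scheme.sign(sk, m)

    m_star, sigma_star = adversary(vk, sign_oracle)
    # The adversary wins iff m* is fresh and sigma* verifies.
    return m_star not in queried and scheme.verify(vk, m_star, sigma_star)
```

A scheme is secure if no polynomial time adversary makes this harness return true with non-negligible probability.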
A standard notion of security for randomized verification can be obtained by taking the same game and letting the attacker’s success probability be over the coins of the challenger’s verification algorithm (as well as the other coins in the game). While this is the most natural extension of security to randomized verification, there are signature schemes that do not meet this definition, yet still offer some meaningful notion of security. Looking ahead, signature schemes that arise from the Naor transformation applied to IBE schemes with small message spaces do not satisfy this strong notion, but still provide some type of security that we would like to be able to formally capture.
In order to capture a broad class of signature schemes with randomized verification, we introduce a weaker notion called \(\chi \)-EUFCMA security. In this security game, the adversary receives a verification key from the challenger. It is allowed to make polynomially many signature queries, and must finally output a forgery \(\sigma ^{*}\) on message \(m^{*}\) (not queried during the signature query phase). Informally, a signature scheme is said to be \(\chi \)-EUFCMA secure if for any forgery \(\sigma ^{*}\) on message \(m^{*}\) produced by a polynomially bounded adversary, the fraction of random coins that accept \(\sigma ^{*}\), \(m^{*}\) is at most \(\chi \). Ideally, we would want \(\chi = 0\) (or negligible in the security parameter). However, as we will show below, this is not possible to achieve for certain Naor transformed signature schemes.
Naor Transformation for IBE schemes with small message spaces. For simplicity, let us consider the Naor transformation applied to IBE schemes with a one bit message space (for example, the Cocks IBE scheme [4]). Here, the adversary can send a malformed signature such that the adversary wins the game, but the signature-to-IBE reduction algorithm has zero advantage. In more detail, consider an adversary that, with probability 1/2, outputs a valid IBE key \(\sigma ^*\) for identity \(m^*\), and with probability 1/2, outputs a malformed IBE key \(\sigma ^*\) for \(m^*\) that always outputs the incorrect decryption of the ciphertext. Such an adversary breaks the signature scheme’s security, but a reduction algorithm that uses such a forgery directly to break IBE security has advantage 0. To get around this problem, we use a counting technique inspired by the artificial abort technique introduced by Waters [10]. The main idea behind Waters’ artificial abort was to reduce the dependence between the event in which the adversary wins and the one where the simulation aborts. Here, however, we require a technique that can efficiently test whether the forgery (i.e., the IBE key) is malformed. To this end, the reduction algorithm runs the IBE decryption multiple times, each time on a freshly generated ciphertext, and counts the number of correct decryptions. This helps the reduction estimate whether the key is malformed. If the key is malformed, the reduction algorithm simply aborts; otherwise, it uses the forgery to break IBE security. The analysis is similar to that in [10].
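The counting step used by the reduction can be sketched as follows. The `decrypt`/`encrypt_fresh` callables stand in for the hypothetical IBE algorithms, and the trial count and threshold are illustrative parameters of our own choosing, not values from the analysis.

```python
import random

def estimate_key_quality(decrypt, encrypt_fresh, trials=200, rng=random):
    """Estimate how often a candidate IBE key decrypts correctly.

    Each trial encrypts a fresh random one-bit message and checks
    whether the candidate key decrypts it back.
    """
    correct = 0
    for _ in range(trials):
        x = rng.randrange(2)          # one-bit message space
        ct = encrypt_fresh(x)
        if decrypt(ct) == x:
            correct += 1
    return correct / trials

def reduction_keeps_forgery(decrypt, encrypt_fresh, threshold=0.75):
    # Abort (return False) if the key looks malformed; otherwise the
    # reduction goes on to use the forgery against IBE security.
    return estimate_key_quality(decrypt, encrypt_fresh) >= threshold
```

A key that always decrypts incorrectly (as in the adversary above) is filtered out with overwhelming probability, while a valid key survives, which is exactly what the reduction needs.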
The above issue with small message space IBE schemes was also noted in the work of Cui et al. [5], who sketched a proof for large message spaces. However, even in the case of large message spaces, the issue of the artificial abort needs to be addressed (although, unlike the small message space case, the reduction algorithm does not suffer a loss in its winning advantage).
Amplifying the soundness from \(\chi = 1 - 1/\mathsf {poly}(\cdot )\) to \(\chi = 0\). Next, we show how to amplify the soundness from \(\chi = 1 - 1/\mathsf {poly}(\lambda )\) to \(\chi = 0\). Such a problem closely resembles that of transforming weak one-way functions (OWFs) to strong OWFs. Therefore, the direct product amplification technique [12] can be used: if the verification algorithm is run sufficiently many times, each time with fresh randomness, and the signature is accepted only if all verifications succeed, then we achieve \(\chi = 0\)-EUFCMA security.
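The direct-product amplification above amounts to a simple wrapper around the randomized verifier. The `verify(vk, m, sigma, coins)` interface and the repetition count `k` are illustrative assumptions of ours; the analysis in the paper determines how large the repetition count must actually be.

```python
import random

def amplified_verify(verify, vk, m, sigma, k=64, rng=random):
    """Run the randomized verifier k independent times with fresh coins,
    accepting only if every single run accepts."""
    return all(verify(vk, m, sigma, rng.random()) for _ in range(k))
```

If a forgery is accepted on at most a \(1 - 1/\mathsf{poly}(\lambda)\) fraction of coins, each run rejects it with noticeable probability, so for suitably large k the probability that all k runs accept is negligible.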
Derandomizing the verification algorithm. Finally, we show how to transform any \(\chi = 0\)-EUFCMA secure signature scheme with randomized verification algorithm to a signature scheme with deterministic verification. We show two such transformations, one in the random oracle model (ROM), and another in the standard model. The transformation in the random oracle model is fairly straightforward. Here, the verification algorithm uses a hash function H which will be modeled as a random oracle in the proof. To verify a signature \(\sigma \) on message m, the verification algorithm computes \(r = H(m,\sigma )\) and uses r as the randomness for the verification. To prove security, we first model the hash function H as a random oracle. Suppose there exists an attacker on this modified signature scheme. After the hash function is replaced with a random oracle, the attacker’s forgery can be directly used as a forgery for the original \(\chi = 0\)-EUFCMA secure signature scheme. Here, note that it is crucial that the starting signature scheme with randomized verification is \(\chi = 0\)-EUFCMA secure, else the adversary’s forgery may not be useful for the reduction as it will not translate to a forgery for the original signature scheme.
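The random oracle transformation above is a one-line wrapper: derive the verification coins from \((m, \sigma)\) with a hash. SHA-256 here merely models the random oracle H, and the `verify(vk, m, sigma, coins)` interface is our own illustrative assumption.

```python
import hashlib

def deterministic_verify(verify, vk, m: bytes, sigma: bytes) -> bool:
    # coins = H(m, sigma): the same (m, sigma) pair always yields the
    # same coins, so verification becomes deterministic.
    r = hashlib.sha256(m + b"|" + sigma).digest()
    return verify(vk, m, sigma, r)
```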
In our standard model transformation, the setup algorithm chooses a sufficiently large number of random strings \(r_1, \ldots , r_{\ell }\) and includes them as part of the public verification key. The verification algorithm uses each of these strings as its randomness, and checks if \(m, \sigma \) verify in all cases. If so, it outputs 1, else it outputs 0. For security, suppose the adversary \(\mathcal {A}\) sends a message \(m^{*}\) and a forgery \(\sigma ^{*}\) such that the verification algorithm accepts \(m^{*}, \sigma ^{*}\) when using \(r_1, \ldots , r_{\ell }\) as the random strings. The reduction algorithm simply forwards \(m^{*}, \sigma ^{*}\) as a forgery to the original signature scheme’s challenger. To compute the winning probability of the reduction algorithm, we need to consider the case where the adversary \(\mathcal {A}\) wins but the reduction algorithm does not. This can happen only if the set of ‘bad’ random strings that cause the verification algorithm to accept \(m^{*}, \sigma ^{*}\) is a negligible fraction of all randomness strings (as guaranteed by the \(\chi = 0\)-EUFCMA security of the underlying scheme), and yet all of \(r_1, \ldots , r_{\ell } \) fall in this bad set. We argue that since \(r_1, \ldots , r_{\ell } \) are chosen uniformly at random, and since \(\ell \) is a suitably large polynomial in the security parameter, this probability is negligible in the security parameter.
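The standard model transformation can be sketched as follows. The `base_setup`/`base_verify` callables stand in for the hypothetical underlying scheme with randomized verification, and the number of strings \(\ell\) and their length are illustrative choices of ours.

```python
import os

def derand_setup(base_setup, ell=128):
    """Sample ell random strings at setup time and publish them as part
    of the verification key."""
    sk, vk = base_setup()
    coins = [os.urandom(16) for _ in range(ell)]
    return sk, (vk, coins)

def derand_verify(base_verify, vk_pair, m, sigma):
    # Deterministic: the only "randomness" used is the fixed strings
    # published in the verification key; accept iff all checks pass.
    vk, coins = vk_pair
    return all(base_verify(vk, m, sigma, r) for r in coins)
```

Intuitively, a forgery is accepted only if all \(\ell\) published strings happen to land in its negligible-fraction bad set, which occurs with negligible probability over the choice of the verification key.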
Conclusions. Many times in cryptography and security, a concept is thought to be well understood but has never been rigorously proven or explored. Often, when such an exploration takes place, new twists or facts are discovered. In this case, when we first decided to dig into the IBE-to-signature transformation, we had an initial expectation that there would be a lossless connection from IBE to signatures. We were surprised that our initial attempts did not work out and that an artificial abort type mechanism was required for small message spaces. Besides formally proving the IBE-to-signature transformation, we also show how to derandomize any signature scheme with randomized verification, both in the random oracle model and in the standard model.