1 Introduction

Leakage resilient cryptography. Modern cryptography analyzes the security of cryptographic algorithms in the black-box model. Namely an adversary may view the algorithm’s inputs and outputs, but the secret key as well as all the internal computation remain perfectly hidden. For instance, consider the classic security definition of signature schemes [18] where the adversary is given the verification key and block-box access to the signing algorithm. Still, the adversary cannot obtain (or exploit) any information about the secret state of the signer during its attack.

Unfortunately, the assumption of perfectly hidden keys does not reflect practice, as demonstrated by a large volume of works on side-channel attacks [3, 6, 17, 22–24, 32]: when implementing cryptographic protocols in real-world hardware, some information on the secret key may leak to the adversary. Side-channel attacks do not only allow the adversary to gain partial knowledge of the secret key, thereby making security proofs less meaningful, but in many cases may result in complete security breaches; see, e.g., the attacks on the RSA and AES cryptosystems [22].

In recent years, significant progress has been made within the theory community to incorporate information leakage into the black-box model. To this end, these works develop new models to formally describe the information leakage [2, 28, 36] and design new schemes that can be proven secure therein. This recent line of works (cf. [1, 2, 10, 14–16, 21, 27, 28, 30] and many more) presents leakage resilient cryptographic primitives whose security is proven even in the presence of arbitrary (but somewhat restricted) leakage from the secret key. These works design leakage resilient primitives both in the secret key and the public key settings, including stream ciphers [15, 31], MACs [21] and public key encryption schemes [2, 30]. In this paper, we focus on digital signature schemes; see a broader discussion below.

Leakage modeling. Loosely speaking, the leakage is typically characterized by a leakage function \(h\) that takes as input the secret key \(\mathsf {sk}\) and reveals \(h(\mathsf {sk})\)—the so-called leakage—to the adversary. Of course, we cannot allow \(h\) to be any function as otherwise it may just reveal the complete secret key. Hence, certain restrictions on the class \(\mathcal {H}\) of admissible leakage functions are necessary. With very few exceptions (outlined in the next section), most works assume some form of quantitative restriction on the amount of information leaked to an adversary during the security game.

More formally, in the bounded leakage model [2], it is assumed that \(\mathcal {H}\) is the set of all polynomial-time computable functions \(h: \{0,1\}^{|\mathsf {sk}|} \rightarrow \{0,1\}^\lambda \) with \(\lambda \ll |\mathsf {sk}|\). Namely in the context of signature schemes, the adversary is allowed to specify the description of a leakage function \(h\) as above (that may be chosen based on the verification key) and learns the leakage \(h(\mathsf {sk})\) in addition to any information it is meant to learn during the security game (such as the verification key and valid signatures).

This restriction can be weakened in several ways. For instance, instead of requiring a concrete bound \(\lambda \) on the amount of leakage, it often suffices that given the leakage \(h(\mathsf {sk})\), the secret key still has a “sufficient” amount of min-entropy left [13, 15, 30, 31]. This so-called noisy leakage models real-world leakage functions more accurately, as now the leakage can be arbitrarily large. Indeed, real-world measurements of physical phenomena are usually described by several megabytes or even gigabytes of information rather than by a few bits.

1.1 The Auxiliary Input Model

While security against bounded or noisy leakage often provides a first good indication for the security of a cryptographic implementation, in practice leakage typically information theoretically determines the entire secret key [37]. Namely the only difficulty of a side-channel adversary lies in extracting the relevant key information efficiently. Formally, this can be modeled by assuming that \(\mathcal {H}\) is the set of all polynomial-time computable functions such that given \(h(\mathsf {sk})\), it is still “hard” to compute \(\mathsf {sk}\). Such hard-to-invert leakage is a very natural generalization of both the bounded leakage model and the noisy leakage model and is the focus of this work. More concretely, we consider two classes of hard-to-invert leakage functions:

  1. A function \(h\) of the secret key \(\mathsf {sk}\) is polynomially hard-to-invert auxiliary information if there exists a negligible function \(\mathsf {negl}\) such that for sufficiently large \(k=|\mathsf {sk}|\), any polynomial-time adversary will succeed with probability at most \(\mathsf {negl}(k)\) in inverting \(h(\mathsf {sk})\).

  2. A function \(h\) of the secret key \(\mathsf {sk}\) is exponentially hard-to-invert auxiliary information if there exists a constant \(c>0\) such that for sufficiently large \(k=|\mathsf {sk}|\), any polynomial-time adversary \(\mathcal {A}\) will succeed with probability at most \(2^{-ck}\) in inverting \(h(\mathsf {sk})\). Notice that the result gets stronger, and the class of admissible leakage functions gets larger, the smaller \(c\) is.
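To make the inversion game concrete, the following toy Python harness estimates an adversary's success probability at recovering \(\mathsf {sk}\) from \(h(\mathsf {sk})\) alone. It is illustrative only; `gen`, `leak` and the brute-force baseline adversary are hypothetical choices of ours, not part of the formal model.

```python
import hashlib
import secrets

def gen(k=16):
    # sample a k-bit secret key
    return secrets.randbits(k)

def leak(sk):
    # toy leakage function h: a hash of sk (only meaningfully hard
    # to invert for much larger k than the toy value used here)
    return hashlib.sha256(str(sk).encode()).hexdigest()

def guessing_adversary(leakage, k=16):
    # baseline inverter: a uniform guess, succeeding w.p. 2^{-k}
    return secrets.randbits(k)

def inversion_probability(trials=10_000, k=16):
    """Monte Carlo estimate of Pr[A(h(sk)) = sk]; h is admissible for
    the classes above when this is at most negl(k), resp. 2^{-ck}."""
    wins = 0
    for _ in range(trials):
        sk = gen(k)
        if guessing_adversary(leak(sk), k) == sk:
            wins += 1
    return wins / trials
```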

The auxiliary input model, introduced by Dodis et al. [14], captures the security of cryptographic schemes in the presence of computationally hard-to-invert leakage. They proposed constructions for secret key encryption with IND-CPA and IND-CCA security against an adversary who obtains an arbitrary polynomial-time computable hard-to-invert leakage \(h(\mathsf {sk})\). Security is shown to hold under a non-standard variant of the learning parity with noise (LPN) assumption with respect to any exponentially hard-to-invert function. In a follow-up paper, and most relevant for our work, Dodis et al. [11] study the setting of public key encryption. They show that the BHHO encryption scheme [5] based on the Decisional Diffie-Hellman (DDH) hardness assumption and variants of the GPV encryption scheme [19] based on the learning with errors (LWE) hardness assumption are secure with respect to auxiliary input leakage. All their schemes remain secure under sub-exponentially hard-to-invert leakage. As discussed in their work, some important subtleties arise in the public key setting which are also important for our work.

  1. We shall allow the leakage to depend also on the corresponding public key \(\mathsf {pk}\). One approach to model this is to let the adversary adaptively choose the leakage function after seeing the public key \(\mathsf {pk}\) [2]. An alternative, taken in the work of Dodis et al. [11], assumes admissible leakage functions \(h:\{0,1\}^{|\mathsf {sk}| + |\mathsf {pk}|} \rightarrow \{0,1\}^{*}\), where it is hard to compute \(\mathsf {sk}\) given \(h(\mathsf {pk},\mathsf {sk})\).

  2. The public key itself may leak information about the secret key, which may make the scheme insecure if the adversary also obtains additional auxiliary input leakage about the secret key. For instance, consider the setting where the public key \(\mathsf {pk}\) contains the first \(k/2\) bits of the secret key. In this case, there is no hope to prove security with respect to \(2^{-k/2}\)-hard-to-invert leakage functions. Hence, the definition of the set of admissible leakage functions needs to take into account also the information that is revealed by the public key. To handle this issue, Dodis et al. [11] proposed a natural notion of auxiliary input security, which says that a leakage function is admissible if it is hard to compute the secret key even when given the auxiliary input leakage together with the public key (we point out that this notion is called “weak” in [11] since the class of admissible leakage functions now becomes smaller). A more detailed discussion on this issue can be found in [11].

Following Dodis et al. [11], in this paper, we will usually first prove security in the weaker setting where we consider only leakage functions that are hard to invert given also the public key. As shown in [11], when the public key is short, this weaker notion of auxiliary input security implies security for functions \(h\) solely under the assumption that given \(h(\mathsf {pk},\mathsf {sk})\), it is computationally hard to compute \(\mathsf {sk}\) (i.e., without defining hardness with respect to \(\mathsf {pk}\)). The underlying idea is that the public key can be guessed within the proof, which implies that the hardness assumption gets stronger when applying this proof technique (in particular, such a guessing strategy always results in an exponential loss in the hardness assumption).

Notice that the distinction between the weak and strong model only affects the size of the set of admissible leakage functions. More precisely, as in the traditional “non-leaky” setting, the adversary is allowed to always see the public key as well when it attacks the signature scheme. The distinction between the weak and strong model is that in the weak model, we assume that the additional auxiliary input leakage it obtains is even hard to invert when given that public key, while in the strong (and desired) model, we only assume that the function is hard to invert without considering the public key. Notice that in the latter setting, the set of admissible leakage functions can be much larger, and hence, the result becomes stronger.

While in general we aim for the stronger notion of auxiliary input security, we further note that as outlined in [11], the weaker notion already suffices for composition of different cryptographic schemes using the same public key. For instance, consider an encryption and signature scheme sharing the same public key. If the encryption scheme is weakly secure with respect to any polynomially hard-to-invert leakage function, then the scheme remains secure even if the adversary sees arbitrary signatures—as these signatures can be viewed as polynomially hard-to-invert leakage.

More recently, Brakerski and Goldwasser [4] and Brakerski and Segev [7] proposed further constructions of public key encryption secure against auxiliary input leakage. In the former, the authors show how to construct a public key encryption scheme secure against sub-exponentially hard-to-invert leakage, based on the Quadratic Residuosity (QR) and Decisional Composite Residuosity (DCR) hardness assumptions. In the latter, the concept of security against auxiliary input has been introduced in the context of deterministic public key encryption, and several secure constructions were proposed based on DDH and subgroup indistinguishability assumptions. Finally, a more recent work by Yuen et al. [38] presents the first identity-based encryption scheme with security in the presence of continual auxiliary input leakage, by applying a modified Goldreich–Levin theorem. Their security model allows leakage from both the master secret key and the identity-based secret keys.

1.2 Our Contributions

In this work, we will analyze the security of digital signature schemes in the presence of computationally hard-to-invert leakage. We show, somewhat surprisingly, that simple variants of constructions for the bounded and noisy leakage settings also achieve security with respect to the more general class of hard-to-invert leakage. We stress that our work is theoretical in nature, and it is unclear to what extent it would offer any protection against real-world side-channel attacks.

Despite significant progress on constructing encryption schemes in the auxiliary input model, the question of whether digital signature schemes can be built with security against hard-to-invert leakage has remained open so far. This is somewhat surprising as a large number of constructions for the bounded and noisy leakage setting are known [1, 8, 12, 13, 26, 29]. In this paper, we close this gap and propose the first constructions for digital signature schemes with security in the auxiliary input model. As a first contribution of our work, we propose new security notions that are attainable in the presence of hard-to-invert leakage. We then show that certain constructions that have been proven to be secure when the amount of leakage is bounded also achieve security in the presence of hard-to-invert leakage. In a nutshell, our results can be summarized as follows:

  1. As we explain below, existential unforgeability is unattainable in the presence of polynomially hard-to-invert leakage. We thus weaken the security notion by focusing on the setting where the challenge message is chosen uniformly at random. Our construction uses ideas from [29] to achieve security against polynomially hard-to-invert leakage when, prior to the challenge message, the adversary has only seen signatures for random messages. Such schemes can straightforwardly be used to construct identification schemes with security against any polynomially hard-to-invert leakage (cf. Sects. 3.2, 4).

  2. Next, we show that the generic constructions proposed in [8, 13, 26] achieve the strongest notion of security, namely existential unforgeability under chosen message attacks, if we restrict the adversary to obtain only exponentially hard-to-invert leakage. As basic ingredients, these schemes use a family of second preimage resistant hash functions, an IND-CCA secure public key encryption scheme with labels and a reusable non-interactive zero-knowledge argument (NIZK) system. For our result to be meaningful, we require both the decryption key of the underlying encryption scheme and the simulation trapdoor of the NIZK to be short when compared to the length of the signing key for the signature scheme (cf. Sect. 3.3).

  3. We show an instantiation of this generic transformation that satisfies our requirements on the length of the keys, based on the 2-linear hardness assumption in pairing-based groups, using the Groth–Sahai proof system [20] (cf. Sect. 5).

We elaborate on these results in more detail below.

Polynomially hard-to-invert leakage and random challenges. Importantly, as hinted above, security with respect to polynomially hard-to-invert leakage is impossible if the message for which the adversary needs to output a forgery is fixed at the time the leakage function is chosen. This is certainly the case for the standard security notion of existential unforgeability. One potential weakening of the security definition is to require the adversary to forge a signature on a random challenge message. When the challenge message is sampled uniformly at random, even though the leakage may reveal signatures for some messages, it is very unlikely that any of them is a signature on the challenge message.

Specifically, inspired by the work of Malkin et al. [29], we propose a construction that guarantees security in the presence of any polynomially hard-to-invert leakage, when the challenge message is chosen uniformly at random. The scheme uses the message as the CRS for a non-interactive zero-knowledge proof of knowledge \((\mathsf {NIZKPoK})\). To sign, we use the CRS to prove knowledge of \(\mathsf {sk}\) such that the verification key satisfies \(\mathsf {vk}= H(\mathsf {sk})\), where \(H\) is a second preimage resistant hash function. Therefore, if an adversary forges a signature given \(\mathsf {vk}\) and the leakage \(h(\mathsf {vk},\mathsf {sk})\) with non-negligible probability, we can use this forgery to extract a preimage of \(\mathsf {vk}\), which either contradicts the second preimage resistance of \(H\) or the assumption that \(h\) is polynomially hard to invert. An obvious drawback of this scheme is that prior to outputting a forgery for the challenge message, the adversary only sees signatures on random messages. Finally, as a natural application of such schemes, we show that auxiliary input security for signatures carries over to passive auxiliary input security of identification schemes. Hence, our scheme can be readily used to build simple identification schemes with security against any polynomially hard-to-invert leakage function.

Exponentially hard-to-invert leakage and existential unforgeability. The standard security notion for signature schemes is existential unforgeability under adaptive chosen-message attacks [18]. Here, one requires that an adversary cannot forge a signature of any message \(m\), even when given access to a signing oracle. We strengthen this notion and additionally give the adversary leakage \(h(\mathsf {vk},\mathsf {sk})\), where \(h\) is some admissible function from class \(\mathcal {H}\). It is easy to verify that no signature scheme can satisfy this security notion when the only assumption that is made about \(h \in \mathcal {H}\) is that it is polynomially hard to compute \(\mathsf {sk}\) given \(h(\mathsf {vk},\mathsf {sk})\). The reason for this is as follows. Since the secret key must be polynomially hard to compute even given some set of signatures (and the public key), a signature is an admissible leakage function with respect to \(\mathcal {H}\). Hence, a forgery itself is a valid leakage, and an adversary can simply output its leakage as a forgery. This observation holds even in the weaker model where we define the hardness of \(h\) with respect to the public key as well.

Our first observation toward constructing signatures with auxiliary input security is that the above issues do not necessarily arise when we consider the more restricted class of functions that maintain (sub-)exponential hardness of inversion. Suppose, for concreteness, that there exist a constant \(1>c>0\) and a probabilistic polynomial-time algorithm that takes as input a signature and the public key and outputs \(\mathsf {sk}\) with probability \(p\), where we assume that \(\mathsf {negl}(k) \ge p \gg 2^{-k^{c}}\) for some negligible function \(\mathsf {negl}(\cdot )\). Then, if we let \(\mathcal {H}\) be the class of functions with hardness at least \(2^{-k^{c}}\), the signing algorithm is not in \(\mathcal {H}\) and hence the artificial counterexample from above does not work anymore! We instantiate this idea by adding an encryption \(C = \mathsf {Enc}_{\mathsf {ek}}(\mathsf {sk})\) of the signing key \(\mathsf {sk}\) to each signature. The encryption key \(\mathsf {ek}\) is part of the verification key of the signature scheme, but the decryption key \(\mathsf {dk}\) associated with \(\mathsf {ek}\) is not part of the signing key. However, we set up the scheme such that \(\mathsf {dk}\) can be guessed with probability \(p\). Interestingly, it turns out that recent constructions of leakage resilient signatures [8, 13, 26], which originally were designed to protect against bounded leakage, use as part of the signature an encryption of the secret key. This enables us to prove that these schemes also enjoy security against the larger class of exponentially hard-to-invert leakages and hence provides a strengthening of the security proofs given in the bounded leakage model of [8, 13, 26].

One may object that artificially adding an encryption of the secret key to the signature is somewhat counterintuitive, as it seems that we obtain provable security by just reducing the security of the signature scheme to a point where signatures are no longer allowed leakage. This is actually not the case. The better way to see the construction is that the encryption forces signatures to be so long that leaking them is at least as hard as leaking the secret key; then, we just have to pick the security parameters such that it is hard enough to guess \(\mathsf {dk}\) and hard enough to leak the secret key (which is the goal for all leakage resilient schemes). To elaborate, notice that all that is needed for security of our scheme is that guessing \(\mathsf {dk}\) is significantly easier than guessing \(\mathsf {sk}\). For a given desirable security level, we can therefore first pick the length of \(\mathsf {dk}\) so as to achieve the desired security level, and after that pick the length of \(\mathsf {sk}\) long enough to get a meaningful leakage bound. For the example above, if we instantiate the scheme with a larger security parameter \(k' = k^{1/c}\), then we can allow \(p\) to be exponentially small, say \(2^{-dk}\) for \(0 < d < 1\). In that case, the signature scheme can still have exponential security. After that we can, for instance, pick \(\vert \mathsf {sk}\vert = 100 \vert \mathsf {dk}\vert \). This ensures that even after leaking \(98\%\) of the bits of the secret key, it is easier to guess \(\mathsf {dk}\) than \(\mathsf {sk}\), and hence our leakage class will in particular include the leakage of \(98\%\) of the secret key. Instantiating any cryptographic primitive in practice involves such considerations of how to set the security parameters. As this is particularly the case for our scheme, we provide a concrete security analysis, which allows one to instantiate our scheme conveniently with any desired security level. Note also that adding trapdoors to cryptographic schemes for what superficially seems to be proof-technical reasons only is common in the field; non-interactive zero-knowledge is another prominent example.
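The parameter arithmetic in the example above can be checked directly; the short sketch below works through it with illustrative numbers of our own choosing:

```python
# Worked parameter sizing for the example above (illustrative numbers).
k = 128                      # target security level in bits
dk_len = k                   # |dk|: guessing dk succeeds w.p. 2^{-128}
sk_len = 100 * dk_len        # |sk| = 100 * |dk|, as suggested above

leaked = int(0.98 * sk_len)  # the adversary leaks 98% of sk's bits
residual = sk_len - leaked   # bits of sk still unknown after the leak

# Guessing dk must stay easier than guessing the remaining part of sk,
# so such a leakage function lies inside the admissible class.
assert residual > dk_len     # 256 > 128
print(f"|dk| = {dk_len}, unknown sk bits after leakage = {residual}")
```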

For readers familiar with the security proof of the Katz–Vaikuntanathan scheme from [26], we note that the crux of our new proof is that in the reduction, we cannot generate a CRS together with its simulation trapdoor, due to a technicality arising in the definition of the admissible class of leakage functions (see more discussion in Sect. 3.1). Instead, to simulate signatures for chosen messages, we will guess the simulation trapdoor. Fortunately, we can show that the loss from guessing the simulation trapdoor only affects the tightness of the reduction to the inversion hardness of the leakage functions. As we use a NIZK proof system with a short simulation trapdoor and only aim for exponentially hard-to-invert leakage functions, we can successfully complete the reduction.

Auxiliary input secure identification schemes. One particular immediate application is non-interactive identification schemes with auxiliary input security. We define two notions of passive security for identification schemes in the presence of auxiliary input and show that auxiliary input secure signature schemes, with random messages and a random challenge, imply these notions. Active security for identification schemes still remains an open question. In particular, known transformations from passive to active security only apply when the underlying building block is a \(\Sigma \)-protocol, and these do not carry over to the auxiliary input setting.

Instantiation under the 2-linear assumption. As a concrete example, we show how to instantiate our generic transformation using the Groth–Sahai proof system based on the 2-linear assumption [20]. This yields security with respect to any \(2^{-6k}\)-hard-to-invert leakage, where \(k\) is the security parameter. If we do not wish to define the hardness with respect to the public key as well, it is possible to guess it and thus lose an additional factor of \(2^{-3k}\) in the hardness assumption. Here, \(k := \log (p)\) for a prime \(p\) that denotes the order of the group for which the 2-linear assumption is hard.

1.3 A Road Map

In Sect. 2, we specify basic security definitions and our modeling of the auxiliary input setting. In Sect. 3, we present our signature schemes for random messages (Sect. 3.2) and for chosen message attack security (Sect. 3.3). In Sect. 4, we show how to use signatures on random messages to construct identification schemes with security against any polynomially hard-to-invert leakage. Finally, in Sect. 5, we show an instantiation of the latter signature scheme under the 2-linear hardness assumption.

2 Preliminaries

Basic notations. We denote the security parameter by \(k\) and write PPT for probabilistic polynomial-time. For a set \(S\), we write \(x\leftarrow S\) to denote that \(x\) is sampled uniformly from \(S\). We write \(y\leftarrow \mathcal {A}(x)\) to indicate that \(y\) is the output of an algorithm \(\mathcal {A}\) when running on input \(x\). We denote by \(\langle a,b\rangle \) the inner product of vectors \(a\) and \(b\) of field elements. We say that a function \(f: \mathbb N\rightarrow {\mathbb {R}}\) is negligible if for every polynomial \(p(\cdot )\), there exists an integer \(n_{p} \in \mathbb N\) such that \(f(n)<1/p(n)\) for every \(n>n_{p}\). Finally, we specify the definition of computational indistinguishability.

Definition 2.1

(Computational indistinguishability by circuits) Let \(X=\) \(\left\{ X_{n}(a)\right\} _{n\in \mathbb N,a\in \{0,1\}^{*}}\) and \(Y=\left\{ Y_{n}(a)\right\} _{n\in \mathbb N,a\in \{0,1\}^{*}}\) be distribution ensembles. We say that \(X\) and \(Y\) are computationally indistinguishable, denoted \(X \approx Y\), if for every family \(\{C_{n}\}_{n\in \mathbb N}\) of polynomial-size circuits, there exists a negligible function \(\mathsf {negl}\) such that for all \(a\in \{0,1\}^{*}\),

$$\begin{aligned} |\text {Pr}[C_{n}(X_{n}(a))=1]-\text {Pr}[C_{n}(Y_{n}(a))=1]|<\mathsf {negl}(n). \end{aligned}$$

2.1 Public Key Encryption Schemes

We specify the notion of a labeled public key encryption scheme following the notation used in [9, 13]. In this work, we require a weaker notion of security called IND-WLCCA, where the adversary cannot query the decryption oracle with label \(L\) such that \(L\) is the label picked for the challenge. (This is in contrast to the IND-LCCA notion where the adversary is not allowed to query the decryption oracle on \((L,c)\), where \(c\) is the challenge ciphertext). We further motivate this security notion in Sect. 3.3.

Definition 2.2

(LPKE)   We say that a tuple of PPT algorithms \(\Pi =(\mathsf {KeyGen}, \mathsf {Enc}, \mathsf {Dec})\) is a labeled public key encryption scheme (LPKE) with perfect decryption if:

  • \(\mathsf {KeyGen}\), given a security parameter \(k\), outputs keys \((\mathsf {ek},\mathsf {dk})\), where \(\mathsf {ek}\) is a public encryption key and \(\mathsf {dk}\) is a secret decryption key. We denote this by \((\mathsf {ek},\mathsf {dk})\leftarrow \mathsf {KeyGen}(1^{k})\).

  • \(\mathsf {Enc}\), given the public key \(\mathsf {ek}\), a label \(L\) and a plaintext message \(m\), outputs a ciphertext \(c\) encrypting \(m\). We denote this by \(c\leftarrow \mathsf {Enc}^{L}(\mathsf {ek},m)\).

  • \(\mathsf {Dec}\), given a label \(L\), the secret key \(\mathsf {dk}\) and a ciphertext \(c\) with \(c \leftarrow \mathsf {Enc}^{L}(\mathsf {ek},m)\), outputs \(m\) with probability \(1\). We denote this by \(m \leftarrow \mathsf {Dec}^{L}(\mathsf {dk},c)\).

Definition 2.3

(IND-WLCCA secure encryption scheme) We say that a labeled public key encryption scheme \(\Pi =(\mathsf {KeyGen}, \mathsf {Enc}, \mathsf {Dec})\) is an IND-WLCCA secure encryption scheme if, for every admissible PPT adversary \(\mathcal {A}=(\mathcal {A}_{1},\mathcal {A}_{2})\), there exists a negligible function \(\mathsf {negl}\) such that the probability \(\text{ IND-WLCCA }_{\Pi ,\mathcal {A}}(k)\) that \(\mathcal {A}\) wins the IND-WLCCA game defined below satisfies \(\text{ IND-WLCCA }_{\Pi ,\mathcal {A}}(k) \le \frac{1}{2} + \mathsf {negl}(k)\).

  • IND-WLCCA game:

    $$\begin{aligned}&(\mathsf {ek},\mathsf {dk})\leftarrow \mathsf {KeyGen}(1^{k})\\&(L,m_{0},m_{1},history)\leftarrow \mathcal {A}_{1}^{\mathsf {Dec}^{(\cdot )}(\mathsf {dk},\cdot )}(\mathsf {ek}), \text { s.t. } |m_{0}|=|m_{1}|\\&c\leftarrow \mathsf {Enc}^{L}(\mathsf {ek},m_{b}), \text { where } b\leftarrow \{0,1\}\\&b'\leftarrow \mathcal {A}_{2}^{\mathsf {Dec}^{(\cdot )}(\mathsf {dk},\cdot )}(c,history)\\&\mathcal {A}\text { wins if } b'=b. \end{aligned}$$

An adversary is admissible if it does not query \({\mathsf {Dec}^{(\cdot )}(\mathsf {dk},\cdot )}\) with \((L,\cdot )\) where \(L\) is the label picked to compute the challenge.
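To make the game and the admissibility condition concrete, the following toy Python harness plays a single round of IND-WLCCA. It is a sketch; `keygen`, `enc`, `dec` and the two adversary stages are abstract callables of our own naming, not part of the formal definition.

```python
import secrets

def ind_wlcca_round(keygen, enc, dec, adv1, adv2):
    """One round of the IND-WLCCA game above. Admissibility is enforced
    for the second stage; the first stage picks the challenge label L
    itself, so it must avoid querying that label by construction."""
    ek, dk = keygen()
    challenge_label = None

    def dec_oracle(label, c):
        if label == challenge_label:
            raise ValueError("inadmissible query on the challenge label")
        return dec(label, dk, c)

    L, m0, m1, history = adv1(ek, dec_oracle)
    assert len(m0) == len(m1)
    challenge_label = L
    b = secrets.randbits(1)
    c = enc(L, ek, m1 if b else m0)
    return adv2(c, history, dec_oracle) == b   # did the adversary win?
```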

2.2 Non-interactive Zero-Knowledge Arguments (of Knowledge)

A non-interactive zero-knowledge argument for a language \(L\) is a tuple of PPT algorithms \((\mathsf {CRSGen},\mathsf {P},\mathsf {V})\), where \(\mathsf {CRSGen}\) generates a common reference string \(\mathsf {crs}\), the prover \(\mathsf {P}\) takes as input \((\mathsf {crs},x,\omega )\) for \((x,\omega ) \in R_L\), the witness relation of \(L\), and outputs a proof \(\pi \). Finally, the verifier \(\mathsf {V}\) takes as input \((\mathsf {crs},x,\pi )\) and outputs 0 or 1 (respectively, rejecting or accepting the proof). Moreover, security is formalized in the following definition.

Definition 2.4

(NIZK) A non-interactive zero-knowledge argument \((\mathsf {NIZK})\) for a language \(L\) is a tuple of three PPT algorithms \((\mathsf {CRSGen},\mathsf {P},\mathsf {V})\), such that the following properties are satisfied:

Completeness:

For every \((x,\omega )\in R_L: \text {Pr}[\mathsf {V}(\mathsf {crs},x,\mathsf {P}(\mathsf {crs},x,\omega ))=1]=1\).

Soundness:

For all PPT algorithms \(\mathcal {A}\), \(\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k})\) and \(x\notin L\):

$$\begin{aligned} {\mathop {\hbox {Pr}}\limits _{(x,\pi )\leftarrow \mathcal {A}(\mathsf {crs})}}[\mathsf {V}(\mathsf {crs},x,\pi )=1]\le \mathsf {negl}(k). \end{aligned}$$
Zero-knowledge:

There exists a PPT simulator \(S = (S_{1},S_{2})\) such that

$$\begin{aligned}&\left| \text {Pr}_{\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k})}\left[ \mathcal {A}^{\mathcal {O}_{0}^{\mathsf {crs}}(\cdot )}(\mathsf {crs}) = 1 \right] \right. \\&\quad \left. - \text {Pr}_{(\mathsf {crs},\mathsf {td}_{s})\leftarrow S_{1}(1^{k})}\left[ \mathcal {A}^{\mathcal {O}_{1}^{\mathsf {crs},\mathsf {td}_{s}}(\cdot )}(\mathsf {crs}) = 1\right] \right| \le \mathsf {negl}(k) \end{aligned}$$

for all PPT adversaries \(\mathcal {A}\), where \(\mathcal {O}_{0}^{\mathsf {crs}}(\cdot )\) is an oracle with state \(\mathsf {crs}\) such that \(\mathcal {O}_{0}^{\mathsf {crs}}(x,\omega ) = P(\mathsf {crs},x,\omega )\) if \((x,\omega ) \in R_L\) and \(\mathcal {O}_{0}^{\mathsf {crs}}(x,\omega ) = \bot \) otherwise, whereas \(\mathcal {O}_{1}^{\mathsf {crs},\mathsf {td}_{s}}\) is an oracle with state \((\mathsf {crs},\mathsf {td}_{s})\), where \(\mathcal {O}_{1}^{\mathsf {crs},\mathsf {td}_{s}}(x,\omega ) = S_{2}(\mathsf {crs},x,\mathsf {td}_{s})\) if \((x,\omega ) \in R_L\) and \(\mathcal {O}_{1}^{\mathsf {crs},\mathsf {td}_{s}}(x,\omega ) = \bot \) otherwise. We say that the scheme is ZK if \(\mathcal {A}\) may only query its oracle once. We say that it is reusable-CRS ZK if there is no restriction on how many times \(\mathcal {A}\) can query its oracle.
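The oracle pair \(\mathcal {O}_{0}\)/\(\mathcal {O}_{1}\) can be mirrored directly in code. The sketch below is illustrative only; `crsgen`, `prover`, `sim1`, `sim2` and `in_relation` are abstract callables of our own naming. Its point is that the simulated oracle never reads the witness:

```python
def zk_oracle_pair(crsgen, prover, sim1, sim2, in_relation):
    """Build the two oracles from the zero-knowledge definition above:
    O0 answers with real proofs under a real CRS, O1 with simulated
    proofs under a simulated CRS and trapdoor td_s."""
    crs_real = crsgen()
    crs_sim, td_s = sim1()

    def O0(x, w):
        return prover(crs_real, x, w) if in_relation(x, w) else None

    def O1(x, w):
        # S2 receives (crs, x, td_s) only -- the witness w is never
        # used, which is exactly what zero-knowledge formalizes
        return sim2(crs_sim, x, td_s) if in_relation(x, w) else None

    return (crs_real, O0), (crs_sim, O1)
```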

For our first construction, described in Sect. 3.2, where security holds with respect to random messages, we need the additional property of proof of knowledge. For completeness, we specify below the formal definition of a non-interactive zero-knowledge argument of knowledge (NIZKPoK).

Definition 2.5

(NIZKPoK) A non-interactive zero-knowledge argument \((\mathsf {NIZKPoK})\) for a relation \(R_L\) is a tuple of three PPT algorithms \((\mathsf {CRSGen},\mathsf {P},\mathsf {V})\), such that the following properties are satisfied:

Completeness:

As in Definition 2.4.

Knowledge soundness:

There exists a PPT knowledge extractor \(E=(E_{1},E_{2})\) such that:

(a)

for all PPT algorithms \(\mathcal {A}\):

$$\begin{aligned} \text {Pr}_{(\mathsf {crs},\mathsf {td}_e) \leftarrow E_{1}(1^{k})}[\mathcal {A}(\mathsf {crs})=1 ]=\text {Pr}_{\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k})}[\mathcal {A}(\mathsf {crs})=1]. \end{aligned}$$
(b)

for all PPT algorithms \(\mathcal {A}\):

$$\begin{aligned}&\text {Pr}_{(\mathsf {crs},\mathsf {td}_e)\leftarrow E_{1}(1^{k}), (x,\pi )\leftarrow \mathcal {A}(\mathsf {crs}), \omega \leftarrow E_{2}(\mathsf {crs},x,\mathsf {td}_e,\pi )} [\mathsf {V}(\mathsf {crs},x,\pi )\\&\quad =0 \vee (x,\omega )\in R_L]\ge 1-\mathsf {negl}(k). \end{aligned}$$
Zero-knowledge:

As in Definition 2.4.

2.3 Second Preimage Resistant Hash Functions

A family of hash functions \(H = \{H_{i}\}_{i \in \{0,1\}^{\ell (k)}}\), where \(\ell (k)\) is some function of the security parameter, is a family of polynomial-time computable functions together with a PPT algorithm \(\mathsf {Gen}_{H}\). On input \(1^{k}\), \(\mathsf {Gen}_{H}\) generates a key \(s\) for a function \(H_{s} \in H\) where \(H_{s}:\{0,1\}^{\ell '(k)}\rightarrow \{0,1\}^{\ell ''(k)}\), for \(\ell '(k)>\ell ''(k)\). Loosely speaking, a family of hash functions \(H\) is second preimage resistant if, given \(s \leftarrow \mathsf {Gen}_{H}(1^{k})\) and a random input \(x\), it is infeasible for any PPT adversary to find \(x'\) such that \(x\ne x'\) and \(H_{s}(x)=H_{s}(x')\). Formally,

Definition 2.6

(Second preimage resistance (SPR)) A family of hash functions \(H\) is second preimage resistant if for all PPT adversaries \(\mathcal {A}\), there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \hbox {Pr}[\mathsf {Hash}_{\mathcal {A},H}(k)=1]\le \mathsf {negl}(k) \end{aligned}$$

where game \(\mathsf {Hash}\) is defined as follows:

  1. A key \(s\) is sampled by running \(s\leftarrow \mathsf {Gen}_{H}(1^{k})\), together with a random input \(x \leftarrow \{0,1\}^{\ell '(k)}\).

  2. The adversary \(\mathcal {A}\) is given \((s,x)\) and outputs \(x'\).

  3. The output of the game is \(1\) if and only if \(x\ne x'\) and \(H_{s}(x) = H_{s}(x')\). In such a case, we say that \(\mathcal {A}\) wins the game.
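For illustration, the game fits in a few lines of Python; the keyed, truncated SHA-256 below is a toy stand-in of our own choosing for \(H_{s}\) (truncation makes the function compressing, so second preimages are guaranteed to exist):

```python
import hashlib
import secrets

def spr_game(adversary, ell_in=64):
    """One run of the Hash game from Definition 2.6 with a toy family:
    H_s(x) = SHA-256(s || x) truncated to 32 bits."""
    s = secrets.token_bytes(16)              # key s <- Gen_H(1^k)

    def H(x):
        return hashlib.sha256(s + x).digest()[:4]  # compressing H_s

    x = secrets.token_bytes(ell_in // 8)     # x <- {0,1}^{l'(k)}
    x_prime = adversary(s, x)
    return x_prime != x and H(x_prime) == H(x)     # 1 iff A wins
```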

2.4 Signature Schemes

A signature scheme is a tuple of PPT algorithms \(\Sigma = (\mathsf {Gen},\mathsf {Sig},\mathsf {Ver})\) defined as follows. The key generation algorithm \(\mathsf {Gen}\), on input \(1^{k}\), outputs a signing and a verification key \((\mathsf {sk},\mathsf {vk})\). The signing algorithm \(\mathsf {Sig}\) takes as input a message \(m\) and a signing key \(\mathsf {sk}\) and outputs a signature \(\sigma \). The verification algorithm \(\mathsf {Ver}\), on input \((\mathsf {vk}, m, \sigma )\), outputs either 0 or 1 (respectively, rejecting or accepting the signature). A signature scheme has to satisfy the following correctness property: for any message \(m\) and keys \((\mathsf {sk},\mathsf {vk})\leftarrow \mathsf {Gen}(1^{k})\)

$$\begin{aligned} \text {Pr}[\mathsf {Ver}(\mathsf {vk},m,\mathsf {Sig}(\mathsf {sk},m))=1]=1. \end{aligned}$$

The standard security notion for a signature scheme is existential unforgeability under chosen message attacks. A scheme is said to be secure under this notion if even after seeing signatures for chosen messages, no adversary can come up with a forgery for a new message. In this paper, we extend this security notion and give the adversary additional auxiliary information about the signing key. To this end, we define a set of admissible leakage functions \(\mathcal {H}\) and allow the adversary to obtain the value \(h(\mathsf {sk},\mathsf {vk})\) for any \(h \in \mathcal {H}\). Notice that by giving \(\mathsf {vk}\) as input to the leakage function, we capture the fact that the choice of \(h\) may depend on \(\mathsf {vk}\). We formally define our two notions of security.

Definition 2.7

(Existential unforgeability under chosen message and auxiliary input attacks) We say that a signature scheme \(\Sigma =(\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) is existentially unforgeable against chosen message and auxiliary input attacks (EU-CMAA) with respect to \(\mathcal {H}\) if for all PPT adversaries \(\mathcal {A}\) and any function \(h \in \mathcal {H}\), there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \text {Pr}[{\mathsf {CMA}}_{\Sigma ,\mathcal {A},h}(k)=1]\le \mathsf {negl}(k) \end{aligned}$$

where game \({\mathsf {CMA}}_{\Sigma ,\mathcal {A},h}(k)\) is defined as follows:

$$\begin{aligned}&(\mathsf {sk},\mathsf {vk})\leftarrow \mathsf {Gen}(1^{k})\\&(m^{*},\sigma ^{*})\leftarrow \mathcal {A}^{\mathsf {Sig}(\mathsf {sk},\cdot )}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk}))\\&\mathcal {A}\text { wins (and the game outputs 1) if } \mathsf {Ver}(\mathsf {vk},m^{*},\sigma ^{*})=1\\&\quad \text {and } m^{*} \text { was never queried to } \mathsf {Sig}(\mathsf {sk},\cdot ). \end{aligned}$$

We note that the leakage may also depend on \(\mathcal {A}\)’s signature queries as the function \(h\) may internally run \(\mathcal {A}\), using the access to the secret key in order to emulate the entire security game, including the signature queries made by \(\mathcal {A}\).

As outlined in the introduction, we are also interested in a weaker security notion where the adversary is required to output a forgery for a random message after seeing signatures for random messages. To this end, we extend the definition from above and let the signing oracle reply with random messages, as well as pick the challenge message at random. This is formally described in the following definition.

Definition 2.8

(Random message unforgeability under random message and auxiliary input attacks) We say that a signature scheme \(\Sigma =(\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) is random message unforgeable against random message and auxiliary input attacks (RU-RMAA) with respect to \(\mathcal {H}\) if for all PPT adversaries \(\mathcal {A}\) and any function \(h \in \mathcal {H}\), there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \text {Pr}[{\mathsf {RMA}}_{\Sigma ,\mathcal {A},h}(k) = 1]\le \mathsf {negl}(k) \end{aligned}$$

where game \({\mathsf {RMA}}_{\Sigma ,\mathcal {A},h}(k)\) is defined as follows:

$$\begin{aligned}&(\mathsf {sk},\mathsf {vk})\leftarrow \mathsf {Gen}(1^{k}),\quad m^{*}\leftarrow \mathcal {M}\\&\sigma ^{*}\leftarrow \mathcal {A}^{\mathcal {O}(\mathsf {sk})}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk}),m^{*}),\\&\quad \text {where each query to } \mathcal {O}(\mathsf {sk}) \text { samples } m\leftarrow \mathcal {M} \text { and returns } (m,\mathsf {Sig}(\mathsf {sk},m))\\&\mathcal {A}\text { wins (and the game outputs 1) if } \mathsf {Ver}(\mathsf {vk},m^{*},\sigma ^{*})=1. \end{aligned}$$

We notice that the notion of unforgeability under random messages is useful in some settings. For instance, it suffices in order to construct 2-round identification schemes w.r.t. auxiliary inputs. In Sect. 4, we propose formal definitions and simple constructions of identification schemes with security in the presence of auxiliary input leakage.

2.5 Classes of Auxiliary Input Functions

The above notions of security require specifying the set of admissible functions \(\mathcal {H}\). In the public key setting, one can define two different types of classes of auxiliary input leakage functions. In the first class, we require that given the leakage \(h(\mathsf {sk},\mathsf {vk})\), it is computationally hard to compute \(\mathsf {sk}\), while in the second, we require hardness of computing \(\mathsf {sk}\) when additionally given the public key \(\mathsf {vk}\). We follow the work of Dodis et al. [11] to formally define these classes. Concretely,

  1. We denote by \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) the class of polynomial-time computable functions \(h: \{0,1\}^{|\mathsf {sk}| + |\mathsf {vk}|} \rightarrow \{0,1\}^{*}\) such that given \(h(\mathsf {sk},\mathsf {vk})\), no PPT adversary can find \(\mathsf {sk}\) with probability \(\ell (k) \ge 2^{-k}\) or greater, i.e., for any PPT adversary \(\mathcal {A}\)

    $$\begin{aligned} {\mathop {\hbox {Pr}}_{(\mathsf {sk},\mathsf {vk}) \leftarrow \mathsf {Gen}(1^{k})}}[\mathsf {sk}\leftarrow \mathcal {A}(h(\mathsf {sk},\mathsf {vk})) ] < \ell (k). \end{aligned}$$
  2. We denote by \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\) the class of polynomial-time computable functions \(h: \{0,1\}^{|\mathsf {sk}| + |\mathsf {vk}|} \rightarrow \{0,1\}^{*}\) such that given \((\mathsf {vk},h(\mathsf {sk},\mathsf {vk}))\), no PPT adversary can find \(\mathsf {sk}\) with probability \(\ell (k) \ge 2^{-k}\) or greater, i.e., for any PPT adversary \(\mathcal {A}\)

    $$\begin{aligned} {\mathop {\hbox {Pr}}_{(\mathsf {sk},\mathsf {vk}) \leftarrow \mathsf {Gen}(1^{k})}}[\mathsf {sk}\leftarrow \mathcal {A}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) ] < \ell (k). \end{aligned}$$
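The only difference between the two classes is whether the inverter is also handed \(\mathsf {vk}\). The toy harness below (a sketch; `gen`, `h` and `adversary` are abstract callables of our own naming) makes this one-flag difference explicit:

```python
def inversion_rate(gen, h, adversary, give_vk, trials=10_000):
    """Estimate the inversion probability that defines admissibility:
    give_vk=False corresponds to H_ow (leakage only), give_vk=True
    to H_vkow (leakage plus the verification key)."""
    wins = 0
    for _ in range(trials):
        sk, vk = gen()
        aux = h(sk, vk)
        # in the H_ow game the verification key is withheld
        guess = adversary(aux, vk if give_vk else None)
        wins += (guess == sk)
    return wins / trials
```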

Security with respect to auxiliary input gets stronger if \(\ell (k)\) is larger. Therefore, our goal is typically to make \(\ell (k)\) as large as possible as long as it is a negligible function. Moreover, in case \(\ell (k)<2^{-|\mathsf {sk}|}\), our definitions are trivialized since then no leakage is admissible. If a scheme is EU-CMAA for \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\) according to Definition 2.7, we say for short that it is \(\ell (k)\)-EU-CMAA. Similarly, if a scheme is RU-RMAA for \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\), then we say that it is an \(\ell (k)\)-RU-RMAA signature scheme. If the class of admissible leakage functions is \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), we will mention it explicitly.

Note that in the definition of \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\), we ask that it is hard to compute the secret key when given the public key in addition to the leakage. This means that the set of allowed leakage functions depends on the information in the verification key, which can make it hard to understand intuitively which leakage functions are allowed. In contrast, when defining \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), we ask that the secret key is hard to compute given only the leakage. The leakage class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) is therefore defined independently of the signature scheme, and hence it is much easier to understand which leakage functions are allowed. It is therefore primarily security against \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) that we are interested in.

However, as outlined in the introduction, we typically prove security with respect to the class \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\). The stronger security notion where hardness is required to hold only given the leakage, i.e., for the class of admissible functions \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) can then be achieved by a relation between \(\mathcal {H}_{\mathsf {ow}}(\cdot )\) and \(\mathcal {H}_{\mathsf {vkow}}(\cdot )\) proven by Dodis et al. [11], given in Lemma 2.1 below. For this relation to make sense, it will be important that our public keys have a length which is independent of the length of the secret key. We will elaborate on this issue after each of our main theorems.

Lemma 2.1

([11]) If \(|\mathsf {vk}| = t(k)\) then for any \(\ell (k)\), we have

  1. \(\mathcal {H}_{\mathsf {vkow}}(\ell (k)) \subseteq \mathcal {H}_{\mathsf {ow}}(\ell (k))\).

  2. \(\mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k)) \subseteq \mathcal {H}_{\mathsf {vkow}}(\ell (k))\).

The first point of Lemma 2.1 says that if no PPT adversary finds \(\mathsf {sk}\) given \((\mathsf {vk}, h(\mathsf {sk}, \mathsf {vk}))\) with probability \(\ell (k)\) or better, then no PPT adversary finds \(\mathsf {sk}\) given only \(h(\mathsf {sk}, \mathsf {vk})\) with probability \(\ell (k)\) or better. Clearly, this is the case since knowing \(\mathsf {vk}\) will not make it harder to guess \(\mathsf {sk}\). The second point states that if no PPT adversary finds \(\mathsf {sk}\) given \(h(\mathsf {sk}, \mathsf {vk})\) with probability \(2^{-t(k)}\ell (k)\) or better, then any PPT adversary has advantage at most \(\ell (k)\) in guessing \(\mathsf {sk}\) when additionally given \(\mathsf {vk}\). To see this, consider a PPT adversary \(\mathcal {A}\) that finds \(\mathsf {sk}\) given \((\mathsf {vk}, h(\mathsf {sk}, \mathsf {vk}))\) with probability \(\ell '(k) \ge \ell (k)\). \(\mathcal {A}\) then implies a PPT adversary \(\mathcal {B}\) that, given \(h(\mathsf {sk}, \mathsf {vk})\), simply tries to guess \(\mathsf {vk}\) and uses it to run \(\mathcal {A}\). Since \(\mathcal {B}\) can guess \(\mathsf {vk}\) with probability at least \(2^{-t(k)}\), \(\mathcal {B}\) has probability at least \(2^{-t(k)}\ell '(k)\) of finding \(\mathsf {sk}\), thus contradicting \(h \in \mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k))\).
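In symbols, the guessing step of this reduction is the following one-line calculation (a restatement of the argument just given, in the notation of Lemma 2.1):

$$\begin{aligned} \text {Pr}[\mathcal {B}(h(\mathsf {sk},\mathsf {vk}))=\mathsf {sk}] \ge 2^{-t(k)}\cdot \text {Pr}[\mathcal {A}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk}))=\mathsf {sk}] = 2^{-t(k)}\ell '(k) \ge 2^{-t(k)}\ell (k), \end{aligned}$$

which contradicts \(h \in \mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k))\) whenever \(\ell '(k) \ge \ell (k)\).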

3 Designing Signature Schemes with Auxiliary Input Security

3.1 A Warm-Up Construction

In order to illustrate the difficulties encountered in designing cryptographic primitives in the auxiliary input setting, we present a warm-up construction of a signature scheme inspired by Katz [26] that may seem secure at first glance. Unfortunately, proving its security seems impossible. Essentially, the problem arises due to the computational hardness of the leakage and does not occur in other leakage models, where given the leakage, the secret key is still information theoretically hidden. For ease of understanding, in this warm-up construction, we only aim for the simpler one-time security notion on random messages, where the adversary only views a single signature before it outputs its forgery on a random message. We consider two building blocks for the following scheme:

  1. A family of second preimage resistant (SPR) hash functions \(H\).

  2. A non-interactive zero-knowledge argument of knowledge system \(\Pi =(\mathsf {CRSGen},\mathsf {P},\mathsf {V})\).

Informally, the signature scheme is built as follows. The signing key \(\mathsf {sk}\) is a random element \(x\) in the domain of the hash function, whereas the verification key \(\mathsf {vk}\) is \(y=H(x)\). The verification key \(\mathsf {vk}\) also contains a common reference string \(\mathsf {crs}\) for \(\Pi \). A signature on a message \(m\) is the bit \(b=\langle m,\mathsf {sk}\rangle \) together with a non-interactive argument with respect to \(\mathsf {crs}\) proving that \(b\) was computed as the inner product of the preimage of \(y\) and the message \(m\). More precisely, define the signature scheme \(\Sigma = (\mathsf {Gen}_{\Sigma }, \mathsf {Sig}_{\Sigma },\mathsf {Ver}_{\Sigma })\) as follows:

Key Generation, \(\mathsf {Gen}_{\Sigma }(1^{k})\):

Sample a SPR hash function \(H\), a random element \(x\) in the domain of \(H\) and \(\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k})\). Output \(\mathsf {sk}=x,\, \mathsf {vk}= (H(x),\mathsf {crs})\).

Signing, \(\mathsf {Sig}_{\Sigma }(\mathsf {sk},m)\):

Parse \(\mathsf {vk}\) as \((y,\mathsf {crs})\) where \(y = H(\mathsf {sk})\). Compute \(b = \langle m, \mathsf {sk}\rangle \). Use the \(\mathsf {crs}\) to generate a non-interactive zero-knowledge argument of knowledge \(\pi \), demonstrating that \(b = \langle m, \mathsf {sk}\rangle \) and \(H(\mathsf {sk}) = y\). Output \(\sigma = (b, \pi )\).

Verifying, \(\mathsf {Ver}_{\Sigma }(\mathsf {vk}, m, \sigma )\):

Parse \(\mathsf {vk}\) as \((y,\mathsf {crs})\) and \(\sigma \) as \((b, \pi )\). Use \(\mathsf {crs}\) to verify the proof \(\pi \). Output 1 if the proof verifies correctly and 0 otherwise.

We continue with an attempt to prove security. Note first that by the properties of \(\Pi \), the ability to generate a forgery \((\sigma ', m')\) reduces to the ability of using the extraction trapdoor to either find a second preimage for the hash function or break the hardness assumption of the leakage function. As the difficulties arise in the reduction to the hardness of the leakage function, we focus in this outline on that part. Assume there is an adversary \(\mathcal {A}\) attacking signature scheme \(\Sigma \) given auxiliary input leakage \(h(\mathsf {sk},\mathsf {vk})\) and \(\mathsf {vk}=(y,\mathsf {crs})\). Then, an attempt to construct \(\mathcal {B}\) that breaks the hardness assumption of the leakage function by invoking \(\mathcal {A}\) works as follows. \(\mathcal {B}\) obtains \((y,\mathsf {crs})\) and the leakage \(h(\mathsf {sk},\mathsf {vk})\) from its challenge oracle. It forwards them to \(\mathcal {A}\) who will ask for a signature query. Unfortunately, at that point \(\mathcal {B}\) cannot answer this query as it cannot simulate a proof without knowing the witness or the trapdoor.

An alternative approach for proving security with respect to the leakage class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) is to let \(\mathcal {B}\) sample the CRS itself using the simulator for the non-interactive zero-knowledge argument, in order to ensure that \(\mathcal {B}\) knows the trapdoor. Unfortunately, this approach is also doomed to fail, as in this case \(\mathcal {B}\) cannot efficiently find \(y=H(\mathsf {sk})\) that is consistent with the leakage. Moreover, this results in several additional difficulties in defining the set of admissible leakage functions, as they must now differ for \(\mathcal {A}\) and \(\mathcal {B}\). This can be illustrated as follows. Suppose that the CRS is a public key for an encryption scheme and the trapdoor is the corresponding secret key. As \(\mathcal {A}\) only knows the CRS but not the trapdoor, a leakage function \(h\) that outputs an encryption of \(\mathsf {sk}= x\) is admissible. On the other hand, for \(\mathcal {B}\) who knows the trapdoor (and hence the secret key of the encryption scheme), such leakage cannot be admissible.

This shows that we need to consider different approaches when analyzing the security of digital signature schemes in the presence of auxiliary input. In what follows, we demonstrate two different approaches for such constructions, obtaining two different notions of security.

3.2 An RU-RMAA Signature Scheme

In this section, we present our construction of a RU-RMAA signature scheme as defined in Definition 2.8, where both the message queries as well as the challenge are picked at random in the security game. For this scheme, we require the following building blocks:

  1. A family \(H\) of second preimage resistant hash functions (cf. Definition 2.6) with input length \(k_{1}\) and key sampling algorithm \(\mathsf {Gen}_{H}\). We require that the output length of \(H\) is independent of the input length. We use \(q(k)\) to denote the output length of \(H\), where \(q\) is a polynomial.

  2. A \(\mathsf {NIZKPoK}\) system \(\Pi = (\mathsf {CRSGen}, \mathsf {P}, \mathsf {V})\) (cf. Definition 2.5) for proving knowledge of a secret value \(x\) such that \(y = H_{s}(x)\), given \(s\) and \(y\). We further require that the CRS’s of \(\Pi \) are uniformly random strings of some length \(p(k)\) for security parameter \(k\) and some polynomial \(p(\cdot )\). We require that \(p(k)\) depends only on \(k\) and the scheme, not on the length of the witnesses \(x\) that the proof can handle. Denote the message space \(\mathcal {M}\) by \(\{0,1\}^{p(k)}\).

The main idea for this scheme is inspired by the work of Malkin et al. [29]: we view each message \(m\) as a common reference string for the argument system \(\Pi \). Due to the fact that \(m\) is uniformly generated, we are guaranteed that the CRS is generated correctly and knowledge soundness holds. Intuitively, since each new message induces a new CRS, each proof is given with respect to an independent CRS. This implies that in the security proof, the simulator (playing the role of the signer) can use the trapdoor of the CRS that corresponds to the challenge message \(m^{*}\).

We formally define our scheme \(\Sigma = (\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) as follows.

Key Generation, \(\mathsf {Gen}(1^{k})\):

Sample \(s \leftarrow \mathsf {Gen}_{H}(1^{k})\). Sample \(x\leftarrow \{0,1\}^{k_{1}}\) and compute \(y=H_{s}(x)\). Output \(\mathsf {sk}= (x,s)\) and \(\mathsf {vk}= (y,s)\).

Signing, \(\mathsf {Sig}(\mathsf {sk}, m)\):

To sign \(m \leftarrow \mathcal {M}\), let \(\mathsf {crs}= m\) and sample the signature \(\sigma \leftarrow \mathsf {P}(\mathsf {crs}, \mathsf {vk}, \mathsf {sk})\) as an argument of knowledge of \(x\) such that \(y=H_{s}(x)\).

Verifying, \(\mathsf {Ver}(\mathsf {vk}, m, \sigma )\):

To verify \(\sigma \) on \(m = \mathsf {crs}\), output \(\mathsf {V}(\mathsf {crs}, \mathsf {vk}, \sigma )\).
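For concreteness, here is a structural Python sketch of \(\Sigma \). The `nizkpok_prove`/`nizkpok_verify` stubs are placeholders of our own naming for the system \(\Pi \), and SHA-256 merely stands in for the SPR family; this illustrates the data flow only and is not a secure implementation.

```python
import hashlib
import secrets

K1 = 4096  # |x| in bits; set much larger than |vk| (cf. the remark after Theorem 3.1)

# Placeholder interface for the NIZKPoK system Pi (hypothetical names).
def nizkpok_prove(crs, statement, witness):
    raise NotImplementedError("supplied by the NIZKPoK system Pi")

def nizkpok_verify(crs, statement, proof):
    raise NotImplementedError("supplied by the NIZKPoK system Pi")

def H(s: bytes, x: bytes) -> bytes:
    # stand-in for the SPR family H_s; output length independent of |x|
    return hashlib.sha256(s + x).digest()

def gen(k: int = 128):
    s = secrets.token_bytes(k // 8)    # s <- Gen_H(1^k)
    x = secrets.token_bytes(K1 // 8)   # x <- {0,1}^{k_1}
    return (x, s), (H(s, x), s)        # (sk, vk)

def sig(sk, m: bytes):
    x, s = sk
    crs = m  # the uniformly random message itself serves as the CRS
    return nizkpok_prove(crs, statement=(H(s, x), s), witness=x)

def ver(vk, m: bytes, sigma) -> bool:
    y, s = vk
    return nizkpok_verify(crs=m, statement=(y, s), proof=sigma)
```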

We are now ready to prove our theorem.

Theorem 3.1

Assume that \(H\) is a second preimage resistant family of hash functions and that \(\Pi = (\mathsf {CRSGen}, \mathsf {P}, \mathsf {V})\) is a \(\mathsf {NIZKPoK}\) system. Then, \(\Sigma = (\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) is a \(\mathsf {negl}(k)\)-RU-RMAA signature scheme.

Remark

Notice that we only prove security for the leakage class \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\), for \(\ell (k) = \mathsf {negl}(k)\). We can, however, use Lemma 2.1 to obtain security for the class \(\mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k))\), where \(t(k)\) is the length of our public key. Note that the leakage class \(\mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k))\) only asks that the leakage is hard to invert when the adversary is not given the public key. This in particular means that the leakage class is independent of the hash function used, except through the length of a hash, which is why we require the length of a digest to be independent of the input \(x\) to the hash function. Another consequence is that the bound \(2^{-t(k)}\ell (k)\) is independent of the length of the secret key \(x\). We can therefore set the length of \(x\) much longer than \(t(k) + \log _{2}(\ell (k)^{-1})\). If we, e.g., let \(\vert x \vert = 100 (t(k) + \log _{2}(\ell (k)^{-1}))\), then \(\mathcal {H}_{\mathsf {ow}}(2^{-t(k)}\ell (k))\) in particular includes the leakage functions which leak up to \(98\%\) of the bits of the secret key. Besides this, it additionally includes all leakage functions which information theoretically determine \(x\) but still render \(x\) \(2^{-t(k)}\ell (k)\)-hard to guess in polynomial time.

The intuition of the proof is that if one can efficiently forge a signature on a random \(m^{*}\) after getting signatures on random messages then one can also efficiently compute \(x\), contradicting the assumption that the leakage is hard to efficiently invert. Specifically, during the simulated attack, the signatures on random messages are simulated by sampling \(m = \mathsf {crs}\), where \(\mathsf {crs}\) is sampled along with the simulation trapdoor. Then, at the challenge phase, one samples \(m^{*} = \mathsf {crs}\), where \(\mathsf {crs}\) is sampled along with the extraction trapdoor. Consequently, upon receiving a forgery on \(m^{*}\), it is possible to extract \(x\) using the extraction trapdoor.

We note that in the standard setting, a simple modification to our construction using Chameleon hash functions [25] enables achieving a stronger notion of security. Recall first that Chameleon hash functions are collision resistant hash functions such that, given a trapdoor, one can efficiently find collisions for every given preimage and its hashed value. Thereby, instead of signing random messages, the scheme can be modified so that the signer signs the hashed value of the message. This achieves security against chosen message attacks, where the adversary picks the messages to be signed during the security game, yet the challenge is still picked at random. Nevertheless, when introducing hard-to-invert leakage into the system, this approach does not yield security against polynomially hard-to-invert leakage, because the same problems specified in Sect. 3.1 are encountered here as well. In Sect. 3.3, we demonstrate how to obtain the strongest security notion of existential unforgeability under chosen message and auxiliary input attacks.

Proof

Let \(\mathsf {Exp}_{\Sigma , \mathcal {A}, h}\) denote the game \({\mathsf {RMA}}_{\Sigma ,\mathcal {A},h}\) defined in Definition 2.8 for a PPT adversary \(\mathcal {A}\) and leakage function \(h \in \mathcal {H}_{\mathsf {vkow}}(\mathsf {negl}(k))\). Furthermore, let \(W\) be the event that \(\mathcal {A}\) wins the game. We show that \(\text {Pr}[W]\) is negligible; denote this probability by \(p_{0}\). Consider the following modification to \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\).

  1. Generate \((\mathsf {vk}, \mathsf {sk})\) as in \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\).

  2. Instead of sampling the challenge \(m^{*}\) as \(m^{*} \leftarrow \mathcal {M}\), sample \((m', \mathsf {td}_{e}) \leftarrow E_{1}(1^{k})\) and let \(m^{*} = m'\), where \(E=(E_{1}, E_{2})\) is the extractor for \(\Pi \) implied by Definition 2.5.

  3. Give input to \(\mathcal {A}\) as in \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\).

  4. To answer the oracle queries of \(\mathcal {A}\), sample \((m', \mathsf {td}_{s}) \leftarrow S_{1}(1^{k})\), let \(m = m'\) and return the signature \((m, S_{2}(m, \mathsf {vk}, \mathsf {td}_{s}))\), where \(S = (S_{1}, S_{2})\) is the simulator for \(\Pi \) implied by Definition 2.5.

  5. Receive a forgery \(\sigma ^{*}\) from \(\mathcal {A}\) as in \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\).

  6. Produce the output as in \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\).

Let \(p_{1}\) be the probability that the modified experiment above outputs 1. Also consider \(x' = E_{2}(m^{*}, \mathsf {vk}, \mathsf {td}_{e}, \sigma ^{*})\), i.e., \(x'\) is a signing key extracted from \(\mathcal {A}\)’s forgery. By Definition 2.5, the distributions of messages and signatures in the modified experiment are indistinguishable from the distributions in the original experiment \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\). Thus, \(p_{1}\) is negligibly close to \(p_{0}\). Let \(p_{2}\) be the probability that \(H_{s}(x') = y\). By the knowledge soundness of \(\Pi \), \(p_{2}\) is negligibly close to \(p_{1}\), and hence also to \(p_{0}\).

Note that, since \(S\) and \(E\) are both PPT algorithms, the modified experiment describes a PPT algorithm which computes \(x'\) such that with probability \(p_{2}\) it holds that \(y = H_{s}(x')\). Let \(p_{3}\) be the probability that \(y = H_{s}(x')\) and \(x' \ne x\), and let \(p_{4}\) be the probability that \(x' = x\). Note that \(p_{2} = p_{3} + p_{4}\).

The Event \(X\): Consider the PPT algorithm \(\mathcal {B}\) that, given \(\mathsf {vk}\) and leakage \(h(\mathsf {sk},\mathsf {vk})\), where \((\mathsf {sk}, \mathsf {vk}) \leftarrow \mathsf {Gen}(1^{k})\), runs steps 2–5 of the modified experiment above and outputs \(x^{*} = E_{2}(m^{*}, \mathsf {vk}, \mathsf {td}_{e},\sigma ^{*})\). Denote by \(X\) the event in which \(\mathcal {B}\) outputs \(x^{*} = x\). Since \((\mathsf {vk}, \mathsf {sk})\) is generated as in \(\mathsf {Exp}_{\Sigma ,\mathcal {A},h}(k)\), we have \(\text {Pr}[X]\ge p_{4}\). Thus, by the definition of \(\mathcal {H}_{\mathsf {vkow}}(\mathsf {negl}(k))\), \(p_{4}\) is negligible.

The Event \(C\): On the other hand, consider the PPT algorithm \(\mathcal {B}\) that is given \(s\), \(x\) and \(y = H_{s}(x)\). \(\mathcal {B}\) sets \(\mathsf {vk}= (y, s)\), runs steps 2–5 of the modified experiment above (notice that \(\mathcal {B}\) is given \(x\), so it can compute the leakage \(h\)) and outputs \(x^{*} = E_{2}(m^{*}, \mathsf {vk}, \mathsf {td}_{e},\sigma ^{*})\). Denote by \(C\) the event in which \(\mathcal {B}\) outputs \(x^{*} \ne x\) such that \(H_{s}(x^{*}) = H_{s}(x)\). Notice again that \(((H_{s}, y), x)\) are generated as in \(\mathsf {Exp}_{\Sigma , \mathcal {A},h}(k)\), and therefore \(\text {Pr}[C]\ge p_{3}\). Thus, by the second preimage resistance of the family \(H\), \(p_{3}\) is negligible.

This implies that \(p_{3}\) and \(p_{4}\) are negligible, and so is \(p_{2} = p_{3} + p_{4}\). Since \(p_{0}\) is negligibly close to \(p_{2}\), \(p_{0}\) must also be negligible. By definition, \(p_{0} = \text {Pr}[\mathsf {Exp}_{\Sigma , \mathcal {A}, h}(k) = 1]\), and so by Definition 2.8, \(\Sigma \) is a \(\mathsf {negl}(k)\)-RU-RMAA signature scheme. \(\square \)

Remark

Notice that in the above, we assume that the CRS of the \(\mathsf {NIZKPoK}\) \(\Pi \) is a uniformly random bit string. As an example of a \(\mathsf {NIZKPoK}\) with this property, we can use the construction of [34]. In their construction, the CRS is a pair \((\mathsf {ek}, r)\) where \(r\) is a random string and \(\mathsf {ek}\) is an encryption key for some semantically secure public key encryption scheme. Thus, we can use the construction of [34] with a public key encryption scheme where uniformly random bit strings can act as public keys, such as Regev’s LWE scheme [33].

3.3 An EU-CMAA Signature Scheme

In this section, we describe our second construction and build an EU-CMAA signature scheme. Recalling that \(k\) denotes the security parameter for the signature scheme, our construction employs the following tools:

  1. A family of second preimage resistant hash functions \(H\) with key sampling algorithm \(\mathsf {Gen}_{H}\) (cf. Definition 2.6) where (i) the input length can be set to any \(l_{\mathsf {in}} = {\text {poly}}(k)\), (ii) the length of the randomness used by \(s \leftarrow \mathsf {Gen}_{H}(1^{k})\) is some \(l_{\mathsf {s}} = {\text {poly}}(k)\) independent of \(l_{\mathsf {in}}\), and (iii) the length of an output \(y = H_{s}(x)\) is some \(l_{\mathsf {out}} = {\text {poly}}(k)\) independent of \(l_{\mathsf {in}}\); i.e., it is possible to increase the input length of \(H_{s}\) without increasing the randomness used to generate \(s\) or the output length.

  2. An IND-WLCCA secure labeled encryption scheme \(\Gamma = (\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\) with perfect decryption (cf. Definition 2.3), where the length of \(\mathsf {dk}\) is some \(l_{\mathsf {dk}} = {\text {poly}}(k)\) independent of the length of the messages that \(\Gamma \) can encrypt.

  3. A reusable-CRS NIZK system \(\Pi =(\mathsf {CRSGen}, \mathsf {P}, \mathsf {V})\) (cf. Definition 2.4), where the length of the simulation trapdoor \(\mathsf {td}_{s}\) is some \(l_{\mathsf {td}_{s}} = {\text {poly}}(k)\) independent of the size of the proofs that the NIZK can handle.

We stress that the reason we use the IND-WLCCA security notion is that our signature scheme requires encrypting its secret key, which is much longer than the decryption key. To do so, we break the secret key into blocks and encrypt each block separately under the same label (looking ahead, the label will be the signed message). Note that the security of labeled public key encryption schemes for arbitrary-length messages is not implied by the security of an IND-LCCA encryption scheme for fixed-length messages: the adversary can change the order of the ciphertexts within a specific set of ciphertexts and ask for a decryption of the modified ciphertext. We therefore work with a weaker notion of security that is sufficient for designing secure signature schemes and is easier to instantiate, as demonstrated in Sect. 5.
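The block-wise encryption just described can be sketched as follows. The fixed-length scheme's `enc`/`dec` interface and the block size are hypothetical placeholders. Note that permuting the returned list of ciphertexts yields a different ciphertext vector under the same label, which is exactly the kind of decryption query IND-LCCA would allow and IND-WLCCA forbids.

```python
# Sketch of block-wise labeled encryption of a long secret key under a single
# label (the signed message); enc/dec stand in for a hypothetical fixed-length
# IND-WLCCA scheme.
BLOCK = 32  # illustrative block size in bytes

def enc_blocks(enc, ek, label: bytes, secret: bytes) -> list:
    blocks = [secret[i:i + BLOCK] for i in range(0, len(secret), BLOCK)]
    return [enc(ek, label, b) for b in blocks]   # same label for every block

def dec_blocks(dec, dk, label: bytes, cts: list) -> bytes:
    return b"".join(dec(dk, label, c) for c in cts)
```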

Our signature scheme \(\Sigma \) is formally defined as follows:

Key Generation, \(\mathsf {Gen}(1^{k})\)::

Sample \(s \leftarrow \mathsf {Gen}_{H}(1^{k})\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k})\). Furthermore, sample \((\mathsf {crs}, \mathsf {td}_{s}) \leftarrow S_{1}(1^{k})\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\), where \(S = (S_{1}, S_{2})\) is the simulator for \(\Pi \). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

Signing, \(\mathsf {Sig}(\mathsf {sk},m)\)::

Compute \(C \!=\! \mathsf {Enc}^{m}(\mathsf {ek},x)\). Using \(\mathsf {crs}\) and \(\Pi \), generate a NIZK proof \(\pi \) proving that \(\exists x\) such that \((C \!=\! \mathsf {Enc}^{m}(\mathsf {ek},x) \wedge y=H_{s}(x))\). Output \(\sigma = (C, \pi )\).

Verifying, \(\mathsf {Ver}(\mathsf {vk}, m, \sigma )\)::

Parse \(\sigma \) as \(C, \pi \). Use \(\mathsf {crs}\) and \(\mathsf {V}\) to verify the NIZK proof \(\pi \). Output \(1\) if the proof verifies correctly and \(0\) otherwise.
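For readability, the structure of \(\Sigma \) can be sketched in code as follows. All primitives are passed in as abstract callables; the names and argument orders are illustrative only, not a concrete API.

```python
import os

# Structural sketch of Sigma = (Gen, Sig, Ver); GenH, KeyGen, S1, H, Enc,
# P and V are abstract stand-ins for the primitives listed above.
def gen(k, GenH, KeyGen, S1, H, l_in):
    s = GenH(k)
    ek, dk = KeyGen(k)
    crs, td_s = S1(k)             # CRS sampled with a simulation trapdoor
    x = os.urandom(l_in // 8)     # secret key: a uniform l_in-bit string
    y = H(s, x)
    return x, (y, s, ek, crs)     # (sk, vk); dk and td_s are not kept

def sig(sk, m, vk, Enc, P):
    y, s, ek, crs = vk
    C = Enc(ek, m, sk)            # encrypt sk under label m
    pi = P(crs, (C, m, ek, y, s), sk)  # prove: C encrypts x with H_s(x) = y
    return C, pi

def ver(vk, m, sigma, V):
    y, s, ek, crs = vk
    C, pi = sigma
    return V(crs, (C, m, ek, y, s), pi)
```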

As explained in [13], a NIZK proof system together with a CCA-secure encryption scheme implies a specific instantiation of a true-simulation extractable (tSE) NIZK. An alternative instantiation of tSE composes a simulation-sound NIZK with a CPA-secure encryption scheme; this approach was used in [26]. We note that our proof carries over to this instantiation as well.

Theorem 3.2

Assume \(H, \Gamma =(\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\) and \(\Pi = (\mathsf {CRSGen}, \mathsf {P}, \mathsf {V})\) have the properties listed above. Then the following holds:

  1. If we consider the class \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\), then \(\Sigma \) is \(2^{-k_{{\text {W}}}}\)-EU-CMAA where \(k_{{\text {W}}} = k + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}}\).

  2. If we consider the class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), then \(\Sigma \) is \(2^{-k_{{\text {S}}}}\)-EU-CMAA where \(k_{{\text {S}}} = k + l_{\mathsf {s}} + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}}\).

Specifically, we claim that the best success against \(\Sigma \) in the forging game with \(2^{-k_{{\text {W}}}}\)-hard leakage by a PPT adversary \(\mathcal {A}\) is \(2^{-k} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4}\), where \(u\) is a polynomial bound on the number of signing queries and

  • \(\varepsilon _{0}\) and \(\varepsilon _{3}\) are the advantages of some PPT adversaries in the ZK game against \(\Pi \) with a security parameter \(k_{\mathsf {td}_{s}}\),

  • \(\varepsilon _{1}\) is the success probability of some PPT adversary in the soundness game against \(\Pi \) with a security parameter \(k_{\mathsf {td}_{s}}\),

  • \(\varepsilon _{2}\) is the probability that some PPT adversary wins the second preimage game against \(H\) with a security parameter \(k_{\mathsf {s}}\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\),

  • \(\varepsilon _{4}\) is the advantage of some PPT adversary in the IND-WLCCA game against \(\Gamma \) with a security parameter \(k_{{\text {S}}}\).

The intuition behind the proof of security is that a forged signature contains an encryption of the secret key \(x\), so forging leads to extracting \(x\) using \(\mathsf {dk}\), giving a reduction to the assumption that it is hard to compute \(x\) given the leakage. In this reduction, the signing oracle is simulated by encrypting \(0^{l_{\mathsf {in}}}\) and simulating the proofs using the simulation trapdoor \(\mathsf {td}_{s}\). By the reusable-CRS NIZK property and IND-WLCCA security, this still leads to an extraction of \(x\). The only hurdle is that given \((\mathsf {vk}, h(\mathsf {sk},\mathsf {vk}))\), we do not know \(\mathsf {dk}\) or \(\mathsf {td}_{s}\). We can, however, guess these with probabilities \(2^{-l_{\mathsf {dk}}}\) and \(2^{-l_{\mathsf {td}_{s}}}\), respectively. This is why we only get security \(k_{{\text {W}}} = k + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}}\). When we prove security for \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), the reduction is not given \(\mathsf {vk}\) either, so we additionally have to guess \(s\) and \(y\), leading to \(k_{{\text {S}}} = k + l_{\mathsf {s}} + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}}\).

We note that it is primarily security against the class \(\mathcal {H}_{\mathsf {ow}}\) that we are interested in, as this leakage class is independent of the signature scheme in general and of the primitives used by the signature scheme in particular (like the hash function), except via the length of the public key (see the remark after Theorem 3.1). Note also that the leakage class \(\mathcal {H}_{\mathsf {ow}}(2^{-k_{{\text {S}}}})\) that we prove security against is independent of the length of the secret key, so we can again obtain any desired leakage bound simply by making the secret key longer. In particular, if we set \(l_{\mathsf {in}} = k + l_{\mathsf {s}} + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}} + L\), then leaking \(L\) bits from \(x\) is admissible leakage. Since, by the assumption on our primitives, \(l_{\mathsf {s}}, l_{\mathsf {dk}}, l_{\mathsf {td}_{s}}\) and \(l_{\mathsf {out}}\) do not grow with \(l_{\mathsf {in}}\), it follows that we can set \(L\) to be any polynomial and remain secure while leaking any fraction \((1-k^{-O(1)})\) of the secret key.
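As a purely illustrative calculation (the individual lengths below are arbitrary placeholders, not parameters suggested by the scheme): with \(k = 128\) and \(l_{\mathsf {s}} = l_{\mathsf {dk}} = l_{\mathsf {td}_{s}} = l_{\mathsf {out}} = 128\),

$$\begin{aligned} k_{{\text {S}}}&= k + l_{\mathsf {s}} + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}} = 5 \cdot 128 = 640, \\ l_{\mathsf {in}}&= k_{{\text {S}}} + L = 640 + 10^{6} \text { for } L = 10^{6}, \quad \frac{L}{l_{\mathsf {in}}} = \frac{10^{6}}{10^{6} + 640} > 1 - 2^{-10}, \end{aligned}$$

i.e., all but a \(2^{-10}\) fraction of the secret key may be leaked.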

Proof

Let \(\mathcal {A}\) be any PPT adversary attacking our scheme and let \(W\) be the event that \(\mathcal {A}\) wins the game. We derive a bound on \(\text {Pr}[W]\). We start by writing out the forging game \(\mathsf {Game}\) for our particular scheme:

Key generation: :

Sample \(s \leftarrow \mathsf {Gen}_{H}\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k_{{\text {S}}}})\). Also, sample \((\mathsf {crs}, \mathsf {td}_{s}) \leftarrow S_{1}(1^{k_{\mathsf {td}_{s}}})\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

Leakage: :

Give \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\) to \(\mathcal {A}\) along with \(h(\mathsf {sk},\mathsf {vk})\).

Signing Oracle: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute the ciphertext \(C = \mathsf {Enc}^{m}(\mathsf {ek},x)\). Using \(\mathsf {crs}\), generate a NIZK proof \(\pi \) proving that \(\exists x\) such that \((C =\) \( \mathsf {Enc}^{m}(\mathsf {ek},x) \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). The adversary wins iff \(\pi ^{*}\) is an acceptable proof that \(\exists x^{*}\) such that \((C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*}) \wedge y=H_{s}(x^{*}))\), and \(m^{*}\) was not queried. Output \(1\) if \(\mathcal {A}\) wins the game and output \(0\) otherwise.

Clearly, \(\text {Pr}[W] = \text {Pr}[\mathsf {Game}=1].\)

Let \(\mathsf {Game}_{0}\) denote the game which proceeds as \(\mathsf {Game}\), with the following change:

Key generation 0: :

Sample \(s \leftarrow \mathsf {Gen}_{H}\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k_{{\text {S}}}})\). Moreover, sample \(\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k_{\mathsf {td}_{s}}})\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

The only change from \(\mathsf {Game}\) is that we run with \(\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k_{\mathsf {td}_{s}}})\) instead of a \(\mathsf {crs}\) sampled along with a simulation trapdoor \(\mathsf {td}_{s}\). Note, however, that a PPT adversary which can distinguish these two distributions with advantage \(\varepsilon _{0}\) can win the ZK game against \((\mathsf {CRSGen}, \mathsf {P}, \mathsf {V})\) with advantage \(\varepsilon _{0}\), using \(0\) queries to the oracle which gives either real proofs or simulated proofs. A simple reduction thus shows that,

$$\begin{aligned} \text {Pr}[\mathsf {Game}= 1] - \text {Pr}[\mathsf {Game}_{0} = 1] \le \varepsilon _{0} \end{aligned}$$

where \(\varepsilon _{0}\) is the advantage of some PPT adversary against the ZK game against \((\mathsf {CRSGen},\) \( \mathsf {P}, \mathsf {V})\) with a security parameter \(k_{\mathsf {td}_{s}}\).

Let \(\mathsf {Game}_{1}\) denote the game which proceeds as \(\mathsf {Game}_{0}\), with the following change:

Calling the Game 1: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). The adversary wins iff \(\pi ^{*}\) is an acceptable proof that \(\exists x^{*}\) such that \((C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*}) \wedge y=H_{s}(x^{*}) )\), and \(m^{*}\) was not queried. If \(\mathcal {A}\) wins the game, then compute \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk},C^{*})\). Output \(1\) iff \(\mathcal {A}\) wins the game and \(y=H_{s}(x^{*})\).

The only change is that we only output \(1\) if the extra condition \(y=H_{s}(x^{*})\) holds. Note, however, that if \(\mathsf {Game}_{0}\) outputs \(1\) while \(\mathsf {Game}_{1}\) outputs \(0\), then \(y \ne H_{s}(x^{*})\) for the decrypted \(x^{*}\). By the perfect decryption, this implies that \(\not \exists ~x^{*}\) such that \((C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*}) \wedge y=H_{s}(x^{*}))\). Hence, in \(\mathsf {Game}_{1}\) the adversary computes a proof \(\pi ^{*}\) for a false statement with probability at least \(\text {Pr}[\mathsf {Game}_{0} = 1] - \text {Pr}[\mathsf {Game}_{1} = 1]\). Specifically, given a CRS sampled by \(\mathsf {crs}\leftarrow \mathsf {CRSGen}(1^{k_{\mathsf {td}_{s}}})\), \(\mathsf {Game}_{1}\) can be emulated in polynomial time with that specific \(\mathsf {crs}\) in \(\mathsf {vk}\), implying that

$$\begin{aligned} \text {Pr}[\mathsf {Game}_{0} = 1] - \text {Pr}[\mathsf {Game}_{1} = 1] \le \varepsilon _{1}\, \end{aligned}$$

where \(\varepsilon _{1}\) is the probability that some PPT adversary successfully attacks the soundness property of \(\Pi \) with a security parameter \(k_{\mathsf {td}_{s}}\).

Let \(\mathsf {Game}_{2}\) denote the game which proceeds as \(\mathsf {Game}_{1}\), with the following change:

Calling the Game 2: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). The adversary wins iff \(\pi ^{*}\) is an acceptable proof that \(\exists x^{*}\) such that \(( C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*})\wedge y=H_{s}(x^{*}))\), and \(m^{*}\) was not queried. If \(\mathcal {A}\) wins the game, then compute \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk},C^{*})\). Output \(1\) iff \(\mathcal {A}\) wins the game and \(x^{*} = x\).

Note that if we run \(\mathsf {Game}_{2}\), then with probability at least \(\text {Pr}[\mathsf {Game}_{1} = 1] - \text {Pr}[\mathsf {Game}_{2} = 1]\) we have that \(x^{*} \ne x\) and \(y = H_{s}(x^{*})\). Specifically, we can take a random \(s\) and a random \(x\) as input and emulate \(\mathsf {Game}_{2}\) in polynomial-time, with that specific \(s\) as key for \(H_{s}\) and that specific \(x\) as signing key \(\mathsf {sk}\), thus

$$\begin{aligned} \text {Pr}[\mathsf {Game}_{1} = 1] - \text {Pr}[\mathsf {Game}_{2} = 1] \le \varepsilon _{2}\, \end{aligned}$$

where \(\varepsilon _{2}\) is the probability that some PPT adversary wins the second preimage game against \(H\) with a security parameter \(k_{\mathsf {s}}\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\).

Let \(\mathsf {Game}_{3}\) be the game which proceeds as follows:

Key generation 3: :

Sample \(s \leftarrow \mathsf {Gen}_{H}\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k_{{\text {S}}}})\). Furthermore, sample \((\mathsf {crs}, \mathsf {td}_{s}) \leftarrow S_{1}(1^{k_{\mathsf {td}_{s}}})\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

Leakage 3: :

Give \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\) to \(\mathcal {A}\) along with \(h(\mathsf {sk},\mathsf {vk})\).

Signing Oracle 3: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute \(C = \mathsf {Enc}^{m}(\mathsf {ek},x)\). Using \(\mathsf {td}_{s}\), generate a simulated NIZK \(\pi \) proving that \(\exists x\) such that \((C = \mathsf {Enc}^{m}(\mathsf {ek},x)\) \( \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game 3: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). The adversary wins iff \(\pi ^{*}\) is an acceptable proof that \(\exists x^{*}\) such that \(( C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*}) \wedge y=H_{s}(x^{*}))\), and \(m^{*}\) was not queried. If \(\mathcal {A}\) wins the game, then compute \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk},C^{*})\). Output \(1\) iff \(\mathcal {A}\) wins the game and \(x^{*} = x\).

The only difference between \(\mathsf {Game}_{2}\) and \(\mathsf {Game}_{3}\) is whether we give real or simulated proofs. Specifically, we can take \(\mathsf {crs}\) as input, plus access to an oracle \(\mathcal {O}\) which produces either real proofs under \(\mathsf {crs}\) or simulated proofs under \(\mathsf {crs}\), and produce the output of \(\mathsf {Game}_{2}\) or \(\mathsf {Game}_{3}\), respectively, in polynomial time, thus

$$\begin{aligned} \text {Pr}[\mathsf {Game}_{2} = 1] - \text {Pr}[\mathsf {Game}_{3} = 1] \le \varepsilon _{3}\, \end{aligned}$$

where \(\varepsilon _{3}\) is the advantage of some PPT adversary in the ZK game against \(\Pi \) with a security parameter \(k_{\mathsf {td}_{s}}\).

Let \(\mathsf {Game}_{4}\) be \(\mathsf {Game}_{3}\) with the following change:

Signing Oracle 4: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute \(C = \mathsf {Enc}^{m}(\mathsf {ek},0^{l_{\mathsf {in}}})\). Using \(\mathsf {td}_{s}\), generate a simulated NIZK \(\pi \) proving that \(\exists x\) such that \(( C = \mathsf {Enc}^{m} (\mathsf {ek},x)\) \( \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Consider the following adversary \(\mathcal {B}^{j}\) for the labeled IND-WLCCA game against \(\Gamma \), where \(j\) is a natural number indexing the hybrid.

Key generation 3-4: :

Sample \(s \leftarrow \mathsf {Gen}_{H}\) and get \(\mathsf {ek}\) as input from the IND-WLCCA game. Furthermore, sample \(\mathsf {crs}\) along with a simulation trapdoor \(\mathsf {td}_{s}\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

Leakage 3-4: :

Give \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\) to \(\mathcal {A}\) along with \(h(\mathsf {sk},\mathsf {vk})\).

Signing Oracle 3-4: :

In the \(i\)’th signing request, proceed as follows: If \(i < j\), then sign as in \(\mathsf {Game}_{3}\). If \(i > j\), then sign as in \(\mathsf {Game}_{4}\). If \(i = j\), then sign as follows:

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Submit \((m,x, 0^{l_{\mathsf {in}}})\) to the encryption oracle and get back \(C\). Using \(\mathsf {td}_{s}\), generate a simulated NIZK \(\pi \) proving that \(\exists x\) such that \((C = \mathsf {Enc}^{m}(\mathsf {ek},x) \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game 3-4: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). The adversary wins iff \(\pi ^{*}\) is an acceptable proof that \(\exists x^{*}\) such that \(( C^{*} = \mathsf {Enc}^{m^{*}}(\mathsf {ek},x^{*}) \wedge y=H_{s}(x^{*}))\), and \(m^{*}\) was not queried. If \(\mathcal {A}\) wins the game, then query the decryption oracle on \((m^{*}, C^{*})\) and get back \(x^{*}\). Output \(1\) iff \(\mathcal {A}\) wins the game and \(x^{*} = x\).

This is an admissible IND-WLCCA adversary as \(\vert x \vert = l_{\mathsf {in}}\) and because the label in the \((m^{*}, C^{*})\) submitted to the decryption oracle is different from all the labels in the \((m, x)\) submitted to the encryption oracle, as a condition for \(\mathcal {A}\) winning is that \(m^{*}\) was not queried to the signing oracle. Let \(u\) be a polynomial upper bound on the number of signing queries of \(\mathcal {A}\). By our construction, we have that the sum of the advantages of adversaries \(\mathcal {B}^{1}, \ldots , \mathcal {B}^{u}\) is an upper bound on \(\vert \text {Pr}[\mathsf {Game}_{3} = 1] - \text {Pr}[\mathsf {Game}_{4} = 1] \vert \). It follows that

$$\begin{aligned} \text {Pr}[\mathsf {Game}_{3} = 1] - \text {Pr}[\mathsf {Game}_{4} = 1] \le u \varepsilon _{4} \end{aligned}$$

where \(\varepsilon _{4}\) is the advantage of some PPT adversary in the IND-WLCCA game against \(\Gamma \) with a security parameter \(k_{{\text {S}}}\).
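The hybrid structure of \(\mathcal {B}^{j}\)'s signing oracle can be sketched as follows; all callables are abstract stand-ins, and `enc_oracle` represents the IND-WLCCA encryption oracle.

```python
# Sketch of B^j's answer to the i'th signing query: queries before the j'th
# encrypt x (as in Game_3), the j'th goes to the IND-WLCCA encryption oracle
# on (x, 0^{l_in}), and later queries encrypt zeros (as in Game_4).
def hybrid_sign(i, j, m, x, zeros, ek, td_s, Enc, enc_oracle, simulate):
    if i < j:
        C = Enc(ek, m, x)              # as in Game_3
    elif i == j:
        C = enc_oracle(m, x, zeros)    # encrypts x or 0^{l_in}; B^j does not know which
    else:
        C = Enc(ek, m, zeros)          # as in Game_4
    pi = simulate(td_s, m, C)          # simulated NIZK via the trapdoor
    return C, pi
```

Summing the distinguishing advantages over \(j = 1, \ldots, u\) telescopes into the bound stated above.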

Consider then the following algorithm \(\mathcal {B}_{5}(x)\), which takes \(x \in \{0,1\}^{l_{\mathsf {in}}}\) as input.

Key generation 5: :

Sample \(s \leftarrow \mathsf {Gen}_{H}\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k_{{\text {S}}}})\). Furthermore, sample \((\mathsf {crs}, \mathsf {td}_{s}) \leftarrow S_{1}(1^{k_{\mathsf {td}_{s}}})\). Get \(x \in \{0,1\}^{l_{\mathsf {in}}}\) as input. Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\).

Leakage 5: :

Give \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\) to \(\mathcal {A}\) along with \(h(\mathsf {sk},\mathsf {vk})\).

Signing Oracle 5: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute \(C = \mathsf {Enc}^{m}(\mathsf {ek},0^{l_{\mathsf {in}}})\). Using \(\mathsf {td}_{s}\), generate a simulated NIZK \(\pi \) proving that \(\exists x\) such that \((C = \mathsf {Enc}^{m} (\mathsf {ek},x)\) \( \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game 5: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). Output \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk},C^{*})\).

Clearly,

$$\begin{aligned} \text {Pr}_{x \leftarrow \{0,1\}^{l_{\mathsf {in}}}}[\mathcal {B}_{5}(x) = x] \ge \text {Pr}[\mathsf {Game}_{4} = 1]. \end{aligned}$$

Consider then the following algorithm \(\mathcal {B}_{6}(\mathsf {vk},a)\), which takes a verification key for \(\Sigma \) and some auxiliary input \(a \in \{0,1\}^{*}\) as input.

Key generation 6: :

Get the input \((\mathsf {vk},a)\) and parse \(\mathsf {vk}\) as \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\). Sample \(\mathsf {dk}' \leftarrow \{0,1\}^{l_{\mathsf {dk}}}\) and \(\mathsf {td}_{s}' \leftarrow \{0,1\}^{l_{\mathsf {td}_{s}}}\).

Leakage 6: :

Give \(\mathsf {vk}= (y,s,\mathsf {ek},\mathsf {crs})\) to \(\mathcal {A}\) along with \(a\).

Signing Oracle 6: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute \(C = \mathsf {Enc}^{m}(\mathsf {ek},0^{l_{\mathsf {in}}})\). Using \(\mathsf {td}_{s}'\), generate a simulated NIZK proof \(\pi \) proving that \(\exists x\) such that \(( C =\) \( \mathsf {Enc}^{m}(\mathsf {ek},x) \wedge y=H_{s}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game 6: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). Output \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk}',C^{*})\).

Let \(V\) denote the distribution on \((\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s})\) produced by sampling as in \(\mathsf {Gen}\), i.e., \(V\) is produced as follows: Sample \(s \leftarrow \mathsf {Gen}_{H}(1^{k_{\mathsf {s}}})\) and \((\mathsf {ek},\mathsf {dk}) \leftarrow \mathsf {KeyGen}(1^{k_{{\text {S}}}})\). Furthermore, sample \((\mathsf {crs}, \mathsf {td}_{s}) \leftarrow S_{1}(1^{k_{\mathsf {td}_{s}}})\) and \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\). Compute \(y = H_{s}(x)\). Set \((\mathsf {sk},\mathsf {vk}) = (x,(y,s,\mathsf {ek},\mathsf {crs}))\). Output \((\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s})\). By construction

$$\begin{aligned}&\text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) \\&\quad = \mathsf {sk}\vert \mathsf {dk}' = \mathsf {dk}\wedge \mathsf {td}_{s}'=\mathsf {td}_{s}] = \text {Pr}_{x \leftarrow \{0,1\}^{l_{\mathsf {in}}}}[\mathcal {B}_{5}(x) = x]\, \end{aligned}$$

where \(\mathsf {dk}'\) and \(\mathsf {td}_{s}'\) are the values sampled by \(\mathcal {B}_{6}\). Since \(\text {Pr}[\mathsf {dk}' = \mathsf {dk}\wedge \mathsf {td}_{s}'=\mathsf {td}_{s}] = 2^{-l_{\mathsf {dk}}-l_{\mathsf {td}_{s}}}\), and these guesses are independent of the rest of the experiment, it follows that

$$\begin{aligned} \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}] \ge 2^{-l_{\mathsf {dk}}-l_{\mathsf {td}_{s}}} \text {Pr}_{x \leftarrow \{0,1\}^{l_{\mathsf {in}}}}[\mathcal {B}_{5}(x) = x]\, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \text {Pr}_{x \leftarrow \{0,1\}^{l_{\mathsf {in}}}}[\mathcal {B}_{5}(x) = x] \le 2^{l_{\mathsf {dk}}+l_{\mathsf {td}_{s}}} \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}]. \end{aligned}$$

The distribution on \((\mathsf {sk},\mathsf {vk})\) induced by \(V\) is identical to that induced by the key generation of \(\Sigma \), and \(\mathcal {B}_{6}\) is PPT. Consequently, since \(h \in \mathcal {H}_{\mathsf {vkow}}(2^{-k_{{\text {W}}}})\), we can assume that

$$\begin{aligned} \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}] \le 2^{-k_{{\text {W}}}}. \end{aligned}$$

Combining our inequalities so far, we get that

$$\begin{aligned} \text {Pr}[W] \le 2^{l_{\mathsf {dk}}+l_{\mathsf {td}_{s}}-k_{{\text {W}}}} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} = 2^{-k} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4}. \end{aligned}$$
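Spelled out, this bound combines the game hops with the guessing argument:

$$\begin{aligned} \text {Pr}[W]&\le \text {Pr}[\mathsf {Game}_{4} = 1] + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} \le \text {Pr}_{x \leftarrow \{0,1\}^{l_{\mathsf {in}}}}[\mathcal {B}_{5}(x) = x] + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} \\&\le 2^{l_{\mathsf {dk}}+l_{\mathsf {td}_{s}}} \cdot \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}] + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} \\&\le 2^{l_{\mathsf {dk}}+l_{\mathsf {td}_{s}}} \cdot 2^{-k_{{\text {W}}}} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4}, \end{aligned}$$

and the first term equals \(2^{-k}\) since \(k_{{\text {W}}} = k + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}}\).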

Now, if we want to prove security for the class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), we consider the following algorithm \(\mathcal {B}_{7}(a)\), which takes some auxiliary input \(a \in \{0,1\}^{*}\).

Key generation 7: :

Get the input \(a\). Sample \(\mathsf {dk}' \leftarrow \{0,1\}^{l_{\mathsf {dk}}}\) and \(\mathsf {td}_{s}' \leftarrow \{0,1\}^{l_{\mathsf {td}_{s}}}\), let \(\mathsf {ek}'\) be the public key corresponding to \(\mathsf {dk}'\), and let \(\mathsf {crs}'\) be the CRS corresponding to \(\mathsf {td}_{s}'\). Sample \(r \leftarrow \{0,1\}^{l_{\mathsf {s}}}\) and let \(s' = \mathsf {Gen}_{H}(1^{k_{\mathsf {s}}};r)\), and sample \(y' \leftarrow \{0,1\}^{l_{\mathsf {out}}}\). Let \(\mathsf {vk}' = (y',s',\mathsf {ek}',\mathsf {crs}')\).

Leakage 7: :

Give \(\mathsf {vk}'\) to \(\mathcal {A}\) along with \(a\).

Signing Oracle 7: :

 

1.:

Get a message \(m\) from \(\mathcal {A}\).

2.:

Compute \(C = \mathsf {Enc}^{m}(\mathsf {ek}',0^{l_{\mathsf {in}}})\). Using \(\mathsf {td}_{s}'\), generate a simulated NIZK proof \(\pi \) proving that \(\exists x\) such that \(( C = \mathsf {Enc}^{m}(\mathsf {ek}',x) \wedge y'=H_{s'}(x))\). Give \(\sigma = (C, \pi )\) to \(\mathcal {A}\).

Calling the Game 7: :

Get \((m^{*}, (C^{*},\pi ^{*}))\) from \(\mathcal {A}\). Output \(x^{*} = \mathsf {Dec}^{m^{*}}(\mathsf {dk}',C^{*})\).

Let \(V'\) be the same distribution as \(V\) except that it also outputs \(s\) and \(y\), i.e., it outputs \((\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s},s,y)\). By construction

$$\begin{aligned}&\text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s},s,y) \leftarrow V'}[\mathcal {B}_{7}(h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}\mid s'=s \wedge y'=y]\\&\qquad = \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}]\ . \end{aligned}$$

So,

$$\begin{aligned}&\text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s}) \leftarrow V}[\mathcal {B}_{6}(\mathsf {vk},h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}]\\&\quad \le 2^{l_{\mathsf {s}}+l_{\mathsf {out}}}\text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s},s,y) \leftarrow V'}[\mathcal {B}_{7}(h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}]. \end{aligned}$$

The distribution on \((\mathsf {sk},\mathsf {vk})\) induced by \(V'\) is identical to that induced by the key generation of \(\Sigma \), and \(\mathcal {B}_{7}\) is PPT. Consequently, since \(h \in \mathcal {H}_{\mathsf {ow}}(2^{-k_{{\text {S}}}})\), we can assume that

$$\begin{aligned} \text {Pr}_{(\mathsf {vk},\mathsf {sk},\mathsf {dk},\mathsf {td}_{s},s,y) \leftarrow V'}[\mathcal {B}_{7}(h(\mathsf {sk},\mathsf {vk})) = \mathsf {sk}] \le 2^{-k_{{\text {S}}}}. \end{aligned}$$

Combining our inequalities so far, we get that

$$\begin{aligned} \text {Pr}[W] \le 2^{l_{\mathsf {s}}+l_{\mathsf {dk}}+l_{\mathsf {td}_{s}}+l_{\mathsf {out}}-k_{{\text {S}}}} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} = 2^{-k} + \sum _{i=0}^{3} \varepsilon _{i} + u \varepsilon _{4} . \end{aligned}$$

\(\square \)

Our concrete instantiation has all the needed properties except that the length of the hash function key \(s\) depends on the input length \(l_{\mathsf {in}}\). This, however, can be handled generically as follows.

Lemma 3.1

If there exists an \(\varepsilon \)-secure family of second preimage resistant hash functions \(H\), with key sampling algorithm \(\mathsf {Gen}_{H}\), and a \(\delta \)-secure pseudo-random generator \(\mathsf {PRG}\), then there exists an \((\varepsilon +\delta )\)-secure family of second preimage resistant hash functions \(H'\), with key sampling algorithm \(\mathsf {Gen}_{H}'\), where \(s \leftarrow \mathsf {Gen}_{H}'(1^{k})\) can be guessed with probability \(2^{-k_{0}}\), for \(k_{0} = {\text {poly}}(k)\) the seed length of \(\mathsf {PRG}\) with security parameter \(k\).

Proof

Let \(\mathsf {Gen}_{H}'(1^{k}; r \in \{0,1\}^{k_{0}}) = \mathsf {Gen}_{H}(1^{k};\mathsf {PRG}(r))\). It is first clear that an output of \(\mathsf {Gen}_{H}'(1^{k}; r)\) for \(r \in \{0,1\}^{k_{0}}\) can be guessed with probability \(2^{-k_{0}}\) by guessing \(r\). Next, let

$$\begin{aligned} \varepsilon = \text {Pr}_{s \leftarrow \mathsf {Gen}_{H} \wedge x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\wedge x^{*} \leftarrow \mathcal {A}(s,x)}[H_{s}(x^{*}) = H_{s}(x) \wedge x^{*} \ne x], \end{aligned}$$

and let

$$\begin{aligned} \varepsilon ' = \text {Pr}_{s \leftarrow \mathsf {Gen}_{H}'\wedge x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\wedge x^{*} \leftarrow \mathcal {A}(s,x)}[H_{s}(x^{*}) = H_{s}(x) \wedge x^{*} \ne x]. \end{aligned}$$

Then, consider the PPT algorithm \(\mathcal {B}(s)\) which samples \(x \leftarrow \{0,1\}^{l_{\mathsf {in}}}\), invokes \(x^{*} \leftarrow \mathcal {A}(s,x)\), and finally outputs \(1\) iff \(H_{s}(x^{*}) = H_{s}(x)\) and \(x^{*} \ne x\). Observe that \(\varepsilon ' = \text {Pr}[\mathcal {B}(\mathsf {Gen}_{H}(\mathsf {PRG}(r \leftarrow \{0,1\}^{k_{0}}))) = 1]\) and \(\varepsilon = \text {Pr}[\mathcal {B}(\mathsf {Gen}_{H}(r \leftarrow \{0,1\}^{*})) = 1]\). Since \(\mathsf {PRG}\) is a \(\delta \)-secure pseudo-random generator, it follows that \(\vert \varepsilon ' - \varepsilon \vert \le \delta \). This concludes the proof, since it implies that \(H'\) is second preimage resistant with a short key \(s\). \(\square \)
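A minimal sketch of this transformation follows, with SHAKE-256 standing in for the PRG purely for illustration; \(\mathsf {Gen}_{H}\) is assumed to be deterministic given its randomness tape, and all names are ours.

```python
import hashlib

def prg(seed: bytes, out_len: int) -> bytes:
    # Illustrative stand-in PRG expanding a short seed to out_len bytes.
    return hashlib.shake_256(seed).digest(out_len)

def gen_H_prime(gen_H, k: int, seed: bytes, rand_len: int):
    # Gen_H'(1^k; r) = Gen_H(1^k; PRG(r)): the key s is now determined by the
    # k0-bit seed, so it can be guessed with probability 2^{-k0}.
    return gen_H(k, prg(seed, rand_len))
```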

Remark

We note that we can also prove security in the stronger model, where the leakage function \(h\) sees not only \(\mathsf {sk}\), but also the randomness used by \(\mathsf {Gen}\) to generate \((\mathsf {vk},\mathsf {sk})\). In that case, we need that the distribution on \(\mathsf {ek}\) induced by sampling \((\mathsf {ek},\mathsf {dk})\) with \(\mathsf {KeyGen}_{\Gamma }\), the distribution of a CRS sampled along with a trapdoor, and the distribution on \(s\) induced by sampling \(s \leftarrow \mathsf {Gen}_{H}\) can all be sampled with invertible sampling. This is indeed the case for our concrete instantiation. The only problematic point is Lemma 3.1: even if \(\mathsf {Gen}_{H}(\{0,1\}^{*})\) has invertible sampling, it would be very surprising if \(\mathsf {Gen}_{H}(\mathsf {PRG}(\{0,1\}^{k_{0}}))\) had invertible sampling. So, if the probability of guessing a random \(s \leftarrow \mathsf {Gen}_{H}\) is not independent of the input length of \(H_{s}\), we cannot generically add this property. One can circumvent this problem as in [13] and consider \(s\) a public parameter of the scheme. This is modeled by sampling \(s\) in a parameter generation phase prior to the key generation phase and giving \(s\) as input to all entities. This would in turn make \(s\) an input to the reduction (called \(\mathcal {B}_{7}\) in our proof), circumventing the need to guess \(s\). We would then get security when considering the class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) for \(k_{{\text {S}}} = k + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}}\).

4 Applications: Auxiliary Input Secure Identification Schemes

In an identification scheme \(\mathsf {ID}\), a prover attempts to prove its identity to a verifier. Specifically, it allows a prover to convince the verifier that it possesses some secret information without revealing anything about it. Identification schemes that tolerate leakage in the bounded-retrieval model were already presented in [1]. In this section, we define two notions of identification schemes with security in the presence of auxiliary input and present two constructions meeting these notions. More formally, for a security parameter \(k\), an identification scheme \(\mathsf {ID}\) consists of three PPT algorithms \(\mathsf {ID}=(\mathsf {KeyGen},\mathsf {P},\mathsf {V})\) defined as follows:

  • \((\mathsf {pk},\mathsf {sk}) \leftarrow \mathsf {KeyGen}(1^{k})\): Outputs the public parameters of the scheme and a valid key pair.

  • \((\mathsf {P}(\mathsf {pk},\mathsf {sk}),\mathsf {V}(\mathsf {pk}))\): A (possibly) interactive protocol in which \(\mathsf {P}\) tries to convince \(\mathsf {V}\) of its identity by using its secret key \(\mathsf {sk}\). The verifier \(\mathsf {V}\) outputs either \(1\) for accept or \(0\) for reject.

We require that \(\mathsf {ID}\) is complete in the sense that an interaction with an honest prover is always accepted by the verifier. Passive security of an identification scheme \(\mathsf {ID}\) considers a polynomial-time adversary \(\mathcal {A}\) that takes as input the public key \(\mathsf {pk}\) and observes an arbitrary number of runs of the protocol. After this phase is completed, \(\mathcal {A}\) tries to impersonate \(\mathsf {P}(\mathsf {pk},\mathsf {sk})\) by engaging in an interaction with \(\mathsf {V}(\mathsf {pk})\). An identification scheme \(\mathsf {ID}\) is passively secure if every polynomial-time adversary \(\mathcal {A}\) impersonating the prover succeeds with at most negligible probability. We extend this definition to incorporate leakage from the prover’s secret key. To this end, we let the adversary obtain \(h(\mathsf {pk},\mathsf {sk})\) for an admissible leakage function \(h \in \mathcal {H}\). More formally, consider the following definition:

Definition 4.1

(Secure identification schemes under auxiliary input attacks) An identification scheme \(\mathsf {ID}=(\mathsf {KeyGen},\mathsf {P},\mathsf {V})\) is passively secure under impersonation attacks w.r.t. auxiliary inputs \((\mathsf {IDAUX}_{\mathsf {vkow}})\) from \(\mathcal {H}\) if for any PPT adversary \(\mathcal {A}\) and any function \(h \in \mathcal {H}\) there exists a negligible function \(\mathsf {negl}(\cdot )\) such that, for sufficiently large \(k \in \mathbb N\), the experiment below outputs 1 with probability at most \(\mathsf {negl}(k)\):

  1. Sample \((\mathsf {pk},\mathsf {sk}) \leftarrow \mathsf {KeyGen}(1^{k})\) and give \(\mathsf {pk}\) and the leakage \(h(\mathsf {pk},\mathsf {sk})\) to \(\mathcal {A}\).

  2. \(\mathcal {A}\) gets access to the protocol \((\mathsf {P}(\mathsf {pk},\mathsf {sk}),\mathsf {V}(\mathsf {pk}))\) for a polynomial number of times.

  3. \(\mathcal {A}\) impersonates the prover and interacts with an honest verifier \(\mathsf {V}(\mathsf {pk})\). If \(\mathsf {V}(\mathsf {pk})\) accepts, then output \(1\); otherwise output \(0\).

As explained in Sect. 2.5, it is possible to consider two different classes of leakage functions. We refer to \(\ell (k)\)-\(\mathsf {IDAUX}_{\mathsf {ow}}\) security if the identification scheme is secure in the presence of leakage functions from the class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), and to \(\ell (k)\)-\(\mathsf {IDAUX}_{\mathsf {vkow}}\) security when the leakage function is picked from the class \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\); the latter schemes are secure only in the presence of leakage functions that are hard to invert even given the public key of the scheme.

We stress that our construction is insecure in the presence of active attacks, where the adversary gets to determine the challenges for the prover, since our signature scheme is only secure for random messages picked by the challenger in the security game, whereas in the active scenario the adversary picks the messages to be signed by itself. This implies that we cannot reduce an active attack to the security of our signature scheme from Sect. 3.2.

4.1 Non-interactive Identification from Signature Schemes

In this section, we present constructions for both auxiliary input notions of secure identification schemes. More specifically, it is well known that non-interactive identification can easily be constructed from signature schemes. We demonstrate below that the same argument holds when the building block is a signature scheme with auxiliary input security. Formally, let \(\Sigma =(\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) be a signature scheme that is secure under random message and auxiliary input attacks with respect to a class of functions \(\mathcal {H}\) (cf. Definition 2.8). Then, the following identification scheme \(\mathsf {ID}_{1} = (\mathsf {KeyGen}_{1},\mathsf {P}_{1},\mathsf {V}_{1})\) is passively secure against auxiliary input attacks with respect to \(\mathcal {H}\).

Key generation, \(\mathsf {KeyGen}_{1}(1^{k})\)::

Sample keys \((\mathsf {pk},\mathsf {sk}) \leftarrow \mathsf {Gen}(1^{k})\) by running the underlying key generation of \(\Sigma \).

Protocol, \((\mathsf {P}(\mathsf {pk},\mathsf {sk}),\mathsf {V}(\mathsf {pk}))\)::

The non-interactive identification uses \(\mathsf {Sig}\) and \(\mathsf {Ver}\) from the underlying signature scheme \(\Sigma \):

1.:

The verifier sends a random challenge \(c\) from the message space of \(\Sigma \).

2.:

The prover \(\mathsf {P}(\mathsf {pk},\mathsf {sk})\) computes \(\sigma \leftarrow \mathsf {Sig}(\mathsf {sk},c)\) and sends it to the verifier.

3.:

The verifier accepts if \(\mathsf {Ver}(\mathsf {pk},c,\sigma )=1\); otherwise it rejects.
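In code, one full protocol run looks as follows (a sketch only: `sig`, `ver` and `sample_challenge` are abstract stand-ins for \(\Sigma \)'s algorithms and a message-space sampler).

```python
# Sketch of one run of ID_1 between an honest prover and verifier.
def run_id1(pk, sk, sig, ver, sample_challenge) -> bool:
    c = sample_challenge()    # verifier: random challenge from the message space
    sigma = sig(sk, c)        # prover: a signature on the challenge
    return ver(pk, c, sigma)  # verifier: accept iff Ver(pk, c, sigma) = 1
```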

Theorem 4.2

Let \(k \in \mathbb N\) be the security parameter. For any \(\ell (\cdot )\), if \(\Sigma \!=\! (\mathsf {Gen}, \mathsf {Sig}, \mathsf {Ver})\) is \(\ell (k)\)-RU-RMAA for \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) (resp. for \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\)) according to Definition 2.8, then \(\mathsf {ID}_{1}\) is \(\ell (k)-\mathsf {IDAUX}_{\mathsf {ow}}\) (resp. \(\ell (k)-\mathsf {IDAUX}_{\mathsf {vkow}}\)) secure.

Proof

Let \(\Sigma \) be a signature scheme that is unforgeable under random message attacks for \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) (resp. for \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\)) and let \(\mathsf {ID}_{1}\) be the identification scheme described above. Assume, by contradiction, that \(\mathsf {ID}_{1}\) is not passively secure against impersonation attacks for \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\) (resp. for \(\mathcal {H}_{\mathsf {vkow}}(\ell (k))\)). This implies the existence of a PPT adversary \(\mathcal {A}\) that wins the impersonation game with a non-negligible probability \(p(k)\) for infinitely many \(k\)’s. We use \(\mathcal {A}\) to break the security of \(\Sigma \). Consider the following adversary \({\mathcal {B}}\) playing against a challenger in the RU-RMA game for signature schemes.

  1. \({\mathcal {B}}\) receives a verification key \(\mathsf {pk}\) together with the leakage \(h(\mathsf {pk},\mathsf {sk})\).

  2. \({\mathcal {B}}\) invokes adversary \(\mathcal {A}\) with input \((\mathsf {pk},h(\mathsf {pk},\mathsf {sk}))\).

  3. When \(\mathcal {A}\) wishes to observe an interaction of the identification protocol, \({\mathcal {B}}\) first asks its oracle for a signature, receiving back \((m,\sigma )\), where \(m\leftarrow \mathcal M\) and \(\sigma =\mathsf {Sig}(\mathsf {sk},m)\). \({\mathcal {B}}\) then proceeds in the simulation of the protocol by setting the random challenge of the verifier to \(c=m\), and then simulating the prover by sending back \(\sigma \).

  4. Whenever \(\mathcal {A}\) is ready to impersonate the prover, \({\mathcal {B}}\) asks its challenger for a random message \(m^{*}\) to be signed, which it then forwards to \(\mathcal {A}\) as the challenge.

  5. Finally, \({\mathcal {B}}\) outputs whatever \(\mathcal {A}\) outputs.

Note first that \({\mathcal {B}}\) perfectly simulates the identification protocol’s execution and that its overall running time is polynomial. Moreover, \({\mathcal {B}}\) wins the game whenever \(\mathcal {A}\) correctly impersonates the prover. In other words:

$$\begin{aligned} \text {Pr}[{\mathcal {B}} {\text { wins}}]\ge \text {Pr}[\mathcal {A}{\text { wins}}]\ge p(k). \end{aligned}$$

This is a contradiction to the security of \(\Sigma \) and thus concludes the proof. \(\square \)

5 Security Under the \(K\)-Linear Assumption

In this section, we demonstrate how to instantiate our scheme from Sect. 3.3 with concrete primitives that yield an implementation of an EU-CMAA signature scheme. The hardness of our instantiated scheme follows from the \(K\)-linear assumption defined below (which also implies the hardness of discrete logarithms). Notably, although we use the same building blocks, our instantiation is different from, and simpler than, the one presented in [13].

Hardness assumptions. Our construction relies on the \(K\)-linear assumption. Let \(\mathcal {G}\) be a group generation algorithm, which outputs \((p,\mathbb G, g)\) given \(1^{k}\), where \(\mathbb G\) is the description of a cyclic group of prime order \(p\) and \(g\) is a generator of \(\mathbb G\).

Definition 5.1

(The \(K\)-linear assumption) Let \(K\ge 1\) be a constant. The \(K\)-linear assumption on \(\mathbb G\) states that

$$\begin{aligned} (\mathbb G, g_{0}, g_{1},\ldots , g_{K}, g_{1}^{r_{1}},\ldots , g_{K}^{r_{K}}, g_{0}^{\sum _{i=1}^{K} r_{i}})\approx _{c} (\mathbb G, g_{0}, g_{1},\ldots , g_{K}, g_{1}^{r_{1}},\ldots , g_{K}^{r_{K}}, g_{0}^{r_{0}}) \end{aligned}$$

for \((p,\mathbb G,g)\leftarrow \mathcal {G}(1^{k}), g_{0},\ldots , g_{K}\leftarrow \mathbb G\), and \(r_{0}, r_{1},\ldots , r_{K} \leftarrow \mathbb Z_{p}\).

For \(K=1\), we get the DDH assumption and for \(K=2\), the decisional linear assumption. Note that \(K\)-linear implies \((K + 1)\)-linear. For ease of presentation, from here on we only consider the special case with \(K=2\). We further note that the hardness of \(K\)-linear implies the hardness of the discrete logarithm problem defined as follows.

Definition 5.2

(DL) We say that the discrete logarithm (DL) problem is hard relative to \(\mathbb G\) if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\mathsf {negl}\) such that

$$\begin{aligned} \hbox {Pr}\left[ \mathcal {A}(\mathbb G,p,g,g^{x})=x\right] \le \mathsf{negl}(k)\ , \end{aligned}$$

where \((p,\mathbb G,g)\leftarrow \mathcal {G}(1^{k})\) and \(x\leftarrow \mathbb Z_{p}\).

Definition 5.3

(Bilinear pairing) Let \(\mathbb G, \mathbb G_{T}\) be multiplicative cyclic groups of prime order \(p\) and let \(g\) be a generator of \(\mathbb G\). A map \(e:\mathbb G\times \mathbb G\rightarrow \mathbb G_{T}\) is a bilinear map for \(\mathbb G\) if it has the following properties:

  1. Bi-linearity: \(\forall u,v\in \mathbb G, \forall a,b\in \mathbb Z_{p}, e(u^{a},v^{b})=e(u,v)^{ab}\).

  2. Non-degeneracy: \(e(g,g)\) generates \(\mathbb G_{T}\).

  3. \(e\) is efficiently computable.

We assume that the \(K\)-linear assumption holds in \(\mathbb G\).

We continue with a list of building blocks used in our scheme from Sect. 3.3 and their instantiations:

  1. Second Preimage Resistant Hash Function \(H=(\mathsf {Gen}_{H},H_{s})\). Let \(\mathbb G\) be a group of prime order \(p\). First sample \(g_{1},\ldots ,g_{\ell }\leftarrow \mathsf {Gen}_{H}(1^{k})\) such that \(g_{1},\ldots ,g_{\ell }\) are generators of \(\mathbb G\), and fix the public key \(s=(g_{1},\ldots ,g_{\ell })\). Then, for input \(x\leftarrow \mathbb Z_{p}^{\ell }\), define \(H_{s}(x):=\prod _{i=1}^{\ell } g_{i}^{x_{i}}\). It is simple to verify that second preimage resistance is implied by the hardness of the discrete logarithm in \(\mathbb G\): loosely speaking, finding a collision with respect to \(y\in \mathbb G\) suffices to compute \(\log _{g} g_{\ell }\), given \((\log _{g} g_{1},\ldots ,\log _{g} g_{\ell -1},x,y)\). As shown below, second preimage resistance holds even for a small input domain, such as bits (a toy code sketch of this hash appears after this list).

  2. IND-WLCCA-Secure Encryption Scheme \(\Pi _{\mathsf {cca}}=(\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\). We use a modification of the linear Cramer–Shoup scheme [35] that is IND-LCCA secure (cf. Definition 2.3) under the \(2\)-linear assumption and supports labels [9, 12]. We recall that the security notion required for our proof is IND-WLCCA, where the adversary cannot query the decryption oracle with the label used to compute the challenge. Furthermore, IND-LCCA on fixed-length messages implies IND-WLCCA on arbitrary-length messages; see Appendix 6 for further discussion.

    We adopt notation from [12], writing \((a_{1},a_{2})\) for the components of a vector \(a\) of length 2. For \(a \in \mathbb G\) and \(\alpha \in \mathbb Z_{p}\), we denote by \(a\cdot \alpha = \alpha \cdot a\) the exponentiation \(a^\alpha \); vector multiplications are computed component-wise. Notice that we use bold fonts to denote vectors. Formally, for public parameters \((H_{\mathsf {CL}}, p, \mathbb G, g_{0}, g_{1}, g_{2})\), where \((p,\mathbb G, g_{0})\leftarrow \mathcal {G}(1^{k})\), \(g_{1},g_{2}\leftarrow \mathbb G\) and \(H_{\mathsf {CL}}: \{0,1\}^{*} \rightarrow \mathbb Z_{p}\) is a collision-resistant hash function, and the matrix \(\mathbf {A}\) defined by

    $$\begin{aligned} \mathbf {A}= \left( \begin{array}{ccc} g_{0} &{} g_{1} &{} 1 \\ g_{0} &{} 1 &{} g_{2} \\ \end{array} \right) \end{aligned}$$

    define the following encryption scheme.

    Key generation, \(\mathsf {KeyGen}.\) :

    Choose 3 random vectors \(\mathsf {u}, \mathsf {v}, \mathsf {w}\leftarrow \mathbb Z_{p}^{3}\). Compute the following:

    $$\begin{aligned} \mathbf {d}= \mathbf {A}\cdot \mathsf {v}, ~\mathbf {e}= \mathbf {A}\cdot \mathsf {w}, ~\mathbf {h}= \mathbf {A}\cdot \mathsf {u}. \end{aligned}$$

    Notice that \(\mathbf {d},\mathbf {e},\mathbf {h}\!\in \! \mathbb G^{2}\). Set \((\mathsf {dk},\mathsf {ek}) \!=\! \left( (\mathsf {u}, \mathsf {v}, \mathsf {w}),\! (\mathbf {d},\mathbf {e},\mathbf {h})\!\right) \).

    Encryption, \(\mathsf {Enc}\).:

    To encrypt a message \(m\in \mathbb G\) under label \(L\), choose \(\mathsf {r}\leftarrow \mathbb Z_{p}^{2}\) and compute \(\mathbf {y}= \mathsf {r}^{\top }\cdot \mathbf {A}\in \mathbb G^{3}\). Set

    $$\begin{aligned} a := h_{1}^{r_{1}}\cdot h_{2}^{r_{2}},\, z = a\cdot m,\,c =(d_{1}(e_{1}^{t}))^{r_{1}} \cdot (d_{2}(e_{2}^{t}))^{r_{2}}, \end{aligned}$$

    where \(t=H_{\mathsf {CL}}(\mathbf {y},z,L), \mathbf {d}= (d_{1},d_{2}), \mathbf {e}= (e_{1},e_{2}) , \mathsf {r}= (r_{1},r_{2})\) and \(\mathbf {h}=(h_{1},h_{2})\). Output \(C = (\mathbf {y}, z, c)\).

    Decryption, \(\mathsf {Dec}\).:

    To decrypt ciphertext \(C\) under label \(L\), parse \(C = (\mathbf {y}, z, c)\). Compute

    $$\begin{aligned} t=H_{\mathsf {CL}}(\mathbf {y},z,L),\,\tilde{c} = \mathbf {y}^{\top } \cdot (\mathsf {v}+t \mathsf {w}). \end{aligned}$$

    If \(\tilde{c} = c,\) then output \(z/ (\mathbf {y}^{\top } \cdot \mathsf {u})\). Else output \(\bot \).

  3. NIZK Argument for NP. We consider the NIZK argument of Groth–Sahai [20], which shows how to prove in zero-knowledge, under the \(2\)-linear assumption, that a linear system has a solution (our notation here also follows [12]). Let \(\mathbf{B}\in {\text {M}}_{M\times N}(\mathbb G)\) be a matrix whose rows are \(\mathbf {b}_{i}=(\mathbf {b}_{i,1},\dots , \mathbf {b}_{i,N})\) for \(i=1,\dots ,M\). Let \(\mathbf{c}=(\mathbf{c}_{1},\dots ,\mathbf{c}_{M})\) be a target vector in \(\mathbb G^{M}\). We say that the system \((\mathbf{B},\mathbf{c})\) is satisfiable if there exists a vector \(\mathsf {u}=(\mathsf {u}_{1},\dots ,\mathsf {u}_{N})\in \mathbb Z_{p}^{N}\) such that:

    $$\begin{aligned} \mathbf {b}^{\mathsf {u}_{1}}_{i,1} \mathbf {b}^{\mathsf {u}_{2}}_{i,2}\ldots \mathbf {b}^{\mathsf {u}_{N}}_{i,N} = \mathbf {c}_{i} \end{aligned}$$

    for every \(i\in [1,\dots ,M]\). Another type of proof we adapt from [20] is for proving that an encrypted plaintext is a bit: when encrypting \(g^{x}\), the prover proves that the quadratic expression \(x(1-x)\) equals zero. This proof ensures that a dishonest signer does not encrypt arbitrary plaintexts that multiply into the right hash value but do not enable extraction (since we cannot efficiently compute the discrete logarithm of arbitrary values in \(\mathbb G\)).
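As promised in item 1, here is a toy Python sketch of the hash \(H_{s}(x)=\prod _{i=1}^{\ell } g_{i}^{x_{i}}\). The group is deliberately tiny (for readability only, with no security), and all names are ours.

```python
import random

# Toy parameters: p = 2q + 1 with p, q prime; the squares mod p form the
# order-q subgroup in which the generators g_i live. NOT secure sizes.
p, q = 2039, 1019
ELL = 8  # number of generators = input length

def gen_H(ell: int = ELL):
    gens = []
    while len(gens) < ell:
        g = pow(random.randrange(2, p - 1), 2, p)  # a random square mod p
        if g != 1:
            gens.append(g)  # any square != 1 generates the order-q subgroup
    return gens

def H(s, x):
    # H_s(x) = prod_i g_i^{x_i} mod p, with x a vector over Z_q
    out = 1
    for g_i, x_i in zip(s, x):
        out = (out * pow(g_i, x_i % q, p)) % p
    return out

s = gen_H()
x = [random.randrange(2) for _ in range(ELL)]  # bit input, as in the instantiation
y = H(s, x)
```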

Instantiation. Our instantiation for the scheme in Sect. 3.3 requires the following. First, for a signing key \(x \in \{0,1\}^{\ell }\), consider the bit representation \(x_{1},\ldots ,x_{\ell }\) of \(x\) and compute \(H(x)\) using the hash function \(H\) from above, i.e., \(H(x)=\prod _{i=1}^{\ell } g_{i}^{x_{i}}\). In order to sign a message \(m\), the signer views \(m\) as a label for the labeled encryption scheme specified above and then computes \(\mathsf {Enc}^{m}(\mathsf {ek},g_{1}^{x_{1}}),\ldots ,\mathsf {Enc}^{{m}}(\mathsf {ek},g_{\ell }^{x_{\ell }})\); all ciphertexts with the same label \({m}\). It is easy to see that IND-WLCCA is maintained under such block-wise parallel encryption with respect to the same label. (See Theorem 6.1 in the appendix for a formal proof). The signer then computes a NIZK for proving that these ciphertexts encrypt bits and that they are consistent with the hashed value \(H(x)\) taken from the public key.

We recall first that second preimage resistance still holds even when evaluated on individual bits. Specifically, finding a collision \(x,x'\) implies that these values differ in at least a single bit. This means that \(\prod _{i=1}^{\ell } g_{i}^{x_{i}} = \prod _{i=1}^{\ell } g_{i}^{x'_{i}}\) induces two linear equations so that it is possible to find \(\log _{g} g_{\ell }\), given \(\log _{g} g_{1},\ldots , \log _{g} g_{\ell -1}\). We note that computing the hash function on bits rather than group elements is necessary in order to extract \(x\) from the forgery given by the adversary.

It remains to show how the NIZK proofs are defined. We observe first that for a ciphertext \(c=(c_{1},c_{2},c_{3})\) generated by \(\mathsf {Enc}\), the elements \(c_{1}\) and \(c_{2}\) are component-wise multiplicatively homomorphic (the third element is needed to verify consistency). Thus, given a ciphertext \(c=(c_{1},c_{2},c_{3})\) encrypting \(g^{x}\), one can generate a (partial) encryption of \(g^{1-x}\) using the homomorphic property of our PKE and prove that the product of the underlying plaintexts equals zero. Specifically, the signer proves that the product of \(x\) and \(1-x\) is zero, which implies that \(x\) must be a bit. In addition, it is possible to efficiently compute a (partial) encryption of \(H(x)\) from encryptions of the individual bits of \(x\), denoted by \(c_x=\left( (c_{1_{1}},c_{1_{2}},c_{1_{3}}),\ldots ,(c_{\ell _{1}},c_{\ell _{2}},c_{\ell _{3}})\right) \), by computing the following products

$$\begin{aligned} \tilde{c}_{1}=\prod _{i=1}^{\ell } c_{i_{1}} = \left( \prod _{i=1}^{\ell }\mathbf {y}_{i_{1}},\prod _{i=1}^{\ell }\mathbf {y}_{i_{2}},\prod _{i=1}^{\ell }\mathbf {y}_{i_{3}}\right) \,\mathrm{and}\,\tilde{c}_{2}=\prod _{i=1}^{\ell } c_{i_{2}}. \end{aligned}$$

This implies that if \(c_x\) is correctly computed then the following relation holds

$$\begin{aligned} \tilde{c}_{2}\Big /\prod _{i=1}^{\ell }\left( \mathbf {y}_{i_{1}}^{\mathsf {u}_{1}}\cdot \mathbf {y}_{i_{2}}^{\mathsf {u}_{2}}\cdot \mathbf {y}_{i_{3}}^{\mathsf {u}_{3}}\right) = H(x). \end{aligned}$$

Note that this set of ciphertexts induces \(\ell \) linear equations (in the exponent), with coefficients taken from matrix \(\mathbf {A}\) and variables \(\mathsf{R} = \left( (\mathsf {r}_{1_{1}},\mathsf {r}_{1_{2}}),\ldots ,(\mathsf {r}_{\ell _{1}},\mathsf {r}_{\ell _{2}})\right) \) so that \(\mathsf{R^{\top }}\cdot \mathbf {A}= \mathbf {Y}\), for \(\mathbf {Y}=((\mathbf {y}_{1_{1}},\mathbf {y}_{1_{2}},\mathbf {y}_{1_{3}}),\ldots ,(\mathbf {y}_{\ell _{1}}, \mathbf {y}_{\ell _{2}},\mathbf {y}_{\ell _{3}}))\) and \(c_{i_{3}}=(\mathbf {d}_{1}(\mathbf {e}_{1}^{t}))^{r_{1}} \cdot (\mathbf {d}_{2}(\mathbf {e}_{2}^{t}))^{r_{2}}\) for all \(i\). Finally, in order to impose consistency between \(c_x\) and \(H(x)\), we require that

$$\begin{aligned} \prod _{i=1}^{\ell }\left( \mathbf {h}_{i_{1}}^{\mathsf {r}_{i_{1}}}\cdot \mathbf {h}_{i_{2}}^{\mathsf {r}_{i_{2}}}\right) = \left( \prod _{i=1}^{\ell } c_{i_{2}}\right) \Big /H(x). \end{aligned}$$

This implies another linear equation and concludes the description of the signature.

As for the concrete parameters, for \(k=\log p\), we get that the length of the decryption key is \(l_{\mathsf {dk}} = 3 k\), the length of the simulation trapdoor is \(l_{\mathsf {td}_{s}} = 2k\), and the length of the output of \(H\) is \(l_{\mathsf {out}} = k\). The length of the description of \(H\) is \(l_{\mathsf {s}} = \ell k\), but it can be brought down to \(l_{\mathsf {s}} = 2 k\) using Lemma 3.1, as it is trivial to build a pseudorandom generator \(\mathbb G^{2} \rightarrow \mathbb G^{\ell }\) under the \(2\)-linear assumption. Therefore, by Theorem 3.2, we obtain existential unforgeability with \(k_{{\text {W}}} = k + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} = 6k\). If we consider the class \(\mathcal {H}_{\mathsf {ow}}(\ell (k))\), we obtain security with \(k_{{\text {S}}} = k + l_{\mathsf {s}} + l_{\mathsf {dk}} + l_{\mathsf {td}_{s}} + l_{\mathsf {out}} = 9k\).