1 Introduction

In pairing-based cryptography, cryptographic primitives are often designed so that messages and public materials consist only of source group elements and correctness can be proved using pairing product equations, which allows smooth coupling with other primitives. This interest in so-called structure-preserving primitives [3] has led to the study of algebraic algorithms, with many positive but also negative results, e.g., [1, 2, 4, 5, 6, 7, 12, 22, 27, 41].

In structure-preserving signature schemes, all components but the secret keys are group elements. This raises a natural question: “Can secret keys consist entirely of source group elements as well?” A major obstacle in designing structure-preserving signatures is that messages are group elements, so standard signature schemes relying on exponentiation with the message, or some function thereof, do not work. Existing structure-preserving signature schemes overcome this by having secret keys that are used in exponents. Thus, it is quite unclear how to construct structure-preserving signatures if even the secret keys are group elements.

Besides being a fascinating fundamental question in its own right, the above question is connected to practical protocol design, since group element secret keys combined with the Groth–Sahai proof system [38] allow straight-line (i.e., no rewinding) extraction of the secret keys. Verifiably encrypted secret keys are, for instance, useful in delegatable anonymous credential systems [13, 26, 32] extended with all-or-nothing non-transferability [24]. More applications of secret key extraction are introduced in Sect. 7.

While there are solutions in the random oracle model, e.g., [23, 31], secret key extraction without random oracles is currently prohibitively expensive. Meiklejohn [43] demonstrates how to extract a secret key in the exponent using Groth–Sahai proofs. It requires a bit-by-bit decomposition of the secret x, and the proof consists of \(20\, \log _2 x + 18\) group elements. For instance, applying it to a structure-preserving signature scheme [2] whose secret key consists of \(4 + 2 L\) scalar values for signing messages of \(L\) group elements, proving knowledge of a secret key for signing 10 group elements at the 128-bit security level requires more than 61,000 group elements.
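As a rough sanity check, the quoted figure can be reproduced under the assumption that each secret scalar contributes \(20 \cdot 128 + 18\) proof elements, i.e., taking \(\log _2 x\) to be 128 (a hypothetical reading of the parameters; the exact accounting is in [43] and [2]):

```python
# Back-of-the-envelope check of the ~61,000 figure, assuming each
# secret scalar is proved with 20*log2(x) + 18 elements and log2(x)
# is taken to be 128.
L = 10                       # number of message group elements
num_scalars = 4 + 2 * L      # secret key size of the scheme in [2]
per_scalar = 20 * 128 + 18   # proof size per scalar, following [43]
total = per_scalar * num_scalars
print(total)  # 61872, i.e., more than 61,000 group elements
```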

Our Contribution We summarize our contributions as follows.

  1.

    We introduce the notion of fully structure-preserving signature (FSPS) schemes. An FSPS scheme is a signature scheme whose message, signature, and key spaces, including the secret keys, consist entirely of source group elements of bilinear groups. This extends the paradigm of structure-preserving cryptography to cover private materials.

  2.

    We present a concrete construction of FSPS based on static (i.e., not q-type) standard assumptions. Its secret key consists of only four group elements, and a witness-indistinguishable proof of knowledge of the secret key consists of 18 group elements (see Table 3 in Sect. 4.3). These are huge savings compared to the bit-by-bit proof of knowledge of a secret exponent mentioned earlier.

  3.

    We present a shrinking structure-preserving trapdoor commitment (SPTC) scheme that produces constant-size commitments consisting of a single group element regardless of the message size. It is used as an essential building block in our first construction of FSPS. Besides being an important primitive in itself, it is a remarkable result in light of the well-known impossibility [8] that SPTC schemes yielding commitments shorter than the messages cannot be binding. We get around the impossibility by relaxing the binding property to hold only for honestly created commitments, as in [28]. The relaxed notion, which we call chosen-message target collision resistance (\(\mathsf {CMTCR}\)), lies between collision resistance and target collision resistance in the terminology of hash functions. We show that \(\mathsf {CMTCR}\) is sufficient to construct secure signature schemes in combination with a weak signature scheme that is secure only against extended random message attacks [2], where the private coins used to choose random messages are exposed to the adversary.

  4.

    To push efficiency further, we give a direct FSPS construction, where security is proved in the generic bilinear group model [18, 19, 40, 42, 45, 48]. The latter construction has an optimally short verification key consisting of only a single group element. Furthermore, we show that its signatures can be either randomizable or strongly unforgeable. Recall that some structure-preserving signature schemes in the literature are randomizable, meaning that some elements in a signature can be changed without losing correctness or security. This property is useful in particular when combining structure-preserving signatures with Groth–Sahai proofs, since some of the signature elements can be revealed in the clear after being randomized. In other circumstances, however, quite the opposite may be the case and it may be desirable to have strongly unforgeable signatures, where it is not only infeasible to forge signatures on messages that have not been seen before but also infeasible to create a new, different signature on a message that has already been signed. We define the notion of a combined signature scheme, where the signer can choose for each message whether to make the signature strongly unforgeable or randomizable, and our latter construction is a combined signature scheme.

  5.

    All our signatures have size \(\Omega (\sqrt{L})\) for messages consisting of \(L\) group elements. This is no coincidence; we show that, for a (one-time) SPS scheme over so-called Type-III asymmetric bilinear groups, there is a non-trivial trade-off \(\kappa + \sigma \ge \sqrt{L}\) among the verification key size \(\kappa \), signature size \(\sigma \), and message size \(L\), measured in the number of group elements. This means our constructions have optimal signature size for constant-size verification keys. We leave it as an open question to show the (in)feasibility of constant-size signatures for verification keys whose size grows with \(L\); such an alternative would be advantageous in bandwidth-constrained communication.
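The trade-off can be illustrated numerically; the following sketch simply evaluates the bound \(\kappa + \sigma \ge \sqrt{L}\) for a few message lengths:

```python
import math

# Evaluate the lower bound kappa + sigma >= sqrt(L) (counted in group
# elements) for a few message lengths L. With a constant-size
# verification key, the signature alone must grow as sqrt(L).
for L in [16, 100, 1024, 10**6]:
    min_total = math.ceil(math.sqrt(L))
    print(f"L = {L:>7}: kappa + sigma >= {min_total}")
```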

Related Work At least one FSPS scheme already exists [2], but with constraints on both security and usability. Namely, it only meets a weak security guarantee (unforgeability against extended random message attacks), and the signing function takes messages of the form \((G^m, F^m, U^m)\), which essentially requires knowledge of m [14, 39]. Nevertheless, it is a reasonable starting point for constructions in the standard model. Wang et al. [50] improve our framework and present a concrete construction of FSPS that yields shorter signatures in exchange for longer secret keys.

Regarding constructions in the generic group model, there are several SPS schemes in the literature. Abe et al. [5] have shown that 3-element signatures cannot be proven secure under a non-interactive assumption using black-box reductions, so strong assumptions are needed to attain optimal efficiency. One of our schemes, designed for more efficiency, thus relies on the generic group model for its security proof. The signature scheme in [7] can be seen to be fully structure preserving with signatures consisting of 3 group elements. It is selectively randomizable: signatures are strong by default, but the signer can later choose to release a randomization token that makes a signature randomizable. The notion of selective randomizability is different from the notion of combined signature schemes, where the signer chooses whether to create a randomizable or a strong signature at the time of signature generation. The advantage of selectively randomizable signatures is that all signatures are verified with the same verification equation; the disadvantage is the need to issue randomization tokens when making a signature randomizable.

Regarding SPTC, the study by Abe et al. [8] is an important piece of context. It shows that no shrinking SPTC scheme can be binding, and indeed all existing SPTCs, e.g., [3], are expanding. The relaxed notion of binding, \(\mathsf {CMTCR}\), is a multi-session extension of the honest-sender binding introduced in [28]. The use of trapdoor commitments and chameleon hashing has also been explored in the construction of online/offline signatures [25, 30].

Paper Organization In Sect. 2, we introduce notations and definitions used throughout the paper and review the POS and xRMA-secure FSPS that will be used as building blocks. We then construct a shrinking SPTC scheme in Sect. 3: we first present a new commitment scheme we call a message-transposing commitment scheme in Sect. 3.2, and use it to construct a \(\mathsf {CMTCR}\)-secure SPTC in Sect. 3.3. We present FSPS schemes in Sect. 4. Starting from a simple construction in Sect. 4.1 that identifies the problems, we present our main construction in Sect. 4.2. We then discuss their performance in Sect. 4.3 and a lower bound for signature and public key sizes in Sect. 6. In Sect. 5, we pursue further efficiency and functionality for FSPS in the generic group model. Starting from an efficient combined SPS in Sect. 5.1, we construct a combined FSPS in Sect. 5.2. In Fig. 1, we illustrate the structure of our constructions. The topmost nodes are assumptions, and the bottom nodes are the FSPS schemes that we construct. The schemes in Sect. 5 are given security proofs in the generic bilinear group model (GM). We refer to [10] for variations of our FSPS constructions obtained by replacing some building blocks in our construction.

Fig. 1

Structure of our constructions

2 Preliminaries

2.1 Notations

We write \(\lambda \) for a security parameter given as input to all parties running a scheme. The intention is that we can strengthen the security of a scheme by increasing the security parameter. We say a function \(f:\mathbb {N}\rightarrow [0,1]\) is negligible when \(f(\lambda )=\lambda ^{-\omega (1)}\), and we say f is overwhelming when \(1-f(\lambda )\) is negligible.

We write \(y \leftarrow A(x)\) when algorithm A takes x as input and outputs y. When it is clear from the context, we write \(\varvec{y} \leftarrow A(\varvec{x})\) to denote sequential and independent executions of \(y_i \leftarrow A(x_i)\) for \(x_i \in \varvec{x}\). When algorithm A is probabilistic, we let A(x) denote the output distribution (or the set of outputs) of A with respect to input x. By \({\Pr }\left[ \,A\,:\,X\,\right] \) we denote the probability that event X happens after process A is executed. When we count the number of source group elements, we use the notation \((n_1,n_2)\) to represent \(n_1\) and \(n_2\) elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively.

We use asterisk \(*\) as a wildcard that matches anything. For instance, \((a, *) \in X\) denotes that a set X includes a pair whose first item is a.

We will work extensively with cyclic groups, which we usually write with multiplicative notation. For a cyclic group \(\mathbb {G}\), we define \(\mathbb {G}^*=\mathbb {G}{\setminus } \{1_{\mathbb {G}}\}\). For the integers modulo p, which we write with additive notation, we define \({{\mathbb Z}}_p^*={{\mathbb Z}}_p{\setminus }\{0\}\). We assume it is possible to sample elements uniformly at random and will, for instance, write \(r\leftarrow {{\mathbb Z}}_p\) or \(r\leftarrow {{\mathbb Z}}_p^*\) when sampling from these sets.
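Uniform sampling from \({{\mathbb Z}}_p\) and \({{\mathbb Z}}_p^*\) can be sketched with rejection sampling (a minimal illustration; the prime below is just an example modulus):

```python
import secrets

p = 2**255 - 19   # an example prime; any lambda-bit prime works

r = secrets.randbelow(p)        # r <- Z_p, uniform over {0, ..., p-1}

s = secrets.randbelow(p)        # s <- Z_p^*, by rejecting 0
while s == 0:
    s = secrets.randbelow(p)

assert 0 <= r < p and 1 <= s < p
```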

2.2 Bilinear Groups

Let \(\mathcal{G}\) be a generator of bilinear groups that takes security parameter \(1^\lambda \) as input and outputs \(\Lambda := (p,{{\mathbb G}},\tilde{{{\mathbb G}}},{{\mathbb G}}_T, e, G, \tilde{G})\), where p is a \(\lambda \)-bit prime, \({{\mathbb G}}, \tilde{{{\mathbb G}}}, {{\mathbb G}}_T\) are groups of prime order \(p\) with efficiently computable group operations, membership tests, and bilinear map \(e: {{\mathbb G}}\times \tilde{{{\mathbb G}}}\rightarrow {{\mathbb G}}_T\). The pairing operation \(e\) satisfies that \(\forall A\in {{\mathbb G}}, \, \forall \tilde{B} \in \tilde{{{\mathbb G}}}, \, \forall x, y \in {{\mathbb Z}}\ : \ e(A^x, \tilde{B}^y)=e(A,\tilde{B})^{xy}\).

Elements G and \(\tilde{G}\) are default random generators of \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), and \(e(G, \tilde{G})\) generates \({{\mathbb G}}_T\). We use multiplicative notation for group operations in \({{\mathbb G}}\), \(\tilde{{{\mathbb G}}}\), and \({{\mathbb G}}_T\). An element in \({{\mathbb G}}\) is represented by a capital letter, e.g., \(A \in {{\mathbb G}}\), and one in \(\tilde{{{\mathbb G}}}\) is represented with a tilde, e.g., \(\tilde{B} \in \tilde{{{\mathbb G}}}\). We often use the corresponding lowercase letter to represent the logarithm with respect to the default generator, e.g., \(a = \log _G A\) and \(b = \log _{\tilde{G}} \tilde{B}\). A vector of elements may be denoted by a bold letter. For a vector of scalar values \(\varvec{x} := (x_1,\ldots ,x_n)\in {{\mathbb Z}}_p^n\) and a group element A, we write \(A^{\varvec{x}}\) for the vector \((A^{x_1},\ldots ,A^{x_n})\). Similarly, for a vector of group elements \(\varvec{X} := (X_1,\ldots ,X_n)\) and a scalar value a, we write \(\varvec{X}^a\) for \((X_1^a, \ldots , X_n^a)\). Similar conventions apply to matrices of scalars and group elements.

An equation of the form \( \prod _i \prod _j e(A_i, B_j)^{a_{ij}}= 1 \) for constants \(a_{ij} \in \mathbb {Z}_p\), and constants or variables \(A_i\in {{\mathbb G}}\), \(B_j\in \tilde{{{\mathbb G}}}\), is called a pairing product equation (PPE). By \({{\mathbb G}}^{*}\), we denote \({{\mathbb G}}{\setminus }\{1_{{{\mathbb G}}}\}\), and similarly for \(\tilde{{{\mathbb G}}}^{*}\) and \(\mathbb {Z}_p^*\).
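For intuition, bilinearity and pairing product equations can be checked in a toy “exponent” model, where a group element is identified with its discrete logarithm (entirely insecure and purely illustrative, not how a real pairing is computed):

```python
# Toy model: an element G^a is represented by a in Z_p, and the
# pairing e(G^a, G~^b) = e(G, G~)^{ab} becomes multiplication mod p.
p = 101  # small prime standing in for the group order

def e(a, b):
    return (a * b) % p

# Bilinearity: e(A^x, B~^y) = e(A, B~)^{xy}
a, b, x, y = 7, 13, 5, 9
assert e(a * x % p, b * y % p) == (e(a, b) * x * y) % p

# A pairing product equation  prod_{i,j} e(A_i, B_j)^{c_ij} = 1
# becomes a check on exponents: sum_{i,j} c_ij * a_i * b_j = 0 mod p.
A, B, c = [3, 4], [2, 6], [[1, 2], [5, 0]]
ppe = sum(c[i][j] * A[i] * B[j] for i in range(2) for j in range(2)) % p
print(ppe == 0)  # whether this particular PPE is satisfied
```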

2.2.1 Generic Bilinear Group Model

Let \((p,{{\mathbb G}},\tilde{{{\mathbb G}}},{{\mathbb G}}_T,e,G,\tilde{G})\) be a description of groups with a bilinear map. We refer to deciding group membership, computing group operations in \({{\mathbb G}}\), \(\tilde{{{\mathbb G}}}\) or \({{\mathbb G}}_T\), comparing group elements, and evaluating the bilinear mapping \(e: {{\mathbb G}}\times \tilde{{{\mathbb G}}}\rightarrow {{\mathbb G}}_T\) as generic group operations. We will sometimes restrict algorithms to only use generic operations. To enforce the use of generic group operations, Shoup [48] introduced the generic group model, where group elements are represented as random strings and group operations handled through an oracle that knows the corresponding real group elements. The generic group model has been adapted to the bilinear groups setting [18, 19].

We will now describe the generic group model for the asymmetric setting, where there is no isomorphism between the two source groups \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) that is efficiently computable in either direction. Each group element is represented by a unique string obtained by applying a random injective encoding to the element. Algorithms may get some encoded elements as input and are given access to a group operation oracle that performs generic group operations. For random encodings \(\pi _1\), \(\pi _2\), and \(\pi _T\) assigned to \({{\mathbb G}}\), \(\tilde{{{\mathbb G}}}\), and \({{\mathbb G}}_T\), respectively, the oracle is given encoded elements together with a generic group operation to apply to the inputs. For instance, it may be given \(\pi _1(A)\) and \(\pi _1(B)\) with an instruction to perform a group operation, and return \(\pi _1(A \cdot B)\). Or it may be given \(\pi _1(A)\) and \(\pi _2(\tilde{B})\) with an instruction to perform a pairing operation, in which case it returns \(\pi _T(e(A,\tilde{B}))\). With such an oracle, a (potentially adversarial) algorithm \(\mathcal{A}\) can only compute a new encoding of a source group element \(\pi _x(X)\) by applying generic group operations to already seen elements: if it has seen elements \(\pi _x(X_i)\), it can pick scalars \(a_i\) and use the generic group operation oracle to obtain \(\pi _x(\prod _i X_i^{a_i})\). The algorithm could also pick a random element in the range of the encoding function; however, this would just give it a random group element, so it might as well pick a random scalar \(r\leftarrow {{\mathbb Z}}_p\) and compute \(\pi _x(X^r)\) for a previously seen \(\pi _x(X)\). A common use of the generic group model is to build trust in an intractability assumption by proving that it cannot be broken using generic operations alone. In such proofs, we use that the encoding is random, so seeing, for instance, \(\pi _x(X)\) reveals no information about the group element X itself beyond what can be deduced by using generic group operations and testing equality of group elements.

We call algorithms that follow the above model generic. Note that, when a generic algorithm outputs a group element in \({{\mathbb G}}\) (or \(\tilde{{{\mathbb G}}}\)), it only depends on elements in \({{\mathbb G}}\) (or \(\tilde{{{\mathbb G}}}\), respectively) given to the algorithm as input.
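The oracle-based model above can be sketched in code. The following is a minimal, hypothetical simulation for a single source group: elements exist only as random handles, and all arithmetic happens inside the oracle (the second group and the pairing are omitted for brevity):

```python
import secrets

class GenericGroup:
    """Sketch of a generic-group oracle for one source group of prime
    order p: elements live only as random string handles, and the
    oracle performs the group operations on handles."""
    def __init__(self, p):
        self.p = p
        self._enc = {}   # exponent -> random handle (injective encoding)
        self._dec = {}   # handle -> exponent (known only to the oracle)

    def _handle(self, x):
        x %= self.p
        if x not in self._enc:
            h = secrets.token_hex(16)  # fresh random encoding
            self._enc[x], self._dec[h] = h, x
        return self._enc[x]

    def generator(self):
        return self._handle(1)

    def op(self, h1, h2):   # group operation: encoding of X1 * X2
        return self._handle(self._dec[h1] + self._dec[h2])

    def exp(self, h, a):    # encoding of X^a
        return self._handle(self._dec[h] * a)

G = GenericGroup(101)
g = G.generator()
h = G.exp(g, 5)
assert G.op(h, h) == G.exp(g, 10)  # handles agree iff elements agree
```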

2.2.2 Assumptions

Throughout the paper, we work over asymmetric bilinear groups (so-called Type-III setting [34]) where no efficient isomorphisms exist between \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\). Some building blocks in our construction rely on the double pairing assumption [3].

Assumption 1

(Double Pairing Assumption in \(\tilde{{{\mathbb G}}}\): \(\mathsf {DBP}\)) The double pairing assumption holds in \(\tilde{{{\mathbb G}}}\) relative to \(\mathcal{G}\) if, for all probabilistic polynomial time algorithms \(\mathcal{A}\),

$$\begin{aligned} {\Pr }\left[ \, \begin{array}{l} \Lambda \leftarrow \mathcal{G}(1^\lambda );\\ \tilde{G}_z \leftarrow \tilde{{{\mathbb G}}}^*;\\ (Z,R) \leftarrow \mathcal{A}(\Lambda ,\tilde{G}_z) \end{array} \,:\, \begin{array}{l} (Z,R) \in {{{\mathbb G}}^* \times {{\mathbb G}}^*} \, \wedge \\ 1 = e(Z, \tilde{G}_z)\; e(R, \tilde{G}) \end{array}\,\right] \end{aligned}$$
(1)

is negligible in the security parameter \(\lambda \).

The \(\mathsf {DBP}\) assumption in \({{\mathbb G}}\) is defined by swapping \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) in the above definition. Note that the \(\mathsf {DBP}\) assumption (in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\)) is implied by the Decisional Diffie–Hellman assumption [3] (in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively), which is often assumed in the Type-III setting.
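In the toy exponent view, the DBP relation shows why the party who sampled \(\tilde{G}_z\) is unaffected by the assumption: writing \(g_z = \log \tilde{G}_z\), \(z = \log Z\), and \(r = \log R\), the equation \(1 = e(Z,\tilde{G}_z)\,e(R,\tilde{G})\) is just \(z g_z + r \equiv 0 \pmod p\), which anyone knowing \(g_z\) can solve (a purely illustrative sketch, not an attack on the assumption):

```python
# Illustration only: with the discrete log gz of G~_z, a DBP "solution"
# is immediate; the assumption says it is hard given only G~_z itself.
p = 101
gz = 37                 # log of G~_z; hidden from the adversary
z = 11                  # any nonzero choice for log Z
r = (-z * gz) % p       # forces e(Z, G~_z) e(R, G~) = 1 in the exponent
assert (z * gz + r) % p == 0
assert z % p != 0 and r % p != 0   # (Z, R) lies in G* x G*
```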

We also use a building block that requires more assumptions such as \(\mathsf {DDH}_{2}\), \(\mathsf {XDLIN}_{1}\), and \(\mathsf {co}\)-\(\mathsf {CDH}_2\) defined as follows.

Assumption 2

(Decisional Diffie–Hellman Assumption in \(\tilde{{{\mathbb G}}}\): \(\mathsf {DDH}_{2}\)) Any probabilistic polynomial time algorithm \(\mathcal{A}\) decides whether \(b=1\) or 0 with negligible advantage \( \text {Adv} ^{\mathsf {{\mathsf {ddh}}2}}_{\mathcal{G},\mathcal{A}} (\lambda )\) in \(\lambda \) given \(\Lambda \leftarrow \mathcal{G}(1^\lambda )\), \(\tilde{G}\leftarrow \tilde{{{\mathbb G}}}\), and \((\tilde{G}^x, \tilde{G}^y, Z_b)\) where \(Z_1 = \tilde{G}^{x y}\) and \(Z_0 = \tilde{G}^z\) for random \(x,y,z\leftarrow {{\mathbb Z}}_p\) and a random bit b.

Assumption 3

(External Decision Linear Assumption in \({{\mathbb G}}\): \(\mathsf {XDLIN}_{1}\)) Any probabilistic polynomial time algorithm \(\mathcal{A}\) decides whether \(\beta =1\) or 0 with negligible advantage \( \text {Adv} ^{\mathsf {xdlin} 1}_{\mathcal{G},\mathcal{A}}(\lambda )\) given \(\Lambda \leftarrow \mathcal{G}(1^\lambda )\) and \((G^a, G^b, G^c, G^{a x}, G^{b y}, \tilde{G}^{a}, \tilde{G}^{b}, \tilde{G}^{c}, \tilde{G}^{a x}, \tilde{G}^{b y},Z_{\beta })\) where \(Z_1 = G^{c(x+y)}\) and \(Z_0 = G^z\) for random \(a, b, c \leftarrow \mathbb {Z}_p^*\), \(x,y,z \leftarrow \mathbb {Z}_p\), and a random bit \(\beta \).

The \(\mathsf {XDLIN}_{1}\) assumption is equivalent to the \(\mathsf {DLIN}_{1}\) assumption in the generic bilinear group model where one can simulate the extra elements, \(\tilde{G}^{a}, \tilde{G}^{b}, \tilde{G}^{c}, \tilde{G}^{a x}, \tilde{G}^{b y}\), in \(\mathsf {XDLIN}_{1}\) from \(G^a, G^b, G^c, G^{a x}, G^{b y}\) in \(\mathsf {DLIN}_{1}\).

Assumption 4

(Computational co-Diffie–Hellman Assumption in \(\tilde{{{\mathbb G}}}\): \(\mathsf {co}\)-\(\mathsf {CDH}_2\)) Any probabilistic polynomial time algorithm \(\mathcal{A}\) outputs \(\tilde{G}^{x y}\) with negligible probability \( \text {Adv} ^{\mathsf {co\text {-}cdh}}_{\mathcal{G},\mathcal{A}}(\lambda )\) given \(\Lambda \leftarrow \mathcal{G}(1^{\lambda })\), \(G\leftarrow {{\mathbb G}}^*\), \(\tilde{G}\leftarrow \tilde{{{\mathbb G}}}^*\), \(G^{x}\), \(G^{y}\), \(\tilde{G}^x\), and \(\tilde{G}^y\) for \(x,y \leftarrow \mathbb {Z}_p\).

Similar to Assumption 3, \(\mathsf {co}\)-\(\mathsf {CDH}_2\) is equivalent to the computational Diffie–Hellman assumption in \({{\mathbb G}}\) or \(\tilde{{{\mathbb G}}}\) in the generic bilinear group model, but in general they are unrelated in the Type-III setting. We refer to [49] for more discussion on variations of computational Diffie–Hellman assumptions over bilinear groups.

2.3 Joint Setup

Building blocks in this paper are defined with individual setup functions. As we work over bilinear groups, an output from a setup function should include a description of bilinear groups \(\Lambda \). Some random generators specific to each building block may be included as well.

The idea behind structure-preserving cryptography is that all schemes work over the same bilinear group and hence are easy to compose. For composed schemes, we will therefore assume a joint setup consisting of a bilinear group and a number of random group elements corresponding to the maximum number needed by any of the building blocks. More precisely, suppose that two building blocks, say \(\mathsf {A}\) and \(\mathsf {B}\), are used together and have setups \(gk_{\mathsf {A}}\leftarrow \mathsf {A.Setup}(1^\lambda )\) and \(gk_{\mathsf {B}}\leftarrow \mathsf {B.Setup}(1^\lambda )\). We say they have a common setup function if there exists a polynomial time algorithm \(\mathsf {Setup}\) such that, given \(gk\leftarrow \mathsf {Setup}(1^\lambda )\), it is possible in polynomial time to recover from \(gk\) the individual setups \(gk_{\mathsf {A}}\) and \(gk_{\mathsf {B}}\), each with the correct probability distribution. It is also required that \(gk\) can be simulated given either \(gk_{\mathsf {A}}\) or \(gk_{\mathsf {B}}\). In the rest of the paper, we assume a common setup \(gk\) for all the individual schemes. In general, each individual setup samples a bilinear group and a number of uniformly random group elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\). We can then pick as a common setup a bilinear group sampled with the same distribution and a number of uniformly random elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) matching the maximum number any individual scheme uses in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively.

2.4 Digital Signatures

In this section, we recall definitions of digital signatures and their security notions. On top of the standard notions, we define structure-preserving and fully structure-preserving signatures.

Definition 1

(Digital Signature Scheme) A digital signature scheme \(\mathsf {SIG}\) is a tuple of polynomial time algorithms \((\mathsf {Setup},\mathsf {Key}, \mathsf {Sign}, \mathsf {Vrf})\). \(gk\leftarrow \mathsf {Setup}(1^\lambda )\) is a probabilistic setup algorithm that, given a security parameter \(\lambda \), generates a common parameter \(gk\), which defines a message space \(\mathcal{M}\) for which membership is efficiently decidable. \((vk_{},sk_{}) \leftarrow \mathsf {Key}(gk)\) is a probabilistic key generation algorithm that takes the common parameter \(gk\) and generates a verification key \(vk_{} \) and a signing key \(sk_{} \). \(\sigma _{}\leftarrow \mathsf {Sign}(sk_{},m)\) is a probabilistic signature generation algorithm that computes a signature \(\sigma _{}\) for an input message \(m \in \mathcal{M}\) using the signing key \(sk_{} \). \(1/0 \leftarrow \mathsf {Vrf}(vk_{},m,\sigma _{})\) is a verification algorithm that outputs 1 for acceptance or 0 for rejection according to the input.

For any legitimately generated \(gk\), \(vk_{} \), \(sk_{} \) and any \(m \in \mathcal{M}\), it must hold that \(1 = \mathsf {Vrf}(vk_{},m,\mathsf {Sign}(sk_{},m))\). A key pair \((vk,sk)\) is valid with respect to \(gk\) if it is in the range of \(\mathsf {Key}(gk)\).

Definition 2

(Unforgeability against Adaptive Chosen-Message Attacks) A signature scheme, \(\mathsf {SIG}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Sign}, \mathsf {Vrf}\}\), is unforgeable against adaptive chosen-message attacks (\(\mathsf {UF}\text {-}\mathsf {CMA}\)) if for any polynomial time adversary \(\mathcal{A}\) the following advantage function is negligible.

$$\begin{aligned} \text {Adv} ^{\mathsf {uf\text {-}cma}}_{\mathsf {SIG},\mathcal{A}}(\lambda ) := {\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ),\\ (vk_{},sk_{}) \leftarrow \mathsf {Key}(gk),\\ (\sigma _{}^{\dagger },m^{\dagger }) \leftarrow \mathcal{A}^{\mathcal{O}_{sk_{}}}(vk_{}) \end{array} \,:\, \begin{array}{l} m^{\dagger } \not \in Q\, \wedge \\ 1 = \mathsf {Vrf}(vk_{}, m^{\dagger }, \sigma _{}^{\dagger }) \end{array} \,\right] , \end{aligned}$$
(2)

where \(\mathcal{O}_{sk_{}}\) is an oracle that, given \(m\), executes \(\sigma _{}\leftarrow \mathsf {Sign}(sk_{},m)\), records \(m\) in \(Q\), and returns \(\sigma _{}\).

It is strongly unforgeable if \(\mathcal{O}_{sk_{}}\) records \((m,\sigma _{})\) in \(Q\) and the winning condition \(m^{\dagger } \not \in Q\) is replaced with \((m^{\dagger },\sigma _{}^{\dagger }) \not \in Q\).
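The UF-CMA experiment can be written as a small game harness. The sketch below uses an HMAC-based stand-in for the signature scheme (a MAC, so vk = sk and it is not publicly verifiable; it only serves to exercise the game logic):

```python
import hmac, hashlib, secrets

def uf_cma_experiment(keygen, sign, verify, adversary):
    """Runs the UF-CMA game: the adversary gets vk and a signing
    oracle; it wins iff it outputs a valid signature on a message
    never submitted to the oracle."""
    vk, sk = keygen()
    Q = set()
    def oracle(m):
        Q.add(m)
        return sign(sk, m)
    m_f, sig_f = adversary(vk, oracle)
    return (m_f not in Q) and verify(vk, m_f, sig_f)

# Toy instantiation with an HMAC "signature" (really a MAC, so vk = sk).
def keygen():
    k = secrets.token_bytes(32)
    return k, k

def sign(sk, m):
    return hmac.new(sk, m, hashlib.sha256).digest()

def verify(vk, m, sig):
    return hmac.compare_digest(sign(vk, m), sig)

def replay_adversary(vk, oracle):
    # Queries one message and "forges" on the same message: this never
    # wins because the message is recorded in Q.
    sig = oracle(b"hello")
    return b"hello", sig

print(uf_cma_experiment(keygen, sign, verify, replay_adversary))  # False
```

Strong unforgeability would be captured by recording \((m, \sigma)\) pairs in Q instead of messages alone.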

By \(\mathsf {UF}\text {-}\mathsf {NACMA}\) we denote a relaxed notion of security where adversary \(\mathcal{A}\) has to commit to the messages to query before seeing \(vk_{} \). (\(\mathcal{A}\) is given \(gk\) that defines the message space.) Such an attack model is called a weak chosen-message attack in [18], and a generic chosen-message attack in [35]. Formally:

Definition 3

(Unforgeability against Non-Adaptive Chosen-Message Attacks) A signature scheme, \(\mathsf {SIG}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Sign}, \mathsf {Vrf}\}\), is unforgeable against non-adaptive chosen-message attacks (\(\mathsf {UF}\text {-}\mathsf {NACMA}\)) if for any polynomial time adversary \(\mathcal{A}\) the following advantage function is negligible.

$$\begin{aligned} \text {Adv} ^{\mathsf {uf\text {-}nacma}}_{\mathsf {SIG},\mathcal{A}}(\lambda ) := {\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ),\\ (\varvec{m_i},\omega ) \leftarrow \mathcal{A}(gk)\\ (vk_{},sk_{}) \leftarrow \mathsf {Key}(gk),\\ \varvec{{\sigma _{}}_i} \leftarrow \mathsf {Sign}(sk_{}, \varvec{m_i})\\ (\sigma _{}^{\dagger },m^{\dagger }) \leftarrow \mathcal{A}(\omega ,vk_{},\varvec{{\sigma _{}}_i}) \end{array} \,:\, \begin{array}{l} m^{\dagger } \not \in \varvec{m_i} \, \wedge \\ 1 = \mathsf {Vrf}(vk_{}, m^{\dagger }, \sigma _{}^{\dagger }) \end{array} \,\right] , \end{aligned}$$
(3)

where \(\omega \) is an internal state, \(\varvec{m_i}\) is a polynomial number of messages, and \(\varvec{{\sigma _{}}_i} \leftarrow \mathsf {Sign}(sk_{}, \varvec{m_i})\) is a process of signing each \(m_i\) with \(sk_{} \) and outputting the resulting signature as \({\sigma _{}}_i\).

A one-time signature scheme is a digital signature scheme with the limitation that a signing key is intended to be used only once. Unforgeability against one-time chosen-message attacks is defined as in Definition 2 by restricting the signing oracle to answer only a single query.

Definition 4

(Structure-Preserving Signature Scheme) A digital signature scheme is structure preserving relative to bilinear group generator \(\mathcal{G}\) if the common parameters \(gk\) consist of a group description \(\Lambda \) generated by \(\mathcal{G}\), some constants, and some source group elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) in \(\Lambda \); verification keys \(vk\), messages \(m\), and signatures \(\sigma _{}\) consist solely of group elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\); and the verification algorithm \(\mathsf {Vrf}\) consists only of evaluating membership in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) and relations described by pairing product equations.

When messages consist of elements from both source groups, \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), they are called bilateral. We say a message is unilateral if it consists exclusively of elements from one of the source groups.

The notion of structure-preserving cryptography requires public components to be group elements. We extend it so that private components consist of group elements as well.

Definition 5

(Fully Structure-Preserving Signature Scheme) A structure-preserving signature scheme is fully structure preserving if signing keys \(sk\) consist of group elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), and the validity of \(sk\) with respect to \(vk\) can be verified by evaluating membership in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\) and relations described by pairing product equations.

Note that, in reality, \(vk\) will include \(gk\) and \(sk\) will include \(vk\) to be consistent with the standard interface of the functions in a signature scheme. We will ignore those nested objects when we argue that a scheme is structure preserving, and measure the size of keys and signatures without counting the elements in \(gk\).

Once the additional conditions in Definition 5 are satisfied, one can construct proofs of knowledge of secret keys by using the Groth–Sahai proof system, which allows the extraction of a correct secret key corresponding to the verification key. It is, however, important to note that there may exist more than one valid secret key for a verification key, and different secret keys may yield signatures with different distributions. One might need stronger extractability that enables the extraction of the secret key for a particular distribution of signatures. This is for instance the case for the group signature application mentioned in Sect. 1. Our concrete scheme allows one to efficiently prove the relation between a secret key and a signature. See Sect. 2.6.

The notion of standard unforgeability does not prevent the adversary from modifying signatures as long as the associated message stays intact. Constructive use of this property is captured by randomizable signature schemes, defined as follows.

Definition 6

(Randomizable Signature Scheme) A signature scheme is randomizable if there exists an efficient algorithm \(\mathsf {Rand}\) that takes vk, m, and \(\sigma \) as input and outputs a new signature \(\sigma '\). We require that for all \(\lambda \in \mathbb {N}\), \(gk\leftarrow \mathsf {Setup}(1^\lambda )\), \((vk, sk) \leftarrow \mathsf {Key}(gk)\), \(m \in \mathcal{M}\), \(\sigma \leftarrow \mathsf {Sign}(sk,m)\), and all randomized signatures \(\sigma ' \leftarrow \mathsf {Rand}(vk,m,\sigma )\), it holds that \(1 \leftarrow \mathsf {Vrf}(vk,m,\sigma ')\).

Signatures are perfectly randomizable if a randomized signature looks exactly like a fresh signature on the same message.

Definition 7

(Perfect Randomizability) A signature scheme is perfectly randomizable if for all adversaries \(\mathcal{A}\) outputting messages \(m\in \mathcal{M}\) we have

$$\begin{aligned} \Pr \left[ \begin{array}{c}gk\leftarrow \mathsf {Setup}(1^\lambda ); (vk,sk) \leftarrow \mathsf {Key}(gk); b\leftarrow \{0,1\}; \\ m\leftarrow \mathcal{A}(vk,sk); \sigma _0\leftarrow \mathsf {Sign}(sk,m); \sigma _1\leftarrow \mathsf {Rand}(vk,m,\sigma _0)\end{array}: \mathcal{A}(\sigma _b)=b \right] =\frac{1}{2}. \end{aligned}$$

Unless a signature scheme is deterministic, it cannot be both perfectly randomizable and strongly unforgeable. However, we can combine both properties in a single signature scheme that allows a signer to issue two types of signatures: randomizable signatures and strongly unforgeable signatures. We call such a scheme a combined signature scheme.

Definition 8

(Combined Signature Scheme) A combined signature scheme is a set of polynomial time algorithms \((\mathsf {Setup},\mathsf {Key},\mathsf {Sign}_0,\mathsf {Vrf}_0,\mathsf {Rand},\mathsf {Sign}_1,\mathsf {Vrf}_1)\) where \((\mathsf {Setup},\mathsf {Key},\mathsf {Sign}_0,\mathsf {Vrf}_0,\mathsf {Rand})\) is a randomizable signature scheme and \((\mathsf {Setup},\mathsf {Key},\mathsf {Sign}_1,\mathsf {Vrf}_1)\) is a strongly unforgeable signature scheme.

A naïve combined signature scheme would have a verification key containing two verification keys, one for randomizable signatures and one for strong signatures. However, this solution has the disadvantage of increasing the key size. Instead, we will in this paper construct a combined signature scheme where the verification key is just a single group element that can be used to verify either type of signature. This dual use of the verification key means that we must carefully consider the security implications of combining the two signature schemes, so we now define a suitable security notion for combined signature schemes.

To capture the attacks that can occur against a combined signature scheme, we let the adversary arbitrarily query a signer for randomizable or strong signatures. We want the scheme to be combined existentially unforgeable in the sense that seeing randomizable signatures does not help in breaking strong existential unforgeability and, conversely, seeing strong signatures does not help in forging randomizable signatures.

Definition 9

(Combined Existential Unforgeability Under Chosen-Message Attack) The combined signature scheme is combined existentially unforgeable under adaptive chosen-message attack (\(\mathsf {C\text {-}EUF\text {-}CMA}\)) if for all probabilistic polynomial time adversaries \(\mathcal{A}\)

$$\begin{aligned} {\Pr }\left[ \,\begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ); (vk,sk)\leftarrow \mathsf {Key}(gk)\\ (m,\sigma )\leftarrow \mathcal{A}^{\mathsf {Sign}_0(sk,\cdot ),\mathsf {Sign}_1(sk,\cdot )}(vk) \end{array}\,:\, \begin{array}{c} \left( \mathsf {Vrf}_0(vk,m,\sigma )=1 \wedge m\notin Q_0 \right) \, \vee \\ \left( \mathsf {Vrf}_1(vk,m,\sigma )=1 \wedge (m,\sigma )\notin Q_1 \right) \end{array}\,\right] \end{aligned}$$

is negligible, where \(\mathcal{A}\) can make signing queries on arbitrary \(m\in \mathcal{M}\) and the output message m must belong to \(\mathcal{M}\), \(Q_0\) is the set of messages that have been queried to \(\mathsf {Sign}_0\), and \(Q_1\) is the set of message and signature pairs from queries to \(\mathsf {Sign}_1\).

2.5 Partially One-Time Signatures

When only a part of the signing key of a one-time signature scheme must be updated for every signing, i.e., the remaining part of the key can be used an unbounded number of times, the scheme is called a two-tier signature scheme or a partially one-time signature scheme [2, 16].

Definition 10

(Partially One-time Signature Scheme) A partially one-time signature scheme is a set of algorithms \(\mathsf {POS}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Ovk}, \mathsf {Sign}, \mathsf {Vrf}\}\) such that

  • \(\mathsf {Setup}(1^{\lambda })\rightarrow {gk}\): A setup function that, given a security parameter \(\lambda \), generates a common parameter \(gk\), which defines the message space \(\mathcal{M}\).

  • \(\mathsf {Key}(gk)\rightarrow (vk, sk)\): A long-term key generation function that takes \(gk\) and outputs a long-term key pair \((vk, sk)\).

  • \(\mathsf {Ovk}(gk)\rightarrow (ovk, osk)\): A one-time key generation function that takes \(gk\) and outputs a one-time key pair \((ovk, osk)\).

  • \(\mathsf {Sign}(sk, osk, m)\rightarrow {\sigma }\): A signing function that takes \(sk\), \(osk\), and a message \(m\) as input and issues a signature \(\sigma \).

  • \(\mathsf {Vrf}(vk, ovk, m, \sigma )\rightarrow \)1/0: A verification function that outputs 1 or 0 for acceptance and rejection, respectively.

For any \(gk\leftarrow \mathsf {Setup}(1^\lambda )\), \((vk_{}, sk_{}) \leftarrow \mathsf {Key}(gk)\), \(m\in \mathcal{M}\), and \((ovk_{},osk_{})\leftarrow \mathsf {Ovk}(gk)\), \(\sigma _{}\leftarrow \mathsf {Sign}(sk_{},osk_{},m)\), it must hold that \(1 \leftarrow \mathsf {Vrf}(vk_{}, ovk_{}, m, \sigma _{})\).

The security notion considered in [2, 16] is defined with respect to a single long-term key pair. Here we extend the notion to multiple key pairs.

Definition 11

(Multi-Key Partial One-time Chosen-Message Attack for POS) A partially one-time signature scheme, \(\mathsf {POS}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Ovk}, \mathsf {Sign}, \mathsf {Vrf}\}\), is unforgeable against multi-key non-adaptive partial one-time chosen-message attacks (\({\mathsf {MK}{\text {-}}\mathsf {OT}{\text {-}}\mathsf {NACMA}} \)), if for any polynomial time adversary \(\mathcal{A}\) the advantage function \( \text {Adv} ^{\mathsf {mk\text {-}ot\text {-}nacma}}_{\mathsf {POS},\mathcal{A}}(\lambda )\) defined by

$$\begin{aligned} {\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ),\\ (ovk_{}^{\dagger },\sigma _{}^{\dagger },m^{\dagger }) \leftarrow \mathcal{A}^{\mathcal{O}_{k},\mathcal{O}_{s}}(gk) \end{array} \,:\, \begin{array}{l} (vk_{}^{\dagger },ovk_{}^{\dagger }, *) \in Q_{mv}\, \wedge \\ (vk_{}^{\dagger },ovk_{}^{\dagger },m^{\dagger }) \not \in Q_{mv}\, \wedge \\ 1 = \mathsf {Vrf}(vk_{}^{\dagger }, ovk_{}^{\dagger }, m^{\dagger }, \sigma _{}^{\dagger }) \end{array} \,\right] \end{aligned}$$
(4)

is negligible. Oracle \(\mathcal{O}_{k}\) is the key generation oracle that, on receiving the i-th request from \(\mathcal{A}\), generates \((vk_{}^{[i]}, sk_{}^{[i]}) \leftarrow \mathsf {Key}(gk)\), and returns \(vk_{}^{[i]}\). Oracle \(\mathcal{O}_{s}\) is the signing oracle that, given \(m\in \mathcal{M}\) and \(vk_{}^{[i]}\) generated by \(\mathcal{O}_{k}\), executes \(({ovk_{}}^{(j)},{osk_{}}^{(j)}) \leftarrow \mathsf {Ovk}(gk)\), \(\sigma _{}\leftarrow \mathsf {Sign}({sk_{}}^{[i]},{osk_{}}^{(j)},m)\), records \(({vk_{}}^{[i]}, {ovk_{}}^{(j)},m)\) in \(Q_{mv}\), and returns \((\sigma _{},{ovk_{}}^{(j)})\).

The following concrete scheme is taken from [2] with a trivial modification in the signing algorithm so that the signature elements are computed more efficiently.

figure a

In [2], the security of the above scheme is proven based on the \(\mathsf {DBP}\) assumption in \({{\mathbb G}}\) with respect to a single long-term key, i.e., under the constraint that \(\mathcal{O}_{k}\) is accessible only once. However, thanks to the random self-reducibility of the \(\mathsf {DBP}\) problem, it is easy to show that the scheme is indeed unforgeable against \({\mathsf {MK}{\text {-}}\mathsf {OT}{\text {-}}\mathsf {NACMA}} \) as stated below. (The scheme satisfies even stronger security in which \(\mathcal{A}\) is allowed to access \(\mathsf {Ovk}\) and \(\mathsf {Sign}\) separately.)

Theorem 1

\(\mathsf {POS}\) is strongly unforgeable against \({\mathsf {MK}{\text {-}}\mathsf {OT}{\text {-}}\mathsf {NACMA}} \) if \(\mathsf {DBP} \) in \({{\mathbb G}}\) holds. In particular, for all p.p.t. algorithms \(\mathcal{A}\) there exists a p.p.t. algorithm \(\mathcal{B}\) such that \( \text {Adv} ^{\mathsf {mk\text {-}ot\text {-}nacma}}_{\mathsf {POS},\mathcal{A}}(\lambda ) \le \text {Adv} ^{}_{\mathsf {DBP},\mathcal{B}}(\lambda ) +1/p(\lambda )\), where \(p(\lambda )\) is the size of the groups produced by \(\mathcal{G}\). Moreover, the run-time overhead of the reduction \(\mathcal{B}\) is a small number of multi-exponentiations per signing or key query.
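The random self-reducibility of the \(\mathsf {DBP}\) problem invoked above can be sanity-checked in a toy model: represent every group element by its discrete logarithm modulo a small prime, so the pairing becomes multiplication of logs and the \(\mathsf {DBP}\) relation \(e(Z, \tilde{G}_z)\,e(R, \tilde{G}) = 1\) reads \(Z \cdot g_z + R \equiv 0 \pmod {p}\). The sketch below is insecure by construction and only checks the rerandomization algebra; the exponents s, t and all variable names are illustrative, not taken from the paper.

```python
import random

p = 101  # small prime standing in for the group order p(lambda)

def rerandomize(gz, s, t):
    """Fresh-looking DBP instance Gz' = Gz^s * G~^t, written in the exponent."""
    return (s * gz + t) % p

def map_back(Z1, R1, s, t):
    """Map a solution (Z', R') for Gz' back to one for Gz:
    1 = e(Z', Gz^s G~^t) e(R', G~)  =>  1 = e(Z'^s, Gz) e(Z'^t R', G~)."""
    return (s * Z1) % p, (t * Z1 + R1) % p

random.seed(0)
gz = random.randrange(1, p)                 # original DBP instance
s, t = random.randrange(1, p), random.randrange(p)
gz2 = rerandomize(gz, s, t)                 # rerandomized instance

# A (toy) solution for the rerandomized instance: any Z' != identity works
# here because we can solve for R' directly in the exponent model.
Z1 = random.randrange(1, p)
R1 = (-Z1 * gz2) % p
assert (Z1 * gz2 + R1) % p == 0             # solves the fresh instance

Z0, R0 = map_back(Z1, R1, s, t)
assert (Z0 * gz + R0) % p == 0              # solves the original instance
assert Z0 % p != 0                          # nontrivial: s != 0 and Z' != identity
```

This is the same rerandomization pattern that the reduction in the proof of Theorem 4 uses when embedding one \(\mathsf {DBP}\) instance into many keys.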

2.6 xRMA-Secure Fully Structure-Preserving Signature Scheme

We follow the notion of extended random message attacks and take one of the schemes in [2]. The definition is relative to a message sampling algorithm, \(\mathsf {SampleM}\), that takes \(gk\) and outputs messages \(\varvec{m}\) with some auxiliary information \(\omega \).

Definition 12

(Unforgeability against Extended Random Message Attacks) A signature scheme, \(\mathsf {xSIG}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Sign}, \mathsf {Vrf}\}\), is unforgeable against extended random message attacks (\(\mathsf {UF}\text {-}\mathsf {XRMA}\)) with respect to message sampling algorithm \(\mathsf {SampleM}\) if for any polynomial time adversary \(\mathcal{A}\)

$$\begin{aligned} \text {Adv} ^{\mathsf {uf\text {-}xrma}}_{\mathsf {xSIG},\mathcal{A}}(\lambda ) := {\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ),\\ (vk_{},sk_{}) \leftarrow \mathsf {Key}(gk),\\ (\varvec{m}, \omega ) \leftarrow \mathsf {SampleM}(gk),\\ \varvec{\sigma _{}} \leftarrow \mathsf {Sign}(sk_{},\varvec{m}),\\ (\sigma _{}^{\dagger },m^{\dagger }) \leftarrow \mathcal{A}(vk_{},\varvec{\sigma _{}},\varvec{m}, \omega ) \end{array} \,:\, \begin{array}{l} m^{\dagger } \not \in \varvec{m} \, \wedge \\ 1 = \mathsf {Vrf}(vk_{}, m^{\dagger }, \sigma _{}^{\dagger }) \end{array} \,\right] \end{aligned}$$
(5)

is negligible.

The next scheme is taken from [2] with two modifications to fit our construction. First, the message space is extended to sign messages consisting of \(\ell _{\mathsf {x}}\ge 1\) message blocks. Second, it takes randomness from \(\mathbb {Z}_p\) rather than \(\mathbb {Z}_p^*\) in the key generation. These differences do not affect its security properties, which we recall below.

figure b


Theorem 2

([2]) If the \(\mathsf {DDH}_{2}\), \(\mathsf {XDLIN}_{1}\), and \(\mathsf {co}\)-\(\mathsf {CDH}_2\) assumptions hold, then the above \(\mathsf {xSIG}\) is \(\mathsf {UF}\)-\(\mathsf {XRMA}\) with respect to any message sampling algorithm that takes \(gk\) as input and returns message blocks \((\tilde{F}_1^{m_i},\tilde{F}_2^{m_i},\tilde{U}_i^{m_i})\) with auxiliary information \(m_i\) for \(i = 1,\ldots ,\ell _{\mathsf {x}}\).

Theorem 3

The above \(\mathsf {xSIG}\) is fully structure preserving.

Proof

By inspection, it is clear that \( vk \) (modulo group description in \(gk\)), \( sk \), \(M\), and \(\sigma \) consist of source group elements, and \(\mathsf {xSIG}.\mathsf {Vrf} \) consists of evaluating PPEs.

Next we show that the following PPEs are satisfied if and only if the verification key and the secret key are in the range of \(\mathsf {xSIG}.\mathsf {Key} \).

$$\begin{aligned} \begin{array}{l} e(K_2, \tilde{G}) = e(G, \tilde{V}_1),\quad e(G, \tilde{V}_3) = e(K_2,\tilde{V}_2),\quad e(K_1, \tilde{V}_1) = e(V_7, \tilde{V}_8), \\ e(K_2, \tilde{V}_4) = e(G,\tilde{V}_5),\quad e(K_3, \tilde{G})\, e(K_4, \tilde{V}_2) = e(G, \tilde{V}_4). \end{array} \end{aligned}$$
(9)

Showing correctly generated keys satisfy the above relations is trivial. We argue the other direction as follows. Variables that define a key pair are a, b, \(\alpha \), \(\tau _1\), \(\tau _2\), \(\tau _3\) and \(\rho \). They are uniquely determined by \(\tilde{V}_2\), \(\tilde{V}_1\), \(K_1\), \(K_3\), \(K_4\), \(\tilde{V}_6\), and \(V_7\), respectively. We verify that the remaining \(\tilde{V}_3\), \(\tilde{V}_4\), \(\tilde{V}_5\), \(\tilde{V}_8\), and \(K_2\) are in the support of the correct distribution if the above relations are satisfied. The first equation is \(e(K_2,\tilde{G}) = e(G,\tilde{G})^{b}\) that defines \(K_2 = G^b\). The second equation is \(e(G,\tilde{V}_3) = e(G,\tilde{G})^{ba}\) that defines \(\tilde{V}_3= \tilde{G}^{ba}\). The third equation is \(e(G,\tilde{G})^{\alpha b} = e(G,\tilde{V}_8)^{\rho }\) that defines \(\tilde{V}_8= \tilde{G}^{\alpha b / \rho }\) for \(\rho \ne 0\). If \(\rho =0\), \(\tilde{V}_8\) can be an arbitrary value as described in the key generation. The fourth equation is \(e(G,\tilde{V}_4)^b = e(G,\tilde{V}_5)\) that defines \(\tilde{V}_5= \tilde{V}_4^b\). The last equation is \(e(G,\tilde{G})^{\tau _1 + a \tau _2} = e(G,\tilde{V}_4)\) that defines \(\tilde{V}_4= \tilde{G}^{\tau _1 + a \tau _2}\) as prescribed. \(\square \)
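As a quick sanity check of the easy direction (honestly generated keys satisfy (9)), the five equations can be replayed in a toy model where each group element is identified with its discrete logarithm mod a small prime, so \(e(A,B)\) is the product of the logs and a product in \({{\mathbb G}}_T\) becomes a sum of logs. This is insecure and purely illustrative; the exponents of the key elements are read off the proof above.

```python
import random

p = 101
random.seed(1)
e = lambda A, B: (A * B) % p   # toy pairing: product of discrete logs
G = Gt = 1                      # exponents of the generators G and G~

# Key-pair variables from the proof: a, b, alpha, tau1, tau2, rho (tau3 and
# V6~ do not appear in (9) and are omitted).
a, b, alpha, tau1, tau2, rho = (random.randrange(1, p) for _ in range(6))
K1, K2, K3, K4, V7 = alpha, b, tau1, tau2, rho
V1t, V2t, V3t = b, a, (b * a) % p
V4t = (tau1 + a * tau2) % p
V5t = (b * V4t) % p
V8t = (alpha * b * pow(rho, -1, p)) % p   # well-defined since rho != 0 here

assert e(K2, Gt) == e(G, V1t)
assert e(G, V3t) == e(K2, V2t)
assert e(K1, V1t) == e(V7, V8t)
assert e(K2, V4t) == e(G, V5t)
# A product in G_T corresponds to a sum of logs in this model.
assert (e(K3, Gt) + e(K4, V2t)) % p == e(G, V4t)
```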

Proving a Correct Secret Key for a Signature In the above \(\mathsf {xSIG}\), there are several secret keys for a verification key, and each secret key yields signatures with a different distribution. It is possible to efficiently prove one’s possession of a secret key used to create the signature in question by proving the following relation.

$$\begin{aligned} \begin{array}{ll} e(\underline{K_2}, \underline{\tilde{G}^r}) = e(S_4, \tilde{G})\, e(S_5, \tilde{V}_1), &{}e(S_1, \tilde{G}) = e(\underline{K_1}, \tilde{G})\,e(\underline{K_3}, \underline{\tilde{G}^r}), \\ e(S_3, \tilde{G}) = e(\underline{G^z}, \tilde{V}_1), &{}e(S_2, \tilde{G})\,e(\underline{G^z}, \tilde{G}) = e(\underline{K_4}, \underline{\tilde{G}^r}) \end{array} \end{aligned}$$
(10)

Here, z and r are the random coins used for the signature. A Groth–Sahai zero-knowledge proof for the underlined group elements as witnesses can be constructed using techniques from [20, 29].

Consider a verification key and a signature that satisfy the verification equations in (8) and a secret key that satisfies (9) with respect to the verification key. Suppose that they also satisfy the equations in (10). Define \(r_1\) and \(r_2\) by \(r_1 = \log _{G} S_5\) and \(r_2 = \log _{K_2} S_4\). Parameter b is defined by \(b = \log _{G} K_2 = \log _{\tilde{G}} \tilde{V}_1\). In the exponent, the first relation in (10) reads \(b r = b r_2 + b r_1\), which ensures correctness of \(G^r\) with respect to \(S_4\) and \(S_5\). The second relation in (10) then guarantees \(S_1= K_1 K_3^r\) for this r, \(K_1\), and \(K_3\). The third relation in (10) proves that \(S_3= G^{z \log _{\tilde{G}} \tilde{V}_1} = G^{zb} = {K_2}^z\) for some z determined by \(G^z\). Finally, the last relation in (10) ensures \(S_2= {K_4}^r\, G^{-z}\). Thus, a secret key fulfilling all relations in (10) satisfies the relations in (7) with respect to the signature and the verification key. Namely, it is the secret key used to create the signature.
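The exponent arithmetic in this argument can likewise be checked mechanically in a toy discrete-log model (elements are their logs mod a small prime, the pairing is multiplication of logs). The sketch builds \(S_1,\ldots ,S_5\) exactly as the argument prescribes and verifies all four relations in (10); it is insecure and illustrative only.

```python
import random

p = 101
random.seed(2)
e = lambda A, B: (A * B) % p   # toy pairing: product of discrete logs
G = Gt = 1

b, K1, K3, K4, z, r1, r2 = (random.randrange(1, p) for _ in range(7))
K2 = V1t = b                   # K2 = G^b, V1~ = G~^b
r = (r1 + r2) % p
S5 = r1                        # S5 = G^{r1}
S4 = (b * r2) % p              # S4 = K2^{r2}
S1 = (K1 + K3 * r) % p         # S1 = K1 * K3^r
S3 = (b * z) % p               # S3 = K2^z
S2 = (K4 * r - z) % p          # S2 = K4^r * G^{-z}
Gr = r                         # G~^r

assert e(K2, Gr) == (e(S4, Gt) + e(S5, V1t)) % p   # b r = b r2 + r1 b
assert e(S1, Gt) == (e(K1, Gt) + e(K3, Gr)) % p    # S1 = K1 K3^r
assert e(S3, Gt) == e(z, V1t)                      # S3 = K2^z (G^z has log z)
assert (e(S2, Gt) + e(z, Gt)) % p == e(K4, Gr)     # S2 G^z = K4^r
```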

3 Trapdoor Commitment Schemes

In this section, we construct a structure-preserving shrinking trapdoor commitment scheme as defined in Sect. 3.1. We first construct a commitment scheme in Sect. 3.2 that is almost structure preserving in the sense that the messages for computing commitments are not group elements but scalar values. This slight relaxation allows us to implement the shrinking property by exploiting the one-wayness of the mapping from scalar values to group elements. A complete scheme is then constructed in Sect. 3.3 by combining the building block from Sect. 3.2 with a one-time structure-preserving signature scheme that actually binds messages to commitments.

3.1 Definitions

We adopt the following standard syntax for trapdoor commitment schemes.

Definition 13

(Trapdoor Commitment Scheme) A trapdoor commitment scheme \(\mathsf {TC}\) is a tuple of polynomial time algorithms \(\mathsf {TC}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Com}, \mathsf {Vrf}, \mathsf {SimCom}, \mathsf {Equiv}\}\) that:

  • \(\mathsf {Setup}(1^\lambda )\rightarrow {gk}\): A common parameter generation algorithm that takes security parameter \(\lambda \) and outputs a common parameter, \(gk\). It determines the message space \(\mathcal{M}\), the commitment space \(\mathcal{C}\), and the opening space \(\mathcal{I}\).

  • \(\mathsf {Key}(gk)\rightarrow {(ck,tk)}\): A key generation algorithm that takes \(gk\) as input and outputs a commitment key, \(ck\), and a trapdoor key, \(tk\).

  • \(\mathsf {Com}(ck,m)\rightarrow {(com,open)}\): A commitment algorithm that takes \(ck\) and message \(m\in \mathcal{M}\) and outputs a commitment, \(com\in \mathcal{C}\), and opening information, \(open\in \mathcal{I}\).

  • \(\mathsf {Vrf}(ck,com,m, open)\rightarrow {1/0}\): A verification algorithm that takes \(ck\), \(com\), \(m\), and \(open\) as input and outputs 1 or 0, representing acceptance or rejection, respectively.

  • \(\mathsf {SimCom}(gk)\rightarrow {(com,ek)}\): A sampling algorithm that takes common parameter \(gk\) and outputs a commitment \(com\) and an equivocation key \(ek\).

  • \(\mathsf {Equiv}(m,ek,tk)\rightarrow {open}\): An equivocation algorithm that takes \(m\in \mathcal{M}\), \(ek\), and \(tk\) as input and returns \(open\).

The trapdoor commitment scheme is correct if, for all \(\lambda \in \mathbb {N}\), \(gk\leftarrow \mathsf {Setup}(1^\lambda )\), \((ck, tk) \leftarrow \mathsf {Key}(gk)\), \(m\in \mathcal{M}\), and \((com, open) \leftarrow \mathsf {Com}(ck,m)\), it holds that \(1 = \mathsf {Vrf}(ck, com, m, open)\). Furthermore, it is statistically trapdoor if for any unbounded stateful adversary \(\mathcal{A}\) outputting \(m\in \mathcal{M}\)

$$\begin{aligned} \left| \Pr \left[ \begin{array}{c}gk\leftarrow \mathsf {Setup}(1^\lambda ) ; (ck,tk) \leftarrow \mathsf {Key}(gk) ; m\leftarrow \mathcal{A}(ck,tk)\\ ({com}_0,{open}_0)\leftarrow \mathsf {Com}(ck,m); b\leftarrow \{0,1\} \\ ({com}_1, ek) \leftarrow \mathsf {SimCom}(gk); {open}_1 \leftarrow \mathsf {Equiv}(m,ek,tk)\end{array} :\mathcal{A}({com}_b,{open}_b)=b \right] -\frac{1}{2} \right| \end{aligned}$$

is negligible in \(\lambda \).

A trapdoor commitment scheme is structure-preserving relative to group generator \(\mathcal{G}\) if its common parameter \(gk\) includes a description of bilinear groups generated by \(\mathcal{G}\) and its commitment keys, messages, commitments, and opening information consist only of source group elements, and the verification function consists only of evaluating group membership and relations described by pairing product equations.

From now on, we focus on the binding property, which is what matters for our purpose. The standard binding property requires that it be infeasible for any polynomial time adversary to find two distinct messages and openings for a single commitment value \(com\). As we use a commitment scheme mostly as a hash function, we refer to the binding property as collision resistance, using terminology for hash functions. A weaker notion, known as target collision resistance, asks the adversary to find a collision for a given message. An intermediate notion is introduced in [28] as honest-sender binding, where the adversary chooses the message for which an honest commitment is made. Thus, the adversary does not choose the randomness used to create the target commitment, but gets to see it and tries to create a different opening to a different message. Following [9], we use a refined notion of [28] called chosen-message target collision resistance (\(\mathsf {CMTCR}\)) that handles an arbitrary number of messages.

Definition 14

(Chosen-Message Target Collision Resistance) For a trapdoor commitment scheme, \(\mathsf {TC}\), let \({\mathcal{O}}_{ck}\) denote an oracle that, given \(m\in \mathcal{M}\), executes \((com, open) \leftarrow \mathsf {Com}(ck,m)\), records \((com,m)\) in Q, and returns \((com,open)\). We say \(\mathsf {TC}\) is chosen-message target collision resistant if for any polynomial time adversary \(\mathcal{A}\) the advantage defined by

$$\begin{aligned}&\text {Adv} ^{\mathsf {cmtcr}}_{\mathsf {TC},\mathcal{A}}(\lambda )\nonumber \\&\qquad ={\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ),\\ (ck,tk) \leftarrow \mathsf {Key}(gk),\\ (com^{\dagger },m^{\dagger },open^{\dagger }) \leftarrow \mathcal{A}^{{\mathcal{O}}_{ck}}(ck) \end{array} \,:\, \begin{array}{l} {(com^{\dagger }, *)} \in Q \wedge (com^{\dagger },m^{\dagger }) \notin Q \wedge \\ 1 = \mathsf {Vrf}(ck, com^{\dagger }, m^{\dagger }, open^{\dagger }) \end{array} \,\right] \nonumber \\ \end{aligned}$$
(11)

is negligible in security parameter \(\lambda \).

A structure-preserving commitment scheme is shrinking if the number of group elements in \(com\) is strictly smaller than that in \(m\). The impossibility argument of [8] shows that if a structure-preserving commitment scheme is shrinking, then there exists an adversary that can find two openings and messages that are consistent with a commitment. It is essential for the impossibility argument that the adversary is allowed to choose the randomness used for the target commitment. We stress that this is not the case for the adversary in the \(\mathsf {CMTCR}\) game.

3.2 Message-Transposing Commitment Scheme

We introduce a commitment scheme with the property that the message space \({\mathcal{M}}^{com}\) for creating a commitment and the space \({\mathcal{M}}^{ver}\) for verification differ, and there exists an efficiently computable bijection \(\gamma :{\mathcal{M}}^{com}\rightarrow {\mathcal{M}}^{ver}\). As a message in \({\mathcal{M}}^{com}\) is bound to a commitment through the function \(\gamma \), we call such a scheme a message-transposing commitment scheme. A formal definition follows.

Definition 15

(Message-Transposing Commitment Scheme) A message-transposing commitment scheme is a set of algorithms \({\mathsf {MTC}}= \{\mathsf {Setup}, \mathsf {Key}, \mathsf {Com}, \mathsf {Vrf}, \mathsf {SimCom}, \mathsf {Equiv}\}\) such that

  • \(\mathsf {Setup}(1^\lambda )\rightarrow {gk}\): A setup function that, given a security parameter \(\lambda \), generates common parameter \(gk\), which defines message spaces \({\mathcal{M}}^{com}\) for commitment generation and \({\mathcal{M}}^{ver}\) for verification, and an efficiently computable bijection \(\gamma : {\mathcal{M}}^{com}\rightarrow {\mathcal{M}}^{ver}\). It also determines the commitment space \(\mathcal{C}\), and the opening space \(\mathcal{I}\).

  • \(\mathsf {Key}(gk)\rightarrow {(ck,tk)}\): A key generation algorithm that takes \(gk\) and outputs a public commitment key, \(ck\), and a trapdoor key, \(tk\).

  • \(\mathsf {Com}(ck,m)\rightarrow {(com,open)}\): A commitment algorithm that takes \(ck\) and message \(m\in {\mathcal{M}}^{com}\) and outputs a commitment, \(com\in \mathcal{C}\), and an opening information, \(open\in \mathcal{I}\).

  • \(\mathsf {Vrf}(ck,com,M, open)\rightarrow {1/0}\): A verification algorithm that takes \(ck\), \(com\), \(M\in {\mathcal{M}}^{ver}\), and \(open\) as inputs, and outputs 1 or 0 representing acceptance or rejection, respectively.

  • \(\mathsf {SimCom}(gk)\rightarrow {(com,ek)}\): A sampling algorithm that takes common parameter \(gk\) and outputs commitment \(com\) and equivocation key \(ek\).

  • \(\mathsf {Equiv}(M,ek,tk)\rightarrow {open}\): An equivocation algorithm that takes \(M\in {\mathcal{M}}^{ver}\), \(ek\), and \(tk\) as input and returns \(open\).

Correctness and the statistical trapdoor property are defined as in Definition 13, with trivial adaptation to the message spaces.

We say that a message-transposing commitment scheme is structure preserving with respect to verification if \(ck\), \(com\), \(open\), and \({\mathcal{M}}^{ver}\) consist of source group elements of bilinear groups and the verification function consists only of evaluating group membership and pairing product equations. It is shrinking if the number of group elements in a commitment is strictly less than that in the corresponding message for verification.

Next we formally define the security notions: message-transposing target collision resistance and message-transposing collision resistance. As with the ordinary notions of collision resistance, message-transposing collision resistance implies message-transposing target collision resistance.

Definition 16

(Message-Transposing Target Collision Resistance) For a message-transposing commitment scheme, \({\mathsf {MTC}}\), let \(\varvec{com}\) and \(\varvec{open}\) denote vectors of commitment and openings produced by \(\mathsf {Com}\) for uniformly sampled messages \(\varvec{m}\). We say \({\mathsf {MTC}}\) is message-transposing target collision resistant if for any polynomial time adversary \(\mathcal{A}\) the advantage function

$$\begin{aligned}&\text {Adv} ^{\mathsf {tcr}}_{{\mathsf {MTC}},\mathcal{A}}(\lambda )\nonumber \\&\quad ={\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ), (ck, tk) \leftarrow \mathsf {Key}(gk),\\ \varvec{m} \leftarrow {\mathcal{M}}^{com}, (\varvec{com},\varvec{open}) \leftarrow \mathsf {Com}(ck, \varvec{m}),\\ (com,M,open) \leftarrow \mathcal{A}(ck, \varvec{m}, \varvec{com},\varvec{open})\\ \end{array} \,:\, \begin{array}{l} com \in \varvec{com} \, \wedge \, M \not \in \gamma (\varvec{m}) \, \wedge \\ 1 = \mathsf {Vrf}(ck, com, M, open) \end{array} \,\right] \end{aligned}$$

is negligible in security parameter \(\lambda \).

Definition 17

(Message-Transposing Collision Resistance) A message-transposing commitment scheme, \({\mathsf {MTC}}\), is message-transposing collision resistant if for any polynomial time adversary \(\mathcal{A}\) the advantage

$$\begin{aligned}&\text {Adv} ^{\mathsf {cr}}_{{\mathsf {MTC}},\mathcal{A}}(\lambda )\\&\quad ={\Pr }\left[ \, \begin{array}{l} gk\leftarrow \mathsf {Setup}(1^\lambda ), (ck, tk) \leftarrow \mathsf {Key}(gk),\\ (com,M_1,{open}_1,M_2,{open}_2) \leftarrow \mathcal{A}(ck)\\ \end{array} \,:\, \begin{array}{l} M_1 \ne M_2 \wedge \\ 1 = \mathsf {Vrf}(ck, com, M_1, {open}_1) \wedge \\ 1 = \mathsf {Vrf}(ck, com, M_2, {open}_2) \end{array} \,\right] \end{aligned}$$

is negligible in the security parameter \(\lambda \).

Now we present a concrete scheme for a structure-preserving message-transposing trapdoor commitment scheme for \(\gamma :\mathbb {Z}_p\rightarrow {{\mathbb G}}\). For our purpose, we only require target collision resistance but the construction satisfies the stronger notion.

figure c

Theorem 4

\({\mathsf {MTC}}\) is correct, statistical trapdoor, and structure preserving with respect to verification. It is message-transposing collision resistant if the \(\mathsf {DBP}\) assumption holds.

Proof

Correctness is verified as

$$\begin{aligned} e(R, \tilde{G})\, \prod _{i=1}^{\ell } e(M_{i}, \tilde{X}_{i})&= e(G^{\zeta }, \tilde{G})\, \prod _{i=1}^{\ell } e(G^{m_{i}},\tilde{X}_{i}) \\&= e(G, \tilde{G}^{\zeta })\, e(G, \prod _{i=1}^{\ell } \tilde{X}_i^{m_i})\, = e(G, \tilde{G}_u). \end{aligned}$$

To see that the scheme is statistically trapdoor, observe that \(\mathsf {SimCom}\) outputs \(\tilde{G}_u\) distributed uniformly over \(\tilde{{{\mathbb G}}}^*\), whereas the one from \(\mathsf {Com}\) is distributed statistically close to uniform over \(\tilde{{{\mathbb G}}}\). The opening R output by \(\mathsf {Equiv}\) is then the unique value determined by the verification equation, since it satisfies

$$\begin{aligned} e(R, \tilde{G})\, \prod _{i=1}^{\ell } e(M_{i}, \tilde{X}_{i}) = e\left( G^{\omega _u} \prod _{i=1}^{\ell } M_{i}^{- x_{i}}, \tilde{G}\right) \, \prod _{i=1}^{\ell } e(M_{i}, \tilde{G}^{x_{i}}) = e(G, \tilde{G}_u). \end{aligned}$$

Finally, it is obviously structure preserving with respect to verification due to verification equation (12).
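The correctness and equivocation computations above can be replayed in a toy discrete-log model (elements are their logs mod a small prime, the pairing is multiplication of logs). The sketch assumes only what the two displayed equations state: \(\mathsf {Com}\) picks \(\zeta \) and sets \(R = G^{\zeta }\) and \(\tilde{G}_u = \tilde{G}^{\zeta } \prod _i \tilde{X}_i^{m_i}\), while \(\mathsf {Equiv}\) sets \(R = G^{\omega _u} \prod _i M_i^{-x_i}\); everything else is illustrative, and the model is of course insecure.

```python
import random

p, ell = 101, 3
random.seed(3)
e = lambda A, B: (A * B) % p   # toy pairing: product of discrete logs

x = [random.randrange(1, p) for _ in range(ell)]   # tk; ck is X_i~ = G~^{x_i}

def verify(R, M, Gu):
    # e(R, G~) * prod_i e(M_i, X_i~) == e(G, G_u~), written in logs.
    lhs = (e(R, 1) + sum(e(Mi, xi) for Mi, xi in zip(M, x))) % p
    return lhs == e(1, Gu)

# Com: zeta <- Z_p, R = G^zeta, G_u~ = G~^zeta * prod_i X_i~^{m_i};
# the verification message is M_i = G^{m_i} = gamma(m_i).
m = [random.randrange(p) for _ in range(ell)]
zeta = random.randrange(p)
R = zeta
Gu = (zeta + sum(xi * mi for xi, mi in zip(x, m))) % p
M = m
assert verify(R, M, Gu)                            # correctness

# SimCom / Equiv: commit first to G_u~ = G~^{omega_u}, then open to any m'
# using R = G^{omega_u} * prod_i M_i^{-x_i}.
omega_u = random.randrange(1, p)                   # equivocation key ek
m2 = [random.randrange(p) for _ in range(ell)]
R2 = (omega_u - sum(xi * mi for xi, mi in zip(x, m2))) % p
assert verify(R2, m2, omega_u)                     # equivocation
```

Note how the trapdoor opening works for any message: the exponents \(x_i\) let the simulator solve the single verification equation for R after the commitment is fixed.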

Next we prove the message-transposing collision resistance. Let \(\mathcal{A}\) be an adversary that breaks the \(\mathsf {CR} \) security of \({{\mathsf {MTC}}}\). We construct an algorithm \(\mathcal{{B}}\) that attacks the \(\mathsf {DBP}\) with black-box access to \(\mathcal{A}\). Given an instance \((e, {{\mathbb G}}, \tilde{{{\mathbb G}}}, G, \tilde{G}, \tilde{G}_z)\) of the \(\mathsf {DBP}\), algorithm \(\mathcal{{B}}\) sets up the key \(ck\) as follows. Set \(gk:=(p,{{\mathbb G}},\tilde{{{\mathbb G}}},{{\mathbb G}}_T, e, G, \tilde{G})\). For \(i=1,\ldots ,\ell \), choose \((\xi _{i}, \varphi _{i}) \leftarrow (\mathbb {Z}_p^*)^2\) and set \(\tilde{X}_{i} :=(\tilde{G}_z)^{\xi _{i}}\,\tilde{G}^{\varphi _{i}}\). Then give \(ck:=(gk, \tilde{X}_1,\ldots ,\tilde{X}_{\ell })\) to \(\mathcal{A}\).

Suppose that \(\mathcal{A}\) outputs \((\tilde{G}_u, R_1, M_1, R_2, M_2)\) that passes the verification as required. \(\mathcal{{B}}\) then outputs \((Z^{{\dagger }}, R^{{\dagger }})\) where

$$\begin{aligned} R^{{\dagger }} :=\frac{R_1}{R_2} \, \prod _{i=1}^{\ell } \left( \frac{M_{1i}}{M_{2i}}\right) ^{\varphi _{i}}\text {, and} \quad Z^{{\dagger }} :=\prod _{i=1}^{\ell } \left( \frac{M_{1i}}{M_{2i}} \right) ^{\xi _{i}}, \end{aligned}$$
(13)

as the answer to the \(\mathsf {DBP}\). This completes the description of \(\mathcal{{B}}\).

We first verify that the simulated \(ck\) is correctly distributed. In the key generation, \(gk\) is set legitimately to the given output of \(\mathcal{G}\). Each simulated \(\tilde{X}_{i}\) is distributed uniformly over \(\tilde{{{\mathbb G}}}\), whereas the real one is distributed uniformly over \(\tilde{{{\mathbb G}}}^*\). Thus, the simulated \(ck\) is statistically close to the real one.

We then argue that the resulting \((Z^{{\dagger }}, R^{{\dagger }})\) is a valid answer to the given instance of the \(\mathsf {DBP}\). Since the output from \(\mathcal{A}\) satisfies the verification equation, we have

$$\begin{aligned} 1&= e\left( \frac{R_1}{R_2}, \tilde{G}\right) \, \prod _{i=1}^{\ell } e\left( \frac{M_{1i}}{M_{2i}}, (\tilde{G}_z)^{\xi _{i}}\tilde{G}^{\varphi _{i}} \right) \end{aligned}$$
(14)
$$\begin{aligned}&= e\left( \prod _{i=1}^{\ell } \left( \frac{M_{1i}}{M_{2i}} \right) ^{\xi _{i}}, {\tilde{G}_z} \right) \, e\left( \frac{R_1}{R_2} \prod _{i=1}^{\ell } \left( \frac{M_{1i}}{M_{2i}} \right) ^{\varphi _{i}}, \tilde{G}\right) = e(Z^{{\dagger }}, {\tilde{G}_z}) \,e(R^{{\dagger }}, \tilde{G}). \end{aligned}$$
(15)

Observe that every \(\xi _{i}\) is independent of the view of \(\mathcal{A}\), as it is information-theoretically hidden in \(\tilde{X}_{i}\). Since a valid output from \(\mathcal{A}\) satisfies \(M_1 \ne M_2\), there exists an index \(i^{{\dagger }} \in \{1,\ldots ,\ell \}\) with \(M_{1i^{{\dagger }}} \ne M_{2i^{{\dagger }}}\). Thus, conditioned on the view of \(\mathcal{A}\), \(Z^{{\dagger }}\) is distributed like the factor \((M_{1i^{{\dagger }}}/M_{2i^{{\dagger }}})^{\xi _{i^{{\dagger }}}}\). Since \(M_{1i^{{\dagger }}}/M_{2i^{{\dagger }}} \ne 1\) and \(\xi _{i^{{\dagger }}}\) is uniform over \(\mathbb {Z}_p^*\), we conclude that \(Z^{{\dagger }} = 1\) occurs only with negligible probability.

Thus, \(\mathcal{{B}}\) breaks the \(\mathsf {DBP}\) assumption with essentially the same success probability and running time as \(\mathcal{A}\) breaking the message-transposing collision resistance of \({\mathsf {MTC}}\). \(\square \)
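The core algebra of this reduction, namely that the output (13) satisfies the \(\mathsf {DBP}\) relation (15) whenever \(\mathcal{A}\)'s output verifies, can also be checked in the toy discrete-log model (elements as logs mod a small prime). The "collision finder" below is a thought experiment that knows the trapdoor exponents \(x_i\); a real adversary need not work this way, and the model is insecure by design.

```python
import random

p, ell = 101, 3
random.seed(4)

gz = random.randrange(1, p)                          # DBP instance Gz~ (as a log)
xi  = [random.randrange(1, p) for _ in range(ell)]   # xi_i
phi = [random.randrange(1, p) for _ in range(ell)]   # varphi_i
x = [(xi[i] * gz + phi[i]) % p for i in range(ell)]  # X_i~ = Gz~^{xi_i} G~^{phi_i}

# Commit as in MTC: R = G^zeta, G_u~ = G~^zeta * prod_i X_i~^{m_i}.
m1 = [random.randrange(p) for _ in range(ell)]
zeta = random.randrange(p)
R1 = zeta
Gu = (zeta + sum(a * b for a, b in zip(x, m1))) % p

# A collision, forged using knowledge of the exponents x: open the same
# commitment Gu to a second message m2.
m2 = [random.randrange(p) for _ in range(ell)]
R2 = (Gu - sum(a * b for a, b in zip(x, m2))) % p

# Map the collision to a DBP answer as in (13); M_{1i}/M_{2i} has log d_i.
d = [(a - b) % p for a, b in zip(m1, m2)]
Z = sum(xi[i] * d[i] for i in range(ell)) % p
R = ((R1 - R2) + sum(phi[i] * d[i] for i in range(ell))) % p

assert (Z * gz + R) % p == 0   # (Z, R) satisfies e(Z, Gz~) e(R, G~) = 1
```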

3.3 Structure-Preserving Shrinking Trapdoor Commitment Scheme

We construct trapdoor commitment scheme \(\mathsf {TC}\) by combining partially one-time signature scheme \(\mathsf {POS}\) and message-transposing trapdoor commitment scheme \({\mathsf {MTC}}\). A key idea is to commit to both long-term and one-time secret keys of \(\mathsf {POS}\) by using shrinking \({\mathsf {MTC}}\) and allow verification of the commitment using corresponding public keys as opening information. For this to be possible, the bijection \(\gamma \) must correspond to the mapping from the secret key space to the public key space of \(\mathsf {POS}\). More precisely, we require the following properties satisfied by \(\mathsf {POS}\) and \({\mathsf {MTC}}\). Let \({\mathcal{M}}_{\mathsf {pos} }\) be the message space of \(\mathsf {POS}\) defined with respect to \(gk\). We denote the key spaces as \({\mathcal{K}}_{\mathsf {pos} }^{vk}\), \({\mathcal{K}}_{\mathsf {pos} }^{sk}\), \({\mathcal{K}}_{\mathsf {pos} }^{ovk}\), and \({\mathcal{K}}_{\mathsf {pos} }^{osk}\) in a self-explanatory manner. There must exist efficiently computable bijections \(\gamma _{sk}:{\mathcal{K}}_{\mathsf {pos} }^{sk}\rightarrow {\mathcal{K}}_{\mathsf {pos} }^{vk}\) and \(\gamma _{osk}:{\mathcal{K}}_{\mathsf {pos} }^{osk}\rightarrow {\mathcal{K}}_{\mathsf {pos} }^{ovk}\), and the \({\mathsf {MTC}}\) is for \(\gamma := \gamma _{sk} \times \gamma _{osk}^{(1)} \times \cdots \times \gamma _{osk}^{(k)}\). It is also required that \(\mathsf {POS}\) and \({\mathsf {MTC}}\) have a common setup function, \(\mathsf {Setup}\), that outputs \(gk\) based on \(\mathsf {POS}.\mathsf {Setup} \) and \({\mathsf {MTC}}.\mathsf {Setup} \), as mentioned in Sect. 2.3. (When instantiated from \(\mathsf {POS}\) in Sect. 2.5 and \({\mathsf {MTC}}\) from Sect. 3.2, \(\mathsf {Setup}\) is as simple as running \(gk\leftarrow \mathcal{G}(1^\lambda )\).) The construction is as follows.

[Figure d: The construction of trapdoor commitment scheme \(\mathsf {TC}\) from \(\mathsf {POS}\) and \({\mathsf {MTC}}\).]
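For intuition, in the discrete-logarithm-based instantiations the bijections \(\gamma _{sk}\) and \(\gamma _{osk}\) are plain exponentiations mapping a secret exponent to the corresponding public group element. A minimal sketch over a toy subgroup of \(\mathbb {Z}_p^*\) (the parameters are illustrative assumptions, not the paper's bilinear groups):

```python
# Toy prime-order group: the order-q subgroup of Z_p^* (multiplicative).
# In a DL-based instantiation, gamma maps a POS secret key (an exponent)
# to the corresponding public key, here g^x mod p.
p, q, g = 23, 11, 4  # g = 4 generates the order-11 subgroup of Z_23^*

def gamma(x: int) -> int:
    """Bijection from the secret-key space Z_q to the public-key space <g>."""
    return pow(g, x, p)

# gamma is a bijection: distinct exponents map to distinct public keys,
# so committing to secret keys and verifying the opening via the public
# keys (as TC does) is consistent.
assert len({gamma(x) for x in range(q)}) == q
```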

Theorem 5

The commitment scheme \(\mathsf {TC}\) described above is \(\mathsf {CMTCR} \) if \(\mathsf {POS}\) is \({\mathsf {MK}{\text {-}}\mathsf {OT}{\text {-}}\mathsf {NACMA}} \), and \({\mathsf {MTC}}\) is message-transposing target collision resistant.

Proof

We follow the game transition framework. Let Game 0 be the \(\mathsf {CMTCR}\) game launched by adversary \(\mathcal{A}\). We denote by \(com^{\dagger }=com_{\mathsf {mtc} }^{\dagger }\), \(open^{\dagger } = (open_{\mathsf {mtc} }^{\dagger }, vk_{\mathsf {pos} }^{\dagger }, {ovk_{\mathsf {pos} }^{\dagger }}^{(1)}, \ldots , {ovk_{\mathsf {pos} }^{\dagger }}^{(k)},{\sigma _{\mathsf {pos} }^{\dagger }}^{(1)},\ldots ,{\sigma _{\mathsf {pos} }^{\dagger }}^{(k)} )\), and \(M^{\dagger }=({M^{\dagger }}^{(1)}, \ldots , {M^{\dagger }}^{(k)})\) the collision that \(\mathcal{A}\) outputs.

In Game 1, abort if \((vk_{\mathsf {pos} }^{\dagger },{ovk_{\mathsf {pos} }^{\dagger }}^{(1)}, \ldots , {ovk_{\mathsf {pos} }^{\dagger }}^{(k)})\) differs from all tuples \((vk_{\mathsf {pos} }^{[i]},ovk_{\mathsf {pos} }^{(1)}, \ldots ,ovk_{\mathsf {pos} }^{(k)})\) observed by the signing oracle. We claim that if such an abort happens, then \({\mathsf {MTC}}\) is broken. We show this by constructing adversary \(\mathcal{{B}}\) attacking the message-transposing target collision resistance of \({\mathsf {MTC}}\). Adversary \(\mathcal{{B}}\) is given \(ck_{\mathsf {mtc} }\) and \(q_s\) reference commitments \(com_{\mathsf {mtc} }\) and openings \(open_{\mathsf {mtc} }\) for random messages of the form \((sk_{\mathsf {pos} }^{[i]},osk_{\mathsf {pos} }^{(1)},\ldots ,osk_{\mathsf {pos} }^{(k)})\). Each message is uniquely mapped to \((vk_{\mathsf {pos} }^{[i]},ovk_{\mathsf {pos} }^{(1)},\ldots ,ovk_{\mathsf {pos} }^{(k)})\) by bijection \(\gamma \). Adversary \(\mathcal{{B}}\) invokes \(\mathcal{A}\) with \(ck:=ck_{\mathsf {mtc} }\) as input. For every commitment query M, adversary \(\mathcal{{B}}\) takes a fresh sample \((sk_{\mathsf {pos} }^{[i]},osk_{\mathsf {pos} }^{(1)},\ldots ,osk_{\mathsf {pos} }^{(k)})\) with its commitment \(com_{\mathsf {mtc} }\) and opening \(open_{\mathsf {mtc} }\), and computes \(\sigma _{\mathsf {pos} }^{(j)} \leftarrow \mathsf {POS}.\mathsf {Sign} (sk_{\mathsf {pos} },osk_{\mathsf {pos} }^{(j)},M^{(j)})\) for \(j=1,\ldots ,k\). It then returns \(com:=com_{\mathsf {mtc} }\) and \(open:=(open_{\mathsf {mtc} }, vk_{\mathsf {pos} }, ovk_{\mathsf {pos} }^{(1)},\ldots , ovk_{\mathsf {pos} }^{(k)}, \sigma _{\mathsf {pos} }^{(1)}, \ldots , \sigma _{\mathsf {pos} }^{(k)} )\). 
If \(\mathcal{A}\) eventually outputs a collision, \(\mathcal{{B}}\) outputs \(com_{\mathsf {mtc} }^{{\dagger }} :=com_{\mathsf {mtc} }^{\dagger }\), \(open_{\mathsf {mtc} }^{{\dagger }} :=open_{\mathsf {mtc} }^{\dagger }\) and \(M^{{\dagger }} :=(vk_{\mathsf {pos} }^{\dagger }, {ovk_{\mathsf {pos} }^{\dagger }}^{(1)}, \ldots , {ovk_{\mathsf {pos} }^{\dagger }}^{(k)})\). This completes the description of \(\mathcal{{B}}\).

The simulated commitments \(com\) and openings \(open\) have the same distribution as the real ones since every \(osk_{\mathsf {pos} }^{(j)}\) is sampled legitimately by the challenger and the commitment generation procedure is the genuine one. Furthermore, the output of \(\mathcal{{B}}\) is a valid collision against \({\mathsf {MTC}}\) since \(\mathcal{A}\) must have chosen \(com^{\dagger } (= com_{\mathsf {mtc} }^{\dagger })\) from previously used commitments and \(M^{{\dagger }}\) must be fresh for the attack to be successful by definition. Accordingly, we have \(|\Pr [\text{ Game } \; 0] - \Pr [\text{ Game } \;1]| \le \text {Adv} ^{\mathsf {tcr}}_{{\mathsf {MTC}},\mathcal{B}}(\lambda )\).

We then argue that \(\mathcal{A}\) wins in Game 1 only if \(\mathsf {POS}\) is broken. Let \(\mathcal{C}\) be an adversary attacking the \({\mathsf {MK}{\text {-}}\mathsf {OT}{\text {-}}\mathsf {NACMA}} \) property of \(\mathsf {POS}\). Given \(gk\), it executes \((ck_{\mathsf {mtc} },tk_{\mathsf {mtc} }) \leftarrow {\mathsf {MTC}}.\mathsf {Key} (gk)\). Then it invokes \(\mathcal{A}\) with input \(ck:=ck_{\mathsf {mtc} }\). For the i-th query \(M^{[i]}=(M^{[i],(1)},\ldots ,M^{[i],(k)})\), \(\mathcal{C}\) makes a key generation query to \(\mathcal{O}_{k}\) to obtain \(vk_{\mathsf {pos} }^{[i]}\), and then makes signing queries to \(\mathcal{O}_{s}\) for \((M^{[i],(1)},\ldots ,M^{[i],(k)})\) with respect to \(vk_{\mathsf {pos} }^{[i]}\). On receiving the corresponding signatures from \(\mathcal{O}_{s}\), \(\mathcal{C}\) computes \((com_{\mathsf {mtc} },ek_{\mathsf {mtc} }) \leftarrow {\mathsf {MTC}}.\mathsf {SimCom} (gk)\) and \(open_{\mathsf {mtc} }\leftarrow {\mathsf {MTC}}.\mathsf {Equiv} ((vk_{\mathsf {pos} }^{[i]}, {{ovk_{\mathsf {pos} }}}^{(1)}, \ldots , {{ovk_{\mathsf {pos} }}}^{(k)}), ek_{\mathsf {mtc} }, tk_{\mathsf {mtc} })\) and outputs \(com:=com_{\mathsf {mtc} }\) and \(open:=(open_{\mathsf {mtc} },vk_{\mathsf {pos} }^{[i]}, ovk_{\mathsf {pos} }^{(1)}, \ldots , ovk_{\mathsf {pos} }^{(k)}, \sigma _{\mathsf {pos} }^{(1)}, \ldots , \sigma _{\mathsf {pos} }^{(k)} )\). On receiving a collision from \(\mathcal{A}\), \(\mathcal{C}\) searches for an index \(i^{{\dagger }}\) such that \(vk_{\mathsf {pos} }^{\dagger } = vk_{\mathsf {pos} }^{[i^{{\dagger }}]}\). Note that this search always succeeds if the game does not abort. 
It then finds an index \(j^{{\dagger }}\) such that \({M^{\dagger }}^{(j^{{\dagger }})} \ne M^{[i^{{\dagger }}],(j^{{\dagger }})}\) (such an index must exist since \(M^{\dagger }\) differs from the messages queried at the \(i^{{\dagger }}\)-th query) and outputs \(vk_{\mathsf {pos} }^{[i^{{\dagger }}]}\), \(ovk_{\mathsf {pos} }^{(j^{{\dagger }})}\), and \(M^{{\dagger }} :={M^{\dagger }}^{(j^{{\dagger }})}\). This completes the description of \(\mathcal{C}\). The simulated signatures are statistically close to the real ones due to the statistical trapdoor property of \({\mathsf {MTC}}.\mathsf {SimCom} \) and \({\mathsf {MTC}}.\mathsf {Equiv} \). Thus, we have \(\Pr [\text{ Game } \;1] - \epsilon _{\mathsf {sim}} \le \text {Adv} ^{\mathsf {mk\text {-}ot\text {-}nacma}}_{\mathsf {POS},\mathcal{C}}(\lambda )\), where \(\epsilon _{\mathsf {sim}}\) is the statistical loss from switching to \({\mathsf {MTC}}.\mathsf {SimCom} \) and \({\mathsf {MTC}}.\mathsf {Equiv} \).

All in all, we have

$$\begin{aligned} \text {Adv} ^{\mathsf {cmtcr}}_{\mathsf {TC},\mathcal{A}}(\lambda ) \le \text {Adv} ^{\mathsf {tcr}}_{{\mathsf {MTC}},\mathcal{B}}(\lambda ) + \text {Adv} ^{\mathsf {mk\text {-}ot\text {-}nacma}}_{\mathsf {POS},\mathcal{C}}(\lambda ) + \epsilon _{\mathsf {sim}},\end{aligned}$$

which proves the statement. \(\square \)

The following is immediate from the construction. In particular, correctness holds due to the correctness of \({\mathsf {MTC}}\) and \(\mathsf {POS}\) and the existence of a bijection from the secret keys of \(\mathsf {POS}\) to the verification keys.

Theorem 6

\(\mathsf {TC}\) given above is a structure-preserving trapdoor commitment scheme if \({\mathsf {MTC}}\) is structure preserving with respect to verification, and \(\mathsf {POS}\) is structure preserving.

4 Fully Structure-Preserving Signatures

We argue that constructing an FSPS requires a different approach than those for all known constructions of SPS. The verification equations of existing structure-preserving constant-size signatures on message vectors \((G^{m_1},\ldots ,G^{m_L})\) involve pairings such as \(\prod e(G^{x_{i}},G^{m_{i}})\), where \(G^{x_{i}}\) is a public key element and \(G^{m_{i}}\) is a message element. The message is squashed into a signature element, say S, in such a form that \(S := A \cdot \prod _{i=1}^{L} G^{m_{i} x_{i}}\) where \(x_{i}\) is a signing key component and A is computed from inputs other than the message. Such a structure requires either \(m_{i}\) or \(x_{i}\) to be known to a signing algorithm that uses generic group operations. In FSPS, however, neither is given to the signing function.
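To see the obstacle concretely, the squashing term can be reproduced in a toy multiplicative group (all parameters below are illustrative stand-ins; modular exponentiation stands in for generic group operations): whoever computes \(S\) must hold either the \(x_i\) or the \(m_i\) as exponents.

```python
# Toy group Z_p^* illustrating S = A * prod_i G^{m_i * x_i}.
# All parameters are illustrative stand-ins.
p, G, A = 101, 2, 7
m = [3, 5]    # message exponents; an SPS signer sees only M_i = G^{m_i}
x = [11, 13]  # signing-key exponents

# Route 1: the signer knows the x_i and raises the message elements to them.
M = [pow(G, mi, p) for mi in m]
S1 = A
for Mi, xi in zip(M, x):
    S1 = S1 * pow(Mi, xi, p) % p

# Route 2: symmetrically, knowing the m_i one exponentiates the key elements.
X = [pow(G, xi, p) for xi in x]
S2 = A
for Xi, mi in zip(X, m):
    S2 = S2 * pow(Xi, mi, p) % p

# Both routes yield A * prod G^{m_i x_i}; an FSPS signer holds neither the
# m_i nor the x_i as exponents, so neither route is available to it.
assert S1 == S2
```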

Our starting point is the FSPS scheme in Sect. 2.6. The following sections present constructions that upgrade the security to \(\mathsf {UF}\)-\(\mathsf {CMA}\) by incorporating one-time signatures or trapdoor commitments.

4.1 Warm-Up

Our first approach is to take random \(x_{i}\) instead of the signing key. That is, \(x_i\) works as a random one-time key and \(G^{x_{i}}\) is regarded as a one-time public key, which is then authenticated by an FSPS with a long-term key that is secure against extended random message attacks. This results in a combination of a weaker signature scheme with an OTS, a well-known method for upgrading the security of the underlying signature scheme. It can in fact be seen as a special case of the construction of SPS by Abe et al. [2]. We nevertheless work out the scheme in detail to motivate our main scheme and to establish a basis for comparison. Let \(\mathsf {OTS}\) and \(\mathsf {xSIG}\) be a one-time and an ordinary signature scheme, respectively, that share a common setup function \(\mathsf {Setup}\). We construct \(\mathsf {FSP{1}}\) as follows.

[Figure e: The construction of \(\mathsf {FSP{1}}\) from \(\mathsf {OTS}\) and \(\mathsf {xSIG}\).]
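The shape of the composition, certify a fresh one-time key with the long-term key, then sign the message under the one-time key, can be sketched with Schnorr-style toy signatures over a small subgroup of \(\mathbb {Z}_p^*\) standing in for both \(\mathsf {xSIG}\) and \(\mathsf {OTS}\). These stand-ins are assumptions for illustration only; they are not structure preserving and are not the schemes of this paper.

```python
import hashlib
import secrets

# Toy Schnorr-style signatures in the order-q subgroup of Z_p^*, used as
# stand-ins for both xSIG (long-term) and OTS (one-time). Illustrative only.
p, q, g = 2039, 1019, 4  # safe-prime toy group; g generates the order-q subgroup

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk          # (vk, sk)

def sign(sk, m):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    return R, (k + H(R, m) * sk) % q  # (R, s)

def verify(vk, m, sig):
    R, s = sig
    return pow(g, s, p) == R * pow(vk, H(R, m), p) % p

def fsp1_sign(sk_xsig, M):
    ovk, osk = keygen()               # fresh one-time key pair per signature
    sigma_xsig = sign(sk_xsig, ovk)   # long-term key certifies ovk
    sigma_ots = sign(osk, M)          # one-time key signs the message
    return sigma_xsig, sigma_ots, ovk

def fsp1_verify(vk_xsig, M, sig):
    sigma_xsig, sigma_ots, ovk = sig
    return verify(vk_xsig, ovk, sigma_xsig) and verify(ovk, M, sigma_ots)
```

Usage: `vk, sk = keygen()` followed by `fsp1_verify(vk, M, fsp1_sign(sk, M))` returns `True`; note that the long-term key only ever signs one-time verification keys, which is why \(\mathsf {UF}\)-\(\mathsf {XRMA}\) security of \(\mathsf {xSIG}\) suffices in Theorem 7.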

Theorem 7

If \(\mathsf {OTS}\) is a \(\mathsf {UF}\)-\(\mathsf {NACMA}\) secure SPS and \(\mathsf {xSIG}\) is a \(\mathsf {UF}\)-\(\mathsf {XRMA}\) secure FSPS, then \(\mathsf {FSP{1}}\) is a \(\mathsf {UF}\)-\(\mathsf {CMA}\) secure FSPS scheme.

Proof

Since the syntactical consistency and correctness are trivial from the construction, we first show that the scheme is fully structure preserving. The public components of \(\mathsf {FSP{1}}\) are \((vk_{},\sigma _{},M) = ( vk _{\mathsf {xsig} }, (\sigma _{\mathsf {xsig} }, \sigma _{\mathsf {ots} }, ovk_{\mathsf {ots} }), M)\), which consist of public components of \(\mathsf {xSIG}\) and \(\mathsf {OTS}\). Also, the signing key of \(\mathsf {FSP{1}}\) consists of \( sk _{\mathsf {xsig} }\). Thus, both public and private components of \(\mathsf {FSP{1}}\) consist of group elements since \(\mathsf {xSIG}\) is an FSPS and \(\mathsf {OTS}\) is an SPS. Furthermore, \(\mathsf {FSP{1}}.\mathsf {Vrf} \) evaluates \(\mathsf {OTS}.\mathsf {Vrf} \) and \(\mathsf {xSIG}.\mathsf {Vrf} \), which evaluate PPEs. Thus, \(\mathsf {FSP{1}}\) is an FSPS.

We next prove the \(\mathsf {UF}\text {-}\mathsf {CMA}\) security of \(\mathsf {FSP{1}}\) by following the standard game transition technique. Let \(\mathcal{A}\) be an adversary against \(\mathsf {FSP{1}}\). By \(\Pr [\text{ Game } \; i]\) we denote the probability that \(\mathcal{A}\) eventually outputs a valid forgery as defined in Definition . Let Game 0 be the \(\mathsf {UF}\text {-}\mathsf {CMA}\) game that \(\mathcal{A}\) is playing. By definition, \(\Pr [\text{ Game } \; 0] = \text {Adv} ^{\mathsf {uf\text {-}cma}}_{\mathsf {FSP{1}},\mathcal{A}}(\lambda )\). Let \((\sigma _{}^{\dagger }, M^{\dagger })\) be the forgery \(\mathcal{A}\) outputs, with \(\sigma _{}^{\dagger } :=(\sigma _{\mathsf {xsig} }^{\dagger }, \sigma _{\mathsf {ots} }^{\dagger }, vk_{\mathsf {ots} }^{\dagger })\).

In Game 1, abort the game if \((\sigma _{}^{\dagger }, M^{\dagger })\) is a valid forgery and \(vk_{\mathsf {ots} }^{\dagger }\) was never used by the signing oracle. We show that this event occurs only if the \(\mathsf {UF}\text {-}\mathsf {XRMA}\) security of \(\mathsf {xSIG}\) is broken. Let \(\mathcal{{B}}\) be an adversary against \(\mathsf {xSIG}\) launching an \(\mathsf {XRMA}\) attack. \(\mathcal{{B}}\) is given a public key \( vk _{\mathsf {xsig} }\) and pairs of a message \(m^{(j)} :=vk_{\mathsf {ots} }^{(j)}\) and a signature \(\sigma _{\mathsf {xsig} }^{(j)}\) for \(j=1,\ldots ,q_s\). It is also given the random coin \(\omega ^{(j)}\) used to generate each \(vk_{\mathsf {ots} }^{(j)}\) using \(\mathsf {OTS}.\mathsf {Key} \) as the message sampler. \(\mathcal{{B}}\) first computes \(sk_{\mathsf {ots} }^{(j)}\) from \(\omega ^{(j)}\) by executing \(\mathsf {OTS}.\mathsf {Key} \) by itself. Then it invokes \(\mathcal{A}\) with input \(vk_{} := vk _{\mathsf {xsig} }\). On receiving \(M^{(j)}\) from \(\mathcal{A}\) for signing, \(\mathcal{{B}}\) computes \(\sigma _{\mathsf {ots} }^{(j)} \leftarrow \mathsf {OTS}.\mathsf {Sign} (sk_{\mathsf {ots} }^{(j)},M^{(j)})\) and returns \(\sigma _{}^{(j)} :=(\sigma _{\mathsf {xsig} }^{(j)},\sigma _{\mathsf {ots} }^{(j)},vk_{\mathsf {ots} }^{(j)})\). When \(\mathcal{A}\) outputs forgery \(\sigma _{}^{\dagger } :=(\sigma _{\mathsf {xsig} }^{\dagger }, \sigma _{\mathsf {ots} }^{\dagger }, vk_{\mathsf {ots} }^{\dagger })\) for some message \(M^{{\dagger }}\), \(\mathcal{{B}}\) outputs \(\sigma _{\mathsf {xsig} }^{{\dagger }} :=\sigma _{\mathsf {xsig} }^{\dagger }\) and \(m^{{\dagger }} :=vk_{\mathsf {ots} }^{\dagger }\). This is a valid forgery since \(\mathcal{A}\)’s forgery satisfies \(vk_{\mathsf {ots} }^{\dagger } \ne vk_{\mathsf {ots} }^{(j)}\) for all j. Thus, we have \(|\Pr [\text{ Game } \;0]-\Pr [\text{ Game } \;1]| \le \text {Adv} ^{\mathsf {uf\text {-}xrma}}_{\mathsf {xSIG},\mathcal{B}}(\lambda )\).

Next we show that \(\mathcal{A}\) wins \(\text{ Game } \;1\) only if \(\mathsf {OTS}\) is broken. Let \(\mathcal{C}\) be an adversary attacking \(\mathsf {OTS}\) with \(\mathsf {NACMA}\). Given \(gk\) from outside, \(\mathcal{C}\) first chooses a random index \(i \leftarrow \{1,\ldots ,q_s\}\). It then executes \((vk_{},sk_{}) \leftarrow \mathsf {FSP{1}}.\mathsf {Key} (gk)\). Given the j-th query \(M^{(j)}\) from \(\mathcal{A}\) with \(j \ne i\), \(\mathcal{C}\) runs \(\sigma _{}^{(j)} \leftarrow \mathsf {FSP{1}}.\mathsf {Sign} (sk_{},M^{(j)})\) and returns \(\sigma _{}^{(j)}\). Given the i-th query, \(\mathcal{C}\) forwards \({M^{(i)}}\) to the signing oracle of \(\mathsf {OTS}\) and receives \(\sigma _{\mathsf {ots} }^{(i)}\) and \(vk_{\mathsf {ots} }^{(i)}\). Then \(\mathcal{C}\) executes \(\sigma _{\mathsf {xsig} }^{(i)}\leftarrow \mathsf {xSIG}.\mathsf {Sign} ( sk _{\mathsf {xsig} },vk_{\mathsf {ots} }^{(i)})\) and returns \(\sigma _{}:=(\sigma _{\mathsf {xsig} }^{(i)}, \sigma _{\mathsf {ots} }^{(i)},vk_{\mathsf {ots} }^{(i)})\) to \(\mathcal{A}\). When \(\mathcal{A}\) outputs forgery \(\sigma _{}^{\dagger } :=(\sigma _{\mathsf {xsig} }^{\dagger }, \sigma _{\mathsf {ots} }^{\dagger }, vk_{\mathsf {ots} }^{\dagger })\) and \({M^{\dagger }}\), \(\mathcal{C}\) aborts if \(vk_{\mathsf {ots} }^{\dagger } \ne vk_{\mathsf {ots} }^{(i)}\). Otherwise, \(\mathcal{C}\) outputs \(\sigma _{\mathsf {ots} }^{{\dagger }} :=\sigma _{\mathsf {ots} }^{\dagger }\) and \(m^{{\dagger }} :={M^{\dagger }}\). This is a valid forgery since \(M^{\dagger } \ne M^{(j)}\) for all j, including the case \(j=i\). Thus, we have \(\Pr [\text{ Game } \;1] \le {{q_s} \cdot } \text {Adv} ^{\mathsf {uf\text {-}nacma}}_{\mathsf {OTS},\mathcal{C}}(\lambda )\).

In total, we have

$$\begin{aligned} \text {Adv} ^{\mathsf {uf\text {-}cma}}_{\mathsf {FSP{1}},\mathcal{A}}(\lambda ) \le \text {Adv} ^{\mathsf {uf\text {-}xrma}}_{\mathsf {xSIG},\mathcal{B}}(\lambda ) + {q_s \cdot } \text {Adv} ^{\mathsf {uf\text {-}nacma}}_{\mathsf {OTS},\mathcal{C}}(\lambda ), \end{aligned}$$

which proves the statement. \(\square \)

Though the above reduction involves a loss factor of \(q_s\), the loss vanishes if \(\mathsf {OTS}\) is based on a random self-reducible problem such as \(\mathsf {SDP} \).

The above construction requires \({\mathcal{K}}^{vk}_{\mathsf {ots} }\) to match \({\mathcal{M}}_{\mathsf {xsig} }\). When they are instantiated with the concrete schemes from previous sections (using the \(\mathsf {POS}\) in Sect. 2.5 as \(\mathsf {OTS}\) by swapping \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), and using \(\mathsf {xSIG}\) in Sect. 2.6), the space adjustment is done as follows.

[Figure f: Adjusting \({\mathcal{K}}^{vk}_{\mathsf {ots} }\) to the message space \({\mathcal{M}}_{\mathsf {xsig} }\).]

Then those extended \(vk_{\mathsf {pos} }\) and \(ovk_{\mathsf {pos} }\) constitute a message \(((G_z,G_{z2},G_{z3}),(G_1,G_{12},G_{13}),\ldots ,(G_{L},G_{L2},G_{L3}), (A,A_2,A_3))\) given to \(\mathsf {xSIG}\) to sign. We present a summary of the resulting instantiation of \(\mathsf {FSP{1}}\) below.

[Figure g: Summary of the instantiation of \(\mathsf {FSP{1}}\).]

Motivation for Improvement Since an SPS is an OTS, construction \(\mathsf {FSP{1}}\) can be seen as a generic conversion from any SPS to an FSPS. In exchange for the generality, the construction has several shortcomings when instantiated with current building blocks.

  • (O(|m|)-size signatures) The resulting signature \(\sigma _{}\) includes the one-time verification key \(ovk_{\mathsf {ots} }\), which is linear in the size of messages in all current instantiations of \(\mathsf {OTS}\).

  • (Factor 3 expansion in \(\mathsf {xSIG}\)) As shown above, the message space of \(\mathsf {xSIG}\) must cover \(ovk_{\mathsf {ots} }\), which is linear in the size of the message. Even worse, the currently known instantiation of \(\mathsf {xSIG}\) suffers from an expansion factor of \(\mu = 3\) for messages. That is, to sign a message consisting of a group element, say \(G^x\), the message must be accompanied by two extra elements \(F_2^x\) and \(U_i^x\) for given bases \(F_2\) and \(U_i\). Thus, the size of \(ovk_{\mathsf {ots} }\) will actually be \(\mu \) times larger than the one-time verification key that \(\mathsf {OTS}\) originally requires.

The above shortcomings amplify each other. Finding an instantiation of \(\mathsf {xSIG}\) with a smaller expansion factor is one direction of improvement. We leave it as an interesting open problem and focus on a generic approach in the next section.

4.2 Main Construction

Our idea is to avoid directly signing with \(\mathsf {xSIG}\) any component whose size grows with the size of the message. We achieve this by committing to the message using a shrinking commitment scheme and signing the commitment with \(\mathsf {xSIG}\). Combining a trapdoor commitment scheme (or a chameleon hash) with a signature scheme to achieve such an improvement is a known approach. What is important here is to clarify the required security for each building block. We show that chosen-message target collision resistance of \(\mathsf {TC}\) is sufficient to reach \(\mathsf {UF}\text {-}\mathsf {CMA}\) in combination with an \(\mathsf {XRMA}\)-secure signature scheme.

Let \(\mathsf {xSIG}\) be a \(\mathsf {UF}\text {-}\mathsf {XRMA}\) secure FSPS scheme and \(\mathsf {TC}\) be a \(\mathsf {CMTCR} \) secure trapdoor commitment scheme with common setup function \(\mathsf {Setup}\). We construct our FSPS scheme \(\mathsf {FSP{2}}\) from \(\mathsf {xSIG}\) and \(\mathsf {TC}\) as follows.

[Figure h: The construction of \(\mathsf {FSP{2}}\) from \(\mathsf {TC}\) and \(\mathsf {xSIG}\).]

Note that the trapdoor \(tk_{\mathsf {tc}}\) is not included in \(sk_{} \) but is used only in the security proof. This is the point that makes the scheme fully structure preserving.
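A toy picture of the shrinking step is a Pedersen-style multi-base commitment over a small subgroup of \(\mathbb {Z}_p^*\): many messages are squashed into a single group element, and only that element is signed by \(\mathsf {xSIG}\). This is an assumed stand-in for illustration only; the paper's \(\mathsf {TC}\) is structure preserving and also embeds \(\mathsf {POS}\) keys, which this sketch omits.

```python
import secrets

# Toy Pedersen-style shrinking commitment in the order-q subgroup of Z_p^*.
# Illustrative stand-in for the shrinking property of TC, not the scheme.
p, q, g = 2039, 1019, 4  # safe-prime toy group; g generates the order-q subgroup

def keygen(L):
    # one base for the randomizer plus one base per message slot
    exps = [secrets.randbelow(q - 1) + 1 for _ in range(L + 1)]
    return [pow(g, a, p) for a in exps]  # ck = (h, g_1, ..., g_L)

def commit(ck, msgs):
    h, bases = ck[0], ck[1:]
    r = secrets.randbelow(q)
    c = pow(h, r, p)
    for gi, mi in zip(bases, msgs):
        c = c * pow(gi, mi, p) % p
    return c, r  # c is one element regardless of L; xSIG signs only c

def open_check(ck, c, msgs, r):
    h, bases = ck[0], ck[1:]
    v = pow(h, r, p)
    for gi, mi in zip(bases, msgs):
        v = v * pow(gi, mi, p) % p
    return v == c

ck = keygen(4)
c, r = commit(ck, [5, 6, 7, 8])
assert open_check(ck, c, [5, 6, 7, 8], r)
```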

Theorem 8

If \(\mathsf {TC}\) is a \(\mathsf {CMTCR} \) secure SPTC, and \(\mathsf {xSIG}\) is a \(\mathsf {UF}\text {-}\mathsf {XRMA}\) secure FSPS relative to \(\mathsf {TC}.\mathsf {SimCom} \) as a message sampler, then \(\mathsf {FSP{2}}\) is a \(\mathsf {UF}\text {-}\mathsf {CMA}\) FSPS.

Proof

Correctness holds trivially from those of the underlying \(\mathsf {TC}\) and \(\mathsf {xSIG}\). Regarding the full structure-preserving property, observe that \(sk_{} \) consists of \( sk _{\mathsf {xsig} }\), which consists only of source group elements since \(\mathsf {xSIG}\) is fully structure preserving. The same is true for the public components, i.e., public keys, messages, and signatures consist only of source group elements because both \(\mathsf {TC}\) and \(\mathsf {xSIG}\) are structure preserving. The verification only evaluates the verification functions of the underlying building blocks, which evaluate PPEs. Thus, \(\mathsf {FSP{2}}\) is FSPS.

We next prove the security property. Let \(\mathcal{A}\) be an adversary against \(\mathsf {FSP{2}}\). Let Game 0 be the \(\mathsf {UF}\text {-}\mathsf {CMA}\) game that \(\mathcal{A}\) is playing. By definition, \(\Pr [\text{ Game } \; 0] = \text {Adv} ^{\mathsf {uf\text {-}cma}}_{\mathsf {FSP{2}},\mathcal{A}}(\lambda )\). Let \((\sigma _{}^{\dagger }, m^{\dagger })\) be a forgery \(\mathcal{A}\) outputs. Let \(\sigma _{}^{\dagger } :=(\sigma _{\mathsf {xsig} }^{\dagger }, open_{\mathsf {tc}}^{\dagger }, com_{\mathsf {tc}}^{\dagger })\).

In Game 1, abort the game if \((\sigma _{}^{\dagger }, m^{\dagger })\) is a valid forgery and \(com_{\mathsf {tc}}^{\dagger }\) was never used by the signing oracle. We show that this event occurs only if the \(\mathsf {UF}\text {-}\mathsf {XRMA}\) security of \(\mathsf {xSIG}\) is broken. Let \(\mathcal{{B}}\) be an adversary against \(\mathsf {xSIG}\) launching an \(\mathsf {XRMA}\) attack. The message sampler for \(\mathsf {XRMA}\) is \(\mathsf {TC}.\mathsf {SimCom} \). That is, the challenger samples random messages by \((com_{\mathsf {tc}},ek_{\mathsf {tc}}) \leftarrow \mathsf {TC}.\mathsf {SimCom} (gk; \omega )\) with random coin \(\omega \) and gives \(com_{\mathsf {tc}}\) as a message together with \(\omega \) and a signature \(\sigma _{\mathsf {xsig} }\) on \(com_{\mathsf {tc}}\). Let \( sample ^{[i]}\) be the i-th sample, i.e., \( sample ^{[i]} :=(com_{\mathsf {tc}}^{[i]}, \omega ^{[i]}, \sigma _{\mathsf {xsig} }^{[i]})\). Given \(( vk _{\mathsf {xsig} }, sample ^{[1]},\ldots , sample ^{[q_s]})\) as input, \(\mathcal{{B}}\) runs as follows. It first takes \(gk\) from \( vk _{\mathsf {xsig} }\) and recovers every \(ek_{\mathsf {tc}}^{[i]}\) from \(\omega ^{[i]}\) by \((com_{\mathsf {tc}}^{[i]},ek_{\mathsf {tc}}^{[i]}) \leftarrow \mathsf {TC}.\mathsf {SimCom} (gk; \omega ^{[i]})\). It then runs \((ck_{\mathsf {tc}},tk_{\mathsf {tc}}) \leftarrow \mathsf {TC}.\mathsf {Key} (gk)\) and invokes \(\mathcal{A}\) with input \(vk_{} :=( vk _{\mathsf {xsig} },ck_{\mathsf {tc}})\). Given the i-th signing query \(m^{[i]}\) from \(\mathcal{A}\), it executes \(open_{\mathsf {tc}}^{[i]} \leftarrow \mathsf {TC}.\mathsf {Equiv} (m^{[i]},tk_{\mathsf {tc}},ek_{\mathsf {tc}}^{[i]})\) and returns \(\sigma _{}:=(\sigma _{\mathsf {xsig} }^{[i]}, open_{\mathsf {tc}}^{[i]}, com_{\mathsf {tc}}^{[i]})\) to \(\mathcal{A}\). 
If \(\mathcal{A}\) eventually outputs a forgery, \(\sigma _{}^{\dagger }=(\sigma _{\mathsf {xsig} }^{\dagger }, open_{\mathsf {tc}}^{\dagger }, com_{\mathsf {tc}}^{\dagger })\) and \(m^{\dagger }\), it outputs \(\sigma _{\mathsf {xsig} }^{{\dagger }} :=\sigma _{\mathsf {xsig} }^{\dagger }\) and \(m^{{\dagger }} :=com_{\mathsf {tc}}^{\dagger }\) as a forgery with respect to \(\mathsf {xSIG}\).

Correctness of the above reduction holds because the simulated \(com_{\mathsf {tc}}^{[i]}\) and \(open_{\mathsf {tc}}^{[i]}\) are distributed statistically close to the real ones. The output \((\sigma _{\mathsf {xsig} }^{{\dagger }},m^{{\dagger }})\) is also a valid forgery since \(com_{\mathsf {tc}}^{\dagger }\) differs from every \(com_{\mathsf {tc}}^{[i]}\). Letting \(\epsilon _{\mathsf {sim}}\) denote the statistical distance, we have \(|\Pr [\text{ Game } \; 0] - \Pr [\text{ Game } \;1]| \le \text {Adv} ^{\mathsf {uf\text {-}xrma}}_{\mathsf {xSIG},\mathcal{B}}(\lambda ) + \epsilon _{\mathsf {sim}}\).

Now we claim that \(\mathcal{A}\) winning Game 1 occurs only if the \(\mathsf {CMTCR}\) security of \(\mathsf {TC}\) is broken. The reduction from a successful \(\mathcal{A}\) in Game 1 to an adversary \(\mathcal{C}\) that breaks \(\mathsf {TC}\) is straightforward. Given \(ck_{\mathsf {tc}}\), \(\mathcal{C}\) runs \(( vk _{\mathsf {xsig} }, sk _{\mathsf {xsig} }) \leftarrow \mathsf {xSIG}.\mathsf {Key} (gk)\) and invokes \(\mathcal{A}\) with \(vk_{} :=( vk _{\mathsf {xsig} },ck_{\mathsf {tc}})\). Given the i-th signing query \(m^{[i]}\), \(\mathcal{C}\) forwards it to the commitment oracle of \(\mathsf {TC}\) and obtains \((com_{\mathsf {tc}}^{[i]},open_{\mathsf {tc}}^{[i]})\). It then signs \(com_{\mathsf {tc}}^{[i]}\) using \( sk _{\mathsf {xsig} }\) to obtain \(\sigma _{\mathsf {xsig} }^{[i]}\) and returns \((\sigma _{\mathsf {xsig} }^{[i]}, open_{\mathsf {tc}}^{[i]}, com_{\mathsf {tc}}^{[i]})\) to \(\mathcal{A}\). Given a forged signature \((\sigma _{\mathsf {xsig} }^{\dagger }, open_{\mathsf {tc}}^{\dagger }, com_{\mathsf {tc}}^{\dagger })\) and \(m^{\dagger }\), \(\mathcal{C}\) outputs \(open_{\mathsf {tc}}^{{\dagger }} :=open_{\mathsf {tc}}^{\dagger }\) and \(m^{{\dagger }} :=m^{\dagger }\). It is a valid collision since \(m^{\dagger } \ne m^{[i]}\) for all i. We thus have \(\Pr [\text{ Game } \; 1] = \text {Adv} ^{\mathsf {cmtcr}}_{\mathsf {TC},\mathcal{C}}(\lambda )\).

By summing up the differences, we have

$$\begin{aligned} \text {Adv} ^{\mathsf {uf\text {-}cma}}_{\mathsf {FSP{2}},\mathcal{A}}(\lambda ) \le \text {Adv} ^{\mathsf {uf\text {-}xrma}}_{\mathsf {xSIG},\mathcal{B}}(\lambda ) + \text {Adv} ^{\mathsf {cmtcr}}_{\mathsf {TC},\mathcal{C}}(\lambda ) + \epsilon _{\mathsf {sim}}, \end{aligned}$$
(16)

which proves the statement. \(\square \)

To instantiate this construction with the building blocks from the previous sections, we again need to duplicate \(com_{\mathsf {mtc} }= \tilde{G}_u = \tilde{G}^{\zeta } \prod _{i=1}^{\ell } \tilde{X}_i^{m_i}\) to a triple with respect to bases \(\tilde{G}=\tilde{F}_2\), \(\tilde{F}_3\), and \(\tilde{U}_1\). To be able to do so without holding the discrete logarithms of the \(\tilde{X}_i\)'s, we need to duplicate the \(\tilde{X}_i\)'s to the same set of bases as well. Details are shown in the following.

[Figure i: Duplicating the commitment and the commitment key with respect to the extra bases.]

The result is an extended commitment \(com_{\mathsf {mtc} }= (\tilde{G}_u, \tilde{G}_{u2}, \tilde{G}_{u3})\) that matches the message space of \(\mathsf {xSIG}\) with \(\ell =1\). Note that the duplicated keys have no effect on the security of either \(\mathsf {POS}\) or \({\mathsf {MTC}}\) since they can be easily simulated when the discrete logarithms of the extra bases with respect to the original base \(\tilde{G}\) are known.

We summarize the instantiation of \(\mathsf {FSP{2}}\) in the following. Let \(k=\lceil \frac{L}{\ell _{\mathsf {pos} }} \rceil \) and \(\ell _{\mathsf {mtc} } = 1+k+ \ell _{\mathsf {pos} }\).

$$\begin{aligned} \text {Common Parameter}&\quad (G,\tilde{G}, F_1, F_2, \tilde{F}_1, \tilde{F}_2, U_1, \tilde{U}_1)\\ \text {Public-key}&\quad (\tilde{V}_1,\tilde{V}_2,\tilde{V}_3,\tilde{V}_4,\tilde{V}_5,\tilde{V}_6,V_7, \tilde{V}_8,\{\tilde{X}_i,\tilde{X}_{i2},\tilde{X}_{i3}\}^{\ell _{\mathsf {mtc} }}_{i=1})\\ \text {Secret-key}&\quad (K_1, K_2, K_3, K_4)\\ \text {Message}&\quad (\tilde{M}_1,\ldots ,\tilde{M}_{L})\\ \text {Signature}&\quad (\tilde{S}_0,S_1, \ldots ,S_5, \tilde{G}_u,\tilde{G}_{u2},\tilde{G}_{u3}, R, G_z,G_1,\ldots ,G_{\ell _{\mathsf {pos} }}, \{A_i,\tilde{Z}_i,\tilde{R}_i\}^{k}_{i=1})\\ \text {Verification PPEs}&\quad \begin{array}[t]{l} \text {Let } (N_1,\ldots ,N_{\ell _{\mathsf {mtc} }}) :=(G_z,G_1,\ldots ,G_{\ell _{\mathsf {pos} }},A_1,\ldots ,A_k).\\ \text { For }j=1,\ldots ,k:\\ \quad \quad e(A_j,\tilde{G}) = e(G_z,\tilde{Z}_j)\,e(G,\tilde{R}_j)\,\prod _{i=1}^{\ell _{\mathsf {pos} }} e(G_{i}, \tilde{M}_{(j-1)\ell _{\mathsf {pos} } + i}),\\ e(G,\tilde{G}_u) = e(R,\tilde{G}) \prod _{i=1}^{\ell _{\mathsf {mtc} }} e(N_i,\tilde{X}_i)\\ e(S_5,\tilde{V}_6\; \tilde{G}_{u3}) = e(G,\tilde{S}_0),\\ e(S_1, \tilde{V}_1)\,e(S_2,\tilde{V}_3)\,e(S_3,\tilde{V}_2)=e(S_4,\tilde{V}_4)\,e(S_5,\tilde{V}_5)\,e(V_7,\tilde{V}_8),\\ e(F_1,\tilde{G}_{u3}) = e(U_{1},\tilde{G}_u),\quad e(F_2,\tilde{G}_{u3}) = e(U_{1},\tilde{G}_{u2}). \end{array} \end{aligned}$$

4.3 Efficiency

In this section, we assess the efficiency of \(\mathsf {FSP{1}}\) and \(\mathsf {FSP{2}}\) instantiated as described in Sects. 4.1 and 4.2. Note that \(\mathsf {FSP{1}}\) uses a one-time signature scheme, \(\mathsf {OTS}\), and we evaluate the efficiency when \(\mathsf {OTS}\) is instantiated with the \(\mathsf {POS}\) in Sect. 2.5, since it is the best-known structure-preserving \(\mathsf {OTS}\) under a standard assumption.

Signature Size and Number of PPEs In Table 1 we assess the sizes of a key and a signature for unilateral messages consisting of \(L\) group elements. By \(|vk|\), we denote the number of group elements in \(vk\) except for those in \(gk\). Similarly, by \(|sk|\), we denote the number of group elements in \(sk\) except for those in \(vk\). By the term \(\#\,\mathsf {PPE}_\mathsf {A}\) we denote the number of pairing product equations in the corresponding building block \(\mathsf {A}\). Table 2 summarizes the comparison with signature length for some concrete message lengths. In the following, we denote the size of an element by (a, b) when the element consists of a and b group elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively.

Table 1 Size of a secret key, a verification key, a signature, and the number of PPEs in verification for a unilateral message of size \(L=k \ell \). (a, b): a and b elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively
Table 2 Concrete signature size for small messages with optimal setting of \(k = \ell = \sqrt{L}\)
  • \(\mathsf {FSP{1}}\). According to the descriptions in Sects. 2.5 and  2.6, we have the following parameters for the building blocks.

    • \(\mathsf {OTS}\): \(|vk_{\mathsf {ots} }| = |vk_{\mathsf {pos} }|+|ovk_{\mathsf {pos} }| = (0,L+2)\), \(|\sigma _{\mathsf {ots} }| = (2,0)\), and \(\#\,\mathsf {PPE}_{\mathsf {ots} } = 1\).

    • \(\mathsf {xSIG}\): \(| sk _{\mathsf {xsig} }| = (4,0)\), \(| vk _{\mathsf {xsig} }| = (1,7)\), and \(\#\,\mathsf {PPE}_{\mathsf {xsig} } = 2 + 2\,|vk_{\mathsf {ots} }|\).

    The common setup function for these building blocks generates bases \((G,\tilde{G}, F_1, F_2, \tilde{F}_1, \tilde{F}_2, \{U_i, \tilde{U}_i\}_{i=1}^{\ell _{}})\) for \(\ell _{\mathsf {xsig} } = |vk_{\mathsf {ots} }|\) to allow \(\mathsf {xSIG}\) to sign \(vk_{\mathsf {ots} }\). (Note that \(vk_{\mathsf {ots} }\) consists only of group elements from \({{\mathbb G}}\), which \(\mathsf {xSIG}\) can sign.) Taking the message expansion factor \(\mu =3\) into account, we obtain the following for \(\mathsf {FSP{1}}\):

    $$\begin{aligned} |gk|&= (3+ |vk_{\mathsf {ots} }|, 3+ |vk_{\mathsf {ots} }|)= (5+L, 5+L)\\ |sk_{} |&= | sk _{\mathsf {xsig} }| = (4,0)\\ |vk_{} |&= | vk _{\mathsf {xsig} }| = (1, 7)\\ |\sigma _{}|&= |\sigma _{\mathsf {xsig} }| + |\sigma _{\mathsf {ots} }| + \mu \,|vk_{\mathsf {ots} }| = (5,1) + (2,0) + (0, 6+ 3\,L) \\&= (7, 7+ 3\,L)\\ \#\,\mathsf {PPE}_{}&= \#\,\mathsf {PPE}_{\mathsf {xsig} } + \#\,\mathsf {PPE}_{\mathsf {ots} } = 7 + 2\,L\end{aligned}$$
  • \(\mathsf {FSP{2}}\). The underlying components are \(\mathsf {xSIG}\), \({\mathsf {MTC}}\) and \(\mathsf {POS}\). Since \(\mathsf {POS}\) is repeatedly used in \(\mathsf {FSP{2}}\), its message size \(\ell _{\mathsf {pos} }\) can be set independently from the input message size \(\ell \). The parameters for these underlying components are:

    • \(\mathsf {POS}\): \(|vk_{\mathsf {pos} }| = (\ell _{\mathsf {pos} } + 1, 0)\), \(|ovk_{\mathsf {pos} }| = (1,0)\), \(|\sigma _{\mathsf {pos} }| = (0,2)\), and \(\#\,\mathsf {PPE}_{\mathsf {pos} } = 1\).

    • \({\mathsf {MTC}}\): \(|ck_{\mathsf {mtc} }| = |vk_{\mathsf {pos} }| + \lceil L/ \ell _{\mathsf {pos} } \rceil \cdot |ovk_{\mathsf {pos} }| = (0,1 + k + \ell _{\mathsf {pos} })\), \(|com_{\mathsf {mtc} }| = (0,1)\), and \(|open_{\mathsf {mtc} }| = (1,0)\).

    • \(\mathsf {xSIG}\): \(| sk _{\mathsf {xsig} }| = (4,0)\), \(| vk _{\mathsf {xsig} }| = (1,7)\), and \(\#\,\mathsf {PPE}_{\mathsf {xsig} } = 2 + 2\,|com_{\mathsf {mtc} }|\).

    As in the previous case, the common setup function outputs \(gk\) including bases \((G,\tilde{G}, F_1, F_2, \tilde{F}_1,\tilde{F}_2, \{U_i,\tilde{U}_i\}_{i=1}^{\ell _{\mathsf {xsig} }})\) for \(\ell _{\mathsf {xsig} } = |com_{\mathsf {mtc} }|\) to allow \(\mathsf {xSIG}\) to sign \(com_{\mathsf {mtc} }\). Based on these parameters, the following evaluation is obtained for \(\mathsf {FSP{2}}\):

    $$\begin{aligned} |sk_{} |&= | sk _{\mathsf {xsig} }| = (4,0)\\ |gk|&= (4,4)\\ |vk_{} |&= |gk| + | vk _{\mathsf {xsig} }| + |ck_{\mathsf {mtc} }| = (1,7) + (0, 3+ 3(k+ \ell ))\\&= (1, 10 + 3\,k + 3\,\ell )\\ |\sigma _{}|&= |\sigma _{\mathsf {xsig} }| + |open_{\mathsf {mtc} }| + |\sigma _{\mathsf {pos} }| + \mu |com_{\mathsf {mtc} }| + |vk_{\mathsf {pos} }| + \lceil L/ \ell _{\mathsf {pos} } \rceil \cdot |ovk_{\mathsf {pos} }|\\&= (5,1) + (k,0) + (0, 2\,k) + (0,3) + (\ell +1, 0)+(1,0)\\&= (7 + k + \ell , 4 + 2\,k) \\ \#\,\mathsf {PPE}_{}&= \#\,\mathsf {PPE}_{\mathsf {xsig} } + \#\,\mathsf {PPE}_{\mathsf {mtc} } + \lceil L/ \ell _{\mathsf {pos} } \rceil \cdot \#\,\mathsf {PPE}_{\mathsf {pos} } \\&= 5 + \lceil L/ \ell _{\mathsf {pos} } \rceil = 5 + k \end{aligned}$$

    The last equality in each evaluation is obtained at the optimal setting: \(\ell _{\mathsf {pos} } = | L/ \ell _{\mathsf {pos} } | = k\).
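As a sanity check, the closed-form counts above can be transcribed into a small calculator. The function name and interface below are ours, purely for illustration:

```python
def fsp2_sizes(k, ell):
    """Element counts for FSP2 per the evaluation above; pairs are
    (elements in G, elements in G~) for parameters k and ell."""
    sk = (4, 0)
    vk = (1, 10 + 3 * k + 3 * ell)
    sigma = (7 + k + ell, 4 + 2 * k)
    num_ppe = 5 + k
    return sk, vk, sigma, num_ppe

# e.g., the balanced setting k = ell = 4
print(fsp2_sizes(4, 4))  # ((4, 0), (1, 34), (15, 12), 9)
```

At the balanced setting \(k=\ell \), both components of the signature grow as \(O(\sqrt{L})\).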

Proof Size Next we assess the cost of proving knowledge of a secret key or a signature for \(\mathsf {FSP{1}}\) and \(\mathsf {FSP{2}}\) with the Groth–Sahai proof system used as a non-interactive zero-knowledge proof. Results are summarized in Table 3.

Table 3 Size of a Groth–Sahai zero-knowledge proof of knowledge for a secret key or a signature for unilateral messages of size \(L\) with the optimal parameter setting. \((x, y, z)\) denotes x and y elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively, and z elements in \(\mathbb {Z}_p\)

Proof of Knowing a Secret Key Recall that, in either scheme, a secret key \((K_1, K_2, K_3, K_4)\) is correct if it satisfies relations in (9). To allow zero-knowledge simulation, the relations are transformed into the following form:

$$\begin{aligned} \begin{array}{l} e(\underline{K_2}, \tilde{G}) = e(\underline{G}, \tilde{V}_1),\quad e(\underline{G}, \tilde{V}_3) = e(\underline{K_2},\tilde{V}_2),\\ e(\underline{K_1}, \tilde{V}_1) = e(\underline{W}, \tilde{V}_8), \quad \underline{W} = V_7, \\ e(\underline{K_2}, \tilde{V}_4) = e(\underline{G},\tilde{V}_5),\quad e(\underline{K_3}, \tilde{G}) e(\underline{K_4}, \tilde{V}_2) = e(\underline{G}, \tilde{V}_4). \end{array} \end{aligned}$$
(17)

Underlined variables are the witnesses the prover commits to. Observe that (17) consists of five linear PPEs and a linear multiscalar multiplication equation. According to [38], committing to a group element in \({{\mathbb G}}\) requires 2 elements in \({{\mathbb G}}\), and proving a linear PPE or a linear multiscalar multiplication equation yields a proof consisting of 2 group elements in \(\tilde{{{\mathbb G}}}\) or \(2 \times 1 = 2\) scalar values in \(\mathbb {Z}_p\), respectively. Committing to G comes for free by using a prescribed default commitment as suggested in [29]. Thus, with five witnesses, five linear PPEs, and one linear multiscalar multiplication equation, the resulting proof (i.e., commitments and proofs for all relations) consists of 10 elements in \({{\mathbb G}}\), 10 elements in \(\tilde{{{\mathbb G}}}\), and 2 elements in \(\mathbb {Z}_p\).
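The tally can be reproduced mechanically; this snippet (ours, for illustration) simply encodes the counting rules quoted above:

```python
# Counting rules from [38]/[29] as used above: committing to a witness in G
# costs 2 elements of G; a linear PPE proof costs 2 elements of G~; a linear
# multiscalar multiplication equation costs 2 elements of Z_p; committing to
# the default generator G is free.
witnesses_in_G = 5   # K1, K2, K3, K4 and W
linear_ppes = 5
msm_equations = 1
proof_size = (2 * witnesses_in_G, 2 * linear_ppes, 2 * msm_equations)
print(proof_size)  # (10, 10, 2): elements in G, G~ and Z_p
```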

Proof of Knowing a Valid Signature We first consider \(\mathsf {FSP{1}}\). According to the descriptions in Sect. 4.1, a valid signature satisfies the following relations.

$$\begin{aligned}&e(G,\underline{\tilde{A}}) = e(\underline{Z},\underline{\tilde{G}_{z}})\, e(\underline{R},\tilde{G})\,\prod _{i=1}^{L} e(M_i,\underline{\tilde{G}_{i}}),\quad e(\underline{S_5},\tilde{V}_6\; \underline{\tilde{A}_3}\; \underline{\tilde{G}_{z3}} \; \prod _{i=1}^{L} \underline{\tilde{G}_{i3}}) = e(G,\underline{\tilde{S}_0}),\\&e(\underline{S_1}, \tilde{V}_1)\, e(\underline{S_2},\tilde{V}_3)\, e(\underline{S_3},\tilde{V}_2) = e(\underline{S_4},\tilde{V}_4)\, e(\underline{S_5},\tilde{V}_5)\, e(\underline{W},\tilde{V}_8), \quad \underline{W} = V_7,\\&e(F_1,\underline{\tilde{A}_{3}}) = e(U_{\ell +2},\underline{\tilde{A}}),\quad e(F_2,\underline{\tilde{A}_{3}}) = e(U_{\ell +2},\underline{\tilde{A}_{2}}), \quad e(F_1,\underline{\tilde{G}_{z3}}) = e(U_{\ell +1},\underline{\tilde{G}_{z}}),\\&e(F_2,\underline{\tilde{G}_{z3}}) = e(U_{\ell +1},\underline{\tilde{G}_{z2}}),\quad e(F_1,\underline{\tilde{G}_{i3}}) = e(U_i,\underline{\tilde{G}_{i}}),\quad e(F_2,\underline{\tilde{G}_{i3}}) = e(U_i,\underline{\tilde{G}_{i2}}) \end{aligned}$$

for \(i=1,\ldots ,L\) in the last two relations. There are 8 underlined witnesses in \({{\mathbb G}}\) and \(7+ 3\,L\) in \(\tilde{{{\mathbb G}}}\). Committing to these witnesses requires 16 elements in \({{\mathbb G}}\) and \(14 + 6\,L\) elements in \(\tilde{{{\mathbb G}}}\). The first two relations involve witnesses in both groups, and their proofs require 4 elements in \({{\mathbb G}}\) and 4 elements in \(\tilde{{{\mathbb G}}}\) each. The third relation has witnesses only in \({{\mathbb G}}\); its proof consists of 2 elements in \(\tilde{{{\mathbb G}}}\). The fourth relation is a linear multiscalar multiplication equation whose proof consists of 2 elements in \(\mathbb {Z}_p\). The remaining \(4 + 2 L\) relations have witnesses only in \(\tilde{{{\mathbb G}}}\), and each of their proofs costs 2 elements in \({{\mathbb G}}\). In total, the proofs and commitments consist of \(16 + 4 \times 2 + 2 \times (4+2\,L) = 32 + 4\,L\) elements in \({{\mathbb G}}\), \(14 + 6\,L+ 4\times 2 + 2 = 24 + 6 \,L\) elements in \(\tilde{{{\mathbb G}}}\), and 2 elements in \(\mathbb {Z}_p\).
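The same per-relation accounting can be written out as a short function (ours, illustrative); it reproduces the closed form \((32+4L,\,24+6L,\,2)\):

```python
def fsp1_sig_proof_size(L):
    """Groth-Sahai proof size for a valid FSP1 signature on L elements."""
    G = 2 * 8                      # commitments to the 8 witnesses in G
    Gt = 2 * (7 + 3 * L)           # commitments to the 7+3L witnesses in G~
    G += 2 * 4                     # 2 relations with witnesses in both groups...
    Gt += 2 * 4                    # ...cost 4 elements in each group apiece
    Gt += 2                        # 1 relation with witnesses only in G
    Zp = 2                         # 1 linear multiscalar multiplication equation
    G += 2 * (4 + 2 * L)           # 4+2L relations with witnesses only in G~
    return G, Gt, Zp

print(fsp1_sig_proof_size(10))  # (72, 84, 2)
```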

Next consider \(\mathsf {FSP{2}}\). As described in Sect. 4.2, a valid signature satisfies the following relations:

$$\begin{aligned}&e(\underline{A_j},\tilde{G}) = e(\underline{G_z},\underline{\tilde{Z}_j})\, e(G,\underline{\tilde{R}_j})\,\prod _{i=1}^{\ell _{\mathsf {pos} }} e(\underline{G_{i}}, \tilde{M}_{(j-1)\ell _{\mathsf {pos} } + i}) \text { (for }j=1,\ldots ,k),\\&e(G,\underline{\tilde{G}_u}) = e(\underline{R},\tilde{G}) \prod _{i=1}^{\ell _{\mathsf {mtc} }} e(\underline{N_i},\tilde{X}_i),\quad e(\underline{S_5},\tilde{V}_6\; \underline{\tilde{G}_{u3}}) = e(G,\underline{\tilde{S}_0}),\\&e(\underline{S_1}, \tilde{V}_1)\, e(\underline{S_2},\tilde{V}_3)\, e(\underline{S_3},\tilde{V}_2)= e(\underline{S_4},\tilde{V}_4)\, e(\underline{S_5},\tilde{V}_5)\, e(\underline{W},\tilde{V}_8), \quad \underline{W} = V_7,\\&e(F_1,\underline{\tilde{G}_{u3}}) = e(U_{1},\underline{\tilde{G}_u}),\quad e(F_2,\underline{\tilde{G}_{u3}}) = e(U_{1},\underline{\tilde{G}_{u2}}) \end{aligned}$$

where \((N_1,\ldots ,N_{\ell _{\mathsf {mtc} }})\) is in fact \((G_z,G_1,\ldots ,G_{\ell _{\mathsf {pos} }},A_1,\ldots ,A_k)\), whose entries are already witnesses, so we need not count the cost of committing to the \(N_i\). We consider \(\ell _{\mathsf {mtc} } = k = \ell \). A signature consists of \(7 + k + \ell \) elements in \({{\mathbb G}}\) and \(4 + 2 k\) elements in \(\tilde{{{\mathbb G}}}\). Committing to the signature thus costs \(2 (7 + k + \ell )\) and \(2 (4 + 2 k)\) elements in \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\), respectively. To achieve zero-knowledge, the prover also commits to \(V_7\) as W, which costs 2 elements in \({{\mathbb G}}\). The first three displayed relations (in fact \(k+2\) relations) that come from \(\mathsf {POS}\) and \({\mathsf {MTC}}\) involve witnesses in both groups; hence their proofs cost \((k+2) \cdot 4\) elements in each of \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\). The multiscalar multiplication equation for \(V_7\) costs 2 elements in \(\mathbb {Z}_p\). The remaining three relations, which come from \(\mathsf {xSIG}\), involve witnesses in only one of \({{\mathbb G}}\) and \(\tilde{{{\mathbb G}}}\); their proofs cost 2 group elements in \(\tilde{{{\mathbb G}}}\) and \(2 \times 2\) group elements in \({{\mathbb G}}\). In total, the proofs and commitments consist of \(2 (7 + k + \ell ) + 2 + 4 (k+2) + 4 = 28 + 6\,k + 2\,\ell \) elements in \({{\mathbb G}}\) and \(2 (4 + 2 \,k) + 4 (k+2) + 2 = 18 + 8 \,k\) elements in \(\tilde{{{\mathbb G}}}\), plus 2 elements in \(\mathbb {Z}_p\). Accordingly, for any setting of k and \(\ell \) satisfying \(L= k \ell \), \(\mathsf {FSP{2}}\) remains more efficient than \(\mathsf {FSP{1}}\).
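To see the gap concretely, one can compare the two totals as functions of \(L\) (a hedged sketch; the balanced choice \(k=\ell =\sqrt{L}\) is ours):

```python
import math

def fsp1_proof(L):
    return (32 + 4 * L, 24 + 6 * L, 2)            # (G, G~, Z_p)

def fsp2_proof(k, ell):
    return (28 + 6 * k + 2 * ell, 18 + 8 * k, 2)  # (G, G~, Z_p)

for L in (4, 16, 64):
    k = ell = math.isqrt(L)       # balanced setting with L = k * ell
    print(L, fsp1_proof(L), fsp2_proof(k, ell))
```

Already at \(L=16\) the \(\mathsf {FSP{2}}\) proof takes \((60, 50, 2)\) elements versus \((96, 120, 2)\) for \(\mathsf {FSP{1}}\).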

5 Efficient Fully Structure-Preserving Combined Signatures

We will now construct a fully structure-preserving combined signature scheme that can be used to sign messages consisting of \(L=\ell k\) group elements in \(\tilde{{{\mathbb G}}}\). We strive for high efficiency, and to optimize performance we settle for a proof of security in the generic asymmetric bilinear group model. We proceed in two steps: first we construct a (not fully) structure-preserving signature scheme, and then we modify it into a fully structure-preserving one.

5.1 Starting Point: A Structure-Preserving Combined Signature Scheme

In this section, we construct a structure-preserving combined signature scheme \(\mathsf {SP{1}}\) that can be used to sign messages consisting of \(L=\ell k\) group elements in \(\tilde{{{\mathbb G}}}\). The signing and verification algorithms for randomizable and strongly unforgeable signatures are quite similar, so we describe them at the same time, indicating the choice by \(b=0\) for randomizable signatures and \(b=1\) for strongly unforgeable ones.

In order to explain some of the design principles underlying the construction, let us first consider the special case where the message space is \(\tilde{{{\mathbb G}}}\), i.e., we are signing a single group element and \(L=\ell =k=1\). The setup includes a random group element \(\tilde{Y}=\tilde{G}^y \in \tilde{{{\mathbb G}}}\), the verification key consists of a single group element \(V=G^v \in {{\mathbb G}}\), and both randomizable and strongly unforgeable signatures are of the form \(\sigma =(R, \tilde{S}, \tilde{T}) \in {{\mathbb G}}\times \tilde{{{\mathbb G}}}^2\).

For a randomizable signature, there will be two verification equations:

$$\begin{aligned} e{(R, \tilde{S})} = e{(G, \tilde{Y})} e{(V, \tilde{G})} \qquad \qquad e{(R, \tilde{T})} = e{(G, \tilde{M})} e{(V, \tilde{Y})}. \end{aligned}$$

It is easy to see that we can randomize the factors in \(e{(R, \tilde{S})}\) and \(e{(R, \tilde{T})}\) into \(e{(R^{\frac{1}{\beta }}, \tilde{S}^{\beta })}\) and \(e{(R^{\frac{1}{\beta }}, \tilde{T}^{\beta })}\) without changing the products themselves, which gives us randomizability of the signatures.
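This invariance is easy to check in a toy exponent model (ours, not part of the scheme), where a group element is represented by its discrete logarithm and the pairing multiplies exponents:

```python
import random

random.seed(1)
p = 2**61 - 1                 # stand-in prime group order for the sketch

def pair(a, b):
    """Exponent of e(G^a, G~^b) in the target group."""
    return a * b % p

r, s, t = (random.randrange(1, p) for _ in range(3))
beta = random.randrange(2, p)
r2 = r * pow(beta, -1, p) % p                 # R^{1/beta}
assert pair(r2, s * beta % p) == pair(r, s)   # e(R^{1/b}, S^b) = e(R, S)
assert pair(r2, t * beta % p) == pair(r, t)   # e(R^{1/b}, T^b) = e(R, T)
print("pairing products unchanged under randomization")
```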

The first verification equation is designed to prevent the adversary from creating a forged signature from scratch after seeing the verification key only. An adversary using only generic group operations can do no better than computing \(R = G^{\rho }\,V^{\rho _v}\) and \(\tilde{S}= \tilde{G}^{\sigma }\,\tilde{Y}^{\sigma _y}\) using known scalars \(\rho ,\rho _v,\sigma ,\sigma _y\in \mathbb {Z}_p\). Looking at the underlying discrete logarithms, the first verification equation then corresponds to the polynomial equation

$$\begin{aligned} (\rho +\rho _vv)(\sigma +\sigma _yy)=y+v \end{aligned}$$

in the unknown discrete logarithms v and y. Let us first argue that this equation is not solvable when viewing it as a formal polynomial equation in v, y. Looking at the coefficients of the term v, we get \(\rho _v\sigma =1\), which means \(\sigma \ne 0\). Looking at the coefficients of the term y, we get \(\rho \sigma _y =1\), so \(\rho \ne 0\). But this leaves us with a constant term \(\rho \sigma \ne 0\), and therefore we cannot solve the equation formally. On the other hand, in the generic group model the random encoding of group elements means that the adversary has no further information about the actual values of v, y that are chosen at random, so the Schwartz–Zippel lemma implies it has only a negligible probability \(\frac{2}{p}\) of guessing \(\rho ,\rho _v,\sigma ,\sigma _y\) such that the equation holds for the concrete discrete logarithms v, y.
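The coefficient-matching argument can also be confirmed by exhaustive search over a small field; the check below (ours, a toy confirmation, not part of the proof) imposes all four coefficient constraints of the identity \((\rho +\rho _v v)(\sigma +\sigma _y y)=y+v\):

```python
q = 7  # any small prime field suffices for the formal-identity check
solutions = [
    (r, rv, s, sy)
    for r in range(q) for rv in range(q)
    for s in range(q) for sy in range(q)
    if (rv * s) % q == 1       # coefficient of v must be 1
    and (r * sy) % q == 1      # coefficient of y must be 1
    and (r * s) % q == 0       # constant term must vanish
    and (rv * sy) % q == 0     # coefficient of v*y must vanish
]
assert solutions == []          # no formal solution, as argued above
print("no assignment of rho, rho_v, sigma, sigma_y works over F_7")
```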

What if the adversary, instead of creating a signature from scratch, tries to modify an existing signature or combine many existing signatures? Due to the randomness in the choice of \(z\leftarrow \mathbb {Z}_p^*\) in the signing protocol, each signature query will return a signature with a fresh random \(R_i\). As it turns out, the randomization used in each signature makes it hard for the adversary to combine multiple signatures, or even modify one signature, in a meaningful way with generic group operations. Intuitively, this is because generic group operations only allow the adversary to compute linear combinations of elements it has seen, whereas the verification equations are quadratic.

Let us now turn to the other option: making strongly existentially unforgeable signatures. To prevent randomization when strong unforgeability is desired, the combined signature scheme modifies the latter verification equation by also including the factor \(e{(V, \tilde{S})}\). This gives us the following verification equations for strongly unforgeable signatures:

$$\begin{aligned} e{(R, \tilde{S})} = e{(G, \tilde{Y})} e{(V, \tilde{G})} \qquad \qquad e{(R, \tilde{T})} = e{(G, \tilde{M})} e{(V, \tilde{Y})} e{(V, \tilde{S})} \end{aligned}$$

Now the randomization technique fails because a randomization of \(\tilde{S}\) means we must change \(\tilde{T}\) in a way that counteracts this change in the second verification equation. However, \(\tilde{T}\) is paired with R that also changes when \(\tilde{S}\) changes. The adversary is therefore faced with a nonlinear modification of the signatures and gets stuck because generic group operations only enable it to do linear modifications of signature elements.
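In the same toy exponent model as before (ours; the pairing is the product of exponents mod p), one can watch the randomization attempt fail against the strong verification equation:

```python
import random

random.seed(7)
p = 2**61 - 1

def pair(a, b):
    return a * b % p            # exponent of e(G^a, G~^b)

v, y, m, z = (random.randrange(1, p) for _ in range(4))
r = pow(z, -1, p)               # R = G^{1/z}
s = z * (y + v) % p             # S~ = (Y~ G~^v)^z
t = z * (m + v * y + v * s) % p # T~ chosen so the b = 1 equation holds
assert pair(r, s) == (y + v) % p
assert pair(r, t) == (m + v * y + v * s) % p
beta = random.randrange(2, p)   # attempt the linear randomization
r2 = r * pow(beta, -1, p) % p
s2, t2 = s * beta % p, t * beta % p
assert pair(r2, s2) == (y + v) % p                # first equation still holds
assert pair(r2, t2) != (m + v * y + v * s2) % p   # second equation breaks
print("strong verification rejects the randomized signature")
```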

We can extend the one-element signature scheme to sign a vector \(\varvec{\tilde{M}}_{[1]} = (\tilde{M}_{(1,1)}, \ldots , \tilde{M}_{(\ell ,1)})\) of \(\ell \) group elements in \(\tilde{{{\mathbb G}}}\) by extending the verification key with \(\ell -1\) random group elements \(\varvec{U} = (U_1, \ldots , U_{\ell -1})\). The verification equations then become

$$\begin{aligned} e{(R, \tilde{S})} = e{(G, \tilde{Y})}\, e{(V, \tilde{G})} \qquad e{(R, \tilde{T})} = \prod ^{\ell -1}_{i=1} e{(U_i, \tilde{M}_{(i,1)})}\cdot e{(G, \tilde{M}_{(\ell ,1)} )} \cdot e{(V, \tilde{Y})} \cdot e{(V, \tilde{S})^b} \end{aligned}$$

where \(b=0\) for a randomizable signature and \(b=1\) for a strong signature. The idea is that the discrete logarithms of the elements in \(\varvec{U}\) are unknown to the adversary making it hard to change any group elements in a previously signed message to get a new message that will verify under the same signature.
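A toy instantiation of these two equations in the exponent model (ours, illustrative only) exercises both settings of b:

```python
import random

random.seed(3)
p = 2**61 - 1

def pair(a, b):
    return a * b % p            # exponent of e(G^a, G~^b)

def sign(w, u, v, y, b):
    """Exponents of (R, S~, T~) for message column w; b = 0 or 1."""
    z = random.randrange(1, p)
    r = pow(z, -1, p)
    s = z * (y + v) % p
    t = z * (sum(ui * wi for ui, wi in zip(u, w)) + w[-1]
             + v * y + b * z * v * (y + v)) % p
    return r, s, t

def verify(w, u, v, y, b, sig):
    r, s, t = sig
    eq1 = pair(r, s) == (y + v) % p
    eq2 = pair(r, t) == (sum(ui * wi for ui, wi in zip(u, w)) + w[-1]
                        + v * y + b * v * s) % p
    return eq1 and eq2

ell = 4
u = [random.randrange(p) for _ in range(ell - 1)]   # logs of U_1..U_{ell-1}
w = [random.randrange(p) for _ in range(ell)]       # logs of the message
v, y = random.randrange(1, p), random.randrange(1, p)
for b in (0, 1):
    assert verify(w, u, v, y, b, sign(w, u, v, y, b))
print("both verification equations hold for b = 0 and b = 1")
```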

Finally, to sign \(L= \ell k\) group elements in \(\tilde{{{\mathbb G}}}\) instead of \(\ell \) group elements we keep the first verification equation, which does not involve the message, but add \(k-1\) extra verification equations similar to the second verification equation for a vector of group elements described above. This allows us to sign \(k\) vectors in parallel. In order to avoid linear combinations of message vectors and signature components being useful in other verification equations, we give each verification equation a separate \(e{(V, \tilde{Y}_{j})}\) factor, where \(j=1,\ldots ,k\) is the index of the verification equation. The resulting signature scheme is given below.

[Figure: description of the scheme \(\mathsf {SP{1}}\)]

Theorem 9

\(\mathsf {SP{1}}\) is a structure-preserving combined signature scheme that is combined existentially unforgeable under chosen-message attack (\(\mathsf {C}\text {-} \mathsf {EUF}\text {-} \mathsf {CMA} \) secure) in the generic group model.

Proof

Perfect correctness, perfect randomizability and structure preservation follow by inspection. What remains is to prove that the signature scheme is C-EUF-CMA secure in the generic bilinear group model. In the (Type-III) generic bilinear group model, the adversary may compute new group elements in either source group by taking arbitrary linear combinations of previously seen group elements in the same source group. We shall see that no such linear combination of group elements, viewed as formal Laurent polynomials in the variables picked by the key generator and the signing oracle, yields an existential forgery. Along the lines of the Uber assumption of Boneh, Boyen and Goh [19], the inability to produce forgeries when working with formal Laurent polynomials implies that the signature scheme is C-EUF-CMA secure in the generic bilinear group model.

Let \(\varvec{\tilde{M}}_i = \tilde{G}^{\mathbf{{W}}_i} \in \tilde{{{\mathbb G}}}^{\ell \times k}\) for \(\mathbf{{W}}_i \in {{\mathbb Z}}_p^{\ell \times k}\) be the i-th (\(0\le i \le q\)) signing query made by the adversary. The group elements in the message may be constructed by combining previously seen group elements, so \(\mathbf{{W}}_i\) may depend linearly on the discrete logarithms of public key elements in \(\tilde{{{\mathbb G}}}\) and all previously seen signature elements \(\tilde{S}_j, \varvec{\tilde{T}}_j\) for \(j<i\). The adversary obtains signatures \((R_i, \tilde{S}_i, \varvec{\tilde{T}}_i)\) with

$$\begin{aligned} R_i = G^{\frac{1}{z_i}} \qquad \tilde{S}_i = (\tilde{Y}_1 \tilde{G}^v)^{z_i} \qquad \varvec{\tilde{T}}_i = \tilde{G}^{z_i\left( (\varvec{u},1)\mathbf{{W}}_i+v\varvec{y}+b_iz_iv(y_1+v)\,\varvec{1}\right) } \end{aligned}$$

where \(b_i=0\) if query i is for a randomizable signature and \(b_i=1\) if query i is for a strong signature.

Viewed as Laurent polynomials, the discrete logarithms of a signature \((R, \tilde{S}, \varvec{\tilde{T}})\) generated by the adversary on a message \(\varvec{\tilde{M}} \in \tilde{{{\mathbb G}}}^{\ell \times k}\) defined by \(\mathbf{{W}}\in \mathbb {Z}_p^{\ell \times k}\) are of the form

$$\begin{aligned} r&=\rho +v\rho _v+\varvec{u}\varvec{\rho }_{u}^\top +\sum _i\frac{1}{z_i}\rho _{r_i}\\ s&=\sigma +\varvec{\sigma }_y \varvec{y}^\top +\sum _j\sigma _{s_j}z_j(y_1+v)+\sum _j\varvec{\sigma }_{t_j}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jz_jv(y_1+v)\varvec{1}\right) \\ \varvec{t}&=\varvec{\tau }+\varvec{y}T_y+\sum _jz_j(y_1+v)\varvec{\tau }_{s_j}+\sum _jz_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jz_jv(y_1+v)\varvec{1}\right) T_{t_j} \end{aligned}$$

Similarly, all \(\ell k\) entries in \(\mathbf{{W}}\) can be written in a form similar to s, and all entries in queried messages with discrete logarithms \(\mathbf{{W}}_i\) can be written in a form similar to s where the sums are bounded by \(j<i\).

For the first verification equation to be satisfied, we must have \(rs=y_1+v\), i.e.,

$$\begin{aligned} \left( \begin{array}{l}\ \ \rho +\varvec{u}\varvec{\rho }_{u}^\top \\ +v\rho _v+\sum _i\frac{1}{z_i}\rho _{r_i}\end{array}\right) \left( \begin{array}{l}\ \ \sigma +\varvec{\sigma }_y \varvec{y}^\top +\sum _j\sigma _{s_j}z_j(y_1+v)\\ +\sum _j\varvec{\sigma }_{t_j}z_j\big ((\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\big )^\top \end{array}\right) =y_1+v \end{aligned}$$

We start by noting that \(r\ne 0\), since otherwise the left-hand-side multivariate polynomial rs cannot contain the term \(y_1\) that appears on the right-hand side. Observe that only in \({{\mathbb G}}\) do we have terms with indeterminates of negative power, i.e., \(\frac{1}{z_i}\); in \(\tilde{{{\mathbb G}}}\) all indeterminates have positive power, so \(s_j,\varvec{t}_j,\mathbf{{W}}_j\) only contain proper multivariate polynomials. Now suppose for a moment that \(\rho _{r_i}=0\) for all i. Then, in order not to have any terms involving the \(z_j\)'s in rs, we must have \(\sum _j\sigma _{s_j}z_j(y_1+v)+\sum _j\varvec{\sigma }_{t_j}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) ^\top =0\). The term \(y_1\) now gives us \(\rho \sigma _{y,1}=1\) and the term v gives us \(\rho _v\sigma =1\). This means \(\rho \ne 0\) and \(\sigma \ne 0\), so we reach a contradiction since the constant term \(\rho \sigma \) should be 0. We conclude that there must exist some \(J\) for which \(\rho _{r_{J}}\ne 0\).

Now we have the term \(\rho _{r_{J}}\sigma \frac{1}{z_{J}}=0\), which shows us \(\sigma =0\). The terms \(\rho _{r_{J}}\sigma _{y,h}\frac{y_{h}}{z_{J}}=0\) for \(h=1,\ldots ,k\) give us \(\varvec{\sigma }_y=\varvec{0}\).

The polynomials corresponding to \(s_j\) and \(\varvec{t}_j\) contain the indeterminate \(z_j\) in all terms, so no linear combination of them can give us a term whose indeterminate component is \(vy_{h}\) for some \(h\in \{1,\ldots ,k\}\). Since \(\mathbf{{W}}_j\) is constructed as a linear combination of elements in the verification key and components in \(\tilde{{{\mathbb G}}}\) from previously seen signatures, it too cannot contain a term whose indeterminate component is \(vy_{h}\). The coefficient of \(\frac{z_j}{z_{J}}vy_{h}\) is therefore \(\rho _{r_{J}}\sigma _{t_j,h}\), so \(\sigma _{t_j,h}=0\) for every \(j\ne {J}\) and \(h\in \{1,\ldots ,k\}\). This shows \(\varvec{\sigma }_{t_j}=\varvec{0}\) for all \(j\ne J\). Looking at the coefficients of \(vy_{h}\) for \(h=1,\ldots ,k\), we see that \(\varvec{\sigma }_{t_{J}}=\varvec{0}\) too.

The terms \(\rho _{r_{J}}\sigma _{s_j}\frac{z_j}{z_{J}}v\) give us \(\sigma _{s_j}=0\) for all \(j\ne J\). In order to get a coefficient of 1 for the term \(y_1\), we see that \(\sigma _{s_J}=\frac{1}{\rho _{r_J}}\), which is nonzero. Our analysis has now shown that

$$\begin{aligned} s=\frac{1}{\rho _{r_J}}z_J(y_1+v). \end{aligned}$$

Let us now analyze the structure of r. The term \(\rho _v \sigma _{s_J}v^2z_J=0\) gives us \(\rho _v=0\). We know from our previous analysis that if there were a second \(i\ne J\) with \(\rho _{r_i}\ne 0\), then also \(\sigma _{s_{J}}=0\), which it is not. Therefore \(\rho _{r_i}=0\) for all \(i\ne J\). The term \(\rho \sigma _{s_J}z_Jy_1\) gives \(\rho =0\). The terms in \(\sigma _{s_J}\varvec{u}z_Jv\varvec{\rho }_u^{\top }\) give us \(\varvec{\rho }_{u}=\varvec{0}\). Our analysis therefore shows

$$\begin{aligned} r=\rho _{r_J}\frac{1}{z_J}. \end{aligned}$$

We now turn to the second verification equation, which is \(rt_1=(\varvec{u},1)\mathbf{{w}}^\top +vy_1+bvs\), where \(\mathbf{{w}}^\top \) is the first column vector of \(\mathbf{{W}}\). The message vector is of the form

$$\begin{aligned} \mathbf{{w}}=\varvec{\mu }+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+v)+\sum _jz_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j} \end{aligned}$$

where \(\varvec{\mu }, \mathbf{{W}}_y, \varvec{\mu }_{s_j}\) and \(\mathbf{{W}}_{t_j}\) are vectors and matrices of corresponding sizes with entries in \({{\mathbb Z}}_p\) chosen by the adversary. Similarly, we can write \(t_1=\tau +\varvec{\tau }_y\varvec{y}^\top +\sum _j\tau _{s_j}z_j(y_1+v)+\sum _j\varvec{\tau }_{t_j}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \) for scalars and vectors \(\tau ,\varvec{\tau }_y,\tau _{s_j},\varvec{\tau }_{t_j}\) of corresponding sizes with entries in \({{\mathbb Z}}_p\) chosen by the adversary.

Writing out the second verification equation, we have

$$\begin{aligned}&\rho _{r_J}\frac{1}{z_J}\left( \begin{array}{l}\ \ \tau +\varvec{\tau }_y\varvec{y}^\top +\sum _j\tau _{s_j}z_j(y_1+v)\\ +\sum _j\varvec{\tau }_{t_j}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \end{array}\right) \\&\quad = vy_1+bv\left( \frac{1}{\rho _{r_J}}z_J(y_1+v)\right) \\&\qquad +\left( \varvec{u},1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+v)\\ +\sum _jz_j\left( (\varvec{u},1) \mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top . \end{aligned}$$

Looking at the coefficients of terms involving \(\frac{1}{z_J}\) and \(\frac{y_{h}}{z_J}\), we get \(\tau =0\) and \(\varvec{\tau }_y=\varvec{0}\). Looking at the terms in \(\rho _{r_J}\varvec{\tau }_{t_j}\frac{z_j}{z_J}v\varvec{y}\), we get \(\varvec{\tau }_{t_j}=\varvec{0}\) for all \(j\ne J\). Similarly, the terms \(\rho _{r_J}\tau _{s_j}\frac{z_j}{z_J}v\) give us \(\tau _{s_j}=0\) for all \(j\ne J\). We are now left with

$$\begin{aligned}&\rho _{r_J}\left( \tau _{s_J}(y_1+v)+\varvec{\tau }_{t_J}\left( (\varvec{u},1)\mathbf{{W}}_J+v\varvec{y}+b_Jvz_J(y_1+v)\varvec{1}\right) \right) \\&\quad = vy_1+bv\frac{1}{\rho _{r_J}}z_J(y_1+v)\\&\qquad +\,\left( \varvec{u},1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+v)\\ +\sum _jz_j\left( (\varvec{u},1) \mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top . \end{aligned}$$

Terms involving \(z_j\) and \(z_j^2\) must cancel out, so we can assume \(\varvec{\mu }_{s_j}=\varvec{0}\) and \(\mathbf{{W}}_{t_j}=0\) for \(j>J\). Since \(\mathbf{{W}}_J\) does not involve \(z_J\) in any of its terms, we get from the terms in \((\varvec{u},1)z_Jv\varvec{\mu }_{s_J}^\top \) that \(\varvec{\mu }_{s_J}=0\). Since there can be no terms involving \(z_J^2\) we get \(b_J\varvec{1} \mathbf{{W}}_{t_J}^\top =\varvec{0}\). Looking at the coefficients for v we get \(\tau _{s_J}=0\). This leaves us with

$$\begin{aligned}&\rho _{r_J}\varvec{\tau }_{t_J}\left( (\varvec{u},1) \mathbf{{W}}_J+v\varvec{y}+b_Jvz_J(y_1+v)\varvec{1}\right) ^\top \\&\quad = vy_1+bv\frac{1}{\rho _{r_J}}z_J(y_1+v) + (\varvec{u},1)z_J\left( \left( (\varvec{u},1) \mathbf{{W}}_J+v\varvec{y}\right) \mathbf{{W}}_{t_J} \right) ^\top \\&\qquad +\,\left( \varvec{u},1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{y}\mathbf{{W}}_y+\sum _{j<J}\varvec{\mu }_{s_j}z_j(y_1+v)\\ +\,\sum _{j<J}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top . \end{aligned}$$

Looking at the terms involving \(z_Jv^2\) we see \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b\frac{1}{\rho _{r_J}}\). This cancels out the first two parts involving \(z_J\). The only remaining terms involving \(z_J\) now give us \(\mathbf{{W}}_{t_J}=0\). This gives us

$$\begin{aligned}&\rho _{r_J}\varvec{\tau }_{t_J}\left( (\varvec{u},1)\mathbf{{W}}_J+v\varvec{y}\right) ^\top - vy_1\\&\quad = \left( \varvec{u},1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{y}\mathbf{{W}}_y+\sum _{j<J}\varvec{\mu }_{s_j}z_j(y_1+v)\\ +\sum _{j<J}z_j\left( (\varvec{u},1) \mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top \end{aligned}$$

Looking at the terms in \(v\varvec{y}\), we now get \(\rho _{r_J}\varvec{\tau }_{t_J}=(1,0,\ldots ,0)\). Let the first column vector of \(\mathbf{{W}}_J\) be \(\mathbf{{w}}_J^\top \); then we have

$$\begin{aligned} (\varvec{u},1)\mathbf{{w}}_J^\top =(\varvec{u},1)\mathbf{{w}}^\top . \end{aligned}$$

Writing

$$\begin{aligned} \mathbf{{w}}'=\mathbf{{w}}_J-\mathbf{{w}}=\varvec{\mu }'+\varvec{y}\mathbf{{W}}_y'+\sum _{j<J}\varvec{\mu }_{s_j}'z_j(y_1+v)+\sum _{j<J}z_j\left( (\varvec{u},1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}' \end{aligned}$$

we now have

$$\begin{aligned} (\varvec{u},1)\left( \begin{array}{l}\ \ \varvec{\mu }'+\varvec{y}\mathbf{{W}}_y'+\sum _{j<J}\varvec{\mu }_{s_j}'z_j(y_1+v)\\ +\sum _{j<J}z_j\left( (\varvec{u},1) \mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+v)\varvec{1}\right) \mathbf{{W}}_{t_j}'\end{array}\right) ^\top =0. \end{aligned}$$

The terms in \((\varvec{u},1)\varvec{\mu }'^\top \) tell us \(\varvec{\mu }'=\varvec{0}\). Looking at terms involving \(u_iy_h\) or \(y_h\) gives us \(\mathbf{{W}}_y'=0\). Terms with \(z_j^2\) tell us \(b_j\varvec{1}\mathbf{{W}}_{t_j}'=\varvec{0}\) for all j. Terms in \((\varvec{u},1)z_jv\varvec{\mu }_{s_j}'^\top \) tell us \(\varvec{\mu }_{s_j}'=\varvec{0}\) for all j. Finally, terms in \((\varvec{u},1)z_j(v\varvec{y}\mathbf{{W}}_{t_j}')^\top \) give us \(\mathbf{{W}}_{t_j}'=0\).

We have now deduced that \(\mathbf{{w}}'=\varvec{0}\) and therefore \(\mathbf{{w}}_J=\mathbf{{w}}\). This means the first column in \(\mathbf{{W}}\) for which the adversary has produced a signature is a copy of the first column in the queried message \(\mathbf{{W}}_J\). Using the same analysis on the last \(k-1\) verification equations gives us that the other \(k-1\) columns also match. This means a generic adversary can only produce valid signatures for previously queried messages, so we have EUF-CMA security.

Finally, let us consider the case where \(b=1\), i.e., we are doing a strong signature verification. We saw earlier that \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b_J=b\frac{1}{\rho _{r_J}}\) which can only be satisfied if \(b_{J}=1\) and \(\rho _{r_J}=1\). This means \(s=s_J\) and \(r=r_J\) and \(\mathbf{{W}}= \mathbf{{W}}_J\) and therefore \(\varvec{t}=\varvec{t}_J\). So the generic adversary can only satisfy the strong verification equation with \(b=1\) by copying both the message and signature from a previous query with \(b_J=1\).

On the other hand, if \(b=0\), i.e., we are verifying a randomizable signature, we see from \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b_J=b\frac{1}{\rho _{r_J}}\) that \(b_J=0\). So the adversary has merely randomized a signature that was intended to be randomizable. \(\square \)

5.2 Combined FSPS

The structure-preserving signature scheme we just gave uses knowledge of the discrete logarithms of \(\varvec{U}\) in a fundamental way, since \(\varvec{\tilde{T}}\) contains linear combinations of group elements in \(\varvec{\tilde{M}}\) that yield a vector of group elements \(\tilde{G}^{z(\varvec{u},1)\mathbf{{W}}}\) which could not be computed without knowing \(\varvec{u}\). This situation is common to all structure-preserving signature schemes for messages that are vectors of group elements. The need to include such discrete logarithms in the signing key therefore prevents them from being fully structure preserving.

To get full structure preservation, we circumvent this problem by only pairing message group elements with signature group elements where the signer does actually know the discrete logarithms. In our case, we will modify the structure-preserving signature scheme by letting the signer pick \(\varvec{U}\) herself and include it in the signature.

To make this idea work, we first make a minor modification to our signature scheme from before. We include a vector of \(\ell -1\) group elements \(\varvec{\tilde{X}}\) in the setup, and we modify \(\tilde{S}\) to have the form \(\tilde{S}= \left( \tilde{Y}_1 \varvec{\tilde{X}}^{\varvec{u}}\tilde{G}^v \right) ^z\). The first verification equation then becomes

$$\begin{aligned} e{(R, \tilde{S})} = e{(G, \tilde{Y}_1)} \prod _{i=1}^{\ell -1} e{(U_i, \tilde{X}_i)} e{(V, \tilde{G})} \end{aligned}$$

If this were the only modification we made, it is not hard to see that the same security proof we gave earlier would work again; we are only modifying the verification equation by a random constant \(\prod _{i=1}^{\ell -1} e{(U_i, \tilde{X}_i)}\). The surprising thing, though, is that the signature scheme remains secure if we let the signer pick the \(\varvec{U}\) part of the verification key herself and include it in the signature.

Letting the signer pick \(\varvec{U}\) as part of the verification key means that she can know its discrete logarithms. Since she also picks \(z\leftarrow \mathbb {Z}_p^*\) herself, she can now use linear operations on the group elements in the message matrix to compute the group elements in the vector \(\tilde{G}^{z(\varvec{u},1)\mathbf{{W}}}\) part of \(\varvec{\tilde{T}}\). Furthermore, we have designed the scheme such that the rest can be computed with linear operations as well. To make randomizable signatures, the signer just needs to know \(\tilde{G}^v\) and \(\varvec{\tilde{Y}}^v\). To make strong signatures, she additionally needs to know \(\varvec{\tilde{X}}^v\) and \(\tilde{G}^{v^2}\).
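The claim that \(\varvec{\tilde{T}}\) needs only linear group operations can be checked in the exponent model (our sketch; the variable names mirror the scheme, with \(bz^2\)-powers applied to the stored elements \(\varvec{\tilde{Y}}^v, \varvec{\tilde{X}}^v, \tilde{G}^{v^2}\)):

```python
import random

random.seed(0)
p = 2**61 - 1
ell = 3
# scalars the signer knows:
u = [random.randrange(p) for _ in range(ell - 1)]
z = random.randrange(1, p)
b = 1
# discrete logs the signer does NOT know:
v, y1 = random.randrange(1, p), random.randrange(1, p)
x = [random.randrange(1, p) for _ in range(ell - 1)]
w = [random.randrange(p) for _ in range(ell)]        # message column logs
# secret-key group elements, represented by their logs:
Yv = v * y1 % p                                      # log of Y~_1^v
Xv = [v * xi % p for xi in x]                        # logs of X~_i^v
Gv2 = v * v % p                                      # log of G~^{v^2}
# signer's computation: only known scalars applied to group elements
t1 = (sum(ui * wi for ui, wi in zip(u, w)) + w[-1]) * z % p
t1 = (t1 + z * Yv
      + b * z * z * (Yv + sum(ui * xvi for ui, xvi in zip(u, Xv)) + Gv2)) % p
# reference value with all discrete logs in the clear
ref = z * (sum(ui * wi for ui, wi in zip(u, w)) + w[-1] + v * y1
           + b * z * v * (y1 + sum(ui * xi for ui, xi in zip(u, x)) + v)) % p
assert t1 == ref
print("T~_1 computed using linear group operations only")
```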

The resulting fully structure-preserving signature scheme is described below and can be used to sign messages consisting of \(L=\ell k\) group elements in \(\tilde{{{\mathbb G}}}\). It has a verification key size of 1 group element, a signature size of \(\ell +k+1\) group elements, and verification involves evaluating \(k+1\) pairing product equations. Since they are quite similar, we describe the randomizable and the strongly unforgeable signature algorithms at the same time: setting \(b=0\) gives the algorithms for randomizable signatures and setting \(b=1\) gives the algorithms for strongly unforgeable ones.

[Figure: description of the scheme \(\mathsf {EFSP{1}}\)]

Theorem 10

\(\mathsf {EFSP{1}}\) gives a fully structure-preserving combined signature scheme that is \(\mathsf {C}\text {-}\mathsf {EUF}\text {-}\mathsf {CMA }\) secure in the generic group model.

Proof

Perfect correctness, perfect randomizability and structure preservation follow by inspection. The secret key \(sk = (\mathcal{A},\varvec{\tilde{X}}^v, \varvec{\tilde{Y}}^v, {\tilde{G}^{v^2}})\) consists of \(\ell +k+1\) group elements, and we can verify that it matches the verification key \(vk = V\) by checking the pairing product equations

$$\begin{aligned}&e{(V, \tilde{G})} = e{(G, {\tilde{G}^v})}, \quad e{(V, \varvec{\tilde{X}})} = e{(G, \varvec{\tilde{X}}^v)} , \\&e{(V, \varvec{\tilde{Y}})} = e{(G, \varvec{\tilde{Y}}^v)}, \quad e{(V, {\tilde{G}^v})} = e{(G, {\tilde{G}^{v^2}})} \end{aligned}$$

so the signature scheme is fully structure preserving.

What remains now is to prove that the signature scheme is C-EUF-CMA secure in the generic group model. In the (Type-III) generic bilinear group model, the adversary may compute new group elements in either source group by taking arbitrary linear combinations of previously seen group elements in the same source group. We shall see that no such linear combination of group elements, viewed as formal Laurent polynomials in the variables picked by the key generator and the signing oracle, yields an existential forgery. It then follows, along the lines of the Uber assumption in [19], that the signature scheme is C-EUF-CMA secure in the generic bilinear group model.
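To make the formal-polynomial view concrete, the following sketch (our own illustration; the variable ordering \((z, v, y_1, x_1, x_2)\) and the toy scalars are assumptions, not part of the scheme) represents Laurent polynomials as sparse dictionaries from integer exponent tuples to coefficients, and confirms that an honestly generated signature satisfies \(rs = y_1 + \varvec{u}\varvec{x}^\top + v\) identically in the indeterminates:

```python
from collections import defaultdict

def pmul(f, g):
    """Multiply sparse Laurent polynomials given as {exponent-tuple: coeff};
    negative exponents (e.g., for 1/z) are allowed."""
    h = defaultdict(int)
    for ea, ca in f.items():
        for eb, cb in g.items():
            h[tuple(a + b for a, b in zip(ea, eb))] += ca * cb
    return {e: c for e, c in h.items() if c}  # drop canceled monomials

# Variable ordering: (z, v, y1, x1, x2); u1, u2 are known scalars.
u1, u2 = 5, 7
# r = 1/z
r = {(-1, 0, 0, 0, 0): 1}
# s = z * (y1 + u1*x1 + u2*x2 + v)
s = {(1, 0, 1, 0, 0): 1,
     (1, 0, 0, 1, 0): u1,
     (1, 0, 0, 0, 1): u2,
     (1, 1, 0, 0, 0): 1}
# rs must equal y1 + u1*x1 + u2*x2 + v, with the z's canceling exactly
expected = {(0, 0, 1, 0, 0): 1,
            (0, 0, 0, 1, 0): u1,
            (0, 0, 0, 0, 1): u2,
            (0, 1, 0, 0, 0): 1}
assert pmul(r, s) == expected
```

The security proof performs exactly this kind of coefficient bookkeeping, but for adversarially chosen linear combinations of all oracle outputs rather than a single honest signature.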

Let \(\varvec{\tilde{M}}_i = \tilde{G}^{\mathbf{{W}}_i} \in \tilde{{{\mathbb G}}}^{\ell \times k}\) for \(\mathbf{{W}}_i \in \mathbb {Z}_p^{\ell \times k}\) be the i-th (\(1 \le i \le q\)) signing query made by the adversary. Since the adversary can use generic group operations to construct the message group elements, \(\mathbf{{W}}_i\) may depend linearly on the discrete logarithms of public key elements in \(\tilde{{{\mathbb G}}}\) and all previously seen signature elements \(\tilde{S}_j, \varvec{\tilde{T}}_j\) for \(j<i\). The adversary obtains signatures \((\varvec{U}_i, R_i, \tilde{S}_i, \varvec{\tilde{T}}_i)\) where

$$\begin{aligned}&\varvec{U}_i, \qquad R_i = G^{\frac{1}{z_i}}, \qquad \tilde{S}_i = \left( \tilde{Y}_1 \prod _{\kappa =1}^{\ell -1}\tilde{X}_{\kappa }^{u_{i,\kappa }}\, {\tilde{G}^v} \right) ^{z_i},\\&\varvec{\tilde{T}}_i = \tilde{G}^{z_i \left( (\varvec{u}_i,1)\mathbf{{W}}_i+v\varvec{y}+b_iz_iv(y_1+\varvec{u}_i\varvec{x}^\top +v)\varvec{1}\right) } \end{aligned}$$

where \(b_i=0\) if query i is for a randomizable signature and \(b_i=1\) if query i is for a strong signature.
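As a sanity check on these formulas, the following Python sketch works directly with discrete logarithms (a toy exponent-level model of our own, not actual group or pairing arithmetic) and verifies the \(k+1\) verification identities \(rs=y_1+\varvec{u}\varvec{x}^\top +v\) and \(rt_h=(\varvec{u},1)\mathbf{{w}}_h^\top +vy_h+bvs\), our reading of the \(\varvec{\tilde{T}}\) formula, for both \(b=0\) and \(b=1\):

```python
import random

p = (1 << 61) - 1  # toy prime modulus, standing in for the group order

def sign_exponents(x, y, v, W, b):
    """Discrete logs of a signature (u, r, s, t) on message matrix W.
    x: l-1 secrets, y: k secrets, W: l x k message exponents."""
    l, k = len(W), len(W[0])
    u = [random.randrange(1, p) for _ in range(l - 1)]
    z = random.randrange(1, p)
    r = pow(z, -1, p)                       # r = 1/z
    c = (y[0] + sum(ui * xi for ui, xi in zip(u, x)) + v) % p
    s = z * c % p                           # s = z(y1 + u.x + v)
    u1 = u + [1]                            # the vector (u, 1)
    t = [z * (sum(u1[i] * W[i][h] for i in range(l))
              + v * y[h] + b * z * v * c) % p for h in range(k)]
    return u, r, s, t

def verify_exponents(x, y, v, W, b, u, r, s, t):
    """Exponent-level analog of the k+1 pairing product equations."""
    l, k = len(W), len(W[0])
    c = (y[0] + sum(ui * xi for ui, xi in zip(u, x)) + v) % p
    if r * s % p != c:                      # rs = y1 + u.x + v
        return False
    u1 = u + [1]
    return all(r * t[h] % p ==              # r*t_h = (u,1)w_h + v*y_h + b*v*s
               (sum(u1[i] * W[i][h] for i in range(l))
                + v * y[h] + b * v * s) % p
               for h in range(k))

l, k = 3, 2
x = [random.randrange(1, p) for _ in range(l - 1)]
y = [random.randrange(1, p) for _ in range(k)]
v = random.randrange(1, p)
W = [[random.randrange(p) for _ in range(k)] for _ in range(l)]
for b in (0, 1):
    assert verify_exponents(x, y, v, W, b, *sign_exponents(x, y, v, W, b))
```

Note that \(z\) cancels in \(rs\) and \(rt_h\), which is exactly why the proof below tracks the \(\frac{1}{z_i}\) terms so carefully.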

Viewed as Laurent polynomials we have that the discrete logarithms of a signature \((\varvec{U}, R, \tilde{S}, \varvec{\tilde{T}})\) generated by the adversary on \(\varvec{\tilde{M}} \in \tilde{{{\mathbb G}}}^{\ell \times k}\) are of the forms

$$\begin{aligned} \varvec{u}= & {} \varvec{\alpha }+ v\varvec{\alpha }_v + \sum _i\varvec{u}_i A_i + \sum _i\frac{1}{z_i}\varvec{\alpha }_{r_i}\\ r= & {} \rho +v\rho _v+\sum _i\varvec{u}_i\varvec{\rho }_{u_i}^\top +\sum _i\frac{1}{z_i}\rho _{r_i}\\ s= & {} \sigma +\varvec{\sigma }_x \varvec{x}^\top +\varvec{\sigma }_y \varvec{y}^\top +\sum _j\sigma _{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\&+\sum _j\varvec{\sigma }_{t_j}z_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jz_jv(y_1+\varvec{u}_j\varvec{x}^\top + v)\varvec{1}\right) ^\top \\ \varvec{t}= & {} \varvec{\tau }+\varvec{x}T_x+\varvec{y}T_y+\sum _jz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{\tau }_{s_j}\\&+\sum _jz_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jz_jv(y_1+\varvec{u}_j\varvec{x}^\top + v)\varvec{1}\right) T_{t_j} \end{aligned}$$

Similarly, all \(\ell k\) discrete logarithms \(\mathbf{{W}}\) of \(\varvec{\tilde{M}}\) can be written in a form similar to s, and all discrete logarithms of queried message matrices \(\mathbf{{W}}_i\) can be written in a form similar to s where the sums are bounded by \(j<i\).

For the first verification equation to be satisfied, we must have \(rs=y_1+\varvec{u}\varvec{x}^\top +v\), i.e.,

$$\begin{aligned}&\left( \begin{array}{l}\ \ \rho +\sum _i\varvec{u}_i\varvec{\rho }_{u_i}^\top \\ +v\rho _v+\sum _i\frac{1}{z_i}\rho _{r_i}\end{array}\right) \cdot \left( \begin{array}{l}\ \ \ \sigma +\varvec{\sigma }_x \varvec{x}^\top +\varvec{\sigma }_y \varvec{y}^\top +\sum _j\sigma _{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _j\varvec{\sigma }_{t_j}z_j\big ((\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\big )^\top \end{array}\right) \\&\quad =y_1+\left( \varvec{\alpha }+ v\varvec{\alpha }_v + \sum _i\varvec{u}_i A_i + \sum _i\frac{1}{z_i}\varvec{\alpha }_{r_i}\right) \varvec{x}^\top +v \end{aligned}$$

We start by noting that \(r\ne 0\), since otherwise rs could not contain the term \(y_1\). Observe that it is only in \({{\mathbb G}}\) that we have terms including indeterminates with negative power, i.e., \(\frac{1}{z_i}\); in \(\tilde{{{\mathbb G}}}\), all indeterminates have positive power, so \(s_j,\varvec{t}_j, \mathbf{{W}}_j\) contain only proper multivariate polynomials. Now suppose for a moment that \(\rho _{r_i}=0\) for all i. Then, in order not to have any terms involving \(z_j\)'s in rs, we must have

$$\begin{aligned}&\sum _j\sigma _{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^{\top }+v)\\&\quad +\sum _j\varvec{\sigma }_{t_j}z_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) ^\top =0. \end{aligned}$$

The term \(y_1\) now gives us \(\rho \sigma _{y,1}=1\) and the term v gives us \(\rho _v\sigma =1\). This means \(\rho \ne 0\) and \(\sigma \ne 0\) and therefore we reach a contradiction since the constant term should be \(\rho \sigma =0\). We conclude that there must exist some \(J\) for which \(\rho _{r_J}\ne 0\).

Now we have the term \(\rho _{r_J}\sigma \frac{1}{z_J}=0\), which shows us \(\sigma =0\). The terms \(\rho _{r_J}\sigma _{y,h}\frac{y_h}{z_J}=0\) for \(h=1,\ldots ,k\) give us \(\varvec{\sigma }_y=\varvec{0}\).

The polynomials corresponding to \(s_j\) and \(\varvec{t}_j\) contain the indeterminate \(z_j\) in all terms, so no linear combination of them can give us a term where the indeterminate component is \(vy_h\) for some \(h\in \{1,\ldots ,k\}\). Since \(\mathbf{{W}}_j\) is constructed as a linear combination of elements in the verification key and components in \(\tilde{{{\mathbb G}}}\) from previously seen signatures, it too cannot contain a term where the indeterminate component is \(vy_h\). The coefficient of \(\frac{z_j}{z_J}vy_h\) is therefore \(\rho _{r_J}\sigma _{t_j,h}=0\) and therefore \(\sigma _{t_j,h}=0\) for every \(j\ne J\) and \(h\in \{1,\ldots ,k\}\). This shows \(\varvec{\sigma }_{t_j}=\varvec{0}\) for all \(j\ne J\). Looking at the coefficients for \(vy_h\) for \(h=1,\ldots ,k\) we see that \(\varvec{\sigma }_{t_J}=\varvec{0}\) too.

The terms \(\rho _{r_J}\sigma _{s_j}\frac{z_j}{z_J}v\) give us \(\sigma _{s_j}=0\) for all \(j\ne J\). In order to get a coefficient of 1 for the term \(y_1\), we see that \(\sigma _{s_J}=\frac{1}{\rho _{r_J}}\), which is nonzero. Our analysis has now shown that

$$\begin{aligned}s=\varvec{\sigma }_x \varvec{x}^\top +\frac{1}{\rho _{r_J}}z_J(y_1+\varvec{u}_J\varvec{x}^\top +v).\end{aligned}$$

Let us now analyze the structure of r. The term \(\rho _v \sigma _{s_J}v^2z_J=0\) gives us \(\rho _v=0\). We know from our previous analysis that if there were a second \(i\ne J\) for which \(\rho _{r_i}\ne 0\), then we would also have \(\sigma _{s_J}=0\), which it is not. Therefore, for all \(i\ne J\) we have \(\rho _{r_i}=0\). The term \(\rho \sigma _{s_J}z_Jy_1\) gives \(\rho =0\). The terms in \(\varvec{\rho }_{u_i}\sigma _{s_J}\varvec{u}_{i}z_Jv\) give us \(\varvec{\rho }_{u_i}=\varvec{0}\) for all i. Our analysis therefore shows

$$\begin{aligned} r=\rho _{r_J}\frac{1}{z_J}. \end{aligned}$$

Finally, having simplified r and s, analyzing the terms in \(\varvec{u}\) gives us

$$\begin{aligned}\varvec{u}=\varvec{u}_J+\rho _{r_J}\varvec{\sigma }_x \frac{1}{z_J}.\end{aligned}$$

We now turn to the second verification equation, which is \(rt_1=(\varvec{u},1)\mathbf{{w}}^\top +vy_1+bvs\), where \(\mathbf{{w}}^\top \) is the first column vector of \(\mathbf{{W}}\). The message vector is of the form

$$\begin{aligned} \mathbf{{w}}=\begin{array}{l}\varvec{\mu }+\varvec{x}\mathbf{{W}}_x+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _jz_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}, \end{aligned}$$

where \(\varvec{\mu }, \mathbf{{W}}_x, \mathbf{{W}}_y, \varvec{\mu }_{s_j}\), and \(\mathbf{{W}}_{t_j}\) are vectors and matrices of corresponding size with entries in \(\mathbb {Z}_p\) chosen by the adversary. Similarly, we can write out

$$\begin{aligned} t_1= & {} \tau +\varvec{\tau }_x\varvec{x}^\top +\varvec{\tau }_y\varvec{y}^\top +\sum _j\tau _{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\&+\sum _j\varvec{\tau }_{t_j}z_j\left( (\varvec{u}_j,1) \mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) ^\top \end{aligned}$$

for elements and vectors of corresponding size \(\tau ,\varvec{\tau }_x,\varvec{\tau }_y,\tau _{s_j},\varvec{\tau }_{t_j}\) with entries in \(\mathbb {Z}_p\) chosen by the adversary.

Writing out the second verification equation, we have

$$\begin{aligned}&\rho _{r_J}\frac{1}{z_J}\left( \begin{array}{l}\ \ \ \tau +\varvec{\tau }_x\varvec{x}^\top +\varvec{\tau }_y\varvec{y}^\top +\sum _j\tau _{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _j\varvec{\tau }_{t_j}z_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) ^\top \end{array}\right) \\&\quad = vy_1+bv\left( \varvec{\sigma }_x \varvec{x}^\top +\frac{1}{\rho _{r_J}}z_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\right) \\&\qquad +\left( \varvec{u}_J+\rho _{r_J}\varvec{\sigma }_x \frac{1}{z_J},1\right) \\&\qquad \times \left( \begin{array}{l}\ \ \ \varvec{\mu }+\varvec{x}\mathbf{{W}}_x+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _jz_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top . \end{aligned}$$

Looking at the coefficients of terms involving \(\frac{1}{z_J}\), we get the following equalities, where the monomial considered is given in parentheses and the last two hold for all \(j\ne J\): \(\tau =\varvec{\sigma }_x \varvec{\mu }^\top \ (\frac{1}{z_J})\), \(\varvec{\tau }_x=\varvec{\sigma }_x \mathbf{{W}}_x^\top \ (\frac{x_h}{z_J})\), \(\varvec{\tau }_y=\varvec{\sigma }_x \mathbf{{W}}_y^\top \ (\frac{y_h}{z_J})\), \(\tau _{s_j}=\varvec{\sigma }_x\varvec{\mu }_{s_j}^\top \ (\frac{vz_j}{z_J})\), \(\varvec{\tau }_{t_j}=\varvec{\sigma }_x \mathbf{{W}}_{t_j}^\top \ (\frac{vy_hz_j}{z_J})\). Canceling out these terms, we are left with

$$\begin{aligned}&\rho _{r_J}\left( \tau _{s_J}(y_1+\varvec{u}_J\varvec{x}^\top +v)+\varvec{\tau }_{t_J}\left( (\varvec{u}_J,1)\mathbf{{W}}_J+v\varvec{y}+b_Jvz_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\varvec{1}\right) ^\top \right) \\&\quad = vy_1+bv\left( \varvec{\sigma }_x \varvec{x}^\top +\frac{1}{\rho _{r_J}}z_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\right) \\&\qquad +\, \rho _{r_J}\varvec{\sigma }_x \left( \varvec{\mu }_{s_J}(y_1+\varvec{u}_J\varvec{x}^\top +v)\right. \\&\qquad \left. +\left( (\varvec{u}_J,1)\mathbf{{W}}_J+v\varvec{y}+b_Jvz_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_J}\right) ^\top \\&\qquad +\left( \varvec{u}_J,1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{x}\mathbf{{W}}_x+\varvec{y}\mathbf{{W}}_y+\sum _j\varvec{\mu }_{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _jz_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top . \end{aligned}$$

Terms involving \(z_j\) and \(z_j^2\) must cancel out, so we can assume \(\varvec{\mu }_{s_j}=\varvec{0}\) and \(\mathbf{{W}}_{t_j}=0\) for \(j>J\). Since \(\mathbf{{W}}_J\) does not involve \(z_J\) in any of its terms, we get from the terms in \((\varvec{u}_J,1)z_Jv\varvec{\mu }_{s_J}^\top \) that \(\varvec{\mu }_{s_J}=\varvec{0}\). Since there can be no terms involving \(z_J^2\), we get \(b_J\varvec{1} \mathbf{{W}}_{t_J}^\top =\varvec{0}\). Looking at the coefficients for v, we get \(\tau _{s_J}=\varvec{\sigma }_x\varvec{\mu }_{s_J}^\top \). This leaves us with

$$\begin{aligned}&\rho _{r_J}\varvec{\tau }_{t_J}\left( (\varvec{u}_J,1)\mathbf{{W}}_J+v\varvec{y}+b_Jvz_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\varvec{1}\right) ^\top \\&\quad = vy_1+bv\left( \varvec{\sigma }_x \varvec{x}^\top +\frac{1}{\rho _{r_J}}z_J(y_1+\varvec{u}_J\varvec{x}^\top +v)\right) \\&\qquad + \,\rho _{r_J}\varvec{\sigma }_x \left( \left( (\varvec{u}_J,1)\mathbf{{W}}_J+v\varvec{y}\right) \mathbf{{W}}_{t_J}\right) ^\top \\&\qquad +\left( \varvec{u}_J,1\right) \left( \begin{array}{l}\ \ \varvec{\mu }+\varvec{x}\mathbf{{W}}_x+\varvec{y}\mathbf{{W}}_y+\sum _{j<J}\varvec{\mu }_{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _{j<J}z_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top \\&\qquad + \,(\varvec{u}_J,1)z_J\left( \left( (\varvec{u}_J,1)\mathbf{{W}}_J+v\varvec{y}\right) \mathbf{{W}}_{t_J} \right) ^\top . \end{aligned}$$

Looking at the terms involving \(z_Jv^2\) we see \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b\frac{1}{\rho _{r_J}}\). The only remaining terms involving \(z_J\) now give us \(\mathbf{{W}}_{t_J}=0\). This gives us

$$\begin{aligned}&\rho _{r_J}\varvec{\tau }_{t_J}\left( (\varvec{u}_J,1) \mathbf{{W}}_J+v\varvec{y}\right) ^\top \\&\quad = vy_1+bv\varvec{\sigma }_x \varvec{x}^\top \\&\qquad +\left( \varvec{u}_J,1\right) \left( \begin{array}{l}\ \ \ \varvec{\mu }+\varvec{x}\mathbf{{W}}_x+\varvec{y}\mathbf{{W}}_y+\sum _{j<J}\varvec{\mu }_{s_j}z_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\\ +\sum _{j<J}z_j\left( (\varvec{u}_j,1)\mathbf{{W}}_j+v\varvec{y}+b_jvz_j(y_1+\varvec{u}_j\varvec{x}^\top +v)\varvec{1}\right) \mathbf{{W}}_{t_j}\end{array}\right) ^\top \end{aligned}$$

Looking at the terms in \(v\varvec{y}\) we now get \(\rho _{r_J}\varvec{\tau }_{t_J}=(1,0,\ldots ,0)\). This means \((\varvec{u}_J,1)\mathbf{{w}}_J^\top =bv\varvec{\sigma }_x\varvec{x}^\top +(\varvec{u}_J,1)\mathbf{{w}}^\top \), where \(\mathbf{{w}}_J\) is the first column of \(\mathbf{{W}}_J\). Looking at the coefficients of \(vx_h\), we see that \(b\varvec{\sigma }_x=\varvec{0}\). Since \(\mathbf{{W}}_J\) and \(\mathbf{{W}}\) are independent of \(\varvec{u}_J\), this means \(\mathbf{{w}}=\mathbf{{w}}_J\).

A similar argument can be applied to the remaining \(k-1\) verification equations, showing that all columns of \(\mathbf{{W}}\) and \(\mathbf{{W}}_{J}\) match. This means \(\varvec{\tilde{M}}=\varvec{\tilde{M}}_{J}\), so the signature scheme is existentially unforgeable both for randomizable signatures and strong signatures.

Finally, let us consider the case \(b=1\), i.e., we are doing a strong signature verification. We have already seen that \(b\varvec{\sigma }_x=\varvec{0}\), so when \(b=1\) this means \(\varvec{\sigma }_x=\varvec{0}\). Since \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b_J=b\frac{1}{\rho _{r_J}}\), we see that \(b_{J}=1\) and \(\rho _{r_J}=1\). This means \(s=s_J\), \(r=r_J\), \(\varvec{u}=\varvec{u}_J\), and \(\mathbf{{W}}= \mathbf{{W}}_{J}\), i.e., \(\varvec{\tilde{M}}=\varvec{\tilde{M}}_J\), and therefore \(\varvec{t}=\varvec{t}_J\). So the generic adversary can only satisfy the strong verification equation with \(b=1\) by copying both the message and signature from a previous query with \(b_J=1\).

On the other hand, if \(b=0\), i.e., we are verifying a randomizable signature, we see from \(\rho _{r_J}\varvec{\tau }_{t_J}b_J\varvec{1}^\top =b_J=b\frac{1}{\rho _{r_J}}\) that \(b_J=0\). So the adversary has randomized a signature intended for randomization. \(\square \)

5.3 Efficiency

We give a detailed performance comparison in Table 4 between \(\mathsf {FSP{2}}\), which is based on standard assumptions, and our most efficient scheme \(\mathsf {EFSP{1}}\). Unsurprisingly, we get a significantly smaller signature size and a modest reduction in the number of verification equations. We also observe that the verification key in \(\mathsf {EFSP{1}}\) is just a single group element, which is optimal and makes it cheap to certify the verification key in digital credential systems or by a certification authority.

Table 4 Size of objects and number of verification equations in fully structure-preserving signature schemes for messages consisting of \(L=\ell k\) elements in \(\tilde{{{\mathbb G}}}\)

6 Lower Bound on Signature Size and Verification Key Size

The signatures of our concrete FSPSs consist of \(\Omega (\sqrt{L})\) group elements when signing \(L\)-element messages. This may seem disappointing given previous constant-size constructions of SPS, but we argue that the \(\sqrt{L}\) factor is unavoidable. It is a consequence of the following new trade-off between signature and verification key size for arbitrary (even one-time) SPS schemes.
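Concretely, an \(\mathsf {EFSP{1}}\) signature has \(\ell +k+1\) elements for \(L=\ell k\), which by the AM–GM inequality is minimized by the balanced split \(\ell =k=\sqrt{L}\), matching the lower bound below. A small enumeration (an illustrative helper of our own, not from the scheme itself) confirms this for square \(L\):

```python
from math import isqrt

def best_split(L):
    """Among factorizations L = l*k, find the one minimizing the
    EFSP1 signature size l + k + 1."""
    best = None
    for l in range(1, L + 1):
        if L % l == 0:
            k = L // l
            size = l + k + 1
            if best is None or size < best[2]:
                best = (l, k, size)
    return best

for L in (4, 16, 100):
    l, k, size = best_split(L)
    # the balanced split l = k = sqrt(L) is optimal, giving 2*sqrt(L)+1
    assert l == k == isqrt(L)
    assert size == 2 * isqrt(L) + 1
```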

Theorem 11

Consider a (one-time) SPS scheme on messages in \(\tilde{{{\mathbb G}}}^L\) in the asymmetric (Type-III) bilinear group setting. Let \(\kappa \) be the number of group elements in \(vk\) (and \(gk\)) used in evaluating the PPEs in verification. Let \(\sigma \) be the number of group elements in signatures. If the scheme is existentially unforgeable in a model in which the adversary has access to a valid signature on a known message and the scheme has a generic signing algorithm, then \(\kappa + \sigma \ge \sqrt{L}\).

Proof

Denote by \((M_1,\ldots ,M_L)\in \tilde{{{\mathbb G}}}^L\) the message vector, by \((U_1,\ldots ,U_{\kappa _1},V_1,\ldots ,V_{\kappa _2})\in {{\mathbb G}}^{\kappa _1}\times \tilde{{{\mathbb G}}}^{\kappa _2}\) (\(\kappa _1+\kappa _2=\kappa \)) the verification key elements, and by \((R_1,\ldots ,R_{\sigma _1}, S_1,\ldots ,S_{\sigma _2})\in {{\mathbb G}}^{\sigma _1}\times \tilde{{{\mathbb G}}}^{\sigma _2}\) (\(\sigma _1+\sigma _2=\sigma \)) the signature elements. The corresponding discrete logarithms are written in lowercase letters.

Each verification equation of the scheme can be expressed as a bilinear relation between the discrete logarithms of the group elements in \({{\mathbb G}}\) (namely the \(U_i\)’s and \(R_i\)’s) on the one hand, and those of the elements in \(\tilde{{{\mathbb G}}}\) (namely the \(M_i\)’s, \(V_i\)’s and \(S_i\)’s) on the other. The i-th pairing product equation can thus be written in matrix form as:

$$\begin{aligned} X^T E_i Y = 0, \end{aligned}$$
(22)

where X and Y are the column vectors given by

$$\begin{aligned} X&= (r_1,\ldots ,r_{\sigma _1},u_1,\ldots ,u_{\kappa _1},1)^T,\text { and}\\ Y&= (m_1,\ldots ,m_L,s_1,\ldots ,s_{\sigma _2},v_1,\ldots ,v_{\kappa _2},1)^T, \end{aligned}$$

and \(E_i\) is a public \((\kappa _1+\sigma _1+1)\times (L+\kappa _2+\sigma _2+1)\) matrix over \({{\mathbb Z}}_p\).

Now fix a valid message-signature pair \((M_1,\ldots ,M_L,R_1,\ldots ,R_{\sigma _1},S_1,\ldots ,S_{\sigma _2})\). By linear algebra, we can efficiently compute a nonzero tuple \((m_1^*,\ldots ,m_L^*)\in {{\mathbb Z}}_p^L\) that satisfies

$$\begin{aligned} E_i (m_1^*,\ldots ,m_L^*,0,\ldots ,0)^T = 0 \end{aligned}$$

for all i, if it exists. Then, it is clear from Eq.  (22) that \((R_1,\ldots ,R_{\sigma _1},S_1,\ldots ,S_{\sigma _2})\) is still a valid signature on the distinct message vector \((M_1\tilde{G}^{m_1^*},\ldots , M_L\tilde{G}^{m_L^*})\), which contradicts existential unforgeability.
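The linear-algebra step can be made concrete. The sketch below (a toy illustration with made-up small matrices standing in for the first \(L\) columns of the stacked \(E_i\); nothing here is specific to any real scheme) finds a nonzero \((m_1^*,\ldots ,m_L^*)\) by Gaussian elimination mod p; such a vector always exists when \(L\) exceeds the number of independent rows, i.e., when \(L > n(\kappa _1+\sigma _1+1)\):

```python
p = 101  # small prime for illustration

def kernel_vector(rows, L, p):
    """Given the stacked first-L-column blocks of all E_i as a list of
    rows over Z_p, return a nonzero m* with rows @ m* = 0, or None."""
    A = [row[:] for row in rows]
    pivots = {}                       # column -> row index in reduced form
    r = 0
    for c in range(L):                # Gauss-Jordan elimination mod p
        piv = next((i for i in range(r, len(A)) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], -1, p)
        A[r] = [a * inv % p for a in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(a - f * b) % p for a, b in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    free = [c for c in range(L) if c not in pivots]
    if not free:
        return None                   # trivial kernel: no forgery this way
    m = [0] * L
    m[free[0]] = 1                    # set first free variable to 1
    for c, row_i in pivots.items():   # back-substitute the pivot variables
        m[c] = (-A[row_i][free[0]]) % p
    return m

# stacked E_i blocks: more message columns (L = 4) than independent rows
rows = [[1, 2, 3, 4],
        [0, 1, 1, 1]]
m = kernel_vector(rows, 4, p)
assert m is not None and any(m)
for row in rows:
    assert sum(a * b for a, b in zip(row, m)) % p == 0
```

With such an \(m^*\) in hand, multiplying the message by \(\tilde{G}^{m_i^*}\) coordinate-wise yields the forgery described above.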

Therefore, with n being the number of verification equations, the linear map \({{\mathbb Z}}_p^L\rightarrow {{\mathbb Z}}_p^{n(\kappa _1+\sigma _1+1)}\) mapping \((m_1,\ldots ,m_L)\) to the concatenation of all vectors \(E_i(m_1,\ldots ,m_L,0,\ldots ,0)^T\) must be injective. In particular, we have:

$$\begin{aligned} L\le n\cdot (\kappa _1+\sigma _1+1) \le n\cdot (\kappa +\sigma )\;, \end{aligned}$$

where the second inequality comes from the fact that we must have \(\sigma _2\ge 1\); otherwise, the generic signing algorithm would output signatures that cannot depend on the message.

Finally, we must have \(n\le \sigma \) (after removing possibly redundant verification equations). Indeed, if it were not the case, the quadratic system satisfied by the discrete logarithms of the signature elements would be overdetermined, and a generic message would not admit any valid signature at all. We thus obtain \(L\le \sigma \cdot (\kappa +\sigma )\le (\kappa +\sigma )^2\), which concludes the proof.\(\square \)

The following theorem can be proven in a similar manner as above by replacing public keys, secret keys, and signatures with commitment keys, opening information, and commitments, respectively.

Theorem 12

Consider a structure-preserving commitment scheme on messages in \(\tilde{{{\mathbb G}}}^L\) in the asymmetric (Type-III) bilinear group setting. Assume that the commitment key consists of elements in \(\tilde{{{\mathbb G}}}\), and let \(\chi \) be the number of elements in commitments and o the number of group elements in the opening information. If the scheme is target collision resistant and has a generic commitment algorithm, we have \(\chi +o \ge \sqrt{L}\).

From Theorem 11, we immediately see that an FSPS scheme obtained from construction \(\mathsf {FSP{1}}\) must have signatures of more than \(\sqrt{L}\) elements. This is because every signature includes, as a subset, both the verification key and the signature of a structure-preserving OTS scheme signing \(L\)-element messages.

Regarding \(\mathsf {EFSP{1}}\) in Sect. 5.2, only a constant number of group elements from \(vk\) and \(gk\) are involved in verification. With such optimized verification keys, signatures have to contain on the order of \(\sqrt{L}\) elements according to Theorem 11.

Finally, Theorem 12 shows that an FSPS scheme obtained from construction \(\mathsf {FSP{2}}\) must also have signatures of more than \(\sqrt{L}\) elements, at least when the underlying trapdoor commitment scheme has its key elements on the same side as the resulting signature, which seems necessary with our approach based on \({\mathsf {MTC}}\).

7 Applications

We first discuss potential applications of FSPS in this section. Later, we show composable and modular anonymous credentials from [21] as a concrete example.

  • Public key infrastructure. On the very applied side, the question is connected with the timely problem of public key infrastructures. Few protocols have been designed with the goal of being secure against adversarial keys, and few real-world certificate authorities validate that registries provide valid public keys or prove knowledge of the corresponding secret keys. The availability of schemes with efficient non-interactive proofs-of-knowledge of secret key possession can only improve this situation. In the provable security literature, this knowledge of secret key solution to rogue key attacks appeared early on in the study of multi-signatures by Micali et al. [44, Problem 4 and Fix 4]. See Ristenpart and Yilek [46] for a comprehensive study of this problem.

  • Protocol design in strong security model. More generally, these obstacles to secret key extraction have hindered modular composable protocol design. Camenisch et al. [23] developed a framework for practical universally composable (UC) zero-knowledge proofs, in which they identify proofs-of-knowledge of exponents as a major bottleneck. Camenisch et al. [21] constructed unlinkable redactable signatures and anonymous credentials that are UC-secure. Their construction requires proofs-of-knowledge of the signing key of a structure-preserving signature scheme, which in turn, as studied by Chase et al. [26], is an instance of a general transformation for making signature schemes simulatable [11]. Given these examples, we conjecture that fully structure-preserving signature schemes help build UC-secure privacy preserving protocols.

  • Strengthening privacy in group and ring signatures. In classical group and ring signatures, e.g., [15, 17, 36, 47], the goal of the adversary against privacy is to distinguish signatures from two honest members whose keys are actually generated and registered by the challenger. The attack game aborts if either of the targets is a corrupted member registered with an adversarially generated key. Instead of excluding such corrupt members from the scope of security, stronger privacy in the presence of adversarial keys can be guaranteed if the challenger can extract the secret key to create group or ring signatures on their behalf. Such a model is meaningful when some keys are generated incorrectly, e.g., because of multiple potentially flawed implementations, but their owners nevertheless use them with the correct signing algorithm. Note that this requires a trusted common reference string, which places mild assumptions on the trust model: to retain other security properties such as unforgeability and non-frameability, the extraction trapdoor must be inaccessible to the adversary.

As a concrete example, we overview a UC-secure anonymous credential system in [21].

Anonymous Credentials Like a traditional digital certificate, an anonymous credential can be seen as a signature by an issuer on the attributes of users. To preserve privacy, the signature scheme used to certify information must be redactable. This allows users to only reveal the information that they deem adequate for a given service provider and context. To preserve anonymity, the signature scheme must be unlinkable. This prevents service providers from collecting meta-data about the behavior of users and also prevents the collation of attribute information about the same user previously revealed in different contexts or across multiple services.

Thus, in addition to the \(\mathsf {Key}\), \(\mathsf {Sign}\), and \(\mathsf {Vrf}\) algorithms of traditional signatures, unlinkable redactable signatures provide a \({\mathsf {Derive}}\) algorithm that, given a signature on a message, produces an unlinkable signature on a redacted message. Anonymous credentials guarantee unlinkability even when issuers and service providers collude. To model this, we require that signatures produced by \({\mathsf {Derive}}\) are indistinguishable from fresh signatures, as long as the verification key satisfies the predicate \({\mathsf {CheckVK}}\). For such keys, we require that signing keys be online extractable, and this is exactly where FSPS is needed.

A UC Functionality for Anonymous Credentials The following functionality models the unforgeability, unlinkability, and redactability requirements of anonymous credentials.

Intuitively, for the scheme to be composably secure, the ideal functionality together with the simulator must be indistinguishable from the real functionality that on messages \({\mathsf {keygen}}\), \({\mathsf {check\_vk}}\), \({\mathsf {issue}}\), \({\mathsf {show}}\), and \({\mathsf {verify}}\) simply runs the cryptographic algorithms \(\mathsf {Key}\), \({\mathsf {CheckVK}}\), \(\mathsf {Sign}\), \({\mathsf {Derive}}\), and \(\mathsf {Vrf}\) of an unlinkable redactable signature scheme. Instead of running \({\mathsf {CheckVK}}\), the simulator provides the functionality with signing keys extracted from well-formed issuer keys.

[Figure: ideal functionality for anonymous credentials]

The functionality enforces security properties regardless of the algorithms provided by the simulator:

Unforgeability is guaranteed by rejecting signatures for the honestly generated verification key that cannot be derived from signed messages, unless the signing key has been leaked.

Unlinkability and Redactability are guaranteed by generating a fresh redacted signature that covers the part of the message that is not redacted, while the rest of the initial message is set to zero. The presence of the \(({\mathsf {leak\_sk}},sid)\) message models that privacy guarantees are ensured even when the adversary learns the issuer's signing key. This corresponds to the full anonymity property of group signatures [15]. The \({\mathsf {check\_vk}}\) message models that privacy guarantees are ensured even for adversarial verification keys, as long as verification keys are well formed.

Realizing Compact Unlinkable Redactable Signatures Traditional certificates sign a hash of their attributes and can thus be very compact. Using hashed data structures such as Merkle trees, it is not too hard to extend the approach to support redactability. It is, however, challenging to simultaneously achieve compactness, unforgeability, unlinkability, and redactability. To our knowledge, all existing approaches [21, 33] employ structure-preserving signatures in one way or another: [33] uses an SPS to sign a set commitment, while [21] employs a vector commitment; these compress sets, respectively vectors, of attributes into a single group element and allow compact openings. Both set and vector commitments can be seen as commitments with an efficient partial opening algorithm.

An unlinkable redactable signature (URS) scheme consists of five algorithms. We recall the URS construction of [21] from SPS and generalize it to commitments with partial opening.

[Figure: the URS algorithms built from an SPS and a commitment scheme with partial opening]

The composable unlinkability of the ideal functionality can only be realized when the unlinkable redactable signature scheme allows for the online extraction of signing keys. Informally, key extractability requires additional predicates \({\mathsf {CheckVK}}\) and \({\mathsf {CheckKeys}}\), as well as trapdoor parameter generation and extraction algorithms \({\mathsf {SetupTd}}\) and \({\mathsf {ExtractKey}}\). When \({\mathsf {SetupTd}}\) is run and \({\mathsf {CheckVK}}\) outputs 1, then \({\mathsf {ExtractKey}}(vk,td)\) must output a valid signing key \(sk\), that is, \({\mathsf {CheckKeys}}(vk,sk)=1\). The efficient construction of such an extraction algorithm is facilitated by FSPS, as we can efficiently prove knowledge of the signing key using the Groth–Sahai proof system. Note that the scheme only signs a vector commitment that compresses a large number of messages into a single group element. Consequently, the overhead of full structure preservation is small: when using \(\mathsf {FSP{2}}\) from Sect. 4.2 as the FSPS, signatures on a single group element consist of only 15 elements, and proofs of key possession of just 18 elements. In total, the initial signatures of the \({\mathsf {URS}}\) consist of only 16 elements and redacted signatures of 38 elements.