1 Introduction

Arguably, one of the most important cryptographic hardness assumptions is the decisional Diffie–Hellman (\(\textsf {DDH}\)) assumption. For a fixed additive group \(\mathbb {G}\) of prime order q and a generator \(\mathcal {P}\) of \(\mathbb {G}\), we denote by \([a]:=a\mathcal {P}\in \mathbb {G}\) the implicit representation of an element \(a \in \mathbb {Z}_q\). The \(\textsf {DDH}\) Assumption states that \(([a],[r],[ar]) \approx _c ([a], [r], [z])\in \mathbb {G}^3\), where a, r, z are uniform elements in \(\mathbb {Z}_q\) and \(\approx _c\) denotes computational indistinguishability of the two distributions. It has been used in numerous important applications such as secure encryption [12], key exchange [20], hash proof systems [13], pseudo-random functions [37] and many more.
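As an illustration, the \(\textsf {DDH}\) experiment can be sampled concretely. The following toy Python sketch uses a deliberately tiny Schnorr-type group (hypothetical parameters \(p=2039\), \(q=1019\) with \(q \mid p-1\), generator \(g=4\) of the order-q subgroup), writing the implicit representation [x] multiplicatively as \(g^x \bmod p\); these sizes are for illustration only and far too small for any real use.

```python
import random

# Toy Schnorr-type group: q = 1019 divides p - 1 = 2038; g = 4 generates
# the order-q subgroup of Z_p^*.  Illustrative parameters only.
p, q, g = 2039, 1019, 4

def rep(x):
    """Implicit representation [x] = x*P, written multiplicatively as g^x mod p."""
    return pow(g, x % q, p)

def ddh_tuple(real):
    """Sample ([a], [r], [ar]) if real, else ([a], [r], [z]) for uniform z."""
    a, r, z = (random.randrange(q) for _ in range(3))
    return (rep(a), rep(r), rep((a * r) % q) if real else rep(z))
```

Note that \([ar]\) can be computed from \([r]\) and the *exponent* a as \(\texttt{pow(rep(r), a, p)}\); the assumption is that it cannot be computed from \([a]\) and \([r]\) alone.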

Bilinear Groups and the Linear Assumption. Bilinear groups (i.e., groups \(\mathbb {G}, \mathbb {G}_T\) of prime order q equipped with a bilinear map \(e: \mathbb {G}\times \mathbb {G}\rightarrow \mathbb {G}_T\)) [4, 24] revolutionized cryptography in recent years and are the basis for a large number of cryptographic protocols. However, relative to a (symmetric) bilinear map, the \(\textsf {DDH}\) Assumption is no longer true in the group \(\mathbb {G}\). (This is because \(e([a],[r])=e([1],[ar])\) and hence [ar] is no longer pseudo-random given [a] and [r].) The need for an “alternative” decisional assumption in \(\mathbb {G}\) was quickly addressed with the Linear Assumption (\(2\text{- }\textsf {Lin}\)) introduced by Boneh, Boyen and Shacham [3]. It states that \(([a_1],[a_2],[a_1r_1], [a_2r_2], [r_1+r_2])\approx _c ([a_1],[a_2],[a_1r_1], [a_2r_2], [z]) \in \mathbb {G}^5\), where \(a_1, a_2, r_1, r_2, z \leftarrow \mathbb {Z}_q\). \(2\text{- }\textsf {Lin}\) holds in generic bilinear groups [3] and it has virtually become the standard decisional assumption in the group \(\mathbb {G}\) in the bilinear setting. It has found applications to encryption [5, 7, 29, 38], signatures [3], zero-knowledge proofs [21], pseudo-random functions [6] and many more. More recently, the \(2\text{- }\textsf {Lin}\) assumption was generalized to the \((k\text{- }\textsf {Lin})_{k \in \mathbb {N}}\) Assumption family [23, 45] (\(1\text{- }\textsf {Lin}=\textsf {DDH}\)), a family of increasingly (strictly) weaker assumptions which are generically hard in k-linear maps.

Subgroup Membership Problems. Since the work of Cramer and Shoup [13], it has been recognized that it is useful to view the \(\textsf {DDH}\) Assumption as a hard subgroup membership problem in \(\mathbb {G}^2\). In this formulation, the \(\textsf {DDH}\) Assumption states that it is hard to decide whether a given element \(([r],[t]) \in \mathbb {G}^2\) is contained in the subgroup generated by ([1], [a]). Similarly, in this language the \(2\text{- }\textsf {Lin}\) assumption says that it is hard to decide whether a given vector \(([r],[s],[t]) \in \mathbb {G}^3\) is in the subgroup generated by the vectors \(([a_1],[0],[1]), ([0],[a_2],[1])\). The same holds for the \((k\text{- }\textsf {Lin})_{k \in \mathbb {N}}\) Assumption family: For each k, the \(k\text{- }\textsf {Lin}\) assumption can be naturally written as a hard subgroup membership problem in \(\mathbb {G}^{k+1}\). This alternative formulation has conceptual advantages for some applications; for instance, it made it possible to provide more instantiations of the original \(\textsf {DDH}\)-based scheme of Cramer and Shoup, and it is also the most natural point of view for translating schemes originally constructed in composite order groups into prime order groups [18, 36, 43, 44].
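The membership view also makes the role of the secret exponent explicit: whoever knows a can test membership, while without a the test is exactly \(\textsf {DDH}\). A minimal sketch in a toy Schnorr-type group (hypothetical tiny parameters \(p=2039\), \(q=1019\), \(g=4\), illustrative only):

```python
# Subgroup-membership view of DDH: ([r],[t]) lies in the subgroup generated
# by ([1],[a]) iff t = a*r, i.e. iff T == R^a in multiplicative notation.
# The exponent a acts as a membership trapdoor; without it the test is DDH.
p, q, g = 2039, 1019, 4   # toy parameters, illustrative only

def in_span(R, T, a):
    """Decide membership of ([r],[t]) = (R, T) given the trapdoor exponent a."""
    return pow(R, a, p) == T

a, r = 7, 123
R, T = pow(g, r, p), pow(g, (a * r) % q, p)   # a real pair r*([1],[a])
```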

Linear Algebra in Bilinear Groups. In its formulation as a subgroup membership problem, the \(k\text{- }\textsf {Lin}\) assumption can be seen as the problem of deciding linear dependence “in the exponent.” Recently, a number of works have illustrated the usefulness of a more algebraic point of view on decisional assumptions in bilinear groups, like the Dual Pairing Vector Spaces of Okamoto and Takashima [40] or the Subspace Assumption of Lewko [32]. Although these new decisional assumptions reduce to the \(2\text{- }\textsf {Lin}\) assumption, their flexibility and their algebraic description have proven to be crucial in many works to obtain complex primitives in strong security models previously unrealized in the literature, like attribute-based encryption, unbounded inner product encryption and many more (see [32, 41, 42], just to name a few).

This Work. Motivated by the success of this algebraic viewpoint of decisional assumptions, in this paper we explore new insights resulting from interpreting the \(k\text{- }\textsf {Lin}\) decisional assumption as a special case of what we call a Matrix Diffie–Hellman Assumption. The general problem states that it is hard to distinguish whether a given vector in \(\mathbb {G}^{\ell }\) is contained in the space spanned by the columns of a certain matrix \([{{\mathbf {{A}}}}] \in \mathbb {G}^{\ell \times k}\), where \({{\mathbf {{A}}}}\) is sampled according to some distribution \(\mathcal {D}_{\ell ,k}\). We remark that even though all our results are stated in symmetric bilinear groups, they can be naturally extended to the asymmetric setting.

1.1 The Matrix Diffie–Hellman Assumption

A New Framework for \(\textsf {DDH}\)-like Assumptions. For integers \(\ell > k\) let \(\mathcal {D}_{\ell ,k}\) be an (efficiently samplable) distribution over \(\mathbb {Z}_q^{\ell \times k}\). We define the \(\mathcal {D}_{\ell ,k}\)-Matrix DH (\(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\)) Assumption as the following subgroup decision assumption:

$$\begin{aligned} \mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}: \quad [{{\mathbf {{A}}}}|| {{\mathbf {{A}}}} \vec {r}] \approx _c [{{\mathbf {{A}}}}|| \vec {u}] \in \mathbb {G}^{\ell \times (k+1)}, \end{aligned}$$

where \({{\mathbf {{A}}}}\in \mathbb {Z}_q^{\ell \times k}\) is chosen from distribution \(\mathcal {D}_{\ell ,k}\), \(\vec {r}\leftarrow \mathbb {Z}_q^k\), and \(\vec {u} \leftarrow \mathbb {G}^{\ell }\). The \((k\text{- }\textsf {Lin})_{k \in \mathbb {N}}\) family corresponds to this problem when \(\ell =k+1\), and \(\mathcal {D}_{\ell ,k}\) is the specific distribution \(\mathcal {L}_{k}\) (formally defined in Example 2).
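As a concrete illustration, the sketch below samples the two \(\textsf {MDDH}\) distributions for the uniform matrix distribution \(\mathcal {U}_{3,2}\) in a toy group (hypothetical tiny parameters, illustrative only; real instantiations use cryptographically sized groups):

```python
import random

p, q, g = 2039, 1019, 4          # toy Schnorr-type group, illustrative only
ell, k = 3, 2

def rep(x):
    """Implicit representation [x] as g^x mod p."""
    return pow(g, x % q, p)

def sample_mddh(real):
    """Return [A || A*w] (real) or [A || u] (random) for A <- U_{3,2}."""
    A = [[random.randrange(q) for _ in range(k)] for _ in range(ell)]
    w = [random.randrange(q) for _ in range(k)]
    last = [sum(A[i][j] * w[j] for j in range(k)) % q if real
            else random.randrange(q) for i in range(ell)]
    return [[rep(A[i][j]) for j in range(k)] + [rep(last[i])] for i in range(ell)]
```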

Generic Hardness. Due to its linearity properties, the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption does not hold in \((k+1)\)-linear groups. In Sect. 3.3, we give two different theorems which state sufficient conditions for the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption to hold generically in m-linear groups. Theorem 3 is very similar to the Uber-Assumption [2, 9] that characterizes hardness in bilinear groups (i.e., \(m=2\)) in terms of linear independence of polynomials in the inputs. We generalize this to arbitrary m using a more algebraic language. This algebraic formulation has the advantage that one can use additional tools (e.g., Gröbner bases or resultants) to show that a distribution \(\mathcal {D}_{\ell ,k}\) meets the conditions of Theorem 3, which is especially important for large m. It also allows us to prove a completely new result, namely Theorem 4, which states that a matrix assumption with \(\ell =k+1\) is generically hard if a certain determinant polynomial is irreducible.

New Assumptions for Bilinear Groups. We propose other families of generically hard decisional assumptions that did not previously appear in the literature, e.g., those associated with \(\mathcal {C}_{k},\mathcal {SC}_{k},\mathcal {IL}_{k}\) defined below. For the most important parameters \(k=2\) and \(\ell =k+1=3\), we consider the following examples of distributions:

$$\begin{aligned} \mathcal {C}_{2}: {{\mathbf {{A}}}}=\left( \begin{array}{ll} a_1 &{} \quad 0\\ 1 &{} \quad a_2\\ 0 &{} \quad 1 \end{array} \right) \quad \mathcal {SC}_{2}: {{\mathbf {{A}}}}= \left( \begin{array}{ll} a &{} \quad 0 \\ 1 &{} \quad a \\ 0 &{} \quad 1 \end{array} \right) \quad \mathcal {L}_2: {{\mathbf {{A}}}}= \left( \begin{array}{ll} a_1 &{} \quad 0 \\ 0 &{} \quad a_2 \\ 1 &{} \quad 1 \end{array} \right) \quad \mathcal {IL}_{2}: {{\mathbf {{A}}}}= \left( \begin{array}{ll} a &{} \quad 0 \\ 0 &{} \quad a+1\\ 1 &{} \quad 1 \end{array} \right) , \end{aligned}$$

for uniform \(a, a_1, a_2 \in \mathbb {Z}_q\) as well as \(\mathcal {U}_{3,2}\), the uniform distribution in \(\mathbb {Z}_q^{3 \times 2}\) (already considered in [5, 19, 38, 46]). All assumptions are hard in generic bilinear groups. It is easy to verify that \(\mathcal {L}_{2}\)-\(\textsf {MDDH}=2\text{- }\textsf {Lin}\). We define \(2\text{- }\textsf {Casc}:=\mathcal {C}_{2}\text{- }\textsf {MDDH}\) (Cascade Assumption), \(2\text{- }\textsf {SCasc} :=\mathcal {SC}_{2}\text{- }\textsf {MDDH}\) (Symmetric Cascade Assumption), and \(2\text{- }\textsf {ILin} :=\mathcal {IL}_{2}\text{- }\textsf {MDDH}\) (Incremental Linear Assumption). In Sect. 3.4, we show that \(2\text{- }\textsf {SCasc} \Rightarrow 2\text{- }\textsf {Casc}\), \(2\text{- }\textsf {ILin} \Rightarrow 2\text{- }\textsf {Lin}\) and that \(\mathcal {U}_{3,2}\text{- }\textsf {MDDH}\) is the weakest of these assumptions (which extends the results of [18, 19, 46] for \(2\text{- }\textsf {Lin}\)). Although \(2\text{- }\textsf {ILin}\) and \(2\text{- }\textsf {SCasc}\) were originally thought to be incomparable assumptions [16], in Sect. 4 we show that they are in fact equivalent. This equivalence, together with the fact that \(2\text{- }\textsf {ILin} \Rightarrow 2\text{- }\textsf {Lin}\), implies that \(2\text{- }\textsf {SCasc}\) is a stronger assumption than \(2\text{- }\textsf {Lin}\).
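Working with the matrix exponents directly (no group operations needed), one can check, for example, that the \(\mathcal {L}_2\) distribution reproduces exactly the \(2\text{- }\textsf {Lin}\) challenge vector. A small sketch with a toy modulus (illustrative only):

```python
import random

q = 1019                                  # toy prime modulus, illustrative only
a, a1, a2 = (random.randrange(1, q) for _ in range(3))

C2  = [[a1, 0], [1, a2], [0, 1]]          # Cascade
SC2 = [[a, 0], [1, a], [0, 1]]            # Symmetric Cascade
L2  = [[a1, 0], [0, a2], [1, 1]]          # Linear
IL2 = [[a, 0], [0, (a + 1) % q], [1, 1]]  # Incremental Linear

def mul(A, w):
    """Matrix-vector product A*w over Z_q."""
    return [sum(c * x for c, x in zip(row, w)) % q for row in A]

r1, r2 = random.randrange(q), random.randrange(q)
# L2 * (r1, r2) = (a1*r1, a2*r2, r1 + r2): exactly the 2-Lin vector.
v = mul(L2, [r1, r2])
```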

Efficiency Improvements. As a measure of efficiency, we define the representation size \(\textsf {RE}_\mathbb {G}(\mathcal {D}_{\ell ,k})\) of a \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) assumption as the minimal number of group elements needed to represent \([{{\mathbf {{A}}}}]\) for any \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\). This parameter is important since it affects the performance (typically the size of public/secret parameters) of schemes based on a Matrix Diffie–Hellman Assumption. \(2\text{- }\textsf {Lin}\) and \(2\text{- }\textsf {Casc}\) have representation size 2 (elements \(([a_1],[a_2])\)), while \(2\text{- }\textsf {SCasc}\) has representation size only 1 (the single element [a]). Hence our new assumptions directly translate into shorter parameters for a large number of applications (see the Applications in Sect. 5). Further, our result points out a trade-off between efficiency and hardness which questions the role of \(2\text{- }\textsf {Lin}\) as the “standard decisional assumption” over a bilinear group \(\mathbb {G}\).

New Families of Weaker Assumptions. By defining appropriate distributions \(\mathcal {C}_{k},\, \mathcal {SC}_{k}\), \(\mathcal {IL}_{k}\) over \(\mathbb {Z}_q^{(k+1) \times k}\), for any \(k \in \mathbb {N}\), one can generalize all three new assumptions naturally to \(k\text{- }\textsf {Casc},\, k\text{- }\textsf {SCasc}\) and \(k\text{- }\textsf {ILin}\) with representation size \(k,\, 1\), and 1, respectively. Using our results on generic hardness, it is easy to verify that all three assumptions are generically hard in k-linear groups. Actually, in Sect. 4 we show that \(k\text{- }\textsf {SCasc}\) and \(k\text{- }\textsf {ILin}\) are equivalent for every k. Since all these assumptions are false in \((k+1)\)-linear groups, this gives us three new families of increasingly strictly weaker assumptions. In particular, the \(k\text{- }\textsf {SCasc}\) (equivalently, \(k\text{- }\textsf {ILin}\)) assumption family is of great interest due to its compact representation size of only 1 element.

Relations to Other Standard Assumptions. Surprisingly, the new assumption families can also be related to standard assumptions. The \(k\text{- }\textsf {Casc}\) Assumption is implied by the \((k+1)\)-party Diffie–Hellman Assumption (\((k+1)\text{- }\textsf {PDDH}\)) [7] which states that \(([a_1],\ldots ,[a_{k+1}],[a_1 \cdots a_{k+1}]) \approx _c ([a_1], \ldots ,[a_{k+1}],[z]) \in \mathbb {G}^{k+2}\). Similarly, \(k\text{- }\textsf {SCasc}\) is implied by the \((k+1)\)-Exponent Diffie–Hellman Assumption (\((k+1)\text{- }\textsf {EDDH}\)) [28] which states that \(([a],[a^{k+1}]) \approx _c ([a],[z]) \in \mathbb {G}^2\). Figure 1 gives an overview of the relations between the different assumptions.

Fig. 1 Relation between various assumptions and their generic hardness in k-linear groups

Uniqueness of One-Parameter Family. The most natural and useful \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) assumptions are those with \(\ell =k+1\) in which the entries of the matrices generated by \(\mathcal {D}_{\ell ,k}\) are polynomials of degree one in some parameters. Among them, the most compact correspond to the one-parameter distributions. As a novel contribution with respect to [16], in Sect. 4 we show that \(k\text{- }\textsf {ILin}\) and \(k\text{- }\textsf {SCasc}\) are tightly equivalent. Moreover, we prove that every \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) assumption defined by univariate polynomials of degree one is tightly equivalent to \(k\text{- }\textsf {SCasc}\), so we can see \(k\text{- }\textsf {SCasc}\) as a sort of canonical compact Matrix DH assumption. From the equivalence proof between \(k\text{- }\textsf {ILin}\) and \(k\text{- }\textsf {SCasc}\), one can easily construct a reduction from \(k\text{- }\textsf {SCasc}\) to \(k\text{- }\textsf {Lin}\).

1.2 Basic Applications

We believe that all schemes based on \(2\text{- }\textsf {Lin}\) can be shown to work for any Matrix Assumption. Consequently, a large class of known schemes can be instantiated more efficiently with the new more compact decisional assumptions, while offering the same generic security guarantees. To support this belief, in Sect. 5 we show how to construct some fundamental primitives based on any Matrix Assumption. All constructions are purely algebraic and therefore very easy to understand and prove.

  • Public-key Encryption. We build a key encapsulation mechanism with security against passive adversaries from any \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption. The public key is \([{{\mathbf {{A}}}}]\), the ciphertext consists of the first k elements of \([z]=[{{\mathbf {{A}}}} \vec {r}]\), and the symmetric key consists of the last \(\ell -k\) elements of [z]. Passive security immediately follows from \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\).

  • Hash Proof Systems. We build a smooth projective hash proof system (HPS) from any \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption. It is well known that HPS implies chosen-ciphertext secure encryption [13], password-authenticated key exchange [20], zero-knowledge proofs [1] and many other things.

  • Pseudo-Random Functions. Generalizing the Naor–Reingold PRF [6, 37], we build a pseudo-random function \(\textsf {PRF}\) from any \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption. The secret key consists of transformation matrices \({{\mathbf {{T}}}}_1,\ldots , {{\mathbf {{T}}}}_n\) (derived from independent instances \({{\mathbf {{A}}}}_{i,j} \leftarrow \mathcal {D}_{\ell ,k}\)) plus a vector \(\vec {h}\) of group elements. For \(x \in \{0,1\}^n\), we define \(\textsf {PRF}_K(x) = \left[ \prod _{i : x_i=1} {{\mathbf {{T}}}}_i \cdot \vec {h}\right] \). Using the random self-reducibility of the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption, we give a tight security proof.

  • Groth–Sahai non-interactive zero-knowledge proofs. Groth and Sahai [21] proposed very elegant and efficient non-interactive zero-knowledge (NIZK) and non-interactive witness-indistinguishable (NIWI) proofs that work directly for a wide class of languages that are relevant in practice. We show how to instantiate their proof system based on any \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption. While the size of the proofs depends only on \(\ell \) and k, the CRS and verification depend on the representation size of the Matrix Assumptions. Therefore, our new instantiations offer improved efficiency over the \(2\text{- }\textsf {Lin}\)-based construction from [21]. This application in particular highlights the usefulness of the Matrix Assumption to describe in a compact way many instantiations of a scheme: Instead of having to specify the constructions for the \(\textsf {DDH}\) and the \(2\text{- }\textsf {Lin}\) assumptions separately [21], we can recover them as a special case of a general construction.
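The key encapsulation mechanism from the first bullet above is simple enough to exercise end to end. The sketch below is not the paper's pseudocode but a toy instantiation under stated assumptions: \(\ell =3\), \(k=2\), matrices from the uniform distribution, and a deliberately tiny Schnorr-type group (hypothetical parameters); decryption uses the transformation matrix \({{\mathbf {{T}}}}={{\mathbf {{A}}}}_1{{\mathbf {{A}}}}_0^{-1}\).

```python
import random

p, q, g = 2039, 1019, 4          # toy group, illustrative only
ell, k = 3, 2

def rep(x):
    return pow(g, x % q, p)

def gen():
    # Sample A <- U_{3,2} until the top k x k block A0 is invertible mod q.
    while True:
        A = [[random.randrange(q) for _ in range(k)] for _ in range(ell)]
        if (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q:
            break
    pk = [[rep(x) for x in row] for row in A]
    return pk, A                 # keeping A as sk suffices to derive T

def enc(pk):
    r = [random.randrange(q) for _ in range(k)]
    z = []
    for row in pk:               # z = [A*r], computed in the group
        acc = 1
        for base, rj in zip(row, r):
            acc = acc * pow(base, rj, p) % p
        z.append(acc)
    return z[:k], z[k:]          # (ciphertext, key): first k / last ell-k entries

def dec(sk, c):
    # Transformation matrix T = A1 * A0^{-1}; recover K = [T*c] in the group.
    A0, A1 = sk[:k], sk[k]
    det = (A0[0][0] * A0[1][1] - A0[0][1] * A0[1][0]) % q
    dinv = pow(det, q - 2, q)    # det^{-1} mod q by Fermat's little theorem
    inv = [[A0[1][1] * dinv % q, -A0[0][1] * dinv % q],
           [-A0[1][0] * dinv % q, A0[0][0] * dinv % q]]
    T = [sum(A1[j] * inv[j][i] for j in range(k)) % q for i in range(k)]
    K = 1
    for ci, ti in zip(c, T):
        K = K * pow(ci, ti, p) % p
    return [K]
```

Correctness is the identity \({{\mathbf {{T}}}}{{\mathbf {{A}}}}_0\vec {r}={{\mathbf {{A}}}}_1\vec {r}\) in the exponent.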

More Efficient Proofs for CRS-Dependent Languages. In Sect. 6, we provide more efficient NIZK proofs for concrete natural languages which are dependent on the common reference string. More specifically, the common reference string of the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) instantiation of Groth–Sahai proofs of Sect. 5.4 includes as part of the commitment keys the matrix \([{{\mathbf {{A}}}}]\), where \({{\mathbf {{A}}}}\in \mathbb {Z}_q^{\ell \times k} \leftarrow \mathcal {D}_{\ell ,k}\). We give more efficient proofs for several languages related to \({{\mathbf {{A}}}}\). Although at first glance the languages considered may seem quite restricted, they naturally appear in many applications, where typically \({{\mathbf {{A}}}}\) is the public key of some encryption scheme and one wants to prove statements about ciphertexts. In particular, we obtain improvements for several kinds of statements, namely:

  • Subgroup Membership Proofs. We give more efficient proofs in the language \(\mathcal {L}_{{{\mathbf {{A}}}},\mathbb {G},\mathcal {P}} := \{[{{\mathbf {{A}}}}\vec {r}], \vec {r} \in \mathbb {Z}_q^k \}\subset \mathbb {G}^\ell \). To quantify some concrete improvement, in the \(2\text{- }\textsf {Lin}\) case, our proofs of membership are half the size of a standard Groth–Sahai proof, requiring only six group elements. We stress that this improvement is obtained without introducing any new computational assumption. As an example of application, consider, for instance, the encryption scheme derived from our KEM based on any \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) Assumption, where the public key is some matrix \([{{\mathbf {{A}}}}],\, {{\mathbf {{A}}}}\leftarrow \mathcal {D}_{\ell ,k}\). To see which kind of statements can be proved using our result, note that a ciphertext is a re-randomization of another one only if their difference is in \(\mathcal {L}_{{{\mathbf {{A}}}},\mathbb {G},\mathcal {P}}\). The same holds for proving that two commitments with the same key hide the same value or for showing in a publicly verifiable manner that the ciphertext of our encryption scheme opens to some known message [m]. This improvement has a significant impact on recent results, like [17, 35], and we think many more examples can be found. Interestingly, in independent work, a number of results [25, 26, 31, 34] have constructed even more efficient proofs of membership in linear subspaces by also exploiting the dependency between the common reference string and the matrix which generates the space. We note that although in all these works proofs are shorter, this comes at the cost of having only computationally sound proofs, while our results retain the perfect soundness inherited from Groth–Sahai proofs.

  • Ciphertext Validity and Plaintext Equality. Similar techniques yield more efficient proofs of statements which naturally appear when one wants to prove that a ciphertext is valid, or that two ciphertexts encrypted under different public keys open to the same plaintext, e.g., when using Naor–Yung techniques to obtain chosen-ciphertext security [39], as in the encryption schemes of [10, 15, 22, 27].
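To make the kind of statement involved concrete, the exponent-level sketch below checks that the difference of two encryptions of the same message lies in \(\mathcal {L}_{{{\mathbf {{A}}}}}\); the padding-style encryption used here is a hypothetical stand-in (toy modulus, message added into the last slot of \({{\mathbf {{A}}}}\vec {r}\)), not the scheme of Sect. 5.

```python
import random

q = 1019                 # toy modulus; exponent arithmetic only, illustrative
ell, k = 3, 2
random.seed(11)
A = [[random.randrange(q) for _ in range(k)] for _ in range(ell)]

def Aw(w):
    """A*w over Z_q: the exponent vector of an element of L_A."""
    return [sum(A[i][j] * w[j] for j in range(k)) % q for i in range(ell)]

def encrypt(m, r):
    """Hypothetical stand-in scheme: A*r with the message added to the last slot."""
    c = Aw(r)
    c[ell - 1] = (c[ell - 1] + m) % q
    return c

m = 77
r1 = [random.randrange(q) for _ in range(k)]
r2 = [random.randrange(q) for _ in range(k)]
diff = [(x - y) % q for x, y in zip(encrypt(m, r1), encrypt(m, r2))]
# The messages cancel, so diff = A*(r1 - r2), an element of L_A with
# witness r1 - r2: membership of diff certifies plaintext equality.
```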

2 Preliminaries

2.1 Notation

For \(n \in \mathbb {N}\), we write \(1^n\) for the string of n ones. Moreover, |x| denotes the length of a bitstring x, while |S| denotes the size of a set S. Further, \(s \leftarrow S\) denotes the process of sampling an element s from S uniformly at random. For an algorithm \(\textsf {A}\), we write \(z \leftarrow \textsf {A}(x,y,\ldots )\) to indicate that \(\textsf {A}\) is a (probabilistic) algorithm that outputs z on input \((x,y,\ldots )\). If \({{\mathbf {{A}}}}\) is a matrix, we denote by \(a_{ij}\) the entries and \(\vec {a}_i\) the column vectors.

2.2 Representing Elements in Groups

Let \(\textsf {Gen}\) be a probabilistic polynomial-time (ppt) algorithm that on input \(1^\lambda \) returns a description \(\mathcal {G}=(\mathbb {G},q,\mathcal {P})\) of a cyclic group \(\mathbb {G}\) of order q for a \(\lambda \)-bit prime q and a generator \(\mathcal {P}\) of \(\mathbb {G}\). More generally, for any fixed \(k \ge 1\), let \(\textsf {MGen}_{k}\) be a ppt algorithm that on input \(1^\lambda \) returns a description \(\mathcal {MG}_{k}=(\mathbb {G}, \mathbb {G}_{T_k}, q, e_k, \mathcal {P})\), where \(\mathbb {G}\) and \(\mathbb {G}_{T_k}\) are cyclic additive groups of prime order \(q,\, \mathcal {P}\) a generator of \(\mathbb {G}\), and \(e_k: \mathbb {G}^k \rightarrow \mathbb {G}_{T_k}\) is a (non-degenerate, efficiently computable) k-linear map. For \(k=2\), we define \(\textsf {PGen}:=\textsf {MGen}_{2}\) to be a generator of a bilinear group \(\mathcal {PG}=(\mathbb {G}, \mathbb {G}_T, q, e, \mathcal {P})\).

For an element \(a \in \mathbb {Z}_q\), we define \([a] = a \mathcal {P}\) as the implicit representation of a in \(\mathbb {G}\). More generally, for a matrix \({{\mathbf {{A}}}} = (a_{ij}) \in \mathbb {Z}_q^{n\times m}\) we define \([{{\mathbf {{A}}}}]\) as the implicit representation of \({{\mathbf {{A}}}}\) in \(\mathbb {G}\) and \([{{\mathbf {{A}}}}]_{T_k}\) as the implicit representation of \({{\mathbf {{A}}}}\) in \(\mathbb {G}_{T_k}\):

$$\begin{aligned}&[{{\mathbf {{A}}}}] := \left( \begin{array}{lll} a_{11} \mathcal {P}&{} \quad \ldots &{} \quad a_{1m} \mathcal {P}\\ \vdots &{} &{} \quad \vdots \\ a_{n1} \mathcal {P}&{} \quad \ldots &{} \quad a_{nm} \mathcal {P}\\ \end{array} \right) \in \mathbb {G}^{n \times m}, \\&[{{\mathbf {{A}}}}]_{T_k} := \left( \begin{array}{lll} a_{11} \mathcal {P}_{T_k} &{} \quad \ldots &{} \quad a_{1m} \mathcal {P}_{T_k} \\ \vdots &{} &{} \quad \vdots \\ a_{n1} \mathcal {P}_{T_k} &{} \quad \ldots &{} \quad a_{nm} \mathcal {P}_{T_k} \\ \end{array}\right) \in \mathbb {G}_{T_k}^{n \times m}, \end{aligned}$$

where \(\mathcal {P}_{T_k} = e_k(\mathcal {P}, \ldots , \mathcal {P}) \in \mathbb {G}_{T_k}\).

When talking about elements in \(\mathbb {G}\) and \(\mathbb {G}_{T_k}\), we will always use this implicit notation, i.e., we let \([a] \in \mathbb {G}\) be an element in \(\mathbb {G}\) or \([b]_{T_k}\) be an element in \(\mathbb {G}_{T_k}\). Note that from \([a] \in \mathbb {G}\), it is generally hard to compute the value a (discrete logarithm problem in \(\mathbb {G}\)). Further, from \([b]_{T_k}\in \mathbb {G}_{T_k}\) it is hard to compute the value \(b \in \mathbb {Z}_q\) (discrete logarithm problem in \(\mathbb {G}_{T_k}\)) or the value \([b] \in \mathbb {G}\) (pairing inversion problem). Obviously, given \([a] \in \mathbb {G},\, [b]_{T_k} \in \mathbb {G}_{T_k}\), and a scalar \(x \in \mathbb {Z}_q\), one can efficiently compute \([ax] \in \mathbb {G}\) and \([bx]_{T_k} \in \mathbb {G}_{T_k}\).
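The asymmetry described above is easy to see in a toy group (hypothetical tiny parameters, illustrative only): scalar operations on implicit representations are cheap, while recovering the exponent is a discrete-log search, feasible here only because the group is minuscule.

```python
p, q, g = 2039, 1019, 4      # toy parameters, far too small for real use
a, x = 321, 17

ga = pow(g, a, p)            # [a]: the implicit representation of a
gax = pow(ga, x, p)          # [a*x] from [a] and a known scalar x: easy

# Recovering a from [a] is the discrete logarithm problem; in this toy
# group it falls to brute force, which realistic group sizes prevent.
dlog = next(e for e in range(q) if pow(g, e, p) == ga)
```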

Also, all functions and operations acting on \(\mathbb {G}\) and \(\mathbb {G}_{T_k}\) will be defined implicitly. For example, when evaluating a bilinear pairing \(e: \mathbb {G}\times \mathbb {G}\rightarrow \mathbb {G}_T\) at \([a],[b]\in \mathbb {G}\), we will again use our implicit representation and write \([z]_T := e([a],[b])\). Note that \(e([a], [b]) = [ab]_T\) for all \(a,b \in \mathbb {Z}_q\).

2.3 Standard Diffie–Hellman Assumptions

Let \(\textsf {Gen}\) be a ppt algorithm that on input \(1^\lambda \) returns a description \(\mathcal {G}=(\mathbb {G},q,\mathcal {P})\) of a cyclic group \(\mathbb {G}\) of prime order q and a generator \(\mathcal {P}\) of \(\mathbb {G}\). Similarly, let \(\textsf {PGen}\) be a ppt algorithm that returns a description \(\mathcal {PG}=(\mathbb {G}, \mathbb {G}_T, q, e, \mathcal {P})\) of a pairing group. We informally recall a number of previously considered decisional Diffie–Hellman assumptions.

  • Decisional Diffie–Hellman (\(\textsf {DDH}\)) Assumption. It is hard to distinguish \((\mathcal {G}, [x],[y],[xy])\) from \((\mathcal {G},[x], [y],[z])\), for \(\mathcal {G}=(\mathbb {G},q,\mathcal {P})\leftarrow \textsf {Gen}\), \(x,y,z \leftarrow \mathbb {Z}_q\).

  • k -Linear (\(k\text{- }\textsf {Lin}\)) Assumption [3, 23, 45]. It is hard to distinguish \((\mathcal {G}, [x_1],[x_2],\ldots ,[x_k], [r_1x_1],[r_2x_2],\ldots ,[r_kx_k],[r_1 + \dots +r_k])\) from \((\mathcal {G}, [x_1],[x_2],\ldots ,[x_k], [r_1x_1],[r_2x_2],\ldots ,[r_kx_k],[z])\), for \(\mathcal {G}\leftarrow \textsf {Gen},\, x_1, \ldots , x_k,r_1, \ldots , r_k,z \leftarrow \mathbb {Z}_q\). Clearly, \(1\text{- }\textsf {Lin} = \textsf {DDH}\).

  • Bilinear Diffie–Hellman (\(\textsf {BDDH}\)) Assumption [4]. It is hard to distinguish \((\mathcal {PG}, [x], [y], [z], [xyz]_T)\) from \((\mathcal {PG}, [x], [y], [z], [w]_T)\), for \(\mathcal {PG}\leftarrow \textsf {PGen},\, x,y,z,w \leftarrow \mathbb {Z}_q\).

  • k -Multilinear Diffie–Hellman (\(k\text{- }\textsf {MLDDH}\)) Assumption [8]. Given k-linear group generator \(\textsf {MGen}_{k}\), it is hard to distinguish \((\mathcal {MG}_{k}, [x_1],\ldots ,[x_{k+1}], [x_1 \cdots x_{k+1}]_{T_k})\) from \((\mathcal {MG}_{k}, [x_1],\ldots ,[x_{k+1}], [z]_{T_{k}})\), for \(\mathcal {MG}_{k}\leftarrow \textsf {MGen}_{k},\, x_1, \ldots , x_{k+1}, z \leftarrow \mathbb {Z}_q\). Clearly, \(2\text{- }\textsf {MLDDH} = \textsf {BDDH}\).

  • k -party Diffie–Hellman (\(k\text{- }\textsf {PDDH}\)) Assumption. It is hard to distinguish \((\mathcal {G}, [x_1],[x_2],\ldots ,[x_k], [x_1 \cdots x_k])\) from \((\mathcal {G}, [x_1],[x_2],\ldots , [x_k], [z])\), for \(\mathcal {G}\leftarrow \textsf {Gen},\, x_1, \ldots , x_k,z \leftarrow \mathbb {Z}_q\). \(2\text{- }\textsf {PDDH} = \textsf {DDH}\) and \(3\text{- }\textsf {PDDH}\) was proposed in [7].

  • k -Exponent Diffie–Hellman (\(k\text{- }\textsf {EDDH}\)) Assumption [28, 47]. It is hard to distinguish \((\mathcal {G}, [x], [x^k])\) from \((\mathcal {G}, [x], [z])\), for \(\mathcal {G}\leftarrow \textsf {Gen},\, x,z \leftarrow \mathbb {Z}_q\).
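Two of these experiments are easy to render as real-vs-random samplers. The sketch below (toy Schnorr-type group, hypothetical tiny parameters) covers \(k\text{- }\textsf {Lin}\) and \(k\text{- }\textsf {EDDH}\):

```python
import random

p, q, g = 2039, 1019, 4          # toy Schnorr-type group, illustrative only

def rep(x):
    return pow(g, x % q, p)

def k_lin(k, real):
    """([x_1..x_k], [r_1 x_1 .. r_k x_k], [r_1+...+r_k]) or random last entry."""
    xs = [random.randrange(1, q) for _ in range(k)]
    rs = [random.randrange(q) for _ in range(k)]
    last = sum(rs) % q if real else random.randrange(q)
    return ([rep(x) for x in xs]
            + [rep(r * x) for r, x in zip(rs, xs)]
            + [rep(last)])

def k_eddh(k, real):
    """([x], [x^k]) or ([x], [z]), with x^k computed in Z_q."""
    x = random.randrange(q)
    return rep(x), rep(pow(x, k, q) if real else random.randrange(q))
```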

2.4 Key Encapsulation Mechanisms

A key encapsulation mechanism \(\textsf {KEM}=(\textsf {Gen},\textsf {Enc},\textsf {Dec})\) with key space \(\mathcal {K}(\lambda )\) consists of three polynomial-time algorithms (PTAs). Via \(( pk , sk )\leftarrow \textsf {Gen}(1^\lambda )\), the randomized key generation algorithm produces public/secret keys for security parameter \(\lambda \in \mathbb {N}\); via \((K,{c}) \leftarrow \textsf {Enc}( pk )\), the randomized encapsulation algorithm creates a uniformly distributed symmetric key \(K \in \mathcal {K}(\lambda )\) together with a ciphertext \({c}\); via \(K \leftarrow \textsf {Dec}( sk , {c})\), the possessor of secret key \( sk \) decrypts ciphertext \({c}\) to get back a key K which is an element in \(\mathcal {K}\) or a special rejection symbol \(\bot \). For consistency, we require that for all \(\lambda \in \mathbb {N}\), and all \((K,{c}) \leftarrow \textsf {Enc}( pk )\) we have \(\Pr [\textsf {Dec}( sk ,{c})=K]=1\), where the probability is taken over the choice of \(( pk , sk )\leftarrow \textsf {Gen}(1^\lambda )\), and the coins of all the algorithms in the expression above.

For IND-CPA security, we require that the distribution \(( pk ,(c,K))\) is computationally indistinguishable from \(( pk ,(c,K'))\), where \(( pk , sk )\leftarrow \textsf {Gen}(1^\lambda ),\, (K,{c}) \leftarrow \textsf {Enc}( pk )\) and \(K' \leftarrow \mathcal {K}(\lambda )\). An IND-CPA secure KEM implies an IND-CPA secure public-key encryption (PKE) scheme by combining it with a one-time secure symmetric cipher (DEM).

2.5 Hash Proof Systems

We recall the notion of hash proof systems as introduced by Cramer and Shoup [13].

Let \(\mathcal {C}, \mathcal {K}\) be sets and \(\mathcal {V}\subset \mathcal {C}\) a language. In the context of public-key encryption (and viewing a hash proof system as a key encapsulation mechanism (KEM) [14] with “special algebraic properties”), one may think of \(\mathcal {C}\) as the set of all ciphertexts, \(\mathcal {V}\subset \mathcal {C}\) as the set of all valid (consistent) ciphertexts, and \(\mathcal {K}\) as the set of all symmetric keys. Let \(\Lambda _ sk :\mathcal {C}\rightarrow \mathcal {K}\) be a hash function indexed with \( sk \in \mathcal {SK}\), where \(\mathcal {SK}\) is a set. A hash function \(\Lambda _{ sk }\) is projective if there exists a projection \(\mu : \mathcal {SK}\rightarrow \mathcal {PK}\) such that \(\mu ( sk ) \in \mathcal {PK}\) defines the action of \(\Lambda _{ sk }\) over the subset \(\mathcal {V}\). That is, for every \({c}\in \mathcal {V}\), the value \(K=\Lambda _{ sk }({c})\) is uniquely determined by \(\mu ( sk )\) and \({c}\). In contrast, nothing is guaranteed for \({c}\in \mathcal {C}\setminus \mathcal {V}\), and it may not be possible to compute \(\Lambda _{ sk }({c})\) from \(\mu ( sk )\) and \({c}\). The projective hash function is (perfectly) universal\(_1\) if for all \({c}\in \mathcal {C}\setminus \mathcal {V}\),

$$\begin{aligned} ( pk , \Lambda _{ sk }({c})) \equiv ( pk , K) \end{aligned}$$

where in the above \( pk = \mu ( sk )\) for \( sk \leftarrow \mathcal {SK}\) and \(K \leftarrow \mathcal {K}\), and the symbol \(\equiv \) stands for equality of the two distributions.

A hash proof system \(\textsf {HPS}=(\textsf {Param}, \textsf {Pub}, \textsf {Priv})\) consists of three algorithms where the randomized algorithm \(\textsf {Param}(1^\lambda )\) generates instances of \( params =(\mathcal {S},\mathcal {K},\mathcal {C}, \mathcal {V}, \mathcal {PK}, \mathcal {SK}, \Lambda _{(\cdot )}:\mathcal {C}\rightarrow \mathcal {K}, \mu : \mathcal {SK}\rightarrow \mathcal {PK})\), where \(\mathcal {S}\) may contain some additional structural parameters such as the group description. The deterministic public evaluation algorithm \(\textsf {Pub}\) inputs the projection key \( pk = \mu ( sk ),\, {c}\in \mathcal {V}\) and a witness w of the fact that \({c}\in \mathcal {V}\) and returns \(K = \Lambda _ sk ({c})\). The deterministic private evaluation algorithm inputs \( sk \in \mathcal {SK}\) and \({c}\in \mathcal {C}\) and returns \(\Lambda _ sk ({c})\), without knowing a witness. We further assume that efficient algorithms are given for sampling \( sk \in \mathcal {SK}\) and for sampling \({c}\in \mathcal {V}\) uniformly together with a witness w.

As a computational problem, we require that the subset membership problem is hard in \(\textsf {HPS}\), which means that the two elements \({c}\) and \({c}'\) are computationally indistinguishable, for uniform \({c}\in \mathcal {V}\) and uniform \({c}' \in \mathcal {C}\setminus \mathcal {V}\).
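The projective structure above instantiates cleanly for the language \(\mathcal {V}=\{[{{\mathbf {{A}}}}\vec {w}]\}\). A minimal sketch (toy group, hypothetical tiny parameters): the secret key is a random vector, the projection key is \([\vec{sk}^{\,T}{{\mathbf {{A}}}}]\), and public and private evaluation agree on every valid \({c}\).

```python
import random

p, q, g = 2039, 1019, 4          # toy group, illustrative only
ell, k = 3, 2
random.seed(7)
A = [[random.randrange(q) for _ in range(k)] for _ in range(ell)]
sk = [random.randrange(q) for _ in range(ell)]
# Projection key pk = [sk^T * A]: k group elements fixing Lambda_sk on V.
pk = [pow(g, sum(sk[i] * A[i][j] for i in range(ell)) % q, p)
      for j in range(k)]

def priv(c):
    """Lambda_sk(c) = [sk^T c]; works for any c, no witness needed."""
    K = 1
    for ci, xi in zip(c, sk):
        K = K * pow(ci, xi, p) % p
    return K

def pub(w):
    """Public evaluation from pk and a witness w for c = [A*w]."""
    K = 1
    for pj, wj in zip(pk, w):
        K = K * pow(pj, wj, p) % p
    return K

w = [random.randrange(q) for _ in range(k)]
c = [pow(g, sum(A[i][j] * w[j] for j in range(k)) % q, p) for i in range(ell)]
```

Agreement on \(\mathcal {V}\) is the identity \(\vec{sk}^{\,T}({{\mathbf {{A}}}}\vec {w})=(\vec{sk}^{\,T}{{\mathbf {{A}}}})\vec {w}\) in the exponent.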

2.6 Pseudo-Random Functions

A pseudo-random function \(\textsf {PRF}=(\textsf {Gen},\textsf {F})\) with respect to range \(\mathcal {R}=\mathcal {R}(\lambda )\) and message space \(\mathcal {M}=\mathcal {M}(\lambda )\) consists of two algorithms, where the randomized algorithm \(\textsf {Gen}(1^\lambda )\) generates a symmetric key K and the deterministic evaluation algorithm \(\textsf {F}_K(x)\) outputs a value in \(\mathcal {R}\), for all \(x \in \mathcal {M}\). For security we require that, for any adversary making polynomially many queries to an oracle \({\mathcal {O}}(\cdot )\), the oracle \({\mathcal {O}}(x) = \textsf {F}_K(x)\) for a fixed key \(K \leftarrow \textsf {Gen}(1^\lambda )\) is computationally indistinguishable from \({\mathcal {O}}(x)=f(x)\), where f is chosen uniformly from all functions mapping \(\mathcal {M}\) to \(\mathcal {R}\) (i.e., f(x) outputs uniform elements in \(\mathcal {R}\)).
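As a toy candidate to plug into this definition, the sketch below implements a matrix-product construction in the spirit of the generalized Naor–Reingold PRF from Sect. 1.2, with the simplifying (hypothetical) choice of square invertible \(k\times k\) transformation matrices so that the product composes; all parameters are illustrative only.

```python
import random

p, q, g = 2039, 1019, 4          # toy group, illustrative only
k, n = 2, 8
random.seed(0)

def rand_invertible():
    """Random invertible k x k matrix over Z_q (2 x 2 determinant test)."""
    while True:
        T = [[random.randrange(q) for _ in range(k)] for _ in range(k)]
        if (T[0][0] * T[1][1] - T[0][1] * T[1][0]) % q:
            return T

Ts = [rand_invertible() for _ in range(n)]      # secret transformation matrices
h = [random.randrange(1, q) for _ in range(k)]  # secret base vector

def matvec(T, v):
    return [sum(T[i][j] * v[j] for j in range(k)) % q for i in range(k)]

def prf(x):
    """PRF_K(x) = [ (prod_{i : x_i = 1} T_i) * h ], returned in the group."""
    v = h[:]
    for i in reversed(range(n)):   # apply matrices right-to-left onto h
        if x[i]:
            v = matvec(Ts[i], v)
    return [pow(g, c, p) for c in v]
```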

3 Matrix DH Assumptions

3.1 Definition

Definition 1

Let \(\ell ,k \in \mathbb {N}\) with \(\ell > k\). We call \(\mathcal {D}_{\ell ,k}\) a matrix distribution if it outputs (in poly time, with overwhelming probability) matrices in \(\mathbb {Z}_q^{\ell \times k}\) of full rank k. We define \(\mathcal {D}_k := \mathcal {D}_{k+1,k}\).

For simplicity, we will also assume that, wlog, the first k rows of \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\) form an invertible matrix.

We define the \(\mathcal {D}_{\ell ,k}\)-matrix problem as to distinguish the two distributions \(([{{\mathbf {{A}}}}], [{{\mathbf {{A}}}}\vec {w}])\) and \(([{{\mathbf {{A}}}}], [\vec {u}])\), where \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\), \(\vec {w} \leftarrow \mathbb {Z}_q^{k}\), and \(\vec {u} \leftarrow \mathbb {Z}_q^{\ell }\).
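To make the two distributions concrete, the following sketch samples them directly over the exponents, ignoring the group encoding \([\cdot ]\); it uses a uniform matrix as a stand-in for \(\mathcal {D}_{\ell ,k}\) (such a matrix has full rank with overwhelming probability) and an illustrative prime q.

```python
import random

q = 2**61 - 1  # an illustrative prime group order

def sample_uniform_matrix(ell, k):
    # Stand-in for D_{ell,k}: a uniform ell x k matrix over Z_q.
    return [[random.randrange(q) for _ in range(k)] for _ in range(ell)]

def matvec(A, w):
    # Matrix-vector product over Z_q.
    return [sum(aij * wj for aij, wj in zip(row, w)) % q for row in A]

def real_instance(ell, k):
    A = sample_uniform_matrix(ell, k)
    w = [random.randrange(q) for _ in range(k)]
    return A, matvec(A, w)                       # ([A], [Aw])

def random_instance(ell, k):
    A = sample_uniform_matrix(ell, k)
    u = [random.randrange(q) for _ in range(ell)]
    return A, u                                  # ([A], [u])
```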

Definition 2

(\(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman Assumption \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\)) Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution. We say that the \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman (\(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\)) Assumption holds relative to \(\textsf {Gen}\) if for all ppt adversaries \(\textsf {D}\),

$$\begin{aligned} \mathbf {Adv}_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}) = \Pr [\textsf {D}(\mathcal {G},[{{\mathbf {{A}}}}], [{{\mathbf {{A}}}} \vec {w}])=1]-\Pr [\textsf {D}(\mathcal {G},[{{\mathbf {{A}}}}], [\vec {u} ]) =1] = negl (\lambda ), \end{aligned}$$

where the probability is taken over \(\mathcal {G}=(\mathbb {G},q,\mathcal {P}) \leftarrow \textsf {Gen}(1^\lambda ),\, {{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}, \vec {w} \leftarrow \mathbb {Z}_q^k, \vec {u} \leftarrow \mathbb {Z}_q^{\ell }\) and the coin tosses of adversary \(\textsf {D}\).

Definition 3

Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution. Let \({{\mathbf {{A}}}}_0\) be the first k rows of \({{\mathbf {{A}}}}\) and \({{\mathbf {{A}}}}_1\) be the last \(\ell -k\) rows of \({{\mathbf {{A}}}}\). The matrix \({{\mathbf {{T}}}} \in \mathbb {Z}_q^{(\ell -k) \times k}\) defined as \({{\mathbf {{T}}}} = {{\mathbf {{A}}}}_1{{\mathbf {{A}}}}_0^{-1}\) is called the transformation matrix of \({{\mathbf {{A}}}}\).
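As an illustration, the transformation matrix \({{\mathbf {{T}}}} = {{\mathbf {{A}}}}_1{{\mathbf {{A}}}}_0^{-1}\) can be computed over \(\mathbb {Z}_q\) as follows; this sketch uses a small illustrative prime and assumes, as above, that \({{\mathbf {{A}}}}_0\) is invertible.

```python
q = 101  # a small illustrative prime

def mat_inv_mod(M, q):
    """Gauss-Jordan inversion of a square matrix over Z_q."""
    k = len(M)
    aug = [list(row) + [int(i == j) for j in range(k)]
           for i, row in enumerate(M)]
    for col in range(k):
        piv = next(r for r in range(col, k) if aug[r][col] % q != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], -1, q)
        aug[col] = [x * inv % q for x in aug[col]]
        for r in range(k):
            if r != col and aug[r][col] % q != 0:
                f = aug[r][col]
                aug[r] = [(x - f * y) % q for x, y in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

def matmul(X, Y, q):
    return [[sum(x * y for x, y in zip(row, col)) % q for col in zip(*Y)]
            for row in X]

def transformation_matrix(A, k, q):
    A0, A1 = A[:k], A[k:]                      # first k rows / last ell-k rows
    return matmul(A1, mat_inv_mod(A0, q), q)   # T = A1 * A0^{-1}

# For the 2-Lin matrix ((a1,0),(0,a2),(1,1)), T = (1/a1, 1/a2):
a1, a2 = 3, 5
T = transformation_matrix([[a1, 0], [0, a2], [1, 1]], 2, q)
assert T == [[pow(a1, -1, q), pow(a2, -1, q)]]
```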

We note that using the transformation matrix, one can alternatively define the advantage from Definition 2 as

$$\begin{aligned} \mathbf {Adv}_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D})= & {} \Pr \left[ \textsf {D}\left( \mathcal {G},\left[ {{{\mathbf {{A}}}}_0 \atop {{\mathbf {{T}}}} {{\mathbf {{A}}}}_0}\right] , \left[ {\vec {h}\atop {{\mathbf {{T}}}} \vec {h}}\right] \right) =1\right] \\&-\Pr \left[ \textsf {D}\left( \mathcal {G},\left[ {{{\mathbf {{A}}}}_0\atop {{\mathbf {{T}}}} {{\mathbf {{A}}}}_0}\right] , \left[ {\vec {h}\atop \vec {u}}\right] \right) =1 \right] , \end{aligned}$$

where the probability is taken over \(\mathcal {G}=(\mathbb {G},q,\mathcal {P}) \leftarrow \textsf {Gen}(1^\lambda ),\, {{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}, \vec {h} \leftarrow \mathbb {Z}_q^{k}, \vec {u} \leftarrow \mathbb {Z}_q^{\ell -k}\) and the coin tosses of adversary \(\textsf {D}\).

3.2 Basic Properties

We can generalize Definition 2 to the m-fold \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption as follows. Given \({{\mathbf {{W}}}} \leftarrow \mathbb {Z}_q^{k \times m}\) for some \(m \ge 1\), we consider the problem of distinguishing the distributions \(([{{\mathbf {{A}}}}], [{{\mathbf {{A}}}}{{\mathbf {{W}}}}])\) and \(([{{\mathbf {{A}}}}], [{{\mathbf {{U}}}}])\), where \({{\mathbf {{U}}}} \leftarrow \mathbb {Z}_q^{\ell \times m}\). This problem is equivalent to m independent instances of the problem (with the same \({{\mathbf {{A}}}}\) but different \(\vec {w}_i\)). The equivalence can be proved through a hybrid argument with a loss of m in the reduction, or, with a tight reduction (independent of m), via random self-reducibility.

Lemma 1

(Random self-reducibility) For any matrix distribution \(\mathcal {D}_{\ell ,k},\, \mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) is random self-reducible. Concretely, for any m,

$$\begin{aligned} \mathbf {Adv}^m_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}') \le {\left\{ \begin{array}{ll} m \cdot \mathbf {Adv}_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}) &{} 1 \le m \le \ell -k \\ (\ell -k) \cdot \mathbf {Adv}_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}) + \dfrac{1}{q-1} &{} m > \ell -k \end{array}\right. }, \end{aligned}$$

where

$$\begin{aligned} \mathbf {Adv}^m_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}') = \Pr \left[ \textsf {D}'\left( \mathcal {G},[{{\mathbf {{A}}}}], [{{\mathbf {{A}}}} {{\mathbf {{W}}}}]\right) =1\right] -\Pr \left[ \textsf {D}'\left( \mathcal {G},[{{\mathbf {{A}}}}], [{{\mathbf {{U}}}} ]\right) =1\right] , \end{aligned}$$

and the probability is taken over \(\mathcal {G}=(\mathbb {G},q,\mathcal {P}) \leftarrow \textsf {Gen}(1^\lambda ),\, {{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}, {{\mathbf {{W}}}} \leftarrow \mathbb {Z}_q^{k \times m}, {{\mathbf {{U}}}} \leftarrow \mathbb {Z}_q^{\ell \times m}\) and the coin tosses of adversary \(\textsf {D}'\).


The case \(1 \le m \le \ell -k\) comes from a natural hybrid argument, while the case \(m > \ell -k\) is obtained from the inequality

$$\begin{aligned} \mathbf {Adv}^m_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}') \le \mathbf {Adv}^{\ell -k}_{\mathcal {D}_{\ell ,k},\textsf {Gen}}(\textsf {D}) + \frac{1}{q-1}. \end{aligned}$$

To prove it, we show that there exists an efficient transformation of any instance \(([{{\mathbf {{A}}}}],[{{\mathbf {{Z}}}}])\) of the \((\ell -k)\)-fold \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) problem into another instance \(([{{\mathbf {{A}}}}],[{{\mathbf {{Z'}}}}])\) of the m-fold problem, with overwhelming probability.

In particular, we set \({{\mathbf {{Z'}}}} = {{\mathbf {{AR+ZC}}}}\), for random matrices \({{\mathbf {{R}}}}\leftarrow \mathbb {Z}_q^{k\times m}\) and \({{\mathbf {{C}}}}\leftarrow \mathbb {Z}_q^{(\ell -k)\times m}\). On the one hand, if \({{\mathbf {{Z}}}}={{\mathbf {{AW}}}}\), then \({{\mathbf {{Z'}}}} = {{\mathbf {{AW'}}}}\) for \({{\mathbf {{W'}}}}={{\mathbf {{R+WC}}}}\), which is uniformly distributed in \(\mathbb {Z}_q^{k\times m}\). On the other hand, if \({{\mathbf {{Z}}}}={{\mathbf {{U}}}}\) is uniform, then \({{\mathbf {{A|U}}}}\) is full rank with probability at least \(1-1/(q-1)\). In that case, \({{\mathbf {{Z'}}}}={{\mathbf {{AR+UC}}}}\) is uniformly distributed in \(\mathbb {Z}_q^{\ell \times m}\), which proves the above inequality. \(\square \)
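The core algebraic step of this argument can be checked mechanically. The following toy sketch, working directly over the exponents with an illustrative prime q, verifies that \({{\mathbf {{Z'}}}} = {{\mathbf {{AR}}}}+{{\mathbf {{ZC}}}}\) stays in the column span of \({{\mathbf {{A}}}}\) whenever \({{\mathbf {{Z}}}}={{\mathbf {{AW}}}}\):

```python
import random

q = 2**61 - 1  # illustrative prime

def rand_mat(r, c):
    return [[random.randrange(q) for _ in range(c)] for _ in range(r)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) % q for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[(x + y) % q for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

ell, k, m = 5, 2, 4
A = rand_mat(ell, k)
W = rand_mat(k, ell - k)
Z = matmul(A, W)                     # 'real' (ell-k)-fold instance: Z = A W

R = rand_mat(k, m)
C = rand_mat(ell - k, m)
Z_new = matadd(matmul(A, R), matmul(Z, C))   # Z' = A R + Z C

# If Z = A W, then Z' = A (R + W C) stays in the column span of A,
# and R + W C is uniform because R acts as a one-time pad:
assert Z_new == matmul(A, matadd(R, matmul(W, C)))
```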

We remark that, given \([{{\mathbf {{A}}}}], [\vec {z}]\), the above lemma can only be used to re-randomize the value \([\vec {z}]\). In order to re-randomize the matrix \([{{\mathbf {{A}}}}]\), one needs to be able to sample matrices \({{\mathbf {{L}}}}\) and \({{\mathbf {{R}}}}\) such that \({{\mathbf {{A}}}}'={{\mathbf {{LAR}}}}\) looks like an independent instance \({{\mathbf {{A}}}}' \leftarrow \mathcal {D}_{\ell ,k}\). In all of our example distributions, this is possible.

Due to its linearity properties, the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) assumption does not hold in \((k+1)\)-linear groups, assuming that k is constant, i.e., that it does not depend on the security parameter.

Lemma 2

Let \(\mathcal {D}_{\ell ,k}\) be any matrix distribution. Then the \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman Assumption is false in \((k+1)\)-linear groups.


In a \((k+1)\)-linear group, the implicit representation of any \(r\times r\) determinant for \(r \le k+1\) can be efficiently computed by using the r-linear map given by the Leibniz formula:

$$\begin{aligned} \det ({{\mathbf {{M}}}}) = \sum _{\sigma \in S_{r}} \mathrm {sgn}(\sigma ) \prod _{i=1}^{r} m_{i,\sigma _i} \end{aligned}$$

Using the \((k+1)\)-linear map, \([\det ({{\mathbf {{M}}}})]_{T_k}\) can be computed in the target group. Then, given \([{{\mathbf {{B}}}}]:=[{{\mathbf {{A}}}}||\vec {z}]\), consider the submatrix \({{\mathbf {{A}}}}_0\) formed by the first k rows of \({{\mathbf {{A}}}}\) and the vector \(\vec {z}_0\) formed by the first k elements of \(\vec {z}\). If \(\det ({{\mathbf {{A}}}}_0) \ne 0\), then define \({{\mathbf {{C}}}}\) as the first \(k+1\) rows of \({{\mathbf {{B}}}}\). If \(\vec {z}\) is random, then \(\det ({{\mathbf {{C}}}}) \ne 0\) with overwhelming probability, while if \(\vec {z} = {{\mathbf {{A}}}}\vec {w}\) for some vector \(\vec {w}\), then \(\det ({{\mathbf {{C}}}}) = 0\). Therefore, the \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman Assumption is false in this case.

Otherwise \(\det ({{\mathbf {{A}}}}_0) = 0\). Then \(\mathrm {rank}({{\mathbf {{A}}}}_0||\vec {z}_0) = \mathrm {rank}({{\mathbf {{A}}}}_0)\) when \(\vec {z} = {{\mathbf {{A}}}}\vec {w}\), while \(\mathrm {rank}({{\mathbf {{A}}}}_0||\vec {z}_0) = \mathrm {rank}({{\mathbf {{A}}}}_0)+1\) with overwhelming probability if \(\vec {z}\) is random. To compute the rank of both matrices, the following efficient randomized algorithm can be used. Take random invertible matrices \({{\mathbf {{L}}}},{{\mathbf {{R}}}}\in \mathbb {Z}_q^{k \times k}\). Then set \([{{\mathbf {{A}}}}'_0] = [{{\mathbf {{L}}}}{{\mathbf {{A}}}}_0{{\mathbf {{R}}}}]\) and \([\vec {z}'_0] = [{{\mathbf {{L}}}}\vec {z}_0]\), which is just a randomized instance of the same problem. Now if \(\mathrm {rank}({{\mathbf {{A}}}}'_0) = r\), then with overwhelming probability its principal \(r\times r\) minor is nonzero. Therefore, we can estimate \(r =\mathrm {rank}({{\mathbf {{A}}}}'_0)\) as the size of the largest nonzero principal minor (with negligible error probability). Finally, if the determinant of the submatrix of \({{\mathbf {{A}}}}'_0||\vec {z}'_0\) formed by the first \(r+1\) rows and the first r columns together with the last column is nonzero, we conclude that \(\vec {z}\) is random. \(\square \)
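In the actual attack the determinant is evaluated in the target group via the \((k+1)\)-linear map; the following toy sketch works directly over the exponents with an illustrative prime, just to exhibit the decision rule \(\det ({{\mathbf {{A}}}}\Vert \vec {z})=0\) for the case \(\ell =k+1\) and \(\det ({{\mathbf {{A}}}}_0)\ne 0\):

```python
import random

q = 2**61 - 1  # illustrative prime

def det_mod(M):
    """Determinant over Z_q via Gaussian elimination."""
    M = [list(row) for row in M]
    n, det = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % q != 0), None)
        if piv is None:
            return 0                      # a zero column: singular matrix
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det                    # row swap flips the sign
        det = det * M[c][c] % q
        inv = pow(M[c][c], -1, q)
        for r in range(c + 1, n):
            f = M[r][c] * inv % q
            M[r] = [(x - f * y) % q for x, y in zip(M[r], M[c])]
    return det % q

def looks_real(A, z):
    """Guess 'real' iff det(A || z) = 0 (here ell = k+1, A of full rank)."""
    return det_mod([row + [zi] for row, zi in zip(A, z)]) == 0

k = 3
A = [[random.randrange(q) for _ in range(k)] for _ in range(k + 1)]
w = [random.randrange(q) for _ in range(k)]
z = [sum(a * x for a, x in zip(row, w)) % q for row in A]
assert looks_real(A, z)                                       # z = A w
assert not looks_real(A, [random.randrange(q) for _ in range(k + 1)])
```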

3.3 Generic Hardness of Matrix DH

Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution as in Definition 1, which outputs matrices \({{\mathbf {{A}}}}\in \mathbb {Z}_q^{\ell \times k}\). We call \(\mathcal {D}_{\ell ,k}\) polynomial-induced if the distribution is defined by picking \(\vec {t}\in \mathbb {Z}_q^d\) uniformly at random and setting \(a_{i,j}:=\mathfrak {p}_{i,j}(\vec {t})\) for some polynomials \(\mathfrak {p}_{i,j}\in \mathbb {Z}_q[\vec {T}]\) whose degree does not depend on \(\lambda \). For example, for \(2\text{- }\textsf {Lin}\) from Sect. 1.1, we have \(a_{1,1}=t_1, a_{2,2}=t_2, a_{2,1}=a_{3,2}=1\) and \(a_{1,2}=a_{3,1}=0\) with \(t_1,t_2\) (called \(a_1,a_2\) in Sect. 1.1) uniform.

We set \(\mathfrak {f}_{i,j} =A_{i,j}-\mathfrak {p}_{i,j}\) and \(\mathfrak {g}_i=Z_i-\sum _j\mathfrak {p}_{i,j}W_j\) in the ring \(\mathcal {R}=\mathbb {Z}_q[A_{1,1},\ldots , A_{\ell ,k},\vec {Z},\vec {T},\vec {W}]\). Consider the ideal \(\mathcal {I}_0\) generated by all \(\mathfrak {f}_{i,j}\)’s and \(\mathfrak {g}_i\)’s and the ideal \(\mathcal {I}_1\) generated only by the \(\mathfrak {f}_{i,j}\)’s in \(\mathcal {R}\). Let \(\mathcal {J}_b:=\mathcal {I}_b\cap \mathbb {Z}_q[A_{1,1},\ldots ,A_{\ell ,k},\vec {Z}]\). Note that the equations \(\mathfrak {f}_{i,j}=0\) just encode the definition of the matrix entry \(a_{i,j}\) by \(\mathfrak {p}_{i,j}(\vec {t})\) and the equation \(\mathfrak {g}_{i}=0\) encodes the definition of \(z_i\) in the case \(\vec {z}={{\mathbf {{A}}}}\vec {\omega }\). So, informally, \(\mathcal {I}_0\) encodes the relations between the \(a_{i,j}\)’s, \(z_i\)’s, \(t_i\)’s and \(w_i\)’s in \(([{{\mathbf {{A}}}}],[\vec {z}]=[{{\mathbf {{A}}}}\vec {\omega }])\) and \(\mathcal {I}_1\) encodes the relations in \(([{{\mathbf {{A}}}}],[\vec {z}]=[\vec {u}])\). For \(b=0\) (\(\vec {z}={{\mathbf {{A}}}}\vec {\omega }\)) and \(b=1\) (\(\vec {z}\) uniform), \(\mathcal {J}_b\) encodes the relations visible by considering only the given data (i.e., the \(A_{i,j}\)’s and \(Z_j\)’s).

Theorem 3

Let \(\mathcal {D}_{\ell ,k}\) be a polynomial-induced matrix distribution with notation as above. Then the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) assumption holds in generic m-linear groups if and only if \((\mathcal {J}_0)_{\le m}=(\mathcal {J}_1)_{\le m}\), where the \(_{\le m}\) means restriction to total degree at most m.


Note that \(\mathcal {J}_{\le m}\) captures precisely what any adversary can generically compute with polynomially many group and m-linear pairing operations. Formally, this is proven by restating the Uber-Assumption Theorem of [2, 9] and its proof more algebraically. Cf. “Appendix 2” for details. \(\square \)

For a given matrix distribution, the condition \((\mathcal {J}_0)_{\le m}=(\mathcal {J}_1)_{\le m}\) can be verified by direct linear algebra or by elimination theory (using, e.g., Gröbner bases). For the special case \(\ell =k+1\), we can actually give a criterion that is simple to verify using determinants:

Theorem 4

Let \(\mathcal {D}_{k}\) be a polynomial-induced matrix distribution, which outputs matrices \(a_{i,j}=\mathfrak {p}_{i,j}(\vec {t})\) for uniform \(\vec {t}\in \mathbb {Z}_q^d\). Let \(\mathfrak {d}\) be the determinant of \((\mathfrak {p}_{i,j}(\vec {T})\Vert \vec {Z})\) as a polynomial in \(\vec {Z},\vec {T}\).

  1. If the matrices output by \(\mathcal {D}_{k}\) always have full rank (not just with overwhelming probability), even for \(t_i\) from the algebraic closure \(\overline{\mathbb {Z}_q}\), then \(\mathfrak {d}\) is irreducible over \(\overline{\mathbb {Z}_q}\).

  2. If all \(\mathfrak {p}_{i,j}\) have degree at most one, \(\mathfrak {d}\) is irreducible over \(\overline{\mathbb {Z}_q}\), and the total degree of \(\mathfrak {d}\) is \(k+1\), then the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) assumption holds in generic k-linear groups.

This theorem and generalizations for nonlinear \(\mathfrak {p}_{i,j}\) and non-irreducible \(\mathfrak {d}\) are proven in “Appendix 2” using tools from algebraic geometry.

3.4 Examples of \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\)

Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution and \({{\mathbf {{A}}}}\leftarrow \mathcal {D}_{\ell ,k}\). Looking ahead to our applications, \([{{\mathbf {{A}}}}]\) will correspond to the public key (or common reference string) and \([{{\mathbf {{A}}}}\vec {w}] \in \mathbb {G}^{\ell }\) will correspond to a ciphertext. We define the representation size \(\textsf {RE}_\mathbb {G}(\mathcal {D}_{\ell ,k})\) of a given polynomial-induced matrix distribution \(\mathcal {D}_{\ell ,k}\) with linear \(\mathfrak {p}_{i,j}\)’s as the minimal number of group elements it takes to represent \([{{\mathbf {{A}}}}]\) for any \({{\mathbf {{A}}}}\in \mathcal {D}_{\ell ,k}\). We will be interested in families of distributions \(\mathcal {D}_{\ell ,k}\) such that the Matrix Diffie–Hellman Assumption is hard in k-linear groups. By Lemma 2, we then obtain a family of strictly weaker assumptions. Our goal is to obtain such a family of assumptions with small (possibly minimal) representation.

Example 1

Let \(\mathcal {U}_{\ell ,k}\) be the uniform distribution over \(\mathbb {Z}_q^{\ell \times k}\).

The next lemma says that \(\mathcal {U}_{\ell ,k}\text{- }\textsf {MDDH}\) is the weakest possible assumption among all \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman Assumptions. However, \(\mathcal {U}_{\ell ,k}\) has poor representation, i.e., \(\textsf {RE}_\mathbb {G}(\mathcal {U}_{\ell ,k}) = \ell k\).

Lemma 5

Let \(\mathcal {D}_{\ell ,k}\) be any matrix distribution. Then \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\Rightarrow \mathcal {U}_{\ell ,k}\text{- }\textsf {MDDH}\).


Given an instance \(([{{\mathbf {{A}}}}],[{{\mathbf {{A}}}}\vec {w} ])\) of the \(\mathcal {D}_{\ell ,k}\)-matrix DH problem, if \({{\mathbf {{L}}}} \in \mathbb {Z}_q^{\ell \times \ell }\) and \({{\mathbf {{R}}}} \in \mathbb {Z}_q^{k \times k}\) are two random invertible matrices, it is possible to get a properly distributed instance of the \(\mathcal {U}_{\ell ,k}\)-matrix DH problem as \(([{{\mathbf {{L}}}}{{\mathbf {{A}}}}{{\mathbf {{R}}}}],[{{\mathbf {{L}}}}{{\mathbf {{A}}}}\vec {w}])\). Indeed, \({{\mathbf {{L}}}}{{\mathbf {{A}}}}{{\mathbf {{R}}}}\) has a distribution statistically close to the uniform distribution in \(\mathbb {Z}_q^{\ell \times k}\), while \({{\mathbf {{L}}}}{{\mathbf {{A}}}}\vec {w}={{\mathbf {{L}}}}{{\mathbf {{A}}}}{{\mathbf {{R}}}}\vec {v}\) for \(\vec {v}={{\mathbf {{R}}}}^{-1}\vec {w}\). Clearly, \(\vec {v}\) has the uniform distribution in \(\mathbb {Z}_q^{k}\). \(\square \)

Example 2

(k-Linear Assumption/\(k\text{- }\textsf {Lin}\)) We define the distribution \(\mathcal {L}_k\) as follows

$$\begin{aligned} {{\mathbf {{A}}}} = \left( \begin{array}{ccccc} a_1 &{} 0 &{} \ldots &{} 0 &{} 0 \\ 0 &{} a_2 &{} \ldots &{} 0 &{} 0 \\ 0 &{} 0 &{} &{} \ddots &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \ldots &{} 0 &{} a_k \\ 1 &{} 1 &{} \ldots &{} 1 &{} 1 \end{array}\right) \in \mathbb {Z}_q^{(k+1)\times k}, \end{aligned}$$

where \(a_i \leftarrow \mathbb {Z}_q^*\). The transformation matrix \({{\mathbf {{T}}}} \in \mathbb {Z}_q^{1\times k}\) is given as \({{\mathbf {{T}}}} = (\frac{1}{a_1},\ldots , \frac{1}{a_k})\). Note that the distribution \(({{\mathbf {{A}}}}, {{\mathbf {{A}}}}\vec {w})\) can be compactly written as \((a_1,\ldots , a_k, a_1 w_1, \ldots , a_k w_k, w_1+\cdots +w_k) = (a_1, \ldots , a_k, b_1, \ldots , b_k, \frac{b_1}{a_1}+\cdots +\frac{b_k}{a_k})\) with \(a_i \leftarrow \mathbb {Z}_q^*\), \(b_i, w_i \leftarrow \mathbb {Z}_q\). Hence the \(\mathcal {L}_{k}\)-Matrix Diffie–Hellman Assumption is an equivalent description of the k-linear Assumption [3, 23, 45] with \(\textsf {RE}_\mathbb {G}(\mathcal {L}_k) = k\).
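The identity behind this compact form, \(w_1+\cdots +w_k = \frac{b_1}{a_1}+\cdots +\frac{b_k}{a_k}\), can be checked numerically; this is a toy sketch over \(\mathbb {Z}_q\) with an illustrative prime:

```python
import random

q = 2**61 - 1  # illustrative prime
k = 3
a = [random.randrange(1, q) for _ in range(k)]   # a_i in Z_q^*
w = [random.randrange(q) for _ in range(k)]
b = [ai * wi % q for ai, wi in zip(a, w)]        # b_i = a_i w_i

# The last entry of A w is w_1 + ... + w_k, which equals sum_i b_i / a_i:
lhs = sum(w) % q
rhs = sum(bi * pow(ai, -1, q) for ai, bi in zip(a, b)) % q
assert lhs == rhs
```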

It was shown in [45] that \(k\text{- }\textsf {Lin}\) holds in the generic k-linear group model, and hence \(k\text{- }\textsf {Lin}\) forms a family of increasingly strictly weaker assumptions. Furthermore, in [7] it was shown that \(2\text{- }\textsf {Lin} \Rightarrow \textsf {BDDH}\).

Example 3

(k-Cascade Assumption/\(k\text{- }\textsf {Casc}\)) We define the distribution \(\mathcal {C}_{k}\) as follows

$$\begin{aligned} {{\mathbf {{A}}}} = \left( \begin{array}{ccccc} a_1 &{} 0 &{} \ldots &{} 0 &{} 0 \\ 1 &{} a_2 &{} \ldots &{} 0 &{} 0 \\ 0 &{} 1 &{} \ddots &{} &{} 0 \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} \ldots &{} 1 &{} a_k \\ 0 &{} 0 &{} \ldots &{} 0 &{} 1 \end{array}\right) , \end{aligned}$$

where \(a_i \leftarrow \mathbb {Z}_q^*\). The transformation matrix \({{\mathbf {{T}}}} \in \mathbb {Z}_q^{1\times k}\) is given as \({{\mathbf {{T}}}} = (\pm \frac{1}{a_1 \cdots a_k}, \mp \frac{1}{a_2 \cdots a_k}, \ldots , \frac{1}{a_k})\). Note that \(({{\mathbf {{A}}}}, {{\mathbf {{A}}}}\vec {w})\) can be compactly written as \((a_1, \ldots , a_k, a_1 w_1, w_1+a_2w_2, \ldots ,w_{k-1}+a_k w_k, w_k) = (a_1, \ldots , a_k, b_1, \ldots , b_k,\frac{b_k}{a_k}-\frac{b_{k-1}}{a_{k-1} a_k} + \frac{b_{k-2}}{a_{k-2} a_{k-1} a_k} - \cdots \pm \frac{b_1}{a_1 \cdots a_k})\). We have \(\textsf {RE}_\mathbb {G}(\mathcal {C}_{k}) = k\).

Matrix \({{\mathbf {{A}}}}\) bears resemblance to a cascade, which explains the assumption’s name. Indeed, in order to compute the lower right entry \(w_k\) of the matrix \(({{\mathbf {{A}}}}, {{\mathbf {{A}}}}\vec {w})\) from the remaining entries, one has to “descend” the cascade and compute all the other entries \(w_i\) (\(1 \le i \le k-1\)) one after the other.
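This descent is easy to make explicit: writing \(b_1 = a_1 w_1\) and \(b_i = w_{i-1} + a_i w_i\) for the first k entries of \({{\mathbf {{A}}}}\vec {w}\), one recovers \(w_1 = b_1/a_1\) and then \(w_i = (b_i - w_{i-1})/a_i\) one after the other. A toy sketch over \(\mathbb {Z}_q\) with an illustrative prime:

```python
import random

q = 2**61 - 1  # illustrative prime
k = 4
a = [random.randrange(1, q) for _ in range(k)]   # a_i in Z_q^*
w = [random.randrange(q) for _ in range(k)]

# b_1 = a_1 w_1,  b_i = w_{i-1} + a_i w_i  (the first k entries of A w)
b = [a[0] * w[0] % q] + [(w[i - 1] + a[i] * w[i]) % q for i in range(1, k)]

# "Descending the cascade": recover w_1, ..., w_k one after the other
w_rec = [b[0] * pow(a[0], -1, q) % q]
for i in range(1, k):
    w_rec.append((b[i] - w_rec[i - 1]) * pow(a[i], -1, q) % q)

assert w_rec == w   # in particular, the last entry w_k is determined by the rest
```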

A more compact version of \(\mathcal {C}_{k}\) is obtained by setting all \(a_i:=a\).

Example 4

(Symmetric k-Cascade Assumption) We define the distribution \(\mathcal {SC}_{k}\) as \(\mathcal {C}_{k}\) but now with \(a_i=a\), where \(a \leftarrow \mathbb {Z}_q^*\). Then \(({{\mathbf {{A}}}}, {{\mathbf {{A}}}}\vec {w})\) can be compactly written as \((a, a w_1, w_1+aw_2, \ldots , w_{k-1}+a w_k, w_k) = (a, b_1, \ldots , b_k, \frac{b_k}{a}-\frac{b_{k-1}}{a^2} + \frac{b_{k-2}}{a^3} - \cdots \pm \frac{b_1}{a^k})\). We have \(\textsf {RE}_\mathbb {G}(\mathcal {SC}_{k}) = 1\).

Observe that the same trick cannot be applied to the k-Linear assumption \(k\text{- }\textsf {Lin}\), as the resulting Symmetric k-Linear assumption does not hold in k-linear groups. However, if we set \(a_i := a+i-1\), we obtain another matrix distribution with compact representation.

Example 5

(Incremental k-Linear Assumption) We define the distribution \(\mathcal {IL}_{k}\) as \(\mathcal {L}_{k}\) with \(a_i=a+i-1\), for \(a \leftarrow \mathbb {Z}_q^*\). The transformation matrix \({{\mathbf {{T}}}} \in \mathbb {Z}_q^{1\times k}\) is given as \({{\mathbf {{T}}}} = (\frac{1}{a}, \ldots , \frac{1}{a+k-1})\). \(({{\mathbf {{A}}}}, {{\mathbf {{A}}}}\vec {w})\) can be compactly written as \((a, a w_1,(a+1)w_2, \ldots , (a+k-1) w_k, w_1+\ldots +w_k) = (a, b_1, \ldots , b_k, \frac{b_1}{a}+\frac{b_2}{a+1} + \cdots + \frac{b_k}{a+k-1})\). We also have \(\textsf {RE}_\mathbb {G}(\mathcal {IL}_{k}) = 1\).

The last three examples require some work to prove their generic hardness.

Theorem 6

\(k\text{- }\textsf {Casc}\), \(k\text{- }\textsf {SCasc}\) and \(k\text{- }\textsf {ILin}\) are hard in generic k-linear groups.


We need to consider the (statistically close) variants with \(a_i\in \mathbb {Z}_q\) rather than \(\mathbb {Z}_q^*\). The determinant polynomial for \(\mathcal {C}_{k}\) is \(\mathfrak {d}(a_1,\ldots ,a_k,z_1,\ldots ,z_{k+1}) = a_1\cdots a_k z_{k+1}-a_1\cdots a_{k-1}z_k + \cdots + (-1)^{k}z_1\), which has total degree \(k+1\). As all matrices in \(\mathcal {C}_{k}\) have rank k (the determinant of the last k rows of \({{\mathbf {{A}}}}\) is always 1), by Theorem 4 we conclude that \(k\text{- }\textsf {Casc}\) is hard in k-linear groups. As \(\mathcal {SC}_{k}\) is a particular case of \(\mathcal {C}_{k}\), the determinant polynomial for \(\mathcal {SC}_{k}\) is \(\mathfrak {d}(a,z_1,\ldots ,z_{k+1}) = a^k z_{k+1}-a^{k-1}z_k + \cdots + (-1)^{k}z_1\). As before, by Theorem 4, \(k\text{- }\textsf {SCasc}\) is hard in k-linear groups. Finally, in the case of \(k\text{- }\textsf {ILin}\), we show its equivalence to \(k\text{- }\textsf {SCasc}\) in the next section, so it is also generically hard in k-linear groups. \(\square \)
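The stated determinant polynomial for \(\mathcal {C}_{k}\) can be sanity-checked numerically. The following sketch does so for \(k=2\), comparing the cofactor expansion of \(({{\mathbf {{A}}}}\Vert \vec {z})\) with \(a_1 a_2 z_3 - a_1 z_2 + z_1\) over an illustrative prime:

```python
import random

q = 2**61 - 1  # illustrative prime

def det3(M):
    """Cofactor expansion of a 3x3 determinant over Z_q."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % q

# C_2 matrix with the column z appended: ((a1,0,z1),(1,a2,z2),(0,1,z3))
for _ in range(100):
    a1, a2 = random.randrange(q), random.randrange(q)
    z = [random.randrange(q) for _ in range(3)]
    M = [[a1, 0, z[0]], [1, a2, z[1]], [0, 1, z[2]]]
    # claimed determinant polynomial: a1 a2 z3 - a1 z2 + z1
    assert det3(M) == (a1 * a2 * z[2] - a1 * z[1] + z[0]) % q
```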

The previous examples can be related to some known assumptions from Sect. 2.3. Figure 1 depicts the relations, which are also stated in the next theorem, except for the equivalence of \(k\text{- }\textsf {ILin}\) and \(k\text{- }\textsf {SCasc}\), which is addressed in the next section. We stress that this equivalence together with Theorem 7 implies that \(k\text{- }\textsf {SCasc}\) is a stronger assumption than \(k\text{- }\textsf {Lin}\), which was previously unknown [16].

Theorem 7

For any \(k \ge 2\), the following holds:

$$\begin{aligned}&(k+1)\text{- }\textsf {PDDH} \Rightarrow k\text{- }\textsf {Casc}\;; \\&\quad (k+1)\text{- }\textsf {EDDH} \Rightarrow k\text{- }\textsf {SCasc} \Rightarrow k\text{- }\textsf {Casc}\;;\qquad k\text{- }\textsf {ILin} \Rightarrow k\text{- }\textsf {Lin}\;; \\&\quad k\text{- }\textsf {Casc} \Rightarrow (k+1)\text{- }\textsf {Casc}\;; \qquad k\text{- }\textsf {SCasc} \Rightarrow (k+1)\text{- }\textsf {SCasc} \end{aligned}$$

Further, in k-linear groups, \(k\text{- }\textsf {Casc} \Rightarrow k\text{- }\textsf {MLDDH}\).


The proof of all implications can be found in “Appendix 1”. \(\square \)

4 Uniqueness of One-Parameter Matrix DH Problems

Some seemingly different \(\textsf {MDDH}\) assumptions can be tightly equivalent, or isomorphic, meaning that there is a very tight generic reduction between the corresponding problems. These reductions are mainly based on the algebraic nature of the \(\textsf {MDDH}\) problems.

The simplest and most compact polynomial-induced matrix distributions \(\mathcal {D}_{k}\) are the one-parameter linear ones, where \(\mathcal {D}_{k}\) outputs matrices \({{\mathbf {{A}}}}(t) = {{\mathbf {{A}}}}_0+{{\mathbf {{A}}}}_1 t\) for a uniformly distributed \(t\in \mathbb {Z}_q\), and fixed \({{\mathbf {{A}}}}_0,{{\mathbf {{A}}}}_1\in \mathbb {Z}_q^{(k+1) \times k}\). The two examples of them given in [16] are \(\mathcal {SC}_{k}\) and \(\mathcal {IL}_{k}\).

A natural question is whether such a tight algebraic reduction exists between \(\mathcal {SC}_{k}\) and \(\mathcal {IL}_{k}\). In this section, we prove a much stronger result, which states that there exists essentially a single one-parameter linear \(\textsf {MDDH}\) problem. Indeed, we show that all one-parameter linear \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problems are isomorphic to \(\mathcal {SC}_{k}\). This result relies heavily on the one-parameter nature of the problems considered, and it does not seem to generalize to broader families of \(\textsf {MDDH}\) problems (e.g., to relating \(\mathcal {C}_{k}\) and \(\mathcal {L}_{k}\), or to the case \(\ell > k+1\)).

4.1 Hardness

Theorem 4 gives an easy-to-check sufficient condition ensuring the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) assumption holds in generic k-linear groups for certain matrix distributions \(\mathcal {D}_{k}\), including the one-parameter linear ones. For this particular family, the sufficient condition is that all matrices \({{\mathbf {{A}}}}(t) = {{\mathbf {{A}}}}_0+{{\mathbf {{A}}}}_1 t\) have full rank for all \(t\in \overline{\mathbb {Z}_q}\), the algebraic closure of the finite field \(\mathbb {Z}_q\), and the determinant \(\mathfrak {d}\) of \(({{\mathbf {{A}}}}(T)\Vert \vec {Z})\) as a polynomial in \(\vec {Z},T\) has total degree \(k+1\). We first show that indeed it is also a necessary condition for the hardness of the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problem.

Theorem 8

Let \(\mathcal {D}_{k}\) be a one-parameter linear matrix distribution, producing matrices \({{\mathbf {{A}}}}(t) = {{\mathbf {{A}}}}_0+{{\mathbf {{A}}}}_1 t\), such that the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) assumption is generically hard in k-linear groups. Then, the determinant \(\mathfrak {d}\) of \(({{\mathbf {{A}}}}(T)\Vert \vec {Z})\) is an irreducible polynomial in \(\overline{\mathbb {Z}_q}[\vec {Z},T]\) with total degree \(k+1\), and the rank of \({{\mathbf {{A}}}}_0+{{\mathbf {{A}}}}_1 t\) is always k, for all \(t\in \overline{\mathbb {Z}_q}\).


The proof consists of finding a nonzero polynomial \(\mathfrak {h}\in \mathbb {Z}_q[\vec {Z},T]\) of degree at most k such that \(\mathfrak {h}({{\mathbf {{A}}}}(t)\vec {w},t)=0\) for all \(t\in \mathbb {Z}_q\) and \(\vec {w}\in \mathbb {Z}_q^k\), and then using it to solve the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problem. If the total degree of \(\mathfrak {d}\) is at most k, then we can simply let \(\mathfrak {h}=\mathfrak {d}\). Otherwise, assume that the degree of \(\mathfrak {d}\) is \(k+1\). If \(\mathfrak {d}\) is reducible, from Lemma 21 it follows that \(\mathfrak {d}\) can be split as \(\mathfrak {d}=\mathfrak {c}\mathfrak {d}_0\), where \(\mathfrak {c}\in \mathbb {Z}_q[T]\) and \(\mathfrak {d}_0\in \mathbb {Z}_q[\vec {Z},T]\) are nonconstant. Clearly, if \(\mathfrak {c}(t)\ne 0\), then \(\mathfrak {d}_0({{\mathbf {{A}}}}(t)\vec {w},t)=0\) for all \(\vec {w}\in \mathbb {Z}_q^k\), which means that, as a polynomial in \(\mathbb {Z}_q[\vec {W},T]\), \(\mathfrak {d}_0(T,{{\mathbf {{A}}}}(T)\vec {W})\) has too many roots, so it is the zero polynomial. Therefore, we are done by taking \(\mathfrak {h}=\mathfrak {d}_0\).

Finally, observe that \(\mathfrak {d}(\vec {z},t)=\sum _{i=0}^{k+1}\mathfrak {c}_i(t)z_i\), where \(\vec {z}=(z_1,\ldots ,z_{k+1})\) and the \(\mathfrak {c}_i(t)\) are the (signed) k-minors of \({{\mathbf {{A}}}}(t)\). Therefore, if \({{\mathbf {{A}}}}(t_0)\) has rank less than k for some \(t_0\in \overline{\mathbb {Z}_q}\), then \(\mathfrak {d}(\vec {z},t_0)=0\) for all \(\vec {z}\in \mathbb {Z}_q^{k+1}\), which means that \(\mathfrak {c}_i(t_0) = 0\) for all i. As a consequence, \(T-t_0\) divides all \(\mathfrak {c}_i\), and hence it divides \(\mathfrak {d}\), that is, \(\mathfrak {d}\) is reducible.

Once we have found the polynomial \(\mathfrak {h}\) of degree at most k, an efficient distinguisher can use the k-linear map to evaluate \([\mathfrak {h}(\vec {z},t)]_{T_k}\) from an instance \(([{{\mathbf {{A}}}}(t)],[\vec {z}])\) of the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problem, where [t] can be computed easily from \([{{\mathbf {{A}}}}(t)]\) because \({{\mathbf {{A}}}}_0\) and \({{\mathbf {{A}}}}_1\) are known. If \(\vec {z} = {{\mathbf {{A}}}}(t)\vec {w}\), then \(\mathfrak {h}(\vec {z},t)=0\), while for a randomly chosen \(\vec {z}\), \(\mathfrak {h}(\vec {z},t)\ne 0\) with overwhelming probability. Thus the distinguisher succeeds with overwhelming probability. \(\square \)

4.2 Isomorphic Problems

From now on, we consider in this section a one-parameter linear matrix distribution \(\mathcal {D}_{k}\) such that the \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) assumption holds in generic k-linear groups. By Theorem 8, this in particular means that the polynomial \(\mathfrak {d}\) is irreducible in \(\overline{\mathbb {Z}_q}[\vec {Z},T]\) with total degree \(k+1\) and that the rank of \({{\mathbf {{A}}}}_0+{{\mathbf {{A}}}}_1 t\) is always k, for all \(t\in \overline{\mathbb {Z}_q}\). Clearly, the rank of \({{\mathbf {{A}}}}_0\) is k, but \({{\mathbf {{A}}}}_1\) also has rank k. Indeed, it is easy to see that the coefficients of the monomials of degree \(k+1\) in \(\mathfrak {d}\) are exactly the (signed) k-minors of \({{\mathbf {{A}}}}_1\), so they cannot all be zero.

There are some natural families of maps that generically transform \(\textsf {MDDH}\) problems into \(\textsf {MDDH}\) problems. As mentioned in previous sections, some examples of them are left and right multiplication by an invertible constant matrix. More precisely, let \({{\mathbf {{L}}}}\in GL_{k+1}(\mathbb {Z}_q)\), the set of all invertible matrices in \(\mathbb {Z}_q^{(k+1)\times (k+1)}\), and \({{\mathbf {{R}}}}\in GL_{k}(\mathbb {Z}_q)\). Given some matrix distribution \(\mathcal {D}_{k}\), we write \(\mathcal {D}_{k}' = {{\mathbf {{L}}}}\mathcal {D}_{k}{{\mathbf {{R}}}}\) to denote the matrix distribution resulting from sampling a matrix from \(\mathcal {D}_{k}\) and multiplying on the left and on the right by \({{\mathbf {{L}}}}\) and \({{\mathbf {{R}}}}\).

This mapping between matrix distributions can be used to transform any distinguisher for \(\mathcal {D}_{k}'\)-\(\textsf {MDDH}\) into a distinguisher for \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) with the same advantage and essentially the same running time. Indeed, a ‘real’ instance \(([{{\mathbf {{A}}}}],[{{\mathbf {{A}}}}\vec {w}])\) of a \(\textsf {MDDH}\) problem can be transformed into a ‘real’ instance of the other \(\textsf {MDDH}\) problem \(([{{\mathbf {{A}}}}'],[{{\mathbf {{A}}}}'\vec {w}']) = ({{\mathbf {{L}}}}[{{\mathbf {{A}}}}]{{\mathbf {{R}}}},{{\mathbf {{L}}}}[{{\mathbf {{A}}}}\vec {w}])\) with the right distribution, because \({{\mathbf {{L}}}}{{\mathbf {{A}}}}\vec {w} = {{\mathbf {{A}}}}'\vec {w}'\), where \(\vec {w}' = {{\mathbf {{R}}}}^{-1}\vec {w}\) is uniformly distributed. Similarly, a ‘random’ instance \(([{{\mathbf {{A}}}}],[\vec {z}])\) is transformed into another one \(([{{\mathbf {{A}}}}'],[\vec {z}']) = ({{\mathbf {{L}}}}[{{\mathbf {{A}}}}]{{\mathbf {{R}}}},{{\mathbf {{L}}}}[\vec {z}])\). From an algebraic point of view, we can see the above transformation as changing the bases used to represent certain linear maps as matrices.

In the particular case of one-parameter linear matrix distributions, one can write \({{\mathbf {{A}}}}'(t) = {{\mathbf {{L}}}}{{\mathbf {{A}}}}(t){{\mathbf {{R}}}} = {{\mathbf {{L}}}}{{\mathbf {{A}}}}_0{{\mathbf {{R}}}} + {{\mathbf {{L}}}}{{\mathbf {{A}}}}_1{{\mathbf {{R}}}}t\), which simply means defining \({{\mathbf {{A}}}}'_0 = {{\mathbf {{L}}}}{{\mathbf {{A}}}}_0{{\mathbf {{R}}}}\) and \({{\mathbf {{A}}}}'_1 = {{\mathbf {{L}}}}{{\mathbf {{A}}}}_1{{\mathbf {{R}}}}\). Consider the injective linear maps \(f_0,f_1 : \mathbb {Z}_q^k\rightarrow \mathbb {Z}_q^{k+1}\) defined by \(f_0(\vec {w})={{\mathbf {{A_0}}}}\vec {w}\) and \(f_1(\vec {w})={{\mathbf {{A_1}}}}\vec {w}\). We need the following technical lemma.

Lemma 9

If \(\mathcal {D}_{k}\) is generically hard in k-linear groups, no nontrivial subspace \(U\subset \mathbb {Z}_q^k\) exists such that \(f_0(U) = f_1(U)\).


Assume for contradiction that a nontrivial subspace U exists such that \(f_0(U) = f_1(U)\), and consider the natural automorphism \(\phi :U\rightarrow U\) defined as \(\phi =f_1^{-1} \circ f_0\). It is well defined due to the injectivity of \(f_0\) and \(f_1\). Then, there exists an eigenvector \(\vec {v}\ne \vec {0}\) of \(\phi \) for some eigenvalue \(\lambda \in \overline{\mathbb {Z}_q}\). The equation \(\phi (\vec {v}) = f_1^{-1} \circ f_0(\vec {v}) = \lambda \vec {v}\) implies \((f_0-\lambda f_1)(\vec {v}) = \vec {0}\). Therefore, \(f_0-\lambda f_1\) is no longer injective and \({{\mathbf {{A}}}}(-\lambda ) = {{\mathbf {{A}}}}_0-\lambda {{\mathbf {{A}}}}_1\) has rank strictly less than k, which contradicts Theorem 8. \(\square \)

Applying the lemma iteratively, one can build special bases for the spaces \(\mathbb {Z}_q^k\) and \(\mathbb {Z}_q^{k+1}\) and obtain canonical forms simultaneously for \({{\mathbf {{A}}}}_0\) and \({{\mathbf {{A}}}}_1\), as described in the proof of the following theorem, which has some resemblance to the construction of Jordan normal forms of endomorphisms. The proof is rather technical, and it can be found in “Appendix 3”.

Theorem 10

Let \(f_0,f_1:\mathbb {Z}_q^k\rightarrow \mathbb {Z}_q^{k+1}\) be two injective linear maps such that \(f_0(U) \ne f_1(U)\) for any nontrivial subspace \(U\subset \mathbb {Z}_q^k\). There exist bases of \(\mathbb {Z}_q^k\) and \(\mathbb {Z}_q^{k+1}\) such that \(f_0\) and \(f_1\) are represented in those bases respectively by the matrices

$$\begin{aligned} {{\mathbf {{J}}}}_0 = \left( \begin{array}{ccc} 0&{}\cdots &{}0\\ 1&{}\ddots &{}\vdots \\ \vdots &{}\ddots &{}0 \\ 0&{}\cdots &{}1 \end{array}\right) \qquad {{\mathbf {{J}}}}_1 = \left( \begin{array}{lll} 1&{}\cdots &{}0\\ 0&{}\ddots &{}\vdots \\ \vdots &{}\ddots &{}1 \\ 0&{}\cdots &{}0 \end{array}\right) \end{aligned}$$

Corollary 1

All one-parameter linear hard \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problems are isomorphic to the \(\mathcal {SC}_{k}\)-\(\textsf {MDDH}\) problem, i.e., there exist invertible matrices \({{\mathbf {{L}}}}\in GL_{k+1}(\mathbb {Z}_q)\) and \({{\mathbf {{R}}}}\in GL_{k}(\mathbb {Z}_q)\) such that \(\mathcal {D}_{k} = {{\mathbf {{L}}}}\mathcal {SC}_{k}{{\mathbf {{R}}}}\).


Combining the previous results, the maps \(f_0\), \(f_1\) defined from the hard \(\mathcal {D}_{k}\)-\(\textsf {MDDH}\) problem are injective and they can be represented in the bases given in Theorem 10. In terms of matrices, this means that there exist \({{\mathbf {{L}}}}\in GL_{k+1}(\mathbb {Z}_q)\) and \({{\mathbf {{R}}}}\in GL_{k}(\mathbb {Z}_q)\) such that \({{\mathbf {{A}}}}_0 = {{\mathbf {{L}}}}{{\mathbf {{J}}}}_0{{\mathbf {{R}}}}\) and \({{\mathbf {{A}}}}_1 = {{\mathbf {{L}}}}{{\mathbf {{J}}}}_1{{\mathbf {{R}}}}\), that is,

$$\begin{aligned} {{\mathbf {{A}}}}(t) = {{\mathbf {{L}}}} \left( \begin{array}{lll} t&{}\cdots &{}0 \\ 1&{}\ddots &{}\vdots \\ \vdots &{}\ddots &{}t \\ 0&{}\cdots &{}1 \end{array}\right) {{\mathbf {{R}}}} \end{aligned}$$

which concludes the proof. \(\square \)

As an example, we show an explicit isomorphism between \(\mathcal {SC}_{2}\)-\(\textsf {MDDH}\) and \(\mathcal {IL}_{2}\)-\(\textsf {MDDH}\) problems.

$$\begin{aligned} \left( \begin{array}{ll} t &{} \quad 0 \\ 0&{} \quad t+1 \\ 1&{} \quad 1 \\ \end{array}\right) = \left( \begin{array}{lll} -1&{} \quad 0&{} \quad 0 \\ 1&{} \quad 1&{} \quad 1 \\ 0&{} \quad 0&{} \quad 1 \\ \end{array} \right) \left( \begin{array}{ll} t&{} \quad 0 \\ 1&{} \quad t \\ 0&{} \quad 1 \\ \end{array}\right) \left( \begin{array}{ll} -1&{} \quad 0 \\ 1&{} \quad 1 \\ \end{array}\right) \end{aligned}$$
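The identity above is easy to verify mechanically. The following sketch checks it for every \(t\in \mathbb {Z}_q\) over an illustrative toy modulus (the identity holds over any \(\mathbb {Z}_q\)):

```python
# Verify L * A_SC2(t) * R == A_IL2(t) for all t (toy modulus).
q = 101

def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

L = [[q - 1, 0, 0], [1, 1, 1], [0, 0, 1]]   # -1 == q - 1 (mod q)
R = [[q - 1, 0], [1, 1]]

def sc2(t):   # the SC_2 matrix A(t)
    return [[t % q, 0], [1, t % q], [0, 1]]

def il2(t):   # the IL_2 matrix
    return [[t % q, 0], [0, (t + 1) % q], [1, 1]]

assert all(mat_mul(mat_mul(L, sc2(t)), R) == il2(t) for t in range(q))
```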

We stress that ‘isomorphic’ does not mean ‘identical,’ and it is still useful to have at hand different representations of essentially the same computational problem, as they may help in finding applications.

5 Basic Applications

5.1 Public-Key Encryption

Let \(\textsf {Gen}\) be a group generating algorithm and \(\mathcal {D}_{\ell ,k}\) be a matrix distribution that outputs a matrix over \(\mathbb {Z}_q^{\ell \times k}\) such that the first k rows form an invertible matrix with overwhelming probability. We define the following key encapsulation mechanism \(\textsf {KEM}_{\textsf {Gen},\mathcal {D}_{\ell ,k}}=(\textsf {Gen},\textsf {Enc},\textsf {Dec})\) with key space \(\mathcal {K}=\mathbb {G}^{\ell -k}\).

  • \(\textsf {Gen}(1^\lambda )\) runs \(\mathcal {G}\leftarrow \textsf {Gen}(1^\lambda )\) and \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\). Let \({{\mathbf {{A}}}}_0\) be the first k rows of \({{\mathbf {{A}}}}\) and \({{\mathbf {{A}}}}_1\) be the last \(\ell -k\) rows of \({{\mathbf {{A}}}}\). Define \({{\mathbf {{T}}}} \in \mathbb {Z}_q^{(\ell -k) \times k}\) as the transformation matrix \({{\mathbf {{T}}}} = {{\mathbf {{A}}}}_1{{\mathbf {{A}}}}_0^{-1}\). The public/secret key is

    $$\begin{aligned} pk =\left( \mathcal {G}, [{{\mathbf {{A}}}}] \in \mathbb {G}^{\ell \times k}\right) , \quad sk = ( pk ,{{\mathbf {{T}}}} \in \mathbb {Z}_q^{(\ell -k) \times k}) \end{aligned}$$
  • \(\textsf {Enc}_ pk \) picks \(\vec {w} \leftarrow \mathbb {Z}_q^k\). The ciphertext/key pair is

    $$\begin{aligned}{}[\vec {c}]= [{{\mathbf {{A}}}}_0 \vec {w}] \in \mathbb {G}^{k}, \quad [K] = [{{\mathbf {{A}}}}_1 \vec {w}] \in \mathbb {G}^{\ell -k} \end{aligned}$$
  • \(\textsf {Dec}_ sk ([\vec {c}] \in \mathbb {G}^{k})\) recomputes the key as \([K] = [{{\mathbf {{T}}}} \vec {c}] \in \mathbb {G}^{\ell -k}\).

Correctness follows by the equation \({{\mathbf {{T}}}} \cdot \vec {c} = {{\mathbf {{T}}}} \cdot {{\mathbf {{A}}}}_0 \vec {w} = {{\mathbf {{A}}}}_1 \vec {w}\). The public key consists of \(\textsf {RE}_\mathbb {G}(\mathcal {D}_{\ell ,k})\) group elements and the ciphertext of k group elements. An example scheme from the \(k\text{- }\textsf {SCasc}\) Assumption is given in “Appendix 5.1”.
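A minimal sketch of the scheme for \(\ell =2\), \(k=1\) (so \(\mathcal {D}_{2,1}\) just samples two scalars with \(a_0\ne 0\), and the scheme is ElGamal-like) may help fix ideas. The parameters p, q, g below are toy illustrative values (g generates the order-q subgroup of \(\mathbb {Z}_p^*\)) and offer no security.

```python
# Toy KEM sketch for l = 2, k = 1 over the order-q subgroup of Z_p^*.
import random

p, q, g = 107, 53, 4            # toy parameters; g has order q mod p

def gen():
    a0 = random.randrange(1, q)           # first k rows invertible: a0 != 0
    a1 = random.randrange(q)
    pk = (pow(g, a0, p), pow(g, a1, p))   # pk = [A] in implicit representation
    T = a1 * pow(a0, -1, q) % q           # T = A_1 * A_0^{-1}
    return pk, T

def enc(pk):
    w = random.randrange(q)                       # w <- Z_q
    return pow(pk[0], w, p), pow(pk[1], w, p)     # ([c], [K]) = ([A_0 w], [A_1 w])

def dec(T, c):
    return pow(c, T, p)                   # [K] = [T c]

pk, T = gen()
c, K = enc(pk)
assert dec(T, c) == K                     # correctness: T * A_0 * w = A_1 * w
```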

Theorem 11

Under the \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) Assumption, \(\textsf {KEM}_{\textsf {Gen},\mathcal {D}_{\ell ,k}}\) is IND-CPA secure.


By the \(\mathcal {D}_{\ell ,k}\) Matrix Diffie–Hellman Assumption, the distribution of \(( pk ,([\vec {c}],[K]))=( (\mathcal {G},[{{\mathbf {{A}}}}]),[{{\mathbf {{A}}}}\vec {w}])\), where \([\vec {c}]\) and \([K]\) stacked together form \([{{\mathbf {{A}}}}\vec {w}]\), is computationally indistinguishable from \(((\mathcal {G}, [{{\mathbf {{A}}}}]), [\vec {u}])\), where \(\vec {u} \leftarrow \mathbb {Z}_q^{\ell }\). \(\square \)

5.2 Hash Proof Systems

Let \(\mathcal {D}_{\ell ,k}\) be a matrix distribution. We build a universal\(_1\) hash proof system \(\textsf {HPS}=(\textsf {Param},\textsf {Pub},\textsf {Priv})\), whose hard subset membership problem is based on the \(\mathcal {D}_{\ell ,k}\) Matrix Diffie–Hellman Assumption.

  • \(\textsf {Param}(1^\lambda )\) runs \(\mathcal {G}\leftarrow \textsf {Gen}(1^\lambda )\) and picks \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\). Define the language

    $$\begin{aligned} \mathcal {V}= \mathcal {V}_{{{\mathbf {{A}}}}}=\{[\vec {c}]=[{{\mathbf {{A}}}}\vec {w}] \in \mathbb {G}^{\ell }\;:\; \vec {w} \in \mathbb {Z}_q^k\} \subseteq \mathcal {C}= \mathbb {G}^{\ell }. \end{aligned}$$

    The value \(\vec {w} \in \mathbb {Z}_q^k\) is a witness of \([\vec {c}] \in \mathcal {V}\). Let \(\mathcal {SK}= \mathbb {Z}_q^{\ell },\, \mathcal {PK}= \mathbb {G}^k\), and \(\mathcal {K}= \mathbb {G}\). For \( sk = \vec {x} \in \mathbb {Z}_q^{\ell }\), define the projection \(\mu ( sk ) = [\vec {x}^\top {{\mathbf {{A}}}}] \in \mathbb {G}^k\). For \([\vec {c}] \in \mathcal {C}\) and \( sk \in \mathcal {SK}\), we define

    $$\begin{aligned} \Lambda _{ sk }([\vec {c}]) := [\vec {x}^\top \cdot \vec {c} ]\;. \end{aligned}$$

    The output of \(\textsf {Param}\) is \( params =\left( \mathcal {S}= (\mathcal {G},[{{\mathbf {{A}}}}]), \mathcal {K},\mathcal {C},\mathcal {V},\mathcal {PK},\mathcal {SK},\Lambda _{(\cdot )}(\cdot ),\mu (\cdot )\right) \).

  • \(\textsf {Priv}( sk , [\vec {c}])\) computes \([K]=\Lambda _{ sk }([\vec {c}])\).

  • \(\textsf {Pub}( pk , [\vec {c}], \vec {w})\). Given \( pk =\mu ( sk )=[\vec {x}^\top {{\mathbf {{A}}}}]\), \([\vec {c}] \in \mathcal {V}\) and a witness \(\vec {w}\in \mathbb {Z}_q^k\) such that \([\vec {c}] = [{{\mathbf {{A}}}}\cdot \vec {w}]\) the public evaluation algorithm \(\textsf {Pub}( pk , [\vec {c}], \vec {w})\) computes \([K]=\Lambda _{ sk }([\vec {c}])\) as

    $$\begin{aligned}{}[K] = \left[ (\vec {x}^\top \cdot {{\mathbf {{A}}}}) \cdot \vec {w}\right] . \end{aligned}$$

Correctness follows by (3) and the definition of \(\mu \). Clearly, under the \(\mathcal {D}_{\ell ,k}\)-Matrix Diffie–Hellman Assumption, the subset membership problem is hard in \(\textsf {HPS}\).

We now show that \(\Lambda \) is a universal\(_1\) projective hash function. Let \([\vec {c}]\in \mathcal {C}\setminus \mathcal {V}\) be an element outside of the language. Then the matrix \(({{\mathbf {{A}}}} || \vec {c}) \in \mathbb {Z}_q^{\ell \times (k+1)}\) is of full rank \(k+1\) and consequently \((\vec {x}^\top \cdot {{\mathbf {{A}}}} || \vec {x}^\top \cdot \vec {c}) \equiv (\vec {x}^\top {{\mathbf {{A}}}} || u)\) for \(\vec {x} \leftarrow \mathbb {Z}_q^{\ell }\) and \(u \leftarrow \mathbb {Z}_q\). Hence, \(( pk , \Lambda _{ sk }([\vec {c}])) = ([\vec {x}^\top {{\mathbf {{A}}}}],[\vec {x}^\top \vec {c}]) \equiv ([\vec {x}^\top {{\mathbf {{A}}}}], [u])= ([\vec {x}^\top {{\mathbf {{A}}}}], [K])\).
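The projective property used above, namely that \(\textsf {Pub}\) and \(\textsf {Priv}\) compute the same key on elements of \(\mathcal {V}\), can be sketched for \(\ell =2\), \(k=1\) as follows; p, q, g are again toy illustrative parameters.

```python
# Toy HPS sketch for l = 2, k = 1: Priv uses sk = x, Pub uses pk = [x^T A]
# and the witness w; both recover the same [K] on [c] = [A w].
import random

p, q, g = 107, 53, 4

a = [random.randrange(1, q), random.randrange(1, q)]   # A <- D_{2,1}
x = [random.randrange(q), random.randrange(q)]         # sk = x in Z_q^2
pk = pow(g, (x[0] * a[0] + x[1] * a[1]) % q, p)        # mu(sk) = [x^T A]

w = random.randrange(q)                                # witness
c = [pow(g, a[0] * w % q, p), pow(g, a[1] * w % q, p)] # [c] = [A w] in V

K_priv = pow(c[0], x[0], p) * pow(c[1], x[1], p) % p   # Lambda_sk([c]) = [x^T c]
K_pub = pow(pk, w, p)                                  # [(x^T A) w]

assert K_priv == K_pub
```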

We remark that \(\Lambda \) can be transformed into a universal\(_2\) projective hash function by applying a four-wise independent hash function [30]. Alternatively, one can construct a computational version of a universal\(_2\) projective hash function as follows. Let \(\mathcal {SK}= (\mathbb {Z}_q^{\ell })^2,\, \mathcal {PK}= (\mathbb {G}^k)^2\), and \(\mathcal {K}= \mathbb {G}\). For \( sk = (\vec {x}_1, \vec {x}_2) \in (\mathbb {Z}_q^{\ell })^2\), define the projection \(\mu ( sk ) = [\vec {x}_1^\top {{\mathbf {{A}}}}, \vec {x}_2^\top {{\mathbf {{A}}}}] \in (\mathbb {G}^k)^2\). For \([\vec {c}] \in \mathcal {C}\) and \( sk \in \mathcal {SK}\), define \(\Lambda _{ sk }([\vec {c}]) := [(t\vec {x}_1^\top + \vec {x}_2^\top )\cdot \vec {c} ]\), where \(t=H(\vec {c})\) and \(H: \mathcal {C}\rightarrow \mathbb {Z}_q\) is a collision-resistant hash function. The corresponding \(\textsf {Priv}\) and \(\textsf {Pub}\) algorithms are adapted accordingly. It is easy to verify that for all values \([\vec {c}_1], [\vec {c}_2] \in \mathcal {C}\setminus \mathcal {V}\) with \(H(\vec {c}_1)\ne H(\vec {c}_2)\), we have \(( pk , \Lambda _{ sk }([\vec {c}_1]),\Lambda _{ sk }([\vec {c}_2])) \equiv ( pk ,[K_1],[K_2])\), for \(K_1, K_2 \leftarrow \mathbb {Z}_q\).

5.3 Pseudo-Random Functions

Let \(\textsf {Gen}\) be a group generating algorithm and \(\mathcal {D}_{\ell ,k}\) be a matrix distribution that outputs a matrix over \(\mathbb {Z}_q^{\ell \times k}\) such that the first k rows form an invertible matrix with overwhelming probability. We define the following pseudo-random function \(\textsf {PRF}_{\textsf {Gen},\mathcal {D}_{\ell ,k}}=(\textsf {Gen},\textsf {F})\) with message space \(\mathcal {M}=\{0,1\}^n\) and range \(\mathcal {R}=\mathbb {G}^k\). For simplicity, we assume that \(\ell -k\) divides k.

  • \(\textsf {Gen}(1^\lambda )\) runs \(\mathcal {G}\leftarrow \textsf {Gen}(1^\lambda )\), picks \(\vec {h} \leftarrow \mathbb {Z}_q^k\) and \({{\mathbf {{A}}}}_{i,j} \leftarrow \mathcal {D}_{\ell ,k}\) for \(i=1,\ldots , n\) and \(j=1,\ldots , t:=k/(\ell -k)\), and computes the transformation matrices \({{\mathbf {{T}}}}_{i,j} \in \mathbb {Z}_q^{(\ell -k)\times k}\) of \({{\mathbf {{A}}}}_{i,j} \in \mathbb {Z}_q^{\ell \times k}\) (cf. Definition 3). For \(i=1, \ldots , n\), define the aggregated transformation matrices

    $$\begin{aligned} {{\mathbf {{T}}}}_i = \left( \begin{array}{l} {{\mathbf {{T}}}}_{i,1} \\ \vdots \\ {{\mathbf {{T}}}}_{i,t} \\ \end{array} \right) \in \mathbb {Z}_q^{k\times k} \end{aligned}$$

    The key is defined as

    $$\begin{aligned} K=\left( \mathcal {G}, \vec {h}, {{\mathbf {{T}}}}_1, \ldots , {{\mathbf {{T}}}}_n \right) . \end{aligned}$$
  • \(\textsf {F}_K(x)\) computes

    $$\begin{aligned} \textsf {F}_K(x) = \left[ \prod _{i : x_i=1} {{\mathbf {{T}}}}_i \cdot \vec {h}\right] \in \mathbb {G}^k. \end{aligned}$$

\(\textsf {PRF}_{\textsf {Gen},\mathcal {L}_{k}}\) (i.e., setting \(\mathcal {D}_{\ell ,k}= \mathcal {L}_k\)) is the PRF from Lewko and Waters [33]. A more efficient PRF from the \(k\text{- }\textsf {SCasc}\) Assumption is given in “Appendix 5.2”.
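A sketch of the evaluation of \(\textsf {F}_K\), computing the matrix product in the exponent and mapping to the group at the end, might look as follows. The dimensions k, n and the group parameters are toy values chosen for illustration, and the matrices below are sampled uniformly only to illustrate the evaluation; in the real PRF the \({{\mathbf {{T}}}}_i\) are aggregated transformation matrices.

```python
# Toy sketch of F_K(x) = [prod_{i: x_i = 1} T_i . h] for k = 2, n = 3.
import random

p, q, g = 107, 53, 4
k, n = 2, 3

def mat_vec(T, v):
    return [sum(T[i][j] * v[j] for j in range(k)) % q for i in range(k)]

h = [random.randrange(q) for _ in range(k)]                    # secret vector h
Ts = [[[random.randrange(q) for _ in range(k)] for _ in range(k)]
      for _ in range(n)]                                       # T_1, ..., T_n

def prf(x):
    """Apply T_i for every bit x_i = 1 (rightmost factor first), then map to G^k."""
    v = h
    for i in reversed(range(n)):
        if x[i] == 1:
            v = mat_vec(Ts[i], v)
    return [pow(g, vi, p) for vi in v]

y = prf([1, 0, 1])
```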

Note that the elements \({{\mathbf {{T}}}}_1, \ldots , {{\mathbf {{T}}}}_n\) of the secret key consist of the transformation matrices of independently sampled matrices \({{\mathbf {{A}}}}_{i,j}\). Interestingly, for a number of distributions \(\mathcal {D}_{\ell ,k}\) the distribution of the transformation matrix \({{\mathbf {{T}}}}\) is the same. For example, the transformation matrix for \(\mathcal {L}_k\) consists of a uniform row vector, as do the transformation matrices for \(\mathcal {C}_k\) and \(\mathcal {U}_{k+1,k}\). Consequently, \(\textsf {PRF}_{\textsf {Gen},\mathcal {C}_{k}}=\textsf {PRF}_{\textsf {Gen},\mathcal {L}_{k}}=\textsf {PRF}_{\textsf {Gen},\mathcal {U}_{k+1,k}}\) and, in light of the theorem below, \(\textsf {PRF}_{\textsf {Gen},\mathcal {L}_{k}}\) proposed by Lewko and Waters can also be proved secure under the \(\mathcal {U}_{k+1,k}\)-\(\textsf {MDDH}\) assumption, the weakest among all \(\textsf {MDDH}\) assumptions of matching dimensions.

Theorem 12

Under the \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) Assumption \(\textsf {PRF}_{\textsf {Gen},\mathcal {D}_{\ell ,k}}\) is a secure pseudo-random function.

The proof is based on the augmented cascade construction of Boneh et al. [6]. Here we give a direct self-contained proof. We first state and prove the following lemma.

Lemma 13

Let Q be a polynomial. Under the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption,

$$\begin{aligned} \left[ \left( {\vec {h}^1 \atop \hat{{{\mathbf {{T}}}}} \vec {h}^1}\right) , \ldots , \left( {\vec {h}^Q \atop \hat{{{\mathbf {{T}}}}} \vec {h}^Q}\right) \right] \in \mathbb {G}^{2k \times Q} \end{aligned}$$

is computationally indistinguishable from a uniform \([{{\mathbf {{H}}}}] \in \mathbb {G}^{2k \times Q}\), where \(\vec {h}^i \leftarrow \mathbb {Z}_q^k\),

$$\begin{aligned} \hat{{{\mathbf {{T}}}}} = \left( \begin{array}{c} \hat{{{\mathbf {{T}}}}}_{1} \\ \vdots \\ \hat{{{\mathbf {{T}}}}}_{t} \\ \end{array} \right) \in \mathbb {Z}_q^{k\times k}, \end{aligned}$$

and \(\hat{{{\mathbf {{T}}}}}_{j} \, ( 1 \le j \le t)\) are the transformation matrices of \({{\mathbf {{A}}}}_{j} \leftarrow \mathcal {D}_{\ell ,k}\).


By a hybrid argument over \(j = 1,\ldots , t\), it is sufficient to show that

$$\begin{aligned} \left[ \left( {\vec {h}^1 \atop \hat{{{\mathbf {{T}}}}}_1 \vec {h}^1}\right) , \ldots , \left( {\vec {h}^Q \atop \hat{{{\mathbf {{T}}}}}_1 \vec {h}^Q}\right) \right] \in \mathbb {G}^{\ell \times Q} \end{aligned}$$

is computationally indistinguishable from a uniform \([{{\mathbf {{H}}}}_1] \leftarrow \mathbb {G}^{\ell \times Q}\), i.e., for one single transformation matrix \(\hat{{{\mathbf {{T}}}}}_{1}\) of \({{\mathbf {{A}}}}_{1} \leftarrow \mathcal {D}_{\ell ,k}\). This in turn follows directly by Lemma 1 (random self-reducibility of \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\)). Note that the overall loss in the security reduction is \(k=t \cdot (\ell -k)\), where the factor t stems from the hybrid argument and the factor \(\ell -k\) stems from Lemma 1. \(\square \)

Proof of Theorem 12

For \(x \in \{0,1\}^n\) and \(0 \le \mu \le n\), define \(\textsf {suffix}^{\mu }(x)\) as the \(\mu \)-th suffix of x, i.e., \(\textsf {suffix}^{\mu }(x):= (x_{n-\mu +1}, \ldots , x_n)\). We make the convention that \(\textsf {suffix}^{0}(x) = \varepsilon \), the empty string.
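The suffix operator is straightforward to realize; a small helper (representing bit strings as tuples, an illustrative convention) reads:

```python
def suffix(x, mu):
    """The mu-bit suffix (x_{n-mu+1}, ..., x_n) of x; suffix(x, 0) = ()."""
    return x[len(x) - mu:] if mu > 0 else ()

assert suffix((1, 0, 1, 1), 2) == (1, 1)
assert suffix((1, 0, 1, 1), 0) == ()
```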

We will use a hybrid argument over n, the bitlength of x. In Hybrid \(\mu \, (0 \le \mu \le n)\), let \(\textsf {RF}^\mu :\{0,1\}^\mu \rightarrow \mathbb {Z}_q^k\) be a truly random function and define the oracle

$$\begin{aligned} {\mathcal {O}}^\mu (x) = \left[ \prod _{1 \le i \le n-\mu \atop i : x_i=1} {{\mathbf {{T}}}}_i \cdot \textsf {RF}^\mu (\textsf {suffix}^{\mu }(x))\right] \in \mathbb {G}^k, \end{aligned}$$

where the \({{\mathbf {{T}}}}_i\) are defined as in the real scheme. With this definition, we have that \({\mathcal {O}}^0(x)=\textsf {F}_K(x)\) (by defining \(\textsf {RF}(\varepsilon ):=\vec {h}\)) and \({\mathcal {O}}^n(x)\) is a truly random function. It remains to show that the output of oracle \({\mathcal {O}}^\mu (\cdot )\) is computationally indistinguishable from that of \({\mathcal {O}}^{\mu +1}(\cdot )\). For the reduction, we use Lemma 13, where Q is the maximal number of queries to oracle \({\mathcal {O}}\) made by the PRF adversary. The reduction receives as input

$$\begin{aligned} \left[ \left( {\vec {h}^1_0 \atop \vec {h}^1_1}\right) , \ldots , \left( {\vec {h}^Q_0 \atop \vec {h}_1^Q}\right) \right] , \end{aligned}$$

where \(\vec h_1^j = \hat{{{\mathbf {{T}}}}} \vec {h}_0^j\) or uniformly random. Next, it picks \({{\mathbf {{T}}}}_i \, (1 \le i \le n-\mu -1)\) and implicitly defines \({{\mathbf {{T}}}}_{n-\mu } = \hat{{{\mathbf {{T}}}}}\). On the j-th query \(x^j=(x_1^j, \ldots , x_n^j)\) (\(1 \le j \le Q\) and wlog all queries are distinct) to oracle \({\mathcal {O}}\), it returns

$$\begin{aligned} {\mathcal {O}}(x^j) = \left[ \prod _{1 \le i \le n-\mu -1 \atop i : x^j_i=1} {{\mathbf {{T}}}}_i \cdot \vec {h}^j_{x^j_{n-\mu }} \right] . \end{aligned}$$

If \(\vec h_1^j = \hat{{{\mathbf {{T}}}}} \vec {h}_0^j\), then

$$\begin{aligned} \textsf {RF}^\mu (\textsf {suffix}^{\mu }(x^j))=\vec {h}_0^j \end{aligned}$$

is a random function on \(\mu \) bits and

$$\begin{aligned} {\mathcal {O}}(x^j) = \left[ \prod _{1 \le i \le n-\mu \atop i : x^j_i=1} {{\mathbf {{T}}}}_i \cdot \vec {h}_0^j\right] = \left[ \prod _{1 \le i \le n-\mu \atop i : x^j_i=1} {{\mathbf {{T}}}}_i \cdot \textsf {RF}^\mu (\textsf {suffix}^{\mu }(x^j))\right] \end{aligned}$$

perfectly simulates oracle \({\mathcal {O}}^\mu \) from Hybrid \(\mu \).

If \(\vec {h}^j_1\) is uniform and independent from \(\vec h^j_0\), then

$$\begin{aligned} \textsf {RF}^{\mu +1}\left( \textsf {suffix}^{\mu +1}(x^j)\right) = \vec {h}^j_{x^j_{n-\mu }} \end{aligned}$$

is a random function on \(\mu +1\) bits and

$$\begin{aligned} {\mathcal {O}}(x^j) = \left[ \prod _{1 \le i \le n-\mu -1 \atop i : x^j_i=1} {{\mathbf {{T}}}}_i \cdot \textsf {RF}^{\mu +1}\left( \textsf {suffix}^{\mu +1}(x^j)\right) \right] \end{aligned}$$

perfectly simulates oracle \({\mathcal {O}}^{\mu +1}\) from Hybrid \(\mu +1\).

We remark that the loss in the reduction is independent of the number of queries Q to oracle \({\mathcal {O}}\), i.e., the reduction loses a factor of nk, where the factor n stems from the above hybrid argument, and the factor k from Lemma 13. \(\square \)

5.4 Groth–Sahai Non-interactive Zero-Knowledge Proofs

Groth and Sahai gave a method to construct non-interactive witness-indistinguishable (NIWI) and non-interactive zero-knowledge (NIZK) proofs for satisfiability of a set of equations in a bilinear group \(\mathcal {PG}\). (For formal definitions of NIWI and NIZK proofs, we refer to [21].) The equations in the set can be of different types, but they can be written in a unified way as

$$\begin{aligned} \sum _{j=1}^n f(a_j, \textsf {y}_j)+\sum _{i=1}^m f(\textsf {x}_i, b_i)+\sum _{i=1}^m \sum _{j=1}^n f(\textsf {x}_i,\gamma _{ij} \textsf {y}_j)=t, \end{aligned}$$

where \(A_1,A_2,A_T\) are \(\mathbb {Z}_q\)-modules, \(\vec {\textsf {x}}\in A_1^m,\, \vec {\textsf {y}}\in A_2^n\) are the variables, \(\vec {a} \in A_1^n,\, \vec {b} \in A_2^m,\, \varvec{\Gamma }=(\gamma _{ij}) \in \mathbb {Z}_q^{m\times n},\, t \in A_T\) are the constants and \(f:A_1 \times A_2 \rightarrow A_T\) is a bilinear map. More specifically, considering only symmetric bilinear groups, equations are of one of these types:

  i) Pairing product equations, with \(A_1=A_2=\mathbb {G}\), \(A_T=\mathbb {G}_T\), \(f([\textsf {x}],[\textsf {y}])=[\textsf {x}\textsf {y}]_T \in \mathbb {G}_T\).

  ii) Multi-scalar multiplication equations, with \(A_1=\mathbb {Z}_q\), \(A_2=A_T=\mathbb {G}\), \(f(\textsf {x},[\textsf {y}])=[\textsf {x}\textsf {y}] \in \mathbb {G}\).

  iii) Quadratic equations in \(\mathbb {Z}_q\), with \(A_1=A_2=A_T=\mathbb {Z}_q\), \(f(\textsf {x},\textsf {y})=\textsf {x}\textsf {y}\in \mathbb {Z}_q\).

Overview. The GS proof system allows constructing NIWI and NIZK proofs for satisfiability of a set of equations of the type (4), i.e., proofs that there is a choice of variables (the witness) satisfying all equations simultaneously. The prover gives the verifier a commitment to each element of the witness, together with some additional information, the proof. Owing to their algebraic properties, the commitments and the proof satisfy a related set of equations that the verifier can check. We stress that to compute the proof, the prover needs the randomness it used to create the commitments. To give new instantiations of GS proofs, we need to specify the distribution of the common reference string, which includes the commitment keys and some maps whose purpose is, roughly, to give some algebraic structure to the commitment space.

Commitments. We will now construct commitments to elements in \(\mathbb {Z}_q\) and \(\mathbb {G}\). The commitment key \([{{\mathbf {{U}}}}] = ([\vec {u}_1], \ldots , [\vec {u}_{k+1}]) \in \mathbb {G}^{\ell \times (k+1)}\) is of the form

$$\begin{aligned} {[}{{\mathbf {{U}}}}] = \left\{ \begin{array}{ll} \left[ {{\mathbf {{A}}}} || {{\mathbf {{A}}}}\vec {w}\right] &{} \quad \text {binding key (soundness setting)} \\ \left[ {{\mathbf {{A}}}} || {{\mathbf {{A}}}}\vec {w}-\vec {z}\right] &{} \quad \text {hiding key (WI setting)} \end{array}\right. , \end{aligned}$$

where \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell , k},\, \vec {w} \leftarrow \mathbb {Z}_q^k\), and \(\vec {z}\in \mathbb {Z}_q^\ell ,\, \vec {z} \notin \mathrm {Im}({{\mathbf {{A}}}})\) is a fixed, public vector. The two types of commitment keys are computationally indistinguishable based on the \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) Assumption.

To commit to \([\textsf {y}] \in \mathbb {G}\) using randomness \(\vec {r}\leftarrow \mathbb {Z}_q^{k+1}\), we define maps \(\iota : \mathbb {G}\rightarrow \mathbb {Z}_q^{\ell }\) and \(p: \mathbb {G}^{\ell } \rightarrow \mathbb {Z}_q\) as

$$\begin{aligned} \iota ([\textsf {y}])=\textsf {y}\cdot \vec {z}, \quad p([\vec {c}])=\vec {\xi }^\top \cdot \vec {c}, \quad \text {defining } \textsf {com}_{[{{\mathbf {{U}}}}],\vec {z}}\left( [\textsf {y}]; \vec {r}\right) := \left[ \iota ([\textsf {y}])+ {{\mathbf {{U}}}} \vec {r}\right] \in \mathbb {G}^{\ell }, \end{aligned}$$

where \(\vec {\xi } \in \mathbb {Z}_q^{\ell }\) is an arbitrary vector such that \(\vec {\xi }^\top {{\mathbf {{A}}}}=\vec {0}\) and \(\vec {\xi }^\top \cdot \vec {z}=1\). Note that, given \([\textsf {y}]\), \(\iota ([\textsf {y}])\) is not efficiently computable, but \([\iota ([\textsf {y}])]\) is, and this suffices to compute the commitment. On a binding key (soundness setting), we have that \(p([\iota ([\textsf {y}])])=\textsf {y}\) for all \([\textsf {y}]\in \mathbb {G}\) and that \(p([\vec {u}_i])=0\) for all \(i=1,\ldots ,k+1\). So \(p(\textsf {com}_{[{{\mathbf {{U}}}}],\vec {z}}([\textsf {y}];\vec {r})) =\vec {\xi }^{\top }(\vec {z} \textsf {y}+ {{\mathbf {{U}}}} \vec {r} ) = \vec {\xi }^\top \vec {z} \textsf {y}+ \vec {\xi }^\top ({{\mathbf {{A}}}} || {{\mathbf {{A}}}}\vec {w}) \vec {r} = \textsf {y}\) and the commitment is perfectly binding. On a hiding key (WI setting), \(\iota ([\textsf {y}])\in \mathrm {Span}( \vec {u}_1,\ldots , \vec {u}_{k+1})\) for all \([\textsf {y}]\in \mathbb {G}\), which implies that the commitments are perfectly hiding.
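The perfectly binding property on a binding key can be checked mechanically for \(\ell =2\), \(k=1\), working with exponents modulo a toy prime (the group layer is omitted; the choice \(\vec {z}=(0,1)^\top \) and all parameters below are illustrative):

```python
# Toy binding check (k = 1, l = 2, exponents mod q): p(com(y; r)) recovers y.
import random

q = 53
a = [random.randrange(1, q), random.randrange(q)]   # A <- D_{2,1}, a0 != 0
w = random.randrange(q)
U = [[a[0], a[0] * w % q], [a[1], a[1] * w % q]]    # binding key (A || A w)
z = [0, 1]                                          # fixed vector, z not in Im(A)
s = pow(a[0], -1, q)
xi = [(-a[1]) * s % q, a[0] * s % q]                # xi^T A = 0 and xi^T z = 1

def com(y, r):                                      # iota([y]) + U r = y*z + U r
    return [(y * z[i] + U[i][0] * r[0] + U[i][1] * r[1]) % q for i in range(2)]

def proj(c):                                        # p([c]) = xi^T c
    return (xi[0] * c[0] + xi[1] * c[1]) % q

y = random.randrange(q)
r = [random.randrange(q), random.randrange(q)]
assert proj(com(y, r)) == y                         # binding: randomness drops out
```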

To commit to a scalar \(\textsf {x}\in \mathbb {Z}_q\) using randomness \(\vec {s}\leftarrow \mathbb {Z}_q^{k}\), we define the maps \(\iota ': \mathbb {Z}_q \rightarrow \mathbb {Z}_q^{\ell }\) and \(p': \mathbb {G}^\ell \rightarrow \mathbb {Z}_q\) as

$$\begin{aligned} \iota '(\textsf {x})=\textsf {x}\cdot (\vec {u}_{k+1}+\vec {z}),\quad p'([\vec {c}])=\vec {\xi }^\top \vec {c}, \quad \text {defining } \textsf {com}'_{[{{\mathbf {{U}}}}],\vec {z}}(\textsf {x}; \vec {s}):=[\iota '(\textsf {x})+ {{\mathbf {{A}}}} \vec {s}] \in \mathbb {G}^{\ell }. \end{aligned}$$

where \(\vec {\xi }\) is defined as above. Note that, given \(\textsf {x}\), \(\iota '(\textsf {x})\) is not efficiently computable, but \([\iota '(\textsf {x})]\) is, and this suffices to compute the commitment. On a binding key (soundness setting), we have that \(p'([\iota '(\textsf {x})])=\textsf {x}\) for all \(\textsf {x}\in \mathbb {Z}_q\) and \(p'([\vec {u}_i])=0\) for all \(i=1,\ldots ,k\), so the commitment is perfectly binding. On a hiding key (WI setting), \(\iota '(\textsf {x})\in \mathrm {Span}( \vec {u}_1,\dots ,\vec {u}_k)\) for all \(\textsf {x}\in \mathbb {Z}_q\), which implies that the commitment is perfectly hiding.

It will also be convenient to define a vector of commitments as \(\textsf {com}_{[{{\mathbf {{U}}}}],\vec {z}}([\vec {\textsf {y}}]; {{\mathbf {{R}}}}) = [\iota ([\vec {\textsf {y}}^\top ])+{{\mathbf {{U}}}}{{\mathbf {{R}}}}]\) and \(\textsf {com}'_{[{{\mathbf {{U}}}}],\vec {z}}(\vec {\textsf {x}}; {{\mathbf {{S}}}}) = [\iota '(\vec {\textsf {x}}^\top )+{{\mathbf {{A}}}} {{\mathbf {{S}}}}]\), where \([\vec {\textsf {y}}] \in \mathbb {G}^m,\, \vec {\textsf {x}}\in \mathbb {Z}_q^n\), \({{\mathbf {{R}}}}\leftarrow \mathbb {Z}_q^{(k+1)\times m}\), \({{\mathbf {{S}}}}\leftarrow \mathbb {Z}_q^{k\times n}\) and the inclusion maps are defined component-wise.

Inclusion and Projection Maps. As we have seen, commitments are elements of \(\mathbb {G}^{\ell }\). The main idea of GS NIWI and NIZK proofs is to give some algebraic structure to the commitment space (in this case, \(\mathbb {G}^{\ell }\)) so that the commitments to a solution in \(A_1,A_2\) of a certain set of equations satisfy a related set of equations in some larger modules. For this purpose, for \([\vec {\textsf {x}}]\in \mathbb {G}^\ell \) and \([\vec {\textsf {y}}] \in \mathbb {G}^\ell \), we define the bilinear map \(\tilde{F}:\mathbb {G}^\ell \times \mathbb {G}^{\ell } \rightarrow \mathbb {Z}_q^{\ell \times \ell }\) implicitly as:

$$\begin{aligned} \tilde{F} \left( [\vec {\textsf {x}}],[\vec {\textsf {y}}]\right) = \vec {\textsf {x}}\cdot \vec {\textsf {y}}^\top , \end{aligned}$$

as well as its symmetric variant \(F([\vec {\textsf {x}}],[\vec {\textsf {y}}])=\frac{1}{2} \tilde{F}([\vec {\textsf {x}}],[\vec {\textsf {y}}])+ \frac{1}{2} \tilde{F}([\vec {\textsf {y}}],[\vec {\textsf {x}}])\). Additionally, for any two row vectors \([\textsf {X}]=[\vec {\textsf {x}}_1,\dots ,\vec {\textsf {x}}_r]\) and \([\textsf {Y}]=[\vec {\textsf {y}}_1,\dots ,\vec {\textsf {y}}_r]\) of elements of \(\mathbb {G}^{\ell }\) of equal length r, we define the maps \(\ \tilde{\bullet }\ ,\, \bullet \) associated with \(\tilde{F}\) and F as \([\textsf {X}] \ \tilde{\bullet }\ [\textsf {Y}]=[\sum _{i=1}^r \tilde{F}([\vec {\textsf {x}}_i],[\vec {\textsf {y}}_i])]_T\) and \([\textsf {X}] \bullet [\textsf {Y}]=[\sum _{i=1}^r F([\vec {\textsf {x}}_i],[\vec {\textsf {y}}_i])]_T\). To complete the details of the new instantiation, we must specify for each type of equation, for both \(F'=F\) and \(F'=\tilde{F}\):

  a) some maps \(\iota _T\) and \(p_T\) such that for all \(\textsf {x}\in A_1, \textsf {y}\in A_2, [\vec {\textsf {x}}]\in \mathbb {G}^\ell ,[\vec {\textsf {y}}]\in \mathbb {G}^\ell \),

    $$\begin{aligned} F'([\iota _1(\textsf {x})],[\iota _2(\textsf {y})])= & {} \iota _T(f(\textsf {x},\textsf {y})) \ \qquad \text {and} \qquad p_T([F'([\vec {\textsf {x}}],[\vec {\textsf {y}}])]_T) \\= & {} f(p_1([\vec {\textsf {x}}]),p_2([\vec {\textsf {y}}])), \end{aligned}$$

    where \(\iota _1,\, \iota _2\) are either \(\iota \) or \(\iota '\) and \(p_1,\, p_2\) either [p] or \(p'\), according to the appropriate \(A_1,A_2\) for each equation,

  b) matrices \({{\mathbf {{H}}}}_1,\ldots , {{\mathbf {{H}}}}_{\eta }\in \mathbb {Z}_q^{k_1\times k_2}\), where \(k_1,\, k_2\) are the numbers of columns of \({{\mathbf {{U_1}}}},{{\mathbf {{U_2}}}}\), respectively, and which, in the witness indistinguishability setting, are a basis of all the matrices that are a solution of the equation \([{{\mathbf {{U}}}}_1 {{\mathbf {{H}}}}] \bullet [{{\mathbf {{U}}}}_2]=[{{\mathbf {{0}}}}]_T\) if \(F'=F\), or of \([{{\mathbf {{U}}}}_1{{\mathbf {{H}}}}] \ \tilde{\bullet }\ [{{\mathbf {{U}}}}_2]=[{{\mathbf {{0}}}}]_T\) if \(F'=\tilde{F}\), where \({{\mathbf {{U}}}}_1,{{\mathbf {{U}}}}_2\) are either \({{\mathbf {{U}}}}\) or \({{\mathbf {{A}}}}\), depending on the modules \(A_1,A_2\). These matrices are necessary to randomize the NIWI and NIZK proofs.

To present the instantiations in concise form, in the following \({{\mathbf {{H}}}}^{r,s,m,n}=(h_{ij})\in \mathbb {Z}_q^{m\times n}\) denotes the matrix such that \(h_{rs}=-1,\, h_{sr}=1\) and \(h_{ij}=0\) for \((i,j)\notin \{(r,s),(s,r)\}\). In summary, the elements which must be defined are as follows:

  • Pairing product equations. In this case, \(A_1=A_2=\mathbb {G}\), \(A_T=\mathbb {G}_T\), \(\iota _1=\iota _2=\iota \), \(p_1=p_2=[p]\), \({{\mathbf {{U}}}}_1={{\mathbf {{U}}}}_2={{\mathbf {{U}}}}\) and for both \(F'=F\) and \(F'=\tilde{F}\),

    $$\begin{aligned} \iota _T([\textsf {z}]_T)=\textsf {z}\cdot \vec {z}\cdot \vec {z}^\top \in \mathbb {Z}_q^{\ell \times \ell } \qquad \qquad p_T([\textsf {Z}]_T)= [\vec {\xi }^\top \textsf {Z}\vec {\xi }]_T, \end{aligned}$$

    where \(\textsf {Z}=(\textsf {Z}_{ij})_{1\le i,j \le \ell }\in \mathbb {Z}_q^{\ell \times \ell }.\) The equation \([{{\mathbf {{U}}}} {{\mathbf {{H}}}}]\ \tilde{\bullet }\ [{{\mathbf {{U}}}}]=[{{\mathbf {{0}}}}]_T\) admits no solution, while all the solutions to \([{{\mathbf {{U}}}} {{\mathbf {{H}}}}] \bullet [{{\mathbf {{U}}}}]=[{{\mathbf {{0}}}}]_T\) are generated by \(\left\{ {{\mathbf {{H}}}}^{r,s,k+1,k+1}\right\} _{1\le r< s\le k+1}\).

  • Multi-scalar multiplication equations. In this case, \(A_1=\mathbb {Z}_q\), \(A_2=A_T=\mathbb {G}\), \(\iota _1=\iota ', \iota _2=\iota \), \(p_1=p'\), \(p_2=[p]\), \({{\mathbf {{U_1}}}}={{\mathbf {{A}}}}\), \({{\mathbf {{U_2}}}}={{\mathbf {{U}}}}\) and for both \(F'=\tilde{F}\) and \(F'=F\),

    $$\begin{aligned} \iota _T([\textsf {z}])=F'([\iota '(1)],[\iota ([\textsf {z}])]) \qquad \qquad p_T([\textsf {Z}]_T)= [\vec {\xi }^\top \textsf {Z}\vec {\xi }]. \end{aligned}$$

    The equation \([{{\mathbf {{A}}}} {{\mathbf {{H}}}}]\ \tilde{\bullet }\ [{{\mathbf {{U}}}}]=[{{\mathbf {{0}}}}]_T\) admits no solution, while all the solutions to \([{{\mathbf {{A}}}} {{\mathbf {{H}}}}] \bullet [{{\mathbf {{U}}}}]=[{{\mathbf {{0}}}}]_T\) are generated by \(\left\{ {{\mathbf {{H}}}}^{r,s,k,k+1}\right\} _{1\le r< s\le k}\).

  • Quadratic equations. In this case, \(A_1=A_2=A_T=\mathbb {Z}_q\), \(\iota _1=\iota _2=\iota '\), \(p_1=p_2=p'\) and \({{\mathbf {{U_1}}}}={{\mathbf {{U_2}}}}={{\mathbf {{A}}}}\), for both \(F'=\tilde{F}\) and \(F'=F\), we define

    $$\begin{aligned} \iota _T(z)=F'([\iota '(1)],[\iota '(z)]) \qquad \qquad p_T([\textsf {Z}]_T)= \vec {\xi }^\top \textsf {Z}\vec {\xi }. \end{aligned}$$

    The equation \([{{\mathbf {{A}}}} {{\mathbf {{H}}}}]\ \tilde{\bullet }\ [{{\mathbf {{A}}}}]=[{{\mathbf {{0}}}}]_T\) admits no solution, while all the solutions to \([{{\mathbf {{A}}}} {{\mathbf {{H}}}}] \bullet [{{\mathbf {{A}}}}]=[{{\mathbf {{0}}}}]_T\) are generated by \(\left\{ {{\mathbf {{H}}}}^{r,s,k,k}\right\} _{1\le r< s\le k}\).

To argue that the equation \([{{\mathbf {{U}}}}_1{{\mathbf {{H}}}}] \ \tilde{\bullet }\ [{{\mathbf {{U}}}}_2]=[{{\mathbf {{0}}}}]_T\) admits no solution in each of the cases above, it is sufficient to argue that the vectors \(\tilde{F}([\vec {u}_i],[\vec {u}_j])\) are linearly independent. This holds for any matrix distribution \(\mathcal {D}_{\ell ,k}\) by basic linear algebra, since \(\tilde{F}([\vec {u}_i],[\vec {u}_j])\) was defined as the implicit representation of the outer product of \(\vec {u}_i \) and \(\vec {u}_j\), and \(\vec {u}_1,\ldots ,\vec {u}_{k+1}\) are linearly independent.
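For the symmetric map, the companion fact that the antisymmetric matrices \({{\mathbf {{H}}}}^{r,s}\) solve \([{{\mathbf {{U}}}}{{\mathbf {{H}}}}] \bullet [{{\mathbf {{U}}}}]=[{{\mathbf {{0}}}}]_T\) can also be checked numerically: in the exponent, the pairing of \([{{\mathbf {{U}}}}{{\mathbf {{H}}}}]\) with \([{{\mathbf {{U}}}}]\) equals \(\frac{1}{2}{{\mathbf {{U}}}}({{\mathbf {{H}}}}+{{\mathbf {{H}}}}^\top ){{\mathbf {{U}}}}^\top \), which vanishes whenever \({{\mathbf {{H}}}}\) is antisymmetric. The sketch below does this for \(k=1\), \(\ell =2\) with toy parameters.

```python
# Toy check (exponents mod q): the symmetric pairing of [U H] with [U]
# vanishes for the antisymmetric H^{1,2,2,2}.
import random

q = 53
half = pow(2, -1, q)

def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

a = [random.randrange(1, q), random.randrange(q)]
w = random.randrange(q)
U = [[a[0], a[0] * w % q], [a[1], (a[1] * w - 1) % q]]   # hiding key, z = (0,1)
H = [[0, q - 1], [1, 0]]                                 # H^{1,2,2,2}: antisymmetric

X = mat_mul(U, H)                                        # exponents of [U H]
S = mat_mul(X, transpose(U))                             # X U^T = U H U^T
Ssym = [[half * (S[i][j] + S[j][i]) % q for j in range(2)] for i in range(2)]
assert Ssym == [[0, 0], [0, 0]]                          # the pairing vanishes
```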

Proof and Verification. For completeness, we now describe how the prover and the verifier proceed. Define \(k_1,k_2\) as the numbers of columns of \({{\mathbf {{U}}}}_1,{{\mathbf {{U}}}}_2\), respectively. On input \(\mathcal {PG},\, [{{\mathbf {{U}}}}],\vec {z}\), a set of equations and a witness \(\vec {\textsf {x}}\in A_1^m, \vec {\textsf {y}}\in A_2^n\), the prover proceeds as follows:

  1. Commit to \(\vec {\textsf {x}}\) and \(\vec {\textsf {y}}\) as

    $$\begin{aligned}{}[{{\mathbf {{C}}}}] = [\iota _1(\vec {\textsf {x}}^\top )+{{\mathbf {{U}}}}_1 {{\mathbf {{R}}}}], \quad \quad [{{\mathbf {{D}}}}] = [\iota _2(\vec {\textsf {y}}^\top )+{{\mathbf {{U}}}}_2{{\mathbf {{S}}}}] \end{aligned}$$

    where \({{\mathbf {{R}}}}\leftarrow \mathbb {Z}_q^{k_1\times m}\), \({{\mathbf {{S}}}}\leftarrow \mathbb {Z}_q^{k_2 \times n}\).

  2.

    For each equation of the type (4), pick \({{\mathbf {{T}}}} \leftarrow \mathbb {Z}_q^{k_1 \times k_2}\) and \(r_i \leftarrow \mathbb {Z}_q\) for \(1\le i \le \eta \), and output \(([\varvec{\Pi }],[\varvec{\Theta }])\), defined as:

    $$\begin{aligned}&[\varvec{\Pi }] := \left[ \iota _2(\vec {b}^{\top }) {{\mathbf {{R}}}}^\top +\iota _2(\vec {\textsf {y}}^{\top }) \varvec{\Gamma }^\top {{\mathbf {{R}}}}^{\top }+ {{\mathbf {{U}}}}_2 {{\mathbf {{S}}}} \varvec{\Gamma }^\top {{\mathbf {{R}}}}^\top - {{\mathbf {{U}}}}_2 {{\mathbf {{T}}}}^\top +\sum _{1\le i \le \eta } r_i {{\mathbf {{U}}}}_2 {{\mathbf {{H}}}}^\top _i\right] \\&\ [\varvec{\Theta } ] := \left[ \iota _1(\vec {a}^{\top }) {{\mathbf {{S}}}}^\top +\iota _1(\vec {\textsf {x}}^{\top }) \varvec{\Gamma } {{\mathbf {{S}}}}^{\top }+ {{\mathbf {{U}}}}_1 {{\mathbf {{T}}}}\right] \end{aligned}$$

The proof described above is for a general equation; the same optimizations for special types of equations as in the full version of [21] apply. In particular, when the map used is the symmetric map F, the size of the proof can be reduced. The size of the proof can also be reduced when all the elements in either \(A_1\) or \(A_2\) are constants. Taking these optimizations into account, we give the size of the commitments and the proof for the different types of equations in Table 1.

Table 1 Size of the proofs based on the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) assumption

To verify a proof, on input the commitments \([{{\mathbf {{C}}}}]\), \([{{\mathbf {{D}}}}]\) and a proof \(([\varvec{\Pi }],[\varvec{\Theta }])\), the verifier checks whether

$$\begin{aligned}&[\iota _1(\vec {a}^\top )] \bullet ' [{{\mathbf {{D}}}}]+ [{{\mathbf {{C}}}}] \bullet ' [\iota _2(\vec {b}^\top )]+[{{\mathbf {{C}}}}] \bullet ' [{{\mathbf {{D}}}}\varvec{\Gamma }^\top ] \\&\quad =[\iota _T(t)]_T+[{{\mathbf {{U}}}}_1] \bullet ' [\varvec{\Pi }]+ [\varvec{\Theta }] \bullet ' [{{\mathbf {{U}}}}_2], \end{aligned}$$

where \(\bullet '\) is either \(\bullet \) or \(\tilde{\bullet }\), depending on whether \(F'\) is F or \(\tilde{F}\). If the equation is satisfied, the verifier accepts the proof for this equation and rejects otherwise. In general, the verification cost depends on \(\ell \) and k, although some pairing computations may be saved by using batch verification techniques, or when some components of the commitment keys are trivial or repeated, i.e., when \(\mathcal {D}_{\ell ,k}\) admits a short representation.

Efficiency. We emphasize that for \(\mathcal {D}_{\ell ,k}=\mathcal {L}_2\) and \(\vec {z}=(0,0,1)^\top \) and for \(\mathcal {D}_{\ell ,k}=\textsf {DDH}\) and \(\vec {z}=(0,1)^\top \) (in the natural extension to asymmetric bilinear groups), we recover the \(2\text{- }\textsf {Lin}\) and the \(\textsf {SXDH}\) instantiations of [21]. While the size of the proofs depends only on \(\ell \) and k, both the size of the CRS and the cost of verification increase with \(\textsf {RE}_\mathbb {G}(\mathcal {D}_{\ell ,k})\). In particular, in terms of efficiency, the \(\mathcal {SC}_{2}\) Assumption is preferable to the \(2\text{- }\textsf {Lin}\) assumption, but the main reason to consider more instantiations of GS proofs is to obtain more efficient proofs for a large class of languages in Sect. 6.

6 More Efficient Proofs for Some CRS-Dependent Languages

Let \([{{\mathbf {{U}}}}]\) be the commitment key defined in the last section as part of a \(\mathcal {D}_{\ell , k}\text{- }\textsf {MDDH}\) instantiation, for some \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell , k}\). In this section, we show how to obtain shorter proofs for some languages related to \({{\mathbf {{A}}}}\). The common idea behind all the improvements is to exploit the special structure of the homomorphic commitments used in Groth–Sahai proofs.

6.1 More Efficient Subgroup Membership Proofs

We first show how to obtain shorter proofs of membership in the language \(\mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}:=\{[{{\mathbf {{A}}}}\vec {r}] : \vec {r} \in \mathbb {Z}_q^k \}\subset \mathbb {G}^\ell \).

Intuition. Our proofs implicitly use the GS framework, although we have preferred to present them without the GS notation. The idea behind our improvement is to exploit the special algebraic structure of commitments in GS proofs, namely the observation that if \([\vec {\Phi }]=[{{\mathbf {{A}}}}\vec {r}] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\), then \([\vec {\Phi }]=\textsf {com}'_{[{{\mathbf {{U}}}}],\vec {z}}(0; \vec {r})\). Therefore, to prove that \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\), we proceed as if we were giving a GS proof of satisfiability of the equation \(\textsf {x}=0\) where the randomness used for the commitment to \(\textsf {x}\) is \(\vec {r}\). In particular, no commitments have to be given in the proof, which results in shorter proofs. To prove zero-knowledge, we rewrite the equation \(\textsf {x}=0\) as \(\textsf {x}\cdot \delta =0\). The real proof is just a standard GS proof with the commitment to \(\delta =1\) being \(\iota '(1)=\textsf {com}_{[{{\mathbf {{U}}}}]}(1; \vec {0})\), while in the simulated proof the trapdoor allows the simulator to open \(\iota '(1)\) as a commitment to 0, so we can proceed as if the equation were the trivial one \(\textsf {x}\cdot 0=0\), for which it is easy to give a proof of satisfiability.
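To make this intuition concrete, here is a toy Python sketch in exponent arithmetic (computing with a in place of [a], over a small prime standing in for the group order). The scalar commitment form \(\textsf {com}'_{[{{\mathbf {{U}}}}],\vec {z}}(x;\vec {r})=x\cdot \vec {u}_{k+1}+{{\mathbf {{A}}}}\vec {r}\) is our reading of the paragraph above (so that \(\textsf {com}'(0;\vec {r})={{\mathbf {{A}}}}\vec {r}\) and \(\iota '(1)=\vec {u}_{k+1}\)), not the paper's exact map; the structured choice of \({{\mathbf {{A}}}}\) with last row zero is only there to make the extraction vector \(\vec {\xi }\) explicit:

```python
import random

q = 1009  # toy prime modulus; real groups have ~256-bit prime order
random.seed(2)
ell, k = 3, 2

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % q for i in range(len(M))]

# For illustration pick A with last row 0, so xi = (0,0,1) satisfies
# xi^T A = 0; with z = (0,0,1) we also get xi^T z = 1 and z not in Im(A).
A = [[random.randrange(q) for _ in range(k)] for _ in range(ell - 1)] + [[0] * k]
z = [0, 0, 1]
xi = [0, 0, 1]
w = [random.randrange(q) for _ in range(k)]
Aw = matvec(A, w)

u_sound = [(Aw[i] + z[i]) % q for i in range(ell)]  # soundness CRS: u_{k+1} = Aw + z
u_wi    = Aw                                        # WI CRS:        u_{k+1} = Aw

def commit(u_last, x, r):
    """com'(x; r) = x * u_{k+1} + A r  (randomness r in Z_q^k)."""
    Ar = matvec(A, r)
    return [(x * u_last[i] + Ar[i]) % q for i in range(ell)]

# Soundness setting: xi extracts the committed value, so com' is binding.
r = [random.randrange(q) for _ in range(k)]
x = 7
c = commit(u_sound, x, r)
print(sum(xi[i] * c[i] for i in range(ell)) % q)  # 7: xi^T com'(x; r) = x

# WI setting: iota'(1) = u_{k+1} = A w = com'(0; w), i.e. the trapdoor w
# opens the commitment to 1 as a commitment to 0 (perfectly hiding).
print(commit(u_wi, 1, [0] * k) == commit(u_wi, 0, w))  # True
```

The second check is exactly the trapdoor opening used by the simulator: in the WI setting \(\iota '(1)\) and \(\textsf {com}'(0;\vec {w})\) are the same group vector.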

Related Work. It is interesting to compare in detail with a recent line of work aiming at very efficient arguments of membership in linear subspaces ([25, 26, 31, 34]), which also exploits the dependency between the common reference string and the space where one wants to prove membership. More specifically, these works construct NIZK arguments of membership in the space generated by \([{{\mathbf {{A}}}}] \in \mathbb {G}^{\ell \times k}\), with perfect zero-knowledge and computational soundness. We compare our results with [31], who give two different constructions which generalize and simplify previous results. In their work, computational soundness is based on any \(\mathcal {D}_{m}\)-\(\textsf {MDDH}{}\) Assumption. In the first construction, the proof size is \(m+1\) and the common reference string must include \(m \ell +(m+1)k+ \textsf {RE}_\mathbb {G}(\mathcal {D}_m)\) group elements and a description of \([{{\mathbf {{A}}}}]\). In the second construction, which assumes that \([{{\mathbf {{A}}}}]\) is drawn from a witness samplable distribution, the proof size is m and the common reference string must include \(m\ell +mk+\textsf {RE}_\mathbb {G}(\overline{\mathcal {D}}_m)\) group elements, where \(\overline{\mathcal {D}}_{m}\) denotes the distribution of the first m rows of the matrices sampled according to \(\mathcal {D}_{m}\), and a description of \([{{\mathbf {{A}}}}]\). Our proof, on the other hand, has perfect soundness and composable zero-knowledge under the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption, its size is \(\ell k\) group elements, and, apart from a description of \([{{\mathbf {{A}}}}]\), the common reference string consists of only \(\ell \) elements of \(\mathbb {G}\).

6.1.1 Construction

Define \(\mathcal {H}:=\{ {{\mathbf {{H}}}} \in \mathbb {Z}_q^{k \times k}: {{\mathbf {{H}}}}+{{\mathbf {{H}}}}^{\top }={{\mathbf {{0}}}}\}\). Following the intuition given above, the actual construction looks as follows:

Setup. At the setup stage, some group \(\mathcal {PG}=(\mathbb {G},\mathbb {G}_T,q,e,\mathcal {P}) \leftarrow \textsf {PGen}(1^\lambda )\) is specified.

Common reference string. We define \([{{\mathbf {{U}}}}]=([\vec {u}_1],\ldots ,[\vec {u}_{k+1}])\) as \([{{\mathbf {{A}}}} || {{\mathbf {{A}}}}\vec {w}+\vec {z}]\) in the soundness setting and \([{{\mathbf {{A}}}} || {{\mathbf {{A}}}}\vec {w}]\) in the witness indistinguishability setting, where \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell , k}\), \(\vec {w} \leftarrow \mathbb {Z}_q^k\), and \(\vec {z}\in \mathbb {Z}_q^\ell ,\, \vec {z} \notin \mathrm {Im}({{\mathbf {{A}}}})\). The common reference string is \(\sigma :=(\mathcal {PG}, [{{\mathbf {{U}}}}],\vec {z})\).

Simulation trapdoor. The simulation trapdoor \(\tau \) is the vector \(\vec {w} \in \mathbb {Z}_q^{k}\).

Prover. On input \(\sigma \), a vector \([\vec {\Phi }]=[{{\mathbf {{A}}}} \vec {r}] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\) and the witness \(\vec {r}\in \mathbb {Z}_q^{k}\), the prover chooses a matrix \({{\mathbf {{H}}}} \leftarrow \mathcal {H}\) and computes

$$\begin{aligned}{}[\varvec{\Pi }]=\left[ \vec {u}_{k+1} \vec {r}^\top + {{\mathbf {{A}}}} {{\mathbf {{H}}}}\right] . \end{aligned}$$

Verifier. On input \(\sigma ,[\vec {\Phi }],[\varvec{\Pi }]\), the verifier checks whether \([\vec {\Phi } \vec {u}_{k+1}^{\top }+ \vec {u}_{k+1}\vec {\Phi }^{\top }]_T=[\varvec{\Pi } {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}}\varvec{\Pi }^{\top }]_T\).

Simulator. On input \(\sigma ,[\vec {\Phi }],\tau \) the simulator picks a matrix \({{\mathbf {{H'}}}} \leftarrow \mathcal {H}\) and computes

$$\begin{aligned} \left[ \varvec{\Pi }_{\mathrm{sim}}\right] =\left[ \vec {\Phi }\vec {w}^\top + {{\mathbf {{A}}}} {{\mathbf {{H'}}}}\right] . \end{aligned}$$
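Since prover, verifier and simulator are purely linear-algebraic, the whole construction can be prototyped in exponent arithmetic; the pairing merely lets the verifier evaluate these bilinear expressions on implicit representations. Below is a toy Python sketch over a small prime, with the matrix distribution replaced by a hand-picked \({{\mathbf {{A}}}}\) whose last row is zero, so that \(\vec {z}=(0,0,1)^{\top } \notin \mathrm {Im}({{\mathbf {{A}}}})\) is guaranteed (real instantiations sample \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\)):

```python
import random

q = 1009  # toy prime; stands in for the group order of the pairing group
random.seed(3)
ell, k = 3, 2

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def T(X):
    return [list(r) for r in zip(*X)]

def add(X, Y):
    return [[(a + b) % q for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def rand_antisym(k):
    """Sample H from the set {H : H + H^T = 0}."""
    H = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            H[i][j] = random.randrange(q)
            H[j][i] = (-H[i][j]) % q
    return H

# For illustration, A has last row 0, so z = (0,0,1)^T is not in Im(A).
A = [[random.randrange(q) for _ in range(k)] for _ in range(ell - 1)] + [[0] * k]
w = [[random.randrange(q)] for _ in range(k)]
z = [[0], [0], [1]]
u_sound = add(matmul(A, w), z)      # soundness CRS: u_{k+1} = A w + z
u_wi = matmul(A, w)                 # WI CRS:        u_{k+1} = A w

r = [[random.randrange(q)] for _ in range(k)]
Phi = matmul(A, r)                  # statement Phi = A r, witness r

def prove(u_last, r):
    """Pi = u_{k+1} r^T + A H, for H sampled from calligraphic-H."""
    return add(matmul(u_last, T(r)), matmul(A, rand_antisym(k)))

def simulate(Phi, w):
    """Pi_sim = Phi w^T + A H', using only the trapdoor w."""
    return add(matmul(Phi, T(w)), matmul(A, rand_antisym(k)))

def verify(u_last, Phi, Pi):
    lhs = add(matmul(Phi, T(u_last)), matmul(u_last, T(Phi)))
    rhs = add(matmul(Pi, T(A)), matmul(A, T(Pi)))
    return lhs == rhs

print(verify(u_sound, Phi, prove(u_sound, r)))            # True: completeness
bad = [[0], [0], [1]]                                     # bad = z, not in Im(A)
print(verify(u_sound, bad, prove(u_sound, [[0], [0]])))   # False: soundness
print(verify(u_wi, Phi, simulate(Phi, w)))                # True: ZK simulation
```

Note that the simulated proof verifies only under the WI CRS, matching the proof of Theorem 14: in the soundness setting the cross terms \({{\mathbf {{A}}}}\vec {r}\vec {z}^{\top }+\vec {z}\vec {r}^{\top }{{\mathbf {{A}}}}^{\top }\) are missing from the simulator's output.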

Theorem 14

Let \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell , k}\), where \(\mathcal {D}_{\ell ,k}\) is a matrix distribution. There exists a non-interactive zero-knowledge proof for the language \(\mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\) with perfect completeness, perfect soundness and composable zero-knowledge, consisting of \(k\ell \) group elements, based on the \(\mathcal {D}_{\ell ,k}\text{- }\textsf {MDDH}\) Assumption.

The proof follows directly by implicitly reconstructing the arguments that prove the same properties for the GS proof system.


First, it is clear that under the \(\mathcal {D}_{\ell ,k}\)-\(\textsf {MDDH}\) Assumption, the soundness and the WI settings are computationally indistinguishable.

Completeness. To see completeness, we check that a real proof satisfies the verification equation. Indeed, in the soundness setting, the left term of the verification equation is:

$$\begin{aligned} \left[ \vec {\Phi } \vec {u}_{k+1}^{\top }+\vec {u}_{k+1}\vec {\Phi }^{\top }\right] _T= & {} \left[ {{\mathbf {{A}}}}\vec {r} ({{\mathbf {{A}}}}\vec {w}+\vec {z})^{\top }+({{\mathbf {{A}}}}\vec {w}+\vec {z}) ({{\mathbf {{A}}}}\vec {r})^{\top }\right] _T \\= & {} \left[ {{\mathbf {{A}}}}(\vec {r} \vec {w}^{\top }+ \vec {w} \vec {r}^{\top }) {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}} \vec {r} \vec {z}^{\top }+ \vec {z} \vec {r}^{\top } {{\mathbf {{A}}}}^{\top }\right] _T \end{aligned}$$

while the right term in the real proof is:

$$\begin{aligned} \left[ \varvec{\Pi } {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}}\varvec{\Pi }^{\top }\right] _T= & {} \left[ {{\mathbf {{A}}}}(\vec {w} \vec {r}^{\top } + \vec {r} \vec {w}^{\top }) {{\mathbf {{A}}}}^{\top } + {{\mathbf {{A}}}} ({{\mathbf {{H}}}}+ {{\mathbf {{H}}}}^{\top }) {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}} \vec {r} \vec {z}^{\top }+ \vec {z} \vec {r}^{\top } {{\mathbf {{A}}}}^{\top }\right] _T\nonumber \\ \end{aligned}$$
$$\begin{aligned}= & {} \left[ {{\mathbf {{A}}}}(\vec {r} \vec {w}^{\top }+ \vec {w} \vec {r}^{\top }) {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}} \vec {r} \vec {z}^{\top }+ \vec {z} \vec {r}^{\top } {{\mathbf {{A}}}}^{\top }\right] _T. \end{aligned}$$

This proves perfect completeness.

Soundness. Let \(\vec {\xi }\in \mathbb {Z}_q^{\ell }\) be any vector such that \(\vec {\xi }^{\top } {{\mathbf {{A}}}}=\vec {0}\), \(\vec {\xi }^{\top } \vec {z}=1\). This implies that in the soundness setting, \(\vec {\xi }^{\top } \vec {u}_{k+1}=1\). Therefore, if \([\varvec{\Pi }]\) is any proof that satisfies the verification equation, multiplying on the left by \(\vec {\xi }^{\top }\) and the right by \(\vec {\xi }\),

$$\begin{aligned} \vec {\xi }^{\top } \left[ \vec {\Phi } \vec {u}_{k+1}^{\top }+\vec {u}_{k+1}\vec {\Phi }^{\top }\right] _T \vec {\xi }= \vec {\xi }^{\top } \left[ \varvec{\Pi } {{\mathbf {{A}}}}^{\top }+{{\mathbf {{A}}}} \varvec{\Pi }^{\top }\right] _T \vec {\xi }, \end{aligned}$$

we obtain

$$\begin{aligned} \left[ \vec {\xi }^{\top } \vec {\Phi } + \vec {\Phi }^{\top } \vec {\xi }\right] _T=[0]_T. \end{aligned}$$

Since \([\vec {\xi }^{\top } \vec {\Phi } + \vec {\Phi }^{\top } \vec {\xi }]_T=2[\vec {\xi }^{\top } \vec {\Phi }]_T\), it follows from this last equation that \([\vec {\xi }^{\top } \vec {\Phi }]_T=[0]_T\). This holds for any vector \(\vec {\xi }\) such that \(\vec {\xi }^{\top } {{\mathbf {{A}}}}=\vec {0}\) and \(\vec {\xi }^{\top } \vec {z}=1\), which implies that \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\) and proves perfect soundness.

Composable Zero-Knowledge. We will now see that, in the witness indistinguishability setting, a real proof and a simulated proof have the same distribution when \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\). We first note that both satisfy the verification equation. Indeed, the left term of the verification equation in the WI setting is

$$\begin{aligned} \left[ \vec {\Phi } \vec {u}_{k+1}^{\top }+\vec {u}_{k+1}\vec {\Phi }^{\top }\right] _T= \left[ {{\mathbf {{A}}}}\left( \vec {r} \vec {w}^{\top }+ \vec {w} \vec {r}^{\top }\right) {{\mathbf {{A}}}}^{\top }\right] _T, \end{aligned}$$

which is obviously equal to the right term of the verification equation for the real proof (rewrite Eq. (5) in the WI setting). On the other hand, if \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\), the right term of the verification equation for a simulated proof is:

$$\begin{aligned} \left[ \varvec{\Pi }_{\mathrm{sim}} {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}}\varvec{\Pi }_{\mathrm{sim}}^{\top }\right] _T= & {} \left[ {{\mathbf {{A}}}}\left( \vec {r} \vec {w}^{\top } + \vec {w} \vec {r}^{\top }\right) {{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}} \left( {{\mathbf {{H}}}}'+ ({{\mathbf {{H}}}}')^{\top }\right) {{\mathbf {{A}}}}^{\top }\right] _T \\= & {} \left[ {{\mathbf {{A}}}} \left( \vec {r} \vec {w}^{\top } + \vec {w} \vec {r}^{\top }\right) {{\mathbf {{A}}}}^{\top }\right] _T, \end{aligned}$$

for some \({{\mathbf {{H}}}}' \in \mathcal {H}\).

We now argue that an honestly generated proof \([\varvec{\Pi }]\) and a simulated proof \([\varvec{\Pi }_{\mathrm{sim}}]\) have the same distribution. By construction, there exist matrices \(\varvec{\Theta }\) and \(\varvec{\Theta }'\) such that \([\varvec{\Pi }]=[{{\mathbf {{A}}}}\varvec{\Theta }]\) and \([\varvec{\Pi }_{\mathrm{sim}}]=[{{\mathbf {{A}}}}\varvec{\Theta }']\). Now, if \([\varvec{\Pi }_1]=[{{\mathbf {{A}}}}\varvec{\Theta }_1]\) and \([\varvec{\Pi }_2]=[{{\mathbf {{A}}}}\varvec{\Theta }_2]\) are two proofs, real or simulated, which satisfy the verification equation, then necessarily \([(\varvec{\Pi }_1-\varvec{\Pi }_2){{\mathbf {{A}}}}^{\top }+ {{\mathbf {{A}}}} (\varvec{\Pi }_1-\varvec{\Pi }_2)^{\top }]_T=[{{\mathbf {{A}}}}((\varvec{\Theta }_1-\varvec{\Theta }_2)+(\varvec{\Theta }_1-\varvec{\Theta }_2)^{\top }) {{\mathbf {{A}}}}^{\top }]_T=[{{\mathbf {{0}}}}]_T\).

Since \({{\mathbf {{A}}}}\) has rank k with overwhelming probability, it must hold that \((\varvec{\Theta }_1-\varvec{\Theta }_2)+(\varvec{\Theta }_1-\varvec{\Theta }_2)^{\top }={{\mathbf {{0}}}}\), that is, \((\varvec{\Theta }_1-\varvec{\Theta }_2) \in \mathcal {H}\). By construction, for both honestly generated and simulated proofs these differences are uniformly distributed in \(\mathcal {H}\). \(\square \)

6.1.2 Efficiency Comparison and Applications

For the \(2\text{- }\textsf {Lin}\) assumption (\(\ell =3,k=2\)), our proof consists of only six group elements, whereas without our technique the proof consists of 12 elements. More generally, to prove that \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\), for some \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\), with a GS instantiation based on a (possibly unrelated) \(\mathcal {D}_{\ell ',k'}\)-matrix DH problem using standard GS proofs, one would prove that the following equation is satisfiable for all \(i=1,\ldots ,\ell \):

$$\begin{aligned} r_1 [u_{1,i}]+ \cdots +r_k [u_{k,i}]= [\Phi _i], \end{aligned}$$

that is, one needs to prove that \(\ell \) linear equations with k variables are satisfied. Therefore, according to Table 1, the verifier must be given \(k \ell '\) elements of \(\mathbb {G}\) for the commitments and \(\ell k'\) elements of \(\mathbb {G}\) for the proof. On the other hand, proving \([\vec {\Phi }] \in \mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\) using our approach requires \(\ell k\) elements of \(\mathbb {G}\), corresponding to the size of the proof of one quadratic equation.
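The counts in this comparison are easy to tabulate; a minimal sketch of the bookkeeping (the formulas are those quoted above, \(k\ell '+\ell k'\) for standard GS proofs versus \(\ell k\) for our approach):

```python
def standard_gs_size(ell, k, ell_p, k_p):
    # k * ell' elements for the commitments + ell * k' elements for the proof
    return k * ell_p + ell * k_p

def our_size(ell, k):
    # one quadratic-equation proof of size ell * k, with no commitments
    return ell * k

# 2-Lin instantiation proving membership w.r.t. a 2-Lin matrix
# (ell = ell' = 3, k = k' = 2), as in the example above
print(standard_gs_size(3, 2, 3, 2), our_size(3, 2))  # 12 6
```

This recovers the 12-versus-6 comparison stated at the beginning of this subsection.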

Applications. For a typical application scenario of Theorem 14, think of \([{{\mathbf {{A}}}}]\) as part of the public parameters of the hash proof system of Sect. 5.2. Proving that a ciphertext is well formed is proving membership in \(\mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\). Another application is to show that two ciphertexts encrypt the same message under the same public key, a common problem in electronic voting or anonymous credentials. There are many other settings in which subgroup membership problems naturally appear, for instance the problem of certifying public keys or given some plaintext m, the problem of proving that a certain ciphertext is an encryption of [m]. We stress that in our construction the setup of the CRS can be built on top of the encryption key so that proofs can be simulated without the decryption key, which is essential for many of these applications. More concretely, below we give two application examples.

Application Example 1. The standard proof of membership in \(\mathcal {L}_{{{\mathbf {{A}}}},\mathcal {PG}}\) for \({{\mathbf {{A}}}} \leftarrow \mathcal {L}_2\), using a GS instantiation based on the same assumption (with \(\ell =\ell '=3\), \(k=k'=2\)), requires 12 group elements, while with our approach only six elements are required. This reduces the ciphertext size of one of the instantiations of [35] from 15 to 9 group elements.

Application Example 2. With our results, we can also give a more efficient proof of correct opening of the Cramer–Shoup ciphertext. We briefly recall the CS encryption scheme based on the \(2\text{- }\textsf {Lin}\) assumption ([23, 45]). The public key consists of the description of some group \(\mathcal {G}\) and a tuple \([a_1,a_2, X_1,X_2,X_3,X_4,X_5,X_6] \in \mathbb {G}^8\). Given a message \([m] \in \mathbb {G}\), a ciphertext is constructed by picking random \(r,s \in \mathbb {Z}_q\) and setting

$$\begin{aligned} C:=[r (a_1,0,1,X_5,X_1+\alpha X_3)+ s (0,a_2,1,X_6,X_2+\alpha X_4)+(0,0,m,0,0)], \end{aligned}$$

where \(\alpha \) is the hash of some components of the ciphertext and possibly some label. To prove that a ciphertext opens to a (known) message [m], subtract [m] from the third component of the ciphertext and prove membership in \(\mathcal {L}_{{{\mathbf {{A}}}}_{\alpha },\mathcal {PG}}\), where \({{\mathbf {{A}}}}_{\alpha }\) is defined as:

$$\begin{aligned} {{\mathbf {{A}}}}_{\alpha }:= \left( \begin{array}{cc} a_1&{} 0 \\ 0 &{} a_2 \\ 1 &{} 1 \\ X_5 &{} X_6 \\ X_1+\alpha X_3&{} X_2+\alpha X_4 \\ \end{array}\right) = \left( \begin{array}{cccccc} 1 &{} 0 &{} 0 &{}0 &{}0&{}0\\ 0 &{} 1 &{} 0 &{}0 &{}0&{}0\\ 0 &{} 0 &{} 1 &{}0 &{}0&{}0\\ 0 &{} 0 &{} 0 &{}1 &{}0&{}0\\ 0 &{} 0 &{} 0 &{}0 &{}1&{}\alpha \\ \end{array}\right) \left( \begin{array}{cc} a_1&{} 0 \\ 0 &{} a_2 \\ 1 &{} 1 \\ X_5 &{} X_6\\ X_1&{} X_2 \\ X_3 &{} X_4 \\ \end{array}\right) . \end{aligned}$$

Denote by \({{\mathbf {{M}}}}_\alpha , {{\mathbf {{C}}}}\) the two matrices on the right-hand side of the previous equation, so that \({{\mathbf {{A}}}}_{\alpha }={{\mathbf {{M}}}}_\alpha {{\mathbf {{C}}}}\). The matrix \({{\mathbf {{A}}}}_\alpha \) depends on \(\alpha \) and is different for each ciphertext, so it cannot be included in the CRS. Instead, we include the matrix \([{{\mathbf {{U}}}}_C]:=[{{\mathbf {{C}}}}||{{\mathbf {{C}}}}\vec {w}+\vec {z}_C]\) in the soundness setting and \([{{\mathbf {{U}}}}_C]:=[{{\mathbf {{C}}}}||{{\mathbf {{C}}}}\vec {w}]\) in the WI setting, for \(\vec {z}_C \notin \mathrm {Im}({{\mathbf {{C}}}})\), for instance \(\vec {z}_C^{\top }:=(0,0,0,0,1,0)\). To prove membership in \(\mathcal {L}_{{{\mathbf {{A}}}}_\alpha ,\mathcal {PG}}\) as explained, we make the proof with respect to the CRS \([{{\mathbf {{U}}}}_{\alpha }]:=[{{\mathbf {{M}}}}_\alpha {{\mathbf {{U}}}}_C]\). Clearly, if \(\vec {z}^{\top }:=(0,0,0,0,1)\), then \([{{\mathbf {{U}}}}_{\alpha }]=[{{\mathbf {{A}}}}_\alpha ||{{\mathbf {{A}}}}_\alpha \vec {w}+\vec {z}]\) in the soundness setting and \([{{\mathbf {{U}}}}_{\alpha }]=[{{\mathbf {{A}}}}_\alpha ||{{\mathbf {{A}}}}_\alpha \vec {w}]\) in the WI setting, as required. The resulting proof consists of 10 group elements, as opposed to 16 using standard GS proofs. This applies to the result of [17], Sect. 3.
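The factorization \({{\mathbf {{A}}}}_\alpha ={{\mathbf {{M}}}}_\alpha {{\mathbf {{C}}}}\) can be checked mechanically. Below is a toy Python sketch with the exponents \(a_1,a_2,X_1,\ldots ,X_6,\alpha \) sampled arbitrarily modulo a small prime (for illustration only; in the scheme these are public-key components and a hash value):

```python
import random

q = 1009  # toy prime standing in for the group order
random.seed(5)
a1, a2, X1, X2, X3, X4, X5, X6, alpha = (random.randrange(q) for _ in range(9))

def matmul(M, N):
    return [[sum(M[i][t] * N[t][j] for t in range(len(N))) % q
             for j in range(len(N[0]))] for i in range(len(M))]

A_alpha = [[a1, 0],
           [0, a2],
           [1, 1],
           [X5, X6],
           [(X1 + alpha * X3) % q, (X2 + alpha * X4) % q]]

M_alpha = [[1, 0, 0, 0, 0, 0],
           [0, 1, 0, 0, 0, 0],
           [0, 0, 1, 0, 0, 0],
           [0, 0, 0, 1, 0, 0],
           [0, 0, 0, 0, 1, alpha]]

C = [[a1, 0],
     [0, a2],
     [1, 1],
     [X5, X6],
     [X1, X2],
     [X3, X4]]

print(matmul(M_alpha, C) == A_alpha)  # True: A_alpha = M_alpha C
```

Note that only \({{\mathbf {{M}}}}_\alpha \) depends on \(\alpha \), and \({{\mathbf {{M}}}}_\alpha \) is public, which is exactly why \([{{\mathbf {{U}}}}_\alpha ]=[{{\mathbf {{M}}}}_\alpha {{\mathbf {{U}}}}_C]\) can be computed from a CRS containing only \([{{\mathbf {{U}}}}_C]\).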

6.2 Other CRS-Dependent Languages

The techniques of the previous section can be extended to other languages, namely:

  • A proof of validity of a ciphertext, that is, given \([{{\mathbf {{A}}}}]\), \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\), and some vector \(\vec {z}\in \mathbb {Z}_q^\ell \), \(\vec {z} \notin \mathrm {Im}({{\mathbf {{A}}}})\), one can use the same techniques to give a more efficient proof of membership in the space:

    $$\begin{aligned} \mathcal {L}_{{{\mathbf {{A}}}},\vec {z},\mathcal {PG}}=\{ [\vec {c}] : \vec {c}= {{\mathbf {{A}}}}\vec {r}+m \vec {z} \} \subset \mathbb {G}^{\ell }, \end{aligned}$$

    where \((\vec {r},[m]) \in \mathbb {Z}_q^k \times \mathbb {G}\) is the witness. This is also a proof of membership in the subspace of \(\mathbb {G}^{\ell }\) spanned by the columns of \([{{\mathbf {{A}}}}]\) and the vector \(\vec {z}\), but part of the witness, [m], is in the group \(\mathbb {G}\) and not in \(\mathbb {Z}_q\), while part of the matrix generating the subspace is in \(\mathbb {Z}_q\). However, it is not hard to modify the subgroup membership proofs of Sect. 6.1 to account for this. In particular, since the GS proof system is a non-interactive zero-knowledge proof of knowledge when the witnesses are group elements, the proof guarantees both that \([\vec {c}]\) is well formed and that the prover knows [m]. In a typical application, \([\vec {c}]\) will be the ciphertext of some encryption scheme, in which case \(\vec {r}\) will be the ciphertext randomness and [m] the message.

  • A proof of plaintext equality. The encryption scheme derived from the KEM given in Sect. 5.1 corresponds to a commitment in GS proofs — except that the commitment is always binding. That is, if \( pk _{A}=(\mathcal {G}, [{{\mathbf {{A}}}}] \in \mathbb {G}^{\ell \times k})\), for some \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell ,k}\), given \(\vec {r} \in \mathbb {Z}^k_q\),

    $$\begin{aligned} \textsf {Enc}_{ pk _A}([m];\vec {r})= & {} [\vec {c}]= [{{\mathbf {{A}}}} \vec {r}+(0,\ldots ,0,m)^{\top }]=[{{\mathbf {{A}}}} \vec {r}+m \cdot \vec {z}] \\= & {} \textsf {com}_{[{{\mathbf {{A}}}}||{{\mathbf {{A}}}}\vec {w}]}([m]; \vec {s}), \end{aligned}$$

    where \(\vec {s}^{\top }:=(\vec {r}^{\top },0)\) and \(\vec {z}:=(0,\ldots ,0,1)^{\top }\). Therefore, given two (potentially distinct) matrix distributions \(\mathcal {D}_{\ell _1,k_1}\), \(\mathcal {D}'_{\ell _2,k_2}\) and \({{\mathbf {{A}}}} \leftarrow \mathcal {D}_{\ell _1,k_1}, {{\mathbf {{B}}}} \leftarrow \mathcal {D}'_{\ell _2,k_2}\), proving equality of the plaintexts of two ciphertexts encrypted under \(pk_A,pk_B\) corresponds to proving that two commitments under different keys open to the same value. One gains in efficiency with respect to the standard use of GS proofs because one does not need to give any commitments as part of the proof, since the ciphertexts themselves play this role. More specifically, given \([\vec {c}_A]=\textsf {Enc}_{ pk _A}([m])\) and \([\vec {c}_B]=\textsf {Enc}_{ pk _B}([m])\), one can treat \([\vec {c}_A]\) as a commitment to the variable \([\textsf {x}] \in A_1=\mathbb {G}\) and \([\vec {c}_B]\) as a commitment to the variable \([\textsf {y}] \in A_2=\mathbb {G}\) and prove that the quadratic equation \(e([\textsf {x}],[1]) \cdot e([-1],[\textsf {y}])=[0]_T\) is satisfied. The only remaining issue is how to construct the simulator of the NIZK proof system, since the commitments are always binding. For this, one uses a similar trick as in the membership proofs, namely letting the zero-knowledge simulator open \(\iota _1([1])\), \(\iota _2([-1])\) as commitments to the [0] variable and simulate a proof for the equation \(e([\textsf {x}],[0]) \cdot e([0],[\textsf {y}])=[0]_T\), which is trivially satisfiable and can be simulated. In [27], we reduce the size of the proof by four group elements, from 22 to 18, while in [22] we save nine elements, although their proof is quite inefficient altogether. 
We note that even though both papers give a proof that two ciphertexts under two different \(2\text{- }\textsf {Lin}\) public keys correspond to the same value, the proof in [22] is less efficient because it must use GS proofs for pairing product equations instead of multi-scalar multiplication equations. Other examples include [10, 15].
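The identification of ciphertexts with commitments used above can also be checked mechanically. Below is a toy Python sketch in exponent arithmetic over a small prime, taking \(\iota \) to embed m as \(m\cdot \vec {z}\) (our reading of the displayed equation; the real objects are group elements):

```python
import random

q = 1009  # toy prime modulus
random.seed(6)
ell, k = 3, 2

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % q for i in range(len(M))]

A = [[random.randrange(q) for _ in range(k)] for _ in range(ell)]
w = [random.randrange(q) for _ in range(k)]
z = [0] * (ell - 1) + [1]                              # z = (0, ..., 0, 1)^T
U = [row + [aw] for row, aw in zip(A, matvec(A, w))]   # U = [A || A w]

def enc(m, r):
    """Enc_pkA([m]; r) = A r + m z (in the exponent)."""
    Ar = matvec(A, r)
    return [(Ar[i] + m * z[i]) % q for i in range(ell)]

def com(m, s):
    """com_U([m]; s) = m z + U s, a commitment to a group element."""
    Us = matvec(U, s)
    return [(m * z[i] + Us[i]) % q for i in range(ell)]

m = 42
r = [random.randrange(q) for _ in range(k)]
print(enc(m, r) == com(m, r + [0]))  # True: a ciphertext is a commitment
```

This is precisely why no separate commitments are needed in the plaintext-equality proof: the ciphertext under \(pk_A\) already is a commitment with randomness \(\vec {s}^{\top }=(\vec {r}^{\top },0)\).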