Abstract
Verifiable Random Functions (VRFs) as introduced by Micali, Rabin and Vadhan are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment.
Prior works have approached the problem of constructing VRFs by proposing a candidate under a specific number theoretic setting — mostly in bilinear groups — and then grappling with the challenges of proving security in the VRF environments. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions.
In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function for a functionality that is “admissible hash friendly”, (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness indistinguishable proofs or NIWIs. The first primitive can be replaced with a more basic puncturable PRF constraint if one is willing to settle for selective security or assume subexponential hardness of assumptions.
In the second half of our work, we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible hash friendly constraints. Our first construction is from the \(n\text {-}\mathsf {power DDH}\) assumption. The next is from the \(\phi \)-hiding assumption.
1 Introduction
Verifiable Random Functions (VRFs) as introduced by Micali et al. [30] are a special form of Pseudo Random Functions (PRFs) [20] wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. The caveat is that the pseudorandomness of the function on other points must not be sacrificed even after providing polynomially many proofs. The VRF definition forbids interactivity or any setup assumption, thereby disallowing trivial extensions of PRFs and making the problem more challenging and interesting.
Prior works [13, 14, 22, 24, 25, 29] have approached the problem of constructing VRFs by proposing a candidate under a specific number theoretic setting — mostly in bilinear groups — and then grappling with the challenges of proving security in the VRF environments. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions.
In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset.
In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function [8, 10, 27] for a functionality that is “admissible hash friendly”, (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness indistinguishable proofs or NIWIs [16]. The first primitive can be replaced with a more basic puncturable PRF [36] constraint if one is willing to settle for selective security or assume subexponential hardness of assumptions.
The first benefit of our approach is that by generically breaking down the problem we expose and separate the core features of VRFs. Namely, we can see that in spirit any reduction must both develop a way of constraining itself from knowing the output of the entire PRF space while at the same time be able to develop noninteractive proofs without a common setup. Second, with the VRF problem dissected into constituent parts, we can explore and develop number theoretic solutions to each piece. Ideally, this breakdown will help us develop a wider array of solutions and in particular break away from the dependence on bilinear maps. We now look at each primitive in turn.
Beginning with constrained PRFs, our goal is to build them for constraints that we call admissible hash [7] compatible. In particular, we need a constrained key that can be associated with a string \(z \in \{0,1, \bot \}^n\), where the constrained key can be evaluated on any input \(x \in \{0,1\}^n\) with \(x \ne z\). For our purposes such a scheme only needs to be secure in a model where the attacker is allowed a single key query. The recent work of Brakerski and Vaikuntanathan [11] constructs 1-bounded constrained PRFs under the learning with errors (LWE) [35] assumption that can handle any constraint in \(\varvec{\mathrm {NC}}^1\), which encompasses the admissible hash compatible functionality.
We complement this by providing a new construction of constrained PRFs that is admissible hash friendly in the setting of non-bilinear groups. Our construction is proven secure under the \(n\text {-}\mathsf {power DDH}\) problem: informally, given \(g, g^a, g^{a^2},\ldots , g^{a^{n-1}}\), it is hard to distinguish \(g^{a^n}\) from a random group element. We note that this problem in composite order groups reduces to the subgroup decision problem [12]. In addition, as mentioned above, if we assume subexponential hardness of our assumptions or relax to selective security, we can instead rely on puncturable PRFs, which are realizable from any one-way function.
We next turn to constructing non-interactive perfectly binding commitments. The main challenge here is that any solution must not utilize a trusted setup, since a trusted setup is disallowed in the VRF setting. Naor [32] showed how any certifiably injective one-way function gives rise to such a commitment scheme. Injective functions can in turn be based on (certifiable) groups where discrete log is hard.
We develop new constructions for non-interactive perfectly binding commitments from noisy cryptographic assumptions. We give and prove constructions under the Learning with Errors and Learning Parity with Noise (LPN) assumptions. Our LPN solution uses a low-noise variant (\(\beta \approxeq \frac{1}{\sqrt{n}}\)) of the LPN assumption that has been used in previous public key encryption schemes [1]. We also develop an approach for proving security under LPN with constant noise. Our solution requires the existence of an explicit error correcting code with certain properties. We leave finding such a code as an interesting open problem.
Finally, we arrive at NIWIs. There are three basic approaches to building NIWIs. First, in the bilinear setting, it is known [21] how to construct NIWIs from the decision linear assumption. Second, Barak, Ong and Vadhan (BOV) [4] showed that two-message public-coin witness indistinguishable proofs (a.k.a. ZAPs [15]) imply NIWIs under certain complexity theoretic assumptions that allow for derandomization. Finally, indistinguishability obfuscation [18] gives rise to NIWI constructions [6].
Taking a step back, we can see that our approach already leads to constructions of VRFs with new properties. For example, if we build ZAPs from trapdoor permutations and apply the BOV theorem, we can achieve multiple constructions of adaptively secure VRFs, without complexity leveraging, that do not use bilinear groups. In addition, given the wide array of choices for building our commitments and constrained PRFs, our work reveals that developing new techniques for building and proving NIWIs is the primary bottleneck for progress towards VRFs.
1.1 Technical Overview
We now give a high level overview of our technical approach. A formal treatment is given in the main body. We break our overview into three pieces. First we describe our generic construction of Verifiable Random Functions. Next, we define admissible hash compatible constrained PRFs and go over our non-bilinear group solution. Finally, we overview our LWE and LPN solutions to non-interactive perfectly binding commitments.
Constructing VRFs Generically. We first briefly review the definition of a Verifiable Random Function. In the VRF framework, a party runs the \(\mathsf {Setup}\) algorithm to generate a pair of secret key \(\mathrm {SK}\) and public verification key \(\mathrm {VK}\). Using the secret key \(\mathrm {SK}\), it can efficiently evaluate the function \(F_{\mathrm {SK}}(\cdot )\) on any input x and produce a proof \(\varPi \) of the statement \(y = F_{\mathrm {SK}}(x)\). The verification key can be viewed as a statistically binding commitment to the underlying pseudorandom function. A third party verification algorithm \(\mathsf {Verify}\) is used to verify a proof \(\varPi \), taking the verification key \(\mathrm {VK}\), function evaluation y, and message x as additional inputs. First, the soundness condition dictates that for each \((\mathrm {VK}, x)\) pair there should be at most one output y such that \(\mathsf {Verify}(\mathrm {VK}, x, y, \pi ) = 1\). Importantly, VRFs do not make use of any setup assumption, and soundness should hold even in the case of a maliciously generated \(\mathrm {VK}\). Second, it should also hold that the output of the function \(F_{\mathrm {SK}}(\cdot )\) is indistinguishable from a random string even after observing polynomially many evaluations and proofs at adversarially chosen points. The latter is formalized as the pseudorandomness property of the VRF.
We now give a simple construction from the aforementioned primitives. The VRF setup proceeds as follows. First, a constrained PRF key K is sampled and kept as part of the secret key. Next, a sequence of three independent commitments \(c_1, c_2, c_3\) is computed such that each commitment \(c_i\) opens to the key K.^{Footnote 1} The triple of commitments \((c_1, c_2, c_3)\) is stored as the public verification key and the corresponding randomness used during commitment is included in the secret key. For evaluating the VRF on any input x, we first apply an admissible hash function on x and then evaluate the constrained PRF on the output of admissible hash. In short, the VRF output on some input x is \(\mathsf {PRF}_K(h(x))\). For proving correctness of evaluation, we use non-interactive witness indistinguishable proofs (NIWIs). In particular, to prove that the output of VRF on some input x is y, we create a NIWI proof for the statement that at least two out of three commitments \((c_1, c_2, c_3)\) (in the verification key) open to keys \(K_1, K_2\) such that \(y = \mathsf {PRF}_{K_1}(h(x)) = \mathsf {PRF}_{K_2}(h(x))\) (the idea of a majority-based decoding (i.e., two out of three trick) was also used in [2]). We would like to emphasize that keys \(K_1\) and \(K_2\) need not be identical as the only constraint that must hold is that the PRF evaluation of input h(x) must be equal to y irrespective of the key (out of \(K_1, K_2\)) used. The proof verification can be done in a straightforward manner as it simply involves running the NIWI verifier.
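The two-out-of-three statement proved under the NIWI can be made concrete as an NP relation. The sketch below is illustrative only: the hash-based `commit` and `prf` are stand-ins (a real instantiation needs a perfectly binding commitment and a constrained PRF), and the NIWI itself is abstracted away.

```python
import hashlib

# Stand-in primitives (NOT the paper's): a hash-based commitment and a
# hash-based PRF, used only to make the relation concrete and testable.
def commit(m, r):
    return hashlib.sha256(m + b"|" + r).hexdigest()

def prf(key, hx):
    return hashlib.sha256(key + b"|" + hx).hexdigest()

def vrf_relation(vk, hx, y, witness):
    # Statement: at least two of the three commitments in vk open to keys
    # K1, K2 with PRF(K1, h(x)) = PRF(K2, h(x)) = y. The keys need not be
    # equal; only their evaluations at h(x) must agree.
    (i, k1, r1), (j, k2, r2) = witness
    return (i != j and
            vk[i] == commit(k1, r1) and vk[j] == commit(k2, r2) and
            prf(k1, hx) == y and prf(k2, hx) == y)
```

A witness names two commitment indices together with the keys and openings; any two of the three commitments suffice, which is what the hybrid proof later exploits.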
Now we briefly sketch the idea behind the pseudorandomness proof in the adaptive setting. To prove security we use a “partitioning” argument where roughly a 1/Q fraction of inputs can be used as challenges and the remaining \(1 - 1/Q\) fraction will be used for answering evaluation queries, where Q is the number of queries made by an attacker. The first step in the reduction is to concretely define the challenge and non-challenge partitions using the admissible hash function. Next, we leverage the facts that all the evaluation queries will lie outside the challenge partition^{Footnote 2} and that for generating the evaluation proofs we only need openings of two key commitments out of three. At a high level, our goal is to switch all three commitments \(c_1, c_2, c_3\) such that they commit to the constrained key \(K'\) instead of key K, where \(K'\) could be used to evaluate the VRF on all points outside the challenge partition. To this end, the reduction proceeds as follows.
First, the challenger makes two crucial modifications — (1) it generates a constrained PRF key \(K'\) along with the master key K, (2) it computes \(c_3\) as a commitment to key \(K'\) instead of key K. Such a hybrid jump is indistinguishable by the hiding property of the commitment scheme, since the challenger does not need the opening of \(c_3\) to generate any of the evaluation proofs. Next, we switch the NIWI witness used to generate the proof. In particular, the challenger now uses openings of \(c_2, c_3\) as the NIWI witnesses. This only results in a negligible dip in the adversary’s advantage because for all inputs outside the challenge partition, the PRF evaluation using the master key K and constrained key \(K'\) is identical, thus the openings of any two commitments out of \(c_1, c_2, c_3\) could be used as the NIWI witness. Applying similar modifications in succession, all three commitments \(c_1, c_2, c_3\) can be switched to commitments of the constrained key \(K'\). Once all three commitments open to the constrained key \(K'\), the challenger can directly reduce an attack on the VRF pseudorandomness to an attack on the constrained pseudorandomness of the PRF.
It is also interesting to note that if we use a puncturable PRF instead of an admissible hash compatible constrained PRF, then the same construction can be proven selectively secure with only polynomial security loss to the underlying assumptions. The major difference in the proof is the partitioning step: instead of using the admissible hash function to perform partitioning and aborting in case of bad partitions, the reduction already knows the challenge input at the start, so it only needs to puncture the PRF key on the challenge input in order to use the same sequence of hybrids. This is discussed in detail in Sect. 3.
Admissible Hash Compatible Constrained PRFs. A constrained PRF family consists of a setup algorithm that outputs the master PRF key, and a constrain algorithm that takes as input the master PRF key and a constraint, and outputs a constrained PRF key. The constrained PRF key can be used to evaluate the PRF at all points satisfied by the constraint. As mentioned in the previous paragraph, for constructing adaptively secure VRFs, we require constrained PRFs for a special class of “admissible hash compatible” constraints. Each constraint is specified by a string \(u \in \{0,1,\perp \}^n\). A constrained key for u can be used to evaluate the PRF at all points x such that there exists an index \(i \le n\) where \(u_i \ne \ \perp \) and \(x_i \ne u_i\). For this work, we require a weaker notion of security which we call ‘single-key no-query’ security. Here, the adversary first sends a constrained key query u. After receiving the constrained key, it sends a challenge point x that does not satisfy the constraint (that is, for all \(i\le n\), either \(u_i =\ \perp \), or \(x_i = u_i\)). It then receives either the PRF evaluation at x or a uniformly random string, and it must distinguish between the two scenarios.
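The constraint predicate just described is simple enough to state directly in code; a minimal sketch (with Python's `None` standing in for \(\perp \)):

```python
BOT = None  # stands in for the wildcard symbol ⊥

def satisfies(u, x):
    # A constrained key for u in {0,1,⊥}^n can evaluate at x in {0,1}^n
    # iff some index i has u[i] != ⊥ and x[i] != u[i]. The challenge point
    # in the single-key no-query game must make this False.
    assert len(u) == len(x)
    return any(ui is not BOT and xi != ui for ui, xi in zip(u, x))
```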
Powers-DDH Construction. This construction, at a high level, is similar to the Naor-Reingold PRF construction. The PRF key consists of 2n integers \(\{c_{i,b}\}_{i\le n, b\in \{0,1\}}\) and a group element g. To evaluate at a point x, we first choose n out of the 2n integers, depending on the bits of x. Let t denote the product of these n integers. The PRF evaluation is \(g^t\). A constrained key for constraint \(u \in \{0,1,\perp \}^n\) consists of n powers of a random integer a in the exponent of g: (g, \(g^a\), \(\ldots \), \(g^{a^{n-1}}\)) and 2n integers \(\{v_{i,b}\}\). Each \(v_{i,b}\) is set to be either \(c_{i,b}\) or \(c_{i,b}/a\), depending on \(u_i\). Using the \(v_{i,b}\) and an appropriate \(g^{a^k}\) term, one can compute the PRF evaluation at any point x that satisfies the constraint (that is, if there exists an \(i\le n\) such that \(u_i \ne \ \perp \) and \(x_i \ne u_i\)). However, if x does not satisfy the constraint, then one needs to compute \(g^{a^n}\) to compute the PRF evaluation at x. Using the \(n\text {-}\mathsf {power DDH}\) assumption, we can argue that if an adversary can distinguish between the PRF evaluation and a truly random string, then one can use this adversary to distinguish between \(g^{a^n}\) and a random group element.
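The exponent bookkeeping above can be sketched numerically. This is a toy model under stated assumptions: the subgroup parameters are tiny and insecure, and the choice of which \(v_{i,b}\) absorbs a factor of \(1/a\) follows the description above (divide by a exactly at positions consistent with u).

```python
import random

# Toy subgroup parameters (illustrative only, nowhere near secure sizes)
Q = 1019            # prime order of the subgroup
P = 2 * Q + 1       # safe prime, P = 2039
G = 4               # generator of the order-Q subgroup of Z_P^*

def keygen(n):
    # master key: 2n random exponents c_{i,b}
    return {(i, b): random.randrange(1, Q) for i in range(n) for b in (0, 1)}

def prf(c, x):
    # F(K, x) = g^{prod_i c_{i, x_i}}, product taken mod the group order Q
    t = 1
    for i, xi in enumerate(x):
        t = t * c[(i, xi)] % Q
    return pow(G, t, P)

def constrain(c, u):
    # v_{i,b} = c_{i,b}/a when u_i = ⊥ (None) or b = u_i, else c_{i,b}.
    # The key ships g, g^a, ..., g^{a^{n-1}}, so at most n-1 missing factors
    # of a can be restored; a point matching u everywhere would need g^{a^n}.
    n = len(u)
    a = random.randrange(2, Q)
    a_inv = pow(a, -1, Q)
    v = {(i, b): c[(i, b)] * a_inv % Q
         if (u[i] is None or b == u[i]) else c[(i, b)]
         for i in range(n) for b in (0, 1)}
    powers = [pow(G, pow(a, k, Q), P) for k in range(n)]   # g^{a^k}, k < n
    return v, powers

def constrained_eval(v, powers, u, x):
    # k = number of positions whose chosen v_{i,x_i} lost a factor of a
    k = sum(1 for i in range(len(x)) if u[i] is None or x[i] == u[i])
    if k >= len(powers):
        raise ValueError("x does not satisfy the constraint")
    t = 1
    for i, xi in enumerate(x):
        t = t * v[(i, xi)] % Q
    return pow(powers[k], t, P)   # (g^{a^k})^{t} = g^{prod c} = F(K, x)
```

On a point that matches u at every non-\(\perp \) position, k equals n and the needed \(g^{a^n}\) is exactly the element the \(n\text {-}\mathsf {power DDH}\) assumption hides.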
Phi-Hiding Construction. In this scheme, the PRF key consists of an RSA modulus N, its factorization (p, q), 2n integers \(c_{i, b}\), a base integer h and a strong extractor seed \(\mathfrak {s}\). The PRF evaluation on an n-bit string is performed as follows: first choose n out of the 2n integers depending on the input, compute their product, then compute this product in the exponent of h, and finally apply a strong extractor with seed \(\mathfrak {s}\). A constrained key for constraint \(u \in \{0,1,\perp \}^n\) consists of 2n integers \(\{v_{i,b}\}\), integers e and \(h^e\), and seed \(\mathfrak {s}\). Each \(v_{i,b}\) is set to be either \((c_{i,b} - 1)\cdot e^{-1}\) or \(c_{i,b}\cdot e^{-1}\), depending on \(u_i\). The integers \(v_{i, b}\) are set such that the PRF evaluation at any point x satisfying the constraint is of the form \(\mathsf {Ext}(h^{e \alpha }, \mathfrak {s})\), where \(\alpha \) can be computed using only the \(v_{i, b}\)’s and e. However, for all unsatisfying points x, the output is of the form \(\mathsf {Ext}(h^{1 + e \alpha }, \mathfrak {s})\). Using the phi-hiding assumption, we can argue that an adversary cannot distinguish between the case where e is coprime to \(\phi (N)\) and the case where e divides \(\phi (N)\). Note that in the latter case, there are e distinct \(e^{th}\) roots of \(h^e\). Thus, for any challenge point, the term \(h^{1 + e \alpha }\) has large min-entropy, and by the strong extractor guarantee we can conclude that it looks uniformly random to the adversary.
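A toy sketch of this exponent structure, under loudly stated simplifications: the strong extractor is omitted, the modulus is tiny and insecure, and instead of deriving \(v_{i,b}\) via \(e^{-1} \bmod \phi (N)\) as in the actual scheme, we sample \(c_{i,b}\) directly in the form \(e\cdot v + \delta \) over the integers (same residues mod e, simpler arithmetic).

```python
import random
from math import gcd

P_, Q_ = 1009, 1013           # toy RSA primes (illustrative only)
N_MOD = P_ * Q_
PHI = (P_ - 1) * (Q_ - 1)
E = 65537                     # public exponent, coprime to phi(N)
H = 3                         # base element

def constrain_keys(n, u):
    # c ≡ 1 (mod e) at positions consistent with u, c ≡ 0 (mod e) at the
    # satisfying (differing) direction; c is sampled as e*v + delta.
    assert gcd(E, PHI) == 1
    c, v = {}, {}
    for i in range(n):
        for b in (0, 1):
            v[(i, b)] = random.randrange(1, 50)
            delta = 1 if (u[i] is None or b == u[i]) else 0
            c[(i, b)] = E * v[(i, b)] + delta
    return c, v

def prf(c, x):
    # master evaluation h^{prod_i c_{i,x_i}} mod N (extractor step omitted)
    t = 1
    for i, xi in enumerate(x):
        t *= c[(i, xi)]
    return pow(H, t, N_MOD)

def constrained_eval(v, u, x):
    # the constrained key holds the v's, e and h^e, but not h itself
    h_e = pow(H, E, N_MOD)
    t = 1
    for i, xi in enumerate(x):
        delta = 1 if (u[i] is None or xi == u[i]) else 0
        t *= E * v[(i, xi)] + delta
    if t % E != 0:                  # x matches u everywhere: t = 1 + e*alpha
        raise ValueError("evaluation would require h, not just h^e")
    return pow(h_e, t // E, N_MOD)  # (h^e)^{t/e} = h^t
```

On satisfying points the product of chosen \(c\)'s is divisible by e, so the holder of \(h^e\) can finish; on the challenge point the product is \(1 + e\alpha \) and the extra \(h^1\) factor is out of reach.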
We can also show that the above construction is a secure constrained unpredictable function under the RSA assumption. Constrained unpredictability is a weaker notion of security than constrained pseudorandomness: the adversary must guess (rather than distinguish) the PRF evaluation on the challenge point.
New Constructions of Non-Interactive Perfectly Binding Commitments. Finally, the third component required for our VRF construction is a non-interactive perfectly binding commitment scheme (without trusted setup). In this work, we give new constructions for this primitive based on the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. (We emphasize that such commitments have applications beyond VRFs. For example, they are a key ingredient in building verifiable functional encryption [2].) Our LPN construction can be proven secure under the LPN with low noise assumption. Finally, we also give an approach for proving security under LPN with constant noise. This approach relies on the existence of special error correcting codes with a ‘robust’ generator matrix. Currently, we do not have any explicit constructions for this combinatorial object. For simplicity, we only consider single bit commitment schemes.
LWE Construction. In this scheme, we will be working in \(\mathbb {Z}_q\) for a suitably large prime q. The commitment to a bit b consists of a randomly chosen vector \(\varvec{\mathrm {w}}\) and \(\varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}} + \mathsf {noise}+ b(q/2)\), where \(\varvec{\mathrm {s}}\) is a randomly chosen secret vector. However, to ensure perfect binding, we need some additional components. The scheme also chooses a random matrix \(\mathbf {B}\) from a distribution \(\mathcal {D}_1\) and outputs \(\mathbf {B}, \mathbf {B}^{T} \varvec{\mathrm {s}} + \mathsf {noise}\). This distribution has the special property that all matrices from it have a ‘medium norm’ rowspace. This property ensures that there do not exist two distinct vectors \(\varvec{\mathrm {s}}_1\) and \(\varvec{\mathrm {s}}_2\) such that \(\mathbf {B}^{T} \varvec{\mathrm {s}}_1 + \mathsf {noise}_1 = \mathbf {B}^{T} \varvec{\mathrm {s}}_2 + \mathsf {noise}_2\). Finally, to argue computational hiding, we require that a random matrix from this distribution looks uniformly random. If this condition is satisfied, then we can use the LWE assumption to argue that \(\varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}} + \mathsf {noise}\) and \(\mathbf {B}^{T} \varvec{\mathrm {s}} + \mathsf {noise}'\) look uniformly random, thereby hiding the committed bit. Sampling a matrix from the distribution \(\mathcal {D}_1\) works as follows: first choose a uniformly random matrix \(\mathbf {A}\), then choose a matrix \(\mathbf {C}\) with low norm entries and a matrix \(\mathbf {D}\) with ‘medium’ entries, and output \([\mathbf {A}~ \mid ~ \mathbf {A}\mathbf {C}+ \mathbf {D}+ \mathsf {noise}]\).
For any nonzero vector \(\varvec{\mathrm {s}}\), if \(\mathbf {A}^{T} \varvec{\mathrm {s}}\) has low norm, then \(\mathbf {C}^{T} \mathbf {A}^{T} \varvec{\mathrm {s}}\) also has low norm, but \(\mathbf {D}^{T} \varvec{\mathrm {s}}\) has medium norm entries, and therefore \([\mathbf {A}~ \mid ~ \mathbf {A}\mathbf {C}+ \mathbf {D}+ \mathsf {noise}]^{T} \varvec{\mathrm {s}}\) has medium norm entries.
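The commitment and its opening can be sketched numerically. This is a toy model under stated assumptions: the parameters are illustrative and nowhere near secure, and the "medium" band for \(\mathbf {D}\) is a crude stand-in for the distribution the proof actually needs.

```python
import random

Q = 7681           # modulus (toy)
N, M = 8, 16       # secret dimension, columns per block
LOW, NOISE = 2, 2  # bounds for C's entries and for the error terms
MED = Q // 8       # magnitude of D's "medium" entries

def unif_matrix(rows, cols):
    return [[random.randrange(Q) for _ in range(cols)] for _ in range(rows)]

def small_matrix(rows, cols, bound):
    return [[random.randrange(-bound, bound + 1) % Q for _ in range(cols)]
            for _ in range(rows)]

def sample_D1():
    # B = [A | A*C + D + noise]; the medium-entry D forces B's rowspace to
    # have "medium norm", which pins down a unique committed secret s.
    A = unif_matrix(N, M)
    C = small_matrix(M, M, LOW)
    D = small_matrix(N, M, LOW)
    E = small_matrix(N, M, NOISE)
    AC = [[sum(A[i][k] * C[k][j] for k in range(M)) % Q for j in range(M)]
          for i in range(N)]
    right = [[(AC[i][j] + MED + D[i][j] + E[i][j]) % Q for j in range(M)]
             for i in range(N)]
    return [A[i] + right[i] for i in range(N)]

def commit(b):
    s = [random.randrange(Q) for _ in range(N)]
    w = [random.randrange(Q) for _ in range(N)]
    B = sample_D1()
    c1 = (sum(wi * si for wi, si in zip(w, s))
          + random.randrange(-NOISE, NOISE + 1) + b * (Q // 2)) % Q
    c2 = [(sum(B[i][j] * s[i] for i in range(N))
           + random.randrange(-NOISE, NOISE + 1)) % Q
          for j in range(2 * M)]
    return (w, c1, B, c2), s

def open_bit(com, s):
    # decode b from c1 - <w, s>: close to 0 means 0, close to q/2 means 1
    w, c1, _, _ = com
    d = (c1 - sum(wi * si for wi, si in zip(w, s))) % Q
    return 0 if min(d, Q - d) < Q // 4 else 1
```

The \(\mathbf {B}, \mathbf {B}^{T}\varvec{\mathrm {s}} + \mathsf {noise}\) block carries no message; it exists solely so that, information-theoretically, only one s is consistent with the commitment.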
Low Noise LPN Construction. This scheme is similar to the LWE construction. Here also, the commitment to a bit b consists of \(\varvec{\mathrm {w}} \) and \(\varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}} + b\), where \(\varvec{\mathrm {w}} \) and \(\varvec{\mathrm {s}}\) are uniformly random vectors in \(\mathbb {Z}_2^n\). To ensure that there can be only one vector \(\varvec{\mathrm {s}}\), we also choose a matrix \(\mathbf {B}\) from a special distribution \(\mathcal {D}_2\) and output \(\mathbf {B}, \mathbf {B}^{T} \varvec{\mathrm {s}} + \mathsf {noise}\). In this case, the distribution \(\mathcal {D}_2\) is such that all matrices from this distribution have a high Hamming weight rowspace. To sample from the distribution \(\mathcal {D}_2\), one chooses a uniformly random matrix \(\mathbf {A}\), a matrix \(\mathbf {C}\) with low Hamming weight rows, and outputs \([\mathbf {A}~ \mid ~ \mathbf {A}\mathbf {C}+ \mathbf {G}]\), where \(\mathbf {G}\) is the generator matrix of an error correcting code. Here the role of \(\mathbf {G}\) is similar to the role of \(\mathbf {D}\) in the previous solution: to map any nonzero vector to a high Hamming weight vector. An important point here is that we need the rows of \(\mathbf {C}\) to have low (\(O(\sqrt{n})\)) Hamming weight. This is because we want to argue that if \(\mathbf {A}^{T} \varvec{\mathrm {s}}\) has low Hamming weight, then so does \(\mathbf {C}^{T} \mathbf {A}^{T} \varvec{\mathrm {s}}\). Finally, to argue that \(\mathcal {D}_2\) looks like the uniform distribution, we need the LPN assumption with low noise^{Footnote 3} (since \(\mathbf {C}\) has low (\(O(\sqrt{n})\)) Hamming weight rows).
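The same structure over GF(2) can be sketched as follows. Assumptions are loud here: the "generator matrix" G is just uniformly random (a stand-in for a code with the distance the proof needs), and the tiny parameters only illustrate the shapes involved.

```python
import random

N, M = 8, 16
ROW_WT = 2     # Hamming weight of C's rows (the O(sqrt(n)) regime)
TAU = 1        # Hamming weight of the LPN noise vector

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def weight_w_row(cols, w):
    row = [0] * cols
    for j in random.sample(range(cols), w):
        row[j] = 1
    return row

def sample_D2():
    # B = [A | A*C + G]: G maps any nonzero s to a high-weight codeword,
    # so the rowspace of B has high Hamming weight.
    A = [rand_bits(M) for _ in range(N)]
    C = [weight_w_row(M, ROW_WT) for _ in range(M)]
    G = [rand_bits(M) for _ in range(N)]   # stand-in generator matrix
    AC = [[sum(A[i][k] & C[k][j] for k in range(M)) % 2 for j in range(M)]
          for i in range(N)]
    return [A[i] + [AC[i][j] ^ G[i][j] for j in range(M)] for i in range(N)]

def commit(b):
    s, w = rand_bits(N), rand_bits(N)
    B = sample_D2()
    e = weight_w_row(2 * M, TAU)           # low-weight noise
    c0 = (sum(wi & si for wi, si in zip(w, s)) + b) % 2
    c1 = [(sum(B[i][j] & s[i] for i in range(N)) + e[j]) % 2
          for j in range(2 * M)]
    return (w, c0, B, c1), (s, e)

def verify(b, com, opening):
    w, c0, B, c1 = com
    s, e = opening
    if sum(e) > TAU:                       # opening must use legal noise
        return False
    if c0 != (sum(wi & si for wi, si in zip(w, s)) + b) % 2:
        return False
    return all(c1[j] == (sum(B[i][j] & s[i] for i in range(N)) + e[j]) % 2
               for j in range(2 * M))
```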
This construction bears some similarities to the CCA secure encryption scheme of Kiltz et al. [28].
Standard LPN Construction. Finally, we describe an approach for constructing a commitment scheme that can be proven secure under the standard LPN assumption (with constant noise). For this approach, we require a deterministic procedure that can output \(\ell \) matrices \(\mathbf {G}_1, \ldots , \mathbf {G}_\ell \) with the following property: for any matrix \(\mathbf {A}\), there exists an index i such that the rowspace of \(\mathbf {A}+ \mathbf {G}_i\) has high Hamming weight. Given such a procedure, our commitment scheme works as follows. The commitment algorithm, on input message b, chooses a uniformly random matrix \(\mathbf {A}\) and generates \(\ell \) subcommitments. The \(i^{th}\) subcommitment chooses uniformly random vectors \(\varvec{\mathrm {s}}_i, \varvec{\mathrm {w}}_i\) and outputs \((\mathbf {A}+ \mathbf {G}_i)^{T} \varvec{\mathrm {s}}_i + \mathsf {noise}\), \(\varvec{\mathrm {w}}_i\) and \(\varvec{\mathrm {w}}_i^{T} \varvec{\mathrm {s}}_i + b\). For perfect binding, we use the guarantee that there exists an i such that the rowspace of \(\mathbf {A}+ \mathbf {G}_i\) has high Hamming weight. This implies that if \((\mathbf {A}+ \mathbf {G}_i)^{T} \varvec{\mathrm {s}}_1 + \mathsf {noise}_1 = (\mathbf {A}+\mathbf {G}_i)^{T} \varvec{\mathrm {s}}_2 + \mathsf {noise}_2\), then \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\). For computational hiding, we use a hybrid argument to switch each subcommitment to uniformly random.
1.2 Concurrent Work
Independently and concurrently, Bitansky [5] gave a very similar construction of VRFs from NIWIs, perfectly binding commitments and puncturable PRFs/constrained PRFs for admissible hash friendly constraints.
The notable differences between the two works are with respect to the new realizations of commitments and constrained PRFs. Both works give a constrained PRF under the \(n\text {-}\mathsf {power DDH}\) assumption for admissible hash friendly constraints. Bitansky was able to further prove this construction secure under the DDH assumption. Interestingly, the result was achieved by considering constrained PRFs for a more general notion of partitioning than admissible hash. We also construct admissible hash friendly constrained PRFs based on the phi-hiding assumption, as well as constrained unpredictable functions based on the more standard RSA assumption. Finally, we also provide new constructions for perfectly binding commitments based on the LWE and LPN assumptions.
Subsequently, Badrinarayanan et al. [3] gave an alternate construction of VRFs from puncturable PRFs/constrained PRFs for admissible hash friendly constraints and Verifiable Functional Encryption [2], which in turn can be constructed from NIWIs, injective oneway functions and secret key functional encryption schemes secure against single ciphertext and unbounded key queries.
2 Preliminaries
2.1 Verifiable Random Functions
Verifiable random functions (VRFs) were introduced by Micali, Rabin and Vadhan [30]. VRFs are keyed functions with input domain \(\left\{ \mathcal {X}_\lambda \right\} _\lambda \), output range \(\left\{ \mathcal {Y}_\lambda \right\} _\lambda \) and consist of three polynomial time algorithms \(\mathsf {Setup}\), \(\mathsf {Evaluate}\text { and } \mathsf {Verify}\) described as follows:

\(\mathsf {Setup}(1^{\lambda })\) is a randomized algorithm that on input the security parameter, outputs \((\mathrm {SK}, \mathrm {VK})\). \(\mathrm {SK}\) is called secret key, and \(\mathrm {VK}\) verification key.

\(\mathsf {Evaluate}(\mathrm {SK}, x)\) is a (possibly randomized) algorithm, and on input the secret key \(\mathrm {SK}\) and \(x \in \mathcal {X}_\lambda \), it outputs an evaluation \(y \in \mathcal {Y}_\lambda \) and a proof \(\pi \in \{0, 1\}^{*}\).

\(\mathsf {Verify}(\mathrm {VK}, x, y, \pi )\) is a (possibly randomized) algorithm which uses verification key \(\mathrm {VK}\) and proof \(\pi \) to verify that y is the correct evaluation on input x. It outputs 1 (accepts) if verification succeeds, and 0 (rejects) otherwise.
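The three algorithms above can be captured as a minimal abstract interface; this is only a sketch (names follow the text, and any concrete scheme would subclass it).

```python
from abc import ABC, abstractmethod

class VRF(ABC):
    """Abstract syntax of a VRF: Setup, Evaluate, Verify."""

    @abstractmethod
    def setup(self, security_parameter: int):
        """Return a key pair (SK, VK)."""

    @abstractmethod
    def evaluate(self, sk, x):
        """Return (y, pi): the evaluation at x and a proof of correctness."""

    @abstractmethod
    def verify(self, vk, x, y, pi) -> bool:
        """Return True iff pi certifies that y is the evaluation at x."""
```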
Definition 1
(Adaptively-secure VRF). A triple of polynomial time algorithms \((\mathsf {Setup}, \mathsf {Evaluate}, \mathsf {Verify})\) is an adaptively-secure verifiable random function if it satisfies the following conditions:

(Correctness) For all \((\mathrm {SK}, \mathrm {VK}) \leftarrow \mathsf {Setup}(1^{\lambda })\), and all \(x \in \mathcal {X}_\lambda \), if \((y, \pi ) \leftarrow \mathsf {Evaluate}(\mathrm {SK}, x)\), then \(\Pr [\mathsf {Verify}(\mathrm {VK}, x, y, \pi ) = 1] = 1\).

(Unique Provability) For every \((\mathrm {VK}, x, y_1, \pi _1, y_2, \pi _2)\) such that \(y_1 \ne y_2\), the following holds for at least one \(i \in \{1, 2\}\):
$$\begin{aligned} \Pr [\mathsf {Verify}(\mathrm {VK}, x, y_i, \pi _i) = 1] \le 2^{-\varOmega (\lambda )}. \end{aligned}$$
(Pseudorandomness) For any PPT adversary \(\mathcal {A}= (\mathcal {A}_0, \mathcal {A}_1)\) there exists a negligible function \(\text {negl}(\cdot )\), such that for all \(\lambda \in \mathbb {N}\), \(\mathsf {Adv}^{\mathrm {adp\text {-}VRF}}_{\mathcal {A}}(\lambda ) \le \text {negl}(\lambda )\), where the advantage of \(\mathcal {A}\) is defined as
$$\begin{aligned} \mathsf {Adv}^{\mathrm {adp\text {-}VRF}}_{\mathcal {A}}(\lambda ) = \Pr \left[ \mathcal {A}_1^{\mathcal {O}_{x^{*}}}(\mathsf {st}, y_b)\ = b : \ \begin{array}{cl} (\mathrm {SK}, \mathrm {VK}) \leftarrow \mathsf {Setup}(1^{\lambda });\\ (x^{*}, \mathsf {st}) = \mathcal {A}_0^{\mathsf {Evaluate}(\mathrm {SK}, \cdot )}(\mathrm {VK})\\ (y_1, \pi ) \leftarrow \mathsf {Evaluate}(\mathrm {SK}, x^{*});\\ \quad y_0 \leftarrow \mathcal {Y}_\lambda ; \quad b \leftarrow \{0,1\}\end{array} \right] - \dfrac{1}{2}, \end{aligned}$$where \(x^{*}\) should not have been queried by \(\mathcal {A}_0\), and the oracle \(\mathcal {O}_{x^{*}}\) outputs \(\perp \) on input \(x^{*}\) and otherwise behaves the same as \(\mathsf {Evaluate}(\mathrm {SK}, \cdot )\).
A weaker notion of security for VRFs is selective pseudorandomness, where the adversary must commit to its challenge \(x^{*}\) at the start of the game, that is, before the challenger sends \(\mathrm {VK}\) to \(\mathcal {A}\). Then, during the evaluation phase, \(\mathcal {A}\) is allowed to query on polynomially many messages \(x \ne x^{*}\), and \(\mathcal {A}\) wins if its guess \(b' = b\). The advantage of \(\mathcal {A}\) is defined to be \(\mathsf {Adv}^{\mathrm {sel\text {-}VRF}}_{\mathcal {A}}(\lambda ) = \left| \Pr [\mathcal {A}\text { wins}] - 1/2 \right| \).
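The adaptive experiment above (the selective variant differs only in when \(x^{*}\) is fixed) can be sketched as driver code. `scheme`, the two-phase `adversary`, and `sample_range` are hypothetical stand-ins with the interfaces shown, not part of the formal definition.

```python
import random

def adaptive_prf_game(scheme, adversary, sample_range):
    sk, vk = scheme.setup()
    queried = set()

    def eval_oracle(x):
        queried.add(x)
        y, _proof = scheme.evaluate(sk, x)
        return y

    # Phase 0: adversary sees VK, queries Evaluate, picks the challenge.
    x_star, state = adversary.phase0(vk, eval_oracle)
    assert x_star not in queried          # the challenge must be fresh

    b = random.randrange(2)
    y1, _ = scheme.evaluate(sk, x_star)
    y_b = y1 if b == 1 else sample_range()

    def challenge_oracle(x):
        if x == x_star:
            return None                   # ⊥ on the challenge point
        return eval_oracle(x)

    # Phase 1: adversary guesses b given y_b and continued oracle access.
    guess = adversary.phase1(state, y_b, challenge_oracle)
    return guess == b                     # advantage = |Pr[win] - 1/2|
```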
Definition 2
(Selectively-secure VRF). A triple of polynomial time algorithms \((\mathsf {Setup}, \mathsf {Evaluate}, \mathsf {Verify})\) is called a selectively-secure verifiable random function if it satisfies the correctness and unique provability properties (as in Definition 1), and for all PPT adversaries \(\mathcal {A}\), \(\mathsf {Adv}^{\mathrm {sel\text {-}VRF}}_{\mathcal {A}}(\lambda )\) is negligible in the security parameter \(\lambda \).
2.2 Noninteractive Witness Indistinguishable Proofs
Witness indistinguishable (WI) proofs were introduced by Feige and Shamir [16] as a natural weakening of zero-knowledge (ZK) proofs. At a high level, the witness indistinguishability property says that a proof must not reveal which witness was used to prove the underlying statement, even if all possible witnesses for the statement are known. Unlike ZK proofs, WI proofs without interaction in the standard model are known to be possible. Barak et al. [4] provided constructions for one-message (completely non-interactive, with no shared random string or setup assumptions) witness indistinguishable proofs (NIWIs) based on ZAPs (i.e., two-message public-coin witness indistinguishable proofs) and Nisan-Wigderson type pseudorandom generators [34]. Groth et al. [21] gave the first NIWI construction from a standard cryptographic assumption, namely the decision linear assumption. Recently, Bitansky and Paneth [6] constructed NIWI proofs assuming iO and one-way permutations.
Definition 3
(NIWI). A pair of PPT algorithms \((\mathcal {P}, \mathcal {V})\) is a NIWI for a language \(\mathcal {L}\in \varvec{\mathrm {NP}}\) with witness relation \(\mathcal {R}\) if it satisfies the following conditions:

(Perfect Completeness) For all (x, w) such that \(\mathcal {R}(x, w) = 1\),
$$\begin{aligned} \Pr [\mathcal {V}(x, \pi ) = 1 :\ \pi \leftarrow \mathcal {P}(x, w)] = 1. \end{aligned}$$ 
(Statistical Soundness) For every \(x \notin \mathcal {L}\) and \(\pi \in \{0, 1\}^{*}\),
$$\begin{aligned} \Pr [\mathcal {V}(x, \pi ) = 1] \le 2^{-\varOmega (|x|)}. \end{aligned}$$
(Witness Indistinguishability) For any sequence \(\mathcal {I}= \{(x, w_1, w_2): \mathcal {R}(x, w_1) = 1 \wedge \mathcal {R}(x, w_2) = 1\}\)
$$\begin{aligned} \left\{ \pi _1: \pi _1 \leftarrow \mathcal {P}(x, w_1)\right\} _{(x, w_1, w_2) \in \mathcal {I}} \approx _c \left\{ \pi _2: \pi _2 \leftarrow \mathcal {P}(x, w_2)\right\} _{(x, w_1, w_2) \in \mathcal {I}} \end{aligned}$$
2.3 Perfectly Binding Commitments (with No Setup Assumptions)
A commitment scheme with message space \(\left\{ \mathcal {M}_\lambda \right\} _\lambda \), randomness space \(\left\{ \mathcal {R}_\lambda \right\} _\lambda \) and commitment space \(\left\{ \mathcal {C}_\lambda \right\} _\lambda \) consists of two polynomial time algorithms — \(\mathsf {Commit}\) and \(\mathsf {Verify}\) with the following syntax.

\(\mathsf {Commit}(1^{\lambda }, m \in \mathcal {M}_\lambda ; r \in \mathcal {R}_\lambda )\): The commit algorithm is a randomized algorithm that takes as input the security parameter \(\lambda \), message m to be committed and random coins r. It outputs a commitment c.

\(\mathsf {Verify}(m \in \mathcal {M}_\lambda , c \in \mathcal {C}_\lambda , o \in \mathcal {R}_\lambda )\): The verification algorithm takes as input the message m, commitment c and an opening o. It outputs either 0 or 1.
For simplicity, we assume that the opening for a commitment is simply the randomness used during the commitment phase. As a result, we do not have a separate ‘reveal’ algorithm. Below we formally define perfectly binding computationally hiding (PBCH) commitment schemes with no setup assumptions (i.e., without trusted setup and CRS).
Definition 4
(PBCH Commitments). A pair of polynomial time algorithms \((\mathsf {Commit}, \mathsf {Verify})\) is a perfectly binding computationally hiding (PBCH) commitment scheme if it satisfies the following conditions:

(Perfect Correctness) For all security parameters \(\lambda \in \mathbb {N}\), message \(m \in \mathcal {M}_\lambda \) and randomness \(r \in \mathcal {R}_\lambda \), if \(c = \mathsf {Commit}(1^{\lambda }, m; r)\), then \(\mathsf {Verify}(m, c, r) = 1\).

(Perfect Binding) For every \((c, m_1, r_1, m_2, r_2)\) such that \(m_1 \ne m_2\), the following holds for at least one \(i \in \{1, 2\}\):
$$\begin{aligned} \Pr [\mathsf {Verify}(m_i, c, r_i) = 1] = 0. \end{aligned}$$ 
(Computationally Hiding) For all security parameters \(\lambda \in \mathbb {N}\), messages \(m_1, m_2 \in \mathcal {M}_\lambda \),
$$\begin{aligned} \left\{ c_1: \begin{array}{l} r_1 \leftarrow \mathcal {R}_\lambda ;\\ c_1 \leftarrow \mathsf {Commit}(1^{\lambda }, m_1; r_1) \end{array}\right\} \approx _c \left\{ c_2: \begin{array}{l} r_2 \leftarrow \mathcal {R}_\lambda ; \\ c_2 \leftarrow \mathsf {Commit}(1^{\lambda }, m_2; r_2) \end{array}\right\} \end{aligned}$$
Perfectly binding commitments (without trusted setup) can be constructed from certifiably injective one-way functions. In this work, we show how to construct them under the Learning Parity with Low Noise assumption [1] and the Learning with Errors assumption [35]. We would like to point out that the ‘no trusted setup’ requirement for commitments is essential for our VRF construction. We already know how to construct perfectly binding commitments with trusted setup from the LPN assumption [26]; however, this does not suffice for our VRF construction, as VRFs disallow trusted setup.
2.4 Admissible Hash Functions
A commonly used technique for achieving adaptive security is the partitioning strategy, where the input space is partitioned into a ‘query partition’ and a ‘challenge partition’. This partitioning is achieved using admissible hash functions, introduced by Boneh and Boyen [7]. Here we state a simplified definition from [23].
Definition 5
Let \(k, \ell \) and \(\theta \) be efficiently computable univariate polynomials. Let \(h{:} \, \{0,1\}^{k(\lambda )} \rightarrow \{0,1\}^{\ell (\lambda )}\) be an efficiently computable function and \(\mathsf {AdmSample}\) a PPT algorithm that takes as input \(1^{\lambda }\) and an integer Q, and outputs \(u \in \{0,1, \perp \}^{\ell (\lambda )}\). For any \(u\in \{0,1,\perp \}^{\ell (\lambda )}\), define \(P_u:\{0,1\}^{k(\lambda )} \rightarrow \{0,1\}\) as follows:
$$\begin{aligned} P_u(x) = {\left\{ \begin{array}{ll} 1 &{} \text {if } u_i = h(x)_i \text { or } u_i = \perp \text { for all } i \le \ell (\lambda ), \\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
We say that \((h, \mathsf {AdmSample})\) is \(\theta \)-admissible if the following condition holds:
For any efficiently computable polynomial Q, for all \(x_1, \ldots , x_{Q(\lambda )}, x^* \in \{0,1\}^{k(\lambda )}\), where \(x^* \notin \{x_i\}_1^{Q(\lambda )}\),
$$\begin{aligned} \Pr \left[ P_u(x_1) = \cdots = P_u(x_{Q(\lambda )}) = 0 \ \wedge \ P_u(x^*) = 1\right] \ge \frac{1}{\theta (Q(\lambda ))}, \end{aligned}$$
where the probability is taken over \(u \leftarrow \mathsf {AdmSample}(1^{\lambda }, Q(\lambda ))\).
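To make the partitioning concrete, the predicate \(P_u\) can be sketched in a few lines of Python. This is an illustrative toy (the hash h is taken to be the identity and the parameters are tiny); the convention matches Definitions 6 and 7 below, where a constrained key evaluates at points with \(P_u(x) = 0\) and the challenge point must satisfy \(P_u(x^*) = 1\).

```python
import itertools

BOT = None  # stands in for the symbol ⊥

def P(u, hx):
    # P_u(x) = 1 iff h(x) agrees with u at every non-⊥ position
    # (the small challenge partition); P_u(x) = 0 otherwise
    # (the large query partition, where a constrained key works).
    return 1 if all(ui is BOT or ui == b for ui, b in zip(u, hx)) else 0

# Toy demo: h = identity on 4-bit strings, u fixes two positions.
u = [1, BOT, 0, BOT]
inputs = list(itertools.product([0, 1], repeat=4))
challenge_partition = [x for x in inputs if P(u, x) == 1]
```

Here exactly \(2^2\) of the \(2^4\) inputs land in the challenge partition, namely those agreeing with u on its two non-\(\perp \) coordinates; the remaining twelve form the query partition.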
Theorem 1
(Admissible Hash Function Family [7], simplified proof in [17]). For any efficiently computable polynomial k, there exist efficiently computable polynomials \(\ell , \theta \) such that there exist \(\theta \)-admissible function families mapping k bits to \(\ell \) bits.
Note that the above theorem is information theoretic, and is not based on any cryptographic assumption.
2.5 Constrained Pseudorandom and Unpredictable Functions
Constrained pseudorandom functions, introduced by [8, 10, 27], are an extension of pseudorandom functions [20] where a party having the master PRF key can compute keys corresponding to any constraint from a constraint class. A constrained key for constraint C can be used to evaluate the PRF on inputs x that satisfy the constraint \(C(x) = 0\).^{Footnote 4} However, the constrained key should not reveal PRF evaluations at points that do not satisfy the constraint. Constrained PRFs for general circuit constraints can be constructed using multilinear maps [8], indistinguishability obfuscation [9] and the learning with errors assumption [11]. Note that the construction from LWE only allows a single constrained key query, which is a weaker security definition than the standard fully ‘collusion-resistant’ notion.
In this work, we will be using a special constraint family which we call ‘admissible hash compatible’, and the security definition will also be weaker than the standard (fully collusion-resistant) security for constrained PRFs. This enables us to construct this primitive from weaker and standard cryptographic assumptions such as the \(n\text {-}\mathsf {power\ DDH}\) assumption.
Definition 6
Let \(\mathcal {Z}_n = \{0,1, \perp \}^n\). An admissible hash compatible function family \(\mathcal {P}_n = \{P_z : \{0,1\}^n \rightarrow \{0,1\} ~|~ z \in \mathcal {Z}_n\}\) is defined exactly as the predicate \(P_u(\cdot )\) in Definition 5.
Looking ahead, the above admissible hash compatible function family will correspond to the family of constraints for which we assume constrained PRFs. Next, we formally define the syntax, correctness and security properties of constrained PRFs.
Syntax. Let \(n(\cdot )\) be a polynomial. A constrained pseudorandom function \(\mathsf {CPRF}\) with domain \(\{ \mathcal {X}_{\lambda } = \{0,1\}^{n(\lambda )}\}_{\lambda }\), range \(\mathcal {Y}= \{\mathcal {Y}_{\lambda }\}_{\lambda }\), key space \(\mathcal {K}= \{\mathcal {K}_{\lambda } \}_{\lambda }\) and constrained key space \(\mathcal {K}^{c}= \{\mathcal {K}^{c}_{\lambda }\}_{\lambda }\) for a family of admissible hash compatible constraints \(\{\mathcal {C}_\lambda = \mathcal {P}_{n(\lambda )}\}_\lambda \) consists of three algorithms \(\mathsf {Setup}, \mathsf {Constrain}, \mathsf {Evaluate}\) defined as follows. For simplicity of notation, we will refer to z as the constraint instead of \(P_z\).

\(\mathsf {Setup}(1^\lambda )\): The setup algorithm takes as input the security parameter \(\lambda \) and outputs a PRF key \(K \in \mathcal {K}_{\lambda }\).

\(\mathsf {Constrain}(K, z \in \{0,1,\perp \}^{n(\lambda )})\): The constrain algorithm takes as input a master PRF key \(K \in \mathcal {K}_{\lambda }\), a constraint \(z \in \{0,1,\perp \}^{n(\lambda )}\) and outputs a constrained key \(K_z \in \mathcal {K}^{c}_{\lambda }\).

\(\mathsf {Evaluate}(K \in \mathcal {K}_{\lambda } \cup \mathcal {K}^{c}_{\lambda }, x \in \{0,1\}^{n(\lambda )})\): The evaluation algorithm takes as input a PRF key K (master or constrained) and an input x, and outputs \(y \in \mathcal {Y}\).
We would like to point out that in the above description there is a common evaluation algorithm that accepts both the PRF master key as well as the constrained key. Such an abstraction helps us in simplifying our VRF construction later in Sect. 3. Note that this is not a restriction on the constrained PRFs, as it can be achieved without loss of generality from any constrained PRF. Below we define the single-key no-query constrained pseudorandomness security notion for constrained PRFs.
Definition 7
A triple of polynomial time algorithms \((\mathsf {Setup}, \mathsf {Constrain}, \mathsf {Evaluate})\) is a single-key no-query secure constrained pseudorandom function for the admissible hash compatible constraint family if it satisfies the following conditions:

(Correctness) For every security parameter \(\lambda \in \mathbb {N}\), master PRF key \(K \leftarrow \mathsf {Setup}(1^\lambda )\), constraint \(z \in \{0,1,\perp \}^{n(\lambda )}\), constrained key \(K_z \leftarrow \mathsf {Constrain}(K, z)\) and input \(x \in \{0,1\}^{n(\lambda )}\) such that \(P_z(x) = 0\), \( \mathsf {Evaluate}(K, x) = \mathsf {Evaluate}(K_z, x)\).

(Single-key No-query Constrained Pseudorandomness) For any PPT adversary \(\mathcal {A}= (\mathcal {A}_0, \mathcal {A}_1, \mathcal {A}_2)\) there exists a negligible function \(\text {negl}(\cdot )\), such that for all \(\lambda \in \mathbb {N}\), \(\mathsf {Adv}^{\mathrm {CPRF}}_{\mathcal {A}}(\lambda ) \le \text {negl}(\lambda )\), where the advantage of \(\mathcal {A}\) is defined as
$$\begin{aligned} \mathsf {Adv}^{\mathrm {CPRF}}_{\mathcal {A}}(\lambda ) = \left| \Pr \left[ \mathcal {A}_2(\widetilde{\mathsf {st}}, y_b) = b : \ \begin{array}{cl} K \leftarrow \mathsf {Setup}(1^{\lambda });\quad (z, \mathsf {st}) = \mathcal {A}_0(1^{\lambda })\\ \quad K_z \leftarrow \mathsf {Constrain}(K, z)\\ (x, \widetilde{\mathsf {st}}) \leftarrow \mathcal {A}_1(\mathsf {st}, K_z); \quad b \leftarrow \{0,1\}\\ y_1 = \mathsf {Evaluate}(K, x);\quad y_0 \leftarrow \mathcal {Y}_\lambda \end{array} \right] - \dfrac{1}{2} \right| . \end{aligned}$$Also, the challenge point x chosen by \(\mathcal {A}\) must satisfy the constraint \(P_z(x) = 1\), i.e. it should not be possible to evaluate the PRF on x using constrained key \(K_z\).
Note that the above security notion is weaker than the standard fully collusion-resistant security notion, since the adversary gets one constrained key, and then it must distinguish between a random string and the PRF evaluation at a point where the constrained key does not allow evaluation. This is weaker than the standard security definition in two ways. First, there is only one constrained key query, and second, there are no evaluation queries. However, as we will see in Sect. 3, this suffices for our construction. Looking ahead, the high-level idea is that we will partition the VRF input space using an admissible hash function, and to answer each evaluation query we only need a constrained key, since a constrained key lets us evaluate at all points in the query partition.
Remark 1
Additionally, we want that there exists a polynomial \(s(\cdot )\) such that for all \(\lambda \in \mathbb {N}\) and \(K \in \mathcal {K}_{\lambda } \cup \mathcal {K}^{c}_{\lambda }\), \(|K| \le s(\lambda )\), i.e., the size of each PRF key is polynomially bounded.
We can also define constrained PRFs for an even weaker constraint family, namely the puncturing constraint function family.
Definition 8
A puncturing constraint function family \(\mathcal {P}_n = \{P_z : \{0,1\}^n \rightarrow \{0,1\} ~|~ z \in \{0,1\}^n\}\) is defined exactly as the predicate \(P_u(\cdot )\) in Definition 5 (with h the identity map). Since z contains no \(\perp \) symbols, \(P_z(x) = 1\) if and only if \(x = z\), i.e. a constrained key is a key punctured at the single point z.
Definition 9
A set of polynomial time algorithms \((\mathsf {Setup}, \mathsf {Puncture}, \mathsf {Evaluate})\) is a secure puncturable pseudorandom function if it is a singlekey noquery secure constrained pseudorandom function (Definition 7) for puncturing constraint function family.
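Puncturable PRFs in the sense of Definition 9 can be obtained from any length-doubling PRG via the classical GGM tree construction. The Python sketch below (with SHA-256 standing in for the PRG, so it is illustrative rather than a provably secure instantiation) also exhibits the common \(\mathsf {Evaluate}\) algorithm for master and punctured keys discussed above: a punctured key for z consists of the co-path seeds along the path to z.

```python
import hashlib

def prg(seed):
    # Length-doubling PRG stand-in: SHA-256 under two distinct labels.
    return (hashlib.sha256(b'L' + seed).digest(),
            hashlib.sha256(b'R' + seed).digest())

def puncture(master, z_bits):
    # Punctured key for z: the sibling seed at every level of the path to z.
    siblings, node = [], master
    for b in z_bits:
        left, right = prg(node)
        siblings.append(right if b == 0 else left)
        node = left if b == 0 else right
    return (list(z_bits), siblings)

def evaluate(key, x_bits):
    # Common evaluation algorithm: accepts a master key (bytes) or a
    # punctured key (pair), as in the abstraction used in Sect. 3.
    if isinstance(key, bytes):
        node = key
        for b in x_bits:
            node = prg(node)[b]
        return node
    z_bits, siblings = key
    if list(x_bits) == z_bits:
        raise ValueError('cannot evaluate at the punctured point')
    i = next(j for j in range(len(x_bits)) if x_bits[j] != z_bits[j])
    node = siblings[i]           # seed of the first subtree off z's path containing x
    for b in x_bits[i + 1:]:
        node = prg(node)[b]
    return node
```

Correctness in the sense of Definition 7 holds because for any \(x \ne z\), the punctured key stores the seed of the first subtree off z's path that contains x, from which the rest of the GGM evaluation proceeds identically.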
We also define the notion of constrained unpredictable functions, which are syntactically the same as constrained PRFs, the only difference being that they satisfy a weaker security requirement. Below we formally define constrained unpredictable functions.
Definition 10
A triple of polynomial time algorithms \((\mathsf {Setup}, \mathsf {Constrain}, \mathsf {Evaluate})\) is a single-key no-query secure constrained unpredictable function for the admissible hash compatible constraint family if it satisfies the correctness condition (as in Definition 7) and it also satisfies the following:

(Single-key No-query Constrained Unpredictability) For any PPT adversary \(\mathcal {A}= (\mathcal {A}_0, \mathcal {A}_1)\) there exists a negligible function \(\text {negl}(\cdot )\), such that for all \(\lambda \in \mathbb {N}\), \(\mathsf {Adv}^{\mathrm {CUF}}_{\mathcal {A}}(\lambda ) \le \text {negl}(\lambda )\), where the advantage of \(\mathcal {A}\) is defined as
$$\begin{aligned} \mathsf {Adv}^{\mathrm {CUF}}_{\mathcal {A}}(\lambda ) = \Pr \left[ y = \mathsf {Evaluate}(K, x) : \ \begin{array}{cl} K \leftarrow \mathsf {Setup}(1^{\lambda });\quad (z, \mathsf {st}) = \mathcal {A}_0(1^{\lambda })\\ K_z \leftarrow \mathsf {Constrain}(K, z);\\ \quad (x, y) = \mathcal {A}_1(\mathsf {st}, K_z) \end{array} \right] . \end{aligned}$$Also, the challenge point x chosen by \(\mathcal {A}\) must satisfy the constraint \(P_z(x) = 1\), i.e. it should not be possible to evaluate the PRF on x using constrained key \(K_z\).
2.6 Strong Extractors
Extractors are combinatorial objects used to ‘extract’ uniformly random bits from a source that has high randomness, but is not uniformly random. In this work, we will be using seeded extractors. In a seeded extractor, the extraction algorithm takes as input a sample point x from the high randomness source \(\mathcal {X}\), together with a short seed \(\mathfrak {s}\), and outputs a string that looks uniformly random. Here, we will be using strong extractors, where the extracted string looks uniformly random even when the seed is given.
Definition 11
A \((k, \epsilon )\) strong extractor \(\mathsf {Ext}: \mathbb {D} \times \mathbb {S} \rightarrow \mathbb {Y}\) is a deterministic algorithm with domain \(\mathbb {D}\), range \(\mathbb {Y}\) and seed space \(\mathbb {S}\) such that for every source \(\mathcal {X}\) on \(\mathbb {D}\) with min-entropy at least k, the following two distributions have statistical distance at most \(\epsilon \):
$$\begin{aligned} \left\{ (\mathfrak {s}, \mathsf {Ext}(x, \mathfrak {s})) \, : \, x \leftarrow \mathcal {X}, \ \mathfrak {s} \leftarrow \mathbb {S}\right\} \qquad \text {and} \qquad \left\{ (\mathfrak {s}, y) \, : \, y \leftarrow \mathbb {Y}, \ \mathfrak {s} \leftarrow \mathbb {S}\right\} . \end{aligned}$$
Using the Leftover Hash Lemma, we can construct strong extractors from pairwise-independent hash functions. More formally, let \(\mathcal {H}= \{h : \{0,1\}^n \rightarrow \{0,1\}^m\}\) be a family of pairwise-independent hash functions, and let \(m = k - 2 \log (1/\epsilon )\). Then \(\mathsf {Ext}(x, h) = h(x)\) is a strong extractor with h being the seed. Such hash functions can be represented using O(n) bits.
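As an illustration (a toy sketch, not a vetted implementation), one pairwise-independent family over GF(2) consists of random affine maps \(h_{A,b}(x) = Ax + b\), giving the extractor \(\mathsf {Ext}(x, (A, b)) = Ax + b\) with the description of the map as the seed:

```python
import random

def sample_seed(n, m, rng):
    # A uniformly random affine map {0,1}^n -> {0,1}^m over GF(2);
    # the family {x -> A x + b} is pairwise independent.
    A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
    b = [rng.randrange(2) for _ in range(m)]
    return (A, b)

def ext(x_bits, seed):
    # Ext(x, s) = h_s(x). By the Leftover Hash Lemma this is a
    # (k, eps) strong extractor whenever m <= k - 2*log2(1/eps).
    A, b = seed
    return [(sum(a * x for a, x in zip(row, x_bits)) + bi) % 2
            for row, bi in zip(A, b)]
```

Note that the seed (A, b) in this sketch takes O(nm) bits; the O(n)-bit representation mentioned above requires a more compact family, e.g. Toeplitz matrices or multiplication in \(GF(2^n)\).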
2.7 Lattice Preliminaries
Given positive integers n, m, q and a matrix \(\varvec{\mathrm {A}} \in \mathbb {Z}_q^{n \times m}\), we let \(\varLambda _q^\perp (\varvec{\mathrm {A}})\) denote the lattice \(\{\varvec{\mathrm {x}} \in \mathbb {Z}^m \, : \, \varvec{\mathrm {A}} \cdot \varvec{\mathrm {x}} = \varvec{\mathrm {0}} \mod q\}\). For \(\varvec{\mathrm {u}} \in \mathbb {Z}_q^n\), we let \(\varLambda _q^{\varvec{\mathrm {u}}}(\varvec{\mathrm {A}})\) denote the coset \(\{\varvec{\mathrm {x}} \in \mathbb {Z}^m \, : \, \varvec{\mathrm {A}} \cdot \varvec{\mathrm {x}} = \varvec{\mathrm {u}} \mod q\}\).
Discrete Gaussians. Let \(\sigma \) be any positive real number. The Gaussian distribution \(\mathcal {D}_{\sigma }\) with parameter \(\sigma \) is defined by the probability distribution function \(\rho _{\sigma }(\varvec{\mathrm {x}}) = \exp (-\pi \cdot \Vert \varvec{\mathrm {x}} \Vert ^2/\sigma ^2)\). For any set \(\L \subset \mathbb {R}^m\), define \(\rho _{\sigma }(\L ) = \sum _{\varvec{\mathrm {x}} \in \L } \rho _{\sigma }(\varvec{\mathrm {x}})\). The discrete Gaussian distribution \(\mathcal {D}_{\L , \sigma }\) over \(\L \) with parameter \(\sigma \) is defined by the probability distribution function \(\rho _{\L , \sigma }(\varvec{\mathrm {x}}) = \rho _{\sigma }(\varvec{\mathrm {x}})/\rho _{\sigma }(\L )\) for all \(\varvec{\mathrm {x}} \in \L \).
The following lemma (Lemma 4.4 of [19, 31]) shows that if the parameter \(\sigma \) of a discrete Gaussian distribution is small, then any vector drawn from this distribution will be short (with high probability).
Lemma 1
Let m, n, q be positive integers with \(m > n\), \(q\ge 2\). Let \(\mathbf {A}\in \mathbb {Z}_q^{n\times m}\) be a matrix of dimensions \(n\times m\), and \(\L = \varLambda _{q}^{\perp }(\mathbf {A})\). Then
$$\begin{aligned} \Pr \left[ \Vert \varvec{\mathrm {x}} \Vert > \sqrt{m} \cdot \sigma \, : \, \varvec{\mathrm {x}} \leftarrow \mathcal {D}_{\L , \sigma }\right] \le \mathrm {negl}(n). \end{aligned}$$
3 Constructing Verifiable Random Functions
In this section, we give a generic construction of VRFs from admissible hash functions, perfectly binding commitments, NIWIs and constrained pseudorandom functions for admissible hash compatible constraints. We also prove that it satisfies correctness, unique provability and pseudorandomness properties (as described in Definition 1). Later in Sect. 3.3, we give a slightly modified construction for VRF that is selectively-secure assuming only puncturable pseudorandom functions.
Let \((h, \mathsf {AdmSample})\) be an admissible hash function that hashes \(n(\lambda )\) bits to \(\ell (\lambda )\) bits, \((\mathcal {P}, \mathcal {V})\) be a NIWI proof system for language \(\mathcal {L}\) (where the language will be defined later), \((\mathsf {CS.Commit}, \mathsf {CS.Verify})\) be a perfectly binding commitment scheme with \(\left\{ \mathcal {M}_\lambda \right\} _\lambda , \left\{ \mathcal {R}_\lambda \right\} _\lambda \) and \(\left\{ \mathcal {C}_\lambda \right\} _\lambda \) as the message, randomness and commitment space, and \(\mathsf {CPRF}= (\mathsf {CPRF.Setup}, \mathsf {CPRF.Constrain}, \mathsf {CPRF.Eval})\) be a constrained pseudorandom function with \(\left\{ \mathcal {X}_\lambda \right\} _\lambda , \left\{ \mathcal {Y}_\lambda \right\} _\lambda , \left\{ \mathcal {K}_\lambda \right\} _\lambda \) and \(\left\{ \mathcal {K}^{c}_\lambda \right\} _\lambda \) as its domain, range, key and constrained key spaces. For simplicity assume that \(\mathcal {K}_\lambda \cup \mathcal {K}^{c}_\lambda \subseteq \mathcal {M}_\lambda \), or in other words, all the PRF master keys and constrained keys lie in the message space of the commitment scheme. Also, let \(\mathcal {X}_\lambda = \{0, 1\}^{\ell (\lambda )}\).
First, we define the language \(\mathcal {L}\). It contains instances of the form \(\left( c_1, c_2, c_3, x, y \right) \in \mathcal {C}_\lambda ^3 \times \{0, 1\}^{n(\lambda )} \times \mathcal {Y}_\lambda \) with the following witness relation: a witness \((i, j, K_i, K_j, r_i, r_j)\) is valid if \(i \ne j\) and
$$\begin{aligned} \mathsf {CS.Verify}(K_i, c_i, r_i) = 1 \ \wedge \ \mathsf {CS.Verify}(K_j, c_j, r_j) = 1 \ \wedge \ \mathsf {CPRF.Eval}(K_i, h(x)) = \mathsf {CPRF.Eval}(K_j, h(x)) = y. \end{aligned}$$
That is, some two of the three commitments open to keys that both evaluate to y on h(x).
Clearly the above language is in \(\varvec{\mathrm {NP}}\) as it can be verified in polynomial time. Next we describe our construction for VRFs with message space \(\{0, 1\}^{n(\lambda )}\) and range space \(\left\{ \mathcal {Y}_\lambda \right\} _\lambda \).
3.1 Construction

\(\mathsf {Setup}(1^{\lambda }) \rightarrow (\mathrm {SK}, \mathrm {VK}).\) It generates a PRF key for constrained pseudorandom function as \(K \leftarrow \mathsf {CPRF.Setup}(1^{\lambda })\). It also generates three independent commitments to the key K as \(c_i \leftarrow \mathsf {CS.Commit}(1^{\lambda }, K; r_i)\) for \(i \le 3\) where \(r_i\) is sampled as \(r_i \leftarrow \mathcal {R}_\lambda \), and sets the secret-verification key pair as \(\mathrm {SK}= \left( K, \left\{ (c_i, r_i)\right\} _{i \le 3} \right) , \mathrm {VK}= (c_1, c_2, c_3)\).

\(\mathsf {Evaluate}(\mathrm {SK}, x) \rightarrow (y, \pi ).\) Let \(\mathrm {SK}= \left( K, \left\{ (c_i, r_i)\right\} _{i \le 3} \right) \). It runs the PRF evaluation algorithm on x as \(y = \mathsf {CPRF.Eval}(K, h(x))\). It also computes a NIWI proof \(\pi \) for the statement \((c_1, c_2, c_3, x, y) \in \mathcal {L}\) using NIWI prover algorithm \(\mathcal {P}\) with \((i = 1, j = 2, K, K, r_1, r_2)\) as the witness, and outputs y and \(\pi \) as the evaluation and corresponding proof.

\(\mathsf {Verify}(\mathrm {VK}, x, y, \pi ) \rightarrow \{0,1\}.\) Let \(\mathrm {VK}= (c_1, c_2, c_3)\). It runs NIWI verifier to check proof \(\pi \) as \(\mathcal {V}((c_1, c_2, c_3, x, y), \pi )\) and accepts the proof (outputs 1) iff \(\mathcal {V}\) outputs 1.
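The verifier-side witness relation for \(\mathcal {L}\) is just a conjunction of commitment openings and PRF evaluations, and can be sketched as follows. Everything here is an insecure stand-in chosen only to exercise the data flow: toy_commit is hash-based (so merely computationally, not perfectly, binding) and toy_cprf_eval is a keyed hash in place of \(\mathsf {CPRF.Eval}(K, h(x))\).

```python
import hashlib

def toy_commit(key, r):
    # Stand-in commitment to a PRF key; a real instantiation must be
    # perfectly binding (see Sect. 4).
    return hashlib.sha256(key + b'|' + r).digest()

def toy_cprf_eval(key, hx):
    # Stand-in for CPRF.Eval(K, h(x)).
    return hashlib.sha256(b'prf' + key + hx).digest()

def relation_L(instance, witness):
    # (c1, c2, c3, x, y) is in L iff some two of the three commitments
    # open to keys that both evaluate to y on h(x).
    c1, c2, c3, hx, y = instance   # hx = h(x)
    i, j, Ki, Kj, ri, rj = witness
    cs = {1: c1, 2: c2, 3: c3}
    return (i != j
            and toy_commit(Ki, ri) == cs[i]
            and toy_commit(Kj, rj) == cs[j]
            and toy_cprf_eval(Ki, hx) == y
            and toy_cprf_eval(Kj, hx) == y)
```

\(\mathsf {Evaluate}\) proves this relation with witness \((1, 2, K, K, r_1, r_2)\); in the security proof, witnesses mixing a master key and a constrained key are used instead, which is exactly where witness indistinguishability is needed.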
3.2 Correctness, Unique Provability and Pseudorandomness
Theorem 2
If \((h, \mathsf {AdmSample})\) is an admissible hash function, \((\mathsf {CS.Commit}\), \(\mathsf {CS.Verify})\) is a secure perfectly binding commitment scheme, \((\mathcal {P}, \mathcal {V})\) is a secure NIWI proof system for language \(\mathcal {L}\), and \(\mathsf {CPRF}\) is a secure single-key constrained pseudorandom function according to Definitions 5, 4, 3, and 7 (respectively), then the above construction forms an adaptively-secure VRF satisfying correctness, unique provability and pseudorandomness properties as described in Definition 1.
Correctness. For every wellformed secret and verification key pair \((\mathrm {SK}, \mathrm {VK}) \leftarrow \mathsf {Setup}(1^{\lambda })\), we know that both \(c_1\) and \(c_2\) are commitments to PRF key K with \(r_1\) and \(r_2\) as the corresponding openings, where \(\mathrm {SK}= \left( K, \left\{ (c_i, r_i)\right\} _{i \le 3} \right) \). Therefore, by perfect correctness of the constrained PRF and NIWI proof system, we can conclude that the above construction satisfies the VRF correctness condition.
Unique Provability. We will prove this by contradiction. Assume that the above construction does not satisfy the unique provability property. This implies that there exists \((\mathrm {VK}, x, y_1, \pi _1, y_2, \pi _2)\) such that \(y_1 \ne y_2\) and \(\Pr [\mathsf {Verify}(\mathrm {VK}, x, y_i, \pi _i) = 1] > 2^{-\varOmega (\lambda )}\) for both \(i \in \left\{ 1, 2\right\} \). To prove that this is not possible, we show that at least one of these proof verifications must involve verifying a NIWI proof for an invalid instance. The formal argument proceeds as follows:

Let \(\mathrm {VK}= (c_1, c_2, c_3)\). Since the commitment scheme is perfectly binding, we know that for each \(i \in \left\{ 1, 2, 3\right\} \) there exists at most one key \(K_i\) such that there exists an \(r_i\) which is a valid opening for \(c_i\), i.e. \(\mathsf {CS.Verify}(K_i, c_i, r_i) = 1\).

Suppose \(c_i\) is a commitment to key \(K_i\) for \(i \le 3\), and \(\mathsf {CPRF.Eval}(K_1, h(x)) = \mathsf {CPRF.Eval}(K_2, h(x)) = y_1\). Now since \(y_1 \ne y_2\), even when \(\mathsf {CPRF.Eval}(K_3, h(x)) = y_2\) holds, we know that \(\left( c_1, c_2, c_3, x, y_2 \right) \notin \mathcal {L}\), as no two keys out of \(K_1, K_2, K_3\) evaluate to \(y_2\) on input h(x). Therefore, at least one proof out of \(\pi _1\) and \(\pi _2\) is a proof for an incorrect statement.

However, by statistical soundness of the NIWI proof system, we know that for all instances not in \(\mathcal {L}\), the probability that any proof gets verified is at most \(2^{-\varOmega (\lambda )}\). Therefore, if the above construction does not satisfy unique provability, then the NIWI proof system is not statistically sound, which contradicts our assumption. Hence, unique provability follows from the perfect binding property of the commitment scheme and the statistical soundness of the NIWI proof system.
Pseudorandomness. The pseudorandomness proof follows from a sequence of hybrid games. The high-level proof idea is as follows. We start by partitioning the input space into a query partition and a challenge partition using the admissible hash function. After partitioning, we observe that to answer evaluation queries we only need a constrained PRF key which can evaluate on inputs in the query partition; however, to give a proof we still need the master PRF key. Next, we note that to compute the NIWI proofs we only need openings for any two of the three commitments. Thus, we can switch one of the strings \(c_i\) to commit to the constrained key instead; this follows from the hiding property of the commitment scheme. Now we observe that we only need to compute NIWI proofs for inputs in the query partition, so we can use a (master key, constrained key) pair instead of a (master key, master key) pair as the NIWI witness. This follows from the witness indistinguishability property of the NIWI proof system and the fact that the constrained and master keys compute the same output on the query partition. Applying the same trick two more times, we reach a hybrid game in which all three strings \(c_i\) are commitments to the constrained key. Finally, in this hybrid we can directly reduce the pseudorandomness security of the VRF to the constrained pseudorandomness security of the single-key secure constrained PRF. Due to space constraints, the formal proof has been provided in the full version.
Remark 2
We would like to note that if we use a constrained unpredictable function instead of a constrained PRF in the above construction, then it results in an adaptively-secure VUF (verifiable unpredictable function).
3.3 SelectivelySecure VRFs
In this section, we give a modified construction which assumes puncturable PRFs instead of constrained PRFs for admissible hash compatible constraints. The tradeoff is that we can only prove selective security of this construction. However, if we make subexponential security assumptions, then it can be proven to be adaptively secure as well.
Let \((\mathcal {P}, \mathcal {V})\) be a NIWI proof system for language \(\widetilde{\mathcal {L}}\) (where the language will be defined later), \((\mathsf {CS.Commit}, \mathsf {CS.Verify})\) be a perfectly binding commitment scheme with \(\left\{ \mathcal {M}_\lambda \right\} _\lambda , \left\{ \mathcal {R}_\lambda \right\} _\lambda \) and \(\left\{ \mathcal {C}_\lambda \right\} _\lambda \) as the message, randomness and commitment space, and \(\mathsf {PPRF}= (\mathsf {PPRF.Setup}, \mathsf {PPRF.Puncture}, \mathsf {PPRF.Eval})\) be a puncturable pseudorandom function with \(\left\{ \mathcal {X}_\lambda \right\} _\lambda , \left\{ \mathcal {Y}_\lambda \right\} _\lambda , \left\{ \mathcal {K}_\lambda \right\} _\lambda \) and \(\left\{ \mathcal {K}^{p}_\lambda \right\} _\lambda \) as its domain, range, key and punctured key spaces. For simplicity assume that \(\mathcal {K}_\lambda \cup \mathcal {K}^{p}_\lambda \subseteq \mathcal {M}_\lambda \), or in other words, all the PRF master keys and punctured keys lie in the message space of the commitment scheme.
First, we define the language \(\widetilde{\mathcal {L}}\). It contains instances of the form \(\left( c_1, c_2, c_3, x, y \right) \in \mathcal {C}_\lambda ^3 \times \mathcal {X}_\lambda \times \mathcal {Y}_\lambda \) with the following witness relation: a witness \((i, j, K_i, K_j, r_i, r_j)\) is valid if \(i \ne j\) and
$$\begin{aligned} \mathsf {CS.Verify}(K_i, c_i, r_i) = 1 \ \wedge \ \mathsf {CS.Verify}(K_j, c_j, r_j) = 1 \ \wedge \ \mathsf {PPRF.Eval}(K_i, x) = \mathsf {PPRF.Eval}(K_j, x) = y. \end{aligned}$$
Clearly the above language is in \(\varvec{\mathrm {NP}}\) as it can be verified in polynomial time. Next we describe our construction for selectively-secure VRFs with message space \(\left\{ \mathcal {X}_\lambda \right\} _\lambda \) and range space \(\left\{ \mathcal {Y}_\lambda \right\} _\lambda \).

\(\mathsf {Setup}(1^{\lambda }) \rightarrow (\mathrm {SK}, \mathrm {VK}).\) It generates a PRF key for the puncturable pseudorandom function as \(K \leftarrow \mathsf {PPRF.Setup}(1^{\lambda })\). It also generates three independent commitments to the key K as \(c_i \leftarrow \mathsf {CS.Commit}(1^{\lambda }, K; r_i)\) for \(i \le 3\) where \(r_i\) is sampled as \(r_i \leftarrow \mathcal {R}_\lambda \), and sets the secret-verification key pair as \(\mathrm {SK}= \left( K, \left\{ (c_i, r_i)\right\} _{i \le 3} \right) , \mathrm {VK}= (c_1, c_2, c_3)\).

\(\mathsf {Evaluate}(\mathrm {SK}, x) \rightarrow (y, \pi ).\) Let \(\mathrm {SK}= \left( K, \left\{ (c_i, r_i)\right\} _{i \le 3} \right) \). It runs the PRF evaluation algorithm on x as \(y = \mathsf {PPRF.Eval}(K, x)\). It also computes a NIWI proof \(\pi \) for the statement \((c_1, c_2, c_3, x, y) \in \widetilde{\mathcal {L}}\) using NIWI prover algorithm \(\mathcal {P}\) with \((i = 1, j = 2, K, K, r_1, r_2)\) as the witness, and outputs y and \(\pi \) as the evaluation and corresponding proof.

\(\mathsf {Verify}(\mathrm {VK}, x, y, \pi ) \rightarrow \{0,1\}.\) Let \(\mathrm {VK}= (c_1, c_2, c_3)\). It runs NIWI verifier to check proof \(\pi \) as \(\mathcal {V}((c_1, c_2, c_3, x, y), \pi )\) and accepts the proof (outputs 1) iff \(\mathcal {V}\) outputs 1.
Theorem 3
If \((\mathsf {CS.Commit}, \mathsf {CS.Verify})\) is a secure perfectly binding commitment scheme, \((\mathcal {P}, \mathcal {V})\) is a secure NIWI proof system for language \(\widetilde{\mathcal {L}}\), and \(\mathsf {PPRF}\) is a secure puncturable pseudorandom function according to Definitions 4, 3, and 9 (respectively), then the above construction forms a selectively-secure VRF satisfying correctness, unique provability and pseudorandomness properties as described in Definition 2.
Proof Sketch. Correctness and unique provability of the above scheme can be proven similarly to Sect. 3.2. The proof of pseudorandomness is also similar to the one provided before, with the following differences: (1) since we are only targeting selective security, the reduction algorithm receives the challenge input from the adversary at the start of the game, and thus it does not need to perform any partitioning or abort; (2) in the final hybrid game, the reduction algorithm uses the adversary to attack the punctured pseudorandomness property. The main idea in the reduction to punctured pseudorandomness is that since the adversary sends the challenge input to the reduction algorithm at the start of the game, the reduction algorithm can get a punctured key from the PRF challenger and use it inside the commitments as well as to answer each evaluation query.
4 Perfectly Binding Commitment Schemes
In this section, we give new constructions of perfectly binding non-interactive commitments from the Learning with Errors assumption and the Learning Parity with Noise assumption. These constructions are in the standard model without trusted setup. As mentioned in the introduction, there are already simple solutions [26] known from LWE/LPN when there is a trusted setup.
We will first present a construction based on the LWE assumption. Next, we will adapt this solution to work with the LPN assumption. However, this adaptation only works with low noise (that is, the Bernoulli parameter is \(1/\sqrt{n}\)). We also propose a different approach for constructing perfectly binding non-interactive commitments from the standard constant-noise LPN problem. This approach reduces to finding error correcting codes with ‘robust’ generator matrices. Currently, we do not have any explicit^{Footnote 5} constructions for such error correcting codes, and finding such a family of generator matrices is an interesting open problem.
4.1 Construction from Learning with Errors
In this commitment scheme, our message space is \(\{0,1\}\) for simplicity. To commit to a bit x, one first chooses two vectors \(\varvec{\mathrm {s}}\), \(\varvec{\mathrm {w}}\) and outputs \(\varvec{\mathrm {w}}\) and \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x\). Clearly, this is not binding since there could be different \(\varvec{\mathrm {s}}\) vectors that open to different messages. Therefore, we need to ensure that the vector \(\varvec{\mathrm {s}}\) is fixed. To address this, we choose a matrix \(\mathbf {B}\) with certain structure and output \(\mathbf {B}\) and \(\mathbf {B}^T \varvec{\mathrm {s}} + \mathsf {noise}\). The special structure of the matrix ensures that there cannot be two different vectors \(\varvec{\mathrm {s}}_1, \varvec{\mathrm {s}}_2\) and noise vectors \(\mathsf {noise}_1, \mathsf {noise}_2\) such that \(\mathbf {B}^T \varvec{\mathrm {s}}_1 + \mathsf {noise}_1 = \mathbf {B}^T \varvec{\mathrm {s}}_2 + \mathsf {noise}_2\). Computational hiding of the committed bit follows from the fact that even though \(\mathbf {B}\) has special structure, it ‘looks’ like a random matrix, and therefore we can use the LWE assumption to argue that \(\mathbf {B}^T \varvec{\mathrm {s}} + \mathsf {noise}\) looks random, and therefore the message x is hidden.
We will now describe the algorithms formally. Let \(\lfloor {\cdot }\rfloor \) denote the floor operation, i.e. \(\lfloor {x}\rfloor = \max \{y \in \mathbb {Z}: y\le x\}\).

\(\mathsf {Commit}(1^n, x \in \{0,1\})\): The commitment algorithm first sets the LWE modulus \(p = 2^{n^{\epsilon }}\) for some \(\epsilon < 1/2\) and error distribution \(\chi = \mathcal {D}_{\sigma }\) where \(\sigma = n^c\) for some constant c. Next, it chooses a matrix \(\mathbf {A}\leftarrow \mathbb {Z}_p^{n \times n}\), low norm matrices \(\mathbf {C}\leftarrow \chi ^{n \times n}\), \(\mathbf {E}\leftarrow \chi ^{n \times n}\) and constructs \(\mathbf {D}= \lfloor {p/(4 n^{c + 1})}\rfloor \cdot \mathbf {I}\) (here \(\mathbf {I}\) is the \(n\times n\) identity matrix). Let \(\mathbf {B}= [\mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {D}+ \mathbf {E}]\).
It then chooses vectors \(\varvec{\mathrm {s}} \leftarrow \chi ^n\), \(\varvec{\mathrm {w}} \leftarrow \mathbb {Z}_p^n\), \(\varvec{\mathrm {e}} \leftarrow \chi ^{2n}\) and \(f \leftarrow \chi \), and computes \(\varvec{\mathrm {y}} = \mathbf {B}^T \varvec{\mathrm {s}} + \varvec{\mathrm {e}}\) and \(z = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x(p/2) + f\). If either \(\Vert \mathbf {C} \Vert > n^{c+2}\) or \(\Vert \mathbf {E} \Vert > n^{c+2}\) or \(\Vert \varvec{\mathrm {e}} \Vert > 2 n^{c+1}\) or \(\Vert \varvec{\mathrm {s}} \Vert > n^{c+1}\) or \(|f| >\lfloor {p/100}\rfloor \), the commitment algorithm outputs x as the commitment. Else, the commitment consists of \((p, c, \mathbf {B}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\).

\(\mathsf {Verify}(\mathsf{com}, x, (\mathbf {C}, \mathbf {E}, \varvec{\mathrm {e}}, \varvec{\mathrm {s}}, f))\): Let \(\mathsf{com}= (p, c, \mathbf {B}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\). The verification algorithm first checks if \(\Vert \mathbf {C} \Vert \le n^{c+2}\), \(\Vert \mathbf {E} \Vert \le n^{c+2}\), \(\Vert \varvec{\mathrm {e}} \Vert \le 2 n^{c+1}\), \(\Vert \varvec{\mathrm {s}} \Vert \le n^{c+1}\) and \(|f| \le \lfloor {p/100}\rfloor \). Next, it checks that \(\mathbf {B}= [\mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {D}+ \mathbf {E}]\), \(\mathbf {B}^T \varvec{\mathrm {s}} + \varvec{\mathrm {e}} = \varvec{\mathrm {y}}\) and \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x(p/2) + f = z\), where \(\mathbf {D} = \lfloor {p/(4 n^{c + 1})}\rfloor \cdot \mathbf {I}\). If all these checks pass, it outputs 1.
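The commit/verify mechanics above can be exercised end to end. The following is a toy Python sketch with illustrative parameters of our own choosing (n = 8, p = 2^24, and a rounded continuous Gaussian standing in for the discrete Gaussian); this is not a secure parameterization, and the abort branch that outputs x in the clear is omitted.

```python
import random

random.seed(1)

n, c = 8, 1                         # toy dimensions (illustrative only)
p = 2 ** 24                         # stand-in for the modulus p = 2^(n^eps)
sigma = n ** c                      # Gaussian width n^c
d = p // (4 * n ** (c + 1))         # D = d * I

def gauss_vec(k):
    # rounded continuous Gaussian as a stand-in for the discrete Gaussian
    return [round(random.gauss(0, sigma)) for _ in range(k)]

def inf_norm(v):
    return max(abs(t) for t in v)

def commit(x):
    A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    C = [gauss_vec(n) for _ in range(n)]
    E = [gauss_vec(n) for _ in range(n)]
    # B = [A | A*C + D + E], stored as an n x 2n row list
    AC = [[sum(A[i][k] * C[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    B = [A[i] + [(AC[i][j] + (d if i == j else 0) + E[i][j]) % p
                 for j in range(n)] for i in range(n)]
    s, e = gauss_vec(n), gauss_vec(2 * n)
    f = round(random.gauss(0, sigma))
    w = [random.randrange(p) for _ in range(n)]
    y = [(sum(B[i][j] * s[i] for i in range(n)) + e[j]) % p
         for j in range(2 * n)]
    z = (sum(w[i] * s[i] for i in range(n)) + x * (p // 2) + f) % p
    return (B, w, y, z), (C, E, e, s, f)

def verify(com, x, opening):
    B, w, y, z = com
    C, E, e, s, f = opening
    # norm checks on the opening
    if (inf_norm(e) > 2 * n ** (c + 1) or inf_norm(s) > n ** (c + 1)
            or abs(f) > p // 100
            or any(inf_norm(r) > n ** (c + 2) for r in C)
            or any(inf_norm(r) > n ** (c + 2) for r in E)):
        return 0
    for i in range(n):              # check B = [A | A*C + D + E]
        for j in range(n):
            ac = sum(B[i][k] * C[k][j] for k in range(n))
            if B[i][n + j] != (ac + (d if i == j else 0) + E[i][j]) % p:
                return 0
    ok = all((sum(B[i][j] * s[i] for i in range(n)) + e[j]) % p == y[j]
             for j in range(2 * n))
    ok = ok and (sum(w[i] * s[i] for i in range(n)) + x * (p // 2) + f) % p == z
    return 1 if ok else 0
```

Opening the same commitment to the other bit fails, because z shifts by p/2 modulo p.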
Theorem 4
If the \((n, m, 2^{n^{\epsilon }}, \mathcal {D}_{n^c})\text {-}\mathsf {LWE}\text {-}\mathsf {ss}\) assumption holds, then the above construction is a perfectly binding computationally hiding commitment scheme as per Definition 4.
Perfect Binding. Suppose there exist two different openings for the same commitment. Let \((\mathbf {C}_1, \mathbf {E}_1, \varvec{\mathrm {e}}_1, \varvec{\mathrm {s}}_1, f_1)\) and \((\mathbf {C}_2, \mathbf {E}_2, \varvec{\mathrm {e}}_2, \varvec{\mathrm {s}}_2, f_2)\) be the two openings. We will first show that \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\) and \(\varvec{\mathrm {e}}_1 = \varvec{\mathrm {e}}_2\). Next, we will argue that if \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\), then the commitment cannot be opened to two different bits.
Suppose \(\varvec{\mathrm {s}}_1 \ne \varvec{\mathrm {s}}_2\). Since \(\mathbf {B}^T \varvec{\mathrm {s}}_1 + \varvec{\mathrm {e}}_1 = \mathbf {B}^T \varvec{\mathrm {s}}_2 + \varvec{\mathrm {e}}_2 \), it follows that \(\mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) = \varvec{\mathrm {e}}_2 - \varvec{\mathrm {e}}_1\). Let \(\varvec{\mathrm {e}}^1_1\) and \(\varvec{\mathrm {e}}^1_2\) denote the first n components of \(\varvec{\mathrm {e}}_1\) and \(\varvec{\mathrm {e}}_2\) respectively. Then \(\mathbf {A}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) = \varvec{\mathrm {e}}^1_2 - \varvec{\mathrm {e}}^1_1\). Note that \(\Vert \varvec{\mathrm {e}}_2 - \varvec{\mathrm {e}}_1 \Vert \le 4 n^{c+1}\), and therefore, \(\Vert \mathbf {A}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert \le 4 n^{c+1}\).
Since \(\mathbf {C}_1\) and \(\mathbf {C}_2\) are matrices with low norm entries, it follows that \(\Vert \mathbf {C}_1 \Vert \le n^{c+2}\) and \(\Vert \mathbf {C}_2 \Vert \le n^{c+2}\). This implies \(\Vert \mathbf {C}_1^T \mathbf {A}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert \le 4 n^{2c+3}\). Similarly, since \(\Vert \mathbf {E}_1 \Vert \le n^{c+2}\), \(\Vert \mathbf {E}^T_1(\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert \le 2 n^{2c+3}\). However, since the matrix \(\mathbf {D}\) has ‘medium-sized’ entries, if \(\varvec{\mathrm {s}}_1 \ne \varvec{\mathrm {s}}_2\), it follows that \(\left\| \mathbf {D}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \ge \lfloor {p/(4 n^{c + 1})}\rfloor \). Additionally, since \(\mathbf {D}\) has medium-sized entries, we can also say that each entry of the vector \(\mathbf {D}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2)\) is at most p/2. This is because \(\left\| \mathbf {D}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \le \left\| \mathbf {D}^T \right\| _\infty \cdot \left\| \varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2 \right\| _\infty \le \lfloor {p/(4 n^{c + 1})}\rfloor \cdot 2 n^{c + 1} \le p/2\). Therefore, the vector \(\mathbf {D}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2)\) is sufficiently long, i.e. \(\left\| \mathbf {D}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \in \left[ \lfloor {p/(4 n^{c + 1})}\rfloor , p/2 \right] \).
Next, let us consider the norm of the vector \(\mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2)\). Recall that \(\mathbf {B}= [\mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {D}+ \mathbf {E}]\). Consider the matrix \(\mathbf {X} = [\mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {E}]\), i.e. it is the same as \(\mathbf {B}\) except it does not contain the matrix \(\mathbf {D}\). Using the triangle inequality, we can write \(\Vert \mathbf {X}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert \le \) \(\Vert \mathbf {A}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert + \) \(\Vert \mathbf {C}_1^T \mathbf {A}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert + \) \(\Vert \mathbf {E}^T_1(\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \Vert \) \(\le 8 n^{2c + 3}\). Therefore, each entry of the vector \(\mathbf {X}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2)\) is at most \(8 n^{2c + 3}\) in absolute value, i.e. \(\left\| \mathbf {X}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \le 8 n^{2c + 3}\).
We know that \(\mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) = \mathbf {X}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) + [\mathbf {0} ~|~ \mathbf {D}]^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2)\). Therefore, given the above bounds, we can conclude that \(\lfloor {p/(4 n^{c + 1})}\rfloor - 8 n^{2c + 3} \le \left\| \mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \le p/2 + 8 n^{2c + 3}\). Since \(p = 2^{n^{\epsilon }}\), we know that for sufficiently large values of n, \(\lfloor {p/(8 n^{c + 1})}\rfloor \le \left\| \mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty < p\). However, this is a contradiction since \(\mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) = \varvec{\mathrm {e}}_2 - \varvec{\mathrm {e}}_1\) and \(\Vert \varvec{\mathrm {e}}_2 - \varvec{\mathrm {e}}_1 \Vert \le 4 n^{c+1}\), thus \(\left\| \mathbf {B}^T (\varvec{\mathrm {s}}_1 - \varvec{\mathrm {s}}_2) \right\| _\infty \le 4 n^{c + 1}\).
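The separation between the two bounds in this contradiction can be checked numerically at a concrete scale; the values of n, c and ε below are our own illustrative choices, picked only to make the gap visible.

```python
# sanity-check the binding bounds at a concrete (illustrative) size
n, c = 10 ** 5, 1
eps = 0.49                              # the construction requires eps < 1/2
p = 2 ** int(n ** eps)                  # p = 2^(n^eps)

# lower bound on ||B^T(s1 - s2)||_inf implied by the structure of D ...
lower = p // (4 * n ** (c + 1)) - 8 * n ** (2 * c + 3)
# ... versus the upper bound forced by ||e2 - e1|| <= 4n^(c+1)
upper = 4 * n ** (c + 1)
assert lower > upper                    # the two bounds are incompatible
```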
Now, if \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\) and \(|f_1|, |f_2|\) are both at most \(\lfloor {p/100}\rfloor \), then \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_1 + f_1\) cannot be equal to \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_2 + f_2 + p/2\). This implies that no commitment can be opened to two different bits.
Computational Hiding. Due to space constraints, we defer the formal proof to the full version of our paper.
4.2 Construction from Learning Parity with Low Noise
We will now construct a perfectly binding non-interactive commitment scheme that can be proven secure under the low noise LPN assumption. At a high level, this solution is similar to our LWE solution. The message space is \(\{0,1\}\), and to commit to a bit x, we choose a vector \(\varvec{\mathrm {w}}\), a secret vector \(\varvec{\mathrm {s}}\) and output \(\varvec{\mathrm {w}}, \varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x\) as part of the commitment. However, this is not enough, as there could exist \(\varvec{\mathrm {s}}_1, \varvec{\mathrm {s}}_2\) such that \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_1 + 1 = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_2\). To prevent this, the commitment also consists of a matrix \(\mathbf {B}\) chosen from a special distribution, together with \(\mathbf {B}^T \varvec{\mathrm {s}} + \mathsf {noise}'\), which fixes the vector \(\varvec{\mathrm {s}}\). Drawing parallels with the LWE solution, we use an error correcting code’s generator matrix \(\mathbf {G}\) instead of the matrix \(\mathbf {D}\) used in the LWE solution. Both these matrices have a similar role: to map nonzero vectors to vectors with high Hamming weight/high norm.
An important point to note here is that the Bernoulli parameter needs to be \(O(1/\sqrt{n})\). This is necessary for proving perfect binding. Recall, in the LWE perfect binding proof, we argue that since \(\mathbf {A}^T \varvec{\mathrm {s}}\) has low norm, \(\mathbf {C}^T \mathbf {A}^T \varvec{\mathrm {s}}\) also has low norm. For the analogous argument to work here, the Bernoulli parameter of the error distribution must be \(O(1/\sqrt{n})\). In that case, we can argue that if the error vector has Hamming weight fraction at most \(1/100\sqrt{n}\) and each row of \(\mathbf {C}\) has Hamming weight fraction at most \(1/100\sqrt{n}\), then \(\mathbf {C}^T \mathbf {A}^T \varvec{\mathrm {s}}\) has Hamming weight fraction at most 1/10000. If the noise rate were constant, we could not get such an upper bound on the Hamming weight fraction of \(\mathbf {C}^T \mathbf {A}^T \varvec{\mathrm {s}}\).
We will now describe the formal construction. Let \(\beta = 1/(100\sqrt{n})\) and let \(\chi = \mathsf {Ber}_{\beta }\) be the noise distribution. Let \(\{\mathbf {G}_n \in \mathbb {Z}_2^{n \times 10n}\}_{n \in \mathbb {N}}\) be a family of generator matrices for error correcting codes where the distance of the code generated by \(\mathbf {G}_n\) is at least 4n.

\(\mathsf {Commit}(1^n, x \in \{0,1\})\): Let \(m = 10n\). Choose a random matrix \(\mathbf {A}\leftarrow \mathbb {Z}_2^{n \times m}\), a vector \(\varvec{\mathrm {w}} \leftarrow \mathbb {Z}_2^{n}\) and \(\mathbf {C}\leftarrow \chi ^{m \times m}\). Let \(\mathbf {B}= \left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {G}\right] \). Choose a secret vector \(\varvec{\mathrm {s}} \leftarrow \mathbb {Z}_2^n\), an error vector \(\varvec{\mathrm {e}} \leftarrow \chi ^{2 m}\) and set \(\varvec{\mathrm {y}} = \mathbf {B}^T \varvec{\mathrm {s}} + \varvec{\mathrm {e}}\). If either \(\mathsf {Ham} \text {-}\mathsf {Wt}(\varvec{\mathrm {e}}) > m/(25\sqrt{n})\) or there exists some row \(\varvec{\mathrm {c}}_i\) of the matrix \(\mathbf {C}\) such that \(\mathsf {Ham} \text {-}\mathsf {Wt}(\varvec{\mathrm {c}}_i) > m/(50\sqrt{n})\), output the message x in the clear as the commitment. Else, let \(z = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x\). The commitment string \(\mathsf{com}\) is set to be \((\mathbf {B}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\).

\(\mathsf {Verify}(\mathsf{com}, x, (\varvec{\mathrm {s}}, \varvec{\mathrm {e}}, \mathbf {C}))\): Let \(\mathsf{com}= (\mathbf {B}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\). The verification algorithm first checks that \(\mathsf {Ham} \text {-}\mathsf {Wt}(\varvec{\mathrm {e}}) \le m/(25\sqrt{n})\) and all rows \(\varvec{\mathrm {c}}_i\) of \(\mathbf {C}\) satisfy \(\mathsf {Ham} \text {-}\mathsf {Wt}(\varvec{\mathrm {c}}_i) \le m/(50\sqrt{n})\). Next, it checks if \(\mathbf {B}= \left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}+ \mathbf {G}\right] \), \(\varvec{\mathrm {y}} = \mathbf {B}^T \varvec{\mathrm {s}} + \varvec{\mathrm {e}}\) and \(z = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}} + x \). If all checks pass, it outputs 1, else it outputs 0.
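To make the algorithms concrete, here is a toy Python sketch over \(\mathbb {Z}_2\). Two hedges: the generator matrix G is sampled at random purely as a placeholder (a real instantiation needs a code of minimum distance at least 4n, on which binding rests), and the Hamming-weight checks are omitted because the stated constants only become meaningful at much larger n; the sketch only exercises the commit/verify mechanics.

```python
import random

random.seed(7)

n = 16
m = 10 * n
beta = 1 / (100 * n ** 0.5)       # Bernoulli parameter 1/(100*sqrt(n))

# placeholder generator matrix; a real one must have distance >= 4n
G = [[random.randrange(2) for _ in range(m)] for _ in range(n)]

def ber_vec(k):
    return [1 if random.random() < beta else 0 for _ in range(k)]

def commit(x):
    A = [[random.randrange(2) for _ in range(m)] for _ in range(n)]
    C = [ber_vec(m) for _ in range(m)]
    # B = [A | A*C + G] over Z_2, stored as an n x 2m row list
    AC = [[sum(A[i][k] & C[k][j] for k in range(m)) % 2 for j in range(m)]
          for i in range(n)]
    B = [A[i] + [AC[i][j] ^ G[i][j] for j in range(m)] for i in range(n)]
    s = [random.randrange(2) for _ in range(n)]
    e = ber_vec(2 * m)
    w = [random.randrange(2) for _ in range(n)]
    y = [(sum(B[i][j] & s[i] for i in range(n)) + e[j]) % 2
         for j in range(2 * m)]
    z = (sum(w[i] & s[i] for i in range(n)) + x) % 2
    return (B, w, y, z), (s, e, C)

def verify(com, x, opening):
    # the Hamming-weight checks on e and the rows of C are omitted at
    # this toy scale; they matter only for the binding argument
    B, w, y, z = com
    s, e, C = opening
    for i in range(n):            # check B = [A | A*C + G]
        for j in range(m):
            ac = sum(B[i][k] & C[k][j] for k in range(m)) % 2
            if B[i][m + j] != ac ^ G[i][j]:
                return 0
    ok = all((sum(B[i][j] & s[i] for i in range(n)) + e[j]) % 2 == y[j]
             for j in range(2 * m))
    ok = ok and (sum(w[i] & s[i] for i in range(n)) + x) % 2 == z
    return 1 if ok else 0
```

As in the LWE sketch, opening to the other bit fails because z flips.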
Theorem 5
Assuming the Extended Learning Parity with Noise problem \(\mathsf {LPN}_{n,m,p}\) and the Knapsack Learning Parity with Noise problem \(\mathsf {KLPN}_{n,m,\beta }\) (for \(\beta = 1/(100\sqrt{n})\)) are hard, the above construction is a perfectly binding computationally hiding commitment scheme as per Definition 4.
Perfect Binding. First, we will argue perfect binding. Suppose there exists a commitment \(\mathsf{com}= (\mathbf {B}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\) that can be opened to two different messages. Then there exist two different reveals (\(\varvec{\mathrm {s}}_1\), \(\varvec{\mathrm {e}}_1\), \(\mathbf {C}_1\)) and (\(\varvec{\mathrm {s}}_2\), \(\varvec{\mathrm {e}}_2\), \(\mathbf {C}_2\)) such that \( \mathbf {B}^T \varvec{\mathrm {s}}_1 + \varvec{\mathrm {e}}_1 = \varvec{\mathrm {y}} = \mathbf {B}^T \varvec{\mathrm {s}}_2 + \varvec{\mathrm {e}}_2\), \( \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_1 + 0 = z = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_2 + 1\) and \(\left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}_1 + \mathbf {G}\right] = \mathbf {B}= \left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}_2 + \mathbf {G}\right] \). We will first show that \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\), and then show that this implies perfect binding.
To prove that \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\) and \(\varvec{\mathrm {e}}_1 = \varvec{\mathrm {e}}_2\), notice that \(\mathbf {B}^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2) = \varvec{\mathrm {e}}_1 + \varvec{\mathrm {e}}_2\), which implies that \(\mathsf {Ham} \text {-}\mathsf {Wt}(\left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}_1 + \mathbf {G}\right] ^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) \le 2 m/(25\sqrt{n})\) (recall, the Hamming weight of \(\varvec{\mathrm {e}}_1 + \varvec{\mathrm {e}}_2\) is at most \(2 m/(25\sqrt{n})\)).
This implies, in particular, \(\mathsf {Ham} \text {-}\mathsf {Wt}(\mathbf {A}^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) \le 2 m/(25\sqrt{n})\). Since each row of \(\mathbf {C}_1\) and \(\mathbf {C}_2\) has Hamming weight at most \(m/(50\sqrt{n})\), \(\mathsf {Ham} \text {-}\mathsf {Wt}((\mathbf {A}\mathbf {C}_1)^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) \le m^2/(625 n) < n\). As a result, \(\mathsf {Ham} \text {-}\mathsf {Wt}(\left[ \mathbf {A}~|~ \mathbf {A}\mathbf {C}_1\right] ^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) < 2 n \). But if \(\varvec{\mathrm {s}}_1 \ne \varvec{\mathrm {s}}_2\), then \(\mathsf {Ham} \text {-}\mathsf {Wt}(\mathbf {G}^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) \ge 4n\), which implies, using the triangle inequality, that \(\mathsf {Ham} \text {-}\mathsf {Wt}(\mathbf {B}^T (\varvec{\mathrm {s}}_1 + \varvec{\mathrm {s}}_2)) \ge 2n\). This brings us to a contradiction since \(\mathsf {Ham} \text {-}\mathsf {Wt}(\varvec{\mathrm {e}}_1 + \varvec{\mathrm {e}}_2) \le 2 m/(25 \sqrt{n}) < n\).
Next, given that \(\varvec{\mathrm {s}}_1 = \varvec{\mathrm {s}}_2\), we have \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_1 + 1 \ne \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}_2\), so the commitment cannot be opened to two different bits. This concludes our proof.
Computational Hiding. Due to space constraints, we defer the proof to the full version of our paper.
4.3 Construction from Learning Parity with Constant Noise
For this construction, we will require a polynomial time algorithm \(\mathsf {GenECC}\) that generates ‘robust’ error correcting code generator matrices. More formally, \(\mathsf {GenECC}(1^n)\) takes as input a parameter n and outputs \(\ell \) matrices \(\mathbf {G}_1, \ldots , \mathbf {G}_{\ell }\) of dimension \(n \times m\) such that the following property holds: for every matrix \(\mathbf {A}\in \mathbb {Z}_2^{n \times m}\), there exists an \(i \in [\ell ]\) such that every nonzero vector in the row space of \(\mathbf {A}+ \mathbf {G}_i\) has Hamming weight at least m/3. Let \(\beta = 1/100\) denote the error rate.

\(\mathsf {Commit}(1^n, x \in \{0,1\})\): The commitment algorithm first computes \((\mathbf {G}_1\), \(\ldots \), \(\mathbf {G}_{\ell }) \leftarrow \mathsf {GenECC}(1^n)\), where \(\mathbf {G}_i \in \mathbb {Z}_2^{n \times m}\). Next, it chooses \(\mathbf {A}\leftarrow \mathbb {Z}_2^{n \times m}\) and sets \(\mathbf {D}_i = \left[ \mathbf {A}+ \mathbf {G}_i \right] \). It chooses secret vectors \(\varvec{\mathrm {s}}_i \leftarrow \mathbb {Z}_2^n\) and error vectors \(\varvec{\mathrm {e}}_i \leftarrow \chi ^m\) for \(i\le \ell \). If any of the error vectors have Hamming weight greater than \(2 m \beta \), then the algorithm outputs x in the clear. Else, it chooses \(\varvec{\mathrm {w}} \leftarrow \mathbb {Z}_2^{n}\), sets \(\varvec{\mathrm {y}}_i \leftarrow \mathbf {D}_i^{T} \varvec{\mathrm {s}}_i + \varvec{\mathrm {e}}_i\) for \(i \in [\ell ] \), \(z_i = \varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}}_i + x\) and outputs \(\mathsf{com}= \left( \mathbf {A}, \varvec{\mathrm {w}}, \{\varvec{\mathrm {y}}_i, z_i\} \right) \) as the commitment.

\(\mathsf {Verify}(\mathsf{com}, x, (\{\varvec{\mathrm {s}}_i, \varvec{\mathrm {e}}_i\} ) )\): Let \(\mathsf{com}= (\mathbf {A}, \varvec{\mathrm {w}}, \{ \varvec{\mathrm {y}}_i, z_i \})\). The verification algorithm first checks that \(\varvec{\mathrm {y}}_i = [\mathbf {A}+ \mathbf {G}_i]^{T} \varvec{\mathrm {s}}_i + \varvec{\mathrm {e}}_i\) for all \(i \in [\ell ]\) and \(z_i = \varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}}_i + x\). Next, it checks that each error vector has Hamming weight at most \(2 m \beta \). If all these checks pass, it outputs 1, else it outputs 0.
Perfect Binding. This will crucially rely on the robustness property of the \(\mathsf {GenECC}\) algorithm. Suppose a commitment can be opened to two different bits. Then there exist two sets of vectors \(\{\varvec{\mathrm {s}}^1_i\}, \{\varvec{\mathrm {s}}^2_i\}\) and error vectors \(\{\varvec{\mathrm {e}}^1_i \}\) and \(\{\varvec{\mathrm {e}}^2_i\}\) such that \(\mathbf {D}_i^{T} \varvec{\mathrm {s}}^1_i + \varvec{\mathrm {e}}^1_i = \mathbf {D}_i^{T} \varvec{\mathrm {s}}^2_i + \varvec{\mathrm {e}}^2_i\); moreover, since the two openings are for different bits, \(\varvec{\mathrm {w}}^T \varvec{\mathrm {s}}^1_i \ne \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}^2_i\), and hence \(\varvec{\mathrm {s}}^1_i \ne \varvec{\mathrm {s}}^2_i\), for every \(i \le \ell \). Then, for all \(i\le \ell \), \(\mathbf {D}_i^{T} (\varvec{\mathrm {s}}_i^1 + \varvec{\mathrm {s}}_i^2)\) has Hamming weight at most \(4 m \beta \). This implies that for all \(i \le \ell \), there exists at least one nonzero vector in the row space of \(\mathbf {D}_i\) that has Hamming weight at most \(4 m \beta = m/25\). But by the robustness property, for every \(\mathbf {A}\in \mathbb {Z}_2^{n \times m}\), there exists at least one index \(i \in [\ell ]\) such that every nonzero vector in the row space of \(\mathbf {A}+ \mathbf {G}_i\) has Hamming weight at least m/3. This brings us to a contradiction.
Computational Hiding Proof Sketch. The proof is fairly simple, and follows from the LPN assumption. First we introduce \(\ell \) hybrid experiments, where in the \(i^{th}\) experiment, \((\varvec{\mathrm {y}}_j, z_j)\) are random for all \(j \le i\). The remaining \((\varvec{\mathrm {y}}_j, z_j)\) components are the same as in the actual construction. The only difference between the \((i-1)^{th}\) and \(i^{th}\) hybrids is the distribution of \((\varvec{\mathrm {y}}_i, z_i)\).
Hybrid \(\mathsf {Hybrid}_i\). In this experiment, the challenger chooses a matrix \(\mathbf {A}\leftarrow \mathbb {Z}_2^{n \times m}\), a vector \(\varvec{\mathrm {w}} \leftarrow \mathbb {Z}_2^n\) and sets \(\mathbf {D}_j = \mathbf {A}+ \mathbf {G}_j\) for all \(j \le \ell \). Next, it chooses \(\varvec{\mathrm {s}}_j \leftarrow \mathbb {Z}_2^n\) and \(\varvec{\mathrm {e}}_j \leftarrow \mathsf {Ber}_{\beta }^m\) for all \(j \le \ell \), as well as the commitment bit \(b \leftarrow \{0,1\}\). For \(j\le i\), it chooses \(\varvec{\mathrm {y}}_j \leftarrow \mathbb {Z}_2^m\) and \(z_j \leftarrow \mathbb {Z}_2\). For \(j>i\), it sets \(\varvec{\mathrm {y}}_j = \mathbf {D}_j^{T} \varvec{\mathrm {s}}_j + \varvec{\mathrm {e}}_j\) and \(z_j = \varvec{\mathrm {w}}^{T} \varvec{\mathrm {s}}_j + b\). It sends \((\mathbf {A}, \varvec{\mathrm {w}}, \{\varvec{\mathrm {y}}_i, z_i\}_i)\) to the adversary. The adversary outputs a bit \(b'\) and wins if \(b=b'\).
Suppose there exists an adversary \(\mathcal {A}\) that can distinguish between these two hybrids. Then we can construct a reduction algorithm \(\mathcal {B}\) that can break the extended LPN assumption. \(\mathcal {B}\) receives \((\mathbf {X}, \varvec{\mathrm {w}}, \varvec{\mathrm {y}}, z)\) from the LPN challenger, where \(\varvec{\mathrm {y}} = \mathbf {X}^T \varvec{\mathrm {s}} + \varvec{\mathrm {e}}\) or is random, and \(z = \varvec{\mathrm {w}}^T \varvec{\mathrm {s}}\). It sets \(\mathbf {A}= \mathbf {X} - \mathbf {G}_i\). The remaining components can be generated using \(\mathbf {A}\) (note that there is a different \(\varvec{\mathrm {s}}_j\) for each j, so the reduction algorithm does not need the LPN secret \(\varvec{\mathrm {s}}\) to generate the remaining components). Depending on whether \(\varvec{\mathrm {y}}\) is random or not, \(\mathcal {B}\) either simulates Hybrid i or Hybrid \(i-1\).
5 Constrained PRFs for Admissible Hash Compatible Constraints
In this section, we will provide two separate constructions of constrained PRFs for admissible hash compatible constraints. We prove security of the first construction under the \(n\text {-}\mathsf {power DDH}\) assumption, and the second construction is proven secure under the \(\mathsf {Phi}\text {-}\mathsf {Hiding}\) assumption.
5.1 Constrained PRFs from \(n\text {-}\mathsf {power DDH}\) Assumption
At a high level, our base PRF looks like the Naor-Reingold PRF [33]. The PRF key consists of 2n integers and a random group generator g. The PRF evaluation on an n-bit string is performed as follows: first choose n out of the 2n integers depending on the input, compute their product and then output this product in the exponent of g.

\(\mathsf {Setup}(1^{\lambda })\): The setup algorithm takes as input the security parameter \(\lambda \). It first generates a group of prime order as \((p, {\mathbb G}, g) \leftarrow \mathcal {G}(1^{\lambda })\), where p is a prime, \({\mathbb G}\) is a group of order p and g is a random generator. Next, it chooses 2n integers \(c_{i,b} \leftarrow \mathbb {Z}_p^{*}\) for \(i\le n\), \(b \in \{0,1\}\). It sets the master PRF key as \(K = \left( (p, {\mathbb G}, g), \{c_{i,b}\}_{i\le n, b\in \{0,1\}} \right) \).

\(\mathsf {Constrain}(K, u \in \{0,1,\perp \}^n)\): The constrain algorithm takes as input the master PRF key \(K = \left( (p, {\mathbb G}, g), \{c_{i,b}\}_{i, b} \right) \) and constraint \(u \in \{0,1, \perp \}^{n}\). It first chooses an integer \(a \in \mathbb {Z}_p^{*}\) and computes, for all \(i\le n\), \(b \in \{0,1\}\),
$$\begin{aligned} v_{i,b} = {\left\{ \begin{array}{ll} c_{i,b}/a \quad \text { if }u_i = b \vee u_i =\ \perp \\ c_{i,b} \quad \text { otherwise.} \end{array}\right. } \end{aligned}$$It sets the constrained key as \(K_u = \big ((p, {\mathbb G}, g), u, \{g, g^a, g^{a^2}, \ldots , g^{a^{n-1}}\}, \{v_{i,b}\}_{i,b}\big )\).

\(\mathsf {Evaluate}(K, x \in \{0,1\}^n)\): The evaluation algorithm takes as input a PRF key K (which could be either the master PRF key or constrained PRF key) and an input string \(x \in \{0,1\}^n\).
If K is a master PRF key, then it can be parsed as \(K = \left( (p, {\mathbb G}, g), \{c_{i,b}\}_{i, b} \right) \). The evaluation algorithm computes \(t = \prod _{i \le n} c_{i, x_i}\) and outputs \(g^t\).
If K is a constrained key, then it consists of the group description \((p, {\mathbb G}, g)\), the constraint \(u \in \{0,1, \perp \}^n\), group elements \((g_0, g_1, \ldots , g_{n-1})\) and 2n integers \(\{v_{i,b}\}_{i,b}\). The evaluation algorithm first checks if \(P_u(x) = 0\). If not, it outputs \(\perp \). Else, it computes the product \(v = \prod _{i \le n} v_{i,x_i}\). Next, it counts the number s of positions i such that \(u_i = x_i \vee u_i =\ \perp \). It outputs the evaluation as \(g_s^v\) (note that since \(P_u(x) = 0\), \(0 \le s < n\), and therefore the output is well defined).
Theorem 6
If the \(n\text {-}\mathsf {power DDH}\) assumption holds over \(\mathcal {G}\), then the above construction is a single-key no-query secure constrained pseudorandom function for the admissible hash compatible constraint family as per Definition 7.
Correctness. We need to show that for any PRF key K, any constraint \(u \in \{0,1,\perp \}^n\), any key \(K_u\) constrained at u and any input \(x \in \{0,1\}^n\) such that \(P_u(x) = 0\), evaluation at x using the master PRF key K matches the evaluation at x using the constrained key \(K_u\).
More formally, let \(K \leftarrow \mathsf {Setup}(1^n)\), and let \(K = ((p, {\mathbb G}, g), \{c_{i,b}\})\). Let \(u \in \{0,1,\perp \}^n\) be any constraint, and let \(K_u = ((p, {\mathbb G}, g)\), u, \(\{g, g^a, \ldots , g^{a^{n-1}}\}\), \(\{v_{i,b}\})\) be the constrained key. On input \(x \in \{0,1\}^n\), the PRF evaluation using the master PRF key computes \(t =\prod c_{i,x_i}\) and outputs \(h = g^t\).
Let \(S = \{i \ : \ u_i = x_i \vee u_i =\ \perp \}\), and let \(s = |S|\). Since \(P_u(x) = 0\), it follows that \(s < n\) (since there is at least one index where \(u_i \ne \ \perp \wedge \ x_i \ne u_i \)). For all \(i \in S\), \(v_{i,x_i} \) is set to be \(c_{i,x_i}/a\), and for all \(i \notin S\), \(v_{i, x_i} = c_{i, x_i}\). As a result, \(v = \prod _i v_{i, x_i} = (\prod _{i} c_{i, x_i})/a^s\). Therefore, \((g^{a^s})^v = g^t = h\), which is equal to the master key evaluation.
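This correctness argument is easy to check experimentally. The sketch below is a toy Python version over the subgroup of quadratic residues modulo q = 1019 (a group of prime order p = 509 with generator 4); these tiny, insecure parameters and the encoding of \(\perp \) as '*' are our own illustrative choices.

```python
import random

random.seed(3)

# toy prime-order group: quadratic residues mod q = 1019,
# which form a group of prime order p = 509 generated by g = 4
q, p, g = 1019, 509, 4
n = 8

def setup():
    # master key: 2n integers c_{i,b} in Z_p^*
    return {(i, b): random.randrange(1, p) for i in range(n) for b in (0, 1)}

def constrain(K, u):
    # u is a string over {'0','1','*'}, with '*' standing for the wildcard
    a = random.randrange(1, p)
    inv_a = pow(a, -1, p)
    v = {(i, b): K[(i, b)] * inv_a % p if u[i] in ('*', str(b)) else K[(i, b)]
         for i in range(n) for b in (0, 1)}
    powers = [pow(g, pow(a, i, p), q) for i in range(n)]  # g, g^a, ..., g^(a^(n-1))
    return (u, powers, v)

def eval_master(K, x):
    t = 1
    for i in range(n):
        t = t * K[(i, int(x[i]))] % p
    return pow(g, t, q)

def eval_constrained(Ku, x):
    u, powers, v = Ku
    if all(u[i] in ('*', x[i]) for i in range(n)):
        return None              # P_u(x) = 1: x lies outside the allowed set
    s = sum(1 for i in range(n) if u[i] in ('*', x[i]))  # s < n here
    prod = 1
    for i in range(n):
        prod = prod * v[(i, int(x[i]))] % p
    return pow(powers[s], prod, q)   # (g^(a^s))^v = g^t
```

On every x with \(P_u(x) = 0\), `eval_constrained` agrees with `eval_master`; on the constrained set it returns None.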
Security. We will now show that the construction described above is secure as per Definition 7. Recall, in the single-key no-query security game, the adversary is allowed to query for a single constrained key, after which the adversary must output a challenge point not in the constrained set and then distinguish between the PRF evaluation at the challenge point and a truly random string. We will show that such an adversary can be used to break the \(n\text {-}\mathsf {power DDH}\) assumption. The reduction algorithm receives as challenge \((g, g^a, g^{a^2}, \ldots , g^{a^{n-1}})\) and T, where \(T = g^{a^{n}}\) or a uniformly random group element. The reduction algorithm then receives a constrained key query u from the adversary. The reduction algorithm chooses 2n random integers and sends them along with \((g, g^a, \ldots , g^{a^{n-1}})\). Now, the adversary sends a point x such that \(P_u(x) = 1\). The reduction algorithm will use T to respond to the adversary. The crucial point here is that the reduction does not need to know a to construct this response.
Lemma 2
Assuming the \(n\text {-}\mathsf {power DDH}\) assumption, for any \(\mathcal {A}\), \(\mathsf {Adv}^{\mathrm {CPRF}}_{\mathcal {A}}(n) \le \text {negl}(n)\).
Proof
Suppose there exists an adversary \(\mathcal {A}\) such that \(\mathsf {Adv}^{\mathrm {CPRF}}_{\mathcal {A}}(n) = \epsilon \). We will use \(\mathcal {A}\) to construct a reduction algorithm \(\mathcal {B}\) that breaks the \(n\text {-}\mathsf {power DDH}\) assumption. The reduction algorithm receives the group description \((p, {\mathbb G}, g)\), n group elements \((g_0\), \(g_1\), \(\ldots \), \(g_{n-1})\) and the challenge term T from the challenger. It then chooses 2n random integers \(v_{i,b} \leftarrow \mathbb {Z}_p^{*}\) for all \(i\le n, b\in \{0,1\}\). It receives the constrained key query u from \(\mathcal {A}\), and sends \(((p, {\mathbb G}, g), u, \{g_0, \ldots , g_{n-1}\}, \{v_{i,b}\}_{i, b})\) to \(\mathcal {A}\). Next, it receives the challenge input \(x \in \{0,1\}^n\) from \(\mathcal {A}\) such that \(P_u(x) = 1\). The reduction algorithm computes \(v = \prod _i v_{i, x_i}\) and sends \(T^v\) to the adversary. If \(\mathcal {A}\) guesses that the challenge string is random, then \(\mathcal {B}\) guesses that T is random, else it guesses that \(T = g^{a^n}\), where \(g_i = g^{a^i}\).
We now need to argue that \(\mathcal {B}\) perfectly simulates the single-key no-query constrained PRF game. First, let us consider the case when \(g_i = g^{a^i}\) and \(T = g^{a^n}\). The constrained key is distributed as in the actual security game. The reduction algorithm implicitly sets \(c_{i,b} = v_{i,b} \cdot a\) for all i, b such that \(u_i = b \vee u_i =\ \perp \). On challenge input x such that \(P_u(x) = 1\), let \(v = \prod _i v_{i, x_i}\). Note that \(t = \prod _i c_{i, x_i} = a^n v\). As a result, its output \(T^v = g^{t}\) is the correct PRF evaluation at x.
Now, suppose T is a uniformly random group element. Then, once again, the constrained key’s distribution is identical to the real security game distribution, and the response to PRF challenge is a uniformly random group element. This implies that \(\mathcal {B}\) can break the \(n\text {}\mathsf {power DDH}\) assumption with advantage \(\epsilon \).
5.2 Constrained PRFs from Phi-Hiding Assumption
The PRF key consists of an RSA modulus, its factorization, 2n integers, a random group generator h and a strong extractor seed. The PRF evaluation on an n-bit string is performed as follows: first choose n out of the 2n integers depending on the input, compute their product, then compute this product in the exponent of h and finally apply a strong extractor to the result.

\(\mathsf {Setup}(1^{\lambda })\): The setup algorithm takes as input the security parameter \(\lambda \). It first sets the input length \(n=\lambda \), the parameter \(\ell _{\mathrm {RSA}}= 20(n+1)\), and generates an RSA modulus \(N = pq\), where p, q are primes of \(\ell _{\mathrm {RSA}}/2\) bits each. Next, it chooses 2n integers \(c_{i,b} \leftarrow \mathbb {Z}_{\phi (N)}\) for \(i\le n\), \(b \in \{0,1\}\) and \(h \leftarrow \mathbb {Z}_N^*\). Finally, it sets \(\ell _{\mathrm {s}}= O(n)\) and chooses an extractor seed \(\mathfrak {s}\leftarrow \{0,1\}^{\ell _{\mathrm {s}}}\). It sets the master PRF key as \(K = \left( (N, p, q), \{c_{i,b}\}_{i\le n, b\in \{0,1\}}, h, \mathfrak {s} \right) \).

\(\mathsf {Constrain}(K, u \in \{0,1,\perp \}^n)\): The constrain algorithm takes as input the master PRF key \(K = \left( (N, p, q), \{c_{i,b}\}_{i, b}, h, \mathfrak {s} \right) \) and constraint \(u \in \{0,1, \perp \}^{n}\). It first chooses an integer \(e \in \mathbb {Z}_{\phi (N)}^*\) and computes, for all \(i\le n\), \(b \in \{0,1\}\),
$$\begin{aligned} v_{i,b} = {\left\{ \begin{array}{ll} (c_{i,b} - 1)\cdot e^{-1} \mod \phi (N) \quad \text { if }u_i = b \vee u_i =\ \perp \\ c_{i,b} \cdot e^{-1} \mod \phi (N) \quad \text { otherwise.} \end{array}\right. } \end{aligned}$$It sets the constrained key as \(K_u = \left( N, u, e, \{v_{i,b}\}_{i,b}, h^e, \mathfrak {s} \right) \).

\(\mathsf {Evaluate}(K, x \in \{0,1\}^n)\): The evaluation algorithm takes as input a PRF key K (which could be either the master PRF key or constrained PRF key) and an input string \(x \in \{0,1\}^n\).
If K is a master PRF key, then \(K =\left( (N, p, q), \{c_{i,b}\}_{i\le n, b\in \{0,1\}}, h, \mathfrak {s} \right) \). The evaluation algorithm computes \(t = \prod _{i \le n} c_{i, x_i}\) and outputs \(\mathsf {Ext}(h^t, \mathfrak {s})\).
If K is a constrained key, then \(K =\left( N, u, e, \{v_{i,b}\}_{i\le n, b\in \{0,1\}}, g, \mathfrak {s} \right) \). Recall g is set to be \(h^e\). The evaluation algorithm first checks if \(P_u(x) = 0\). If not, it outputs \(\perp \). Since \(P_u(x) = 0\), there exists an index i such that \(u_i \ne \ \perp \) and \(u_i \ne x_i\). Let \(i^*\) be the first such index. For all \(i \ne i^*\), compute \(w_{i,b} = v_{i, b} \cdot e + 1\) if \(u_i = b \vee u_i =\ \perp \), else \(w_{i,b} = v_{i,b} \cdot e\). Finally, set \(w_{i^*, x_{i^*}} = v_{i^*, x_{i^*}}\) and compute \(t' = \prod w_{i, x_i}\). Output \(\mathsf {Ext}(g^{t'}, \mathfrak {s})\).
Theorem 7
If the Phi-hiding assumption holds and \(\mathsf {Ext}\) is a \((\ell _{\mathrm {RSA}}/5, 1/2^{2n})\)-strong extractor as per Definition 11, then the above construction is a single-key no-query secure constrained pseudorandom function for the admissible hash compatible constraint family as per Definition 7.
Correctness. We need to show that for any PRF key K, any constraint \(u \in \{0,1,\perp \}^n\), any key \(K_u\) constrained at u and any input \(x \in \{0,1\}^n\) such that \(P_u(x) = 0\), evaluation at x using the master PRF key K matches the evaluation at x using the constrained key \(K_u\).
More formally, let \(K \leftarrow \mathsf {Setup}(1^n)\), and let \(K = ((N, p, q), \{c_{i,b}\}, h, \mathfrak {s})\). Let \(u \in \{0,1,\perp \}^n\) be any constraint, and let \(K_u = (N, u, e, \{v_{i,b}\}, h^e, \mathfrak {s})\) be the constrained key. On input \(x \in \{0,1\}^n\), the PRF evaluation using the master PRF key computes \(t =\prod c_{i,x_i}\) and outputs \(\mathsf {Ext}(h^t, \mathfrak {s})\).
Since \(P_u(x) = 0\), there is at least one index \(i^*\) where \(u_{i^*} \ne \ \perp \wedge \ x_{i^*} \ne u_{i^*} \). As a result, \(v_{i^*, x_{i^*}} = c_{i^*, x_{i^*}} \cdot e^{-1}\). For all \(i\ne i^*\), we can compute \(c_{i,b}\) given \(v_{i,b}\) and e. Therefore, if we define \(w_{i,b}\) as in the evaluation algorithm and compute \(t' = \prod _i w_{i, x_i}\), then \((h^e)^{t'} = h^{\prod c_{i, x_i}}\). Since both the constrained key evaluation and PRF key evaluation use the same extractor seed, the evaluation using the constrained key is correct.
Security. If \(P_u(x) = 1\), then there exists no i such that \(v_{i, x_i} = c_{i, x_i} \cdot e^{-1}\). As a result, suppose there exists an adversary \(\mathcal {A}\) that can win the single-key no-query constrained PRF security game. Then we can use \(\mathcal {A}\) to break the Phi-hiding assumption. We will prove security via a sequence of hybrid experiments. First, we will switch the exponent e in the constrained key from being a random element (coprime to \(\phi (N)\)) to a factor of \(\phi (N)\). This step will rely on the Phi-hiding assumption. Next, we will show that any adversary has negligible advantage if e divides \(\phi (N)\). Intuitively, this step follows because the quantity \(\gamma = h^e\) in the constrained key does not reveal h; there could be e different \(e^{th}\) roots of \(\gamma \). As a result, running the extractor on \(h^{\prod c_{i,x_i}}\) outputs a uniformly random bit. Due to space constraints, we defer the formal proof to the full version of our paper.
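The information-theoretic step of this argument can be checked numerically: once e divides \(\phi (N)\), the map \(h \mapsto h^e\) is many-to-one, so \(\gamma = h^e\) leaves h undetermined. A toy check, with made-up tiny parameters:

```python
# Toy numeric check: when e divides phi(N), the published value h^e has
# several e-th roots in Z_N^*, so it hides h (tiny illustrative parameters).
from math import gcd

p, q = 11, 13
N, phi = p * q, (p - 1) * (q - 1)     # N = 143, phi(N) = 120
e = 5                                  # e divides phi(N)
assert phi % e == 0

h = 2
gamma = pow(h, e, N)                   # the component h^e of the constrained key
roots = [x for x in range(1, N)
         if gcd(x, N) == 1 and pow(x, e, N) == gamma]
assert h in roots and len(roots) > 1   # gamma does not pin down h
```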
5.3 Constrained Unpredictable Functions from RSA Assumption
The PRF key consists of an RSA modulus, its factorization, 2n integers, and a random group generator h. The PRF evaluation on an n-bit string is performed as follows: first choose n of the 2n integers depending on the input, compute their product, and then output h raised to this product.

\(\mathsf {Setup}(1^{\lambda })\): The setup algorithm takes as input the security parameter \(\lambda \). It first generates an RSA modulus \(N = pq\), where p, q are primes of \(\ell _{\mathrm {RSA}}/2\) bits each. Next, it chooses 2n integers \(c_{i,b} \leftarrow \mathbb {Z}_{\phi (N)}\) for \(i\le n\), \(b \in \{0,1\}\) and \(h \leftarrow \mathbb {Z}_N^*\). It sets the master PRF key as \(K = \left( (N, p, q), \{c_{i,b}\}_{i\le n, b\in \{0,1\}}, h \right) \).

\(\mathsf {Constrain}(K, u \in \{0,1,\perp \}^n)\): The constrain algorithm takes as input the master PRF key \(K = \left( (N, p, q), \{c_{i,b}\}_{i, b}, h \right) \) and constraint \(u \in \{0,1, \perp \}^{n}\). It first chooses an integer \(e \in \mathbb {Z}_{\phi (N)}^*\) and computes, for all \(i\le n\), \(b \in \{0,1\}\),
$$\begin{aligned} v_{i,b} = {\left\{ \begin{array}{ll} (c_{i,b} - 1)\cdot e^{-1} \mod \phi (N) \quad \text { if }u_i = b \vee u_i =\ \perp \\ c_{i,b} \cdot e^{-1} \mod \phi (N) \quad \text { otherwise.} \end{array}\right. } \end{aligned}$$It sets the constrained key as \(K_u = \left( N, u, e, \{v_{i,b}\}_{i,b}, h^e \right) \).

\(\mathsf {Evaluate}(K, x \in \{0,1\}^n)\): The evaluation algorithm takes as input a PRF key K (which could be either the master PRF key or constrained PRF key) and an input string \(x \in \{0,1\}^n\).
If K is a master PRF key, then \(K =\left( (N, p, q), \{c_{i,b}\}_{i\le n, b\in \{0,1\}}, h \right) \). The evaluation algorithm computes \(t = \prod _{i \le n} c_{i, x_i}\) and outputs \(h^t\).
If K is a constrained key, then \(K =\left( N, u, e, \{v_{i,b}\}_{i\le n, b\in \{0,1\}}, g \right) \). Recall g is set to be \(h^e\). The evaluation algorithm first checks if \(P_u(x) = 0\). If not, it outputs \(\perp \). Since \(P_u(x) = 0\), there exists an index i such that \(u_i \ne \ \perp \) and \(u_i \ne x_i\). Let \(i^*\) be the first such index. For all \(i \ne i^*\), compute \(w_{i,b} = v_{i, b} \cdot e + 1\) if \(u_i = b \vee u_i =\ \perp \), else \(w_{i,b} = v_{i,b} \cdot e\). Finally, set \(w_{i^*, x_{i^*}} = v_{i^*, x_{i^*}}\) and compute \(t' = \prod w_{i, x_i}\). Output \(g^{t'}\).
Correctness. The proof of correctness is identical to that provided for correctness of constrained PRF in Sect. 5.2.
Security. If \(P_u(x) = 1\), then there exists no i such that \(v_{i, x_i} = c_{i, x_i} \cdot e^{-1}\). As a result, suppose there exists an adversary \(\mathcal {A}\) that can win the single-key no-query constrained unpredictable function security game. Then we can use \(\mathcal {A}\) to break the RSA assumption. We will prove security via a sequence of hybrid experiments. The idea is to set \(h^e\) to be the RSA challenge. At a high level, if the adversary can win the unpredictability game (i.e., correctly output \(h^{\prod c_{i, x_i}}\) at a point x such that \(P_u(x) = 1\)), then it must have computed the \(e^{th}\) root of \(h^e\).
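The final step can be illustrated concretely: at a point where \(P_u(x) = 1\), every index satisfies \(w_{i,x_i} = v_{i,x_i}\cdot e + 1\), so \(\prod c_{i,x_i} \equiv 1 \pmod e\); a correct prediction \(h^{\prod c_{i,x_i}}\) together with the public \(v_{i,x_i}\) values then yields the \(e^{th}\) root of \(g = h^e\). Below is a toy numeric sketch of this extraction step with made-up tiny parameters, not the paper's formal reduction:

```python
# Toy sketch of the root-extraction step: at a point with P_u(x) = 1, a
# correct prediction y = h^{prod c_i} yields the e-th root of g = h^e.
import random

random.seed(7)
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)
h = 5                                  # the secret e-th root
e = 17                                 # chosen coprime to phi(N)
e_inv = pow(e, -1, phi)                # raises ValueError if not coprime
g = pow(h, e, N)                       # RSA-style challenge g = h^e

n = 4
c = [random.randrange(1, phi) for _ in range(n)]
v = [((ci - 1) * e_inv) % phi for ci in c]   # key components along x, P_u(x) = 1

t = 1
for ci in c:
    t = (t * ci) % phi
y = pow(h, t, N)                       # the adversary's correct prediction

# Over the integers, prod(v_i*e + 1) = e*s + 1, so y = g^s * h mod N
# and h can be peeled off:
prod = 1
for vi in v:
    prod *= vi * e + 1
s = (prod - 1) // e
h_recovered = (y * pow(g, -s, N)) % N  # pow with -s computes a modular inverse
assert pow(h_recovered, e, N) == g     # the e-th root of the challenge
```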
The detailed proof can be found in the full version of our paper.
Notes
 1.
Looking ahead, it will be crucial for proving the unique provability property that the commitment scheme used is perfectly binding.
 2.
The challenger needs to perform an abort step in case of bad partitioning, however for the above informal exposition we avoid discussing it. More details are provided in Sect. 3.
 3.
We will be using the (low-noise) Knapsack LPN assumption. The Knapsack LPN assumption states that for a uniformly random matrix \(\mathbf {A}\) with fewer rows than columns, and a matrix \(\mathbf {E}\) each of whose entries is 1 independently with probability p, the pair \((\mathbf {A}, \mathbf {A}\mathbf {E})\) looks like a pair of uniformly random matrices.
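For concreteness, a minimal sampler for this distribution; the dimensions and noise rate below are arbitrary toy choices, not parameters from the paper:

```python
# Minimal sampler for the Knapsack LPN distribution described above
# (toy dimensions and noise rate, for illustration only).
import random

random.seed(0)

def knapsack_lpn_sample(k, m, cols, p):
    # A: uniform k x m matrix over GF(2), with fewer rows than columns (k < m)
    A = [[random.randrange(2) for _ in range(m)] for _ in range(k)]
    # E: m x cols noise matrix, each entry 1 independently with probability p
    E = [[1 if random.random() < p else 0 for _ in range(cols)]
         for _ in range(m)]
    # B = A*E over GF(2); the assumption says (A, B) is pseudorandom
    B = [[sum(A[i][j] * E[j][l] for j in range(m)) % 2 for l in range(cols)]
         for i in range(k)]
    return A, B

A, B = knapsack_lpn_sample(k=8, m=16, cols=4, p=0.1)
```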
 4.
We would like to point out that our notation departs from what has been used in the literature. Traditionally, the constrained key allows PRF evaluation on points that satisfy \(C(x) = 1\); we instead switch the constraint to \(C(x) = 0\) for convenience.
 5.
We note that most randomly chosen linear codes satisfy this property.
References
Alekhnovich, M.: More on average case vs approximation complexity. In: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (2003)
Badrinarayanan, S., Goyal, V., Jain, A., Sahai, A.: Verifiable functional encryption. In: Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016. LNCS, vol. 10032, pp. 557–587. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53890-6_19
Badrinarayanan, S., Goyal, V., Jain, A., Sahai, A.: A note on VRFs from verifiable functional encryption. Cryptology ePrint Archive, Report 2017/051 (2017). http://eprint.iacr.org/2017/051
Barak, B., Ong, S.J., Vadhan, S.: Derandomization in cryptography. SIAM J. Comput. 37, 380–400 (2007)
Bitansky, N.: Verifiable random functions from non-interactive witness-indistinguishable proofs. Cryptology ePrint Archive, Report 2017/018 (2017). http://eprint.iacr.org/2017/018
Bitansky, N., Paneth, O.: ZAPs and non-interactive witness indistinguishability from indistinguishability obfuscation. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9015, pp. 401–427. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46497-7_16
Boneh, D., Boyen, X.: Secure identity based encryption without random oracles. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 443–459. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28628-8_27
Boneh, D., Waters, B.: Constrained pseudorandom functions and their applications. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013. LNCS, vol. 8270, pp. 280–300. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-42045-0_15
Boneh, D., Zhandry, M.: Multiparty key exchange, efficient traitor tracing, and more from indistinguishability obfuscation. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8616, pp. 480–499. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44371-2_27
Boyle, E., Goldwasser, S., Ivan, I.: Functional signatures and pseudorandom functions. In: Krawczyk, H. (ed.) PKC 2014. LNCS, vol. 8383, pp. 501–519. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54631-0_29
Brakerski, Z., Vaikuntanathan, V.: Constrained key-homomorphic PRFs from standard lattice assumptions - or: how to secretly embed a circuit in your PRF. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9015, pp. 1–30. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46497-7_1
Chase, M., Meiklejohn, S.: Déjà Q: using dual systems to revisit q-type assumptions. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 622–639. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55220-5_34
Dodis, Y.: Efficient construction of (distributed) verifiable random functions. In: Desmedt, Y.G. (ed.) PKC 2003. LNCS, vol. 2567, pp. 1–17. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36288-6_1
Dodis, Y., Yampolskiy, A.: A verifiable random function with short proofs and keys. In: Vaudenay, S. (ed.) PKC 2005. LNCS, vol. 3386, pp. 416–431. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-30580-4_28
Dwork, C., Naor, M.: Zaps and their applications. In: FOCS, pp. 283–293 (2000)
Feige, U., Shamir, A.: Witness indistinguishable and witness hiding protocols. In: STOC, pp. 416–426 (1990)
Freire, E.S.V., Hofheinz, D., Paterson, K.G., Striecks, C.: Programmable hash functions in the multilinear setting. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8042, pp. 513–530. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40041-4_28
Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: FOCS (2013)
Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: STOC, pp. 197–206 (2008)
Goldreich, O., Goldwasser, S., Micali, S.: How to construct random functions (extended abstract). In: FOCS, pp. 464–479 (1984)
Groth, J., Ostrovsky, R., Sahai, A.: New techniques for non-interactive zero-knowledge. J. ACM 59(3), 11 (2012)
Hofheinz, D., Jager, T.: Verifiable random functions from standard assumptions. In: Kushilevitz, E., Malkin, T. (eds.) TCC 2016. LNCS, vol. 9562, pp. 336–362. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49096-9_14
Hohenberger, S., Sahai, A., Waters, B.: Replacing a random oracle: full domain hash from indistinguishability obfuscation. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 201–220. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55220-5_12
Hohenberger, S., Waters, B.: Constructing verifiable random functions with large input spaces. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 656–672. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13190-5_33
Jager, T.: Verifiable random functions from weaker assumptions. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9015, pp. 121–143. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46497-7_5
Jain, A., Krenn, S., Pietrzak, K., Tentes, A.: Commitments and efficient zero-knowledge proofs from learning parity with noise. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 663–680. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34961-4_40
Kiayias, A., Papadopoulos, S., Triandopoulos, N., Zacharias, T.: Delegatable pseudorandom functions and applications. In: ACM Conference on Computer and Communications Security, pp. 669–684 (2013)
Kiltz, E., Masny, D., Pietrzak, K.: Simple chosen-ciphertext security from low-noise LPN. IACR Cryptology ePrint Archive 2015, 401 (2015). http://eprint.iacr.org/2015/401
Lysyanskaya, A.: Unique signatures and verifiable random functions from the DH-DDH separation. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 597–612. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45708-9_38
Micali, S., Rabin, M., Vadhan, S.: Verifiable random functions. In: Proceedings 40th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 120–130. IEEE (1999)
Micciancio, D., Regev, O.: Worstcase to averagecase reductions based on Gaussian measures. SIAM J. Comput. 37(1), 267–302 (2007)
Naor, M.: Bit commitment using pseudorandomness. J. Cryptol. 4(2), 151–158 (1991)
Naor, M., Reingold, O.: Number-theoretic constructions of efficient pseudorandom functions. J. ACM 51(2), 231–262 (2004)
Nisan, N., Wigderson, A.: Hardness vs randomness. J. Comput. Syst. Sci. 49(2), 149–167 (1994). http://dx.doi.org/10.1016/S0022-0000(05)80043-1
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, 22–24 May 2005, pp. 84–93 (2005)
Sahai, A., Waters, B.: How to use indistinguishability obfuscation: deniable encryption, and more. In: STOC, pp. 475–484 (2014)
Acknowledgements
We give a large thanks to David Zuckerman for helpful discussions regarding the error correcting code described in Sect. 4.3. The second author is supported by the National Science Foundation (NSF) CNS1228443 and CNS1414023, the Office of Naval Research under contract N000141410333, and a Microsoft Faculty Fellowship. The fourth author is supported by NSF CNS1228599 and CNS1414082, DARPA SafeWare, Microsoft Faculty Fellowship, and Packard Foundation Fellowship.
© 2017 International Association for Cryptologic Research
Goyal, R., Hohenberger, S., Koppula, V., Waters, B. (2017). A Generic Approach to Constructing and Proving Verifiable Random Functions. In: Kalai, Y., Reyzin, L. (eds) Theory of Cryptography. TCC 2017. Lecture Notes in Computer Science, vol 10678. Springer, Cham. https://doi.org/10.1007/978-3-319-70503-3_18
DOI: https://doi.org/10.1007/978-3-319-70503-3_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-70502-6
Online ISBN: 978-3-319-70503-3