1 Introduction

Can we generate short proofs of correctness, consisting of a few bytes, of long and possibly non-deterministic computations, consisting of billions of steps? The notion of succinct non-interactive arguments (SNARGs) [19, 33] gives us a positive answer to this question by introducing two relaxations to the problem statement: computational soundness and the existence of a trusted common random, or reference, string. The construction of SNARGs has been a tremendously active area of research in cryptography, leading us to three broadly defined clusters of constructions.

The first cluster of constructions, starting from [33] (building on Kilian’s succinct interactive arguments [31]), showed SNARGs (and even SNARKs, succinct non-interactive arguments of knowledge) for all \(\textsf{NP}\) languages with security in the random oracle model or under strong, non-falsifiable, knowledge assumptions [7,8,9, 20]. This class of SNARG constructions has since been engineered to have superior concrete efficiency (e.g. in a line of work starting from [5, 6, 35]) and has been deployed in the real world [4]. Their very popularity and increasingly widespread use raise the question of whether one can place SNARGs (and SNARKs) on a more solid footing of security, with constructions based on falsifiable assumptions. The rest of this paper will focus on constructions in the plain model (that is, eschewing random oracles).

The second cluster of constructions is really a singular beautiful construction, due to Sahai and Waters [38], that builds a SNARG for \(\textsf{NP}\) from a subexponentially falsifiable assumption, namely the existence of indistinguishability obfuscation (IO) schemes.Footnote 1 The Sahai-Waters SNARG comes with very short proofs, essentially \(O(\lambda )\) bits for a soundness error of \(2^{-\lambda }\), but a long common reference string (CRS) that is as long as the \(\textsf{NP}\) witness. A recent work of Jain and Jin [24] shows how to reduce the size of the CRS for a subclass of \(\textsf{NP} \cap \textsf{coNP}\). While IO can now be based on relatively well-studied (and falsifiable) cryptographic assumptions [22, 23], the constructions themselves are complex and inefficient, which motivates the quest for simpler and more direct constructions. An additional issue is that the constructions of [22, 23] are not post-quantum secure.

The third cluster of constructions follows the paradigm of Kalai, Raz and Rothblum [29, 30] who construct a designated verifier SNARG for P from the learning with errors (LWE) assumption. A designated verifier SNARG is a further relaxation where verification of the SNARG proof requires a private verification key. This has since been built upon by a large body of work over the last decade ([15, 17, 25, 28] and many others), obtaining public verifiability and extending the reach to larger and larger classes of languages. However, until now, the most expressive class for which we can construct a SNARG following this line of work (even in the easier designated verifier setting) is a subclass of \(\textsf{NP} \cap \textsf{coNP}\) [25].

Several intriguing questions emerge in light of these constructions.

SNARG for \(\textsf{NP}\) from LWE? Can we match the expressivity of the Sahai-Waters SNARG, while basing security only on the well-studied (and presumably post-quantum secure) LWE assumption? The question is open even in the easier, designated verifier (dv), setting.

Q1: Can we construct a (dv)SNARG for \(\textsf{NP}\) (or large subclasses thereof) from the LWE assumption?

Our first contribution is the construction of a designated verifier SNARG with reusable soundness for \(\textsf{UP}\) (unambiguous nondeterministic polynomial time), a subclass of \(\textsf{NP}\) where each instance has a unique witness (see Example 1 for a concrete example of such a language), from a variant of LWE (discussed in the next paragraph). In a designated verifier SNARG, one could require soundness against (efficient) malicious provers that either do or do not have oracle access to the verification algorithm. Soundness against the former (stronger) class of malicious provers is referred to as reusable soundness, since the verification key can be safely reused to check polynomially many proofs. If the cheating prover is not allowed access to the verification oracle, one obtains the weaker notion of non-reusable soundness (some SNARG constructions, e.g. [29, 30], achieve only this weaker notion). Henceforth, when we refer to a designated verifier SNARG, we will always mean one with reusable soundness.

Our construction is sound under a relatively new, but popular, variant of LWE called evasive LWE [21, 39, 40, 44]. Informally, evasive LWE acts as a “bridge” between the standard learning with errors (LWE) assumption and the existence of indistinguishability obfuscation (IO). Evasive LWE has proved fruitful in constructing several cryptographic primitives such as witness encryption and null-IO [39, 40], optimal broadcast encryption [44], multi-authority ABE [41] and unbounded-depth attribute-based encryption [21] that we have so far been able to construct from IO but not from the LWE assumption alone. These constructions also provide a potential pathway to achieving constructions based on plain LWE. (For a description of the evasive LWE assumption, we refer the reader to Eq. (1) in Sect. 2.1).

Informal Theorem 1

Assuming subexponential LWE and evasive LWE, there exists a reusably sound designated-verifier zero-knowledge SNARG for UP. The length of the SNARG proof is \(\lambda +\omega (\log (|x|+|w|+\lambda ))\) to achieve a soundness error of \(2^{-\lambda }\). The length of the common reference string is \(\textsf{poly}(\lambda ,|x|,|w|)\).

We achieve this result via the construction of a new average-case obfuscator for a class of function families with pseudorandom truth tables, called \(\sigma \)-PRF obfuscation, from evasive LWE (see Sect. 4). We consider the notion and construction of \(\sigma \)-PRF obfuscation of potentially independent interest; for example, our construction captures and generalizes many existing constructions of primitives from LWE such as shift hiding functions and constrained pseudorandom functions.

Example 1

As an example, consider the language Factor defined by the relation:

$$ R(N, (p, q)) = {\left\{ \begin{array}{ll} 1 & \text {if}\,\, N = pq\,\, \text {and}\,\, p \le q\,\, \text {are both primes,}\\ 0 & \text {otherwise.} \end{array}\right. } $$

i.e. Factor is the language that contains all composite numbers which are a product of two primes. By the uniqueness of factorization, it is clear that the above is a \(\textsf{UP}\) relation. Informal Theorem 1 gives a SNARG for Factor from LWE and evasive LWE. As far as we know, this is an example of a language for which we do not currently have a SNARG from LWE.Footnote 2
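As a quick illustration, here is a minimal Python sketch of this relation check (the trial-division primality test and the function names are ours, purely for illustration):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small illustrative inputs."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def R_factor(N: int, p: int, q: int) -> int:
    """The UP relation for Factor: accept iff N = p*q with p <= q both prime."""
    return int(N == p * q and p <= q and is_prime(p) and is_prime(q))

assert R_factor(15, 3, 5) == 1   # the unique witness of 15 is (3, 5)
assert R_factor(15, 5, 3) == 0   # the ordering p <= q rules out the mirrored pair
assert R_factor(16, 4, 4) == 0   # 4 is not prime
```

The ordering constraint \(p \le q\) is what makes the witness unique, which is exactly the \(\textsf{UP}\) property used throughout.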

Adaptive Soundness. A second question relates to the notion of adaptive soundness, which allows a cheating prover to pick the instance on which she decides to cheat based on the common reference string. On the one hand, adaptive soundness is the desired form of security, as many applications require one to publish a CRS and then handle \(\textsf{NP}\) statements chosen after the fact. On the other hand, constructions from the second and third clusters typically do not satisfy adaptive soundness. In particular, the Sahai-Waters construction is non-adaptively sound, and the construction of Jin, Kalai, Lombardi and Vaikuntanathan [25] for a subclass of \(\textsf{NP}\) \(\cap \ \textsf{coNP}\) is also inherently non-adaptively sound. This question is interesting even for the easier designated verifier setting, even with strong (falsifiable) assumptions, and even for subclasses of \(\textsf{NP}\).Footnote 3

Q2: Can we construct an adaptively secure (dv)SNARG for \(\textsf{NP}\) (or an interesting subclass thereof)?

Our second contribution is the construction of an adaptively secure reusable dvSNARG from evasive LWE. Indeed, we show that the construction from Informal Theorem 1 is adaptively sound as-is.

As a third contribution, we show a general lifting theorem, based on careful complexity leveraging, that takes any non-adaptively sound “Sahai-Waters-type” \(\textsf{dvSNARG}\) construction, and shows that it is also adaptively sound, without increasing the proof length (rather, only the length of the CRS). In particular, we show that the Sahai-Waters SNARG restricted to the designated verifier setting can be viewed through the lens of witness PRFs, a notion introduced by Zhandry [45]. We then show that any witness-PRF based designated-verifier SNARG can be complexity leveraged to be adaptively sound without affecting the length of the proof.

Informal Theorem 2

Assuming the subexponential hardness of the underlying assumptions, any dvSNARG constructed via witness PRFs can be made adaptively sound, with proof size \(\lambda + \omega (\log (|x| + |w| + \lambda ))\) to achieve a soundness error of \(2^{-\lambda }\).

On the Gentry-Wichs Barrier. It is instructive to pause and ponder why our result does not contradict the Gentry-Wichs (GW) impossibility for SNARGs [19]. Gentry and Wichs showed that any construction of an adaptively sound SNARG for a hard language (even a designated verifier one, and even one with a long CRS [14]) cannot be based on a falsifiable assumption. In slightly more detail, any purported reduction from solving an instance of a falsifiable assumption to breaking the adaptive soundness of such a SNARG can be turned into an algorithm (that runs in the same time as the reduction) that decides the language without any help. Ergo, if the language is hard, such a reduction does not exist. Our construction seems to overcome the GW impossibility for two reasons, one more fundamental than the other: (1) our reduction runs in time exponential in the witness length, and is thus trivially powerful enough to decide the language; and (2) the evasive LWE assumption, on the face of it, is not falsifiable. The second of these two reasons does not appear to be fundamental, again for two reasons. First, to the best of our knowledge, it is plausible that evasive LWE could one day be proved hard under a falsifiable assumption. In such a world, the only reason why our SNARG construction evades the GW impossibility would be the runtime of the reduction. Secondly, it is worth noting that \(\sigma \)-PRF obfuscation, on which our SNARG for \(\textsf{UP}\) is directly based, is a subexponentially falsifiable assumption, in the same sense that IO is a subexponentially falsifiable assumption.

We emphasize that the fact that our reduction runs in time exponential in the witness length affects the length of the CRS but not the length of the proof (the same way it plays out in the [38] SNARG).

SNARKs. A third question relates to the notion of succinct non-interactive arguments of knowledge, or SNARKs. SNARKs are more directly useful in several applications than plain SNARGs. They also compose better, e.g. in recursive constructions [8]. However, we know much less about SNARKs; by a recent result, SNARKs in the plain model with black-box extraction do not exist for all of \(\textsf{NP}\) [14, 27]. This leads us to the following question:

Q3: Can we construct a (dv)SNARK for an interesting subclass of \(\textsf{NP}\)?

We show a general compiler that takes any SNARG for \(\textsf{UP}\) and converts it into an extractable SNARK, preserving adaptive soundness and zero-knowledge. Given the impossibility result for \(\textsf{NP}\) mentioned above [14], our restriction to \(\textsf{UP}\) is essential.

Informal Theorem 3

Assuming subexponential LWE, any subexponentially sound SNARG for \(\textsf{UP} \) can be compiled into one with (non-adaptive) black-box knowledge soundness. If the original SNARG is publicly verifiable, so is the SNARK. If the underlying SNARG is adaptively sound, then the resulting SNARK is adaptively sound (with non-adaptive knowledge soundness). Additionally, if the underlying SNARG is zero-knowledge, the resulting SNARK is also zero-knowledge.

We achieve the weaker form of non-adaptive knowledge extraction; the stronger adaptive form is also known to be impossible w.r.t. black-box extraction for essentially any non-trivial language [14]. Our transformation here builds heavily on the work of [14] and corrects two issues with their construction (see Remark 2). Additionally, the transformation of [14] relied on a \(\textsf{SNARG}\) for \(\textsf{NP}\) to preserve zero-knowledge, whereas a \(\textsf{SNARG}\) for \(\textsf{UP}\) is sufficient for our transformation. Intuitively, we achieve this by pairing the \(\textsf{SNARG}\) scheme with an injective public key encryption scheme (see Sect. 2.3 for more details) to ensure that a \(\textsf{SNARG}\) for \(\textsf{UP}\) relations is sufficient for the upgrade. This also illustrates how one can use a \(\textsf{SNARG}\) for \(\textsf{UP}\) - namely, by pairing it with injective cryptographic primitives.

Combining this transformation with our SNARG for \(\textsf{UP}\), we obtain a construction of a reusably and adaptively sound, and non-adaptively extractable, SNARK for \(\textsf{UP}\) from evasive LWE.

Informal Theorem 4

Assuming subexponential LWE and evasive LWE, there exists a reusable and adaptively sound zero-knowledge \(\textsf{SNARK}\) for \(\textsf{UP}\) in the designated verifier setting, achieving non-adaptive black-box knowledge soundness with proof size \(\textsf{poly}(\lambda )\). The length of the common reference string is \(\textsf{poly}(\lambda , |x|, |w|).\)

Additionally, by applying our compilers to the Sahai-Waters \(\textsf{SNARG}\) with subexponential hardness of underlying assumptions, we also obtain the following corollary.

Informal Theorem 5

Assuming LWE, subexponentially-secure indistinguishability obfuscation and subexponentially-secure one-way functions, there exists a reusable and adaptively sound zero-knowledge \(\textsf{SNARK}\) system for \(\textsf{UP}\) in the designated verifier setting, achieving black-box knowledge soundness with proof size \(\textsf{poly}(\lambda )\). The length of the common reference string is \(\textsf{poly}(\lambda , |x|, |w|).\)

Example 2

As another illustrative example, consider the following language Dlog based on the discrete logarithm assumption. Given a generator g of a cyclic group \(\mathbb {G}\), we can define the relation \(R_{g, \mathbb {G}} : \mathbb {G} \times \{1, \dots , |\mathbb {G}|\} \rightarrow \{0, 1\}\):

$$ R_{g, \mathbb {G}}(h, x) = {\left\{ \begin{array}{ll} 1 & \text {if}\,\, h = g^x\\ 0 & \text {otherwise.} \end{array}\right. } $$

This is a \(\textsf{UP}\) relation: the exponent x is unique because g generates \(\mathbb {G}\). However, the corresponding language \(L_{g, \mathbb {G}}\) is vacuous in the sense that every \(h \in \mathbb {G}\) belongs to \(L_{g, \mathbb {G}}\). Hence, this language has a trivial SNARG, namely, a SNARG where the verifier always accepts. On the other hand, our SNARK construction guarantees that a prover that produces an accepting proof “knows” x. Hence, this gives a meaningful construction of a succinct argument for Dlog.
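A small Python sketch of this relation over \(\mathbb {Z}_p^*\) (the concrete group and generator below are illustrative choices of ours):

```python
p = 23   # a small prime; Z_p^* is cyclic of order 22
g = 5    # a generator of Z_p^*

def R_dlog(h: int, x: int) -> int:
    """Accept iff h = g^x in Z_p^*; the exponent x in {1, ..., 22} is unique."""
    return int(pow(g, x, p) == h % p)

# Every group element is in the language, so the SNARG is trivial,
# but an accepting SNARK proof demonstrates knowledge of the unique x.
assert R_dlog(pow(g, 7, p), 7) == 1
assert R_dlog(pow(g, 7, p), 8) == 0
```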

Overview of Results. For a roadmap of our results, see Fig. 1.

Fig. 1. Roadmap of our results and transformations. All of the constructions above are additionally multi-theorem adaptively zero knowledge. Our results are highlighted in orange, and concurrent work and its implications are highlighted in blue. Here, X denotes one additional assumption (on top of \(\textsf{iO}\) and \(\textsf{OWF}\)) as in Theorem 1. All transformations from our paper are denoted by arrows labeled by the corresponding sections. The dashed arrow represents an immediate consequence. (Color figure online)

1.1 Concurrent Work

In concurrent works, Waters and Wu [42] and Waters and Zhandry [43] modify the Sahai-Waters construction to achieve adaptivity in the publicly-verifiable setting. They show the following.

Theorem 1

([42, 43]). Assuming (1) either the polynomial hardness of computing discrete logs in a prime-order group, the polynomial hardness of factoring, or the polynomial hardness of LWE, (2) subexponentially-secure indistinguishability obfuscation, and (3) subexponentially-secure one-way functions, there exists a publicly verifiable, perfectly zero-knowledge \(\textsf{SNARG}\) for all of \(\textsf{NP}\), with proof size \(\textsf{poly}(\lambda )\).

In comparison to our “lifting” theorem (Informal Theorem 2), we note that the transformation in either work introduces a new assumption and requires a white-box modification of the Sahai-Waters \(\textsf{SNARG}\). On the other hand, our transformation works for any witness-PRF based SNARG, although we are restricted to the (reusable) designated verifier setting. We then show how to instantiate this witness PRF template in two different ways: one via our witness PRF for \(\textsf{UP}\), and another via the Sahai-Waters construction.

Combining the Waters-Wu or the Waters-Zhandry \(\textsf{SNARG}\) with our \(\textsf{SNARG}\) to \(\textsf{SNARK}\) compiler in Informal Theorem 3, or the compiler of [14], we obtain the corollary that there exists a (publicly verifiable) \(\textsf{zk}\textsf{SNARK}\) for all of \(\textsf{UP}\) achieving black-box extractability (see Fig. 1).

1.2 Organization of the Paper

First, in Sect. 2, we give high-level descriptions of our SNARG for UP, our adaptive security transformation, and our SNARG-to-SNARK compiler. In Sect. 3, we state basic definitions and lemmas. In Sect. 4, we introduce the notion of \(\sigma \)-matrix PRFs, and show how to obfuscate sufficiently secure \(\sigma \)-matrix PRFs using LWE and evasive LWE. In Sect. 5, we use our \(\sigma \)-matrix PRF obfuscation to construct an adaptively secure witness PRF for UP. In Sect. 6, we show a generic transformation from a (sufficiently secure) adaptively secure witness PRF for an NP language L to a reusable, adaptively secure designated-verifier SNARG for L. In Sect. 7, we show that witness PRF-based SNARGs (which includes our SNARG for UP) can be generically upgraded to adaptive soundness in the designated verifier setting, if we assume the subexponential hardness of the underlying primitives. Finally, in Sect. 8, we show that any subexponentially sound SNARG for \(\textsf{UP}\) can be generically transformed into a SNARK for \(\textsf{UP}\), with black-box knowledge soundness.

We defer many details to the full version of the paper [32].

2 Technical Overview

2.1 Our SNARG for UP from Evasive LWE

Sahai-Waters SNARG Template. We first recall the zk-SNARG for \(\textsf{NP}\) construction of Sahai and Waters [38] from indistinguishability obfuscation and one-way functions. For simplicity, we describe their construction in the designated-verifier model. Given a relation circuit C, the Sahai-Waters common reference string is an obfuscation of the following circuit P. On input (x, w), P checks if \(C(x, w) = 1\). If yes, P outputs a PRF value \(\pi = f_k(x)\), and otherwise it outputs \(\bot .\) The designated verifier simply stores the PRF key k, and on input \((x, \pi )\), accepts iff \(\pi = f_k(x)\). To show soundness of the SNARG on a non-adaptively chosen \(x^* \notin L\), Sahai and Waters use a punctured programming technique to show that \(f_k(x^*)\) remains hidden even given an indistinguishability obfuscation of P.
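The following toy Python sketch mirrors this designated-verifier template, with HMAC standing in for the PRF \(f_k\) and a plain closure standing in for the obfuscated program; in the real construction P is published as an indistinguishability obfuscation, and all names here are illustrative:

```python
import hashlib
import hmac
import os

def setup(C):
    """C(x, w) -> bool is the relation circuit. Returns (crs, verification key)."""
    k = os.urandom(32)                      # PRF key k for f_k
    def P(x: bytes, w: bytes):
        # In Sahai-Waters, the CRS is an obfuscation of this program.
        if C(x, w):
            return hmac.new(k, x, hashlib.sha256).digest()   # pi = f_k(x)
        return None                                           # bot
    return P, k

def verify(k: bytes, x: bytes, pi: bytes) -> bool:
    expected = hmac.new(k, x, hashlib.sha256).digest()
    return pi is not None and hmac.compare_digest(pi, expected)

# Example relation: "w is a SHA-256 preimage of x".
C = lambda x, w: hashlib.sha256(w).digest() == x
P, k = setup(C)
w = b"witness"
x = hashlib.sha256(w).digest()
assert verify(k, x, P(x, w))
```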

Our Approach via Witness PRFs. Our SNARG for \(\textsf{UP}\) construction proceeds in two steps, and the second step follows an approach very similar to that of Sahai and Waters. In the first step, we construct a witness PRF as defined by Zhandry [45] for \(\textsf{UP}\) (as discussed in Sect. 2.2, we can view the Sahai-Waters SNARG as a witness PRF SNARG as well). In the second step, which we explain next, we generically convert any witness PRF for an \(\textsf{NP}\) language L into a SNARG for L. Recall that, intuitively, a witness PRF is a PRF \(f_k\) taking instances x as input, along with an evaluation key \(\textsf{ek}\) such that

$$ \textsf{Eval}(\textsf{ek}, x, w) = {\left\{ \begin{array}{ll} f_k(x) & \text {if}\,\, R(x, w) = 1\\ \bot & \text {otherwise.} \end{array}\right. } $$

Note that the evaluation algorithm also takes as input a witness w along with the instance x. In particular, \(\textsf{ek}\) hides \(f_k(x)\) for all \(x \notin L\).Footnote 4 Given a witness PRF for L, we construct a SNARG for L as follows: the common reference string is the evaluation key \(\textsf{ek}\); the prover on input (x, w) computes \(y = f_k(x)\) using \(\textsf{ek}\) and w and outputs y, and the designated verifier stores k and, on input \((x, \pi )\), accepts iff \(\pi = f_k(x)\). The (adaptive) security of the witness PRF implies the adaptive security of our SNARG; for a rough intuition, one can view the witness PRF-based argument as a strengthening of the Sahai-Waters punctured programming argument, where all \(x \notin L\) are punctured simultaneously. For more details on this transformation, see Sect. 2.2.

Above, the witness PRF evaluation key \(\textsf{ek}\) plays the role of an obfuscation of the original PRF; in particular, functionality allows computation of \(f_k(x)\) for \(x \in L\) given a witness w for x, but security guarantees that the values \(f_k(x)\) are hidden for \(x \notin L\). Indeed, we will construct it using a notion of obfuscation for (a certain class of) PRFs.

New Notion of Obfuscation. We first recall that a (read-once) matrix branching program (MBP) is a collection of matrices \(\big (\{\textbf{M}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textbf{u}, \textbf{v}\big )\) over some ring, which we will take to be \(\mathbb {Z}_q\) for prime q. Such a program computes the function which maps any input \(x \in \{0, 1\}^h\) to the value \(\textbf{u}^T \left( \prod _{i \in [h]} \textbf{M}_{i, x_i}\right) \textbf{v}\in \mathbb {Z}_q\). Our obfuscation \(\textsf{Obf}\) will satisfy the following guarantee. Suppose \(\{f_k\}_k\) is a PRF computable by (polynomial-sized) matrix branching programs that is highly secure; in particular, for random k, the truth table \(\{f_k(x)\}_{x \in \{0, 1\}^h}\) is indistinguishable from random. Moreover, this is true in the presence of some leakage \(\textsf{aux}(k)\) on the key k. Then

$$ \textsf{Obf}(f_k), \textsf{aux}(k) \approx _c \textsf{Obf}(f_{k'}), \textsf{aux}(k) \;, $$

where \(k'\) is a fresh random key chosen independently of k. We defer the discussion of how to achieve this guarantee from evasive LWE (and LWE) to the next subsection; in the remainder of this one, we describe how to use it to construct a witness PRF for \(\textsf{UP}\).
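To fix ideas, here is a minimal numpy sketch of evaluating such a read-once matrix branching program; the dimensions and modulus are arbitrary illustrative choices:

```python
import numpy as np

def eval_mbp(M, u, v, x, q):
    """Compute u^T (prod_i M[i][x_i]) v mod q for a read-once MBP on input bits x."""
    acc = u % q                          # running row vector: u^T times the partial product
    for i, bit in enumerate(x):
        acc = acc @ M[i][bit] % q
    return int(acc @ v % q)

q, h, w = 97, 4, 3                        # modulus, input length, matrix dimension
rng = np.random.default_rng(0)
M = [[rng.integers(0, q, (w, w)) for b in range(2)] for i in range(h)]
u, v = rng.integers(0, q, w), rng.integers(0, q, w)
print(eval_mbp(M, u, v, [0, 1, 1, 0], q))   # a value in Z_q
```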

Here is the idea. Fix a PRF family \(\{g_k\}_k\). Given a relation R, define another function family \(\{f_{k_1, k_2}\}_{k_1, k_2}\) via

$$\begin{aligned} f_{k_1, k_2}(x, w) &= {\left\{ \begin{array}{ll} g_{k_1}(x) & \text {if}\,\, R(x, w) = 1\\ g_{k_1}(x) + g_{k_2}(x, w) & \text {otherwise.} \end{array}\right. } \end{aligned}$$

Then, the witness PRF generation algorithm simply samples \(k = (k_1, k_2)\) and \(\textsf{ek}= \textsf{Obf}(f_{k_1, k_2})\); the underlying PRF is \(g_{k_1}(x)\), which, for \(x \in L\), can be computed given \(\textsf{ek}\) and the witness w for x.
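A toy Python sketch of this combination, with HMAC standing in for the PRFs \(g_{k_1}, g_{k_2}\) and XOR playing the role of addition (purely illustrative; the actual construction realizes f with matrix branching programs over \(\mathbb {Z}_q\)):

```python
import hashlib
import hmac
import os

def g(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def make_f(R, k1: bytes, k2: bytes):
    """f_{k1,k2}(x, w) = g_{k1}(x) if R(x, w) = 1, else g_{k1}(x) masked by g_{k2}(x, w)."""
    def f(x: bytes, w: bytes) -> bytes:
        gx = g(k1, x)
        if R(x, w):
            return gx
        mask = g(k2, x + w)
        return bytes(a ^ b for a, b in zip(gx, mask))
    return f

# ek = Obf(f_{k1,k2}); a valid witness for x lets anyone recover g_{k1}(x) from ek.
R = lambda x, w: hashlib.sha256(w).digest() == x      # an example relation
k1, k2 = os.urandom(32), os.urandom(32)
f = make_f(R, k1, k2)
w = b"the unique witness"
x = hashlib.sha256(w).digest()
assert f(x, w) == g(k1, x)          # a correct witness reveals the PRF value
assert f(x, b"wrong") != g(k1, x)   # any other input yields a masked value
```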

We would like to prove security by using the obfuscation guarantee to replace \(\textsf{Obf}(f_{k_1, k_2})\) with an independent value \(\textsf{Obf}(f_{k'_1, k'_2})\). This places constraints on our instantiation of f (and g). Namely, we need \(f_{k_1, k_2}\) to be computable with a (polynomial-sized, read-once) matrix branching program, and we need it to be a PRF. (Now it is clear why our construction only works for \(\textsf{UP}\); if some \(x \in L\) had two distinct witnesses \(w_1, w_2\), then f would not be a PRF, since one would have \(f_{k_1, k_2}(x, w_1) = g_{k_1}(x) = f_{k_1, k_2}(x, w_2)\).) But we need something even stronger; we need f to be a highly secure PRF even under any leakage the adversary can see in the witness PRF game. In particular, the witness PRF adversary can also request PRF values \(g_{k_1}(x)\) for \(x \notin L\), that are not present in the truth table of f. So we need f to be a (highly secure) PRF even in the presence of leakage \(\textsf{aux}(k_1, k_2) := \{g_{k_1}(x)\}_{x \notin L}\).

As described, these constraints are too strong. It is known that read-once branching programs cannot compute highly secure PRFs, even without leakage [16]. However, we have one more trick up our sleeve. Our obfuscation guarantee also holds for function families computable by matrix branching programs whose outputs are only pseudorandom after small errors are added to them. We will call such function families \(\sigma \)-matrix PRFs (\(\sigma > 0\) is the width of Gaussian errors needed to make the outputs pseudorandom). And with appropriate parameters, a random matrix branching program is a highly-secure \(\sigma \)-matrix PRF under (subexponential) LWE. (This is essentially the BLMR PRF [11], but without rounding.) We can fairly easily combine together two such programs and a matrix branching program computing a \(\textsf{UP}\) relation to produce a function family satisfying the desired “almost-PRF” guarantee. See Sect. 5 for more details. Next, we turn to a description of our obfuscation scheme from LWE and evasive LWE.

Constructing an obfuscation scheme for \(\sigma \)-Matrix PRFs. To construct our obfuscation scheme, we rely on the evasive LWE assumption, introduced by [39, 44]. We first describe the assumption and then use it to construct our obfuscation scheme. Fix some efficiently samplable distributions \((\textbf{S},\textbf{P},\textsf{aux})\) over \(\mathbb {Z}_q^{n' \times n} \times \mathbb {Z}_q^{n \times t} \times \{0, 1\}^*\). We want to be able to show statements of the form

$$\begin{aligned} ({\textbf{S}\textbf{B}+ \textbf{E}}, \textbf{B}^{-1}(\textbf{P}),\textsf{aux}) \approx _c ({\textbf{C}}, \textbf{B}^{-1}(\textbf{P}),\textsf{aux}) \end{aligned}$$

where \(\textbf{B}\leftarrow \mathbb {Z}_q^{n \times m},\textbf{C}\leftarrow \mathbb {Z}_q^{n' \times m}\) are uniformly random. (The reader should think of parameters \(t \ge m = \varOmega (n \log q)\) so that \(\textbf{P}\) is wider than \(\textbf{B}\).) There are two distinguishing strategies in the literature:

  • distinguish \(\textbf{S}\textbf{B}+\textbf{E}\) from \(\textbf{C}\) given \(\textsf{aux}\);

  • compute \((\textbf{S}\textbf{B}+ \textbf{E}) \cdot \textbf{B}^{-1}(\textbf{P}) = \textbf{S}\textbf{P}+ \textbf{E}\cdot \textbf{B}^{-1}(\textbf{P}) \approx \textbf{S}\textbf{P}\) and distinguish the latter from uniform, again given \(\textsf{aux}\).

The evasive LWE assumption essentially asserts that these are the only distinguishing attacks. Namely,

$$\begin{aligned} &{{\textbf {if}}}& ({\textbf{S}\textbf{B}+ \textbf{E}},{\textbf{S}\textbf{P}+ \textbf{E}'}, \textsf{aux}) &\approx _c ({\textbf{C}},{\textbf{C}'}, \textsf{aux}), \end{aligned}$$
(1)
$$\begin{aligned} &{{\textbf {then}}}& ({\textbf{S}\textbf{B}+ \textbf{E}}, \textbf{B}^{-1}(\textbf{P}),\textsf{aux}) &\approx _c ({\textbf{C}}, \textbf{B}^{-1}(\textbf{P}),\textsf{aux}) \end{aligned}$$
(2)

where \(\textbf{E}'\) is a fresh noise matrix of not-too-large magnitude relative to \(\textbf{E}\). We refer to Eq. (1) as the pre-condition, and Eq. (2) as the post-condition. We give a formal definition of Evasive LWE in Sect. 4.5.

We now use evasive LWE to construct our obfuscation for \(\sigma \)-matrix PRFs \(f_k(\textbf{x}) = \textbf{u}^T \textbf{M}_\textbf{x}\textbf{v}\), where \(k = \big (\{\textbf{M}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textbf{u}, \textbf{v}\big )\). As above, we will assume that \(f_k\) is read-once. As mentioned above, our construction closely follows the witness encryption construction of Vaikuntanathan, Wee and Wichs [40].

First, we sample matrices with small Gaussian entries \(\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\), and construct the following diagonal matrices

$$\widehat{\textbf{S}}_{i, b} = \begin{pmatrix} \textbf{M}_{i, b} & \\ & \textbf{S}_{i, b} \end{pmatrix}.$$

Additionally, set the “bookend vectors” \(\hat{\textbf{u}} = (\textbf{u}\mid \textbf{1}^n)\) and \(\hat{\textbf{v}} = (\textbf{v}\mid \textbf{0}^n)\), where \(\textbf{0}\) and \(\textbf{1}\) denote the all-zeros vector and the all-ones vector respectively. Note that

$$\hat{\textbf{u}}^T \widehat{\textbf{S}}_\textbf{x}\hat{\textbf{v}} = \textbf{u}^T \textbf{M}_\textbf{x}\textbf{v}= f_k(\textbf{x})~.$$
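A quick numpy check of this identity for random matrices (tiny, illustrative dimensions):

```python
import numpy as np

q, h, w, n = 101, 3, 2, 2
rng = np.random.default_rng(2)
M = [[rng.integers(0, q, (w, w)) for b in range(2)] for i in range(h)]
S = [[rng.integers(0, 3, (n, n)) for b in range(2)] for i in range(h)]   # "small" entries
u, v = rng.integers(0, q, w), rng.integers(0, q, w)

# Block-diagonal embedding and bookend vectors.
S_hat = [[np.block([[M[i][b], np.zeros((w, n), int)],
                    [np.zeros((n, w), int), S[i][b]]]) for b in range(2)] for i in range(h)]
u_hat = np.concatenate([u, np.ones(n, int)])
v_hat = np.concatenate([v, np.zeros(n, int)])

def chain(mats, x, left, right):
    acc = left % q
    for i, bit in enumerate(x):
        acc = acc @ mats[i][bit] % q
    return int(acc @ right % q)

x = [1, 0, 1]
assert chain(S_hat, x, u_hat, v_hat) == chain(M, x, u, v)   # the zero bookend kills the S block
```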

Using the encoding technique of Gentry, Gorbunov and Halevi [18] (often referred to as the GGH15 encoding), we encode our matrices as follows:

$$ \textbf{D}_{1, b} := \widetilde{\hat{\textbf{u}}^T \widehat{\textbf{S}}_{1, b} \textbf{A}_1}\;, \qquad \textbf{D}_{i, b} := \textbf{A}_{i-1}^{-1}\big (\widetilde{\widehat{\textbf{S}}_{i, b} \textbf{A}_i}\big ) \quad \text {for}\,\, 2 \le i \le h, $$

where \(\textbf{A}_h = \hat{\textbf{v}}\), \(\textbf{A}_i \leftarrow \mathbb {Z}_q^{(n+w) \times 4 (n + w) \log q}\), and \(\textbf{A}_i^{-1}(\cdot )\) denotes a random preimage with small entries. We use the notation \(\widetilde{\,\cdot \,}\) to indicate noised terms; for example, \(\widetilde{\textbf{A}}\) indicates \(\textbf{A}+\textbf{E}\) for some small noise term \(\textbf{E}\). By the correctness of the GGH15 encodings, we have that up to small additive error,

$$ \textbf{D}_\textbf{x}:= \prod _{i\in [h]} \textbf{D}_{i,x_i} \approx \hat{\textbf{u}}^T \widehat{\textbf{S}}_\textbf{x}\hat{\textbf{v}} = f_k(\textbf{x}) \;, $$

that is, the GGH encodings can be used to approximately compute the functionality of \(f_k\).

To prove security, we use LWE and evasive LWE to show that the GGH15 encodings look pseudorandom. For notational simplicity, below, each instance of \(\mathcal {U}\) will denote an independently sampled, uniformly random matrix over \(\mathbb {Z}_q\) of appropriate dimensions. For the first step of the argument, we set:

figure d

Note that the product \(\textbf{S}\textbf{P}\) is just a noisy version of the pseudorandom truth table, that is:

figure e

Thus, \(\textbf{S}\textbf{P}\) is pseudorandom in the presence of \(\textsf{aux}(k)\); it follows that the noised product \(\textbf{S}\textbf{P}+ \textbf{E}'\) is too. Additionally, writing \(\textbf{A}_{h - 1} = \begin{pmatrix} \overline{\textbf{A}}_{h - 1} \\ \underline{\textbf{A}}_{h - 1} \end{pmatrix}\), we have that

figure g

by invoking a BLMR-type argument [11] with public matrices \(\textbf{S}_{i, b}\) and secret matrix \(\underline{\textbf{A}}_{h-1}.\) Because the \(\textbf{S}_{i, b}\) matrices are treated as public, this indistinguishability also holds in the presence of , and because the \(\textbf{S}_{i, b}\) and \(\textbf{A}_{i}\) matrices are sampled independently of k, it holds even in the presence of \(\textsf{aux}(k)\). Therefore,

figure i

Therefore, invoking evasive LWE, we get the following as the post-condition:

figure j

That is, we have “peeled off” the last pair of matrices \(\{\textbf{D}_{h, b}\}_b\) while preserving pseudorandomness. Now we repeat. In the second step of the argument, we use evasive LWE to peel off \(\{\textbf{D}_{h-1, b}\}_b\), by setting

figure k

Continuing in this way, applying evasive LWE a total of \(h - 1\) times, we finally obtain that

$$\begin{aligned} (\{\textbf{D}_{1, b}\}_b, \{\textbf{D}_{i, b}\}_{i \ge 2, b}, \textsf{aux}(k)) &\approx _c (\mathcal {U}, \{\textbf{D}_{i, b}\}_{i \ge 2, b}, \textsf{aux}(k)). \end{aligned}$$

Now the random matrix \(\textbf{A}_1\) only appears in \(\{\textbf{D}_{2, b}\}_b\). Indeed, we have

$$ \textbf{D}_{2, b} = \textbf{A}_1^{-1}\big (\widetilde{\widehat{\textbf{S}}_{2, b} \textbf{A}_2}\big ) \;. $$

And, following [16], the random short preimages \(\textbf{A}_i^{-1}(\cdot )\) are sampled in such a way that for any matrix \(\textbf{Z}\) and uniformly random \(\textbf{A}_i\), the preimage \(\textbf{A}_i^{-1}(\textbf{Z})\) is indistinguishable from a discrete Gaussian sample \(\mathcal {D}_{\mathbb {Z}, \sigma }\) (independent of \(\textbf{Z}\)). Thus we can replace \(\{\textbf{D}_{2, b}\}_b\) with a fresh discrete Gaussian sample \(\mathcal {D}_{\mathbb {Z}, \sigma }\). But now \(\textbf{A}_2\) only appears in \(\{\textbf{D}_{3, b}\}_b\), so we can repeat the argument and replace \(\{\textbf{D}_{3, b}\}_b\) with \(\mathcal {D}_{\mathbb {Z}, \sigma }\). Continuing in this way, we obtain

$$\begin{aligned} (\{\textbf{D}_{1, b}\}_b, \{\textbf{D}_{i, b}\}_{i \ge 2, b}, \textsf{aux}(k)) \approx _c (\mathcal {U}, \{\mathcal {D}_{\mathbb {Z}, \sigma }\}_{i \ge 2}, \textsf{aux}(k)), \end{aligned}$$

completing the proof. For a formal description of the obfuscation scheme and the formal proof, we point the reader to Sect. 4.

On Heuristic Counterexamples to Evasive LWE. As shown in [40], the evasive LWE assumption is likely false for general auxiliary input \(\textsf{aux}\). This is due to a heuristic attack in which \(\textsf{aux}\) is an (ideal) obfuscation of a program which knows \(\textbf{P}\) along with a trapdoor \(\tau \) for \(\textbf{P}\), and on input \((\textbf{C}, \textbf{D})\) uses \(\tau \) to decide if \(\textbf{C}\cdot \textbf{D}\) is close to \(\textbf{S}' \cdot \textbf{P}\) for some \(\textbf{S}'\). (See [40, Section 8.2] for details.) Such an \(\textsf{aux}\) will break the post-condition of evasive LWE while leaving the pre-condition valid. While it is reasonable to conjecture the security of evasive LWE for essentially any “reasonable” \(\textsf{aux}\) [40], in our case, caution is warranted.

The auxiliary input we use to argue the security of GGH encodings of \(\sigma \)-matrix PRFs has two components. The first is the sequence of GGH-encoding matrices \(\{\textbf{D}_{i, b}\}_{i \ge j}\) arising from our inductive argument. The second is the auxiliary information \(\textsf{aux}(k)\) related to the PRF key k.

In our witness PRF construction, the second component \(\textsf{aux}(k)\) is a list of PRF values \(\{\textsf{PRF}_\textsf{fk}(\textbf{x})\}_{\textbf{x}\notin L}\) on inputs not in the language L, which seems quite benign. But the first component seems, at least superficially, quite similar to the auxiliary input used in the heuristic attack. It is a GGH encoding that can be viewed as the obfuscation of a certain program \(\varPi \), and it is generated using a trapdoor for a matrix related to \(\textbf{P}\). However, we believe the similarity is only superficial. While the trapdoor is used to generate the obfuscation of \(\varPi \), the functionality of \(\varPi \) is independent of the trapdoor for \(\textbf{P}\) (even of \(\textbf{P}\)). Indeed, evaluating the obfuscated program involves multiplying \(\textbf{S}\textbf{B}+ \textbf{E}\) by the first matrix that is part of the obfuscated program, simply yielding the matrix \(\textbf{S}\textbf{P}+ \textbf{E}'\) in the evasive LWE precondition. This is a key difference between our \(\textsf{aux}\) and the contrived auxiliary input described in [40], indeed, one that makes the existence of a similar attack unlikely.

2.2 Achieving Adaptive Security for Witness-PRF Based SNARGs

As discussed above, Gentry and Wichs [19] showed that it is not possible to construct an adaptively secure SNARG for all languages in \(\textsf{NP}\) through polynomial-time reductions to falsifiable assumptions, even in the designated verifier setting. In fact, this impossibility result holds for all languages which have subexponentially hard membership problems. Even \(\textsf{UP}\) languages such as decisional Diffie-Hellman (DDH) are believed to have such subexponentially hard membership problems.

One interpretation of the Gentry-Wichs barrier is that one has to turn to subexponential hardness assumptions to be able to argue adaptive soundness based on (subexponentially) falsifiable assumptions. Indeed, our SNARG for \(\textsf{UP}\) relies on subexponential LWE, in addition to evasive LWE. While the latter assumption is seemingly not falsifiable, our reduction reduces the soundness of the SNARG to the security of witness PRFs, which is in fact subexponentially falsifiable. Therefore, the main reason our SNARG can be shown to be adaptively sound is that we reduce to a subexponential hardness assumption.

It is then natural to ask more generally if existing SNARG constructions, in particular the Sahai-Waters SNARG, can be shown to be adaptively secure via similar reductions to subexponential security assumptions.

Complexity-Leveraging Does Not Work Generically. A general strategy to promote a non-adaptively secure scheme to an adaptively secure one is complexity leveraging [10]. In the case of SNARGs, the high-level idea is to guess the instance \(x^* \leftarrow \overline{L}\) that the adaptive cheating prover is going to cheat on, and proceed with the non-adaptive security reduction. Since the probability of guessing \(x^*\) correctly is \(1/2^{|x|}\), the resulting reduction incurs an exponential loss in advantage, thereby affecting the security parameter. This approach does work immediately, in a somewhat weak sense: by assuming subexponential hardness of the underlying assumptions, one can straightforwardly prove adaptive soundness. The difficult part, however, is ensuring that the resulting SNARG proofs remain succinct, since the proof size seems to have to scale with the security parameter. Indeed, naively applying complexity leveraging to the publicly verifiable Sahai-Waters SNARG [38] incurs a proof length of at least |x|.Footnote 5

Complexity-Leveraging on the Witness PRF. As we will see shortly, the transformation from witness PRF to \(\textsf{SNARG}\) preserves adaptivity in the following sense: if the underlying witness PRF is adaptively secure, then the resulting \(\textsf{SNARG}\) is adaptively sound. So, if one can transform a non-adaptive witness PRF into an adaptive one, one also transforms, for free, the non-adaptive SNARG based on the witness PRF into an adaptive SNARG. In particular, this would make the Sahai-Waters SNARG adaptively sound, since it can be viewed as a non-adaptive witness PRF (see Sect. 7 for details).

To this end, consider a non-adaptive witness PRF \(\textsf{wPRF}\) which is \(2^{\rho ^\alpha }\)-secure, where \(\alpha > 0\) and \(\rho \) is a security parameter.Footnote 6 That is, for all \(x^* \notin L\), against adversaries who are given \(\textsf{ek}\), \(\textsf{PRF}_{\textsf{sk}}(x^*)\) is indistinguishable from uniform except with advantage \(2^{-\rho ^\alpha }\). Now consider applying a standard complexity leveraging argument (shown visually in Fig. 2). Notice that if the adaptive adversary \(\mathcal {A}\) has advantage \(\epsilon \), then the resulting non-adaptive adversary \(\mathcal {A}'\) has advantage at least \(\epsilon /2^{|x|}\). Thus, choosing \(\rho = (|x| + \lambda )^{1/\alpha }\) is enough for adaptive security against adversaries running in time \(2^{\lambda }\).

Fig. 2. Complexity leveraging argument for witness PRFs. Here, \(\mathcal {C}\) is the non-adaptive witness PRF challenger, and \(\mathcal {A}'\) is a non-adaptive adversary for the witness PRF which uses an adaptive witness PRF adversary \(\mathcal {A}\) as a subroutine.

In particular, if one instantiates the Sahai-Waters SNARG with subexponential underlying assumptions, then one can obtain a subexponentially secure adaptive witness PRF from iO and one-way functions.

Adaptive Witness PRF to Adaptive SNARG. How can the transformation retain both adaptivity and succinctness? Our key observation is that we can have two security parameters: one for the witness PRF \(\textsf{wPRF}\) and one for the proof size of the designated-verifier SNARG. In particular, note that the transformation essentially turns the witness PRF indistinguishability (i.e. the problem of distinguishing \((\textsf{ek}, \textsf{wPRF}_\textsf{sk}(x^*))\) and \((\textsf{ek}, r)\)) into a search problem (i.e. the problem of finding \(\textsf{wPRF}_\textsf{sk}(x^*)\)). Recall that we constructed a subexponentially secure adaptive witness PRF in Fig. 2.

Let \(\ell \) be the proof size of the witness PRF to SNARG transformation. Suppose we want the SNARG for L to be \(2^{-\lambda }\) secure, and we have an adaptively secure witness PRF that is \(2^{\rho ^\alpha }\) secure. Consider the reduction in Fig. 3. Let \(W_\beta = \Pr [\mathcal {C}\,\, \text {accepts when}\,\, b = \beta ]\). By adaptive security of the witness PRF, we have \(|W_1 - W_0| \le 2^{-\rho ^\alpha }.\)

Fig. 3. Adaptively sound SNARG from adaptively secure witness PRF. For simplicity, we only illustrate non-reusable adaptive soundness. Here, \(\mathcal {C}\) is an adaptive witness PRF challenger, and \(\mathcal {P}^*\) is a cheating prover that produces proofs \((x^*, \pi )\) for \(x^* \notin L\).

Suppose \(\mathcal {P}^*\) is a cheating prover that succeeds with probability p. Note that if \(b = 1\), then \(\mathcal {A}\) succeeds whenever \(\mathcal {P}^*\) successfully produces a cheating proof, and hence \(W_1 = p\). If \(b = 0\), since \(y_0\) is completely hidden, the probability that \(\pi = y_0\) is at most \(2^{-\ell }.\) Therefore, by the \(2^{-\rho ^\alpha }\) security of the witness PRF, we have that

$$\begin{aligned} 2^{-\rho ^\alpha } \ge W_1 - W_0 \ge p - 2^{-\ell } = \texttt{Adv}[\mathcal {A}]. \end{aligned}$$

Choosing \(\rho \) such that \(\rho ^\alpha \ge 2\lambda \), we then have that \(\texttt{Adv}[\mathcal {A}]\le 2^{-2\lambda }\). Therefore, choosing the proof size \(\ell = \lambda + 1\), we have that \(p \le 2^{-2\lambda } + 2^{-(\lambda + 1)} \le 2^{-\lambda }\), as desired. Hence, we obtain a proof size of \(\lambda + 1\). Stepping back, note that we incur the cost of the subexponential security only in the evaluation key of the witness PRF (i.e. the \(\textsf{crs}\) of the SNARG scheme), rather than in the proof length of the SNARG scheme.
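A small numeric sanity check of this parameter setting (the concrete numbers are illustrative only):

```python
import math
from fractions import Fraction

lam, x_len, alpha = 128, 1024, 0.5
# rho^alpha must absorb both the 2^{|x|} guessing loss and the 2^{-2 lambda} target.
rho = math.ceil(max(x_len + lam, 2 * lam) ** (1 / alpha))
ell = lam + 1                                        # SNARG proof length in bits

adv_A = Fraction(1, 2 ** math.floor(rho ** alpha))   # at most 2^{-2 lambda}
p_bound = adv_A + Fraction(1, 2 ** ell)              # bound on the cheating probability
assert p_bound <= Fraction(1, 2 ** lam)
print(f"rho = {rho}, proof length = {ell} bits; only the CRS grows with rho")
```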

2.3 From SNARGs for \(\textsf{UP}\) to SNARKs for \(\textsf{UP}\)

In this section, we outline our transformation from a \(\textsf{SNARG}\) for \(\textsf{UP}\) to a \(\textsf{SNARK}\) for \(\textsf{UP}\) from Sect. 8. We largely follow the work of Campanelli et al. [14]. The main difference is that we transform a \(\textsf{zk}\textsf{SNARG}\) for \(\textsf{UP}\) (rather than \(\textsf{NP}\)) into a \(\textsf{zk}\textsf{SNARK}\) for \(\textsf{UP}\). See the full version [32] for the main differences between our transformation and theirs, and a description of some of the issues in their construction that we address. Our knowledge extractor relies on the fact that any \(x \in L\) has a unique witness w. Our construction will essentially extract a single bit of w at some index i which is hidden from the prover.

Fix a \(\textsf{UP}\) language L with relation \(R: \mathcal {X}\times \mathcal {W}\rightarrow \{0, 1\},\) where \(\mathcal {X}\) is the set of instances, and \(\mathcal {W}\) is the corresponding set of potential witnesses. Additionally, fix a fully homomorphic encryption scheme \(\textsf{FHE}\) and a public-key encryption scheme \(\textsf{PKE}\). We transform R into a new \(\textsf{UP}\) relation \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}\) as follows.

  • Sample a secret key \(\textsf{sk}_\textsf{FHE}\) and an evaluation key \(\textsf{ek}\) from \(\textsf{FHE}.\textsf{Gen}(1^\lambda ).\)

  • Sample a secret key \(\textsf{sk}_\textsf{PKE}\) and public key \(\textsf{pk}\) for some public key encryption scheme \(\textsf{PKE}\).

  • Pick an index \(i \leftarrow [|w|]\), and compute \(\textsf{ct}= \textsf{FHE}.\textsf{Enc}_\textsf{sk}(i).\)

  • Let \(C_w\) be the circuit that takes as input j and outputs the bit w[j].

Now, we define a new relation \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}: \mathcal {X}' \times \mathcal {W}' \rightarrow \{0, 1\}\) as follows:

$$ R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})} ((x, \rho ), (w, r)) = {\left\{ \begin{array}{ll} 1 & \text {if}\,\, R(x, w) = 1\\ & \textsf{PKE}.\textsf{Enc}_{pk}(\textsf{FHE}.\textsf{Eval}_\textsf{ek}(C_w, \textsf{ct}); r) = \rho \\ 0 & \text {otherwise.} \end{array}\right. } $$

Here, \(\textsf{FHE}.\textsf{Eval}_{\textsf{ek}}\) takes as input the circuit \(C_w\) and the ciphertext \(\textsf{ct}\) and homomorphically evaluates \(C_w\) on the ciphertext. Additionally, suppose that \(\textsf{PKE}\) has an injective encryption function, i.e. \(\textsf{PKE}.\textsf{Enc}_\textsf{pk}\) is injective as a function of the message and the randomness. Assuming that \(\textsf{FHE}.\textsf{Eval}_\textsf{ek}\) is a deterministic function and \(\textsf{PKE}\) is injective, one can argue that \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}\) is a \(\textsf{UP}\) relation. Moreover, given the secret keys \(\textsf{sk}_\textsf{FHE}\) and \(\textsf{sk}_\textsf{PKE}\), one can extract w[i] from the ciphertext \(\rho \).

SNARK Construction. We are now ready to outline our \(\textsf{SNARG}\) to \(\textsf{SNARK}\) construction (for a more detailed construction, see Sect. 8). Let \(\varPi \) be a subexponentially sound \(\textsf{SNARG}\) system for \(\textsf{UP}\) languages.

  • \(\textsf{SNARK}.\textsf{Gen}(1^\lambda ):\)

    • Sample \(({\textsf{sk}_\textsf{FHE}}, {\textsf{ek}})\) for FHE, and \((\textsf{sk}_\textsf{PKE}, \textsf{pk})\) for PKE, and a ciphertext \(\textsf{ct}= \textsf{FHE}.\textsf{Enc}_{\textsf{sk}}(i)\), as described earlier.

    • Compute \((\textsf{crs}, \tau ) \leftarrow \varPi .\textsf{Gen}(1^\lambda )\), for the relation \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}\).

    • Set the new common reference string \(\textsf{crs}' = (\textsf{crs}, {\textsf{ek}}, \textsf{pk}, \textsf{ct}),\) and verifier state \(\tau \).

    • Output \((\textsf{crs}', \tau ).\)

  • \(\textsf{SNARK}.\textsf{Prove}(\textsf{crs}', x, w):\)

    • Sample randomness r, and compute \(\rho = \textsf{PKE}.\textsf{Enc}_{pk}(\textsf{FHE}.\textsf{Eval}_\textsf{ek}(C_w, \textsf{ct}); r),\) i.e. homomorphically compute the circuit \(C_w\) on the FHE ciphertext \(\textsf{ct}\) (this will result in an encryption of w[i] under \(\textsf{sk}_\textsf{FHE}\)), and encrypt under the public key \(\textsf{pk}\).

    • Compute \(\pi \leftarrow \varPi .\textsf{Prove}(\textsf{crs}, (x, \rho ), (w, r)).\)

    • Output \(\rho , \pi \) as the proof for x.

  • \(\textsf{SNARK}.\textsf{Verify}(\tau , x, (\rho , \pi )):\) Accept iff \(\varPi .\textsf{Verify}(\tau , (x, \rho ), \pi )\) accepts, i.e. treat \((x, \rho )\) as the statement under the relation \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}\), and \(\pi \) as the \(\textsf{SNARG}\) proof.

We sketch the proofs of knowledge extractability and zero-knowledge.

Knowledge Soundness. First, we sketch why the above construction is knowledge-sound, i.e. one can extract a witness for x by repeatedly querying a non-adaptive \(\mathcal {P}^*\) on various common reference strings (see Definition 4 for a more detailed definition). Suppose \(\mathcal {P}^*\) succeeds in creating proofs for x with probability \(\epsilon .\) Since the language is \(\textsf{UP}\), there is a unique witness w for x. We crucially rely on the uniqueness of w.

By the subexponential soundness of \(\varPi \), with probability approximately \(\epsilon \), a pair \((x, \rho ), \pi \) created under \(\textsf{crs}' = (\textsf{crs}, {\textsf{ek}}, \textsf{pk}, \textsf{ct})\) is accepted only if \((x, \rho )\) has a witness under \(R^{(\textsf{ek}, \textsf{pk}, \textsf{ct})}\) (here, we complexity leverage over the choice of \(\rho \), since the adversary only non-adaptively chooses x in the SNARK construction). Moreover, given the trapdoor \(\textsf{td}= (\textsf{sk}_\textsf{FHE}, \textsf{sk}_\textsf{PKE})\), one can decrypt \(\rho \) to obtain \(C_w(i) = w[i].\) By repeating this experiment many times, one should eventually collect all bits of the unique witness w. Note that since the index i is encrypted, one can argue that the probability of extracting each w[i] is approximately \(\epsilon /|w|\) (up to \(\textsf{negl}(\lambda )\) terms) by the IND-CPA security of the FHE scheme. Therefore, by repeating the experiment many times, one can extract all the bits of w.
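The extraction loop can be summarized by the following schematic Python sketch; the `setup_crs`, `decrypt_bit`, and prover interfaces are placeholders of ours, not the paper's actual algorithms:

```python
import random

def extract_witness(prover, witness_len, setup_crs, decrypt_bit, attempts=100_000):
    """Recover the unique UP witness bit by bit by re-running the prover on fresh CRSs.

    prover(crs) -> (x, rho, pi), or None on failure;
    setup_crs(i) -> (crs, td), where the CRS embeds FHE.Enc(i) and td = (sk_FHE, sk_PKE);
    decrypt_bit(td, rho) -> the bit C_w(i) = w[i] hidden inside an accepting rho.
    """
    bits = [None] * witness_len
    for _ in range(attempts):
        i = random.randrange(witness_len)        # the hidden index encrypted in the CRS
        crs, td = setup_crs(i)
        result = prover(crs)
        if result is None:
            continue                             # the prover failed on this CRS; resample
        _x, rho, _pi = result
        bits[i] = decrypt_bit(td, rho)           # an accepting proof forces rho to encrypt w[i]
        if all(b is not None for b in bits):
            return bits
    return None                                  # not enough successful runs
```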

Here, we crucially rely on the adaptive soundness of \(\varPi \) because even an honest prover can only create an instance for \(\varPi \) after \(\textsf{crs}'\) is generated and the parameters \((\textsf{pk}, \textsf{ek}, \textsf{ct})\) are publicly available.

Zero-Knowledge. To argue that the transformation preserves zero-knowledge, we rely on the public-key encryption scheme. Recall that if \(\textsf{FHE}.\textsf{Eval}_\textsf{ek}\) is deterministic, it may not be circuit-private, and might leak information about the underlying circuit. Therefore, the additional layer of public-key encryption allows one to invoke IND-CPA security to simulate the ciphertext \(\rho \) without the witness. Then, \(\pi \) can be simulated from \((x, \rho )\) alone using the zero-knowledge simulator for the underlying zero-knowledge SNARG scheme \(\varPi .\)

3 Preliminaries

3.1 LWE Assumption

Given \(n, m, q \in \mathbb {N}\) and \(\sigma , \delta > 0\), the subexponential LWE assumption \(\textsf{LWE}^{\delta }_{n, m, q, \sigma }\) asserts that \((\textbf{A}, \textbf{s}\textbf{A}+ \textbf{e}) \approx _c (\textbf{A}, \textbf{b}),\) with security parameter \(\mu = 2^{n^\delta }\), where \(\textbf{s}\leftarrow \mathcal {U}(\mathbb {Z}_q^n), \textbf{A}\leftarrow \mathcal {U}(\mathbb {Z}_q^{n \times m})\), \(\textbf{e}\leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^m\), and \(\textbf{b}\leftarrow \mathbb {Z}_q^m\).

Following [40], we rely on the above assumption holding for some \(\delta > 0\), for parameters such that \(q / \sigma \le 2^{n^\delta }\).
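For concreteness, a minimal numpy sketch of sampling one instance of this assumption (toy parameters, with rounded Gaussian noise):

```python
import numpy as np

n, m, q, sigma = 32, 64, 2**16 + 1, 3.2
rng = np.random.default_rng(3)

A = rng.integers(0, q, (n, m))
s = rng.integers(0, q, n)
e = np.rint(rng.normal(0, sigma, m)).astype(np.int64)

lwe_sample = (A, (s @ A + e) % q)              # the "real" distribution (A, sA + e)
uniform_sample = (A, rng.integers(0, q, m))    # the "ideal" distribution (A, b)
```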

3.2 Security Parameters

In this paper, \(\lambda \) denotes the main security parameter. We will sometimes use \(\mu \) as a second, related security parameter. In particular, we will assume throughout that parameters \(n = n(\lambda ), h = h(\lambda ) \le \textsf{poly}(\lambda )\) and \(q = q(\lambda )\) satisfy \(h \le n^{\delta /20}\) and \(q = 2^{n^\delta }\), and we will set \(\mu = \mu (\lambda ) := 2^{n^\delta }\). For a function \(f = f(\lambda )\), we will say \(\mathcal {D}_1 \approx _c \mathcal {D}_2\) with security parameter f to mean that all non-uniform probabilistic \(\textsf{poly}(f)\)-time distinguishers have advantage at most \(\textsf{negl}(f)\).

3.3 Trapdoor and Pre-image Sampling

Given a matrix \(\textbf{A}\in \mathbb {Z}_q^{n \times m}\) for \(m \ge 2 n \log q\), a vector \(\textbf{y}\in \mathbb {Z}_q^n\), and \(\sigma > 0\), we use \(\textbf{A}^{-1}(\textbf{y}, \sigma )\) to denote the distribution of a vector \(\textbf{d}\) sampled from \(\mathcal {D}_{\mathbb {Z}, \sigma }^m\) conditioned on \(\textbf{A}\textbf{d}= \textbf{y}\pmod {q}\). (Vectors satisfying the condition exist except with probability \(\textsf{negl}(\mu )\).) We extend this notation to matrices \(\textbf{Y}\in \mathbb {Z}_q^{n \times k}\) in the natural way (i.e., columnwise). We sometimes suppress \(\sigma \) when it is clear from context.

Lemma 1

([16, Lemma 3.10], originally [1, 2, 34]). Assume \(\textsf{LWE}_{n, m, q, \sigma }^\delta \). There is a PPT algorithm \(\textsf{TrapSam}(1^n, 1^m, q)\) that, on input the modulus \(q \ge 2\) and dimensions nm such that \(m \ge 2n \log q\), outputs a matrix \(\textbf{A}\) such that \(\textbf{A}\approx _s \mathcal {U}(\mathbb {Z}_q^{n \times m})\) with security parameter \(\mu \), along with a trapdoor \(\tau \) (referenced in the next lemmas).

The following lemma adapts the BLMR PRF [11, Theorem 5.1] to the setting of subexponential LWE.

Lemma 2

Let \(n, n', q, h \in \mathbb {N}\) and \(f, k, \sigma , \sigma ' \in \mathbb {R}\) be functions of \(\lambda \) satisfying

  • \(\lambda = 2^{(n')^\delta }\)

  • \(\lceil (2 \log q) \cdot n' \rceil \le n \le \textsf{poly}(n')\),

  • \(\sigma ' \ge k \cdot (n^2 \sigma )^{h + 1}\),

  • \(2^{-n^\delta } \le 1/k(n) \le \textsf{negl}(2^h \cdot n \cdot f(n))\) .

Let

$$ \textbf{a}\leftarrow \mathbb {Z}_q^{n}, \{\textbf{e}_\textbf{x}\leftarrow \mathcal {D}_{\mathbb {Z}, \sigma '}\}_{x \in \{0, 1\}^h}. $$

Then, assuming \(\textsf{LWE}^{\delta }_{n', \textsf{poly}(n'), q, \sigma }\),

$$ \{\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\}_{{\begin{array}{l}i \in [h] \\ b \in \{0, 1\}\end{array}}}, \left\{ \left( \prod _{i = 1}^h \textbf{S}_{i, \textbf{x}_i}\right) \cdot \textbf{a}+ \textbf{e}_\textbf{x}\right\} _{\textbf{x}\in \{0, 1\}^h} \approx _c \{\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\}_{{\begin{array}{l}i \in [h] \\ b \in \{0, 1\}\end{array}}}, \{\mathcal {U}\}_{\textbf{x}\in \{0, 1\}^h} \;. $$

where all \(\textsf{poly}(\mu )\)-time distinguishers have advantage at most \(\textsf{negl}(f(n))\). In particular, one can take \(q = 2^{n^\delta }\) (to rely on subexponential LWE with \(q / \sigma \le 2^{n^\delta }\)), \(h = n^c\) for some \(c < \delta /20\), \(k = 2^{h^3 \lambda }\), and \(f(n) = 2^{-h^2\lambda }\).
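As an illustration of the map in Lemma 2 (with toy parameters that are of course far below those required by the lemma), the unrounded BLMR-style computation looks as follows in numpy:

```python
import numpy as np

q, n, h, sigma, sigma_prime = 2**16 + 1, 8, 4, 3.0, 50.0
rng = np.random.default_rng(1)

# Key material: 2h small Gaussian matrices S_{i,b} and a public vector a.
S = [[np.rint(rng.normal(0, sigma, (n, n))).astype(np.int64) % q for b in range(2)]
     for i in range(h)]
a = rng.integers(0, q, n)

def blmr(x):
    """(prod_i S_{i,x_i}) . a mod q; Lemma 2 adds fresh noise e_x before publishing."""
    acc = np.eye(n, dtype=np.int64)
    for i, bit in enumerate(x):
        acc = acc @ S[i][bit] % q
    return acc @ a % q

noised = (blmr([1, 0, 1, 1]) + np.rint(rng.normal(0, sigma_prime, n)).astype(np.int64)) % q
```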

Finally, we present a “rounded” version of Lemma 2.

Lemma 3

With parameters as in Lemma 2, let \(p \in \mathbb {N}\) be such that \(k \cdot 2^h \cdot (2 \sigma ' \sqrt{n} + 1) \le p \le q / k\). (Setting \(p = 2^{h^7 \lambda }\) along with the other “in particular” parameters of Lemma 2 satisfies this.) Then, assuming \(\textsf{LWE}^{\delta }_{n', \textsf{poly}(n'), q, \sigma }\),

$$ \{\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\}_{{\begin{array}{l}i \in [h] \\ b \in \{0, 1\}\end{array}}}, \left\{ \left\lfloor \left( \prod _{i = 1}^h \textbf{S}_{i, \textbf{x}_i}\right) \cdot \textbf{a}\right\rceil _p \right\} _{\textbf{x}\in \{0, 1\}^h} \approx _c \{\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\}_{{\begin{array}{l}i \in [h] \\ b \in \{0, 1\}\end{array}}}, \{\mathcal {U}\}_{\textbf{x}\in \{0, 1\}^h} \; $$

where all \(\textsf{poly}(\mu )\)-time distinguishers have advantage at most \(\textsf{negl}(f(n))\).

3.4 SNARGs, SNARKs and NIZKs

A designated verifier non-interactive argument system (Definition 1) \(\varPi \) consists of a triple of efficient algorithms \(\varPi = (\textsf{Gen}, \textsf{Prove}, \textsf{Verify}):\)

  • \(\varPi .\textsf{Gen}(1^\lambda , R):\) Probabilistic algorithm that outputs a common reference string \(\textsf{crs}\) and a private state \(\tau \) for the verifier.

  • \(\varPi .\textsf{Prove}(\textsf{crs}, x, w):\) Given the common reference string \(\textsf{crs}\), a statement x and a witness w, outputs a proof \(\pi .\)

  • \(\varPi .\textsf{Verify}(\tau , x, \pi ):\) Takes as input the private state \(\tau \), a statement x, and proof \(\pi \) and outputs either 0 or 1. An argument system is publicly verifiable if \(\tau = \textsf{crs}\) or \(\tau \subseteq \textsf{crs}.\)

We say that such an argument is also succinct (Definition 2) if the proof size \(|\pi |\) is small. We say that such a system is also non-interactive zero-knowledge if it satisfies the zero-knowledge condition described in Definition 3.

Remark 1

In the definitions below, we use the notation \(\Pr [E:\mathcal {D}]\) to denote the probability of event E over the distribution of \(\mathcal {D}\) (sometimes notated as \(\Pr _\mathcal {D}[E]\)). We denote conditional probability of event E conditioned on event C as \(\Pr [E\mid C].\)

Definition 1

(Non-Interactive Argument). We say that

\(\varPi = (\textsf{Gen}, \textsf{Prove}, \textsf{Verify})\) is a designated verifier non-interactive argument for a language \(L \in \textsf{NP}\) if L has an NP relation R such that \(\varPi \) satisfies the following three properties:

  • Completeness: For all xw such that \(R(x, w) = 1,\)

    $$ \Pr \left[ \textsf{Verify}(\tau , x, \pi ) = 0 : \begin{array}{c} (\textsf{crs}, \tau ) \leftarrow \textsf{Gen}(1^\lambda , R) \\ \pi \leftarrow \textsf{Prove}(\textsf{crs}, x, w) \end{array} \right] = \textsf{negl}(\lambda ). $$
  • Adaptive soundness: For all p.p.t. algorithms \(\overline{\mathcal {P}}\),

    $$ \Pr \left[ \begin{array}{c} \textsf{Verify}(\tau , x, \pi ) = 1 \\ \wedge \ x \notin L \end{array} :\begin{array}{c} (\textsf{crs}, \tau ) \leftarrow \textsf{Gen}(1^\lambda , R) \\ (x, \pi ) \leftarrow \overline{\mathcal {P}}(1^\lambda , \textsf{crs}) \end{array}\right] = \textsf{negl}(\lambda ). $$

    We additionally say that \(\varPi \) is reusable if soundness holds even against malicious provers \(\overline{\mathcal {P}}\) with oracle access to \(\textsf{Verify}(\tau , \cdot , \cdot ).\)

  • Non-adaptive soundness: For every \(x \notin L\), and every p.p.t. adversary \(\overline{\mathcal {P}},\)

    $$ \Pr \left[ \begin{array}{c} \textsf{Verify}(\tau , x, \pi ) = 1 \end{array} : \begin{array}{c} (\textsf{crs}, \tau ) \leftarrow \textsf{Gen}(1^\lambda , R) \\ \pi \leftarrow \overline{\mathcal {P}}(1^\lambda , \textsf{crs}) \end{array}\right] = \textsf{negl}(\lambda ). $$

The argument \(\varPi \) is publicly verifiable if \(\tau = \textsf{crs}\) or \(\tau \subseteq \textsf{crs}.\)

Definition 2

( \(\textsf{SNARG}\) ). We say that \(\varPi = (\textsf{Gen}, \textsf{Prove}, \textsf{Verify})\) is a (designated verifier) succinct non-interactive argument \((\textsf{SNARG})\) for a language \(L \in \textsf{NP}\) if \(\varPi \) is a (designated verifier) non-interactive argument with the following additional succinctness condition:

  • Succinctness: For proofs \(\pi \leftarrow \textsf{Prove}(\textsf{crs}, x, w)\) where \(R(x, w) = 1\), the proof length \(|\pi |\) is at most \(\textsf{poly}(\lambda ) \cdot \textsf{polylog}(|x| + |w|)\).

Definition 3

( \(\textsf{NIZK}\) ). We say that \(\varPi = (\textsf{Gen}, \textsf{Prove}, \textsf{Verify})\) is a non-interactive zero-knowledge argument \((\textsf{NIZK})\) for a language \(L \in \textsf{NP}\) if \(\varPi \) is a non-interactive argument with one of the following zero knowledge properties.

  • Adaptive multi-theorem zero-knowledge: There exists a p.p.t. simulator \(\mathcal {S}= (\mathcal {S}_1, \mathcal {S}_2)\) that satisfies the following: For all (stateful) p.p.t. adversaries \(\mathcal {A}\), we have that for experiments \(\texttt{EXP}_\mathcal {A}^\texttt{Real}(1^\lambda )\) and \(\texttt{EXP}_\mathcal {A}^\texttt{Ideal}(1^\lambda )\) as in Experiments 1 and 2,

    $$ \left| \Pr [\texttt{EXP}_\mathcal {A}^\texttt{Real}(1^\lambda ) = 1] - \Pr [\texttt{EXP}_\mathcal {A}^\texttt{Ideal}(1^\lambda ) = 1]\right| = \textsf{negl}(\lambda ) $$

    if \(\mathcal {A}\) is limited to querying only pairs \((x, w)\) such that \(R(x, w) = 1.\)

    Additionally, we say \(\varPi \) is statistically adaptively multi-theorem zero-knowledge if the above indistinguishability holds even for all stateful unbounded adversaries \(\mathcal {A}\) making \(\textsf{poly}(\lambda )\) queries.

  • Adaptive single-theorem zero-knowledge: This is defined similarly to adaptive multi-theorem zero-knowledge, except that \(\mathcal {A}\) is allowed to make only a single query (Fig. 4).

Fig. 4. Real and ideal experiments for zero-knowledge in Definition 3.

We follow the definition of black-box knowledge soundness of Campanelli et al. [14].

Definition 4

(Black-box knowledge soundness). A designated verifier non-interactive argument system as in Definition 1 is non-adaptive black-box \(\epsilon (\lambda )\)-knowledge sound for a relation R if there exists a non-uniform PPT extractor \(\textsf{Ext}\) such that for any non-uniform PPT prover \(\mathcal {P}= (\mathcal {P}_{inp}, \mathcal {P}_{chall}),\)

$$ \Pr \left[ \begin{array}{c} \textsf{Verify}(\tau , x, \pi ) = 1 \\ \wedge \ R(x, w) \ne 1 \end{array} : \begin{array}{c} (x, \textsf{st}) \leftarrow \mathcal {P}_{inp}(1^\lambda )\\ (\textsf{crs}, \tau ) \leftarrow \textsf{Gen}(1^\lambda , R)\\ \pi \leftarrow \mathcal {P}_{chall}(\textsf{st}, \textsf{crs})\\ w \leftarrow \textsf{Ext}^{\mathcal {P}_{chall}(\textsf{st}, \cdot )}(\textsf{crs}, \tau , x, \pi ) \end{array} \right] \le \epsilon (\lambda ). $$

We say that the argument system is non-adaptively black-box knowledge sound if \(\epsilon (\lambda ) = \textsf{negl}(\lambda ).\)

Intuitively, the prover selects the instance x on which she generates a proof, and the extractor is permitted to query the prover on many possible \(\textsf{crs}\) values (possibly with corresponding trapdoors) to reconstruct a witness for x.
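To make the extraction interface concrete, the following is a minimal sketch of one run of this game (Python; all function names here are ours and are abstract stand-ins, not part of the paper). The extractor receives the transcript and black-box access to the committed prover, which it may re-run on \(\textsf{crs}\) values of its choice.

```python
def knowledge_game(P_inp, P_chall, gen, verify, extractor, R, lam):
    """One run of the non-adaptive black-box knowledge-soundness game."""
    x, st = P_inp(lam)                      # prover commits to the instance x
    crs, tau = gen(lam, R)
    pi = P_chall(st, crs)                   # proof for the committed instance
    # Black-box access: the extractor may rerun the committed prover on
    # freshly generated crs values (with trapdoors it samples itself).
    prover_oracle = lambda crs2: P_chall(st, crs2)
    w = extractor(prover_oracle, crs, tau, x, pi)
    # The "bad" event of Definition 4: the proof verifies but w is not a witness.
    return verify(tau, x, pi) == 1 and R(x, w) != 1
```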

Definition 5

(Non-Adaptive \(\textsf{SNARK}\) ). A (designated verifier) \(\textsf{SNARG}\) system (Definition 2) is also a non-adaptive succinct non-interactive argument of knowledge \((\textsf{SNARK} )\) if it is non-adaptively black-box knowledge sound, as in Definition 4.

4 Obfuscating Matrix PRFs with Noise (\(\sigma \)-Matrix PRFs)

In this section, we define a weakening of the notion of a pseudorandom function (PRF) in which the adversary sees the outputs only after independent noise is added to each output. We then show that, if such a \(\sigma \)-PRF has sufficient security and can be computed by a matrix branching program, it can be meaningfully obfuscated using evasive LWE.

4.1 Matrix Branching Programs and Tools

We will work with matrix branching programs (MBPs) that compute functions \(f: \{0,1\}^{\ell }\rightarrow \mathbb {Z}_q\) for some prime q. In this paper we consider MBPs specified by a collection of matrices \(\big ( \textbf{M}_{i,b}: i \in [h := c \cdot \ell ], b \in \{0,1\} \big )\) and two vectors \(\textbf{u}, \textbf{v}\) (all over some ring \(\mathcal {R}\), which, for us, will always be \(\mathbb {Z}_q\) for a prime q). We say that the MBP computes the function f which maps each input \(\textbf{x}\in \{0,1\}^{\ell }\) to the value \(f(\textbf{x}) \in \mathbb {Z}_q\) given by

$$ \textbf{u}^T \left( \prod _{i=1}^{h} \textbf{M}_{i,x_{j_i}} \right) \textbf{v}\;, $$

where \(j_i := (i - 1) \bmod \ell + 1\). Or, more explicitly, \(f(\textbf{x})\) is given by

$$ \textbf{u}^T \big ( (\textbf{M}_{1, x_1} \cdot \; \ldots \; \cdot \textbf{M}_{\ell , x_\ell }) \cdot (\textbf{M}_{\ell + 1, x_1} \cdot \; \ldots \; \cdot \textbf{M}_{2 \ell , x_\ell }) \cdot \ldots \cdot (\textbf{M}_{(c - 1) \ell + 1, x_1} \cdot \; \ldots \; \cdot \textbf{M}_{c \ell , x_\ell }) \big ) \textbf{v}~. $$

Such MBPs are called read-c MBPs. When \(c = 1\), we say the MBP is read-once.
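To make the evaluation rule concrete, here is a minimal sketch of evaluating a read-c MBP over \(\mathbb {Z}_q\) (Python with numpy; the function name and data layout are ours, for illustration only).

```python
import numpy as np

def eval_mbp(u, M, v, x, c, q):
    """Evaluate the read-c MBP (u, {M[i][b]}, v) over Z_q on input bits x.

    M is a list of h = c * len(x) pairs of matrices; the (0-indexed) step i
    reads input bit x[i % len(x)], matching j_i = (i - 1) mod ell + 1.
    """
    ell = len(x)
    acc = np.asarray(u).T % q                  # row vector u^T
    for i in range(c * ell):
        acc = acc @ M[i][x[i % ell]] % q       # multiply by M_{i, x_{j_i}}
    return int(acc @ np.asarray(v) % q)        # the scalar f(x) in Z_q
```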

The following classical result shows that any function computable by logarithmic-depth Boolean circuits can be represented by a matrix branching program.

Theorem 2

(Barrington’s Theorem [3]). If \(f : \{0, 1\}^n \rightarrow \{0, 1\}\) can be computed by a circuit of depth d, then it can be computed by a matrix branching program

$$ \{\textbf{M}_{i,b}: i \in [h := c \cdot n], b \in \{0,1\}\}, \textbf{u}, \textbf{v}$$

where \(h = O(4^d)\), and all matrices \(\textbf{M}_{i, b} \in \{0, 1\}^{5 \times 5}\) are permutation matrices.

4.2 \(\sigma \)-Matrix PRFs

Our arguments will crucially rely on the following relaxation of the notion of a pseudorandom function with outputs in \(\mathbb {Z}_q\). Informally, it is pseudorandom against adversaries who are only permitted to observe the output values after independent Gaussian noise has been added to each value.

Definition 6

Let \(q = q(\lambda ) \in \mathbb {N}\) and \(\sigma = \sigma (\lambda ) > 0\). A family of deterministic functions \(\mathcal {F}:= \{f_k : \mathcal {X}_\lambda \rightarrow \mathbb {Z}_{q(\lambda )}\}\) is called \(\sigma \)-pseudorandom if for all PPT adversaries \(\mathcal {A}\),

$$ \left| \Pr _{k, \mathcal {A}, O'}[\mathcal {A}^{O'(\cdot )}(1^\lambda ) = 1] - \Pr _{\mathcal {A}, O}[\mathcal {A}^{O(\cdot )}(1^\lambda ) = 1] \right| \le \textsf{negl}(\lambda ) \;, $$

where \(O: \mathcal {X}_\lambda \rightarrow \mathbb {Z}_q\) is a uniformly random function, and the function \(O'\) is chosen by sampling discrete Gaussian errors \(\{e_\textbf{x}\leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }\}_{\textbf{x}\in \mathcal {X}_\lambda }\); on input \(\textbf{x}\in \mathcal {X}_\lambda \), \(O'\) outputs \(O'(\textbf{x}) = f_k(\textbf{x}) + e_\textbf{x}\).

Given a PPT-computable function \(\textsf{aux}\), we say that \(\mathcal {F}\) is additionally \(\sigma \)-pseudorandom in the presence of \(\textsf{aux}\) if Definition 6 holds even when the adversary is additionally given \(\textsf{aux}(k)\) as input. As is typical, we will abbreviate "efficiently computable pseudorandom function family" as "PRF"; similarly, we will abbreviate "efficiently computable \(\sigma \)-pseudorandom function family" as "\(\sigma \)-PRF".
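For intuition, the noisy oracle \(O'\) of Definition 6 can be sketched as follows (Python; the rounded continuous Gaussian is only a crude stand-in for a sampler for \(\mathcal {D}_{\mathbb {Z}, \sigma }\), and f_k stands for any concrete candidate).

```python
import numpy as np

rng = np.random.default_rng()

def rounded_gaussian(sigma):
    # Stand-in sampler: a rounded continuous Gaussian, not a faithful
    # discrete Gaussian sampler; sufficient only for illustration.
    return int(np.rint(rng.normal(0.0, sigma)))

def make_noisy_oracle(f_k, q, sigma):
    """Return O' with O'(x) = f_k(x) + e_x mod q, sampling e_x once per x.

    Inputs x should be hashable, e.g. tuples of bits.
    """
    errors = {}
    def oracle(x):
        if x not in errors:
            errors[x] = rounded_gaussian(sigma)
        return (f_k(x) + errors[x]) % q
    return oracle
```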

The \(\sigma \)-PRFs in this paper will have \(\mathcal {X}_\lambda = \{0, 1\}^h\), where \(h = h(\lambda ) \le \textsf{poly}(\lambda )\), and will be pseudorandom even against adversaries running in time \(\textsf{poly}(\mu )\), where \(\mu = 2^{h^2 \lambda } \gg 2^h\). Notice that such adversaries have the ability to write down the entire truth table of the \(\sigma \)-PRF (but with Gaussian errors added) and perform arbitrary polynomial-time computations on it.

We will call a \(\sigma \)-PRF computable by a \(\textsf{poly}(\lambda )\)-sized MBP a \(\sigma \)-matrix PRF.

In this section, we show how to obfuscate \(\sigma \)-matrix PRFs with sufficient security, using evasive LWE. Our main obfuscation construction (Algorithm 3) will use Gentry, Gorbunov and Halevi encodings of the more modern type introduced in their 2015 work [18], or GGH encodings for short. Before we introduce GGH encodings and give our main construction, we need one additional tool, which we provide in the next subsection.

4.3 Transforming Read-c PRFs into Read-Once PRFs

Notice that for general read-c \(\sigma \)-matrix PRFs, it is not the case that all noisy products \(\{\textbf{u}^T \textbf{M}_\textbf{x}\textbf{v}+ e_\textbf{x}\}_{\textbf{x}\in \{0, 1\}^h}\) are pseudorandom: the \(\sigma \)-PRF guarantee only requires the noisy products corresponding to inputs \(\textbf{x}' \in \{0, 1\}^\ell \) (i.e., with \(\textbf{x}= \textbf{x}' \mid \textbf{x}' \mid \dots \mid \textbf{x}'\), where \(\mid \) denotes concatenation) to be pseudorandom. However, our proof techniques will require all products to be pseudorandom.

To fix this, we construct a generic transformation that modifies a read-c \(\sigma \)-matrix PRF so that all its products are pseudorandom, without losing functionality. We defer this transformation to the full version of the paper.

4.4 GGH Encodings

To construct our obfuscation scheme, we rely on the machinery of Gentry, Gorbunov, and Halevi [18], which we hereafter refer to as GGH encodings. (This definition is closely related to the definitions given in [16, 40].)

Construction 1

Given as input matrices \(\{\textbf{M}_{i, b} \in \mathbb {Z}_q^{n_{i - 1} \times n_i}\}_{i \in [h], b \in \{0, 1\}}\),Footnote 7 the randomized algorithm \(\textsf{ggh}.\textsf{encode}\) outputs

$$\begin{aligned} &\{ \textbf{D}_{1, b} := \textbf{M}_{1, b} \textbf{A}_1 + \textbf{E}_{1, b} \}_{b \in \{0, 1\}}, \\ &\{ \textbf{D}_{i, b} := \textbf{A}_{i-1}^{-1}\left( \textbf{M}_{i, b} \textbf{A}_i + \textbf{E}_{i, b}, \sigma \right) \}_{2 \le i \le h - 1, b \in \{0, 1\}}, \\ &\{ \textbf{D}_{h, b} := \textbf{A}_{h - 1}^{-1}(\textbf{M}_{h, b} + \textbf{E}_{h, b}, \sigma ) \}_{b \in \{0, 1\}} \;, \end{aligned}$$

where \(\sigma = 2 \sqrt{n \log q}\), \(\textbf{A}_i \in \mathbb {Z}_q^{n_i \times m_i}\) is sampled using Lemma 1, \(\textbf{E}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n_{i-1} \times m_i}\), and \(m_i := 4 n_i \log q\).

We extend the construction to MBPs \(P = (\{\textbf{M}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textbf{u}, \textbf{v})\) via

$$ \textsf{ggh}.\textsf{encode}(P) := \textsf{ggh}.\textsf{encode}(\{\textbf{M}'_{i, b}\}_{i, b}) \;, $$

where \(\textbf{M}'_{1, b} := \textbf{u}^T \textbf{M}_{1, b}\), \(\textbf{M}'_{h, b} := \textbf{M}_{h, b} \textbf{v}\), and \(\textbf{M}'_{i, b} := \textbf{M}_{i, b}\) for \(2 \le i \le h - 1\).

The next lemma, which is similar to [40, Lemma 4.3] and [16, Lemma 5.3], captures the functionality provided by the construction, which is roughly that if the input matrices \(\textbf{M}_{i, b}\) have small entries, then for all \(\textbf{x}\in \{0, 1\}^h\), \(|\textbf{D}_\textbf{x}- \textbf{M}_{\textbf{x}}|\) is small.

Lemma 4

Except with probability \(\textsf{negl}(\mu )\) over \(\{\textbf{D}_{i, b}\}_{i \in [h], b \in \{0, 1\}} \leftarrow \textsf{ggh}.\textsf{encode}(\{\textbf{M}_{i, b}\}_{i, b}, \sigma )\), letting \(B := \max \{ \sigma \sqrt{n}, \max _{i \in [h-1], b \in \{0, 1\}} \Vert \textbf{M}_{i, b} \Vert _{\infty } \} \) we have that for all \(\textbf{x}\in \{0, 1\}^h\),

$$ \left\| \textbf{D}_\textbf{x}- \textbf{M}_{\textbf{x}} \right\| _{\infty } \le h \cdot \left( \prod _{i = 1}^{h - 1} m_i \right) \cdot B^h \;. $$

We defer the proof to the full version of the paper.
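For intuition behind Lemma 4, consider the case \(h = 2\) (a sketch of the standard telescoping argument; the notation is ours but follows Construction 1). Since \(\textbf{A}_1 \textbf{D}_{2, b} = \textbf{M}_{2, b} + \textbf{E}_{2, b}\) by the definition of the Gaussian preimage \(\textbf{A}_1^{-1}(\cdot , \sigma )\), we have

$$ \textbf{D}_{1, x_1} \textbf{D}_{2, x_2} = (\textbf{M}_{1, x_1} \textbf{A}_1 + \textbf{E}_{1, x_1}) \textbf{D}_{2, x_2} = \textbf{M}_{1, x_1} \textbf{M}_{2, x_2} + \textbf{M}_{1, x_1} \textbf{E}_{2, x_2} + \textbf{E}_{1, x_1} \textbf{D}_{2, x_2} \;, $$

so \(\textbf{D}_\textbf{x}- \textbf{M}_\textbf{x}\) consists of cross terms whose entries are bounded in terms of \(m_1\) and B. Iterating this expansion level by level (each level contributing, roughly, one factor of \(m_i\) and one factor of B) yields the bound stated in the lemma.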

4.5 Evasive LWE

To argue the security property of GGH encodings that we need, we use the following evasive LWE assumption from [40].

Let \(\sigma , \sigma ' \in \mathbb {R}_{> 0}\), and let \(\textsf{Samp}\) be a PPT algorithm that on input \(1^\lambda \) outputs

$$ \textbf{S}\in \mathbb {Z}_q^{n' \times n}, \textbf{P}\in \mathbb {Z}_q^{n \times t}, \textsf{aux}\in \{0, 1\}^* \;. $$

We define the following advantage functions:

$$ \textsf{Adv}_{\mathcal {A}_0}^{\textrm{pre}}(\lambda ) := \Pr [\mathcal {A}_0(\textbf{S}\textbf{B}+ \textbf{E}, \textbf{S}\textbf{P}+ \textbf{E}', \textsf{aux}) = 1] - \Pr [\mathcal {A}_0(\textbf{C}, \textbf{C}', \textsf{aux}) = 1], $$
$$ \textsf{Adv}_{\mathcal {A}_1}^{\textrm{post}}(\lambda ) := \Pr [\mathcal {A}_1(\textbf{S}\textbf{B}+ \textbf{E}, \textbf{D}, \textsf{aux}) = 1] - \Pr [\mathcal {A}_1(\textbf{C}, \textbf{D}, \textsf{aux}) = 1] $$

where

$$\begin{aligned} & (\textbf{S}, \textbf{P}, \textsf{aux}) \leftarrow \textsf{Samp}(1^\lambda ) \\ & \textbf{B}\leftarrow \mathbb {Z}_q^{n \times m}, \textbf{E}\leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n' \times m}, \textbf{E}' \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma '}^{n' \times t}, \\ & \textbf{C}\leftarrow \mathbb {Z}_q^{n' \times m}, \textbf{C}' \leftarrow \mathbb {Z}_q^{n' \times t}, \\ & \textbf{D}\leftarrow \textbf{B}^{-1}(\textbf{P}, \sigma ) \;. \end{aligned}$$

We say that the evasive LWE assumption \(\textsf{eLWE}(\textsf{Samp}, \sigma , \sigma ')\) holds if there exists some polynomial \(Q(\cdot )\) such that for every PPT \(\mathcal {A}_1\) there exists another PPT \(\mathcal {A}_0\) such that

$$ \textsf{Adv}_{\mathcal {A}_0}^{\textrm{pre}}(\lambda ) \ge \textsf{Adv}_{\mathcal {A}_1}^{\textrm{post}}(\lambda ) / Q(\lambda ) - \textsf{negl}(\lambda ) $$

and \(\textsf{time}(\mathcal {A}_0) \le \textsf{time}(\mathcal {A}_1) \cdot Q(\lambda )\). In this work, we will assume evasive LWE with \(\sigma = \sigma '\).
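As a concrete illustration, the following sketch samples the precondition distribution \((\textbf{S}\textbf{B}+ \textbf{E}, \textbf{S}\textbf{P}+ \textbf{E}')\) for a given \((\textbf{S}, \textbf{P})\) (Python with numpy, for small illustrative parameters; the rounded continuous Gaussian is a stand-in for \(\mathcal {D}_{\mathbb {Z}, \sigma }\)). Sampling the postcondition additionally requires a Gaussian preimage \(\textbf{D}\leftarrow \textbf{B}^{-1}(\textbf{P}, \sigma )\), which needs a lattice trapdoor for \(\textbf{B}\) and is not shown.

```python
import numpy as np

rng = np.random.default_rng()

def gauss_matrix(rows, cols, sigma):
    # Stand-in for a sample from D_{Z, sigma}^{rows x cols}.
    return np.rint(rng.normal(0.0, sigma, size=(rows, cols))).astype(np.int64)

def elwe_pre_samples(S, P, q, m, sigma):
    """Sample the 'pre' distribution (S*B + E, S*P + E') mod q."""
    n_prime, n = S.shape
    t = P.shape[1]
    B = rng.integers(0, q, size=(n, m), dtype=np.int64)   # uniform B in Z_q^{n x m}
    E = gauss_matrix(n_prime, m, sigma)
    E_prime = gauss_matrix(n_prime, t, sigma)
    return (S @ B + E) % q, (S @ P + E_prime) % q
```

The assumption then says, roughly, that if these samples (together with \(\textsf{aux}\)) are pseudorandom, then so is \((\textbf{S}\textbf{B}+ \textbf{E}, \textbf{D}, \textsf{aux})\).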

4.6 Obfuscating Sufficiently Secure \(\sigma \)-Matrix PRFs

Algorithm 3 (obfuscation of a \(\sigma \)-matrix PRF via \(\textsf{ggh}.\textsf{encode}\)).

The samplers \(\textsf{Samp}_{\mathcal {F}, \textsf{aux}, j}\) in the evasive LWE assumptions made by the next theorem will depend on the matrices sampled by \(\textsf{ggh}.\textsf{encode}\). We defer the full description of these samplers to the full version [32].

Theorem 3

Let \(\sigma \ge \sqrt{2n}\) and \(B \ge \sigma \sqrt{n}\), and suppose every \(f_k\) in \(\mathcal {F}:= \{f_k\}_{k \in \mathcal {K}_\lambda }\) is given by a height-h MBP with all entries of \(\textbf{u}\) and of the \(\textbf{M}_{i, b}\) bounded by B. Let \(\{\textbf{D}^{(k)}_{i, b}\}_{i \in [h], b \in \{0, 1\}}\) be the output of Algorithm 3 on input \(f_k\), and let \(m := 4 (n + w) \log q\). Then for all \(k \in \mathcal {K}_\lambda \), except with probability \(\textsf{negl}(\mu )\), we have that for all \(\textbf{x}\in \{0, 1\}^\ell \),

$$ |f_k(\textbf{x}) - \textbf{D}^{(k)}_{\textbf{y}}| \le h (m B)^{h} \;, $$

where \(\textbf{y}:= \textbf{x} \mid \textbf{x} \mid \dots \mid \textbf{x} \in \{0, 1\}^h\).

Moreover, letting \(\sigma ' = 2^{h^3} \cdot (n^2 \sigma )^{h + 1}\), and assuming LWE and evasive LWE (with appropriate parameters), if \(\mathcal {F}\) is a \(\sigma '\)-matrix PRF such that \(\textsf{poly}(\mu )\)-time adversaries achieve distinguishing advantage at most \(\textsf{negl}(2^{h^2 \lambda })\), then there is a distribution \(\mathcal {D}\) (independent of k) such that for \(k \leftarrow \mathcal {K}_\lambda \),

$$ \{\textbf{D}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textsf{aux}(k) \approx _c \mathcal {D}, \textsf{aux}(k) \;, $$

where \(\textsf{poly}(\mu )\)-time adversaries achieve distinguishing advantage at most \(\textsf{negl}(2^{h^2 \lambda })\).

We defer the proof to the full version of the paper.

5 Witness PRFs for \(\textsf{UP}\)

In this section, we first construct witness PRFs for \(\textsf{UP}\), that is, the class of unambiguous non-deterministic polynomial time languages. We use the standard definition of witness PRFs due to Zhandry [45], which, along with the (standard) definition of \(\textsf{UP}\), may be found in the full version [32].

5.1 Construction

The construction takes inspiration from the witness encryption construction of [40].

Construction 2

Let \(\alpha = \alpha (\lambda )\), and let \(R_{\alpha }: \{0, 1\}^{\alpha } \times \{0, 1\}^{p(\alpha )} \rightarrow \{0, 1\}\) be a \(\textsf{UP}\) relation. Choose \(\alpha (\lambda ) \in \textsf{poly}(\lambda )\) small enough that \(\alpha , p(\alpha ) \le n^{\delta /10}\). By a classical reduction to Circuit-SAT, we may assume without loss of generality that \(R_{\alpha }(\textbf{x}, \textbf{w})\) is represented by a circuit of depth \(O(\log (\alpha + p(\alpha ))) = O(\log \lambda )\). Let \(\ell = \alpha + p(\alpha )\).

The generation algorithm \(\textsf{Gen}(1^\lambda , R_{\alpha })\) proceeds as follows. First, it uses Barrington’s theorem (Theorem 2) to construct a read-c MBP

$$ \varGamma _{\alpha } = \left( \{\textbf{M}_{i, b} \in \{0, 1\}^{v \times v}\}_{i \in [h], b \in \{0, 1\}}, \textbf{u}, \textbf{v}\in \{0, 1\}^v \right) $$

computing \(1 - R_{\alpha }\), where \(v = O(1)\), \(c, \ell \in \textsf{poly}(\lambda )\), and \(h := c \cdot \ell \). Specifically, for all \(\textbf{x}\in \{0, 1\}^\alpha \) and \(\textbf{w}\in \{0, 1\}^{p(\alpha )}\),

$$ 1 - R_{\alpha }(\textbf{x}, \textbf{w}) = \textbf{u}^T \textbf{M}_{(\textbf{x}, \textbf{w}) \mid (\textbf{x}, \textbf{w}) \mid \dots \mid (\textbf{x}, \textbf{w})} \textbf{v}\;. $$

For \(i \in [h]\), it samples \(\textbf{S}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\), and for \(i \le \alpha \), it samples \(\textbf{T}_{i, b} \leftarrow \mathcal {D}_{\mathbb {Z}, \sigma }^{n \times n}\), where \(\sigma = \sqrt{2n}\). Next, it samples matrices

$$\begin{aligned} \textbf{Q}_{i, b} &= {\left\{ \begin{array}{ll} \begin{pmatrix} \textbf{M}_{i, b} \otimes \textbf{S}_{i, b} & \\ & \textbf{T}_{i, b} \end{pmatrix} & 1 \le i \le \alpha , \\ \begin{pmatrix} \textbf{M}_{i, b} \otimes \textbf{S}_{i, b} & \\ & \textbf{I}\end{pmatrix} & \alpha < i \le h, \end{array}\right. } \\ \textbf{L}&= \left( (\textbf{u}^T \otimes (1, 0, \dots , 0)) \mid (1, 0, \dots , 0)\right) , \\ \textbf{R}&= \begin{pmatrix} \textbf{v}\otimes \textbf{a}\\ \textbf{b}\end{pmatrix}, \end{aligned}$$

where \(\textbf{a}, \textbf{b}\leftarrow \mathbb {Z}_q^n\), and sets \(\mathcal {F}:= (\textbf{L}, \{\textbf{Q}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textbf{R})\). Finally, it outputs

$$\begin{aligned} \textsf{fk}&:= \{\{\textbf{T}_{i, b}\}_{i \in [\alpha ], b \in \{0, 1\}}, \textbf{b}\}, \\ \textsf{ek}&:= P := \textsf{ggh}.\textsf{encode}(\mathcal {F}) \;. \end{aligned}$$

The other two algorithms are as follows, where \(p := 2^{h^7 \lambda }\) (following Lemma 3).

  • \(\textsf{F}(\textsf{fk}, \textbf{x}):\) Outputs \(\left\lfloor \textbf{e}_1^T \textbf{T}_\textbf{x}\cdot \textbf{b} \right\rceil _p \in \mathbb {Z}_p\).

  • \(\textsf{Eval}(\textsf{ek}, \textbf{x}, \textbf{w}):\) If \(R_{\alpha }(\textbf{x}, \textbf{w}) = 0\), output \(\bot \). Else, use \(\textsf{ek}\) to compute

    $$ y := \left\lfloor P(\textbf{x}, \textbf{w}) \right\rceil _p \in \mathbb {Z}_p $$

    and output y.
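To see why \(\textsf{Eval}\) is consistent with \(\textsf{F}\) (a sketch; the formal correctness and security arguments are deferred as noted below), write \(\textbf{y}:= (\textbf{x}, \textbf{w}) \mid \dots \mid (\textbf{x}, \textbf{w})\), \(\textbf{S}_\textbf{y}:= \prod _{i=1}^{h} \textbf{S}_{i, y_i}\), and \(\textbf{T}_\textbf{x}:= \prod _{i=1}^{\alpha } \textbf{T}_{i, x_i}\). By the block structure of the \(\textbf{Q}_{i, b}\) and the mixed-product property of the Kronecker product,

$$ \textbf{L}\left( \prod _{i = 1}^{h} \textbf{Q}_{i, y_i} \right) \textbf{R}= \big ( \textbf{u}^T \textbf{M}_{\textbf{y}} \textbf{v}\big ) \cdot \big ( \textbf{e}_1^T \textbf{S}_{\textbf{y}} \textbf{a}\big ) + \textbf{e}_1^T \textbf{T}_\textbf{x}\textbf{b}= \big (1 - R_{\alpha }(\textbf{x}, \textbf{w})\big ) \cdot \textbf{e}_1^T \textbf{S}_{\textbf{y}} \textbf{a}+ \textbf{e}_1^T \textbf{T}_\textbf{x}\textbf{b}\;. $$

Thus, whenever \(R_{\alpha }(\textbf{x}, \textbf{w}) = 1\), the exact product equals \(\textbf{e}_1^T \textbf{T}_\textbf{x}\textbf{b}\); since \(P = \textsf{ggh}.\textsf{encode}(\mathcal {F})\) computes this product up to a small additive error (Lemma 4), and the parameters are chosen so that this error does not affect the rounding except on borderline values, \(\left\lfloor P(\textbf{x}, \textbf{w}) \right\rceil _p\) agrees with \(\textsf{F}(\textsf{fk}, \textbf{x}) = \left\lfloor \textbf{e}_1^T \textbf{T}_\textbf{x}\textbf{b} \right\rceil _p\).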

Lemma 5

View \((\textbf{L}, \{\textbf{Q}_{i, b}\}_{i \in [h], b \in \{0, 1\}}, \textbf{R})\) in Construction 2 as a family of MBPs \(\{f_k\}_k\) whose keys are the random matrices \(k := (\{\textbf{S}_{i, b}\}_{i, b}, \{\textbf{T}_{i, b}\}_{i, b})\). Let \(\sigma = \sqrt{2n}, \sigma ' = 2^{h^3 \lambda } \cdot (n^2 \sigma )^{h + 1}\) (following Lemma 2), and let \(\textsf{aux}(k) := \{\left\lfloor \textbf{e}_1^T \textbf{T}_\textbf{x}\cdot \textbf{b} \right\rceil _p\}_{\textbf{x}\notin L}\). Then \(\{f_k\}_k\) is a \(\sigma '\)-matrix PRF in the presence of \(\textsf{aux}\) such that all \(\textsf{poly}(\mu )\)-time adversaries have distinguishing advantage at most \(\textsf{negl}(2^{h^2 \lambda })\).

Theorem 4

Let L be a \(\textsf{UP}\) language with \(\textsf{UP}\) relation R, and assume \(\textsf{LWE}_{n, \textsf{poly}(n), q, \sigma }\) and \(\textsf{eLWE}_{\textsf{Samp}_{\mathcal {F}, \textsf{aux}, j}, \sigma ', \sigma '}\) for all \(1 \le j \le h - 1\), where \(\textsf{aux}(k) := \{\left\lfloor \textbf{e}_1^T \textbf{T}_\textbf{x}\cdot \textbf{b} \right\rceil _p\}_{\textbf{x}\notin L}\), and all parameters and matrices are as in Construction 2.

Then Construction 2 is an adaptively secure witness PRF for L.

We defer the proofs of the above lemma and theorem to the full version of the paper.

6 Designated Verifier Zero-Knowledge SNARGs from Witness PRFs

In this section, we show how to construct a designated verifier zero-knowledge SNARG generically from witness PRFs. We follow the NIZK construction of Sahai and Waters [38] (which is also a SNARG construction), and demonstrate how one can use a witness PRF for a relation R to obtain a designated-verifier SNARG for relation R. As a corollary, we obtain a SNARG for \(\textsf{UP}\) from evasive LWE.

Theorem 5

Suppose there exists an adaptively secure witness PRF for an \(\textsf{NP}\) language L with witness relation \(R : \{0, 1\}^{\ell _x} \times \{0, 1\}^{\ell _w} \rightarrow \{0, 1\}\). Then, for any \(\ell \ge \omega (\log (\lambda ))\), there exists a reusable, adaptively secure designated verifier SNARG for R which has statistical adaptive multi-theorem zero-knowledge (see Definition 3), with proof size \(\ell \), and prover (and verifier) runtime \(\textsf{poly}(\ell _x + \lambda + \ell )\).Footnote 10

Moreover, if the witness PRF is subexponentially secure (that is, for some \(\delta > 0\), all \(\textsf{poly}(2^{\lambda ^\delta })\) adversaries achieve advantage at most \(\textsf{negl}(2^{\lambda ^\delta })\) in the adaptive security game), then for any proof size \(\ell \ge \omega (\lambda ^\delta )\), the above SNARG is subexponentially sound (more precisely, all \(\textsf{poly}(2^{\lambda ^\delta })\) adversaries achieve success probability at most \(\textsf{negl}(2^{\lambda ^\delta })\) in the reusable adaptive soundness game).


We defer the proof to the full version of the paper.
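For orientation, the witness-PRF paradigm behind Theorem 5 can be sketched as follows (Python pseudocode over an abstract witness-PRF interface; the function names and the \(\ell \)-bit truncation are our own illustrative choices and elide the exact construction and parameter choices used in the proof). The CRS is the evaluation key, the designated verifier keeps the secret key, the prover sends an \(\ell \)-bit digest of \(\textsf{wPRF}.\textsf{Eval}(\textsf{ek}, x, w)\), and the verifier recomputes the same digest from \(\textsf{F}(\textsf{fk}, x)\).

```python
def snarg_gen(wprf_gen, R, ell):
    """crs is the wPRF evaluation key; the verifier's private state is fk."""
    fk, ek = wprf_gen(R)
    return (ek, ell), fk                           # (crs, tau)

def snarg_prove(crs, x, w, wprf_eval):
    ek, ell = crs
    y = wprf_eval(ek, x, w)                        # equals F(fk, x) when R(x, w) = 1
    # Assuming integer-encoded outputs; keep only ell bits as the proof.
    return None if y is None else y % (1 << ell)

def snarg_verify(tau, crs, x, pi, wprf_f):
    fk = tau
    _, ell = crs
    return pi == wprf_f(fk, x) % (1 << ell)
```

Intuitively, for \(x \notin L\) the value \(\textsf{F}(\textsf{fk}, x)\) remains pseudorandom given \(\textsf{ek}\), so a cheating prover matches its \(\ell \)-bit digest with probability roughly \(2^{-\ell }\) plus the distinguishing advantage; and since the honest proof is a deterministic function of x alone, a simulator holding \(\textsf{fk}\) can reproduce it exactly, which is the source of statistical zero-knowledge.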

Therefore, using our witness PRF for \(\textsf{UP}\) from Construction 2, we have the following corollary.

Corollary 1

Let L be a \(\textsf{UP}\) language with \(\textsf{UP}\) relation R. Assuming the LWE and evasive LWE assumptions made in Theorem 4, there exists a subexponentially adaptively sound, reusable, designated-verifier SNARG for L.

7 Adaptive SNARGs from Non-adaptive Witness PRFs

In this section, we show that a subexponentially secure non-adaptive witness PRF (defined in Definition 7) is sufficient to construct adaptive designated-verifier SNARGs with succinct proofs. Then, in Sect. 7.2, we argue that the Sahai-Waters SNARG [38] in the designated-verifier setting can be viewed as a non-adaptive witness PRF.

7.1 Non-adaptive Witness PRFs to SNARGs

First, we introduce the notion of a non-adaptive witness PRF.

Definition 7

(Non-Adaptive Witness PRFs). A non-adaptively secure witness PRF (non-adaptive wPRF) is a triple of PPT algorithms \(\textsf{wPRF}= (\textsf{wPRF}.\textsf{Gen}, \textsf{wPRF}.\textsf{F}, \textsf{wPRF}.\textsf{Eval})\) satisfying the witness PRF correctness property and the following alternative security definition.

Non-adaptive Security: Consider the following experiment \(\texttt{EXP}_\mathcal {A}^R(b, \lambda )\) between a challenger \(\mathcal {C}\) and adversary \(\mathcal {A}\):

  • \(\mathcal {A}\) chooses some \(x^* \notin L\), and sends it to \(\mathcal {C}\).

  • \(\mathcal {C}\) runs \((\textsf{fk}, \textsf{ek}) \leftarrow \textsf{Gen}(1^\lambda , R)\), and sends \(\textsf{ek}\) to \(\mathcal {A}.\)

  • If \(b = 0\), \(\mathcal {C}\) sets \(y = \textsf{F}(\textsf{fk}, x^*)\). Otherwise, she samples \(y \leftarrow \mathcal {Y}\). Then she sends y to \(\mathcal {A}\).

  • \(\mathcal {A}\) can now make adaptive queries \(x \in \mathcal {X}\) with \(x \ne x^*\), to which \(\mathcal {C}\) responds with \(\textsf{F}(\textsf{fk}, x)\).

  • \(\mathcal {A}\) outputs a bit \(b'\).

Let \(W_b\) be the event that the adversary outputs 1 in experiment b, and define the adversary’s advantage to be \(\texttt{Adv}_\mathcal {A}^R(\lambda )= \left| \Pr [W_0] - \Pr [W_1]\right| \). We say that \((\textsf{Gen}, \textsf{F}, \textsf{Eval})\) is non-adaptively secure for a relation R if for all PPT \(\mathcal {A}\), \(\texttt{Adv}_\mathcal {A}^R(\lambda ) = \textsf{negl}(\lambda )\). We further say that it is subexponentially non-adaptively secure if there exists a constant \(\alpha > 0\) such that, for all \(\mathcal {A}\) running in time \(\textsf{poly}(2^{\lambda ^\alpha })\), \(\texttt{Adv}_\mathcal {A}^R(\lambda ) = \textsf{negl}(2^{\lambda ^\alpha })\).
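A minimal harness for the non-adaptive security experiment might look as follows (Python; the adversary object, the wprf_* callables, and sample_Y are abstract stand-ins of our own, not part of the paper).

```python
def exp_non_adaptive(adversary, wprf_gen, wprf_f, R, sample_Y, b):
    """One run of EXP_A^R(b, lambda); returns the adversary's output bit b'."""
    x_star = adversary.choose_target()           # x* must lie outside L
    fk, ek = wprf_gen(R)
    y = wprf_f(fk, x_star) if b == 0 else sample_Y()
    def query(x):                                # oracle for F(fk, .), x != x*
        assert x != x_star
        return wprf_f(fk, x)
    return adversary.guess(ek, y, query)

def estimate_advantage(adversary, wprf_gen, wprf_f, R, sample_Y, trials=1000):
    # Empirical estimate of |Pr[W_0] - Pr[W_1]| (illustration only).
    freq = [sum(exp_non_adaptive(adversary, wprf_gen, wprf_f, R, sample_Y, b)
                for _ in range(trials)) / trials for b in (0, 1)]
    return abs(freq[0] - freq[1])
```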

We now show that a witness PRF that is subexponentially non-adaptively secure is in fact (subexponentially) adaptively secure.

Theorem 6

Assume there exists a subexponentially non-adaptively secure witness PRF for a language \(L \in \textsf{NP}\). Then, there exists a subexponentially adaptively secure witness PRF for L.

We defer the proof to the full version of the paper.

Combining this with Theorem 5, we have the following corollary.

Corollary 2

Assume there exists a subexponentially non-adaptively secure witness PRF for a language \(L \in \textsf{NP}\). Then, there exists a subexponentially adaptively sound designated-verifier SNARG for \(L \in \textsf{NP}.\)

7.2 Sahai-Waters SNARG through the Lens of Non-adaptive Witness PRFs

In the full version of the paper, we show that the Sahai-Waters SNARG construction [38], restricted to the designated verifier setting, is a non-adaptive witness PRF.

Theorem 7

Assuming subexponentially secure one-way functions and subexponentially secure indistinguishability obfuscation, there exists a non-adaptively subexponentially secure witness PRF for every language \(L \in \textsf{NP}\).

We defer the proof to the full version of the paper. Combining Theorem 7 and Theorem 6, we obtain the following corollary.

Corollary 3

Assuming subexponentially secure one-way functions and subexponentially secure indistinguishability obfuscation, there exists a reusable, adaptively subexponentially sound SNARG for \(\textsf{NP}\) in the designated verifier setting.

8 SNARK for \(\textsf{UP}\)

In this section, we show that any subexponentially sound SNARG for \(\textsf{UP}\) can be generically transformed into one with black-box knowledge soundness. Our transformation preserves public verifiability and zero knowledge.

We say that a public-key encryption scheme is injective if, except with negligible probability over the choice of the public key \(\textsf{pk}\), its encryption function \(\textsf{Enc}_\textsf{pk}(m; r)\) is injective. Our zero-knowledge transformation makes use of a fully homomorphic encryption scheme (FHE) as well as an injective public-key encryption scheme. Both primitives can be instantiated from the learning with errors (LWE) assumption.

Theorem 8

Assume the existence of a leveled fully homomorphic encryption scheme and an injective public-key encryption scheme. If there exists a reusably and subexponentially soundFootnote 11 designated-verifier \(\textsf{SNARG}\) system for \(\textsf{UP}\), then there exists a reusably sound designated-verifier \(\textsf{SNARK}\) for \(\textsf{UP}\) with a non-adaptive black-box knowledge extractor.

Additionally, the transformation preserves public verifiability, zero-knowledge, and adaptive soundness.

Plugging in the \(\textsf{zk} \)-\(\textsf{SNARG}\) system we constructed in Sect. 6, and noting that LWE implies a leveled fully homomorphic encryption scheme and an injective public-key encryption scheme, we get the following corollary.

Corollary 4

Let L be a \(\textsf{UP}\) language with \(\textsf{UP}\) relation R. Assuming the LWE and evasive LWE assumptions of Theorem 4, there exists a reusably and adaptively sound designated-verifier \(\textsf{zk} \)-\(\textsf{SNARK}\) for L with a non-adaptive black-box knowledge extractor.

Furthermore, applying both Corollary 3 and Theorem 8 to the Sahai-Waters SNARG [38], we get the same primitive, but from indistinguishability obfuscation instead of evasive LWE.

Corollary 5

Let L be a \(\textsf{UP}\) language with \(\textsf{UP}\) relation R. Assuming subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions, and \(\textsf{LWE}_{n, \textsf{poly}(n), q, \sigma }\) for \(n = \textsf{poly}(\lambda )\), \(\sigma \ge \sqrt{2n}\), and \(q = 2^{n^\varepsilon }\) for some constant \(\varepsilon > 0\), there exists a reusably and adaptively sound designated-verifier \(\textsf{zk} \)-\(\textsf{SNARK}\) for L with a non-adaptive black-box knowledge extractor.

Remark 2

(On the work of [14]). Our transformation is inspired by the recent work of Campanelli, Ganesh, Khoshakhlagh and Siim [14]. However, there are two issues with their claims. First, they claim that their transformation converts a non-adaptive SNARG (for \(\textsf{UP}\)) into a SNARK. To the best of our knowledge, their transformation as-is seems to require subexponential soundness of the underlying SNARG. We elaborate on this in the full version [32]. Second, they claim that their construction preserves zero-knowledge, but to the best of our understanding, this is not the case. We have contacted the authors and they agree with both of the above points. Coming up with a transformation from a non-adaptive SNARG (with negligible soundness) for \(\textsf{UP}\)/\(\textsf{NP}\) to a non-adaptively extractable SNARK for \(\textsf{UP}\) is an interesting open problem.

Remark 3

One could ask if a similar compiler can be shown for \(\textsf{NP}\) instead of merely \(\textsf{UP}\). It turns out that black-box extractable SNARKs for \(\textsf{NP}\) in the plain model do not exist [14, 27]; therefore, the restriction to \(\textsf{UP}\) is, in a sense, necessary.

We defer the construction and proofs to the full version of the paper.