Interactive proofs [62] are central to modern cryptography and complexity theory. One extensively studied aspect of interactive proofs is their expressibility, culminating with the result \({\mathsf {IP}}={\mathsf {PSPACE}}\) [96]. Another aspect, which is the focus of this work, is that proofs for \({\mathsf {NP}}\) statements can potentially be much shorter than an \({\mathsf {NP}}\) witness and be verified much faster than the time required for checking the \({\mathsf {NP}}\) witness.
Background
Succinct interactive arguments. In interactive proofs for \({\mathsf {NP}}\) with statistical soundness, significant savings in communication (let alone verification time) are unlikely [21, 58, 67, 102]. If we settle for proof systems with computational soundness, known as argument systems [9], then significant savings can be made. Using collision-resistant hashes (\(\text{ CRH } \)s) and probabilistically-checkable proofs (\(\text{ PCP } \)s) [16], Kilian [74] showed a four-message interactive argument for \({\mathsf {NP}}\) where, to prove membership of an instance \(x\) in a given \({\mathsf {NP}}\) language L with \({\mathsf {NP}}\) machine \(M_{L}\), communication and verification time are bounded by \({\mathrm {poly}}(\lambda + |M_{L}| + |x| + \log t)\), and the prover’s running time is \({\mathrm {poly}}(\lambda + |M_{L}| + |x| + t)\). Here, \(t\) is the classical \({\mathsf {NP}}\) verification time of \(M_{L}\) for the instance \(x\), \(\lambda \) is a security parameter, and \({\mathrm {poly}}\) is a universal polynomial (independent of \(\lambda \), \(M_{L}\), \(x\), and \(t\)). We call such argument systems succinct.
Proof of knowledge. A natural strengthening of computational soundness is (computational) proof of knowledge: it requires that, whenever the verifier is convinced by an efficient prover, we can conclude not only that a valid witness for the theorem exists, but also that such a witness can be extracted efficiently from the prover. This property is satisfied by most proof system constructions, including the aforementioned one of Kilian [74], and is useful in many applications of succinct arguments.
Removing interaction. Kilian’s protocol requires four messages. A challenge, which is of both theoretical and practical interest, is the construction of non-interactive succinct arguments. As a first step in this direction, Micali [82] showed how to construct publicly-verifiable one-message succinct non-interactive arguments for \({\mathsf {NP}}\), in the random oracle model, by applying the Fiat–Shamir heuristic [54] to Kilian’s protocol. In the plain model, one-message solutions are impossible for hard-enough languages (against non-uniform provers), so one usually considers the weaker goal of two-message succinct arguments where the verifier message is generated independently of the statement to be proven. Following [68], we call such arguments \(\text{ SNARG } \)s. More precisely, a \(\text{ SNARG } \) for a language L is a triple of algorithms \((G ,P,V)\) where: the generator \(G\), given the security parameter \(\lambda \), samples a reference string \(\sigma \) and a corresponding verification state \(\tau \) (\(G\) can be thought of as being run during an offline phase by the verifier, or by someone the verifier trusts); the (honest) prover \(P(\sigma ,x,w)\) produces a proof \(\pi \) for the statement “\(x\in L\)” given a witness \(w\); then, \(V(\tau ,x,\pi )\) verifies the validity of \(\pi \). Soundness should hold even if \(x\) is chosen depending on \(\sigma \).
Gentry and Wichs [68] showed that no \(\text{ SNARG } \) can be proven secure via a black-box reduction to a falsifiable assumption [85]; this may justify using non-standard assumptions to construct \(\text{ SNARG } \)s. (Note that [68] rule out \(\text{ SNARG } \)s only for (hard-enough) \({\mathsf {NP}}\) languages. For the weaker goal of verifying deterministic polynomial-time computations in various models, there are beautiful constructions relying on standard assumptions, such as [3, 20, 37, 38, 40, 41, 52, 55, 59, 77]. We focus on verifying nondeterministic polynomial-time computations.)
Extending earlier works [1, 44, 50, 83], several works showed how to remove interaction in Kilian’s \(\text{ PCP } \)-based protocol and obtain \(\text{ SNARG } \)s of knowledge (\(\text{ SNARK } \)s) using extractable collision-resistant hashing [10, 11, 46, 60], or construct \(\text{ MIP } \)-based \(\text{ SNARK } \)s using fully-homomorphic encryption with an extractable homomorphism property [8].
The preprocessing model. A notion that is weaker than a \(\text{ SNARK } \) is that of a preprocessing \(\text{ SNARK } \): here, the verifier is allowed to conduct an expensive offline phase. More precisely, the generator \(G\) takes as an additional input a time bound \(T\), may run in time \({\mathrm {poly}}(\lambda +T)\) (rather than \({\mathrm {poly}}(\lambda + \log T)\)), and generates \(\sigma \) and \(\tau \) that can be used, respectively, to prove and verify correctness of computations of length at most \(T\). Bitansky et al. [12] showed that \(\text{ SNARK } \)s can always be “algorithmically improved”; in particular, preprocessing \(\text{ SNARK } \)s imply ones without preprocessing. (The result of [12] crucially relies on the fast verification time and the adaptive proof-of-knowledge property of the \(\text{ SNARK } \).) Thus, “preprocessing can always be removed” at the expense of only a \({\mathrm {poly}}(\lambda )\)-loss in verification efficiency.
Zero knowledge. Another desired feature of \(\text{ SNARK } \)s is zero knowledge, namely hiding from the verifier anything but the truth of the statement being proved. More concretely, we aim at constructions of \(\text{ SNARK } \)s that satisfy the standard notion of non-interactive zero-knowledge [17]. Previous \(\text{ SNARK } \) constructions, from [64] onward, achieve zero knowledge at a very small extra cost. This will also be the case for the main constructions in this work. However, to simplify the exposition, we will start by focusing on the other features, and discuss the extra zero-knowledge feature separately.
Motivation
The typical approach to construct succinct arguments (or, more generally, other forms of proof systems with nontrivial efficiency properties) conforms with the following methodology: first, give an information-theoretic construction, using some form of probabilistic checking to verify computations, in a model that enforces certain restrictions on provers (e.g., the \(\text{ PCP } \) model [10, 11, 19, 44, 46, 60, 74, 82] or other models of probabilistic checking [8, 72, 76, 94, 95, 97, 98]); next, use cryptographic tools to compile the information-theoretic construction into an argument system (where there are no restrictions on the prover other than it being an efficient algorithm). We refer to the former ingredient as a probabilistic proof system and to the latter as a cryptographic compiler.
Existing constructions of preprocessing \(\text{ SNARK } \)s seem to diverge from this methodology, while at the same time offering several attractive features, such as public verification, proofs consisting of only \(O(1)\) encrypted (or encoded) field elements, and verification via arithmetic circuits that are linear in the statement.
Groth [64] and Lipmaa [78] (who builds on Groth’s approach) introduced clever techniques for constructing preprocessing \(\text{ SNARK } \)s by leveraging knowledge-of-exponent assumptions [25, 43, 71] in bilinear groups. At a high level, Groth considered a simple reduction from circuit satisfaction problems to an algebraic satisfaction problem of quadratic equations, and then constructed a set of specific cryptographic tools to succinctly check satisfiability of this problem. Gennaro et al. [56] made a first step toward better separating the “information-theoretic ingredient” from the “cryptographic ingredient” in preprocessing \(\text{ SNARK } \)s. They formulated a new type of algebraic satisfaction problem, called Quadratic Span Programs (\(\text{ QSP } \)s), which are expressive enough to allow for much simpler, and more efficient, cryptographic checking, essentially under the same assumptions used by Groth. In particular, they invested significant effort in obtaining an efficient reduction from circuit satisfiability to \(\text{ QSP } \)s. (See Sect. 1.5 for a more detailed overview of the relation between our work and [56].)
Comparing the latter QSP-based approach to the probabilistic-checking-based approach described above, we note that a reduction to an algebraic satisfaction problem is a typical first step, because such satisfaction problems tend to be more amenable to probabilistic checking. As explained above, cryptographic tools are then usually invoked to enforce the relevant probabilistic-checking model (e.g., the \(\text{ PCP } \) one). The aforementioned works [56, 64, 78], on the other hand, seem to somehow skip the probabilistic-checking step, and directly construct specific cryptographic tools for checking satisfiability of the algebraic problem itself. While this discrepancy may not be a problem per se, we believe that understanding it and formulating a clear methodology for the construction of preprocessing \(\text{ SNARK } \)s are problems of great interest. Furthermore, a clear methodology that separates an information-theoretic probabilistic proof system from a cryptographic compiler may lead not only to a deeper conceptual understanding, but also to concrete improvements to different features of \(\text{ SNARK } \)s (e.g., communication complexity, verifier complexity, prover complexity, and so on). Thus, we ask:
Is there a general methodology for constructing preprocessing \(\text{ SNARK } \)s from probabilistic proof systems? Which improvements can it lead to?
Our Results
We present a general methodology for constructing preprocessing \(\text{ SNARK } \)s from suitable kinds of probabilistic proof systems. Using different instantiations of this methodology, we obtain conceptually simple variants of previous \(\text{ SNARK } \)s, as well as \(\text{ SNARK } \)s with new efficiency features.
Our methodology starts with a linear PCP, a more structured variant of a classical PCP in which the verifier can make a small number of inner-product queries to a single proof vector. We transform such a linear PCP into a stronger kind of probabilistic proof system called a linear interactive proof, and then to a \(\text{ SNARK } \) via a cryptographic compiler that respects the efficiency features of the linear PCP.
In more detail, our contribution is threefold:
- We introduce a new, information-theoretic probabilistic proof system that extends the standard interactive proof model by considering algebraically-bounded provers. Concretely, we focus on linear interactive proofs (\(\text{ LIP } \)s), where both honest and malicious provers are restricted to computing linear (or affine) functions of messages they receive over some finite field or ring. We construct succinct two-message \(\text{ LIP } \)s for \({\mathsf {NP}}\) by applying a simple and general transformation to any linear PCP. We also present an alternative construction of LIPs from classical PCPs.
- We give cryptographic transformations from (succinct, two-message) \(\text{ LIP } \)s to preprocessing \(\text{ SNARK } \)s, using different forms of linear targeted malleability [34], which can be instantiated based on existing knowledge assumptions. More concretely, we assume a “linear-only” encryption scheme that only supports linear homomorphism. Our transformation is very intuitive: to force a prover to “act linearly” on the verifier’s message, as in the LIP soundness guarantee, the preprocessed common reference string simply includes an encryption of each field or ring element in the verifier’s LIP message with such a linear-only encryption. This enables the honest \(\text{ SNARK } \) prover to faithfully compute an encryption of its correct LIP message, which the verifier can decrypt. For the case of designated-verifier \(\text{ SNARK } \)s, this simple idea suffices. To obtain public verification, we require the LIP verification to be “simple” (say, testing whether a quadratic function of the answers is 0) and replace the linear-only encryption by a linear-only encoding that supports “simple” zero-tests (say, via pairing).
- Following this methodology, we obtain several constructions that either simplify previous ones or exhibit new asymptotic efficiency features. The latter include “single-ciphertext preprocessing \(\text{ SNARK } \)s” and improved succinctness-soundness tradeoffs in the designated-verifier setting. We also offer a new perspective on existing constructions of preprocessing \(\text{ SNARK } \)s: namely, although existing constructions do not explicitly invoke \(\text{ PCP } \)s, they can be reinterpreted as using linear \(\text{ PCP } \)s.
- We also extend our methodology to obtain zero-knowledge \(\text{ LIP } \)s and \(\text{ SNARK } \)s.
We now discuss our results further, starting in Sect. 1.3.1 with the information-theoretic constructions of \(\text{ LIP } \)s, followed in Sect. 1.3.2 by the cryptographic transformations to preprocessing \(\text{ SNARK } \)s, and concluding in Sect. 1.3.3 with the new features we are able to obtain.
Linear Interactive Proofs
The \(\text{ LIP } \) model modifies the traditional interactive proofs model in a way analogous to the way the common study of algebraically-bounded “adversaries” modifies other settings, such as pseudorandomness [35, 86] and randomness extraction [48, 63]. In the \(\text{ LIP } \) model, both honest and malicious provers are restricted to apply linear (or affine) functions over a finite field \(\mathbb {F}\) to messages they receive from the verifier. (The notion can be naturally generalized to apply over rings.) The choice of these linear functions can depend on auxiliary input to the prover (e.g., a witness), but not on the verifier’s messages (Fig. 1).
With the goal of non-interactive succinct verification in mind, we restrict our attention to (input-oblivious) two-message \(\text{ LIP } \)s for boolean circuit satisfiability problems with the following template. To verify the relation \(\mathcal {R}_{C} = \left\{ (x,w):C(x,w)=1\right\} \) where \(C\) is a boolean circuit, the \(\text{ LIP } \) verifier \(V_{\scriptscriptstyle {\mathsf {LIP}}}\) sends to the \(\text{ LIP } \) prover \(P_{\scriptscriptstyle {\mathsf {LIP}}}\) a message \({\mathbf {q}}\) that is a vector of field elements, depending on \(C\) but not on \(x\); \(V_{\scriptscriptstyle {\mathsf {LIP}}}\) may also output a verification state \({\mathbf {u}}\). The \(\text{ LIP } \) prover \(P_{\scriptscriptstyle {\mathsf {LIP}}}(x,w)\) applies to \({\mathbf {q}}\) an affine transformation \(\Pi = (\Pi ',\varvec{b})\), resulting in only a constant number of field elements. The prover’s message \({\mathbf {a}} = \Pi ' \cdot {\mathbf {q}}+\varvec{b}\) can then be quickly verified (e.g., with \(O(|x|)\) field operations) by \(V_{\scriptscriptstyle {\mathsf {LIP}}}\), and the soundness error is at most \(O(1/|\mathbb {F}|)\). From here on, we shall use the term \(\text{ LIP } \) to refer to \(\text{ LIP } \)s that adhere to the above template.
LIP complexity measures. Our constructions provide different tradeoffs among several complexity measures of an \(\text{ LIP } \), which ultimately affect the features of the resulting preprocessing \(\text{ SNARK } \)s. The two most basic complexity measures are the number of field elements sent by the verifier and the number of those sent by the prover. An additional measure that we consider in this work is the algebraic complexity of the verifier (when viewed as an \(\mathbb {F}\)-arithmetic circuit). Specifically, splitting the verifier into a query algorithm \(Q_{\scriptscriptstyle {\mathsf {LIP}}}\) and a decision algorithm \(D_{\scriptscriptstyle {\mathsf {LIP}}}\), we say that it has degree \((d_{Q} ,d_{D})\) if \(Q_{\scriptscriptstyle {\mathsf {LIP}}}\) can be computed by a vector of multivariate polynomials of total degree \(d_{Q}\) each in the verifier’s randomness, and \(D_{\scriptscriptstyle {\mathsf {LIP}}}\) by a vector of multivariate polynomials of total degree \(d_{D}\) each in the \(\text{ LIP } \) answers \({\mathbf {a}}\) and the verification state \({\mathbf {u}}\). Finally, of course, the running times of the query algorithm, decision algorithm, and prover algorithm are all complexity measures of interest. See Sect. 2.3 for a definition of \(\text{ LIP } \)s and their complexity measures.
As mentioned above, our \(\text{ LIP } \) constructions are obtained by applying general transformations to two types of \(\text{ PCP } \)s. We now describe each of these transformations and the features they achieve. Some of the parameters of the resulting constructions are summarized in Table 1.
LIPs from linear PCPs. A linear \(\text{ PCP } \) (\(\text{ LPCP } \)) of length \(m\) is an oracle computing a linear function \(\varvec{\pi }:\mathbb {F}^{m} \rightarrow \mathbb {F}\); namely, the answer to each oracle query \({\varvec{q}}_{i} \in \mathbb {F}^{m}\) is \(a_{i}=\left\langle \varvec{\pi } , {\varvec{q}}_{i} \right\rangle \). Note that, unlike in an \(\text{ LIP } \) where different affine functions, given by a matrix \(\Pi \) and shift \(\varvec{b}\), are applied to a message \({\mathbf {q}}\), in an \(\text{ LPCP } \) there is one linear function \(\varvec{\pi }\), which is applied to different queries. (An \(\text{ LPCP } \) with a single query can be viewed as a special case of an \(\text{ LIP } \).) This difference prevents a direct use of an \(\text{ LPCP } \) as an \(\text{ LIP } \).
Our first transformation converts any (multi-query) \(\text{ LPCP } \) into an \(\text{ LIP } \) with closely related parameters. Concretely, we transform any \(k\)-query \(\text{ LPCP } \) of length \(m\) over \(\mathbb {F}\) into an \(\text{ LIP } \) with verifier message in \(\mathbb {F}^{(k+1)m}\), prover message in \(\mathbb {F}^{k+1}\), and the same soundness error up to an additive term of \({1}/{|\mathbb {F}|}\). The transformation preserves the key properties of the \(\text{ LPCP } \), including the algebraic complexity of the verifier. Our transformation is quite natural: the verifier sends \({\mathbf {q}}=(\varvec{q}_{1},\dots ,\varvec{q}_{k+1})\) where \(\varvec{q}_{1},\dots ,\varvec{q}_{k}\) are the \(\text{ LPCP } \) queries and \(\varvec{q}_{k+1}=\alpha _{1}\varvec{q}_{1}+\cdots + \alpha _{k}\varvec{q}_{k}\) is a random linear combination of these. The (honest) prover responds with \(a_{i}=\left\langle \varvec{\pi } , {\mathbf {\varvec{q}}}_{i} \right\rangle \), for \(i=1,\ldots ,k+1\). To prevent a malicious prover from using inconsistent choices for \(\varvec{\pi }\), the verifier checks that \(a_{k+1}=\alpha _{1}a_{1}+\cdots +\alpha _{k}a_{k}\).
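The transformation is short enough to sketch in code. Below is an illustrative Python toy over a prime field (our own naming; the field, the dimensions, and the random "LPCP" proof vector are placeholders, not the actual constructions in this work):

```python
import random

P = 2**61 - 1  # a Mersenne prime; stands in for the field F

def lpcp_to_lip_queries(lpcp_queries):
    """Append to the k LPCP queries a random linear combination of them:
    the extra (k+1)-st query used for the consistency test."""
    m = len(lpcp_queries[0])
    alphas = [random.randrange(P) for _ in lpcp_queries]
    q_extra = [sum(a * q[j] for a, q in zip(alphas, lpcp_queries)) % P
               for j in range(m)]
    return lpcp_queries + [q_extra], alphas

def honest_prover(pi, queries):
    """The honest LIP prover answers every query with the SAME linear
    function <pi, .>, as an honest LPCP oracle would."""
    return [sum(x * y for x, y in zip(pi, q)) % P for q in queries]

def consistency_check(answers, alphas):
    """Verifier's extra test: a_{k+1} must equal sum_i alpha_i * a_i."""
    k = len(alphas)
    return answers[k] == sum(a * ans for a, ans in zip(alphas, answers[:k])) % P

# toy run: k = 3 queries of length m = 5
pi = [random.randrange(P) for _ in range(5)]
qs = [[random.randrange(P) for _ in range(5)] for _ in range(3)]
queries, alphas = lpcp_to_lip_queries(qs)
assert consistency_check(honest_prover(pi, queries), alphas)

# a cheating prover answering query 2 with a *different* linear function
# is caught except with probability O(1/|F|)
pi2 = [random.randrange(P) for _ in range(5)]
cheat = honest_prover(pi, queries)
cheat[1] = sum(x * y for x, y in zip(pi2, queries[1])) % P
assert not consistency_check(cheat, alphas)
```

An accepting transcript with inconsistent answers requires guessing a relation involving the hidden \(\alpha _{i}\), which happens with probability \(O(1/|\mathbb {F}|)\); the underlying LPCP decision test (not shown) is then run on the first \(k\) answers.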
By relying on two different \(\text{ LPCP } \) instantiations, we obtain two corresponding \(\text{ LIP } \) constructions:
- A variant of the Hadamard-based \(\text{ PCP } \) of Arora et al. [4] (ALMSS), extended to work over an arbitrary finite field \(\mathbb {F}\), yields a very simple \(\text{ LPCP } \) with three queries. After applying our transformation, for a circuit \(C\) of size \(s\) and input length \(n\), the resulting \(\text{ LIP } \) for \(\mathcal {R}_{C}\) has verifier message in \(\mathbb {F}^{O(s^{2})}\), prover message in \(\mathbb {F}^{4}\), and soundness error \(O(1/|\mathbb {F}|)\). When viewed as \(\mathbb {F}\)-arithmetic circuits, the prover \(P_{\scriptscriptstyle {\mathsf {LIP}}}\) and query algorithm \(Q_{\scriptscriptstyle {\mathsf {LIP}}}\) are both of size \(O(s^{2})\), and the decision algorithm is of size \(O(n)\). Furthermore, the degree of \((Q_{\scriptscriptstyle {\mathsf {LIP}}},D_{\scriptscriptstyle {\mathsf {LIP}}})\) is \((2,2)\).
- A (strong) quadratic span program (\(\text{ QSP } \)), as defined by Gennaro et al. [56], directly yields a corresponding \(\text{ LPCP } \) with three queries. For a circuit \(C\) of size \(s\) and input length \(n\), the resulting \(\text{ LIP } \) for \(\mathcal {R}_{C}\) has verifier message in \(\mathbb {F}^{O(s)}\), prover message in \(\mathbb {F}^{4}\), and soundness error \(O(s/|\mathbb {F}|)\). When viewed as \(\mathbb {F}\)-arithmetic circuits, the prover \(P_{\scriptscriptstyle {\mathsf {LIP}}}\) is of size \(\widetilde{O}(s)\), the query algorithm \(Q_{\scriptscriptstyle {\mathsf {LIP}}}\) is of size \(O(s)\), and the decision algorithm is of size \(O(n)\). The degree of \((Q_{\scriptscriptstyle {\mathsf {LIP}}},D_{\scriptscriptstyle {\mathsf {LIP}}})\) is \((O(s),2)\).
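To make the Hadamard-based construction more concrete, the following Python toy sketches only its tensor-consistency ingredient: checking, via two random queries and their tensor, that the quadratic part of the proof really is \(z \otimes z\). (This is a sketch under simplifying assumptions: the actual \(\text{ LPCP } \) additionally folds a random combination of the circuit's quadratic constraints into a third query, and the field and dimensions here are placeholders.)

```python
import random

P = 2**31 - 1  # a Mersenne prime; stands in for the field F

def tensor(u, v):
    # entry (i, j) of u (tensor) v is u_i * v_j, flattened row-major
    return [x * y % P for x in u for y in v]

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % P

def tensor_consistency_test(pi_lin, pi_quad, n):
    """One round of the ALMSS-style test that pi_quad = pi_lin (tensor) pi_lin:
    check <pi_lin, r> * <pi_lin, r'> == <pi_quad, r (tensor) r'>."""
    r = [random.randrange(P) for _ in range(n)]
    rp = [random.randrange(P) for _ in range(n)]
    return inner(pi_lin, r) * inner(pi_lin, rp) % P == inner(pi_quad, tensor(r, rp))

z = [random.randrange(P) for _ in range(4)]
assert tensor_consistency_test(z, tensor(z, z), 4)    # honest proof passes
bad = [random.randrange(P) for _ in range(16)]
assert not tensor_consistency_test(z, bad, 4)         # caught w.h.p.
```

The honest case passes identically, since \(\langle z,r\rangle \langle z,r'\rangle = \sum _{i,j} z_{i} z_{j} r_{i} r'_{j} = \langle z\otimes z, r\otimes r'\rangle \); a proof whose quadratic part differs from \(z\otimes z\) fails except with probability \(O(1/|\mathbb {F}|)\) by Schwartz–Zippel.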
A notable feature of the \(\text{ LIP } \)s obtained above is the very low “online complexity” of verification: in both cases, the decision algorithm is an arithmetic circuit of size \(O(n)\). Moreover, all the efficiency features mentioned above apply not only to satisfiability of boolean circuits \(C\), but also to satisfiability of \(\mathbb {F}\)-arithmetic circuits.
In both the above constructions, the circuit to be verified is first represented as an appropriate algebraic satisfaction problem, and then probabilistic checking machinery is invoked. In the first case, the problem is a system of quadratic equations over \(\mathbb {F}\), and, in the second case, it is a (strong) quadratic span program (\(\text{ QSP } \)) over \(\mathbb {F}\). These algebraic problems are the very same problems underlying [56, 64, 78].
As explained earlier, [56] invested much effort to show an efficient reduction from circuit satisfiability problems to \(\text{ QSP } \)s. Our work neither subsumes nor simplifies the reduction to \(\text{ QSP } \)s of [56]; instead, it reveals a simple \(\text{ LPCP } \) to check a \(\text{ QSP } \), and this \(\text{ LPCP } \) can be plugged into our general transformations. Reducing circuit satisfiability to a system of quadratic equations over \(\mathbb {F}\) is much simpler, but generating proofs for the resulting problem is quadratically more expensive. (Concretely, both [64, 78] require \(O(s^{2})\) computation already in the preprocessing phase.) See Sect. 3.1 for more details.
LIPs from traditional PCPs. Our second transformation relies on traditional “unstructured” \(\text{ PCP } \)s. These \(\text{ PCP } \)s are typically more difficult to construct than \(\text{ LPCP } \)s; however, our second transformation has the advantage of requiring the prover to send only a single field element. Concretely, our transformation converts a traditional \(k\)-query \(\text{ PCP } \) into a 1-query \(\text{ LPCP } \), over a sufficiently large field. Here the \(\text{ PCP } \) oracle is represented via its truth table, which is assumed to be a binary string of polynomial size (unlike the \(\text{ LPCP } \)s mentioned above, whose truth tables have size that is exponential in the circuit size). The transformation converts any \(k\)-query \(\text{ PCP } \) of proof length \(m\) and soundness error \(\varepsilon \) into an \(\text{ LIP } \), with soundness error \(O(\varepsilon )\) over a field of size \(2^{O(k)} / \varepsilon \), in which the verifier sends \(m\) field elements and receives only a single field element in return. The high-level idea is to use a sparse linear combination of the \(\text{ PCP } \) entries to pack the \(k\) answer bits into a single field element. The choice of this linear combination uses additional random noise to ensure that the prover’s coefficients are restricted to binary values, and uses easy instances of subset-sum to enable an efficient decoding of the \(k\) answer bits.
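The packing step can be illustrated in its simplest form, with powers of two as the super-increasing (easy-subset-sum) coefficients. This toy omits the random-noise component that, in the actual transformation, forces a malicious prover's coefficients to be binary:

```python
def pack_bits(bits):
    """Pack k answer bits into one element of a large enough field using
    super-increasing coefficients c_i = 2^i, so that the subset sum
    decodes uniquely when all coefficients are 0/1."""
    return sum(b << i for i, b in enumerate(bits))

def unpack_bits(value, k):
    """Greedy subset-sum decoding (trivial for powers of two)."""
    return [(value >> i) & 1 for i in range(k)]

answers = [1, 0, 1, 1, 0]          # the k PCP answer bits
packed = pack_bits(answers)        # a single field element
assert unpack_bits(packed, 5) == answers
```

With this choice the prover's single returned field element carries all \(k\) answer bits, matching the "single field element" communication of the transformation.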
Taking time complexity to an extreme, we can apply this transformation to the \(\text{ PCP } \)s of Ben-Sasson et al. [28] and get \(\text{ LIP } \)s where the prover and verifier complexity are both optimal up to \({\mathrm {polylog}}(s)\) factors, but where the prover sends a single element in a field of size \(|\mathbb {F}|=2^{\lambda \cdot {\mathrm {polylog}}(s)}\). Taking succinctness to an extreme, we can apply our transformation to \(\text{ PCP } \)s with soundness error \(2^{-\lambda }\) and \(O(\lambda )\) queries, obtaining an \(\text{ LIP } \) with similar soundness error in which the prover sends a single element in a field of size \(|\mathbb {F}|=2^{\lambda \cdot O(1)}\). For instance, using the query-efficient \(\text{ PCP } \)s of Håstad and Khot [69], the field size is only \(|\mathbb {F}|=2^{\lambda \cdot (3+o(1))}\). (Jumping ahead, this means that a field element can be encrypted using a single, normal-size ciphertext of homomorphic encryption schemes such as Paillier or Elgamal even when \(\lambda =100\).) On the down side, the degrees of the \(\text{ LIP } \) verifiers obtained via this transformation are high; we give evidence that this is inherent when starting from “unstructured” \(\text{ PCP } \)s. See Sect. 3.2 for more details.
Honest-verifier zero-knowledge LIPs. We also show how to make the above \(\text{ LIP } \)s zero-knowledge against honest verifiers (\(\text{ HVZK } \)). Looking ahead, using \(\text{ HVZK } \text{ LIP } \)s in our cryptographic transformations results in preprocessing \(\text{ SNARK } \)s that are zero-knowledge (against malicious verifiers in the CRS model).
For the Hadamard-based \(\text{ LIP } \), an \(\text{ HVZK } \) variant can be obtained directly with essentially no additional cost. More generally, we show how to transform any \(\text{ LPCP } \) where the decision algorithm is of low degree to an \(\text{ HVZK } \text{ LPCP } \) with the same parameters up to constant factors (see Sect. 8); this \(\text{ HVZK } \text{ LPCP } \) can then be plugged into our first transformation to obtain an \(\text{ HVZK } \text{ LIP } \). Both of the \(\text{ LPCP } \) constructions mentioned earlier satisfy the requisite degree constraints.
For the second transformation, which applies to traditional \(\text{ PCP } \)s (whose verifiers, as discussed above, must have high degree and thus cannot benefit from our general \(\text{ HVZK } \) transformation), we show that if the \(\text{ PCP } \) is \(\text{ HVZK } \) (see [47] for efficient constructions), then so is the resulting \(\text{ LIP } \); in particular, the \(\text{ HVZK } \text{ LIP } \) answer still consists of a single field element.
Proof of knowledge. In each of the above transformations, we ensure not only soundness for the \(\text{ LIP } \), but also a proof of knowledge property. Namely, it is possible to efficiently extract from a convincing affine function \(\Pi \) a witness for the underlying statement. The proof of knowledge property is then preserved in the subsequent cryptographic compilations, ultimately allowing us to establish the proof of knowledge property for the preprocessing \(\text{ SNARK } \). As discussed in Sect. 1.1, proof of knowledge is a very desirable property for preprocessing \(\text{ SNARK } \)s; for instance, it enables removing the preprocessing phase, as well as improving the complexity of the prover and verifier, via the result of [12].
Table 1 Summary of our \(\text{ LIP } \) constructions
Preprocessing \(\text{ SNARK } \)s from \(\text{ LIP } \)s
We explain how to use cryptographic tools to transform an \(\text{ LIP } \) into a corresponding preprocessing \(\text{ SNARK } \). At a high level, the challenge is to ensure that an arbitrary (yet computationally-bounded) prover behaves as if it were a linear (or affine) function. The idea, which also implicitly appears in previous constructions, is to use an encryption scheme with targeted malleability [34] for the class of affine functions: namely, an encryption scheme that “only allows affine homomorphic operations” on an encrypted plaintext (and these operations are independent of the underlying plaintexts). Intuitively, the verifier would simply encrypt each field element in the \(\text{ LIP } \) message \({\mathbf {q}}\), send the resulting ciphertexts to the prover, and have the prover homomorphically evaluate the \(\text{ LIP } \) affine function on the ciphertexts; targeted malleability ensures that malicious provers can only invoke (malicious) affine strategies.
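The honest parties' side of this compilation can be sketched with textbook Paillier encryption (toy parameters and our own naming throughout). One hedge is essential: plain Paillier is additively homomorphic but is not known to be linear-only; the linear-only property is an additional assumption on suitably modified schemes, discussed below.

```python
import random
from math import gcd, lcm

def paillier_keygen(p=999983, q=1000003):
    """Textbook Paillier with g = n+1. Toy primes for illustration only;
    real deployments use much larger parameters."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    # with g = n+1, L(g^lam mod n^2) = lam mod n, so mu = lam^{-1} mod n
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)

def enc(pk, m):
    (n,) = pk
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

def eval_affine(pk, cts, row, b):
    """Homomorphically compute Enc(b + sum_i row_i * q_i): additions of
    plaintexts are products of ciphertexts, scalings are exponentiations.
    These affine operations are the only ones a linear-only scheme permits."""
    (n,) = pk
    acc = enc(pk, b)
    for c, coef in zip(cts, row):
        acc = acc * pow(c, coef, n * n) % (n * n)
    return acc

pk, sk = paillier_keygen()
q_vec = [17, 42, 7]                        # verifier's LIP message (ring elements)
crs = [enc(pk, x) for x in q_vec]          # published as the reference string
ans = eval_affine(pk, crs, [3, 1, 5], 9)   # honest prover's affine map Pi' = [3,1,5], b = 9
assert dec(sk, ans) == 9 + 3*17 + 1*42 + 5*7
```

The designated verifier decrypts the prover's ciphertexts and runs the \(\text{ LIP } \) decision algorithm on the plaintexts; as noted below, with Paillier the \(\text{ LIP } \) is taken over the ring \(\mathbb {Z}_{\mathfrak {p}\mathfrak {q}}\) rather than a field.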
We concretize the above approach in several ways, depending on the properties of the \(\text{ LIP } \) and the exact flavor of targeted malleability; different choices will induce different properties for the resulting preprocessing \(\text{ SNARK } \). In particular, we identify natural sufficient properties that enable an \(\text{ LIP } \) to be compiled into a publicly-verifiable \(\text{ SNARK } \). We also discuss possible instantiations of the cryptographic tools, based on existing knowledge assumptions. (Recall that, in light of the negative result of [68], the use of nonstandard cryptographic assumptions seems to be justified.)
Designated-verifier preprocessing SNARKs from arbitrary LIPs. First, we show that any \(\text{ LIP } \) can be compiled into a corresponding designated-verifier preprocessing \(\text{ SNARK } \) with similar parameters. (Recall that “designated verifier” means that the verifier needs to maintain a secret verification state.) To do so, we rely on what we call linear-only encryption: an additively homomorphic encryption that is (a) semantically-secure, and (b) linear-only. The linear-only property essentially says that, given a public key \({\mathsf {pk}}\) and ciphertexts \({\mathsf {Enc}}_{{\mathsf {pk}}}(a_{1}),\dots ,{\mathsf {Enc}}_{{\mathsf {pk}}}(a_{m})\), it is infeasible to compute a new ciphertext \(c'\) in the image of \({\mathsf {Enc}}_{{\mathsf {pk}}}\), except by “knowing” \(\beta ,\alpha _{1},\dots ,\alpha _{m}\) such that \(c' \in {\mathsf {Enc}}_{{\mathsf {pk}}}(\beta +\sum _{i=1}^{m} \alpha _{i} a_{i})\). Formally, the property is captured by guaranteeing that, whenever \(A({\mathsf {pk}},{\mathsf {Enc}}_{{\mathsf {pk}}}(a_{1}),\dots ,{\mathsf {Enc}}_{{\mathsf {pk}}}(a_{m}))\) produces valid ciphertexts \((c'_{1} ,\dots ,c'_{k})\), an efficient extractor \(E\) (non-uniformly depending on \(A\)) can extract a corresponding affine function \(\Pi \) “explaining” the ciphertexts. As a candidate for such an encryption scheme, we propose variants of Paillier encryption [88] (as also considered in [56]) and of Elgamal encryption [51] (in those cases where the plaintext is guaranteed to belong to a polynomial-size set, so that decryption can be done efficiently). These variants are “sparsified” versions of their standard counterparts; concretely, a ciphertext does not only include \({\mathsf {Enc}}_{{\mathsf {pk}}}(a)\), but also \({\mathsf {Enc}}_{{\mathsf {pk}}}(\alpha \cdot a)\), for a secret field element \(\alpha \). (This “sparsification” follows a pattern found in many constructions conjectured to satisfy “knowledge-of-exponent” assumptions.) 
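The role of the sparsification can be seen in a deliberately oversimplified toy model (our own; it ignores hiding entirely and keeps the pairs in the clear): a prover limited to known affine combinations of published pairs \((a_{i}, \alpha \cdot a_{i})\) necessarily preserves the relation between the two slots, which the verifier can test using the secret \(\alpha \).

```python
import random

p = 2**31 - 1                    # toy plaintext modulus
alpha = random.randrange(1, p)   # verifier's secret sparsification element

def sparsify(a):
    """A toy 'ciphertext': the pair (a, alpha*a). Hiding is ignored here;
    only the consistency invariant of the sparsified scheme is shown."""
    return (a % p, a * alpha % p)

def affine_eval(pairs, coeffs):
    """The prover's only allowed operation: a known affine combination of
    the published pairs, applied identically to both slots."""
    s = t = 0
    for (x, y), c in zip(pairs, coeffs):
        s = (s + c * x) % p
        t = (t + c * y) % p
    return (s, t)

def verifier_accepts(pair):
    # the verifier knows alpha and checks the pair still lies on y = alpha*x
    x, y = pair
    return y == x * alpha % p

q_vec = [5, 11, 23]                                  # verifier's LIP message
pairs = [sparsify(x) for x in q_vec] + [sparsify(1)]  # the pair for 1 allows an affine shift
assert verifier_accepts(affine_eval(pairs, [2, 0, 7, 13]))   # honest affine prover
assert not verifier_accepts((99, 1))                 # an unrelated pair fails w.h.p.
```

Producing a valid pair by any route other than an affine combination of the published ones requires, in effect, guessing the secret \(\alpha \); this is the intuition that the knowledge-of-exponent-style assumptions formalize.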
As for Paillier encryption, we have to consider \(\text{ LIP } \)s over the ring \(\mathbb {Z}_{\mathfrak {p}\mathfrak {q}}\) (instead of a finite field \(\mathbb {F}\)); essentially, the same results also hold in this setting (except that soundness is \(O(1/\min \left\{ \mathfrak {p},\mathfrak {q}\right\} )\) instead of \(O(1/|\mathbb {F}|)\)).
We also consider a notion of targeted malleability, weaker than linear-only encryption, that is closer to the definition template of Boneh et al. [34]. In such a notion, the extractor is replaced by a simulator. Relying on this weaker variant, we are only able to prove the security of our preprocessing \(\text{ SNARK } \)s against non-adaptive choices of statements (and still prove soundness, though not proof of knowledge, if the simulator is allowed to be inefficient). Nonetheless, for natural instantiations, even adaptive security seems likely to hold for our construction, but we do not know how to prove it. One advantage of working with this weaker variant is that it seems to allow for more efficient candidate constructions. Concretely, the linear-only property rules out any encryption scheme where ciphertexts can be sampled obliviously; the weaker notion does not, and thus allows for shorter ciphertexts. For example, we can consider a standard (“non-sparsified”) version of Paillier encryption. We will get back to this point in Sect. 1.3.3.
For further details on the above transformations, see Sect. 6.1.
Publicly-verifiable preprocessing SNARKs from LIPs with low-degree verifiers. Next, we identify properties of \(\text{ LIP } \)s that are sufficient for a transformation to publicly-verifiable preprocessing \(\text{ SNARK } \)s. Note that, if we aim for public verifiability, we cannot use semantically-secure encryption to encode the message of the \(\text{ LIP } \) verifier, because we need to “publicly test” (without decryption) certain properties of the plaintext underlying the prover’s response. The idea, implicit in previous publicly-verifiable preprocessing \(\text{ SNARK } \) constructions, is to use linear-only encodings (rather than encryption) that do allow such public tests, while still providing certain one-wayness properties. When using such encodings with an \(\text{ LIP } \), however, it must be the case that the public tests support evaluating the decision algorithm of the \(\text{ LIP } \) and, moreover, the \(\text{ LIP } \) remains secure despite some “leakage” on the queries. We show that \(\text{ LIP } \)s with low-degree verifiers (which we call algebraic \(\text{ LIP } \)s), combined with appropriate one-way encodings, suffice for this purpose.
More concretely, like [56, 64, 78], we consider candidate encodings in bilinear groups under similar knowledge-of-exponent and computational Diffie–Hellman assumptions; for such encoding instantiations, we must start with an \(\text{ LIP } \) where the degree \(d_{D}\) of the decision algorithm \(D_{\scriptscriptstyle {\mathsf {LIP}}}\) is at most quadratic. (If we had multilinear maps supporting higher-degree polynomials, we could support higher values of \(d_{D}\).) In addition to \(d_{D}\le 2\), to ensure security even in the presence of certain one-way leakage, we need the query algorithm \(Q_{\scriptscriptstyle {\mathsf {LIP}}}\) to be of polynomial degree.
Both of the \(\text{ LIP } \) constructions from \(\text{ LPCP } \)s described in Sect. 1.3.1 satisfy these requirements. When combined with the above transformation, these \(\text{ LIP } \) constructions imply new constructions of publicly-verifiable preprocessing \(\text{ SNARK } \)s, one of which can be seen as a simplification of the construction of [64] and the other as a reinterpretation (and slight simplification) of the construction of [56].
For more details, see Sect. 6.2.
Zero knowledge. In all aforementioned transformations to preprocessing \(\text{ SNARK } \)s, if we start with an \(\text{ HVZK } \text{ LIP } \) (such as those mentioned in Sect. 1.3.1) and additionally require a rerandomization property for the linear-only encryption/encoding (which is available in all of the candidate instantiations we consider), we obtain preprocessing \(\text{ SNARK } \)s that are (perfect) zero-knowledge in the CRS model. In addition, for the case of publicly-verifiable (perfect) zero-knowledge preprocessing \(\text{ SNARK } \)s, the CRS can be tested, so that (similarly to previous works [56, 64, 78]) we also obtain succinct ZAPs. See Sect. 6.3.
New Efficiency Features for \(\text{ SNARK } \)s
We obtain the following improvements in communication complexity for preprocessing \(\text{ SNARK } \)s.
“Single-ciphertext preprocessing SNARKs”. If we combine the \(\text{ LIP } \)s that we obtained from traditional \(\text{ PCP } \)s (where the prover returns only a single field element) with “non-sparsified” Paillier encryption, we obtain (non-adaptive) preprocessing \(\text{ SNARK } \)s that consist of a single Paillier ciphertext. Moreover, when using the query-efficient \(\text{ PCP } \) from [69] as the underlying \(\text{ PCP } \), even a standard-size Paillier ciphertext (with plaintext group \(\mathbb {Z}_{\mathfrak {p}\mathfrak {q}}\) where \(\mathfrak {p},\mathfrak {q}\) are 512-bit primes) suffices for achieving soundness error \(2^{-\lambda }\) with \(\lambda =100\). (For the case of [69], due to the queries’ dependence on the input, the reference string of the \(\text{ SNARK } \) also depends on the input.) Alternatively, using the sparsified version of Paillier encryption, we can also get security against adaptively-chosen statements with only two Paillier ciphertexts.
Optimal succinctness. A fundamental question about succinct arguments is how low we can push communication complexity. More accurately: what is the optimal tradeoff between communication complexity and soundness? Ideally, we would want succinct arguments that are optimally succinct: to achieve \(2^{-\Omega (\lambda )}\) soundness against \(2^{O(\lambda )}\)-bounded provers, the proof should be \(O(\lambda )\) bits long.
In several existing constructions of succinct arguments, to provide \(2^{-\Omega (\lambda )}\) soundness against \(2^{O(\lambda )}\)-bounded provers, the prover has to communicate \(\omega (\lambda )\) bits to the verifier. Concretely, \(\text{ PCP } \)-based (and \(\text{ MIP } \)-based) solutions require \(\Omega (\lambda ^{3})\) bits of communication. This also holds for preprocessing \(\text{ SNARK } \)s based on Paillier encryption, which suffer from subexponential-time attacks. In the case of pairing-based solutions, subexponential-time attacks are not known to be inherent (this applies to the base groups, relevant to SNARK constructions, rather than the target group).
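One standard back-of-the-envelope accounting (ours, for intuition only) of where the \(\Omega (\lambda ^{3})\) bound comes from in \(\text{ PCP } \)-based arguments: achieving \(2^{-\Omega (\lambda )}\) soundness requires \(O(\lambda )\) \(\text{ PCP } \) queries, each of which is authenticated by a Merkle path of, in general, \(O(\lambda )\) hash values, each \(O(\lambda )\) bits long to resist \(2^{O(\lambda )}\)-time collision finders:

\[
\underbrace{O(\lambda )}_{\text{queries}} \times \underbrace{O(\lambda )}_{\text{path length}} \times \underbrace{O(\lambda )}_{\text{bits per hash}} \;=\; O(\lambda ^{3}) \text{ bits.}
\]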
Following our approach, any candidate linear-only homomorphic encryption scheme that does not suffer from subexponential-time attacks would yield further instantiations of preprocessing \(\text{ SNARK } \)s that are optimally succinct. Currently, the only known such candidate is Elgamal encryption (say, in appropriate elliptic curve groups) [89]. However, the problem with using Elgamal decryption in our approach is that it requires computing discrete logarithms.
One way to overcome this problem is to ensure that honest proofs are always decrypted to a known polynomial-size set. This can be done by taking the \(\text{ LIP } \) to be over a field \(\mathbb {F}_{\mathfrak {p}}\) of only polynomial size, and ensuring that any honest proof \(\varvec{\pi }\) has small \(\ell _{1}\)-norm \(\Vert \varvec{\pi }\Vert _{1}\), so that in particular, the prover’s answer is taken from a set of size at most \(\Vert \varvec{\pi }\Vert _{1} \cdot \mathfrak {p}\). For example, in the two \(\text{ LPCP } \)-based constructions described in Sect. 1.3.1, this norm is \(O(s^{2})\) and \(O(s)\), respectively, for a circuit of size \(s\). This approach, however, has two caveats: the soundness of the underlying \(\text{ LIP } \) is only \(1/{\mathrm {poly}}(\lambda )\) and moreover, the verifier’s running time is proportional to \(s\), and not independent of it, as we usually require. With such an \(\text{ LIP } \), we would be able to directly use Elgamal encryption because linear tests on the plaintexts can be carried out “in the exponent,” without having to take discrete logarithms.
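The role of the small \(\ell _{1}\)-norm can be sketched as follows (a toy Python illustration of ours, over a small insecure group): with “lifted” Elgamal, the prover's linear evaluation happens in the exponent, and the verifier decrypts by brute-forcing a discrete logarithm over the known range of size \(\Vert \varvec{\pi }\Vert _{1} \cdot \mathfrak {p}\).

```python
import random

P, g = 101, 2                      # toy group: 2 generates Z_101^* (order 100)
x = random.randrange(1, 100)       # designated verifier's secret key
h = pow(g, x, P)

def enc(m):
    """'Lifted' Elgamal: encrypt g^m, so ciphertexts add in the exponent."""
    r = random.randrange(1, 100)
    return (pow(g, r, P), pow(g, m, P) * pow(h, r, P) % P)

def add(c, d):
    return (c[0] * d[0] % P, c[1] * d[1] % P)

def scale(c, k):
    return (pow(c[0], k, P), pow(c[1], k, P))

def dec_small(c, bound):
    """Decrypt by brute-force discrete log over a known polynomial-size range."""
    gm = c[1] * pow(c[0], -x, P) % P
    for t in range(bound):
        if pow(g, t, P) == gm:
            return t
    raise ValueError("plaintext outside expected range")

# Encrypted LPCP queries; the prover's proof vector pi has small l1-norm,
# so the honest answer <queries, pi> lies in a small, known range.
queries, pi = [4, 5], [2, 3]
cts = [enc(a) for a in queries]
resp = (1, 1)                      # homomorphic identity
for c, k in zip(cts, pi):
    resp = add(resp, scale(c, k))
assert dec_small(resp, 100) == 23  # <queries, pi> = 4*2 + 5*3
```

The brute-force loop is exactly why the plaintext must come from a polynomial-size set; with a full-size field, `dec_small` would be infeasible.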
Finally, a rather generic approach for obtaining “almost-optimal succinctness” is to use (linear-only) Elgamal encryption in conjunction with any linear homomorphic encryption scheme (perhaps not having the linear-only property) that is sufficiently secure. Concretely, the verifier sends his \(\text{ LIP } \) message encrypted under both encryption schemes, and then the prover homomorphically evaluates the affine function on both. The additional ciphertext can be efficiently decrypted, and can assist in the decryption of the Elgamal ciphertext. For example, there are encryption schemes based on Ring-LWE [79] that are conjectured to have quasiexponential security; by using these in the approach we just discussed, we can obtain \(2^{-\Omega (\lambda )}\) soundness against \(2^{O(\lambda )}\)-bounded provers with \(\widetilde{O}(\lambda )\) bits of communication.
Strong knowledge and reusability. Designated-verifier \(\text{ SNARK } \)s typically suffer from a problem known as the verifier rejection problem: security is compromised if the prover can learn the verifier’s responses to multiple adaptively-chosen statements and proofs. For example, the \(\text{ PCP } \)-based (or \(\text{ MIP } \)-based) \(\text{ SNARK } \)s of [8, 10, 11, 46, 60] suffer from the verifier rejection problem because a prover can adaptively learn the encrypted \(\text{ PCP } \) (or \(\text{ MIP } \)) queries, by feeding different statements and proofs to the verifier and learning his responses, and since the secrecy of these queries is crucial, security is lost.
Of course, one way to avoid the verifier rejection problem is to generate a new reference string for each statement and proof. Indeed, this is an attractive solution for the aforementioned \(\text{ SNARK } \)s because generating a new reference string is very cheap: it costs \({\mathrm {poly}}(\lambda )\). However, for a designated-verifier preprocessing \(\text{ SNARK } \), generating a new reference string is not cheap at all, and being able to reuse the same reference string across an unbounded number of adaptively-chosen statements and proofs is a very desirable property.
A property that is satisfied by all algebraic \(\text{ LIP } \)s (including the \(\text{ LPCP } \)-based \(\text{ LIP } \)s discussed in Sect. 1.3.1), which we call strong knowledge, is that such attacks are impossible. Specifically, for such \(\text{ LIP } \)s, every prover either makes the verifier accept with probability 1 or with probability at most \(O({\mathrm {poly}}(\lambda )/|\mathbb {F}|)\). (In Sect. 9, we also show that traditional “unstructured” \(\text{ PCP } \)s cannot satisfy this property.) Given \(\text{ LIP } \)s with strong knowledge, it seems possible to construct designated-verifier \(\text{ SNARK } \)s with a reusable reference string. Formalizing the connection between strong knowledge and reference-string reusability actually requires notions of linear-only encryption that are somewhat more delicate than those we have considered so far. See Sect. 9 for further details.
Previous Structured PCPs
Ishai et al. [72] proposed the idea of constructing argument systems with nontrivial efficiency properties by using “structured” \(\text{ PCP } \)s and cryptographic primitives with homomorphic properties, rather than (as in previous approaches) “unstructured” polynomial-size \(\text{ PCP } \)s and collision-resistant hashing. We have shown how to apply this basic approach in order to obtain succinct non-interactive arguments with preprocessing. We now compare our work to other works that have also followed the basic approach of [72].
Strong vs. weak linear PCPs. Both in our work and in [72], the notion of a “structured” \(\text{ PCP } \) is taken to be a linear \(\text{ PCP } \). However, the notion of a linear \(\text{ PCP } \) used in our work does not coincide with the one used in [72]. Indeed, there are two ways in which one can formalize the intuitive notion of a linear \(\text{ PCP } \). Specifically:
-
A strong linear \(\text{ PCP } \) is a \(\text{ PCP } \) in which the honest proof oracle is guaranteed to be a linear function, and soundness is required to hold for all (including nonlinear) proof oracles.
-
A weak linear \(\text{ PCP } \) is a \(\text{ PCP } \) in which the honest proof oracle is guaranteed to be a linear function, and soundness is required to hold only for linear proof oracles.
In particular, a weak linear \(\text{ PCP } \) assumes an algebraically-bounded prover, while a strong linear \(\text{ PCP } \) does not. While Ishai et al. [72] considered strong linear \(\text{ PCP } \)s, in our work we are interested in studying algebraically-bounded provers, and thus consider weak linear \(\text{ PCP } \)s.
Arguments from strong linear PCPs. Ishai et al. [72] constructed a four-message argument system for \({\mathsf {NP}}\) in which the prover-to-verifier communication is short (i.e., an argument with a laconic prover [67]) by combining a strong linear \(\text{ PCP } \) and (standard) linear homomorphic encryption; they also showed how to extend their approach to “balance” the communication between the prover and verifier and obtain an \(O(1/\varepsilon )\)-message argument system for \({\mathsf {NP}}\) with \(O(n^{\varepsilon })\) communication complexity. Let us briefly compare their work with ours.
First, in this paper we focus on the non-interactive setting, while Ishai et al. focused on the interactive setting. In particular, in light of the negative result of Gentry and Wichs [68], this means that the use of non-standard assumptions in our setting (such as linear targeted malleability) may be justified; in contrast, Ishai et al. only relied on the standard semantic security of linear homomorphic encryption (and did not rely on linear targeted malleability properties). Second, we focus on constructing (non-interactive) succinct arguments, while Ishai et al. focus on constructing arguments with a laconic prover. Third, by relying on weak linear \(\text{ PCP } \)s (instead of strong linear \(\text{ PCP } \)s) we do not need to perform (explicitly or implicitly) linearity testing, while Ishai et al. do. Intuitively, this is because we rely on the assumption of linear targeted malleability, which ensures that a prover is algebraically bounded (in fact, in our case, linear); not having to perform proximity testing is crucial for preserving the algebraic properties of a linear \(\text{ PCP } \) (and thus, e.g., obtain public verifiability) and obtaining \(O({\mathrm {poly}}(\lambda )/|\mathbb {F}|)\) soundness with only a constant number of encrypted/encoded group elements. (Recall that linearity testing only guarantees constant soundness with a constant number of queries.)
Turning to computational efficiency, while their basic protocol does not provide the verifier with any saving in computation, Ishai et al. noted that their protocol actually yields a batching argument: namely, an argument in which, in order to simultaneously verify the correct evaluation of \(\ell \) circuits of size S, the verifier may run in time S (i.e., in time \(S / \ell \) per circuit evaluation). In fact, a set of works [94, 95, 97, 98] has improved upon, optimized, and implemented the batching argument of Ishai et al. [72] for the purpose of verifiable delegation of computation.
Finally, [94] have also observed that \(\text{ QSP } \)s can be used to construct weak linear \(\text{ PCP } \)s; while we compile weak linear \(\text{ PCP } \)s into \(\text{ LIP } \)s, [94] (as in previous work) compile weak linear \(\text{ PCP } \)s into strong ones. Indeed, note that a weak linear \(\text{ PCP } \) can always be compiled into a corresponding strong one, by letting the verifier additionally perform linearity testing and self-correction; this compilation does not affect proof length, increases query complexity by only a constant multiplicative factor, and guarantees constant soundness.
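The weak-to-strong compilation just described can be sketched in a few lines of toy Python (our own illustration, over a small prime field): the verifier first runs the BLR linearity test on the proof oracle, and then issues each real query through self-correction.

```python
import random

p, n = 101, 4                      # toy prime field and query length

def rand_vec():
    return [random.randrange(p) for _ in range(n)]

def vadd(u, v):
    return [(a + b) % p for a, b in zip(u, v)]

def blr_test(oracle, trials=20):
    """BLR linearity test: any linear oracle satisfies f(x) + f(y) = f(x + y)."""
    return all(
        (oracle(x) + oracle(y)) % p == oracle(vadd(x, y))
        for x, y in ((rand_vec(), rand_vec()) for _ in range(trials))
    )

def self_correct(oracle, q):
    """Read a near-linear oracle at q through a random shift (self-correction)."""
    r = rand_vec()
    return (oracle(vadd(q, r)) - oracle(r)) % p

pi = [3, 1, 4, 1]
linear = lambda x: sum(a * b for a, b in zip(pi, x)) % p
affine = lambda x: (1 + linear(x)) % p   # not linear: fails every BLR check

assert blr_test(linear)
assert not blr_test(affine)
assert self_correct(linear, [1, 0, 0, 0]) == 3
```

As the text notes, this compilation leaves the proof length unchanged and only multiplies the query complexity by a constant, but the BLR test guarantees only constant soundness with a constant number of queries.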
Remark 1.1
The notions of (strong or weak) linear \(\text{ PCP } \) discussed above should not be confused with the (unrelated) notion of a linear PCP of Proximity (linear PCPP) [31, 80], which we now recall for the purpose of comparison.
Given a field \(\mathbb {F}\), an \(\mathbb {F}\)-linear circuit [100] is an \(\mathbb {F}\)-arithmetic circuit \(C:\mathbb {F}^{h} \rightarrow \mathbb {F}^{\ell }\) in which every gate computes an \(\mathbb {F}\)-linear combination of its inputs; its kernel, denoted \({\mathrm {ker}}(C)\), is the set of all \(w\in \mathbb {F}^{h}\) for which \(C(w)=0^{\ell }\). A linear PCPP for a field \(\mathbb {F}\) is an oracle machine V with the following properties: (1) V takes as input an \(\mathbb {F}\)-linear circuit \(C\) and has oracle access to a vector \(w\in \mathbb {F}^{h}\) and an auxiliary vector \(\pi \) of elements in \(\mathbb {F}\), (2) if \(w\in {\mathrm {ker}}(C)\) then there exists \(\pi \) so that \(V^{w,\pi }(C)\) accepts with probability 1, and (3) if \(w\) is far from \({\mathrm {ker}}(C)\) then \(V^{w,\pi }(C)\) rejects with high probability for every \(\pi \).
Thus, a linear PCPP is a proximity tester for the kernels of linear circuits (which are not universal), while a (strong or weak) linear \(\text{ PCP } \) is a \(\text{ PCP } \) in which the proof oracle is a linear function.
Related and Subsequent Work
In this section we include a more detailed comparison with the work of Gennaro et al. [56] (GGPR), which is the most closely related to the current work, as well as some subsequent works in this area.
Comparison with GGPR. Our work can be seen as providing a conceptually simple general methodology that not only captures close variants of the SNARKs from GGPR (as well as earlier SNARKs from [64, 78]), but can also be instantiated in other useful ways. In more detail, GGPR consider the QSP constraint satisfaction problem and show how to compile it directly into a SNARK. This is similar to the previous works of Groth [64] and Lipmaa [78], except that the QSP representation is quadratically more efficient. In contrast, our starting point is a linear interactive proof—a new kind of probabilistic information-theoretic proof system, which we show how to build from any (classical or linear) PCP. Only then do we compile such LIPs into SNARKs. The LIP abstraction also admits a natural zero-knowledge variant, which in the QSP-based approach is part of the cryptographic compiler. When using a LIP based on the QSP construction of GGPR, we end up with a slightly different SNARK from that of GGPR, which is in fact slightly less succinct (8 vs. 7 bilinear group elements). Indeed, the GGPR construction makes an additional optimization thanks to compiling QSPs directly.
Whereas QSPs (as well as their arithmetic QAP variant) are tied to polynomials and to quadratic verification, the linear PCP and LIP primitives are more general. GGPR-style linear PCPs still give the best efficiency for most applications; however, other linear PCPs have proven useful in this work and in subsequent works [7, 22, 87]. For example, we show that a LIP based on the Hadamard linear PCP, which is not captured by a QSP, yields a very simple SNARK construction with quadratic CRS size. The single-query linear PCP (or LIP) we obtain from a classical PCP, which serves as the basis for “single-ciphertext SNARKs,” is also not captured by a QSP. Applications in subsequent works are discussed below.
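To illustrate the Hadamard linear PCP's simplicity, here is a toy Python sketch (ours) of its core consistency check: the honest proof is \(\varvec{\pi }=(w, w\otimes w)\), and two random query vectors suffice to test that the second part really is the tensor square of the first. (The full verifier also checks the circuit's constraints against these values; we omit that here.)

```python
import random

p = 101  # toy prime field

def hadamard_proof(w):
    """Honest Hadamard LPCP proof: w followed by its tensor square w (x) w."""
    return w + [a * b % p for a in w for b in w]

def ip(u, v):
    return sum(a * b for a, b in zip(u, v)) % p

def tensor_check(pi, n):
    """Check <pi_1, r1> * <pi_1, r2> = <pi_2, r1 (x) r2> for random r1, r2."""
    r1 = [random.randrange(p) for _ in range(n)]
    r2 = [random.randrange(p) for _ in range(n)]
    r12 = [a * b % p for a in r1 for b in r2]
    return ip(pi[:n], r1) * ip(pi[:n], r2) % p == ip(pi[n:], r12)

w = [2, 7, 5]
pi = hadamard_proof(w)
assert tensor_check(pi, len(w))              # honest proofs always pass

bad = list(pi)
bad[len(w)] = (bad[len(w)] + 1) % p          # corrupt one entry of w (x) w
assert any(not tensor_check(bad, len(w)) for _ in range(30))
```

Note the quadratic proof length \(n+n^{2}\), which is what translates into the quadratic CRS size mentioned above.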
Subsequent developments. An influential work of Groth [65], building on a 2-element LIP implicit in [45], obtained a (publicly verifiable) SNARK requiring only 3 bilinear group elements (or roughly 1000 bits), and left open the possibility of a SNARK with 2 group elements. The latter would follow from a LIP with a linear decision procedure. However, the existence of such a LIP was ruled out in [65], settling an open question posed in the conference version of this work.
These barriers from [65] were recently circumvented in [22] by relaxing either the soundness or the completeness requirement. Settling for inverse-polynomial soundness, practical designated-verifier SNARKs for small circuits with only 2 group elements were obtained by applying a variant of the packing transformation from this work to the Hadamard PCP. Moreover, a 1-element LIP with a linear decision procedure, negligible soundness error, and non-negligible (but sub-constant) completeness error follows from the \({\mathsf {NP}}\)-hardness of approximating a problem related to linear codes, implying 2-element laconic arguments for \({\mathsf {NP}}\) with negligible soundness error and sub-constant completeness error. Finally, a plausible (but as yet unproven) hardness-of-approximation result would imply a 1-element laconic argument with predictable answers, which would in turn imply witness encryption [57].
Several other kinds of “linear” probabilistic proof systems in the spirit of LIP were used in subsequent works. For instance, a variant of LIP was used in [13] to obtain sublinear-communication arguments for arithmetic circuits in which the prover runs in linear time. Fully linear proof systems, where linear queries apply jointly to the input and the proof vector, were used for sublinear zero-knowledge proofs on secret-shared data and information-theoretic secure multiparty computation [7].
We refer the reader to Thaler’s recent survey [99] for an overview of SNARKs based on Linear PCP (Chapter 14) and comparison to other approaches to practical arguments (Chapter 15). Earlier expositions appear in [7, Section 2], [18, Section 5], and [73].
Organization
In Sect. 2, we introduce the notions of \(\text{ LPCP } \)s and \(\text{ LIP } \)s. In Sect. 3, we present our transformations for constructing \(\text{ LIP } \)s from several notions of \(\text{ PCP } \)s. In Sect. 4, we give the basic definitions for preprocessing \(\text{ SNARK } \)s. In Sect. 5, we define the relevant notions of linear targeted malleability, as well as candidate constructions for these. In Sect. 6, we present our transformations from \(\text{ LIP } \)s to preprocessing \(\text{ SNARK } \)s. In Sect. 7, we discuss two constructions of algebraic \(\text{ LPCP } \)s. In Sect. 8, we present our general transformation to obtain \(\text{ HVZK } \) for \(\text{ LPCP } \)s with low-degree decision algorithms. In Sect. 9, we discuss the notion of strong knowledge and its connection to designated-verifier \(\text{ SNARK } \)s with a reusable reference string.