Abstract
A succinct functional commitment (SFC) scheme for a circuit class \(\mathbf {CC}\) enables, for any circuit \(\mathcal {C}\in \mathbf {CC}\), the committer to first succinctly commit to a vector \(\varvec{\alpha }\), and later succinctly open the commitment to \(\mathcal {C}(\varvec{\alpha }, \varvec{\beta })\), where the verifier chooses \(\varvec{\beta }\) at the time of opening. Unfortunately, SFC schemes are known only for severely limited function classes, such as the class of inner products. By making non-black-box use of SNARK-construction techniques, we propose an SFC scheme for the large class of semi-sparse polynomials. The new SFC scheme can be used to, say, efficiently (1) implement sparse polynomials, and (2) aggregate various interesting SFC schemes (e.g., vector commitments and polynomial commitments). The new scheme is evaluation-binding under a new instantiation of the computational uber-assumption. We provide a thorough analysis of the new assumption.
1 Introduction
A succinct functional commitment (SFC) scheme [29] for a circuit class \(\mathbf {CC}\) enables the committer, for any \(\mathcal {C}\in \mathbf {CC}\), to first commit to a vector \(\varvec{\alpha }\), and later open the commitment to \(\mathcal {C}(\varvec{\alpha }, \varvec{\beta })\), where the verifier chooses \(\varvec{\beta }\) at the time of opening. An SFC scheme must be evaluation-binding (given a commitment, it is intractable to open it to both \(\varvec{\mathsf {\xi }} = \mathcal {C}(\varvec{\alpha }, \varvec{\beta })\) and \(\varvec{\mathsf {\xi }}' = \mathcal {C}(\varvec{\alpha }, \varvec{\beta })\) for \(\varvec{\mathsf {\xi }} \ne \varvec{\mathsf {\xi }}'\)) and hiding (a commitment and possibly many openings should not reveal any additional information about \(\varvec{\alpha }\)). Succinctness means that both the commitment and the opening have length \(\mathsf {polylog} (|\varvec{\alpha }|, |\varvec{\beta }|)\).
In particular, in an SFC scheme for inner products (SIPFC), \(\mathcal {C}\) computes the inner product \((\varvec{\alpha }, \varvec{\beta }) \rightarrow \left\langle \varvec{\alpha }, \varvec{\beta } \right\rangle \) [25, 29, 30]. As explained in [29], one can use an SIPFC scheme to construct succinct vector commitment schemes [12], polynomial commitment schemes [27], and accumulators [5]. Each of these primitives has a large number of independent applications. Succinct polynomial commitment schemes have recently become very popular since they can be used to construct (updatable) SNARKs [15, 35, 40, 41] (a direction somewhat opposite to the one we will pursue in the current paper). Since, in several applications (e.g., in cryptocurrencies [38]), one has to run many instances of SFC in parallel, there is a recent surge of interest in aggregatable SFC schemes [8, 9, 22, 28, 38]. All mentioned papers propose succinct FC schemes for limited functionalities.
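To make the reduction from [29] concrete: evaluating a committed polynomial at a verifier-chosen point is itself an inner product of the coefficient vector with the vector of powers of the point, which is why an SIPFC yields a polynomial commitment scheme. A minimal sketch over a toy prime field (function names and parameters are ours, for illustration only):

```python
# Sketch (ours): polynomial evaluation as an inner product, the core of the
# IPFC-to-polynomial-commitment reduction from [29].
p = 101  # toy prime field modulus

def inner_product(alpha, beta_vec):
    # <alpha, beta_vec> mod p
    return sum(a * b for a, b in zip(alpha, beta_vec)) % p

def poly_eval_via_ip(coeffs, beta):
    # p(beta) = <coeffs, (1, beta, beta^2, ...)> mod p
    powers = [pow(beta, i, p) for i in range(len(coeffs))]
    return inner_product(coeffs, powers)

coeffs = [3, 0, 5]  # represents p(X) = 3 + 5*X^2
assert poly_eval_via_ip(coeffs, 2) == (3 + 5 * 4) % p  # p(2) = 23
```

Committing to `coeffs` with an SIPFC and opening at the power vector of \(\varvec{\beta }\) thus opens the commitment to the polynomial's evaluation.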
Since there are no prior SFC schemes for broader classes of functions, there is a large gap between the function classes for which an SFC scheme is known and the class of all efficiently (e.g., by poly-size arithmetic circuits) verifiable functions. Filling a similar gap is notoriously hard in the case of related primitives like functional encryption, homomorphic encryption, and NIZK. A natural question to ask is whether something similar holds in the case of functional commitment.
It is easy to construct an SFC for all poly-size circuits under non-falsifiable assumptions: given a commitment to \(\varvec{\alpha }\), the opening consists of a SNARK argument [20, 23, 31] that \(\mathcal {C}(\varvec{\alpha }, \varvec{\beta }) = \varvec{\mathsf {\xi }}\). However, while non-falsifiable assumptions are required to construct SNARKs [21], they are not needed in the case of SFC schemes. Thus, just using a SNARK as a black box is not a satisfactory solution.
Moreover, since one can construct non-succinct NIZK for \(\mathbf {NP}\) from falsifiable assumptions, one can construct a non-succinct FC (nSFC) from a non-succinct NIZK. Bitansky [6] pursued this approach, proposing an nSFC, for all circuits, that uses NIZK as a black box. By using NIWIs in a non-black-box manner, Bitansky proposed another, non-trivial, nSFC scheme that does not achieve zero-knowledge but does not require the CRS model. Alternatively, consider the FC scheme where the commitment consists of fully-homomorphic encryptions \(C_i\) of the individual coefficients \(\alpha _i\), and the opening is the randomizer R of the evaluation of the circuit \(\mathcal {C}\) on them. The verifier can re-evaluate the circuit on the \(C_i\) and her input, and then check that the result is equal to an encryption of \(\varvec{\mathsf {\xi }}\) under the randomizer R. However, the resulting FC is not succinct since one has to encrypt all \(\alpha _i\) individually.
Thus, the main question is to construct succinct FC schemes, under falsifiable assumptions, for a wide variety of functionalities.
Our Contributions. We propose a falsifiable SFC scheme \(\mathsf {FC^{}_{sn}}\) for the class of semi-sparse polynomials \(\mathbf {CC}= \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\), whose correct computation can be verified by using an arbitrary polynomial-size arithmetic circuit that is "compilable" according to the definition given a few paragraphs below. Notably, \(\mathsf {FC^{}_{sn}}\) allows one to efficiently aggregate various SFC schemes, e.g., vector commitments with inner-product commitments and polynomial commitments. We analyze the power of \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) by using techniques from algebraic complexity theory; the name of the class will be explained in Sect. 4.
We prove that \(\mathsf {FC^{}_{sn}}\) is secure under a new falsifiable assumption (the computational span-uber-assumption in a group \(\mathbb {G}_1\)) that is reminiscent of the well-known computational uber-assumption in \(\mathbb {G}_1\). We then thoroughly analyze the security of the new assumption.
Our Techniques. Next, we provide a high-level overview of our technical contributions. The construction of \(\mathsf {FC^{}_{sn}}\) consists of the following steps.

1. Compilation of the original circuit \(\mathcal {C}\), computing the fixed function \(\varvec{\mathcal {F}} \in \mathbf {CC}\), to a circuit \(\mathcal {C}^*\) consisting of four public subcircuits.
2. Representation of \(\mathcal {C}^*\) in the QAP language, which SNARKs usually use.
3. Construction of an SFC for the QAP representation, by using SNARK techniques in a non-black-box way.
Next we describe these steps in detail.
Circuit Compilation. Let \(\mathcal {C}\) be a polynomial-size arithmetic circuit that, on input \((\varvec{\alpha }, \varvec{\beta })\), outputs \(\varvec{\mathsf {\xi }} = \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }) = (\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }))_{i = 1}^\kappa \). Here, the committer's input \(\varvec{\alpha }\) is secret, and the verifier's input \(\varvec{\beta }\) is public. We modify the circuit \(\mathcal {C}\) to a compiled circuit \(\mathcal {C}^*\), see Fig. 1, that consists of the subcircuits \(\mathcal {C}_\phi \), \(\mathcal {C}_\psi \), \(\mathcal {C}_\chi \), and \(\mathcal {C}_\xi \). In the commitment phase, the committer uses the circuit \(\mathcal {C}_\phi \) to compute several polynomials \(\phi _i (\varvec{\alpha })\) depending only on 1 (this allows the output polynomials to have a nonzero constant term) and \(\varvec{\alpha }\). In the opening phase, the verifier sends \(\varvec{\beta }\) to the committer, who uses the circuit \(\mathcal {C}_\psi \) to compute several polynomials \(\psi _i (\varvec{\beta })\) depending on 1 and \(\varvec{\beta }\). The verifier can redo this part of the computation. After that, the committer uses the circuit \(\mathcal {C}_\chi \) to compute several polynomials \(\chi _i (\varvec{\alpha }, \varvec{\beta })\) from the inputs and outputs of \(\mathcal {C}_\phi \) and \(\mathcal {C}_\psi \). Finally, the committer uses \(\mathcal {C}_\xi \) to compute the outputs \(\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\) of \(\mathcal {C}^*\). We will explain this compilation more thoroughly in Sect. 3.
Intuitively, the compilation restricts the class of circuits in two ways. First, we add a small circuit \(\mathcal {C}_\xi \) at the top of the compiled circuit to guarantee that the R1CS representation of \(\mathcal {C}^*\) has several all-zero columns and rows, which helps us in the security reduction. This, however, does not restrict the circuit class for which the SFC is defined, and it only increases the number of gates by \(\kappa \). Second, \(\mathcal {C}_\chi \) is restricted to have multiplicative depth 1, i.e., it sums up products of polynomials in \(\varvec{\alpha }\) with polynomials in \(\varvec{\beta }\). This guarantees that, in a collision, the two accepted openings have a linear relation that does not depend on the secret data \(\varvec{\alpha }\). The latter makes it possible for the reduction to break the underlying falsifiable assumption. Thus, we are restricted to the class \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) of circuits where each output can be written as \(\sum _{i, j} \phi _i (\varvec{\alpha }) \psi _j (\varvec{\beta })\), for efficiently computable polynomials \(\phi _i\) and \(\psi _j\), where the sum is taken over a polynomial number of products.
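The \(\Sigma \Pi \forall \) shape can be sanity-checked on a toy example (ours, not the paper's formalism): the same output must be computable both by a direct circuit and as a depth-1 sum of products \(\phi _i (\varvec{\alpha }) \psi _j (\varvec{\beta })\), where each \(\phi \) depends only on the committed \(\varvec{\alpha }\) and each \(\psi \) only on the public \(\varvec{\beta }\):

```python
# Toy check (ours): output <alpha, beta> + (sum alpha_i)(sum beta_j) is in
# CC_{SigmaPiForall}, since it is a sum of products phi_i(alpha) * psi_i(beta).
p = 97  # toy prime modulus

def direct(alpha, beta):
    # evaluate the circuit directly
    return (sum(a * b for a, b in zip(alpha, beta)) +
            sum(alpha) * sum(beta)) % p

def decomposed(alpha, beta):
    # phi polynomials (alpha only) paired with psi polynomials (beta only);
    # multiplicative depth 1 in the (phi, psi) values
    phis = [a % p for a in alpha] + [sum(alpha) % p]
    psis = [b % p for b in beta] + [sum(beta) % p]
    return sum(f * s for f, s in zip(phis, psis)) % p

alpha, beta = [2, 3, 5], [7, 1, 4]
assert direct(alpha, beta) == decomposed(alpha, beta)  # both give 60 mod 97
```

The decomposition is what the compiled subcircuit \(\mathcal {C}_\chi \) consumes: it only multiplies a \(\phi \)-value by a \(\psi \)-value and sums.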
By employing tools from algebraic complexity theory, in Sect. 4, we study the class \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) of "compilable" (according to the given definition) arithmetic circuits. We say that a polynomial \(f \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) if f has a circuit that belongs to \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\). The new SFC scheme can implement f iff \(f \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\). First, we show that any sparse polynomial (over indeterminates chosen by both the committer and the verifier) f belongs to \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\). Second, we construct a non-sparse polynomial \(f \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\). This relies on a result of Ben-Or, who constructed an \(O (n^2)\)-size arithmetic circuit that simultaneously computes the dth elementary symmetric polynomial \(\sigma _d (X_1, \ldots , X_n)\) for each \(d \in [1\,..\,n]\). Third, we construct a polynomial \(f \in \mathbf {VP}\) such that \(f \not \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\), where \(\mathbf {VP}\) is the class of poly-degree polynomials that have poly-size circuits [39].
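The Ben-Or result rests on the identity that the coefficients of \(\prod _i (T + a_i)\) are exactly the elementary symmetric polynomials \(\sigma _d (a_1, \ldots , a_n)\); his \(O(n^2)\) circuit evaluates this product at fixed points and interpolates. The following sketch (ours) only verifies the underlying identity against the subset-product definition of \(\sigma _d\):

```python
# Sketch (ours): coefficients of prod_i (T + a_i) mod p, in increasing degree
# order; the coefficient of T^(n-d) equals sigma_d(a_1..a_n).
from itertools import combinations

def all_sigma(values, p):
    coeffs = [1]  # the constant polynomial 1
    for a in values:
        # multiply the current polynomial by (T + a)
        coeffs = [(a * c + prev) % p
                  for c, prev in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def sigma_direct(values, d, p):
    # sigma_d = sum over all d-element subsets of the product of the subset
    total = 0
    for comb in combinations(values, d):
        prod = 1
        for x in comb:
            prod = prod * x % p
        total = (total + prod) % p
    return total

p, vals = 97, [1, 2, 3, 4]
n = len(vals)
coeffs = all_sigma(vals, p)
for d in range(1, n + 1):
    assert coeffs[n - d] == sigma_direct(vals, d, p)
```

One pass of `all_sigma` thus produces all n symmetric polynomials simultaneously, matching the "one circuit, many outputs" flavor of the result.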
R1CS/QAP Representation. Let \(\mathcal {C}\) be an arithmetic circuit, and \(\mathcal {C}^*\) be its compilation. A circuit evaluation can be verified by checking a matrix equation, where the matrices define the circuit uniquely and reflect all the circuit constraints. SNARKs usually use QAP (Quadratic Arithmetic Program, [20]), a polynomial version of R1CS, which allows for better efficiency.
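The matrix equation in question is the R1CS constraint \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\) over the wire vector \(\varvec{\mathsf {a}}\). A toy instance (ours) with two multiplication gates:

```python
# Toy R1CS check (ours): each row encodes one multiplication gate via
# (U a)_i * (V a)_i = (W a)_i over Z_p.
p = 97

def r1cs_holds(U, V, W, a):
    def mat_vec(M):
        return [sum(m * x for m, x in zip(row, a)) % p for row in M]
    Ua, Va, Wa = mat_vec(U), mat_vec(V), mat_vec(W)
    return all((ua * va - wa) % p == 0 for ua, va, wa in zip(Ua, Va, Wa))

# gates: c = a*b and d = c*c; wire vector a = (a, b, c, d)
U = [[1, 0, 0, 0], [0, 0, 1, 0]]
V = [[0, 1, 0, 0], [0, 0, 1, 0]]
W = [[0, 0, 1, 0], [0, 0, 0, 1]]

assert r1cs_holds(U, V, W, [2, 3, 6, 36])      # consistent wire assignment
assert not r1cs_holds(U, V, W, [2, 3, 6, 35])  # wrong output wire
```

QAP interpolates the columns of U, V, W into polynomials so that all rows are checked at once via a single divisibility condition.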
Constructing the Underlying SNARK. Intuitively, we start constructing a SNARK for \(\mathcal {C}^*\) by following the approach of Groth [24], who proposed the most efficient known zk-SNARK, or more precisely, its recent modification by Lipmaa [33]. However, we modify this approach whenever it suits our goals. The new SFC inherits the efficiency of Groth's SNARK; this is the main reason why we chose it. It may be the case that SFCs constructed from less efficient SNARKs have other desirable properties, but this is outside the scope of the current paper. We chose the modified version of [33] due to its versatility: [33] explains sufficiently well how to construct a SNARK for QAP, so that it is feasible to modify its approach to suit the current paper.
The New SFC Scheme. In the SNARKs of [24, 33], the argument consists of three group elements, \(\pi = ([\mathsf {A}]_{1}, [\mathsf {B}]_{2}, [\mathsf {C}]_{1})\). (We use the bracket additive notation, see Sect. 2.) Due to our restrictions on \(\mathcal {C}^*\), both \([\mathsf {A}]_{1}\) and \([\mathsf {B}]_{2}\) can be written as sums of a non-functional commitment that depends on the secret data only and a non-functional commitment that depends on the public data only. By the public data we mean \((\varvec{\beta }, \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }))\); any other function of \(\varvec{\alpha }\) is a part of the secret data. E.g., \([\mathsf {A}]_{1} = [\mathsf {A}_s]_{1} + [\mathsf {A}_p]_{1}\), where \([\mathsf {A}_s]_{1}\) is computed by the committer before \(\varvec{\beta }\) becomes available, and \([\mathsf {A}_p]_{1}\) can be recomputed by the verifier since it only depends on the public data. However, \([\mathsf {C}]_{1} = [\mathsf {C}_{sp}]_{1} + [\mathsf {C}_p]_{1}\), where \([\mathsf {C}_p]_{1}\) depends only on public data but \([\mathsf {C}_{sp}]_{1}\) depends both on public and private data.
In the new SFC commitment scheme, the functional commitment is \(C= ([\mathsf {A}_s]_{1}, [\mathsf {B}_s]_{2})\) and the opening is \([\mathsf {C}_{sp}]_{1}\). After receiving the opening, the verifier recomputes \([\mathsf {A}_p]_{1}\), \([\mathsf {B}_p]_{2}\), and \([\mathsf {C}_p]_{1}\), and then runs the SNARK verifier on the argument \(\pi = ([\mathsf {A}_s]_{1} + [\mathsf {A}_p]_{1}, [\mathsf {B}_s]_{2} + [\mathsf {B}_p]_{2}, [\mathsf {C}_{sp}]_{1} + [\mathsf {C}_p]_{1})\). However, as we will see later, the commitment also includes auxiliary elements \([\mathsf {B}^{\mathsf {aux}}_i]_{1}\) needed to obtain an efficient security reduction.
We will denote the new SFC commitment scheme by \(\mathsf {FC^{}_{sn}}\). We denote by \(\mathsf {FC^{\mathcal {C}}_{sn}}\) its specialization to the circuit \(\mathcal {C}\).
Applications. To demonstrate the usefulness of \(\mathsf {FC^{}_{sn}}\), we will give several applications: some of them are well-known, and some are new. In all cases, the function of interest can be rewritten as a semi-sparse polynomial in \((\varvec{\alpha }, \varvec{\beta })\). Some of these examples are closely related to but still sufficiently different from IPFC. In particular, [29] showed how to use an efficient IPFC to construct SFC for polynomial commitments [27], accumulators [5], and vector commitments [12]. (See the full version [34].) We use \(\mathsf {FC^{}_{sn}}\) to construct subvector commitments [28], aggregated polynomial commitments [9, 15] (one can commit to multiple polynomials at once, each of which can be opened at a different point), and multivariate polynomial commitments [10]. Also, we outline a few seemingly new applications like aggregated inner-product (that, in particular, can be used to implement subvector commitment) and evaluation-point commitment schemes. (See the full version [34].) All described commitment schemes are succinct.
Importantly, \(\mathsf {FC^{}_{sn}}\) achieves easy aggregation in a more general sense. Let \(\mathcal {C}_i\) be some circuits for which efficient SFC schemes exist. We can then construct an efficient SFC for the circuit that consists of the sequential composition of the \(\mathcal {C}_i\)s. In particular, we can aggregate multiple polynomial commitment schemes, some vector commitment schemes, and, say, an evaluation-point commitment scheme. Some of the referred papers [8, 9, 22, 28, 38] construct aggregated commitment schemes for a concrete circuit (e.g., an aggregated polynomial commitment scheme). Importantly, \(\mathsf {FC^{}_{sn}}\) allows one to aggregate different SFC schemes.
Security. The correctness and perfect hiding proofs are straightforward. The main thing worthy of note here is that we have three definitions of hiding (com-hiding, open-hiding, and zero-knowledge, see Sect. 2). For the sake of completeness, we also give three different hiding proofs. The SFC schemes must work in the CRS model to obtain zero-knowledge. However, since zero-knowledge is stronger than the other two definitions, the proof of zero-knowledge, which follows roughly from the zero-knowledge of the related SNARK, suffices. Note that, say, [29] only considered the weakest hiding notion (com-hiding).
The evaluation-binding proof differs significantly from the knowledge-soundness proofs of SNARKs. The knowledge-soundness of SNARKs can only be proven under non-falsifiable assumptions [21]. In particular, Groth proved the knowledge-soundness of the SNARK from [24] in the generic group model, while Lipmaa [33] proved it under HAK (hash-algebraic knowledge assumption, a tautological knowledge assumption) and a known computational assumption (namely, q-PDL [31]). Such assumptions have very little in common with the assumptions we use. As expected, a knowledge-soundness proof that uses non-falsifiable assumptions has a very different flavor compared to an evaluation-binding proof that only uses falsifiable assumptions. We emphasize that it is not clear a priori that an SFC constructed from SNARKs could rely on falsifiable assumptions.
We prove the evaluation-binding of \(\mathsf {FC^{}_{sn}}\) under the new \((\mathcal {R}, \mathcal {S}, \{f_i\})\)-computational span-uber-assumption in the source group \(\mathbb {G}_1\), for concrete tuples \(\mathcal {R}\) and \(\mathcal {S}\) of polynomials with \(f_i \not \in {\text {span}}(\mathcal {R})\). This assumption states that, given a commitment key \(\mathsf {ck}= ([\varrho (\chi , y): \varrho \in \mathcal {R}]_{1}, [\sigma (\chi , y): \sigma \in \mathcal {S}]_{2})\), where \(\chi , y\) are random trapdoors, it is difficult to compute \((\varvec{\varDelta } \ne \varvec{0}, \sum _{i = 1}^{\kappa } \varDelta _i [f_i (\chi , y)]_{1})\), where \(\varvec{\varDelta }\) is adversarially chosen. (See Definition 6 for a formal definition.) Importantly, if \(\kappa = 1\), then we just have an uber-assumption in \(\mathbb {G}_1\). We show (see Theorem 2) that, for the concrete \(\mathcal {R}\) and \(f_i\), \(f_i (X, Y) \not \in {\text {span}}(\mathcal {R})\).
The full evaluation-binding proof is quite tricky and relies significantly on the structure of the matrices U, V, and W, and of the commitment key. Given a collision, we "almost" compute \((\varvec{\varDelta }, \sum \varDelta _i [f_i (\chi , y)]_{1})\), where \(\varvec{\varDelta }\) is the component-wise difference between the two claimed values of \(\varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta })\). To eliminate "almost" in the previous sentence, the committer outputs \(\kappa \) additional "helper" elements \([\mathsf {B}^{\mathsf {aux}}_i]_{1}\), where extra care has to be taken to guarantee that the helper elements can be computed given the commitment key. In both cases, to succeed, we need to assume that the matrices (U, V, W) satisfy some natural restrictions stated in the individual theorems. These restrictions are collected together in Theorem 1.
Analysis of the Span-Uber-Assumption. The span-uber-assumption is falsifiable and, thus, significantly more realistic than the non-falsifiable (knowledge) assumptions needed to prove the adaptive soundness of SNARKs. Still, it is a new assumption, and thus we have written down three different proofs that it follows from already known assumptions. (See Lemma 2 and Theorem 4, and another theorem in the full version.)
In the full version [34], we prove that the span-uber-assumption in \(\mathbb {G}_1\) holds under the known \((\mathcal {R}, \mathcal {S}, f'_i)\)-computational uber-assumption in the target group \(\mathbb {G}_T\) [7]. Here, the \(f'_i\) are different from but related to the \(f_i\). We also prove that \(f'_i \not \in {\text {span}}(\mathcal {R}\mathcal {S})\). Since \(f_i (X, Y) \not \in {\text {span}}(\mathcal {R})\) and \(f'_i (X, Y) \not \in {\text {span}}(\mathcal {R}\mathcal {S})\) (in the case of the uber-assumption in \(\mathbb {G}_T\)), we have an instantiation of the computational uber-assumption, known to be secure [7] in the generic group model.
Since the generic group model is very restrictive and has known weaknesses [16, 17] not shared by well-chosen knowledge assumptions, we will use the newer methodology of [33]. In the full version [34], we prove that if \(f_i \not \in {\text {span}}(\mathcal {R})\), then the \((\mathcal {R}, \mathcal {S}, \{f_i\})\)-computational span-uber-assumption in \(\mathbb {G}_1\) holds under a HAK and a PDL assumption. Since the uber-assumption in \(\mathbb {G}_T\) is not secure under a HAK assumption (the latter only handles the case where the adversary outputs elements in the source groups, since the target group is non-generic), this result is orthogonal to the previous result. As a corollary of independent interest, we get that if \(f_i (X, Y) \not \in {\text {span}}(\mathcal {R})\), then the uber-assumption in \(\mathbb {G}_1\) holds under a HAK and a PDL assumption.
In composite-order bilinear groups, the computational uber-assumption in \(\mathbb {G}_T\) holds under a subgroup hiding assumption [13]. Thus, due to Lemma 2, the composite-order-group span-uber-assumption (and also the new SFC) is secure under a subgroup hiding assumption. In Theorem 4, we use the Déjà Q approach of [14] to prove that the span-uber-assumption in \(\mathbb {G}_\iota \), \(\iota \in \{1, 2\}\), is secure under a subgroup hiding assumption. This proof is more direct than the reduction through an uber-assumption in \(\mathbb {G}_T\). Moreover, the Déjà Q approach is more applicable when one is working in a source group. Whether a similar reduction holds in the case of prime-order groups is an interesting open question.
Efficiency. It is difficult to provide a detailed efficiency comparison of our newly constructed scheme to all the abundant existing work in all applications. \(\mathsf {FC^{}_{sn}}\) is generic, works for a large class of circuits, and can tackle scenarios not possible with previous work; at the same time, it can also be used to solve the much simpler case of, e.g., inner products. We stress that \(\mathsf {FC^{}_{sn}}\), when straightforwardly specialized to the IPFC case, is nearly as efficient as the most efficient known prior IPFC, losing ground only in the CRS length. On the other hand, we are not aware of any previous aggregated IPFC schemes. (See the full version [34].)
This paper heavily uses a yet-unpublished paper [33] of the first author.
2 Preliminaries
If \(\mathcal {R}= (\varrho _1 (\varvec{X}), \ldots , \varrho _n (\varvec{X}))\) is a tuple of polynomials over \(\mathbb {Z}_p\) and \(\varvec{x}\) is a vector of integers, then \(\mathcal {R}(\varvec{x}) := (\varrho _1 (\varvec{x}), \ldots , \varrho _n (\varvec{x}))\). Let \(\mathbb {Z}_p^{(\le d)} [X]\) be the set of degree-\(\le d\) polynomials over \(\mathbb {Z}_p\). For a matrix U, let \(\varvec{U}_i\) be its ith row and \(\varvec{U}^{(j)}\) be its jth column. Let \(\varvec{a} \circ \varvec{b}\) denote the componentwise product of two vectors \(\varvec{a}\) and \(\varvec{b}\), \((\varvec{a} \circ \varvec{b})_i = a_i b_i\). Let \((\varvec{a}_1; \ldots ; \varvec{a}_k)\) denote the vertical concatenation of the vectors \(\varvec{a}_i\). \(\lambda \) is the security parameter, and \(1^\lambda \) denotes its unary representation. PPT denotes probabilistic polynomial-time. For an algorithm \(\mathcal {A}\), \({\text {range}}(\mathcal {A})\) is the range of \(\mathcal {A}\), i.e., the set of its valid outputs; \(\mathsf {RND}_{\lambda } (\mathcal {A})\) denotes the random tape of \(\mathcal {A}\) (assuming the given value of \(\lambda \)); and \(r \leftarrow _{\$} \mathcal {S}\) denotes the uniformly random choice of a randomizer r from the set/distribution \(\mathcal {S}\).
Interpolation. Assume \(\nu \) is a power of two, and let \(\omega \) be a primitive \(\nu \)th root of unity modulo p. Such an \(\omega \) exists, given that \(\nu \mid (p - 1)\). Then:

\(\ell (X) := \prod _{i = 1}^\nu (X - \omega ^{i - 1}) = X^\nu - 1\) is the unique degree-\(\nu \) monic polynomial such that \(\ell (\omega ^{i - 1}) = 0\) for all \(i \in [1\,..\,\nu ]\).

For \(i \in [1\,..\,\nu ]\), \(\ell _i (X)\) is the ith Lagrange basis polynomial, i.e., the unique degree-\((\nu - 1)\) polynomial s.t. \(\ell _i (\omega ^{i - 1}) = 1\) and \(\ell _i (\omega ^{j - 1}) = 0\) for \(j \ne i\). Clearly, \(\ell _i (X) := \ell (X) / (\ell ' (\omega ^{i - 1}) (X - \omega ^{i - 1})) = (X^\nu - 1) \omega ^{i - 1} / (\nu (X - \omega ^{i - 1}))\).

Moreover, \((\ell _j (\omega ^{i - 1}))_{i = 1}^\nu = \varvec{e}_j\) (the jth unit vector) and \((\ell (\omega ^{i - 1}))_{i = 1}^\nu = \varvec{0}_\nu \).
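These facts can be checked numerically with toy parameters (ours: \(p = 13\), \(\nu = 4\), \(\omega = 5\), so that \(\nu \mid (p - 1)\)):

```python
# Toy check (ours) of the Lagrange basis over the nu-th roots of unity mod p.
p, nu, omega = 13, 4, 5
assert pow(omega, nu, p) == 1 and pow(omega, nu // 2, p) != 1  # primitive
pts = [pow(omega, i, p) for i in range(nu)]  # omega^{i-1} for i = 1..nu

def ell_i(i, x):
    # product form of the i-th Lagrange basis polynomial (1-based i)
    num, den = 1, 1
    for j in range(1, nu + 1):
        if j != i:
            num = num * (x - pts[j - 1]) % p
            den = den * (pts[i - 1] - pts[j - 1]) % p
    return num * pow(den, -1, p) % p

# delta property: ell_i(omega^{j-1}) = 1 if i == j, else 0
for i in range(1, nu + 1):
    assert [ell_i(i, pt) for pt in pts] == [int(i == j) for j in range(1, nu + 1)]

# closed form (X^nu - 1) * omega^{i-1} / (nu * (X - omega^{i-1})), at X = 2
for i in range(1, nu + 1):
    closed = ((pow(2, nu, p) - 1) * pts[i - 1] *
              pow(nu * (2 - pts[i - 1]) % p, -1, p)) % p
    assert closed == ell_i(i, 2)
```

The last loop verifies the displayed closed form \(\ell _i (X) = (X^\nu - 1) \omega ^{i - 1} / (\nu (X - \omega ^{i - 1}))\) against the product definition at a non-root point.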
Bilinear Pairings. Let \(\nu \) be an integer parameter (the circuit size in our application). A bilinear group generator, given \(1^\lambda \) and \(\nu \), returns \((p, \mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, \hat{e}, \mathsf {P}_1, \mathsf {P}_2)\), where \(\mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T\) are three additive cyclic groups of prime order p, \(\hat{e}: \mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) is a non-degenerate efficiently computable bilinear pairing, and \(\mathsf {P}_\iota \) is a fixed generator of \(\mathbb {G}_\iota \). We assume \(\mathsf {P}_T = \hat{e}(\mathsf {P}_1, \mathsf {P}_2)\). We require the bilinear pairing to be Type-3, i.e., there is no efficient isomorphism between \(\mathbb {G}_1\) and \(\mathbb {G}_2\). For efficient interpolation, we assume that p is such that \(\nu \mid (p - 1)\). When emphasizing efficiency is not important, we drop the parameter \(\nu \). We use additive notation together with the standard elliptic-curve "bracket" notation. Namely, we write \([a]_{\iota }\) to denote \(a \mathsf {P}_{\iota }\), and \([a]_{1} \bullet [b]_{2}\) to denote \(\hat{e}([a]_{1}, [b]_{2})\). We freely combine the bracket notation with matrix notation; e.g., if \(A B = C\) as matrices, then \([A]_{1} \bullet [B]_{2} = [C]_{T}\).
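For intuition only, the bracket notation can be mimicked in a toy "exponent" model (ours) where \([a]_\iota \) is represented by the discrete logarithm a itself; this captures the algebra (bilinearity and the matrix convention) but, of course, none of the hardness, and a real Type-3 pairing requires an elliptic-curve library:

```python
# Toy exponent model (ours, no security): [a]_iota is just a mod p, and
# the pairing [a]_1 • [b]_2 = [a*b]_T is multiplication mod p.
p = 101

def pair(a1, b2):
    # e([a]_1, [b]_2) = [a*b]_T
    return a1 * b2 % p

# bilinearity in additive notation: [x*a]_1 • [b]_2 = x * ([a]_1 • [b]_2)
a, b, x = 7, 9, 5
assert pair(x * a % p, b) == x * pair(a, b) % p

# matrix convention: if A*B = C over Z_p, then [A]_1 • [B]_2 = [C]_T,
# where the (i, j) entry is the sum of pairings over the inner index k
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
C = [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
     for i in range(2)]
assert all(sum(pair(A[i][k], B[k][j]) for k in range(2)) % p == C[i][j]
           for i in range(2) for j in range(2))
```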
Uber-Assumption. The following assumption is a special case of the more general uber-assumption of [7, 11].
Definition 1
([7, 11]). Let \(\mathcal {R}\), \(\mathcal {S}\), and \(\mathcal {T}\) be three tuples of bivariate polynomials over \(\mathbb {Z}_p\), and let \(f\) be a bivariate polynomial over \(\mathbb {Z}_p\). The \((\mathcal {R}, \mathcal {S}, \mathcal {T}, f)\)-computational uber-assumption in group \(\mathbb {G}_\iota \), where \(\iota \in \{1, 2, T\}\), states that for any PPT adversary \(\mathcal {A}\), the probability that \(\mathcal {A}\), given \(([\mathcal {R}(x, y)]_{1}, [\mathcal {S}(x, y)]_{2}, [\mathcal {T}(x, y)]_{T})\) for uniformly random \(x, y\), outputs \([f(x, y)]_{\iota }\), is negligible.
[7, 11] considered the general case of c-variate polynomials for any c. In our case, \(\mathcal {T}= \emptyset \); then, we have an \((\mathcal {R}, \mathcal {S}, f)\)-computational uber-assumption in \(\mathbb {G}_\iota \).
Importantly [7, 11], (i) if \(f(X, Y)\) is not in the span of \(\{\varrho (X, Y)\}\), then the \((\mathcal {R}, \mathcal {S}, \mathcal {T}, f)\)-computational uber-assumption in \(\mathbb {G}_1\) holds in the generic group model, and (ii) if \(f(X, Y)\) is not in the span of \(\{\varrho (X, Y) \sigma (X, Y) + \tau (X, Y)\}\), then the \((\mathcal {R}, \mathcal {S}, \mathcal {T}, f)\)-computational uber-assumption in \(\mathbb {G}_T\) holds in the generic group model. We will only invoke the uber-assumption in the case where \(f(X, Y)\) is not in the span of \(\{\varrho (X, Y)\}\).
QAP. Let \(\mathbf {R}= \{(\mathsf {z}, \mathsf {wit})\}\) be a relation between statements and witnesses. The Quadratic Arithmetic Program (QAP) was introduced in [20] as a language where, for an input \(\mathsf {z}\) and a witness \(\mathsf {wit}\), \((\mathsf {z}, \mathsf {wit}) \in \mathbf {R}\) can be verified by using a parallel quadratic check. QAP has an efficient reduction from (either Boolean or arithmetic) CircuitSAT. Thus, an efficient zk-SNARK for QAP results in an efficient zk-SNARK for CircuitSAT.
We consider arithmetic circuits that consist only of fan-in-2 multiplication gates, where either input of each multiplication gate can be any weighted sum of wire values [20]. Let \(\mu _0 < \mu \) be a non-negative integer. In the case of arithmetic circuits, \(\nu \) is the number of multiplication gates, \(\mu \) is the number of wires, and \(\mu _0\) is the number of public inputs.
Assume \(\nu \mid (p - 1)\), so that there exists a primitive \(\nu \)th root of unity \(\omega \) modulo p. This requirement is needed for the sake of efficiency, and we will make it implicitly throughout the paper; however, it is not needed for the new SFC to work. Let U, V, and W be instance-dependent matrices and let \(\varvec{\mathsf {a}}\) be a witness. A QAP is characterized by the constraint \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\). Let \(L_{\varvec{a}} (X) := \sum _{i = 1}^\nu a_i \ell _i (X)\) be the interpolating polynomial of \(\varvec{a} = (a_1, \ldots , a_\nu )^\top \) at the points \(\omega ^{i - 1}\), with \(L_{\varvec{a}} (\omega ^{i - 1}) = a_i\). For \(j \in [1\,..\,\mu ]\), define \(u_j (X) := L_{\varvec{U}^{(j)}} (X)\), \(v_j (X) := L_{\varvec{V}^{(j)}} (X)\), and \(w_j (X) := L_{\varvec{W}^{(j)}} (X)\) to be the interpolating polynomials of the jth columns of the corresponding matrices. Thus, \(u_j (\omega ^{i - 1}) = U_{i j}\), \(v_j (\omega ^{i - 1}) = V_{i j}\), and \(w_j (\omega ^{i - 1}) = W_{i j}\). Let \(u (X) = \sum _{j = 1}^{\mu } \mathsf {a}_j u_j (X)\), \(v (X) = \sum _{j = 1}^{\mu } \mathsf {a}_j v_j (X)\), and \(w (X) = \sum _{j = 1}^{\mu } \mathsf {a}_j w_j (X)\). Then \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\) iff \(\ell (X) \mid u (X) v (X) - w (X)\) iff \(u (X) v (X) \equiv w (X) \pmod {\ell (X)}\) iff there exists a polynomial \(\mathcal {H}(X)\) such that \(u (X) v (X) - w (X) = \mathcal {H}(X) \ell (X)\).
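The chain of equivalences can be checked on a toy instance (ours): a single multiplication gate \(a \cdot b = c\), padded with an all-zero gate so that \(\nu = 2\), over \(p = 13\) with \(\omega = 12\):

```python
# Toy QAP divisibility check (our instance): nu = 2, p = 13, omega = 12.
p, nu, omega = 13, 2, 12
pts = [1, omega]  # omega^{i-1} for i = 1, 2

def interp(vals):
    # two-point Lagrange interpolation; returns [c0, c1] for c0 + c1*X
    (x0, y0), (x1, y1) = zip(pts, vals)
    slope = (y1 - y0) * pow(x1 - x0, -1, p) % p
    return [(y0 - slope * x0) % p, slope]

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def polysub(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(x - y) % p for x, y in zip(f, g)]

def divmod_by_ell(f):
    # divide by ell(X) = X^nu - 1, using X^nu ≡ 1 (mod ell)
    f, quo = f[:], [0] * max(len(f) - nu, 1)
    for d in range(len(f) - 1, nu - 1, -1):
        c, f[d] = f[d], 0
        quo[d - nu] = c
        f[d - nu] = (f[d - nu] + c) % p
    return quo, f[:nu]

# witness (a, b, c) = (2, 3, 6) with a*b = c; U, V, W select a, b, c, so
# U a = (2, 0), V a = (3, 0), W a = (6, 0) (second row: all-zero padding gate)
Ua, Va, Wa = [2, 0], [3, 0], [6, 0]
u, v, w = interp(Ua), interp(Va), interp(Wa)
H, rem = divmod_by_ell(polysub(polymul(u, v), w))
assert rem == [0, 0]  # ell(X) | u(X)v(X) - w(X), i.e., the R1CS holds
```

Changing any wire value so that \(a \cdot b \ne c\) makes the remainder nonzero, so the divisibility check is equivalent to the matrix constraint on this instance.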
A QAP instance \(\mathcal {I}_{\mathsf {qap}}\) consists of the field, the parameters \(\nu , \mu , \mu _0\), and the polynomials \(u_j (X), v_j (X), w_j (X)\). \(\mathcal {I}_{\mathsf {qap}}\) defines the following relation: \((\mathsf {z}, \mathsf {wit}) \in \mathbf {R}\) iff \(\mathsf {z} = (\mathsf {a}_1, \ldots , \mathsf {a}_{\mu _0})^\top \), \(\mathsf {wit} = (\mathsf {a}_{\mu _0 + 1}, \ldots , \mathsf {a}_{\mu })^\top \), and \(u (X) v (X) \equiv w (X) \pmod {\ell (X)}\), where u(X), v(X), and w(X) are as above. Alternatively, \((\mathsf {z}, \mathsf {wit}) \in \mathbf {R}\) if there exists a (degree \(\le \nu - 2\)) polynomial \(\mathcal {H}(X)\) s.t. the following key equation holds: \(u (X) v (X) - w (X) = \mathcal {H}(X) \ell (X)\). (2)
On top of checking Eq. (2), the verifier also needs to check that u(X), v(X), and w(X) are correctly computed: that is, (i) the first \(\mu _0\) coefficients \(\mathsf {a}_j\) in u(X) are equal to the public inputs, and (ii) u(X), v(X), and w(X) are all computed by using the same coefficients \(\mathsf {a}_j\) for \(j \le \mu \).
Since both the committer and the verifier have inputs, we will use a variation of QAP that handles public inputs differently (see Sect. 3). In particular, we will use different parameters instead of \(\mu _0\).
SNARKs. Let \(\mathcal {R}\) be a relation generator, such that \(\mathcal {R}(1^\lambda )\) returns a polynomial-time decidable binary relation \(\mathbf {R}= \{(\mathsf {z}, \mathsf {wit})\}\). Here, \(\mathsf {z}\) is a statement, and \(\mathsf {wit}\) is a witness. \(\mathcal {R}\) also outputs the system parameters that will be given to the honest parties and the adversary. A non-interactive zero-knowledge (NIZK) argument system for \(\mathcal {R}\) consists of four PPT algorithms:

CRS generator: \(\mathsf {K_{crs}}\) is a probabilistic algorithm that, given \(\mathbf {R}\), outputs \((\mathsf {crs}, \mathsf {td})\), where \(\mathsf {crs}\) is a CRS and \(\mathsf {td}\) is a simulation trapdoor. Otherwise, it outputs a special symbol \(\bot \).

Prover: \(\mathsf {P}\) is a probabilistic algorithm that, given \((\mathsf {crs}, \mathsf {z}, \mathsf {wit})\) for \((\mathsf {z}, \mathsf {wit}) \in \mathbf {R}\), outputs an argument \(\pi \). Otherwise, it outputs \(\bot \).

Verifier: \(\mathsf {V}\) is a probabilistic algorithm that, given \((\mathsf {crs}, \mathsf {z}, \pi )\), returns either 0 (reject) or 1 (accept).

Simulator: \(\mathsf {Sim}\) is a probabilistic algorithm that, given \((\mathsf {crs}, \mathsf {td}, \mathsf {z})\), outputs an argument \(\pi \).
A NIZK argument system must satisfy completeness (an honest verifier accepts an honest prover), knowledge-soundness (if a prover makes an honest verifier accept, then one can extract from the prover a witness \(\mathsf {wit}\)), and zero-knowledge (there exists a simulator that, knowing the CRS trapdoor but not the witness, can produce accepting statements such that the verifier's view is indistinguishable from the view when interacting with an honest prover). See the full version [34] for formal definitions. A SNARK (succinct non-interactive argument of knowledge, [20, 23, 24, 31, 32, 33]) is a NIZK argument system where the argument is sublinear in the input size.
Functional Commitment Schemes. Let \(\mathcal {D}\) be some domain. In a functional commitment scheme for a circuit \(\mathcal {C}: \mathcal {D}^{\mu _\alpha } \times \mathcal {D}^{\mu _\beta } \rightarrow \mathcal {D}^{\kappa }\), one first commits to a vector \(\varvec{\alpha } \in \mathcal {D}^{\mu _\alpha }\), obtaining a functional commitment \(C\). The goal is to allow the committer to later open \(C\) to \(\varvec{\mathsf {\xi }} = \mathcal {C}(\varvec{\alpha }, \varvec{\beta }) \in \mathcal {D}^{\kappa }\), where \(\varvec{\beta } \in \mathcal {D}^{\mu _\beta }\) is a public input that is chosen by the verifier before the opening. We generalize the notion of functional commitment, given in [29], from inner products to arbitrary circuits. Compared to [29], we also provide a stronger hiding definition.
Let \(\mathbf {CC}\) be a class of circuits \(\mathcal {C}: \mathcal {D}^{\mu _\alpha } \times \mathcal {D}^{\mu _\beta } \rightarrow \mathcal {D}^{\kappa }\). A functional commitment scheme \(\mathsf {FC}\) for \(\mathbf {CC}\) is a tuple of four (possibly probabilistic) polynomial-time algorithms (a commitment-key generator, a commitment algorithm \(\mathsf {com}\), an opening algorithm \(\mathsf {open}\), and a verification algorithm), where

Commitment-key generator: is a probabilistic algorithm that, given a security parameter and a circuit \(\mathcal {C}\in \mathbf {CC}\), outputs a commitment key \(\mathsf {ck}\) and a trapdoor key \(\mathsf {tk}\). We implicitly assume that the security parameter and \(\mathcal {C}\) are described by \(\mathsf {ck}\).

Commitment: \(\mathsf {com} (\mathsf {ck}, \varvec{\alpha }; r)\) is a probabilistic algorithm that takes as input the commitment key \(\mathsf {ck}\), a message vector \(\varvec{\alpha } \in \mathcal {D}^{\mu _\alpha }\), and some randomizer r. It outputs \((C, D)\), where \(C\) is a commitment to \(\varvec{\alpha }\) and \(D\) is decommitment information. We denote the first output \(C\) of \(\mathsf {com} (\mathsf {ck}, \varvec{\alpha }; r)\) by \(\mathsf {com}_1 (\mathsf {ck}, \varvec{\alpha }; r)\).

Opening: \(\mathsf {open}(\mathsf {ck}, C, D, \varvec{\beta })\) is a deterministic algorithm that takes as input the commitment key \(\mathsf {ck}\), a commitment \(C\) (to \(\varvec{\alpha }\)), decommitment information \(D\), and a vector \(\varvec{\beta } \in \mathcal {D}^{\mu _\beta }\). Assume that the ith output value of the circuit \(\mathcal {C}\) is \(\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\), where \(\mathcal {F}_i\) is a public function. It computes an opening \({\text {op}}_{\varvec{\mathsf {\xi }}}\) to \(\varvec{\mathsf {\xi }} = \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }) := (\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }))_{i = 1}^{\kappa }\).

Verification: is a deterministic algorithm that takes as input the commitment key \(\mathsf {ck}\), a commitment \(C\), an opening \({\text {op}}_{\varvec{\mathsf {\xi }}}\), a vector \(\varvec{\beta } \in \mathcal {D}^{\mu _\beta }\), and \(\varvec{\mathsf {\xi }} \in \mathcal {D}^{\kappa }\). It outputs 1 if \({\text {op}}_{\varvec{\mathsf {\xi }}}\) is a valid opening for \(C\) being a commitment to some \(\varvec{\alpha } \in \mathcal {D}^{\mu _\alpha }\) such that \(\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }) = \mathsf {\xi }_i\) for all \(i \in [1\,..\,\kappa ]\), and outputs 0 otherwise.
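As a sanity check of this interface, the following is a toy, deliberately non-succinct and non-hiding, instantiation in Python for the inner-product functionality. All names (`keygen`, `com`, `open_`, `ver`) are ours, and a real SFC outputs group elements instead of plain vectors; the sketch only illustrates how the four algorithms fit together.

```python
# Toy FC sketch for the four-algorithm interface above (illustration only;
# not succinct, not hiding, and the algorithm names are ours).

def keygen(circuit):
    # ck just records the public function F; no trapdoor in this toy.
    return {"F": circuit}, None  # (ck, tk)

def com(ck, alpha, r=None):
    # "Commitment" = the vector itself (trivially evaluation-binding,
    # but neither succinct nor hiding); decommitment D = alpha.
    return tuple(alpha), tuple(alpha)  # (C, D)

def open_(ck, C, D, beta):
    # Opening to xi = F(alpha, beta); here the opening is simply xi.
    return ck["F"](list(D), list(beta))

def ver(ck, C, op_xi, beta, xi):
    # Accept iff the claimed xi matches F(alpha, beta) recomputed from C.
    return list(op_xi) == list(xi) == ck["F"](list(C), list(beta))

inner_product = lambda a, b: [sum(x * y for x, y in zip(a, b))]
ck, _ = keygen(inner_product)
C, D = com(ck, [1, 2, 3])
op = open_(ck, C, D, [4, 5, 6])
assert ver(ck, C, op, [4, 5, 6], [32])   # 1*4 + 2*5 + 3*6 = 32
```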
Security of FC. Next, we give three definitions of the hiding property for FC schemes, of increasing strength. The first definition corresponds to the definition of hiding given in [29] and essentially states that commitments do not reveal any information about \(\varvec{\alpha }\). The other two definitions seem to be novel, at least in the context of general FC. We provide all three definitions since, in some applications, a weaker definition might be sufficient. Moreover, the third definition (zero-knowledge) only makes sense in the CRS model; in a CRS-less model, one can rely on the open-hiding property.
Definition 2
(Perfect com-hiding). A functional commitment scheme for circuit class \(\mathbf {CC}\) is perfectly com-hiding if for any security parameter, \(\mathcal {C}\in \mathbf {CC}\), honestly generated \(\mathsf {ck}\), and all \(\varvec{\alpha }_1,\varvec{\alpha }_2 \in \mathcal {D}^{\mu _\alpha }\) with \(\varvec{\alpha }_1 \ne \varvec{\alpha }_2\), the two distributions \(\delta _1\) and \(\delta _2\) are identical, where
The open-hiding property is considerably stronger, stating that the commitment and the openings together do not reveal more information about \(\varvec{\alpha }\) than the values \(\mathcal {C}(\varvec{\alpha }, \varvec{\beta }_i)\) on the queried values \(\varvec{\beta }_i\). Trivial non-succinct FC schemes, where one uses a perfectly-hiding commitment scheme to commit to \(\varvec{\alpha }\), and then, in the opening phase, opens the whole committed vector, are com-hiding but not open-hiding.
Definition 3
(Perfect open-hiding). A functional commitment scheme for circuit class \(\mathbf {CC}\) is perfectly open-hiding if for any security parameter, \(\mathcal {C}\in \mathbf {CC}\), honestly generated \(\mathsf {ck}\), for all \(\varvec{\alpha }_1,\varvec{\alpha }_2 \in \mathcal {D}^{\mu _\alpha }\) with \(\varvec{\alpha }_1 \ne \varvec{\alpha }_2\), and for any polynomial-size set of vectors \(\varvec{\beta }_i\) such that \(\mathcal {C}(\varvec{\alpha }_1, \varvec{\beta }_i) = \mathcal {C}(\varvec{\alpha }_2, \varvec{\beta }_i)\) for all \(i \le Q\), the two distributions \(\delta _1\) and \(\delta _2\) are identical, where \(\delta _b :=\)
Finally, zero-knowledge FC schemes have simulation-based hiding. While simulation-based security is the gold standard in cryptography, it is usually more complicated to achieve than game-based security. In particular, one needs a trusted \(\mathsf {ck}\) (and its trapdoor) to achieve zero-knowledge. We leave it as an open problem whether one can instead use the much weaker bare public key (BPK) model, by using the techniques of [1, 2, 4, 18]. Note that [33] showed that their SNARKs are all secure in the BPK model.
Definition 4
(Perfect zero-knowledge). An FC scheme for \(\mathbf {CC}\) is perfectly zero-knowledge if there exists a PPT simulator, such that for any security parameter, all \(\mathcal {C}\in \mathbf {CC}\), honestly generated \(\mathsf {ck}\), all \(\varvec{\alpha } \in \mathcal {D}^{\mu _\alpha }\), and any poly-size set of \(\varvec{\beta }_i\), \(\delta _0\) and \(\delta _1\) are identical, where
Next, we define evaluation-binding. Evaluation-binding can be weaker than binding, but sometimes the two notions are equivalent. (Consider the case of the inner product when the adversary asks the committer to open a commitment for \(\varvec{\beta } = \varvec{e}_i\) for each i.) In the context of FC schemes, evaluation-binding is the distinguishing security notion.
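The parenthetical remark can be made concrete: for the inner product, openings at all unit vectors \(\varvec{e}_i\) jointly determine \(\varvec{\alpha }\), so evaluation-binding at those points already forces ordinary binding. A minimal numerical illustration:

```python
# For the inner product, the opening at the unit vector e_i reveals alpha_i:
# <alpha, e_i> = alpha_i. Hence two different committed vectors must differ
# in at least one such opening, i.e. evaluation-binding implies binding here.
n = 4
alpha = [7, 0, 3, 5]
unit = lambda i: [1 if j == i else 0 for j in range(n)]
recovered = [sum(a * b for a, b in zip(alpha, unit(i))) for i in range(n)]
assert recovered == alpha
```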
Definition 5
(Computational evaluation-binding). A functional commitment scheme for circuit class \(\mathbf {CC}\) is computationally evaluation-binding if for any security parameter, \(\mathcal {C}\in \mathbf {CC}\), and non-uniform PPT adversary, the following probability is negligible, where
An FC scheme is succinct (SFC) if both the commitments and the openings have length polylogarithmic in the lengths of \(\varvec{\alpha }\) and \(\varvec{\beta }\).
3 The New SFC Scheme
In this section, we will construct a succinct functional commitment (SFC) scheme for (almost) all polynomialsize arithmetic circuits by mixing techniques from SNARKs with original ideas, needed to construct a SFC scheme. Let \(\varvec{\mathcal {F}}\) be a fixed vector function that takes inputs from two parties, the committer and the verifier. Let \(\alpha _j\) be private inputs of the committer, used when committing. Let \(\beta _j\) be public inputs of the verifier, used when opening the commitment.
Let \(\mathcal {C}\) be an arithmetic circuit that inputs \(\alpha _j\) and \(\beta _j\) and computes \(\varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }) = (\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }))_{i = 1}^\kappa \), where \(\varvec{\alpha }\) is the private input of the committer and \(\varvec{\beta }\) is chosen by the verifier, possibly only later. We compile \(\mathcal {C}\) to a circuit \(\mathcal {C}^*\) that consists of four subcircuits \(\mathcal {C}_\phi \), \(\mathcal {C}_\psi \), \(\mathcal {C}_\chi \), and \(\mathcal {C}_\xi \). We need the division into four subcircuits to prove evaluation-binding; we will give more details later.
After that, we use the QAP representation [20] (more precisely, the approach of [33]) of arithmetic circuits, obtaining polynomials \(\mathsf {A}(X, Y)\) and \(\mathsf {B}(X, Y)\) (the “commitment polynomials” to all left/right inputs of all gates of \(\mathcal {C}^*\), correspondingly), and \(\mathsf {C}(X, Y)\) (the “opening polynomial”), such that \(\mathsf {C}(X, Y)\) is in the linear span of the “polynomial commitment key” \(\mathsf {ck}_1 = (\varrho (X, Y): \varrho \in \mathcal {R})\) if and only if the committer was honest. The circuit compilation allows us to divide the polynomials into “private” parts (transmitted during the commitment) and “public” parts (transmitted during the opening), such that one can, given two different openings for the same commitment, break a computational assumption. We then use SNARK-based techniques to construct the SFC for \(\mathcal {C}^*\) with a succinct commitment and opening. We postpone the security proofs to Sect. 5; here, we emphasize that the evaluation-binding proof is novel (in particular, not related to the knowledge-soundness proofs of SNARKs at all).
Circuit Compilation. Let \(\mathcal {C}\) be a polynomial-size arithmetic circuit that, on input \((\varvec{\alpha }, \varvec{\beta })\), outputs \(\varvec{\mathsf {\xi }} = \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }) = (\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }))_{i = 1}^\kappa \). We compile \(\mathcal {C}\) to a compiled circuit \(\mathcal {C}^*\), see Fig. 1, that consists of the public subcircuits \(\mathcal {C}_\phi \), \(\mathcal {C}_\psi \), \(\mathcal {C}_\chi \), and \(\mathcal {C}_\xi \), combined as follows. In the commitment phase, the committer uses the circuit \(\mathcal {C}_\phi \) to compute a number of polynomials \(\phi _i (\varvec{\alpha })\) depending only on 1 and \(\varvec{\alpha }\). More precisely, \(\varvec{\phi } (\varvec{\alpha }) = (\phi _1 (\varvec{\alpha }), \ldots , \phi _{\mu _\phi } (\varvec{\alpha }))\) denotes the set of the outputs of all (including intermediate) gates in \(\mathcal {C}_\phi \) (the same holds for the other subcircuits and the corresponding polynomials). The commitment depends only on 1, \(\varvec{\alpha }\), and \(\varvec{\phi } (\varvec{\alpha })\). In the opening phase, the verifier sends \(\varvec{\beta }\) to the committer, who uses the circuit \(\mathcal {C}_\psi \) to compute some polynomials \(\psi _i (\varvec{\beta })\) depending on 1 and \(\varvec{\beta }\). This part of the computation is public and can be redone by the verifier.
After that, the committer uses the circuit \(\mathcal {C}_\chi \) to compute a number of polynomials \(\chi _i (\varvec{\alpha }, \varvec{\beta })\) from the inputs and outputs of \(\mathcal {C}_\phi \) and \(\mathcal {C}_\psi \), i.e., from \((1, \varvec{\alpha }, \varvec{\beta }, \varvec{\phi } (\varvec{\alpha }), \varvec{\psi } (\varvec{\beta }))\). \(\mathcal {C}_\chi \) has multiplicative depth 1, and thus, w.l.o.g., each \(\chi _i (\varvec{\alpha }, \varvec{\beta })\) is a product of some \(\phi _j (\varvec{\alpha })\) with some \(\psi _k (\varvec{\beta })\). Finally, the committer uses \(\mathcal {C}_\xi \) to compute the outputs \(\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\) of \(\mathcal {C}^*\). We will explain the need for such a compilation after Eqs. (7) and (8). We will summarize all actual restrictions on the circuits in Theorem 1. In the introduction, we gave an intuitive explanation of how this compilation reduces the circuit class that we can handle. See Sect. 4 for an additional discussion on the power of this circuit class.
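For intuition, the four-stage evaluation can be sketched as follows for the plain inner product \(\mathcal {F}(\varvec{\alpha }, \varvec{\beta }) = \left\langle \varvec{\alpha }, \varvec{\beta } \right\rangle \); the concrete stage functions below are our own illustrative choices (for the inner product, \(\mathcal {C}_\phi \) and \(\mathcal {C}_\psi \) are empty, as noted in Sect. 4).

```python
# Sketch of evaluating a compiled circuit C* in four stages, for the
# inner product F(alpha, beta) = <alpha, beta> (stage functions are ours).

def C_phi(alpha):                    # committer-only: depends on 1 and alpha
    return []                        # inner product needs no preprocessing

def C_psi(beta):                     # public: depends on 1 and beta
    return []

def C_chi(alpha, beta, phi, psi):    # multiplicative depth 1: products only
    return [a * b for a, b in zip(alpha, beta)]

def C_xi(chi):                       # linear output stage
    return [sum(chi)]

alpha, beta = [1, 2, 3], [4, 5, 6]
chi = C_chi(alpha, beta, C_phi(alpha), C_psi(beta))
assert C_xi(chi) == [32]
```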
Next, let \(\varvec{\mathsf {a}}\) be the vector of the values of all wires of \(\mathcal {C}^*\). We write
Here, \(\varvec{\mathsf {a}}\) is partitioned into blocks corresponding to the constant 1, \(\varvec{\alpha }\), \(\varvec{\beta }\), \(\varvec{\phi } (\varvec{\alpha })\), \(\varvec{\psi } (\varvec{\beta })\), \(\varvec{\chi } (\varvec{\alpha }, \varvec{\beta })\), and \(\varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta })\). Thus, \(\mu = 1 + \mu _\alpha +\mu _\beta + \mu _\phi + \mu _\psi + \mu _\chi + \kappa \). To use the R1CS approach, we construct matrices U, V, and W, such that \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W\varvec{\mathsf {a}}\) iff \(\mathcal {C}^*\) is correctly computed. First, we define R1CS matrices \(U_\phi , U_\psi , U_\chi , U_\xi , V_\phi , V_\psi , V_\chi \) such that (various subcircuits of) \(\mathcal {C}^*\) are correctly computed iff
Here, , , , and . In particular,
Next, we define \(U\), \(V\), and \(W\) as
correspondingly. Here, \(\nu := \mu _\phi + \mu _\psi + \mu _\chi + \kappa \). We labeled each column of each matrix vertically by the supposed value of the corresponding coefficients of \(\varvec{\mathsf {a}} = 1 // \varvec{\alpha } // \ldots // \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta })\). Some submatrices (\(U_\psi \) and \(V_\psi \)) are split between non-contiguous areas. The empty submatrices are all-zero in the compiled instance. Clearly, \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\) iff Eq. (4) holds.
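The shape of the constraint \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\) can be seen in a minimal hand-written R1CS instance for \(\xi = \alpha _1 \beta _1 + \alpha _2 \beta _2\); the concrete matrices below are ours and serve only as an illustration.

```python
# Minimal R1CS instance U a ∘ V a = W a for xi = a1*b1 + a2*b2.
# Wire vector a = (1, a1, a2, b1, b2, chi1, chi2, xi); matrices are ours.
def hadamard(x, y): return [u * v for u, v in zip(x, y)]
def matvec(M, a): return [sum(m * x for m, x in zip(row, a)) for row in M]

U = [[0, 1, 0, 0, 0, 0, 0, 0],   # chi1 = a1 * b1
     [0, 0, 1, 0, 0, 0, 0, 0],   # chi2 = a2 * b2
     [0, 0, 0, 0, 0, 1, 1, 0]]   # (chi1 + chi2) * 1 = xi
V = [[0, 0, 0, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0, 0, 0, 0]]
W = [[0, 0, 0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0, 0, 1]]

a1, a2, b1, b2 = 1, 2, 4, 5
a = [1, a1, a2, b1, b2, a1 * b1, a2 * b2, a1 * b1 + a2 * b2]
assert hadamard(matvec(U, a), matvec(V, a)) == matvec(W, a)
```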
QAP Representation. Recall that \(\ell _i (X)\), \(i \in [1\,..\,\nu ]\), interpolates the \(\nu \)-dimensional unit vector \(\varvec{e}_i\). To obtain a QAP representation of the equation \(U \varvec{\mathsf {a}} \circ V \varvec{\mathsf {a}} = W \varvec{\mathsf {a}}\), we use interpolating polynomials; e.g., \(u_j (X)\) interpolates the jth column of U. (See Sect. 2.) To simplify notation, we introduce polynomials like \(u_{\phi j} (X)\) and \(u_{\chi j} (X)\), where, say, \(u_{\chi j} (X)\) interpolates (all \(\nu \) rows of) the jth column of the \(\nu \times (1 + \mu _\alpha + \mu _\beta + \mu _\phi + \mu _\psi )\) submatrix of U that contains \(U_\chi \). E.g., \(u_{\chi j} (X)\) interpolates the jth column of \(U_\chi \) (preceded and followed by 0 rows), \(u_{\chi j} (X) = \sum _{i = 1}^{\mu _\chi } U_{\chi i j} \ell _{\mu _\phi + \mu _\psi + i} (X)\).
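The interpolation step can be sketched in a few lines over a small prime field; the prime and the evaluation domain below are illustrative (a real instantiation works over the pairing-friendly field).

```python
# Lagrange basis ell_i(X) over GF(p): ell_i interpolates the unit vector e_i
# on a fixed evaluation domain; u_j(X) then interpolates the j-th column of U.
p = 97
pts = [1, 2, 3]          # nu = 3 interpolation points (illustrative)

def ell(i, x):
    # ell_i(x) = prod_{k != i} (x - pts[k]) / (pts[i] - pts[k])  (mod p)
    num, den = 1, 1
    for k, pk in enumerate(pts):
        if k != i:
            num = num * (x - pk) % p
            den = den * (pts[i] - pk) % p
    return num * pow(den, -1, p) % p

# ell_i is 1 at its own point and 0 at the others:
assert all(ell(i, pts[j]) == (1 if i == j else 0)
           for i in range(3) for j in range(3))

# u_j(x) = sum_i U[i][j] * ell_i(x) interpolates column j of a matrix U:
U = [[0, 1], [2, 0], [5, 7]]
u = lambda j, x: sum(U[i][j] * ell(i, x) for i in range(3)) % p
assert [u(0, x) for x in pts] == [0, 2, 5]
```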
We divide the polynomials u(X) and v(X) into two addends: one polynomial (\(u_s, v_s\), resp.) that depends on \(\varvec{\alpha }\) but not on \(\varvec{\beta }\), and another polynomial (\(u_p, v_p\), resp.) that depends on public values (\(\varvec{\beta }\) and \(\{\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\}\)) but not on \(\varvec{\alpha }\) otherwise. Such a division is possible due to the way \(\mathcal {C}^*\) is composed from the subcircuits. Thus, \(u (X) = \sum _{j = 1}^{\mu } \mathsf {a}_{j} u_{j} (X) = u_s (X) + u_p (X)\) and \(v (X) = \sum _{j = 1}^{\mu } \mathsf {a}_{j} v_{j} (X) = v_s (X) + v_p (X)\), where
and
(In particular, recall that \(\mathsf {a}_1 = 1\).) Here, \(u_1 (X) = u_{\phi 1} (X) + u_{\psi 1} (X) + u_{\chi 1} (X)\) and \(v_1 (X) = v_{\phi 1} (X) + v_{\psi 1} (X) + v_{\chi 1} (X) + \sum _{i = 1}^\kappa \ell _{\nu - \kappa + i} (X)\). The concrete shape of all these polynomials follows from Eqs. (3) and (6).
In Theorems 2 and 3 (see their claims and proofs), we will need several conditions to hold. Next, we state and prove that these conditions hold for \(\mathcal {C}^*\). One can observe directly that most of the guarantees, given by \(\mathcal {C}^*\) about the shape of U, V, W, are actually required by the following conditions. Since the addition of the circuit \(\mathcal {C}_\xi \) is essentially for free (it only means the addition of \(\kappa \) gates), many of the following conditions are very easy to satisfy; we denote such conditions by a superscript \(+\) as in (a)\(^+\). We emphasize that the only restrictive conditions are Items i and j, which basically state that \(\mathcal {C}_\chi \) can only have multiplicative depth 1. (See Remark 1 for discussion.) That is, the new SFC scheme will work for all circuits \(\mathcal {C}\) that have a polynomial-size compiled circuit \(\mathcal {C}^*\), such that \(\mathcal {C}_\chi \) has multiplicative depth 1.
Theorem 1
Let \(\mathcal {C}\) be an arithmetic circuit and let \(\mathcal {C}^*\) be its compiled version, so that U, V, W are defined as in Eq. (6). Then the following holds.

(a)\(^+\) For \(j \in [1\,..\,\mu - \kappa ]\): if \(\varvec{W}^{(j)}=\varvec{0}\) then \(U_{\nu - \kappa + i, j} = 0\) for \(i \in [1\,..\,\kappa ]\).

(b)\(^+\) For \(I \in [1\,..\,\kappa ]\) and \(j \in [1\,..\,\mu - \kappa ]\), \(W_{\nu - \kappa + I, j} = 0\).

(c)\(^+\) For \(j \in [2\,..\,1 + \mu _\alpha + \mu _\phi ]\), \(v_{\phi j} (X), v_{\chi j} (X)\) are in the span of \((\ell _i (X))_{i = 1}^{\nu - \kappa }\).

(d)\(^+\) For \(j \in [2 + \mu _\alpha + \mu _\phi \,..\,\mu ]\), \(v_1 (X) - \sum _{i = 1}^{\kappa } \ell _{\nu - \kappa + i} (X)\) and \(v_j (X)\) are in the span of \((\ell _i (X))_{i = 1}^{\nu - \kappa }\).

(e)\(^+\) For \(j \in [\mu - \kappa \,..\,\mu ]\), \(\varvec{U}^{(j)} = \varvec{0}\).

(f)\(^+\) For \(j \in [\mu - \kappa \,..\,\mu ]\), \(\varvec{V}^{(j)} = \varvec{0}\).

(g)\(^+\) For \(i \in [1\,..\,\kappa ]\), \(w_{\mu - \kappa + i} (X) = \ell _{\nu - \kappa + i }(X)\).

(h) The set of nonzero \(\varvec{W}^{(j)}\), \(j \in [1\,..\,\mu - \kappa ]\), is linearly independent.

(i) For \(j \in [\mu - \mu _\chi - \kappa +1\,..\,\mu - \kappa ]\), \(U_{i j} = 0\) if \(i \le \nu - \kappa \), while the last \(\kappa \) rows of this column range define a matrix \(U_\xi \) that satisfies Eq. (4).

(j) For \(j \in [\mu - \mu _\chi - \kappa +1\,..\,\mu - \kappa ]\), \(\varvec{V}^{(j)} = \varvec{0}\).
Proof
First, we summarize the requirements, denoting each submatrix of U, V, and W by the number of the condition that ascertains that this submatrix is 0 (or has a well-defined nonzero form); moreover, Item h states that the columns of W that contain identity matrices are linearly independent. That is, \(U, V, W = \)
Item a: follows since \(\varvec{W}^{(j)} = \varvec{0}\) in the columns labeled by 1, \(\varvec{\alpha }\) and \(\varvec{\beta }\), and the last rows of U in all these columns are equal to 0, according to Eq. (6).
Item b: obvious from W in Eq. (6).
Item c: follows since the last rows of V, corresponding to columns labeled by \(\varvec{\alpha }\) and \(\varvec{\beta }\), are equal to 0.
Item d: follows since the last rows of V, corresponding to columns labeled by \(\varvec{\beta }\), \(\varvec{\psi } (\varvec{\beta })\), \(\varvec{\chi } (\varvec{\alpha }, \varvec{\beta })\), and \(\varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta })\), are equal to 0, and the last rows of \(\varvec{V}^{(1)}\) are equal to \(\varvec{1}_\kappa \).
Items e to g, i and j: follows from direct observation.
Item h: follows from the fact that \(\varvec{W}^{(j)} = \varvec{0}\) for some columns j, and the submatrix of W that consists of the rest of the columns is an identity matrix. \(\square \)
Remark 1
The compiled circuit \(\mathcal {C}^*\) satisfies some conditions, not required by Theorem 1. First, by Item h, the set of nonzero \(\varvec{W}^{(j)}\) has to be linearly independent (not necessarily an identity matrix), while in Eq. (6), the corresponding columns constitute an identity matrix. Second, by Item a, last rows of \(\varvec{U}^{(j)}\) need to be zero only if \(\varvec{W}^{(j)}\) is 0; one can insert dummy gates to \(\mathcal {C}^*\) such that W has no zero columns. This essentially just corresponds to the fact that we start with an arithmetic circuit and each constraint is about a concrete gate being correctly evaluated. Third, several submatrices of U, V, W are allzero in our template while there is no actual need for that. For example, \(U_\xi \) can be generalized, and \(U_\phi \) and \(U_\psi \) can also both depend on \(\varvec{\alpha }\) and \(\varvec{\beta }\). For the sake of simplicity, we stick to the presented compilation process, and leave the possible generalizations to future work.
SNARK-Related Techniques. Next, we follow [33] to derive the polynomials related to the SNARK underlying the new SFC. We simplify the derivation a bit and refer to [33] for full generality. Let \(\mathsf {A}(X, Y) = r_a + u (X) Y\) and \(\mathsf {B}(X, Y) = r_b + v (X) Y\) for random \(r_a\) and \(r_b\). ([33] considered the general case where \(\mathsf {A}(X, Y) = r_a Y^\alpha + u (X) Y^\beta \) and \(\mathsf {B}(X, Y) = r_b Y^\alpha + v (X) Y^\beta \) for some small integers \(\alpha , \beta \) to be fixed later.) The addends \(r_a\) and \(r_b\) are needed to protect the secret information hidden by \(\mathsf {A}(X, Y)\) and \(\mathsf {B}(X, Y)\), and we use the indeterminate Y to simplify the security proofs. As with u and v, we divide the polynomials \(\mathsf {A}, \mathsf {B}, \mathsf {C}\) into two addends: (i) a polynomial (\(\mathsf {A}_s, \mathsf {B}_s\), \(\mathsf {C}_{sp}\), resp.), where \(\mathsf {A}_s\) and \(\mathsf {B}_s\) depend on \(\varvec{\alpha }\) but not on \(\varvec{\beta }\) while \(\mathsf {C}_{sp}\) depends on both \(\varvec{\alpha }\) and \(\varvec{\beta }\), and (ii) a polynomial (\(\mathsf {A}_p, \mathsf {B}_p, \mathsf {C}_p\), resp.) that depends on public values (\(\varvec{\beta }\) and \(\{\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\}\)) but not otherwise on \(\varvec{\alpha }\). (Such a division was not possible in [33] since there one did not work with a compiled circuit \(\mathcal {C}^*\).) Then,
For integer constants \(\delta \) and \(\eta \) that we will fix later, define
where the last equation holds iff the committer is honest (see Eq. (2)). Intuitively, we want the committer to be able to compute \(\mathsf {C}(X, Y)\) iff it was honest.
Following [33], the inclusion of \(Y^\delta \) and \(Y^\eta \) in the definition of \(\mathsf {C}(X, Y)\) serves two goals. First, it introduces the addend \(u (X) Y^{\eta + 1} + v (X) Y^{\delta + 1} + w (X) Y^2 = \sum _{j = 1}^{\mu } \mathsf {a}_{j} (u_{j} (X) Y^{\eta + 1} + v_{j} (X) Y^{\delta + 1} + w_{j} (X) Y^2)\), which makes it easier to verify that the committer uses the same coefficients \(\mathsf {a}_{j}\) when computing \([\mathsf {A}]_{1}\), \([\mathsf {B}]_{2}\), and \([\mathsf {C}]_{1}\). Second, the coefficient of \(Y^2\) is \(u (X) v (X) - w (X)\), which is divisible by \(\ell (X)\) iff the committer is honest. That is, the coefficient of \(Y^2\) is \(\mathcal {H}(X) \ell (X)\) for some polynomial \(\mathcal {H}(X)\) iff the prover is honest and thus \(\varvec{\mathsf {\xi }} = \varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta })\).
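The divisibility criterion can be checked row-wise: since \(\ell (X) = \prod _i (X - x_i)\) over the interpolation domain, \(u(X) v(X) - w(X)\) is divisible by \(\ell (X)\) iff it vanishes at every domain point, i.e., iff every R1CS row \(u(x_i) v(x_i) = w(x_i)\) holds. A toy sketch (all values are illustrative):

```python
# u(X)v(X) - w(X) is divisible by ell(X) = prod_i (X - x_i) iff it
# vanishes on the whole domain, i.e. iff every R1CS row holds mod p.
p = 97

# Row-wise values u(x_i), v(x_i), w(x_i) for an honest wire assignment:
u_vals, v_vals, w_vals = [1, 2, 14], [4, 5, 1], [4, 10, 14]
assert all((u * v - w) % p == 0 for u, v, w in zip(u_vals, v_vals, w_vals))

# A cheating assignment (wrong output 15 instead of 14) fails the check:
assert not all((u * v - w) % p == 0
               for u, v, w in zip([1, 2, 14], [4, 5, 1], [4, 10, 15]))
```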
Let \(\gamma \) be another small integer, fixed later. Let \(\mathsf {C}(X, Y) = \mathsf {C}_{sp}(X, Y) + \mathsf {C}_p (X, Y) Y^\gamma \), where \(\mathsf {C}_p (X, Y)\) depends only on \(\varvec{\mathsf {\xi }}\). (In [33], \(\mathsf {C}_{sp}(X, Y)\) was multiplied by \(Y^\alpha \), but here \(\alpha = 0\).) The factor \(Y^\gamma \) is used to “separate” the public and the secret parts. In the honest case,
Intuitively, the verifier checks that \(\mathsf {C}_{sp}(X, Y)\) is correctly computed by checking that \(\mathcal {V}(X, Y) = 0\), where
Here, \((\mathsf {A}_s, \mathsf {B}_s)\) (the part of \((\mathsf {A}, \mathsf {B})\) that only depends on private information) is the functional commitment, \(\mathsf {C}_{sp}\) is the opening, and \(\mathsf {A}_p\), \(\mathsf {B}_p\), and \(\mathsf {C}_p\) can be recomputed by the verifier given public information.
The New SFC Scheme \(\varvec{\mathsf {FC^{}_{sn}}}\): Details. We are now ready to describe the new succinct functional commitment scheme \(\mathsf {FC^{}_{sn}}\), see Fig. 2. Here, instead of operating with bivariate polynomials like \(\mathsf {A}(X, Y)\), one operates with their encodings like \([\mathsf {A}_s (\chi , y)]_{\iota }\) in the source groups, where \(\chi \) and \(y\) are secret trapdoors. The commitment key of the SFC scheme contains the minimal amount of information needed to perform commitment, opening, and verification by honest parties. The expression of \(\mathsf {ck}\) in \(\mathsf {KC}\) has a generic form; one can replace the polynomials \(u_j (X)\), \(v_j (X)\), \(w_j (X)\) with their values evident from Eq. (6). Finally, \(\ell _j (X)\) (and thus also \(u_j (X)\), \(v_j (X)\), and \(w_j (X)\)) has degree \(\nu - 1\) and can thus be computed from \((X^i)_{i = 0}^{\nu - 1}\), while \(\ell (X)\) has degree \(\nu \). We explain in the correctness proof of Theorem 3 how to compute \([\mathsf {B}^{\mathsf {aux}}_i (\chi , y)]_{1}\).
Note that \(\mathsf {FC^{}_{sn}}\) can also be seen as a SNARK proving that \(\varvec{\mathcal {F}} (\varvec{\alpha }, \varvec{\beta }) = \varvec{\mathsf {\xi }}\), if we let the prover compute \([\mathsf {A}_p]_{1}\), \([\mathsf {B}_p]_{2}\), and \([\mathsf {C}_p]_{1}\).
Instantiation. Let \(\mathcal {C}\) be a fixed circuit. Let \(\mathcal {R}\) and \(\mathcal {S}\) be two sets of bivariate polynomials, such that the commitment key of \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is equal to \(\mathsf {ck}= ([\mathcal {R}(\chi , y)]_{1}, [\mathcal {S}(\chi , y)]_{2})\). Similarly to [33], let
be the set of exponents of Y in all polynomials from \(\mathcal {R}\). Let \(\mathsf {Crit}= \{2, \eta + 1\}\) and \(\overline{\mathsf {Crit}} = \mathsf {Mon}_1 \setminus \mathsf {Crit}\). For the evaluation-binding proof to hold, we need to fix the values of \(\gamma \), \(\delta \), and \(\eta \), such that the coefficients from \(\mathsf {Crit}\) are unique, i.e.,
That is, \(\mathsf {Crit}\cap \overline{\mathsf {Crit}} = \emptyset \) and \(|\mathsf {Crit}| = 2\). It follows from Theorem 1 that the polynomial \(f_i (X, Y) := \ell _{\nu - \kappa + i} (X) Y^{\eta + 1}\), \(i \in [1\,..\,\kappa ]\), does not belong to \({\text {span}}(\mathcal {R})\).
We will later consider two different choices of values for \(\gamma \), \(\delta \), and \(\eta \). Replacing \(\gamma \), \(\delta \), and \(\eta \) with \(1\), \(0\), and \(3\) guarantees that Eq. (11) holds (see Theorem 2, Item 1, for more). Then,
In this case, the \(\mathsf {ck}\) has one element (namely, \([1]_{1}\)) twice, and thus \(\mathsf {ck}\) can be shortened by one element.
Alternatively, replacing \(\gamma \), \(\delta \), and \(\eta \) with \(4\), \(0\), and \(7\) (this choice is sufficient for the evaluationbinding reduction to uberassumption in \(\mathbb {G}_T\) to work and will be explained in Theorem 2, Item 2), we get
Then, \(\mathsf {ck}\) has one element (\([1]_{1}\)) twice, and thus it can be shortened.
Efficiency. The CRS length is \(1 + \nu + 1 + (\nu - 1) + (\mu - \kappa ) + \kappa + 1 = 2 \nu + \mu + 2\) elements from \(\mathbb {G}_1\), \(\nu + 3\) elements from \(\mathbb {G}_2\), and 1 element from \(\mathbb {G}_T\). In the case of fixed \(\gamma \), \(\delta \), and \(\eta \) in the previous two paragraphs, the CRS length shortens by 1 element of \(\mathbb {G}_1\).
The functional commitment takes \((\nu + 1) + \kappa (\nu + 1) = (\kappa + 1) (\nu + 1)\) exponentiations in \(\mathbb {G}_1\) and \(\nu + 1\) exponentiations in \(\mathbb {G}_2\). The length of the functional commitment is \(\kappa + 1\) elements of \(\mathbb {G}_1\) and 1 element of \(\mathbb {G}_2\).
The opening takes \(\mu _\beta + \mu _\psi + \kappa \) (to compute \([\mathsf {A}_p]_{1}\); note that \(u_1 (X)\) and other similar polynomials are precomputed), \(\mu _\alpha + \mu _\beta + \mu _\phi + \mu _\psi \) (to compute \([v (\chi ) y]_{1}\)) and \(2 + (\nu - 1) + (\mu - \kappa ) = \nu + \mu - \kappa + 1\) (to compute \([\mathsf {C}_{sp}]_{1}\)) exponentiations in \(\mathbb {G}_1\), in total, \(\nu + \mu + \mu _\alpha + 2 \mu _\beta + \mu _\phi + 2 \mu _\psi + 1\) exponentiations. The length of the opening is 1 element of \(\mathbb {G}_1\).
The verification takes \((\mu _\beta + \mu _\psi + \kappa ) + \kappa = \mu _\beta + \mu _\psi + 2 \kappa \) (to compute \([\mathsf {A}_p, \mathsf {C}_p]_{1}\)) exponentiations in \(\mathbb {G}_1\), \(\mu _\beta + \mu _\psi \) (to compute \([\mathsf {B}_p]_{2}\)) exponentiations in \(\mathbb {G}_2\), and \(2 \kappa + 3\) pairings. Here, we do not count computations (e.g., the computation of \([\ell _{\nu - \kappa + i} (\chi ) y]_{1}\) from \([(\chi ^i y)_{i = 0}^{\nu - 1}]_{1}\)) that are only done once per CRS.
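To make the counts concrete, here is a small calculator that plugs the aggregated inner-product instantiation of Sect. 4 (\(\mu _\alpha = n\), \(\mu _\beta = \mu _\chi = \kappa n\), \(\mu _\phi = \mu _\psi = 0\)) into the formulas above; the parameter values are illustrative.

```python
# Cost calculator for the formulas above (group-element counts and
# exponentiation counts); the instantiation and values are illustrative.
def sfc_costs(mu_alpha, mu_beta, mu_phi, mu_psi, mu_chi, kappa):
    mu = 1 + mu_alpha + mu_beta + mu_phi + mu_psi + mu_chi + kappa
    nu = mu_phi + mu_psi + mu_chi + kappa
    return {
        "crs_G1": 2 * nu + mu + 2,            # G1 elements in the CRS
        "crs_G2": nu + 3,                     # G2 elements in the CRS
        "com_exp_G1": (kappa + 1) * (nu + 1), # exponentiations to commit
        "com_len": (kappa + 1, 1),            # (G1, G2) elements
        "open_len_G1": 1,                     # opening: 1 element of G1
    }

n, kappa = 1024, 8   # aggregated inner product: kappa openings, length n
c = sfc_costs(n, kappa * n, 0, 0, kappa * n, kappa)
assert c["open_len_G1"] == 1 and c["com_len"] == (kappa + 1, 1)
```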
The real efficiency, of course, depends significantly on the concrete application. We will give some detailed examples in the full version [34].
4 On the Circuit Class and Example Applications
Next, we study the power of the implementable circuit class \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\), and we show that many known functional commitment schemes are for functionalities that belong to this class, and thus can be implemented by \(\mathsf {FC^{}_{sn}}\).
In this section, we assume basic knowledge of algebraic complexity theory. See [37] for the necessary background. \(\mathbf {VP}\) is the class of polynomial families \(\{f_n\}\), where \(f_n\) is a polynomial of \(\mathsf {poly} (n)\) variables and \(\mathsf {poly} (n)\) degree that has an arithmetic circuit of size \(\mathsf {poly} (n)\) [39]. \(\mathrm {\Sigma \Pi \Sigma }\) (resp., \(\mathrm {\Sigma \Pi \Sigma \Pi }\)) is the class of depth-3 (resp., depth-4) circuits composed of alternating levels of sum and product gates with a sum gate at the top [37, Sect. 3.5]. Sparse polynomials are n-variate polynomials that have \(\mathsf {poly} (n)\) monomials.
Recall that a compiled circuit \(\mathcal {C}^*\) can evaluate a vector polynomial \(\varvec{f} (\varvec{\alpha }, \varvec{\beta }) = (f_i (\varvec{\alpha }, \varvec{\beta }))_{i = 1}^{\kappa }\) iff each \(f_i\) can be written as
where all polynomials \(\phi _j\) and \(\psi _k\) are in the complexity class \(\mathbf {VP}\), and there are a polynomial number of additions in the representation Eq. (12) (thus, also a polynomial number of polynomials \(\phi _j\) and \(\psi _k\)). We call such a representation an efficient \(\mathrm {\Sigma \Pi \forall }\)-representation (here, \(\forall \) denotes “any”) of \(\varvec{f}\), and we denote by \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) the class of circuits (or vector polynomials) that have an efficient \(\mathrm {\Sigma \Pi \forall }\)-representation. Clearly, \(\mathsf {FC^{}_{sn}}\) can implement \(\varvec{f}\) iff \(\varvec{f} \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\).
It is clear that all sparse polynomials in \(\mathbf {VP}\) have an efficient \(\mathrm {\Sigma \Pi \forall }\)-representation, and thus \(\mathsf {FC^{}_{sn}}\) can implement all sparse polynomials. However, we can do more. For example, consider the polynomial \(f' (\alpha , \varvec{\beta }) = \prod _{i = 1}^n (\alpha + \beta _i)\) for . Since \(f'\) has \(2^n\) monomials, it is not sparse. However, we can rewrite \(f'\) as \(f' (\alpha , \varvec{\beta }) = \sum _{d = 0}^n \alpha ^d \sigma _{n - d} (\varvec{\beta })\), where \(\sigma _{n - d} (\varvec{\beta }) = \sum _{T \subseteq [1\,..\,n], |T| = d} \prod _{i \in T} \beta _i\) is the \((n - d)\)th symmetric polynomial. There exists a \(\mathrm {\Sigma \Pi \Sigma }\) circuit of size \(O (n^2)\), due to Ben-Or (see [37, Sect. 3.5]), that computes all n symmetric polynomials in parallel. Thus, \(f'\) has an efficient \(\mathrm {\Sigma \Pi \forall }\) representation, and thus \(\mathsf {FC^{}_{sn}}\) can implement at least one non-sparse polynomial.
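The rewriting of \(f'\) via elementary symmetric polynomials can be verified numerically; the test values below are arbitrary.

```python
# Check f'(alpha, beta) = prod_i (alpha + beta_i)
#                       = sum_d alpha^d * sigma_{n-d}(beta),
# where sigma_d is the d-th elementary symmetric polynomial.
from itertools import combinations
from math import prod

def sigma(d, beta):
    # sum over all size-d subsets of beta of the product of their entries
    return sum(prod(t) for t in combinations(beta, d))

alpha, beta = 3, [1, 4, 6, 2]
n = len(beta)
lhs = prod(alpha + b for b in beta)
rhs = sum(alpha ** d * sigma(n - d, beta) for d in range(n + 1))
assert lhs == rhs   # both sides equal 1260 for these values
```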
On the other hand, \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\subseteq \mathbf {VP}\). To see that \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\subsetneq \mathbf {VP}\), consider the polynomial \(f'' (\varvec{\alpha }, \varvec{\beta }) = \prod _{i = 1}^n (\alpha _i + \beta _i)\) for . Since \(f''\) has \(2^n\) monomials, it is not sparse. Considering \(\beta _i\) as coefficients, it also has \(2^n\) monomials in \(\varvec{\alpha }\) (the case of considering \(\alpha _i\) as coefficients is dual), and thus any \(\mathrm {\Sigma \Pi \forall }\)-representation of \(f''\) requires at least \(2^n\) addition gates. Since \(f''\) can be implemented by a \(\mathrm {\Pi \Sigma }\) circuit [37], this means \(\mathrm {\Pi \Sigma }\not \subset \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\); however, clearly, also \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\not \subset \mathrm {\Pi \Sigma }\), so \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\) is incomparable to \(\mathrm {\Pi \Sigma }\). Thus
It is an interesting open problem to characterize \(\mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\). Motivated by our analysis of \(f''\), it seems we can implement all polynomials \(f (\varvec{\alpha }, \varvec{\beta })\) where either the dimension \(\mu _\alpha \) of \(\varvec{\alpha }\) or the dimension \(\mu _\beta \) of \(\varvec{\beta }\) is logarithmic. Indeed, if \(\mu _\alpha \) is logarithmic, then there are at most polynomially many possible monomials \(\phi _i (\varvec{\alpha })\) in \(\varvec{\alpha }\), and thus there exists an efficient \(\mathrm {\Sigma \Pi \forall }\)-representation of f.
Known Types of SFCs as (Semi-)Sparse Polynomials. In Table 1, we write down the functionalities of several previously known types of SFCs. This shows that in all such cases, one has a sparse polynomial and can thus use \(\mathsf {FC^{}_{sn}}\) to implement them. In none of these cases does one need the power of semi-sparse polynomials that are not sparse, and we leave it as another open question to find an application where such power is needed. In the case of the vector commitment scheme (resp., accumulator), one implements the inner-product scheme with \(\varvec{\beta } = \varvec{e}_I\) (resp., \(\chi _{\varvec{\alpha }} (X) = \prod (X - \alpha _i)\)). In the case of, say, the polynomial commitment scheme, \(\varvec{\beta } =(1, \beta , \ldots , \beta ^{n - 1})\) and thus \(\mu _\beta = n\).
Aggregation. The next lemma is straightforward.
Lemma 1
Assume that \(\mathcal {C}_i \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\), where \(i \in [1\,..\,Q]\), and . Then their parallel composition \(\mathcal {C}^{\Vert } = (\mathcal {C}_1 \Vert \ldots \Vert \mathcal {C}_Q) \in \mathbf {CC}_{\mathrm {\Sigma \Pi \forall }}\).
Proof
Obvious since we can just “parallelize” the representation in Eq. (12). \(\square \)
In practice, Lemma 1 is very important since it means that \(\mathsf {FC^{}_{sn}}\) allows one to aggregate a polynomial number of SFCs for which \(\mathsf {FC^{}_{sn}}\) is efficient. It just results in a larger circuit \(\mathcal {C}^{\Vert }\) (and thus larger parameters like \(\mu \) and \(\kappa \)). However, as the length of the commitment in \(\mathsf {FC^{}_{sn}}\) depends on \(\kappa \), it means that the commitment stays succinct when \(Q < \mathsf {wit}\). On the other hand, the length of the opening will be one group element, independently of Q.
As a corollary of Lemma 1, we can construct succinct aggregated inner-product SFCs, accumulators, (multi-point/multi-polynomial) polynomial commitment schemes, and vector commitment schemes (including subvector commitment schemes), but we can also aggregate all these SFC variants with each other. Due to the lack of space, we will give more details and examples in the full version [34].
Example: Succinct Aggregated Inner-Product Functional Commitment. In an aggregated SIPFC, the committer commits to \(\varvec{\alpha }\) and then opens it simultaneously to \(\left\langle \varvec{\alpha }, \varvec{\beta }_i \right\rangle = \sum _{j = 1}^n \alpha _j \beta _{i j}\) for \(\kappa \) different verifier-provided vectors \(\varvec{\beta }_i\), where \(i \in [1\,..\,\kappa ]\). Assume \(\varvec{\alpha }\) and each \(\varvec{\beta }_i\) are n-dimensional vectors. There is no circuit \(\mathcal {C}_\phi \) or \(\mathcal {C}_\psi \). Given \(\varvec{\alpha }\) and \(\varvec{\beta }_i\), \(\mathcal {C}_\chi \) computes \(\kappa n\) products \(\chi _{i j} (\varvec{\alpha }, \varvec{\beta }) = \alpha _j \beta _{i j}\), \(i \in [1\,..\,\kappa ]\) and \(j \in [1\,..\,n]\), and \(\mathcal {C}_\xi \) sums them together to obtain \(\kappa \) outputs \(\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta }) = \sum _{j = 1}^n \alpha _j \beta _{i j}\). Thus, , \(V_\chi = I_{\kappa n}\), (note that \(\mathcal {C}_\chi \) does not take 1 as an input), and
Here, \(\nu = \kappa (n + 1)\), \(\mu = 1 + n + \kappa n + \kappa = (\kappa + 1) n + \kappa + 1\), \(\mathsf {A}_s (X, Y) = r_a + \sum _{j = 1}^{n} \alpha _j u_{\chi j} (X) Y\), \(\mathsf {A}_p (X, Y) = \sum _{i = 1}^{\kappa } \left\langle \varvec{\alpha }, \varvec{\beta }_i \right\rangle \ell _{\nu - \kappa + i} (X) Y\). Importantly, \(\mathsf {B}_s (X, Y) = 0\) (since there is nothing to hide, one can set \(r_b \leftarrow 0\); hence, also \(\mathsf {B}^{\mathsf {aux}}_i (X, Y) = 0\); thus the commitment is only one group element, \([\mathsf {A}_s]_{1}\)), and \(\mathsf {B}_p (X, Y) = \sum _{i = 1}^{\kappa } \ell _{\nu - \kappa + i} (X) Y + \sum _{i = 1}^{\kappa } \sum _{j = 1}^{n} \beta _{i j} v_{\chi , n (i - 1) + j} (X) Y\). The verifier has to execute \(2 \kappa \) exponentiations in \(\mathbb {G}_1\) to compute \([\mathsf {A}_p]_{1}\) and \([\mathsf {C}_p]_{1}\), \(\kappa n\) exponentiations in \(\mathbb {G}_2\) to compute \([\mathsf {B}_p]_{2}\), and 3 pairings. We emphasize that here, both the functional commitment and the opening consist of a single group element. One obtains IPFC by setting \(\kappa \leftarrow 1\); in this case, the verification executes 2 exponentiations in \(\mathbb {G}_1\), n exponentiations in \(\mathbb {G}_2\), and 3 pairings.
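As a purely illustrative sketch (over the integers rather than the actual field, and ignoring commitments and blinders entirely), the plain functionality computed by the layers \(\mathcal {C}_\chi \) and \(\mathcal {C}_\xi \) in this example is:

```python
def aggregated_ip_outputs(alpha, betas):
    """Functionality of the aggregated inner-product SFC: given the
    committed vector alpha and kappa verifier-chosen vectors beta_i,
    C_chi forms the kappa*n products chi_ij = alpha_j * beta_ij and
    C_xi sums them into the kappa outputs <alpha, beta_i>."""
    n = len(alpha)
    outputs = []
    for beta_i in betas:
        assert len(beta_i) == n
        chi = [alpha[j] * beta_i[j] for j in range(n)]  # C_chi layer
        outputs.append(sum(chi))                        # C_xi layer
    return outputs

alpha = [3, 1, 4, 1]
betas = [[2, 0, 0, 5], [1, 1, 1, 1]]
assert aggregated_ip_outputs(alpha, betas) == [11, 9]
```

The \(\kappa n\) products correspond to the \(\chi _{i j}\) wires and the \(\kappa \) sums to the outputs \(\mathcal {F}_i\); setting \(\kappa = 1\) recovers the plain IPFC functionality.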
Let us briefly compare the resulting non-aggregated IPFC with the IPFC of [25]. Interestingly, while the presented IPFC is a simple specialization of the general SFC scheme, it is only slightly less efficient than [25]. Let \(g_\iota \) denote the bit-length of an element of the group \(\mathbb {G}_\iota \). The CRS length is \(2 n g_1 + (n + 1) g_2\) in [25], and \((3 (\kappa + 1) + (4 \kappa + 1) n) g_1 + (\kappa + \kappa n + 3) g_2 + 1 g_T\) (this shortens to \((5 n + 6) g_1 + (n + 4) g_2 + 1 g_T\) when \(\kappa = 1\)) in our case. The commitment takes \(n + 1\) exponentiations in [25], and \(n + 2\) in our case. Interestingly, a straightforward [25] opening takes \(\varTheta (n^2)\) multiplications (this can probably be optimized), while in our case it takes \(\varTheta (n \log n)\) multiplications. The verifier takes n exponentiations in [25], and \(n + 3\) here. The commitment and the opening are both a single group element in both schemes. Thus, our generic, unoptimized scheme is essentially as efficient as the most efficient known prior IPFC, losing ground only in the CRS length. On the other hand, we are not aware of any previous aggregated IPFC schemes.
5 Security of \(\mathsf {FC^{}_{sn}}\)
Next, we prove the security of \(\mathsf {FC^{}_{sn}}\). While its correctness and hiding proofs are straightforward, the evaluation-binding proof is far from it. As before, for a fixed \(\mathcal {C}\), let \(\mathcal {R}\) and \(\mathcal {S}\) be two sets of bivariate polynomials, s.t. \(\mathsf {ck}= ([\mathcal {R}(\chi , y)]_{1}, [\mathcal {S}(\chi , y)]_{2})\). For a fixed \(\mathcal {C}\), in Theorem 3, we will reduce the evaluation-binding of \(\mathsf {FC^{\mathcal {C}}_{sn}}\) to a \((\mathcal {R}, \mathcal {S}, \{f_i\})\)-span-uber-assumption in \(\mathbb {G}_1\), a new assumption that states that it is difficult to output an element \(\sum \varDelta _i [f_i (\chi , y)]_{1}\) together with the coefficient vector \(\varvec{\varDelta } \ne \varvec{0}\), where \(f_i \not \in {\text {span}}(\mathcal {R})\). Thus, it is a generalization of the \((\mathcal {R}, \mathcal {S}, \cdot )\)-computational uber-assumption in \(\mathbb {G}_1\). Importantly, if \(\kappa = 1\) then it is equivalent to the latter. To motivate the span-uber-assumption, we will show that it follows from the more conventional \((\mathcal {R}, \mathcal {S}, f'_I)\)-computational uber-assumption (for a related set of polynomials \(f'_i\)) in \(\mathbb {G}_T\) [7]; see Lemma 2. Thus, for the concrete parameters \(\mathcal {R}, \mathcal {S}, \{f_i\}\), and \(\{f'_i\}\), the span-uber-assumption is not stronger than the uber-assumption in \(\mathbb {G}_T\).
For the reduction to the PDL and HAK assumptions in the full version [34] to work, we also prove that \(f_i \not \in {\text {span}}(\mathcal {R})\) and \(f'_i \not \in {\text {span}}(\mathcal {R}\mathcal {S})\); see Theorem 2. (Intuitively, this is needed for the span-uber-assumptions to be secure in the generic model.) Each concrete proof (e.g., the proof of correctness, the proof of evaluation-binding, and the proofs that \(f_i \not \in {\text {span}}(\mathcal {R})\) and \(f'_i \not \in {\text {span}}(\mathcal {R}\mathcal {S})\)) puts some simple restrictions on the matrices U, V, W. They can usually be satisfied by slightly modifying the underlying arithmetic circuit.
Definition 6
Let \(\mathcal {R}\), \(\mathcal {S}\), and \(\mathcal {T}\) be three tuples of bivariate polynomials over . Let each \(f_i\) be a bivariate polynomial over . The \((\mathcal {R}, \mathcal {S}, \mathcal {T}, \{f_i\}_{i = 1}^{\kappa })\) computational span-uber-assumption for in group \(\mathbb {G}_\iota \), where \(\iota \in \{1, 2, T\}\), states that for any PPT adversary , , where
If \(\kappa = 1\) then the \((\mathcal {R}, \mathcal {S}, \mathcal {T}, \{f_1\})\) span-uber-assumption is the same as the \((\mathcal {R}, \mathcal {S}, \mathcal {T}, f_1)\) uber-assumption: in this case the adversary is tasked to output \(\varDelta \ne 0\) and \(\varDelta [f_1 (\chi , y)]_{\iota }\), which is equivalent to outputting \([f_1 (\chi , y)]_{\iota }\).
We will now show that the used polynomials are linearly independent.
Theorem 2
Write \(\mathsf {ck}= ([\varrho (X, Y): \varrho \in \mathcal {R}]_{1}, [\sigma (X, Y): \sigma \in \mathcal {S}]_{2})\) as in Fig. 2. For \(i \in [1\,..\,\kappa ]\), let \(f_i (X, Y) := \ell _{\nu - \kappa + i} (X) Y^{\eta + 1}\) and \(f'_i (X, Y) := (\ell _{\nu - \kappa + i} (X))^2 Y^{\eta + 2}\).

1.
Assume \(\gamma = 1\), \(\delta = 0\), and \(\eta = 3\). Assume Items a and h of Theorem 1 hold. Then \(f_i (X, Y) \not \in {\text {span}}(\mathcal {R})\) for \(i \in [1\,..\,\kappa ]\).

2.
Assume \(\gamma = 4\), \(\delta = 0\), \(\eta = 7\), and that Items a, b and h of Theorem 1 hold. Then \(f'_i (X, Y) \not \in {\text {span}}(\mathcal {R}\mathcal {S})\) for \(i \in [1\,..\,\kappa ]\).
Proof
(1: \(f_I \not \in {\text {span}}(\mathcal {R})\)). Let \(\mathsf {Mon}_1\) be as in Eq. (10) and \(\mathsf {Crit}= \{2, \eta + 1\}\). For the rest of the proof to make sense, as we will see in a few paragraphs, we need to fix \(\gamma \), \(\delta \), and \(\eta \) so that the exponents in \(\mathsf {Crit}\) and in \(\mathsf {Mon}_1 \setminus \mathsf {Crit}\) are different (in particular, the exponents in \(\mathsf {Crit}\) are different from each other). A small exhaustive search shows that one can define \(\gamma = 1\), \(\delta = 0\), \(\eta = 3\), as in the claim. This setting can be easily manually verified by noticing that \(\mathsf {Mon}_1 = \{0, 1, 2, 3, 4\}\), \(\mathsf {Crit}= \{2, 4\}\), and thus \(\mathsf {Mon}_1 \setminus \mathsf {Crit}= \{0, 1, 3\}\).
Assume that, for some I, \(f_I (X, Y) = \ell _{\nu  \kappa + I} (X) Y^{\eta + 1}\) belongs to the span of \(\mathcal {R}\). We consider the coefficients of \(Y^i\), for \(i \in \mathsf {Crit}\), in the resulting equality (for some unknown coefficients in front of the polynomials from \(\mathcal {R}\)), and derive a contradiction from this. Thus, we write down an arbitrary linear combination of polynomials in \(\mathcal {R}\) as a linear combination of \(u_j (X) Y^{\eta + 1} + v_j (X) Y^{\delta + 1} + w_j (X) Y^{2}\), \(X^i \ell (X) Y^{2}\), and T(X, Y), where T(X, Y) is some polynomial with monomials that do not have \(Y^i\) for \(i \in \mathsf {Crit}\). That is,
for some (thus \(t (X) \ell (X) Y^2\) encompasses all \(X^i \ell (X) Y^{2}\)) and integers \(t'_j\).
First, considering only the coefficient of \(Y^{2}\) on both the left-hand side and the right-hand side of Eq. (13),
Due to Item h of Theorem 1, either \(w_j (X) = 0\) or \(t'_j = 0\) for \(j \in [1\,..\,\mu - \kappa ]\). Let \(\mathcal {J} \subset [1\,..\,\mu - \kappa ]\) be the set of indices \(j \in [1\,..\,\mu - \kappa ]\) so that \(w_j (X) = 0\).
Second, considering only the coefficient of \(Y^{\eta + 1}\) in Eq. (13),
Due to Item a of Theorem 1, \(\ell _{\nu - \kappa + I} (X)\) is linearly independent of (the nonzero elements of) \(\{u_j (X)\}_{j \in \mathcal {J}}\), a contradiction. Hence, \(f_I (X, Y) \not \in {\text {span}}(\mathcal {R})\).
(Item 2: \(f'_I \not \in {\text {span}}(\mathcal {R}\mathcal {S})\)). For the proof to make sense, as we will see in a few paragraphs, we need that the set of critical exponents \(\mathsf {Crit}' := \{3, \eta + 2\}\) (which is different from \(\mathsf {Crit}\) above) is disjoint from the set \(\mathsf {Mon}'\setminus \mathsf {Crit}'\) of all other exponents in \(\mathcal {R}\mathcal {S}\), where \(\mathsf {Mon}'\)
is defined by \(\mathsf {Mon}' = \mathsf {Mon}_1 + \mathsf {Mon}_2\), where \(\mathsf {Mon}_1\) is as in Eq. (10) and \(\mathsf {Mon}_2 = \{0,1,\gamma ,\eta \}\) is the set of exponents of Y in all polynomials from \(\mathcal {S}\). A small exhaustive search, performed by using computer algebra, shows that one can define \(\gamma = 4\), \(\delta = 0\), \(\eta = 7\), as in the claim. This setting can be easily manually verified by noticing that \(\mathsf {Mon}' \setminus \mathsf {Crit}' = \{-3,-2,-1,0,1,2,4,5,6,7,8,11,12,14,15\}\) and \(\mathsf {Crit}' = \{3, 9\}\).
Assume now, for contradiction, that \(f'_I \in {\text {span}}(\mathcal {R}\mathcal {S})\). Then, as in Item 1, \((\ell _{\nu - \kappa + I} (X))^2 Y^{\eta + 2}\) is in the span of some polynomials containing \(Y^i\) for \(i \in \mathsf {Crit}'\) (and we need to quantify the coefficients of these polynomials) and of all other polynomials. Clearly, the first type of polynomials are in the span of \(X^i \ell (X) Y^2\) times \(Y^\eta \), \(X^i \ell (X) Y^2\) times \(X^k Y\), \(u_j (X) Y^{\eta + 1} + v_j (X) Y^{\delta + 1} + w_j (X) Y^2\) times \(Y^\eta \), and \(u_j (X) Y^{\eta + 1} + v_j (X) Y^{\delta + 1} + w_j (X) Y^2\) times \(X^k Y\), for properly chosen i, j, and k. Thus,
where \(t'_j (X)\), \(t^*_j (X)\), t(X) and \(t'' (X)\) are univariate polynomials, and T(X, Y) is a polynomial that does not contain monomials with \(Y^i\), \(i \in \mathsf {Crit}'\). We now consider separately the coefficients of \(Y^i\) in this equation for each \(i \in \mathsf {Crit}'\) and derive a contradiction.
First, considering the coefficients of \(Y^{3}\), we get \(\sum _{j = 1}^{\mu - \kappa } t^*_j (X) w_j (X) + t'' (X) \ell (X) = 0\). Due to Item h of Theorem 1, either \(t^*_j (X) = 0\) or \(w_j (X) = 0\) for \(1 \le j \le \mu - \kappa \). Let \(\mathcal {J} \subset [1\,..\,\mu - \kappa ]\) be the set of indices j so that \(w_j (X) = 0\).
Second, the coefficients of \(Y^{\eta + 2}\) give us
Due to Items a, b and h of Theorem 1 (and the fact that \((\ell _{\nu - \kappa + i} (X))^2\) has degree \(2 \nu \)), \(\{(\ell _{\nu - \kappa + I} (X))^2\} \cup \{u_j (X)\}_{j \in \mathcal {J}} \cup \{w_j (X)\}_{j \not \in \mathcal {J}} \cup \{X^i \ell (X)\}_{i = 0}^{\nu - 2}\) is linearly independent. This is a contradiction, and thus \(f'_I (X, Y) \not \in {\text {span}}(\mathcal {R}\mathcal {S})\). \(\square \)
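Linear-independence claims of this kind can also be checked mechanically on toy parameters. The following self-contained Python sketch (assuming, as is standard in SNARK-style constructions, that the \(\ell _i\) are Lagrange basis polynomials over an interpolation domain; the prime 97 is only a stand-in for the actual field order) verifies independence by computing the rank of the coefficient matrix over a small prime field:

```python
P = 97  # small prime standing in for the field order

def poly_mul(f, g):
    """Product of two polynomials over GF(P), coefficients low-degree first."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return h

def lagrange_basis(points):
    """Lagrange polynomials ell_i with ell_i(points[j]) = 1 iff i == j."""
    basis = []
    for xi in points:
        f = [1]
        for xj in points:
            if xj == xi:
                continue
            inv = pow((xi - xj) % P, P - 2, P)       # 1/(xi - xj) mod P
            f = poly_mul(f, [(-xj) * inv % P, inv])  # times (X - xj)/(xi - xj)
        basis.append(f)
    return basis

def rank_mod_p(rows):
    """Rank of a matrix over GF(P) via Gaussian elimination."""
    width = max(len(r) for r in rows)
    m = [r + [0] * (width - len(r)) for r in rows]
    rk = 0
    for col in range(width):
        piv = next((i for i in range(rk, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        inv = pow(m[rk][col], P - 2, P)
        m[rk] = [x * inv % P for x in m[rk]]
        for i in range(len(m)):
            if i != rk and m[i][col]:
                c = m[i][col]
                m[i] = [(x - c * y) % P for x, y in zip(m[i], m[rk])]
        rk += 1
    return rk

pts = [1, 2, 3, 4, 5]
ells = lagrange_basis(pts)
# The ell_i themselves are linearly independent ...
assert rank_mod_p(ells) == len(pts)
# ... and so is a square of one of them together with the remaining ell_i
# (degree 2(nu - 1) versus nu - 1), mirroring the independence argument above.
assert rank_mod_p([poly_mul(ells[0], ells[0])] + ells[1:]) == len(pts)
```

The same rank computation applies verbatim to any of the polynomial families appearing in Theorem 2, once their coefficient vectors are written out.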
Next, we show that for the concrete choice of the parameters \(\mathcal {R}\), \(\mathcal {S}\), \(f_i\), and \(f'_i\), the span-uber-assumption in \(\mathbb {G}_1\) is at least as strong as the uber-assumption in \(\mathbb {G}_T\). The new assumption may be weaker since the latter assumption argues about elements in \(\mathbb {G}_T\), which may not always be possible [26]. However, the proof of Lemma 2 depends crucially on the concrete parameters.
Lemma 2
(Uber-assumption in \(\mathbb {G}_T\) \(\Rightarrow \) span-uber-assumption). Assume \(\gamma = 4\), \(\delta = 0\), and \(\eta = 7\). Let \(\mathsf {FC^{\mathcal {C}}_{sn}}\) be the SFC scheme for arithmetic circuits in Fig. 2. Write \(\mathsf {ck}= ([\varrho (X, Y): \varrho \in \mathcal {R}]_{1}, [\sigma (X, Y): \sigma \in \mathcal {S}]_{2})\) as in Fig. 2. For \(i \in [1\,..\,\kappa ]\), let \(f_i (X, Y) := \ell _{\nu - \kappa + i} (X) Y^{\eta + 1}\) and \(f'_i (X, Y) := (\ell _{\nu - \kappa + i} (X))^2 Y^{\eta + 2}\). If the \((\mathcal {R}, \mathcal {S}, f'_I)\) computational uber-assumption holds in \(\mathbb {G}_T\) for each \(I \in [1\,..\,\kappa ]\) then the \((\mathcal {R}, \mathcal {S}, \{f_i\}_{i = 1}^{\kappa })\) computational span-uber-assumption holds in \(\mathbb {G}_1\).
Proof (Sketch)
Assume is an adversary against the \((\mathcal {R}, \mathcal {S}, \{f_i\}_{i = 1}^{\kappa })\) computational span-uber-assumption that has successfully output \(\varvec{\varDelta } \ne \varvec{0}\) and \([z]_{1} = \sum _{i = 1}^{\kappa } \varDelta _i [f_i (\chi , y)]_{1} = \sum _{i = 1}^{\kappa } \varDelta _i [\ell _{\nu - \kappa + i} (\chi ) y^{\eta + 1}]_{1}\).
Since \(\varvec{\varDelta } \ne \varvec{0}\), there exists at least one coordinate I such that \(\varDelta _I \ne 0\). Let be the following adversary against the \((\mathcal {R}, \mathcal {S}, f'_I)\) computational uber-assumption in \(\mathbb {G}_T\). Given \(\mathsf {ck}\) and \([z]_{1}\), computes
Let \(d_i (X)\) be the rational function satisfying \(d_i (X) \ell (X) = \ell _{\nu - \kappa + i} (X) \ell _{\nu - \kappa + I} (X)\). Clearly, \(d_i (X)\) is a polynomial for \(i \ne I\). Thus, \(d (X) := \sum _{i \ne I} \varDelta _i / \varDelta _I \cdot d_i (X)\) is a polynomial of degree \(\le \nu - 2\). Since \([y^\eta ]_{2}\) is a part of the commitment key, can efficiently compute \(\sum _{i \ne I} \varDelta _i / \varDelta _I \cdot [\ell _{\nu - \kappa + i} (\chi ) y^{\eta + 1}]_{1} \bullet [\ell _{\nu - \kappa + I} (\chi ) y]_{2} = \sum _{i \ne I} \varDelta _i / \varDelta _I \cdot [d_i (\chi ) \ell (\chi ) y^{2}]_{1} \bullet [y^\eta ]_{2} = [d (\chi ) \ell (\chi ) y^{2}]_{1} \bullet [y^\eta ]_{2}\). Thus, can compute
and break the \((\mathcal {R}, \mathcal {S}, f'_I)\)-computational uber-assumption in \(\mathbb {G}_T\). \(\square \)
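The divisibility facts this reduction relies on, namely that the vanishing polynomial \(\ell (X)\) divides the product of two distinct Lagrange polynomials, are easy to check on toy parameters. A minimal self-contained Python sketch (again assuming the \(\ell _i\) are Lagrange basis polynomials, with a small prime as a stand-in for the field order):

```python
P = 97  # toy prime standing in for the field order

def pmul(f, g):
    """Product of two polynomials over GF(P), coefficients low-degree first."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return h

def pmod(f, g):
    """Remainder of f modulo a monic polynomial g over GF(P)."""
    f = f[:]
    while len(f) >= len(g):
        c = f[-1]
        if c:  # subtract c * X^(deg f - deg g) * g
            s = len(f) - len(g)
            for i, b in enumerate(g):
                f[s + i] = (f[s + i] - c * b) % P
        f.pop()  # the leading coefficient is now zero
    return f

def lagrange_basis(pts):
    """ell_i(X) with ell_i(pts[j]) = 1 if i == j else 0."""
    basis = []
    for xi in pts:
        f = [1]
        for xj in pts:
            if xj != xi:
                inv = pow((xi - xj) % P, P - 2, P)
                f = pmul(f, [(-xj) * inv % P, inv])  # (X - xj)/(xi - xj)
        basis.append(f)
    return basis

pts = [1, 2, 3, 4]
ell = [1]
for w in pts:                      # vanishing polynomial ell(X) = prod (X - w)
    ell = pmul(ell, [(-w) % P, 1])
L = lagrange_basis(pts)

for i in range(len(pts)):
    for j in range(len(pts)):
        r = pmod(pmul(L[i], L[j]), ell)
        if i != j:
            assert not any(r)      # ell | ell_i * ell_j for i != j
        else:
            assert r == L[i]       # ell_i^2 = ell_i mod ell
```

Identities of exactly this shape are why \(d_i (X)\) above (and later \(h'_i (X)\) in the proof of Theorem 3) is a polynomial rather than a mere rational function.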
Theorem 3
(Security of \(\mathsf {FC^{}_{sn}}\)). Let \(\mathcal {C}\) be a fixed circuit and let \(\mathsf {FC^{\mathcal {C}}_{sn}}\) be the SFC scheme in Fig. 2. Let \(\mathsf {ck}= ([\varrho (X, Y): \varrho \in \mathcal {R}]_{1}, [\sigma (X, Y): \sigma \in \mathcal {S}]_{2})\) be as in Fig. 2. For \(i \in [1\,..\,\kappa ]\), let \(f_i (X, Y) := \ell _{\nu - \kappa + i} (X) Y^{\eta + 1}\).

1.
Assume Item c of Theorem 1 holds. Then \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is correct.

2.
\(\mathsf {FC^{\mathcal {C}}_{sn}}\) is perfectly com-hiding.

3.
\(\mathsf {FC^{\mathcal {C}}_{sn}}\) is perfectly open-hiding.

4.
\(\mathsf {FC^{\mathcal {C}}_{sn}}\) is perfectly zero-knowledge.

5.
Assume that either \(\gamma = 1\), \(\delta = 0\), and \(\eta = 3\) or \(\gamma = 4\), \(\delta = 0\), and \(\eta = 7\). Assume that Items d to g, i and j of Theorem 1 hold. If the \((\mathcal {R}, \mathcal {S}, \{f_i\})\)-computational span-uber-assumption holds in \(\mathbb {G}_1\) then the SFC scheme \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is computationally evaluation-binding.
Proof
(1: correctness). We first show that the prover can compute \(\mathsf {B}^{\mathsf {aux}}_i (X, Y)\), and then that the verification equation holds. Recall that for \(i \in [1\,..\,\kappa ]\), \(\mathsf {B}^{\mathsf {aux}}_i (X, Y) = \ell _{\nu - \kappa + i} (X) \mathsf {B}_s (X, Y) Y = \ell _{\nu - \kappa + i} (X) (r_b + v_s (X, Y) Y) Y\), where \(v_s (X)\) is as in Eq. (8). First, the addend \(r_b \ell _{\nu - \kappa + i} (X) Y\) belongs to the span of \((X^i Y)_{i = 0}^{\nu - 1} \subset \mathcal {R}\). Second, due to Item c of Theorem 1, for all \(j \in [2\,..\,1 + \mu _\alpha + \mu _\phi ]\),
and thus \(\mathsf {B}^{\mathsf {aux}}_i (X, Y) - r_b \ell _{\nu - \kappa + i} (X) Y\) is equal to \(b'_{i} (X) \ell (X) Y^{2}\) for some polynomial . Thus, \(\mathsf {B}^{\mathsf {aux}}_i (X, Y) \in {\text {span}}(\mathcal {R})\) and the committer can compute \([\ell _{\nu - \kappa + i} (\chi ) \mathsf {B}_s y]_{1} = [\mathsf {B}^{\mathsf {aux}}_i (\chi , y)]_{1}\).
Assume that , \(([\mathsf {A}_s, \{\mathsf {B}^{\mathsf {aux}}_i\}_{i = 1}^{\kappa }]_{1}, [\mathsf {B}_s]_{2}) \leftarrow \mathsf {com} (\mathsf {ck}; \varvec{\alpha }; r_a, r_b)\) and \([\mathsf {C}_{sp}]_{1} \leftarrow \mathsf {open}(\mathsf {ck}; ([\mathsf {A}_s, \{\mathsf {B}^{\mathsf {aux}}_i\}_{i = 1}^{\kappa }]_{1}, [\mathsf {B}_s]_{2}), (\varvec{\alpha }, r_a, r_b), \varvec{\beta })\). It is clear that then the verifier accepts.
(2: perfect com-hiding). Follows from the fact that \(([\mathsf {A}_s]_{1}, [\mathsf {B}_s]_{2})\) is perfectly masked by uniformly random . Moreover, \([\mathsf {B}^{\mathsf {aux}}_i]_{1}\) are publicly verifiable functions of \([\mathsf {B}_s]_{2}\).
(3: perfect open-hiding). Due to com-hiding and the fact that \([\mathsf {A}_p]_{1}\), \([\mathsf {B}_p]_{2}\), and \([\mathsf {C}_p]_{1}\) only depend on \((\varvec{\beta }, \{\mathcal {F}_i (\varvec{\alpha }, \varvec{\beta })\})\) (and not otherwise on \(\varvec{\alpha }\)), the distribution of all elements in the opening (except possibly \([\mathsf {C}_{sp}]_{1}\)) is the same for any two vectors \(\varvec{\alpha }_1\) and \(\varvec{\alpha }_2\) that satisfy \(\mathcal {F}_i (\varvec{\alpha }_1, \varvec{\beta }) = \mathcal {F}_i (\varvec{\alpha }_2, \varvec{\beta })\) for all i. Since \([\mathsf {C}_{sp}]_{1}\) is the unique element that makes the verifier accept, the same claim holds for the whole opening, and \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is open-hiding.
(4: perfect zero-knowledge). We construct as follows. It has \((\chi , y)\) as the trapdoor. It samples random , and then sets \([\mathsf {B}^{\mathsf {aux}}_i]_{1} \leftarrow [\ell _{\nu - \kappa + i} (\chi ) y\mathsf {B}_s]_{1}\) for all i. It computes \(\mathsf {B}_p\) (by using the trapdoors), \([\mathsf {A}_p]_{1}\), and \([\mathsf {C}_p]_{1}\). It then computes the unique \([\mathsf {C}_{sp}]_{1}\) that makes the verifier accept,
(5: evaluation-binding). Assume that is an evaluation-binding adversary that, with probability and in time , returns a collision \((([\mathsf {A}_s, \{\mathsf {B}^{\mathsf {aux}}_i\}_{i = 1}^{\kappa }]_{1}, [\mathsf {B}_s]_{2}); \varvec{\beta }; \{\mathsf {\xi }_i\}, [\mathsf {C}_{sp}]_{1}, \{\tilde{\mathsf {\xi }}_i\}, [\tilde{\mathsf {C}}_{sp}]_{1})\) with \(\varvec{\mathsf {\xi }} \ne \varvec{\tilde{\mathsf {\xi }}}\), such that (here, \([\mathsf {A}_p, \mathsf {C}_p]_{1}\) / \([\tilde{\mathsf {A}}_p, \tilde{\mathsf {C}}_p]_{1}\) is the opening in the collision),
and \([\ell _{\nu - \kappa + i} (\chi ) y]_{1} \bullet [\mathsf {B}_s]_{2} = [\mathsf {B}^{\mathsf {aux}}_i]_{1} \bullet [1]_{2}\) for \(i \in [1\,..\,\kappa ]\). Here we used the fact that by Items f and j of Theorem 1 (see also the definition of \(u_p (X)\) and \(v_p (X)\) in Eqs. (7) and (8)), the value of \([\mathsf {B}_p]_{2}\) stays the same in both openings.
We now construct an adversary against the computational span-uber-assumption in \(\mathbb {G}_1\). From the collision, by subtracting the second equation from the first equation and then moving everything from \(\mathbb {G}_T\) (the result of pairings) to \(\mathbb {G}_1\),
Denote \(\varDelta _i := \mathsf {\xi }_i - \tilde{\mathsf {\xi }}_i\). Let \(\varvec{\mathsf {a}}\) and \(\varvec{\tilde{\mathsf {a}}}\) be the witnesses used by when creating the collision. Without any further assumptions (see Eqs. (7) and (9)), \(A_p (X) - \tilde{A}_p (X) = \sum _{j = \mu _\alpha + \mu _\phi + 2}^{\mu } (\mathsf {a}_j - \tilde{\mathsf {a}}_j) u_j (X) Y = \sum _{j = \mu - \mu _\chi - \kappa }^{\mu - \kappa } (\mathsf {a}_j - \tilde{\mathsf {a}}_j) u_j (X) Y + \sum _{j = \mu - \kappa + 1}^{\mu } \varDelta _{j - (\mu - \kappa )} u_j (X) Y\). (This is since for \(j \le \mu _\alpha + \mu _\phi + 1\), \(\mathsf {a}_j = \tilde{\mathsf {a}}_j\) is either fixed by the commitment or can be recomputed by the verifier from \(\varvec{\beta }\) alone.) Thus, Eq. (14) is equivalent to
Assuming Items e to g of Theorem 1,
Assuming additionally Item i of Theorem 1,
Let \([z]_{1} := \sum _{i = 1}^{\kappa } \varDelta _i [\ell _{\nu - \kappa + i} (\chi ) y^{\eta + 1}]_{1} (= \sum \varDelta _i [f_i (\chi , y)]_{1})\). In what follows, we show that can compute \([z]_{1}\) and thus break the span-uber-assumption. From the last displayed equation, we get
(The last equation is guaranteed by \([\ell _{\nu - \kappa + i} (\chi ) y]_{1} \bullet [\mathsf {B}_s]_{2} = [\mathsf {B}^{\mathsf {aux}}_i]_{1} \bullet [1]_{2}\).)
We now show how to efficiently compute \([\ell _{\nu - \kappa + i} (\chi ) (\mathsf {B}_p - y) y]_{1}\). Let \(t (X) = v_p (X) - \sum _{i = 1}^{\kappa } \ell _{\nu - \kappa + i} (X)\). Let \(h'_i (X)\) be the rational function that satisfies
Due to Item d of Theorem 1 and the definition of t(X) (see also Eqs. (7) and (8)),
Moreover, \(\ell (X) \mid \ell _{\nu - \kappa + i} (X) \ell _{\nu - \kappa + j} (X)\) for \(i \ne j\), and \(\ell (X) \mid \ell _{\nu - \kappa + i} (X) (\ell _{\nu - \kappa + i} (X) - 1)\). Thus, the polynomial on the right-hand side of Eq. (15) is divisible by \(\ell (X)\). Thus, \(h'_i (X)\) is a polynomial of degree \(\le \nu - 2\) and thus can compute efficiently
and then
Thus, given the collision, outputs \((\varvec{\varDelta }, [z]_{1} = \sum \varDelta _i [f_i (\chi , y)]_{1})\) for \(f_i (X, Y) \not \in {\text {span}}(\mathcal {R})\). Thus, breaks (w.p. and time close to ) the \((\mathcal {R}, \mathcal {S}, \{f_i\})\)-computational span-uber-assumption in \(\mathbb {G}_1\) in the case \(f_i \not \in {\text {span}}(\mathcal {R})\). \(\square \)
The following corollary follows from Item 5 of Theorem 3 and Lemma 2.
Corollary 1
Let \(\mathcal {C}\) be a fixed circuit. Let \(\gamma = 4\), \(\delta = 0\), and \(\eta = 7\). Let \(f'_i (X, Y) := (\ell _{\nu - \kappa + i} (X))^2 Y^{\eta + 2}\). If the \((\mathcal {R}, \mathcal {S}, f'_I)\)-computational uber-assumption holds in \(\mathbb {G}_T\) for all \(I \in [1\,..\,\kappa ]\) then \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is computationally evaluation-binding.
Remark 2
Importantly, the indeterminate Y is crucial in establishing the independence of \(f_i\) from \(\mathcal {R}\). Let \(\mathcal {R}^* := \{(X^i)_{i = 0}^{\nu - 1}, (X^i \ell (X))_{i = 0}^{\nu - 2}\}\), \(\mathcal {S}^* := \{(X^i)_{i = 0}^{\nu - 1}\}\), and \(f^*_i := \ell _{\nu - \kappa + i} (X)\). One can establish that \(\mathsf {FC^{\mathcal {C}}_{sn}}\) is evaluation-binding under the \((\mathcal {R}^*, \mathcal {S}^*, \{f^*_i\})\)-computational span-uber-assumption in \(\mathbb {G}_1\). Really, consider the following \((\mathcal {R}^*, \mathcal {S}^*, \{f^*_i\})\)-span-uber-assumption adversary that will create \(y\) herself, generate a new \(\mathsf {ck}\) based on her input and \(y\), and then use in Theorem 3 to break the \((\mathcal {R}^*, \mathcal {S}^*, \{f^*_i\})\)-computational span-uber-assumption. will have similar success as . However, \(f^*_i \in {\text {span}}(\mathcal {R}^*)\) and thus the \((\mathcal {R}^*, \mathcal {S}^*, \{f^*_i\})\)-computational span-uber-assumption itself is not secure.
On the Security of the Span-Uber-Assumption. It is known that in composite-order bilinear groups, the computational uber-assumption in \(\mathbb {G}_T\) holds under appropriate subgroup hiding assumptions [13]. Hence, a composite-order group version of the span-uber-assumption (and also of the new SFC) is secure under a subgroup hiding assumption. In the full version [34], we will use the Déjà Q approach of [14] directly to prove that the span-uber-assumption in \(\mathbb {G}_\iota \), \(\iota \in \{1, 2\}\), is secure under a subgroup hiding assumption. More precisely, we establish the following theorem. (See the full version [34] for the definition of subgroup hiding and extended adaptive parameter hiding.)
Theorem 4
The \((\mathcal {R}, \mathcal {S}, \{f_i\}_{i = 1}^\kappa )\)-computational span-uber-assumption holds in the source group \(\mathbb {G}_1\) with all but negligible probability if

1.
subgroup hiding holds in \(\mathbb {G}_1\) with respect to \(\mu =\{\mathsf {P}^2_1, \mathsf {P}^1_2\}\),

2.
subgroup hiding holds in \(\mathbb {G}_2\) with respect to \(\mu =\{\mathsf {P}^1_1\}\),

3.
extended adaptive parameter hiding holds with respect to \(\mathcal {R}\cup \{f_i\}_{i = 1}^\kappa \) and \(\mathsf {aux}= \{ {\mathsf {P}^1_2}^{\sigma (\cdot )}\}_{\sigma \in \mathcal {S}}\) for any \(\mathsf {P}^1_2\in \mathbb {G}_2\).

4.
the polynomials in \(\mathcal {R}\) have maximum degree .
Here, \(\mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T\) are additive groups of composite order \(N = p_1 p_2\) (\(p_1\ne p_2\)) and \(\mathsf {P}^1_\iota \in \mathbb {G}_{\iota , p_1}\), \(\mathsf {P}^2_\iota \in \mathbb {G}_{\iota , p_2}\) are randomly sampled subgroup generators, where \(\mathbb {G}_{\iota , p_j}\) is the subgroup of \(\mathbb {G}_\iota \) of order \(p_j\) and \(\mathsf {P}_\iota \in \mathbb {G}_\iota = \mathbb {G}_{\iota , p_1} \oplus \mathbb {G}_{\iota , p_2}\).
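As a toy illustration of this composite-order structure (purely for intuition: we model an additive group of order \(N = p_1 p_2\) as \(\mathbb {Z}_N \cong \mathbb {Z}_{p_1} \oplus \mathbb {Z}_{p_2}\), with tiny non-cryptographic primes; real instantiations use bilinear groups with cryptographically sized moduli), the subgroup generators and the CRT decomposition behave as follows:

```python
from random import randrange

p1, p2 = 101, 103          # toy primes; real schemes use cryptographic sizes
N = p1 * p2

# In the additive group Z_N, the subgroup of order p1 consists of the
# multiples of p2, and vice versa: Z_N = G_{p1} + G_{p2} (CRT).
P1 = p2 * randrange(1, p1)     # random generator of the order-p1 subgroup
P2 = p1 * randrange(1, p2)     # random generator of the order-p2 subgroup

assert (p1 * P1) % N == 0 and all((k * P1) % N for k in range(1, p1))
assert (p2 * P2) % N == 0 and all((k * P2) % N for k in range(1, p2))

# Every group element decomposes uniquely as a p1-part plus a p2-part.
x = randrange(N)
# CRT projection onto the order-p1 subgroup: kill the p2-component.
x1 = (x * p2 * pow(p2, -1, p1)) % N
x2 = (x - x1) % N
assert (p1 * x1) % N == 0 and (p2 * x2) % N == 0
assert (x1 + x2) % N == x
```

Subgroup hiding asserts that a random element of the full group is computationally indistinguishable from a random element of one subgroup; the decomposition above is exactly what a reduction manipulates in the Déjà Q approach.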
The direct proof in the full version [34] is simpler than the mentioned two-step proof since it does not rely on the intermediate step of reducing the span-uber-assumption to an uber-assumption in \(\mathbb {G}_T\). Moreover, the Déjà Q approach is more straightforward in the case one works in the source group. We leave it to future work to reduce the prime-order span-uber-assumption to a simpler assumption; there has been almost no prior work on reducing prime-order assumptions.
Finally, in the full version [34], by following [33], we will prove that the span-uber-assumption is secure under a hash algebraic knowledge (HAK) assumption and the well-known PDL assumption [31], from which it follows that is secure in the algebraic group model (with hashing) [19] under the PDL assumption.^{Footnote 1} Following the semi-generic-group model of [26], the HAK assumptions of [33] are defined only in the case when the adversary outputs elements in the source groups (but not in \(\mathbb {G}_T\)), and thus one cannot prove the security of the computational uber-assumption in \(\mathbb {G}_T\) using the approach of [33]. Thus, in a well-defined sense, the span-uber-assumption is weaker than the uber-assumption in \(\mathbb {G}_T\).
Notes
 1.
As a corollary of independent interest, we also show in the full version [34] that if \(f\not \in {\text {span}}(\mathcal {R})\) then the \((\mathcal {R}, \mathcal {S}, \mathcal {T})\)-uber-assumption follows from HAK and PDL.
References
Abdolmaleki, B., Baghery, K., Lipmaa, H., Zając, M.: A subversionresistant SNARK. In: Takagi, T., Peyrin, T. (eds.) ASIACRYPT 2017, Part III. LNCS, vol. 10626, pp. 3–33. Springer, Cham (2017). https://doi.org/10.1007/9783319707006_1
Abdolmaleki, B., Lipmaa, H., Siim, J., Zając, M.: On QANIZK in the BPK model. In: Kiayias, A., Kohlweiss, M., Wallden, P., Zikas, V. (eds.) PKC 2020, Part I. LNCS, vol. 12110, pp. 590–620. Springer, Cham (2020). https://doi.org/10.1007/9783030453749_20
Barić, N., Pfitzmann, B.: Collisionfree accumulators and failstop signature schemes without trees. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 480–494. Springer, Heidelberg (1997). https://doi.org/10.1007/3540690530_33
Bellare, M., Fuchsbauer, G., Scafuro, A.: NIZKs with an untrusted CRS: security in the face of parameter subversion. In: Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016, Part II. LNCS, vol. 10032, pp. 777–804. Springer, Heidelberg (2016). https://doi.org/10.1007/9783662538906_26
Benaloh, J., de Mare, M.: Oneway accumulators: a decentralized alternative to digital signatures. In: Helleseth, T. (ed.) EUROCRYPT 1993. LNCS, vol. 765, pp. 274–285. Springer, Heidelberg (1994). https://doi.org/10.1007/3540482857_24
Bitansky, N.: Verifiable random functions from noninteractive witnessindistinguishable proofs. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017, Part II. LNCS, vol. 10678, pp. 567–594. Springer, Cham (2017). https://doi.org/10.1007/9783319705033_19
Boneh, D., Boyen, X., Goh, E.J.: Hierarchical identity based encryption with constant size ciphertext. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 440–456. Springer, Heidelberg (2005). https://doi.org/10.1007/11426639_26
Boneh, D., Bünz, B., Fisch, B.: Batching techniques for accumulators with applications to IOPs and stateless blockchains. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019, Part I. LNCS, vol. 11692, pp. 561–586. Springer, Cham (2019). https://doi.org/10.1007/9783030269487_20
Boneh, D., Drake, J., Fisch, B., Gabizon, A.: Efficient polynomial commitment schemes for multiple points and polynomials. Technical report 2020/081, IACR (2020)
Bowe, S., Grigg, J., Hopwood, D.: Halo: recursive proof composition without a trusted setup. Technical report (2019). https://electriccoin.co/wp-content/uploads/2019/09/Halo.pdf
Boyen, X.: The uber-assumption family. In: Galbraith, S.D., Paterson, K.G. (eds.) Pairing 2008. LNCS, vol. 5209, pp. 39–56. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85538-5_3
Catalano, D., Fiore, D.: Vector commitments and their applications. In: Kurosawa, K., Hanaoka, G. (eds.) PKC 2013. LNCS, vol. 7778, pp. 55–72. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36362-7_5
Chase, M., Maller, M., Meiklejohn, S.: Déjà Q all over again: tighter and broader reductions of q-type assumptions. In: Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016, Part II. LNCS, vol. 10032, pp. 655–681. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53890-6_22
Chase, M., Meiklejohn, S.: Déjà Q: using dual systems to revisit q-type assumptions. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 622–639. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55220-5_34
Chiesa, A., Hu, Y., Maller, M., Mishra, P., Vesely, N., Ward, N.: Marlin: preprocessing zkSNARKs with universal and updatable SRS. In: Canteaut, A., Ishai, Y. (eds.) EUROCRYPT 2020, Part I. LNCS, vol. 12105, pp. 738–768. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45721-1_26
Dent, A.W.: Adapting the weaknesses of the random oracle model to the generic group model. In: Zheng, Y. (ed.) ASIACRYPT 2002. LNCS, vol. 2501, pp. 100–109. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-36178-2_6
Fischlin, M.: A note on security proofs in the generic model. In: Okamoto, T. (ed.) ASIACRYPT 2000. LNCS, vol. 1976, pp. 458–469. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44448-3_35
Fuchsbauer, G.: Subversion-zero-knowledge SNARKs. In: Abdalla, M., Dahab, R. (eds.) PKC 2018, Part I. LNCS, vol. 10769, pp. 315–347. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-76578-5_11
Fuchsbauer, G., Kiltz, E., Loss, J.: The algebraic group model and its applications. In: Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018, Part II. LNCS, vol. 10992, pp. 33–62. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96881-0_2
Gennaro, R., Gentry, C., Parno, B., Raykova, M.: Quadratic span programs and succinct NIZKs without PCPs. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 626–645. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38348-9_37
Gentry, C., Wichs, D.: Separating succinct non-interactive arguments from all falsifiable assumptions. In: 43rd ACM STOC, pp. 99–108 (2011)
Gorbunov, S., Reyzin, L., Wee, H., Zhang, Z.: PointProofs: aggregating proofs for multiple vector commitments. Technical report 2020/419, IACR (2020)
Groth, J.: Short pairing-based non-interactive zero-knowledge arguments. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp. 321–340. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17373-8_19
Groth, J.: On the size of pairing-based non-interactive arguments. In: Fischlin, M., Coron, J.S. (eds.) EUROCRYPT 2016, Part II. LNCS, vol. 9666, pp. 305–326. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49896-5_11
Izabachène, M., Libert, B., Vergnaud, D.: Block-wise P-signatures and non-interactive anonymous credentials with efficient attributes. In: Chen, L. (ed.) IMACC 2011. LNCS, vol. 7089, pp. 431–450. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25516-8_26
Jager, T., Rupp, A.: The semi-generic group model and applications to pairing-based cryptography. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp. 539–556. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17373-8_31
Kate, A., Zaverucha, G.M., Goldberg, I.: Constant-size commitments to polynomials and their applications. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp. 177–194. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17373-8_11
Lai, R.W.F., Malavolta, G.: Subvector commitments with application to succinct arguments. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019, Part I. LNCS, vol. 11692, pp. 530–560. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26948-7_19
Libert, B., Ramanna, S.C., Yung, M.: Functional commitment schemes: from polynomial commitments to pairing-based accumulators from simple assumptions. In: ICALP 2016. LIPIcs, vol. 55, pp. 30:1–30:14 (2016)
Libert, B., Yung, M.: Concise mercurial vector commitments and independent zero-knowledge sets with short proofs. In: Micciancio, D. (ed.) TCC 2010. LNCS, vol. 5978, pp. 499–517. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-11799-2_30
Lipmaa, H.: Progression-free sets and sublinear pairing-based non-interactive zero-knowledge arguments. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 169–189. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28914-9_10
Lipmaa, H.: Succinct non-interactive zero knowledge arguments from span programs and linear error-correcting codes. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part I. LNCS, vol. 8269, pp. 41–60. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-42033-7_3
Lipmaa, H.: Simulation-extractable ZK-SNARKs revisited. Technical report 2019/612, IACR (2019). https://eprint.iacr.org/2019/612. Accessed 8 Feb 2020
Lipmaa, H., Pavlyk, K.: Succinct functional commitment for a large class of arithmetic circuits. Technical report 2020/?, IACR (2020)
Maller, M., Bowe, S., Kohlweiss, M., Meiklejohn, S.: Sonic: zero-knowledge SNARKs from linear-size universal and updatable structured reference strings. In: ACM CCS 2019, pp. 2111–2128 (2019)
Papamanthou, C., Shi, E., Tamassia, R.: Signatures of correct computation. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 222–242. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36594-2_13
Shpilka, A., Yehudayoff, A.: Arithmetic circuits: a survey of recent results and open questions. In: Foundations and Trends in Theoretical Computer Science, vol. 5. Now Publishers Inc. (2010)
Tomescu, A., Abraham, I., Buterin, V., Drake, J., Feist, D., Khovratovich, D.: Aggregatable subvector commitments for stateless cryptocurrencies. Technical report 2020/527, IACR (2020)
Valiant, L.G.: Completeness classes in algebra. In: STOC 1979, pp. 249–261 (1979)
Wahby, R.S., Tzialla, I., Shelat, A., Thaler, J., Walfish, M.: Doubly-efficient zkSNARKs without trusted setup. In: IEEE SP 2018, pp. 926–943 (2018)
Zhang, Y., Genkin, D., Katz, J., Papadopoulos, D., Papamanthou, C.: vSQL: verifying arbitrary SQL queries over dynamic outsourced databases. In: IEEE SP 2017, pp. 863–880 (2017)
© 2020 International Association for Cryptologic Research
Lipmaa, H., Pavlyk, K. (2020). Succinct Functional Commitment for a Large Class of Arithmetic Circuits. In: Moriai, S., Wang, H. (eds.) Advances in Cryptology – ASIACRYPT 2020. Lecture Notes in Computer Science, vol. 12493. Springer, Cham. https://doi.org/10.1007/978-3-030-64840-4_23
Print ISBN: 978-3-030-64839-8
Online ISBN: 978-3-030-64840-4