
Fine-Grained Secure Attribute-Based Encryption

  • Research Article
  • Published in: Journal of Cryptology

Abstract

Fine-grained cryptography constructs cryptosystems in a setting where an adversary’s resources are a priori bounded and an honest party has fewer resources than an adversary. Currently, only simple forms of encryption, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC’14). By properly instantiating the underlying encoding, we obtain different types of ABE schemes, including identity-based encryption. Previously, none of these schemes was known in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge proof system.



Notes

  1. Essentially, the BKP framework used the GS proof for linear equations and replaced the GS commitment with the Pedersen commitment.

  2. The IBKEM can be straightforwardly extended to one with a large key space, as we will discuss later in this section.

  3. See Sect. 2 for the notion of \(\mathbf {{N}}^\lambda \).

  4. In fact, the rightmost vector \((r_1,\cdots ,r_{n-1},1)^\top \) of the intermediate matrix generated by \({\textsf{RSamp}}(n)\) (see Fig. 1) forms a vector in the kernel of \(\mathbf {{M}}^\top \). See the proof of Lemma 3 in [15] for more details.

  5. One-time (respectively, unbounded) simulation soundness prevents the adversary from proving a false statement after seeing a single simulated proof for a statement (respectively, multiple simulated proofs for statements) of its choice. We refer the reader to [23] for the formal definitions.

  6. We do not exploit the sampleability of the distribution for this construction.

References

  1. B. Applebaum, Y. Ishai, E. Kushilevitz, Cryptography in NC\(^0\), in 45th FOCS. (IEEE Computer Society Press, 2004), pp. 166–175

  2. M. Ball, D. Dachman-Soled, M. Kulkarni, New techniques for zero-knowledge: Leveraging inefficient provers to reduce assumptions, interaction, and trust, in Micciancio, D., Ristenpart, T. (eds.) CRYPTO 2020, Part III. LNCS, vol. 12172 (Springer, Heidelberg, 2020), pp. 674–703

  3. D.A.M. Barrington, Bounded-width polynomial-size branching programs recognize exactly those languages in \(\text{NC}^1\), in 18th ACM STOC (ACM Press, 1986), pp. 1–5

  4. M. Bellare, S. Goldwasser, New paradigms for digital signatures and message authentication based on non-interactive zero knowledge proofs, in Brassard, G. (ed.) CRYPTO’89. LNCS, vol. 435 (Springer, Heidelberg, 1990), pp. 194–211

  5. O. Blazy, E. Kiltz, J. Pan, (Hierarchical) identity-based encryption from affine message authentication, in Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616 (Springer, Heidelberg, 2014), pp. 408–425

  6. D. Boneh, M.K. Franklin, Identity-based encryption from the Weil pairing, in Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139 (Springer, Heidelberg, 2001), pp. 213–229

  7. D. Boneh, P.A. Papakonstantinou, C. Rackoff, Y. Vahlis, B. Waters, On the impossibility of basing identity based encryption on trapdoor permutations, in 49th FOCS (IEEE Computer Society Press, 2008), pp. 283–292

  8. C. Brzuska, G. Couteau, Towards fine-grained one-way functions from strong average-case hardness. IACR Cryptol. ePrint Arch. 2020, 1326 (2020)


  9. M. Campanelli, R. Gennaro, Fine-grained secure computation, in Beimel, A., Dziembowski, S. (eds.) TCC 2018, Part II. LNCS, vol. 11240 (Springer, Heidelberg, 2018), pp. 66–97

  10. J. Chen, R. Gay, H. Wee, Improved dual system ABE in prime-order groups via predicate encodings, in Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057 (Springer, Heidelberg, 2015), pp. 595–624

  11. J. Chen, H. Wee, Fully, (almost) tightly secure IBE and dual system groups, in Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043 (Springer, Heidelberg, 2013), pp. 435–460

  12. C. Cocks, An identity based encryption scheme based on quadratic residues, in Honary, B. (ed.) 8th IMA International Conference on Cryptography and Coding. LNCS, vol. 2260 (Springer, Heidelberg, 2001), pp. 360–363

  13. A. Degwekar, V. Vaikuntanathan, P.N. Vasudevan, Fine-grained cryptography, in Robshaw, M., Katz, J. (eds.) CRYPTO 2016, Part III. LNCS, vol. 9816 (Springer, Heidelberg, 2016), pp. 533–562

  14. S. Egashira, Y. Wang, K. Tanaka, Fine-grained cryptography revisited, in Galbraith, S.D., Moriai, S. (eds.) ASIACRYPT 2019, Part III. LNCS, vol. 11923 (Springer, Heidelberg, 2019), pp. 637–666

  15. S. Egashira, Y. Wang, K. Tanaka, Fine-grained cryptography revisited. J. Cryptol. 34(3), 23 (2021)


  16. G. Fuchsbauer, E. Kiltz, J. Loss, The algebraic group model and its applications, in Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018, Part II. LNCS, vol. 10992 (Springer, Heidelberg, 2018), pp. 33–62

  17. C. Gentry, A. Silverberg, Hierarchical ID-based cryptography, in Zheng, Y. (ed.) ASIACRYPT 2002. LNCS, vol. 2501 (Springer, Heidelberg, 2002), pp. 548–566

  18. V. Goyal, O. Pandey, A. Sahai, B. Waters, Attribute-based encryption for fine-grained access control of encrypted data, in: Juels, A., Wright, R.N., De Capitani di Vimercati, S. (eds.) ACM CCS 2006 (ACM Press, 2006), Available as Cryptology ePrint Archive Report 2006/309, pp. 89–98

  19. J. Groth, A. Sahai, Efficient non-interactive proof systems for bilinear groups, in Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965 (Springer, Heidelberg, 2008), pp. 415–432

  20. J. Horwitz, B. Lynn, Toward hierarchical identity-based encryption, in Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332 (Springer, Heidelberg, 2002), pp. 466–481

  21. Y. Ishai, E. Kushilevitz, Randomizing polynomials: A new representation with applications to round-efficient secure computation, in 41st FOCS (IEEE Computer Society Press, 2000), pp. 294–304

  22. C.S. Jutla, A. Roy, Shorter quasi-adaptive NIZK proofs for linear subspaces, in Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part I. LNCS, vol. 8269 (Springer, Heidelberg, 2013), pp. 1–20

  23. E. Kiltz, H. Wee, Quasi-adaptive NIZK for linear subspaces revisited, in Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015, Part II. LNCS, vol. 9057 (Springer, Heidelberg, 2015), pp. 101–128

  24. A.B. Lewko, B. Waters, New techniques for dual system encryption and fully secure HIBE with short ciphertexts, in: Micciancio, D. (ed.) TCC 2010. LNCS, vol. 5978 (Springer, Heidelberg, 2010), pp. 455–479

  25. U.M. Maurer, Abstract models of computation in cryptography (invited paper), in: Smart, N.P. (ed.) 10th IMA International Conference on Cryptography and Coding. LNCS, vol. 3796 (Springer, Heidelberg, 2005), pp. 1–12

  26. R.C. Merkle, Secure communications over insecure channels. Commun. ACM 21(4), 294–299 (1978)


  27. P. Morillo, C. Ràfols, J.L. Villar, The kernel matrix Diffie–Hellman assumption, in Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016, Part I. LNCS, vol. 10031 (Springer, Heidelberg, 2016), pp. 729–758

  28. A.A. Razborov, Lower bounds on the size of bounded depth circuits over a complete basis with logical addition. Mathematical Notes of the Academy of Sciences of the USSR 41(4) (1987)

  29. A. Shamir, Identity-based cryptosystems and signature schemes, in Blakley, G.R., Chaum, D. (eds.) CRYPTO’84. LNCS, vol. 196 (Springer, Heidelberg, 1984), pp. 47–53

  30. V. Shoup, Lower bounds for discrete logarithms and related problems, in Fumy, W. (ed.) EUROCRYPT’97. LNCS, vol. 1233 (Springer, Heidelberg, 1997), pp. 256–266

  31. R. Smolensky, Algebraic methods in the theory of lower bounds for Boolean circuit complexity, in: Aho, A. (ed.) 19th ACM STOC (ACM Press, 1987), pp. 77–82

  32. Y. Wang, J. Pan, Non-interactive zero-knowledge proofs with fine-grained security, in Dunkelman, O., Dziembowski, S. (eds.) EUROCRYPT 2022, Part II. LNCS, vol. 13276 (Springer, Heidelberg, 2022), pp. 305–335

  33. Y. Wang, J. Pan, Unconditionally secure NIZK in the fine-grained setting, in Agrawal, S., Lin, D. (eds.) ASIACRYPT 2022, Part II. LNCS, vol. 13792 (Springer, Heidelberg, 2022), pp. 437–465

  34. Y. Wang, J. Pan, Y. Chen, Fine-grained secure attribute-based encryption, in Malkin, T., Peikert, C. (eds.) CRYPTO 2021, Part IV. LNCS, vol. 12828 (Springer, Heidelberg, 2021), pp. 179–207

  35. B. Waters, Dual system encryption: Realizing fully secure IBE and HIBE under simple assumptions, in Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677 (Springer, Heidelberg, 2009), pp. 619–636

  36. H. Wee, Dual system encryption via predicate encodings, in Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349 (Springer, Heidelberg, 2014), pp. 616–637

Download references

Acknowledgements

We would like to thank the anonymous reviewers for their valuable comments on a previous version of this paper. Part of Yuyu Wang’s work was supported by the National Natural Science Foundation for Young Scientists of China under Grant Number 62002049, the Natural Science Foundation of Sichuan under Grant Number 2023NSFSC0472, the Sichuan Science and Technology Program under Grant Number 2022YFG0037, and the National Key Research and Development Program of China under Grant Number 2022YFB3104600. Part of Jiaxin Pan’s work was supported by the Research Council of Norway under Project No. 324235. Part of Yu Chen’s work was supported by the National Key Research and Development Program of China under Grant Number 2021YFA1000600, the National Natural Science Foundation of China under Grant Number 62272269, and the Taishan Scholar Program of Shandong Province.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yu Chen.

Additional information

Communicated by Alon Rosen

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A preliminary version of this paper appeared at CRYPTO 2021 [34]; this is the full version.

Appendices


The Proof of Theorem 2.13

Proof

We prove Theorem 2.13 by the following two propositions.

Proposition A.1

For all \(\mathbf {{M}}^\top \in {\textsf{ZeroSamp}}(\lambda )\) and \(\textbf{x} \in {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{x} \in \{0,1\}^\lambda {\setminus }{{\,\textrm{Im}\,}}(\mathbf {{M}})\).

Proof of Proposition A.1

According to Lemma 2.10, we have \({{\,\textrm{Im}\,}}(\mathbf {{M}}) = \{\textbf{x} \, | \, \textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}, \, \textbf{x} = \mathbf {{M}}\textbf{w} \}\). Since \(\mathbf {{N}}^\lambda \textbf{w}=\textbf{0}\) for any \(\textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}\), we have \({{\,\textrm{Im}\,}}(\mathbf {{M}})=\{\textbf{x} \, | \, \textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}, \, \textbf{x} = (\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \}\).Footnote 3 Moreover, \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\) is of full rank according to Lemma 2.9, so it maps \(\{0\}\times \{0, 1\}^{\lambda -1}\) bijectively onto \({{\,\textrm{Im}\,}}(\mathbf {{M}})\). Hence, for any \(\textbf{w} \in \{1\}\times \{0, 1\}^{\lambda -1}\) and any \(\textbf{x}\in {{\,\textrm{Im}\,}}(\mathbf {{M}})\), we have \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \ne \textbf{x}\). Namely, for any \(\textbf{w} \in \{1\}\times \{0, 1\}^{\lambda -1}\), we have \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w} \in \{0, 1\}^{\lambda } {\setminus } {{\,\textrm{Im}\,}}(\mathbf {{M}})\), completing the proof of Proposition A.1. \(\square \)
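The counting argument above can be checked mechanically over GF(2). The sketch below does not implement the actual \({\textsf{ZeroSamp}}\) and \({\textsf{SampNo}}\) algorithms (they are defined outside this excerpt); instead it hand-builds a toy singular matrix \(\mathbf {{M}}\) satisfying the two facts the proof uses (its image is spanned by its last \(\lambda -1\) columns, and \(\mathbf {{M}}+\mathbf {{N}}^\lambda \) is full rank, taking \(\mathbf {{N}}^\lambda = \textbf{e}_\lambda \textbf{e}_1^\top \) as suggested by the computation in the proof of Lemma B.4), and verifies by enumeration that every \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w}\) with \(\textbf{w}\in \{1\}\times \{0,1\}^{\lambda -1}\) falls outside \({{\,\textrm{Im}\,}}(\mathbf {{M}})\):

```python
import random

def mat_vec(M, w):
    # GF(2) matrix-vector product
    return tuple(sum(M[i][j] & w[j] for j in range(len(w))) % 2 for i in range(len(M)))

def rank_gf2(M):
    # rank over GF(2) via Gaussian elimination on a copy
    rows = [list(r) for r in M]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

random.seed(1)
lam = 4

# Assumed toy N = e_lam e_1^T, so N*w = w_1 * e_lam, which vanishes exactly when w_1 = 0
N = [[1 if (i == lam - 1 and j == 0) else 0 for j in range(lam)] for i in range(lam)]

# Build a rank-(lam-1) M whose first column is the sum of the others
# (so Im(M) = M * ({0} x {0,1}^{lam-1})); retry until M + N is full rank.
while True:
    cols = [[random.randint(0, 1) for _ in range(lam)] for _ in range(lam - 1)]
    first = [0] * lam
    for col in cols:
        first = [a ^ b for a, b in zip(first, col)]
    M = [[first[i]] + [cols[j][i] for j in range(lam - 1)] for i in range(lam)]
    MN = [[M[i][j] ^ N[i][j] for j in range(lam)] for i in range(lam)]
    if rank_gf2(M) == lam - 1 and rank_gf2(MN) == lam:
        break

all_w = [tuple((x >> j) & 1 for j in range(lam)) for x in range(2 ** lam)]
Im = {mat_vec(M, w) for w in all_w}                          # Im(M)
no_instances = [mat_vec(MN, w) for w in all_w if w[0] == 1]  # (M+N)w with w_1 = 1

# since M+N is injective and maps {0} x {0,1}^{lam-1} onto Im(M),
# the vectors above must all lie outside Im(M)
assert all(x not in Im for x in no_instances)
```

Because \(\mathbf {{M}}+\mathbf {{N}}\) is a bijection and already covers \({{\,\textrm{Im}\,}}(\mathbf {{M}})\) with the inputs whose first bit is 0, the assertion holds for every matrix the loop accepts, not just this seed.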

Proposition A.2

For any \(\mathcal {A} = \{a_{\lambda }\} \in \mathsf {NC^1}\),

$$\begin{aligned} \left| \Pr \left[ a_{\lambda }(\mathbf {{M}}, \textbf{x}_0)=1\right] - \Pr \left[ a_{\lambda }(\mathbf {{M}}, \textbf{x}_1)=1\right] \right| \le \textsf{negl}(\lambda ) \end{aligned}$$

where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\), \(\textbf{x}_0 \leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})\), and \(\textbf{x}_1 \leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\).

Proof of Proposition A.2

Let \(\mathcal {A}= \{a_{\lambda }\}\) be any adversary in \(\mathsf {NC^1}\). We give intermediate games in Fig. 19 to show that the advantage of \(\mathcal {A}\) in distinguishing the two distributions in Proposition A.2 is negligible.

Lemma A.3

\(\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_0^{a_\lambda }\Rightarrow 1].\)

Proof

In \(\textsf{G}_1\) we sample instead of . Then Lemma A.3 follows from the fact that the distributions of \(\textbf{x}= \mathbf {{M}} \textbf{w}\) and \(\textbf{x}' = \mathbf {{M}} \textbf{w}'\) are identical where , , and , according to Lemma 2.11. \(\square \)

Lemma A.4

\(\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1].\)

Proof

In \(\textsf{G}_2\), we compute \(\textbf{x}\) as \(\textbf{x} = (\mathbf {{M+N^\lambda }}) \textbf{w}\) instead. Then Lemma A.4 follows from the fact that for any \(\textbf{w} \in \{0\}\times \{0, 1\}^{\lambda -1}\), we have \(\mathbf {{N}}^\lambda \textbf{w}=\textbf{0}\). \(\square \)

Lemma A.5

There exists an adversary \(\mathcal {B}_1 = \{b_{\lambda }^1\} \in \mathsf {NC^1}\) such that \(b_{\lambda }^1\) breaks the assumption in Definition 2.5, which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma 2.6, with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_3^{a_\lambda }\Rightarrow 1\right] -\Pr \left[ \textsf{G}_2^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_2\) and \(\textsf{G}_3\) only differ in the distribution of \(\mathbf {{M}}\), namely, or , and we build the distinguisher \(b_\lambda ^1\) as follows.

\(b_\lambda ^1\) runs in exactly the same way as the challenger of \(\textsf{G}_2\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{M}}\) by itself, it takes as input \(\mathbf {{M}}^\top \) generated as or from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^1\) outputs \(\beta \) as well. If \(\mathbf {{M}}\) is generated as (respectively, ), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_2\) (respectively, \(\textsf{G}_3\)). Hence, the probability that \(b_\lambda ^1\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_3\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_2 \Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since all operations in \(b_\lambda ^1\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_1 = \{b_{\lambda }^1\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)

Lemma A.6

\(\Pr [\textsf{G}_4^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1].\)

Proof

In \(\textsf{G}_4\) we sample instead of .

Let . The distribution of \(\mathbf {{M}} + {\mathbf {{N}}^\lambda }\) is identical to the output distribution of \({\textsf{ZeroSamp}}(\lambda )\) according to Lemma 2.9. Therefore, according to Lemma 2.11, the distributions of \(\textbf{x} = (\mathbf {{M}} + {\mathbf {{N}}^\lambda }) \textbf{w}\) and \(\textbf{x}' = (\mathbf {{M}} + {\mathbf {{N}}^\lambda })\textbf{w}'\) are identical for and , completing this part of the proof. \(\square \)

Lemma A.7

There exists an adversary \(\mathcal {B}_2=\{b^2_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b^2_\lambda \) breaks the fine-grained matrix linear assumption with advantage

$$\begin{aligned} \left| \Pr \left[ \textsf{G}_5^{a_\lambda }\Rightarrow 1\right] -\Pr \left[ \textsf{G}_4^{a_\lambda }\Rightarrow 1\right] \right| . \end{aligned}$$

Proof

\(\textsf{G}_5\) and \(\textsf{G}_4\) only differ in the distribution of \(\mathbf {{M}}\), namely, or , and we build the distinguisher \(b_\lambda ^2\) as follows.

\(b_\lambda ^2\) runs in exactly the same way as the challenger of \(\textsf{G}_4\) except that in \(\textsc {Init}\), instead of generating \(\mathbf {{M}}\) by itself, it takes as input \(\mathbf {{M}}^\top \) generated as or from its own challenger. When \(a_\lambda \) outputs \(\beta \), \(b_\lambda ^2\) outputs \(\beta \) as well. If \(\mathbf {{M}}\) is generated as (respectively, ), the view of \(a_\lambda \) is the same as its view in \(\textsf{G}_4\) (respectively, \(\textsf{G}_5\)). Hence, the probability that \(b_\lambda ^2\) breaks the fine-grained matrix linear assumption is

$$\begin{aligned} \left| \Pr \left[ \textsf{G}^{a_\lambda }_5\Rightarrow 1\right] - \Pr \left[ \textsf{G}^{a_\lambda }_4 \Rightarrow 1\right] \right| . \end{aligned}$$

Moreover, since all operations in \(b_\lambda ^2\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}_2 = \{b_{\lambda }^2\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing this part of the proof. \(\square \)

Then Proposition A.2 follows from the fact that \(\textsf{G}_0\) and \(\textsf{G}_5\) are the real games of Proposition A.2, where the values \(\textbf{x}\) are sampled from \({\textsf{SampYes}}_\lambda \) and \({\textsf{SampNo}}_\lambda \) respectively.

\(\square \)

Putting all above together, Theorem 2.13 immediately follows. \(\square \)

Fine-Grained Secure Quasi-Adaptive NIZK

In this section, we construct fine-grained QA-NIZK with adaptive soundness. We first give the definition of \(\mathsf {NC^1}\)-QA-NIZK with adaptive soundness. Then we prove an \(\mathsf {NC^1}\) version of the Kernel Matrix Diffie-Hellman assumption [27], based on which we give a warm-up QA-NIZK in \(\mathsf {NC^1}\) with relatively low efficiency. Finally, we show how to achieve a more efficient construction.

1.1 Definitions

We now recall the definition of fine-grained QA-NIZK. Let \(\mathcal {D}_{\lambda }\) be a probability distribution over a collection of relations \({\textsf{R}}=\{{\textsf{R}}_{\mathbf {{M}}}\}_{\mathbf {{M}}\in \mathcal {D}_{\lambda }}\) parametrized by a matrix \(\mathbf {{M}}\in \{0, 1\}^{n\times t}\) of rank \(t'< n\) generated as \(\mathbf {{M}}\leftarrow \mathcal {D}_\lambda \), with the associated language

$$\begin{aligned} {{\mathcal {L}}}_{\mathbf {{M}}}=\left\{ {\textbf{t}}:\exists \textbf{w} \in \{0, 1\}^t, \text { s.t. } \textbf{t}= \mathbf {{M}} \textbf{w}\right\} . \end{aligned}$$

Witness Sampleability. Notice that, similar to the witness sampleable distributions in the classical world [22], we require that \(\mathcal {D}_\lambda \) additionally output a non-zero matrix \(\mathbf {{M}}^\bot \in \{0, 1\}^{n\times (n-t')}\) in the kernel of \(\mathbf {{M}}^\top \). An example of a sampleable distribution is \({\textsf{ZeroSamp}}(n)\), which can additionally sample a non-zero vector in the kernel of its output.Footnote 4

Definition B.1

(Quasi-adaptive non-interactive zero-knowledge proof) A \({\mathcal {C}}_1\)-quasi-adaptive non-interactive zero-knowledge proof (QA-NIZK) for a set of language distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) is a function family \({\textsf{QANIZK}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\in \mathcal C_1\) with the following properties.

  • \({{\textsf{Gen}}}_\lambda (\mathbf {{M}})\) returns a CRS \(\textsf{crs}\) and a simulation trapdoor \(\textsf{td}\).

  • \({{\textsf{Prove}}}_\lambda (\textsf{crs},{} \textbf{y},\textbf{w})\) returns a proof \(\pi \).

  • \({{\textsf{Ver}}}_\lambda (\textsf{crs},{}\textbf{y},\pi )\) deterministically returns 1 (accept) or 0 (reject).

  • \({{\textsf{Sim}}}_\lambda (\textsf{crs},\textsf{td},{}\textbf{y})\) returns a simulated proof \(\pi \).

Perfect completeness is satisfied if for all \((\mathbf {{M}}^\top ,\mathbf {{M}}^\bot )\in {\mathcal {D}}_\lambda \), all vectors \((\textbf{y},\textbf{w})\) such that \(\textbf{y}={\mathbf {{M}}\textbf{w}}\), all \((\textsf{crs},\textsf{td}) \in {{\textsf{Gen}}}_\lambda (\mathbf {{M}})\), and all \(\pi \in {{\textsf{Prove}}}_\lambda (\textsf{crs},\textbf{y},\textbf{w})\), we have

$$\begin{aligned} {{\textsf{Ver}}}_\lambda (\textsf{crs},{} \textbf{y},\pi )=1. \end{aligned}$$

Perfect zero knowledge is satisfied if for all \(\lambda \), all \((\mathbf {{M}}^\top ,\mathbf {{M}}^\bot )\in \mathcal {D}_\lambda \), all \((\textbf{y},\textbf{w})\) with \(\textbf{y}={\mathbf {{M}}\textbf{w}}\), and all \((\textsf{crs},\textsf{td})\in {{\textsf{Gen}}}_\lambda (\mathbf {{M}})\), the following two distributions are identical:

$$\begin{aligned} {{\textsf{Prove}}}_\lambda (\textsf{crs},\textbf{y},\textbf{w})~~\text{ and }~~{{\textsf{Sim}}}_\lambda (\textsf{crs},\textsf{td},\textbf{y}). \end{aligned}$$

Definition B.2

(Adaptive soundness for QANIZK) \({\textsf{QANIZK}}\) is said to satisfy \({\mathcal {C}}_2\)-adaptive soundness if for any adversary \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in {\mathcal {C}}_2\),

$$\begin{aligned} \Pr [{\textsf{AS}}^{a_\lambda } \Rightarrow 1]\le \textsf{negl}(\lambda ), \end{aligned}$$

where Game \({\textsf{AS}}\) is defined in Fig. 20.

Fig. 20

The \({\textsf{AS}}\) security game for \({\textsf{QANIZK}}\)

We note that in the above definition, the term “quasi-adaptive” means that the construction of the CRS depends on the language parameter \(\mathbf {{M}}\). On the other hand, “adaptive” in the context of adaptive soundness means that in the soundness experiment, the adversary can choose the statement adaptively after seeing the CRS.

1.2 A Warm-Up Construction

A New Lemma. We now prove the following lemma under the assumption \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), based on which we can achieve adaptively sound QA-NIZKs in \(\mathsf {NC^1}\). It can be thought of as the counterpart of the Kernel Matrix Diffie-Hellman assumption [27] in \(\mathsf {NC^1}\).

Definition B.3

(Fine-grained kernel matrix assumption) We say that the fine-grained kernel matrix assumption holds if for all \(\lambda \in \mathbb {N}\) and any adversary \(\mathcal {A}= \{a_{\lambda }\}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\), we have

$$\begin{aligned} \Pr \left[ \textbf{c}^\top \mathbf {{M}}=\textbf{0}\wedge \textbf{c}\ne \textbf{0}\,:\,\textbf{c}\leftarrow a_\lambda (\mathbf {{M}})\right] \le \textsf{negl}(\lambda ), \end{aligned}$$

where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\).

Lemma B.4

If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then the fine-grained kernel matrix assumption (see Definition B.3) holds.

Proof

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be an adversary such that \(a_\lambda \) breaks the fine-grained kernel matrix assumption with probability \(\epsilon \). We construct another adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained subset membership problem (see Definition 2.12), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Theorem 2.13, with the same probability, as follows.

On input \((\mathbf {{M}},\textbf{u})\), where \(\mathbf {{M}}^\top \leftarrow {\textsf{ZeroSamp}}(\lambda )\) and \(\textbf{u}\leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})\) or \(\textbf{u}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), \(b_\lambda \) forwards \(\mathbf {{M}}\) to \(a_\lambda \). When \(a_\lambda \) outputs \(\textbf{c}\), \(b_\lambda \) outputs 1 iff the last element of \(\textbf{c}\) is 1, \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), and \(\textbf{c}^\top \textbf{u}=0\).

When \(\textbf{u}\leftarrow {\textsf{SampYes}}_\lambda (\mathbf {{M}})\), the probability that \(b_\lambda \) outputs 1 is \(\epsilon \). The reason is that when \(a_\lambda \) succeeds, we must have \(\textbf{c}^\top \textbf{u}=0\) since \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), and the last element of \(\textbf{c}\) must be 1 according to Lemma 2.7. Moreover, when \(\textbf{u}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\), we have \(\textbf{u}=(\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w}\) for some \(\textbf{w}\in \{1\}\times \{0, 1\}^{\lambda -1}\). If \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), we have \(\textbf{c}^\top \textbf{u} = \textbf{c}^\top \mathbf {{N}}^\lambda \textbf{w}=\textbf{c}^\top (0,\cdots ,0,1)^\top \), i.e., either \(\textbf{c}^\top \textbf{u}= 1\) or the last element of \(\textbf{c}\) is 0. Hence, \(b_\lambda \) always outputs 0 when \(\textbf{u}\leftarrow {\textsf{SampNo}}_\lambda (\mathbf {{M}})\). Therefore, \(b_\lambda \) breaks the fine-grained subset membership problem with advantage \(\epsilon \).

Moreover, since all operations in \(b_{\lambda }\) are performed in \(\mathsf {NC^1}\), we have \(\mathcal {B}= \{b_{\lambda }\}_{\lambda \in \mathbb {N}} \in \mathsf {NC^1}\), completing the proof of Lemma B.4. \(\square \)
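The case analysis in this reduction can be replayed on a toy example. The sketch below hand-builds a singular \(\mathbf {{M}}\) in place of \({\textsf{ZeroSamp}}\) and takes \(\mathbf {{N}}^\lambda = \textbf{e}_\lambda \textbf{e}_1^\top \) (both assumptions, since those algorithms are defined outside this excerpt); it finds by brute force a kernel vector \(\textbf{c}\) with last entry 1, standing in for a successful \(a_\lambda \), and confirms that \(b_\lambda \)'s rule answers 1 on every yes-instance \(\mathbf {{M}}\textbf{w}\) and 0 on every no-instance \((\mathbf {{M}}+\mathbf {{N}}^\lambda )\textbf{w}\) with \(w_1=1\):

```python
import random

def mat_vec(M, w):
    # GF(2) matrix-vector product
    return tuple(sum(M[i][j] & w[j] for j in range(len(w))) % 2 for i in range(len(M)))

def transpose(M):
    return [list(r) for r in zip(*M)]

def dot(a, b):
    return sum(x & y for x, y in zip(a, b)) % 2

def b_decide(M, u, c):
    # b_lambda's rule: output 1 iff the last element of c is 1, c^T M = 0, and c^T u = 0
    in_kernel = all(v == 0 for v in mat_vec(transpose(M), c))
    return int(c[-1] == 1 and in_kernel and dot(c, u) == 0)

random.seed(3)
lam = 4
# N = e_lam e_1^T, so c^T N w = c_lam * w_1 (the computation used in the proof)
N = [[1 if (i == lam - 1 and j == 0) else 0 for j in range(lam)] for i in range(lam)]
all_vecs = [tuple((x >> j) & 1 for j in range(lam)) for x in range(2 ** lam)]

# Toy M: first column = sum of the others, so M is singular; retry until some
# nonzero c with c^T M = 0 has last entry 1 (guaranteed for ZeroSamp by Lemma 2.7).
while True:
    cols = [[random.randint(0, 1) for _ in range(lam)] for _ in range(lam - 1)]
    first = [0] * lam
    for col in cols:
        first = [a ^ b for a, b in zip(first, col)]
    M = [[first[i]] + [cols[j][i] for j in range(lam - 1)] for i in range(lam)]
    cands = [v for v in all_vecs
             if v[-1] == 1 and all(x == 0 for x in mat_vec(transpose(M), v))]
    if cands:
        c = cands[0]
        break

MN = [[M[i][j] ^ N[i][j] for j in range(lam)] for i in range(lam)]
yes = [mat_vec(M, w) for w in all_vecs]               # u = Mw
no = [mat_vec(MN, w) for w in all_vecs if w[0] == 1]  # u = (M+N)w, w_1 = 1

assert all(b_decide(M, u, c) == 1 for u in yes)  # c^T u = c^T M w = 0
assert all(b_decide(M, u, c) == 0 for u in no)   # c^T u = c_lam * w_1 = 1
```

The two assertions mirror the two cases of the proof: on yes-instances \(\textbf{c}^\top \textbf{u}\) vanishes because \(\textbf{c}^\top \mathbf {{M}}=\textbf{0}\), while on no-instances it equals the last entry of \(\textbf{c}\), which is 1.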

Constructing QA-NIZK Based on Lemma B.4. Based on the above lemma, we can easily achieve \(\mathsf {NC^1}\)-QA-NIZKs with adaptive soundness, one-time simulation soundness, and unbounded simulation soundness against \(\mathsf {NC^1}\) by adopting the techniques in [23].Footnote 5 Specifically, we only have to move the algorithms in [23] from GF(p) for a large prime p to GF(2), change the matrix Diffie-Hellman distributions to \({\textsf{ZeroSamp}}(\lambda )\), and generate a large number of proofs in parallel to bound the advantage of the adversary. We now give an adaptively sound QA-NIZK \({\mathsf {QANIZK_0}}\) w.r.t. a set of (sampleable) distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) in Fig. 21 as an instance.Footnote 6

Fig. 21

Definition of \({\mathsf {QANIZK_0}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\). We require that \(2^{\ell (\lambda )}\) be super-polynomial in \(\lambda \)

Theorem B.5

If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\mathsf {QANIZK_0}}\) is an \(\mathsf {AC^0[2]}\)-QA-NIZK that is \(\mathsf {NC^1}\)-adaptively sound for all \(\mathbf {{M}}\) in the distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) (see Appendix B.1 for the definition of \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\)).

Proof

First, we note that \(\{{{\textsf{Gen}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Prove}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{{\textsf{Ver}}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve multiplying a constant number of matrices and sampling random bits.

Perfect completeness and perfect zero knowledge follow from the fact that for all \(\textbf{y}=\mathbf {{M}}\textbf{x}\) and \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\), we have

$$\begin{aligned} \textbf{x}^\top \mathbf {{P}}_i=\textbf{x}^\top \mathbf {{M}}^\top \mathbf {{K}}_i=\textbf{y}^\top \mathbf {{K}}_i. \end{aligned}$$
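This completeness identity can be exercised on a toy GF(2) instantiation of the \({{\textsf{Gen}}}/{{\textsf{Prove}}}/{{\textsf{Ver}}}/{{\textsf{Sim}}}\) template of \({\mathsf {QANIZK_0}}\). The sketch below uses a uniformly random \(\mathbf {{A}}\) and small dimensions (both assumptions; the structured \(\mathbf {{A}}\) and the actual parameters are specified in Fig. 21, which is not reproduced here), so it checks only perfect completeness and perfect zero knowledge, not soundness:

```python
import random

def matmul(A, B):
    # GF(2) matrix product
    return [[sum(A[i][h] & B[h][j] for h in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def rand_mat(r, c):
    return [[random.randint(0, 1) for _ in range(c)] for _ in range(r)]

random.seed(0)
n, t, k, m, ell = 4, 2, 3, 2, 8  # toy sizes; soundness needs 2^ell super-polynomial

M = rand_mat(n, t)  # language parameter
A = rand_mat(k, m)  # toy stand-in for the structured matrix A of Fig. 21

# Gen: K_i random; P_i = M^T K_i (used for proving), C_i = K_i A (used for verifying)
Ks = [rand_mat(n, k) for _ in range(ell)]
crs_P = [matmul(transpose(M), K) for K in Ks]
crs_C = [matmul(K, A) for K in Ks]

def prove(x):
    return [matmul([x], P)[0] for P in crs_P]  # pi_i = x^T P_i

def ver(y, pi):
    # accept iff pi_i A = y^T C_i for every i
    return all(matmul([p], A) == matmul([y], C) for p, C in zip(pi, crs_C))

def sim(y):
    return [matmul([y], K)[0] for K in Ks]     # pi_i = y^T K_i, using td = (K_i)

x = [random.randint(0, 1) for _ in range(t)]         # witness
y = [row[0] for row in matmul(M, [[v] for v in x])]  # statement y = Mx

assert ver(y, prove(x))    # perfect completeness
assert prove(x) == sim(y)  # perfect zero knowledge: real and simulated proofs coincide
```

Both assertions are exactly the displayed identity: \(\textbf{x}^\top \mathbf {{P}}_i=\textbf{x}^\top \mathbf {{M}}^\top \mathbf {{K}}_i=\textbf{y}^\top \mathbf {{K}}_i\), so real and simulated proofs are equal and verification reduces to \(\textbf{y}^\top \mathbf {{K}}_i\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_i\).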

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\) be an adversary breaking the adaptive soundness of \({\mathsf {QANIZK_0}}\) with advantage \(\epsilon \). Then we have the following lemma. \(\square \)

Lemma B.6

There exists an adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained kernel matrix assumption (see Definition B.3), which holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma B.4, with probability \(\epsilon -1/2^\ell \).

Proof

We construct \(b_\lambda \) as follows.

\(b_\lambda \) on input \(\mathbf {{A}}\) samples \(\mathbf {{M}}\leftarrow \mathcal {D}_\lambda \) and uniformly random \(\mathbf {{K}}_i\), and sets \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\) and \(\mathbf {{C}}_i=\mathbf {{K}}_i\mathbf {{A}}\) for all \(i\in [\ell ]\). Then it sends \(\textsf{crs}=(\mathbf {{A}},(\mathbf {{P}}_i,\mathbf {{C}}_i)_{i=1}^\ell )\) to \(a_\lambda \). When \(a_\lambda \) outputs \((\pi =(\pi _i)_{i=1}^\ell ,\textbf{y})\), \(b_\lambda \) searches for an index j such that

$$\begin{aligned} \pi _j\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j=\textbf{y}^\top \mathbf {{K}}_j\mathbf {{A}} \end{aligned}$$

and

$$\begin{aligned} \pi _j-\textbf{y}^\top \mathbf {{K}}_j\ne \textbf{0}. \end{aligned}$$

If the search fails, \(b_\lambda \) aborts; otherwise, \(b_\lambda \) outputs \(\pi _j-\textbf{y}^\top \mathbf {{K}}_j\).

When \(a_\lambda \) succeeds, we have \(\pi _j\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j\) for all j and \(\textbf{y}\notin {{\,\textrm{Im}\,}}(\mathbf {{M}})\). Let \(\hat{\textbf{a}}\) be a fixed non-zero vector such that \(\hat{\textbf{a}}\notin {{\,\textrm{Im}\,}}(\mathbf {{A}})\). For each i, since \(a_\lambda \) learns no information on \(\mathbf {{K}}_i\) other than \(\mathbf {{M}}^\top \mathbf {{K}}_i\) and \(\mathbf {{K}}_i\mathbf {{A}}\), the value \(\textbf{y}^\top \mathbf {{K}}_i\hat{\textbf{a}}\) is information-theoretically hidden in the view of \(a_\lambda \). Hence, the probability that there exists j such that \(\pi _j\hat{\textbf{a}}-\textbf{y}^\top \mathbf {{K}}_j\hat{\textbf{a}}\ne 0\) is at least \(1-1/2^\ell \). Since \(\pi _j\hat{\textbf{a}}-\textbf{y}^\top \mathbf {{K}}_j\hat{\textbf{a}}\ne 0\) implies \(\pi _j-\textbf{y}^\top \mathbf {{K}}_j\ne \textbf{0}\), the probability that \(b_\lambda \) breaks the fine-grained kernel matrix assumption is at least \(\epsilon -1/2^\ell \), completing this part of the proof.

\(\square \)

Since the fine-grained kernel matrix assumption holds if \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) according to Lemma B.4, putting all above together, Theorem B.5 immediately follows. \(\square \)

1.3 A More Efficient Construction

A disadvantage of the scheme in Appendix B.2 is that we have to generate a large number of proofs in parallel. In this section, we give a more efficient \(\mathsf {NC^1}\)-adaptively sound QA-NIZK \({\mathsf {QANIZK_1}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\) w.r.t. a set of distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) in Fig. 22. As in Definition B.1, we require that \(\mathcal {D}_\lambda \) be witness sampleable, i.e., it outputs a matrix \(\mathbf {{M}}\in \{0, 1\}^{n\times t}\) of rank \(t'< n\) together with a matrix (or vector) \(\mathbf {{M}}^\bot \in \{0, 1\}^{n\times (n-t')}\) of rank \(n-t'\) in the kernel of \(\mathbf {{M}}^\top \).

The proof size of this construction is \((\lambda -1)\cdot (n-t')\). Since \(\mathbf {{M}}\) (or \(\mathbf {{M}}^\top \)) is usually a combination of matrices sampled from \({\textsf{ZeroSamp}}(\lambda )\) in \(\mathsf {NC^1}\), \(n-t'\) is typically a constant number. For instance, when proving that two ciphertexts of the PKE scheme in [13] correspond to the same message or proving the validity of a public key of the PKE scheme in [15], the proof size is only \(\lambda -1\) in contrast to \(\lambda \cdot \ell \) for a large number \(\ell \) in the warm-up construction.

Fig. 22

Definition of \({\mathsf {QANIZK_1}}=\{{{\textsf{Gen}}}_\lambda ,{{\textsf{Prove}}}_\lambda ,{{\textsf{Ver}}}_\lambda ,{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\)

Theorem B.7

If \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), then \({\mathsf {QANIZK_1}}\) is an \(\mathsf {AC^0[2]}\)-QA-NIZK that is \(\mathsf {NC^1}\)-adaptively sound for all \(\mathbf {{M}}\) in the distributions \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\) (see Appendix B.1 for the definition of \(\{\mathcal {D}_\lambda \}_{\lambda \in \mathbb {N}}\)).

Proof

First, we note that \(\{{{\textsf{Gen}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Prove}}}_\lambda \}_{\lambda \in \mathbb {N}}\), \(\{{{\textsf{Sim}}}_\lambda \}_{\lambda \in \mathbb {N}}\), and \(\{{{\textsf{Ver}}}_\lambda \}_{\lambda \in \mathbb {N}}\) are computable in \(\mathsf {AC^0[2]}\), since they only involve multiplying a constant number of matrices and sampling random bits.

Perfect completeness and perfect zero knowledge follow from the fact that for all \(\textbf{y}=\mathbf {{M}}\textbf{x}\) and \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i\), we have

$$\begin{aligned} \textbf{x}^\top \mathbf {{P}}_i=\textbf{x}^\top \mathbf {{M}}^\top \mathbf {{K}}_i=\textbf{y}^\top \mathbf {{K}}_i. \end{aligned}$$
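This identity can be sanity-checked numerically over GF(2) with random instances; the dimensions and the random seed below are illustrative only.

```python
import numpy as np

# Check x^T P_i = x^T M^T K_i = y^T K_i over GF(2), where y = M x.
rng = np.random.default_rng(0)
n, t, m = 5, 3, 4
M = rng.integers(0, 2, (n, t))
K = rng.integers(0, 2, (n, m))      # stands in for one K_i
P = M.T @ K % 2                     # P_i = M^T K_i, published in the CRS

x = rng.integers(0, 2, t)           # witness
y = M @ x % 2                       # statement y = M x
pi_honest = x @ P % 2               # prover computes x^T P_i
pi_sim    = y @ K % 2               # simulator computes y^T K_i
assert (pi_honest == pi_sim).all()  # honest and simulated proofs coincide
```

The equality holds term by term because \(\textbf{x}^\top \mathbf {{M}}^\top =(\mathbf {{M}}\textbf{x})^\top =\textbf{y}^\top \), which is what gives perfect zero-knowledge.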

Let \(\mathcal {A}=\{a_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) be any adversary against the \(\mathsf {NC^1}\)-adaptive soundness of \({\mathsf {QANIZK_1}}\). We now show that \({\mathsf {QANIZK_1}}\) is adaptively sound against \(\mathsf {NC^1}\) via a sequence of hybrid games as in Fig. 23. The crucial step is to use the technique exploited by our IBKEM to switch \((\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}\) to \((\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \), and then switch it back to \((\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}}\) for \(\mathbf {{K}}_i=\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top \), where \(\mathbf {{R}}_0\) and \(\widetilde{\textbf{r}}\) are intermediate values generated during the sampling procedure for \(\mathbf {{A}}\). \(\square \)

Fig. 23: Games \(\textsf{G}_0,\textsf{G}_1,\textsf{G}_2,\textsf{G}_3\) for the proof of Theorem B.7. \(\textbf{e}_i\in \{0, 1\}^{n-t'}\) denotes the vector with the ith element being 1 and the others being 0

Lemma B.8

\(\Pr [{\textsf{AS}}^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_0^{a_\lambda }\Rightarrow 1]\).

Proof

In \(\textsf{G}_1\), we generate \(\mathbf {{A}}\) by sampling \(\mathbf {{R}}_0\) and \(\mathbf {{R}}_1\) as in the zero-sampling procedure, and setting \(\mathbf {{A}}^\top =\mathbf {{R}}_0 \mathbf {{M}}_0^\lambda \mathbf {{R}}_1\). Moreover, for all i, we replace \(\mathbf {{C}}_i=(\mathbf {{K}}_i||\textbf{0})\mathbf {{A}}\) by \(\mathbf {{C}}_i=(\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \).

The view of \(\mathcal {A}\) in this game is identical to its view in \(\textsf{G}_0\) since the way we generate \(\mathbf {{A}}\) is exactly the “zero-sampling” procedure, and we have

$$\begin{aligned} \mathbf {{C}}_i&=(\mathbf {{K}}_i||\textbf{0})\mathbf {{A}} = (\mathbf {{K}}_i||\textbf{0})\mathbf {{R}}_1^\top {\mathbf {{M}}_0^\lambda }^\top \mathbf {{R}}_0^\top \\&=(\mathbf {{K}}_i||\textbf{0})\left( \begin{array}{ccc}\mathbf {{I}}_{\lambda -1} &{} \textbf{0}\\ \widetilde{\textbf{r}}^\top &{} 1\end{array}\right) \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix} \mathbf {{R}}_0^\top \\&=(\mathbf {{K}}_i||\textbf{0})\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{}\quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix}\mathbf {{R}}_0^\top \\&=(\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top . \end{aligned}$$

\(\square \)
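The block-matrix identity in the proof above can be checked numerically. The sketch below (with illustrative \(\lambda \), \(\widetilde{\textbf{r}}\), and \(\mathbf {{K}}_i\)) verifies \((\mathbf {{K}}_i||\textbf{0})\,\mathbf {{T}}\,{\mathbf {{M}}_0^\lambda }^\top =(\textbf{0}||\mathbf {{K}}_i)\), where \(\mathbf {{T}}\) is the lower unitriangular factor displayed above; the common right factor \(\mathbf {{R}}_0^\top \) cancels on both sides and is omitted.

```python
import numpy as np

lam = 5
# (M_0^lam)^T: ones on the superdiagonal, last row all zero (as displayed).
M0T = np.eye(lam, k=1, dtype=int)
# Lower unitriangular factor with last row (r~^T, 1); r~ is illustrative.
r = np.array([1, 0, 1, 1])
T = np.eye(lam, dtype=int)
T[-1, :-1] = r

K = np.random.default_rng(1).integers(0, 2, (3, lam - 1))  # stands in for K_i
lhs = np.hstack([K, np.zeros((3, 1), dtype=int)]) @ T @ M0T % 2
rhs = np.hstack([np.zeros((3, 1), dtype=int), K])
assert (lhs == rhs).all()   # (K_i || 0) T (M_0^lam)^T = (0 || K_i)
```

Intuitively, the zero last column of \((\mathbf {{K}}_i||\textbf{0})\) absorbs \(\widetilde{\textbf{r}}^\top \), and \({\mathbf {{M}}_0^\lambda }^\top \) then shifts all columns one position to the right.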

Lemma B.9

\(\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_1^{a_\lambda }\Rightarrow 1]\).

Proof

In \(\textsf{G}_2\), for all i, instead of generating \(\mathbf {{K}}_i\) as a uniformly random matrix, we generate \(\mathbf {{K}}_i\) by sampling a uniformly random matrix \(\mathbf {{K}}_i'\) and setting \(\mathbf {{K}}_i=\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top \), where \(\textbf{e}_i\in \{0, 1\}^{n-t'}\) denotes the vector with the ith element being 1 and the others being 0. Since the distribution of \(\mathbf {{K}}_i\) is still uniform, the view of \(\mathcal {A}\) remains the same. \(\square \)

Lemma B.10

\(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]=\Pr [\textsf{G}_2^{a_\lambda }\Rightarrow 1]\).

Proof

This lemma follows from the fact that \(\mathbf {{M}}^\top \mathbf {{M}}^\bot =\textbf{0}\), so for all i, we have

$$\begin{aligned} \mathbf {{M}}^\top \mathbf {{K}}_i=\mathbf {{M}}^\top (\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top )=\mathbf {{M}}^\top \mathbf {{K}}_i' \end{aligned}$$

and

$$\begin{aligned}\begin{aligned} \mathbf {{C}}_i&=(\textbf{0}||\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top )\mathbf {{R}}_0^\top \\&=(\mathbf {{K}}_i'+\mathbf {{M}}^\bot \textbf{e}_i\cdot \widetilde{\textbf{r}}^\top ||\mathbf {{M}}^\bot \textbf{e}_i)\begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \ddots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \\ 0 &{}\quad \cdots &{}\quad 0 &{}\quad &{}\quad 1\\ 0 &{}\quad &{} \quad \cdots &{}\quad &{}\quad 0\\ \end{pmatrix}\mathbf {{R}}_0^\top \\&=(\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\left( \begin{array}{ccc}\mathbf {{I}}_{\lambda -1} &{} \textbf{0}\\ \widetilde{\textbf{r}}^\top &{} 1\end{array}\right) {\mathbf {{M}}_0^\lambda }^\top \mathbf {{R}}_0^\top \\&=(\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}} \end{aligned}\end{aligned}$$

\(\square \)
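The second identity in the proof above can also be sanity-checked numerically, using the same shift-matrix factorization as before (illustrative \(\lambda \), \(\widetilde{\textbf{r}}\), and random \(\mathbf {{K}}_i'\); here a single column \(\textbf{v}\) stands in for \(\mathbf {{M}}^\bot \textbf{e}_i\), and the common right factor \(\mathbf {{R}}_0^\top \) is omitted from both sides).

```python
import numpy as np

lam = 5
M0T = np.eye(lam, k=1, dtype=int)           # (M_0^lam)^T as above
r = np.array([1, 0, 1, 1])                  # illustrative r~
T = np.eye(lam, dtype=int)
T[-1, :-1] = r                              # lower unitriangular factor

rng = np.random.default_rng(2)
Kp = rng.integers(0, 2, (4, lam - 1))       # stands in for K_i'
v  = rng.integers(0, 2, (4, 1))             # stands in for M_perp e_i
K  = (Kp + v @ r[None, :]) % 2              # K_i = K_i' + (M_perp e_i) r~^T

lhs = np.hstack([np.zeros((4, 1), dtype=int), K])   # (0 || K_i)
rhs = np.hstack([Kp, v]) @ T @ M0T % 2              # (K_i' || M_perp e_i) T (M_0^lam)^T
assert (lhs == rhs).all()
```

This confirms that \(\mathbf {{C}}_i\) computed with \((\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\) agrees with the \(\textsf{G}_2\) form \((\textbf{0}||\mathbf {{K}}_i)\mathbf {{R}}_0^\top \).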

Lemma B.11

There exists an adversary \(\mathcal {B}=\{b_\lambda \}_{\lambda \in \mathbb {N}}\in \mathsf {NC^1}\) such that \(b_\lambda \) breaks the fine-grained kernel matrix assumption (see Definition B.3) with probability \(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]\); this assumption holds under \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\) by Lemma B.4.

Proof

We construct \(b_\lambda \) as follows.

\(b_\lambda \), on input \(\mathbf {{A}}\), samples \((\mathbf {{M}},\mathbf {{M}}^\bot )\) from \(\mathcal {D}_\lambda \) and uniformly random \(\mathbf {{K}}_i'\), and sets \(\mathbf {{P}}_i=\mathbf {{M}}^\top \mathbf {{K}}_i'\) and \(\mathbf {{C}}_i=(\mathbf {{K}}_i'||\mathbf {{M}}^\bot \textbf{e}_i)\mathbf {{A}}\) for all i. Then it sends \(\textsf{crs}=(\mathbf {{A}},(\mathbf {{P}}_i,\mathbf {{C}}_i)_{i=1}^{n-t'})\) to \(a_\lambda \). When \(a_\lambda \) outputs \((\pi =(\pi _i)_{i=1}^{n-t'},\textbf{y})\), \(b_\lambda \) searches for an index j such that

$$\begin{aligned} (\pi _j||0)\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j=\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\mathbf {{A}} \end{aligned}$$

and

$$\begin{aligned} \pi _j||0-\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\ne \textbf{0}. \end{aligned}$$

If the search fails, \(b_\lambda \) aborts; otherwise, \(b_\lambda \) outputs \(\pi _j||0-\textbf{y}^\top (\mathbf {{K}}_j'||\mathbf {{M}}^\bot \textbf{e}_j)\).

Since all the operations performed by \(b_\lambda \) are in \(\mathsf {NC^1}\), we have \(\mathcal {B}\in \mathsf {NC^1}\).

When \(a_\lambda \) succeeds, we have \((\pi _j||0)\mathbf {{A}}=\textbf{y}^\top \mathbf {{C}}_j\) for all j and \(\textbf{y}\notin \textsf{Span}(\mathbf {{M}})\). In this case, \(\textbf{y}^\top \mathbf {{M}}^\bot \ne \textbf{0}\), i.e., there must exist j such that \(\textbf{y}^\top \mathbf {{M}}^\bot \textbf{e}_j=1\). Hence, the probability that \(b_\lambda \) breaks the fine-grained kernel matrix assumption is exactly \(\Pr [\textsf{G}_3^{a_\lambda }\Rightarrow 1]\), completing this part of the proof. \(\square \)
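The search step of \(b_\lambda \) can be sketched as follows. The function name and the concrete matrices are ours, purely for illustration: for \(\textbf{y}\) outside \(\textsf{Span}(\mathbf {{M}})\), the row vector \(\textbf{y}^\top \mathbf {{M}}^\bot \) is nonzero over GF(2), so some coordinate j satisfies \(\textbf{y}^\top \mathbf {{M}}^\bot \textbf{e}_j=1\); that j indexes the equation \(b_\lambda \) uses.

```python
import numpy as np

def find_breaking_index(y, M_perp):
    """Return the first j with y^T M_perp e_j = 1 over GF(2), else None."""
    sel = y @ M_perp % 2                 # the row vector y^T M_perp
    hits = np.flatnonzero(sel)
    return int(hits[0]) if hits.size else None   # None models "b_lambda aborts"

# Toy M_perp (left-kernel basis of some rank-2 matrix M, as in the appendix):
M_perp = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])
print(find_breaking_index(np.array([1, 0, 0, 0]), M_perp))  # 0 (y outside the span)
print(find_breaking_index(np.array([0, 1, 1, 0]), M_perp))  # None (y in the span)
```

Note that this coordinate-wise scan runs in parallel constant depth, which is why the whole reduction stays in \(\mathsf {NC^1}\).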

Putting all the above together, Theorem B.7 follows immediately. \(\square \)

Concurrent Fine-Grained NIZKs. Assuming \(\mathsf {NC^1}\subsetneq \mathsf{\oplus L/poly}\), our work presents an efficient QA-NIZK that achieves perfect zero-knowledge and handles linear subspace languages. Below we compare our QA-NIZK with other existing fine-grained NIZKs [2, 32, 33].

Ball, Dachman-Soled, and Kulkarni [2] previously constructed a NIZK for circuit satisfiability against \(\mathsf {NC^1}\) adversaries in the uniform random string (URS) model, where the setup only samples public coins. Their scheme achieves offline zero-knowledge, meaning that the distribution of honest URSs and proofs is computationally indistinguishable from the output distribution of a simulator. However, their construction is not fully fine-grained, since their prover requires more computational resources than \(\mathsf {NC^1}\) (even for statements represented as \(\mathsf {NC^1}\) circuits). This requirement is inherent, since their underlying NIZK for \(\mathsf{\oplus L/poly}\) requires computing the determinant of a matrix, which cannot be done in \(\mathsf {NC^1}\).

Fig. 24: Definitions of the predicate and encoding of an ABE scheme for inner product (with short secret keys)

Fig. 25: Definitions of the predicate and encoding of an ABE scheme for non-zero inner product (with short ciphertexts)

More recently, Wang and Pan [32] proposed a fully fine-grained NIZK protocol for circuit satisfiability in \(\mathsf {NC^1}\), where all algorithms (including the CRS generator, prover, verifier, and simulator) are in \(\mathsf {NC^1}\). Their scheme can achieve either perfect soundness or perfect zero-knowledge and can be converted into a NIZK in the URS model and a non-interactive zap. Notably, their underlying NIZK for linear languages supports the same class of languages as our QA-NIZK. However, their construction has larger proving/verification cost and proof size than ours. In particular, their proof size depends on the statement size, while ours does not.

Fig. 26: Definitions of the predicate and encoding of an ABE scheme for boolean span programs. \(\textbf{x}\) satisfies \(\mathbf {{M}}\) w.r.t. some \(\omega \) iff \(\sum _{i:x_i=1}\omega _i\mathbf {{M}}_i=(1,0,\cdots ,0)^\top \), where \(\mathbf {{M}}_i\) denotes the ith row of \(\mathbf {{M}}\). Note that in the original definition in [10], \(\omega \) is not part of the attribute but is computed from \(\mathbf {{M}}\) and \(\textbf{x}\) during decryption. We include \(\omega \) in the attribute since computing it from \(\mathbf {{M}}\) and \(\textbf{x}\) involves Gaussian elimination, which cannot be executed in \(\mathsf {NC^1}\). This does not affect the security of the resulting ABE, since \(\omega \) can be efficiently computed publicly in the original encoding scheme anyway

Another fine-grained NIZK was recently proposed by Wang and Pan [33], in a different fine-grained setting and under no assumption. Specifically, it considers adversaries in \(\mathsf {AC^0}\) and requires that all algorithms run in \(\mathsf {AC^0}\).

Instantiations of Encodings

In this section, in addition to the encoding in Fig. 3, we give several more examples of predicate encodings in Figs. 24, 25, and 26. By instantiating our ABE from Sect. 5 with these encodings, we immediately obtain ABE schemes for inner product, non-zero inner product, and boolean span programs. All of these encodings can be computed in \(\mathsf {AC^0[2]}\), since they only involve multiplications of a constant number of matrices.


Cite this article

Wang, Y., Pan, J. & Chen, Y. Fine-Grained Secure Attribute-Based Encryption. J Cryptol 36, 33 (2023). https://doi.org/10.1007/s00145-023-09479-x
