
(In)security of concrete instantiation of Lin17’s functional encryption scheme from noisy multilinear maps

Published in: Designs, Codes and Cryptography

Abstract

Functional encryption (FE) is a novel cryptographic paradigm. In contrast to conventional encryption schemes, FE makes it possible to issue a secret key \(sk_f\) corresponding to a function f that decrypts encryptions of \(x_0\) to \(f(x_0)\). Recently, Lin proposed an FE scheme for arbitrary-degree polynomials from the SXDH assumption on an exact multilinear map (CRYPTO’17). However, since no exact multilinear map is known, the scheme has no concrete instantiation. Although Lin’s FE can be instantiated with noisy multilinear maps such as the GGH13, CLT13, and GGH15 schemes, the security of such instantiations is unclear. In this paper, we point out the weakness of Lin’s FE when it is instantiated with the well-known candidate noisy multilinear maps. In other words, we present a polynomial-time attack on the FE for each noisy multilinear map. Our attack covers Lin’s FE for arbitrary-degree polynomials instantiated with GGH13 and CLT13, and is also applicable to FE for polynomials of degree \(O(\log _2 \lambda )\) when instantiated with GGH15 under the current parameters, where \(\lambda \) is the security parameter.


Notes

  1. Given an ideal \(\langle {\mathbf{g}}\rangle \), recovering a short generator \({\mathbf{g}}\) is hard.

  2. Here, the entries of the matrix are small.

  3. \(\rho \) is a parameter used in the CLT13 multilinear map. For the parameter choice, see [19].

  4. As another option, we can encode the i-th component of the message vector into the i-th diagonal entry; our analysis still works with this encoding method. Since this encoding method and its analysis require many calculations, they are deferred to Appendix C.

  5. One can also consider the size of the samples to distinguish the two distributions, but it is hard to formally show completeness. See [13] for a detailed discussion.

  6. The GGH15 multilinear map does not support homomorphic scalar multiplication; for simplicity, we proceed as if scalar multiplication held in GGH15. In fact, our attack only uses \(c_{i,j,k} \in \{0, 1\}\) for all i, j, k.

  7. In this case, there is no product of \( D \)’s, so the variance of \(X_{{\mathbf{x}}_b}^{i,3,l}\) can be computed exactly.

  8. Since \( D _{i,l,{\mathbf{x}}_b}^3\) is a random vector, these quantities are random vectors as well.

  9. \({\mathbf{u}}_{i,l,b}^{(3)}\) is the third element of the random vector \({\mathbf{u}}_{i,l,b}\).

References

  1. Abdalla M., Bourse F., De Caro A., Pointcheval D.: Simple functional encryption schemes for inner products. In: IACR International Workshop on Public Key Cryptography, pp. 733–751. Springer (2015).

  2. Abdalla M., Gong J., Wee H.: Functional encryption for attribute-weighted sums from k-Lin. In: Annual International Cryptology Conference, pp. 685–716. Springer (2020).

  3. Agrawal S., Boyen X., Vaikuntanathan V., Voulgaris P., Wee H.: Functional encryption for threshold functions (or fuzzy IBE) from lattices. In: Fischlin M., Buchmann J., Manulis M. (eds.) Public Key Cryptography - PKC 2012, pp. 280–297. Springer, Berlin, Heidelberg (2012).

  4. Ananth P., Jain A.: Indistinguishability obfuscation from compact functional encryption. In: Annual Cryptology Conference, pp. 308–326. Springer (2015).

  5. Apon D., Döttling N., Garg S., Mukherjee P.: Cryptanalysis of indistinguishability obfuscations of circuits over GGH13. In: LIPIcs-Leibniz International Proceedings in Informatics, vol. 80. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2017).

  6. Baltico C.E.Z., Catalano D., Fiore D., Gay R.: Practical functional encryption for quadratic functions with applications to predicate encryption. In: Annual International Cryptology Conference, pp. 67–98. Springer (2017).

  7. Bitansky N., Nishimaki R., Passelegue A., Wichs D.: From cryptomania to obfustopia through secret-key functional encryption. J. Cryptol. 33(2), 357–405 (2020).


  8. Bitansky N., Vaikuntanathan V.: Indistinguishability obfuscation from functional encryption. J. ACM (JACM) 65(6), 1–37 (2018).


  9. Boneh D., Sahai A., Waters B.: Functional encryption: Definitions and challenges. In: Theory of Cryptography Conference, pp. 253–273. Springer (2011).

  10. Chen Y., Gentry C., Halevi S.: Cryptanalyses of candidate branching program obfuscators. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 278–307. Springer (2017).

  11. Chen Y., Vaikuntanathan V., Wee H.: GGH15 beyond permutation branching programs: proofs, attacks, and candidates. In: Annual International Cryptology Conference, pp. 577–607. Springer (2018).

  12. Cheon J.H., Cho W., Hhan M., Kang M., Kim J., Lee C.: Algorithms for CRT-variant of approximate greatest common divisor problem. Number-Theoretic Methods in Cryptology (NutMiC) 2019, 195 (2019).

  13. Cheon J.H., Cho W., Hhan M., Kim J., Lee C.: Statistical zeroizing attack: cryptanalysis of candidates of BP obfuscation over GGH15 multilinear map. In: Annual International Cryptology Conference, pp. 253–283. Springer (2019).

  14. Cheon J.H., Han K., Lee C., Ryu H., Stehlé D.: Cryptanalysis of the multilinear map over the integers. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 3–12. Springer (2015).

  15. Cheon J.H., Hhan M., Kim J., Lee C.: Cryptanalyses of branching program obfuscations over GGH13 multilinear map from the NTRU problem. In: Advances in Cryptology - CRYPTO 2018 - 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2018, Proceedings, Part III, pp. 184–210 (2018).

  16. Coron J.S., Gentry C., Halevi S., Lepoint T., Maji H.K., Miles E., Raykova M., Sahai A., Tibouchi M.: Zeroizing without low-level zeroes: New mmap attacks and their limitations. In: Advances in Cryptology–CRYPTO 2015, pp. 247–266. Springer (2015).

  17. Coron J.S., Lee M.S., Lepoint T., Tibouchi M.: Cryptanalysis of GGH15 multilinear maps. In: Annual Cryptology Conference, pp. 607–628. Springer (2016).

  18. Coron J.S., Lee M.S., Lepoint T., Tibouchi M.: Zeroizing attacks on indistinguishability obfuscation over CLT13. In: IACR International Workshop on Public Key Cryptography, pp. 41–58. Springer (2017).

  19. Coron J.S., Lepoint T., Tibouchi M.: Practical multilinear maps over the integers. In: Advances in Cryptology–CRYPTO 2013, pp. 476–493. Springer (2013).

  20. Garg S., Gentry C., Halevi S.: Candidate multilinear maps from ideal lattices. In: Eurocrypt, vol. 7881, pp. 1–17. Springer (2013).

  21. Garg S., Gentry C., Halevi S., Raykova M., Sahai A., Waters B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pp. 40–49. IEEE Computer Society (2013).

  22. Garg S., Gentry C., Halevi S., Zhandry M.: Functional encryption without obfuscation. In: Theory of Cryptography Conference, pp. 480–511. Springer (2016).

  23. Gay R.: Functional encryption for quadratic functions, and applications to predicate encryption. IACR Cryptol. ePrint Arch. 2016, 1106 (2016).


  24. Gay R.: A new paradigm for public-key functional encryption for degree-2 polynomials. In: IACR International Conference on Public-Key Cryptography, pp. 95–120. Springer (2020).

  25. Gentry C., Gorbunov S., Halevi S.: Graph-induced multilinear maps from lattices. In: Theory of Cryptography, pp. 498–527. Springer (2015).

  26. Gentry C., Peikert C., Vaikuntanathan V.: Trapdoors for hard lattices and new cryptographic constructions. In: Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pp. 197–206. ACM (2008).

  27. Gong J., Qian H.: Simple and efficient FE for quadratic functions. Tech. rep., Cryptology ePrint Archive, Report 2020/1026 (2020).

  28. Goyal V., Pandey O., Sahai A., Waters B.: Attribute-based encryption for fine-grained access control of encrypted data. In: Proceedings of the 13th ACM Conference on Computer and Communications Security, pp. 89–98 (2006).

  29. Hu Y., Jia H.: Cryptanalysis of GGH map. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 537–565. Springer (2016).

  30. Kitagawa F., Nishimaki R., Tanaka K., Yamakawa T.: Adaptively secure and succinct functional encryption: improving security and efficiency, simultaneously. In: Annual International Cryptology Conference, pp. 521–551. Springer (2019).

  31. Komargodski I., Segev G.: From minicrypt to obfustopia via private-key functional encryption. J. Cryptol. 33(2), 406–458 (2020).


  32. Lin H.: Indistinguishability obfuscation from SXDH on 5-linear maps and locality-5 PRGs. In: Annual International Cryptology Conference, pp. 599–629. Springer (2017).

  33. Lin H., Vaikuntanathan V.: Indistinguishability obfuscation from DDH-like assumptions on constant-degree graded encodings. In: Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pp. 11–20. IEEE (2016).

  34. Micciancio D., Peikert C.: Trapdoors for lattices: simpler, tighter, faster, smaller. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 700–718. Springer (2012).

  35. Miles E., Sahai A., Zhandry M.: Annihilation attacks for multilinear maps: cryptanalysis of indistinguishability obfuscation over GGH13. In: Annual Cryptology Conference, pp. 629–658. Springer (2016).

  36. O’Neill A.: Definitional issues in functional encryption. IACR Cryptol. ePrint Arch. 2010, 556 (2010). http://eprint.iacr.org/2010/556.

  37. Pellet-Mary A.: Quantum attacks against indistinguishablility obfuscators proved secure in the weak multilinear map model. In: Advances in Cryptology - CRYPTO 2018 - 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2018, Proceedings, Part III, pp. 153–183 (2018).

  38. Ryffel T., Pointcheval D., Bach F., Dufour-Sans E., Gay R.: Partially encrypted deep learning using functional encryption. Adv. Neural Inf. Process. Syst. 32, 4517–4528 (2019).


  39. Sahai A., Waters B.: Fuzzy identity-based encryption. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 457–473. Springer (2005).

  40. Shamir A.: Identity-based cryptosystems and signature schemes. In: Workshop on the Theory and Application of Cryptographic Techniques, pp. 47–53. Springer (1984).

  41. Wee H.: Functional encryption for quadratic functions from k-Lin, revisited. In: Theory of Cryptography Conference, pp. 210–228. Springer (2020).


Acknowledgements

We thank the reviewers for helpful discussions. Wonhee Cho received support from the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-6-00598, The mathematical structure of functional encryption and its analysis), Jiseung Kim was supported by KIAS Individual Grant CG078201 at Korea Institute for Advanced Study, and the last author was supported by LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Changmin Lee.

Additional information

Communicated by M. Albrecht.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A: Useful tools for computing the variances

In this section, we introduce useful lemmas that augment the computations used in [13]. Note that we consider random matrices A whose entries are independent.

Lemma A.1

Let \({ A }=(A_{i,j})\) be an \(n\times n\) random matrix where \(A_{i,t}\) and \(A_{j,t}\) are independent for every \(1 \le i < j \le n\) and \(1 \le t \le n\), and let \({ X }=[X_1, X_2, \ldots , X_n]\) be an n-dimensional random vector independent of \({ A }\). Assume the following conditions for all distinct \(i,j,k,l \in [n]\):

$$\begin{aligned}&E[X_i]=0,~E[X_i\cdot X_j]=0, ~ E[X_i^3\cdot X_j]=0, \\&E[X_i^2\cdot X_j\cdot X_k]=0,\text { and } E[X_i\cdot X_j\cdot X_k\cdot X_l]=0. \end{aligned}$$

Then, the n-dimensional random vector \({ Y }=[Y_1, Y_2, \ldots , Y_n] = A \cdot X\) satisfies the analogous constraints

$$\begin{aligned}&E[Y_i]=0,~E[Y_i\cdot Y_j]=0,~ E[Y_i^3\cdot Y_j]=0, \\&E[Y_i^2\cdot Y_j\cdot Y_k]=0, \text { and } E[Y_i\cdot Y_j\cdot Y_k\cdot Y_l]=0. \end{aligned}$$

for all distinct \(i,j,k,l \in [n]\).
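As a numerical sanity check (not part of the proof), the preserved moment conditions can be estimated by Monte Carlo sampling. The following sketch assumes independent standard Gaussian entries for \(A\) and \(X\), which satisfy the hypotheses of the lemma; the sample means of the listed mixed moments of \(Y\) should then be close to zero.

```python
import random

random.seed(1)
n, N = 3, 100000

def sample_Y():
    # A: n x n matrix with independent standard Gaussian entries;
    # X: n-dimensional vector with independent standard Gaussian entries.
    A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    X = [random.gauss(0, 1) for _ in range(n)]
    return [sum(A[i][t] * X[t] for t in range(n)) for i in range(n)]

sums = [0.0, 0.0, 0.0]
for _ in range(N):
    Y = sample_Y()
    sums[0] += Y[0]                      # estimates E[Y_1]
    sums[1] += Y[0] * Y[1]               # estimates E[Y_1 * Y_2]
    sums[2] += Y[0] ** 2 * Y[1] * Y[2]   # estimates E[Y_1^2 * Y_2 * Y_3]
means = [s / N for s in sums]
```

Each estimate concentrates around zero, in contrast to the non-vanishing moment \(E[Y_1^2]\), which is of constant size here.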

Lemma A.2

Let \(\{A_i = (A_i^{j,k})\}_{1 \le i \le t}\) be \(n \times n\) random matrices where

  • \(A_i^{j,k}\) follow a Gaussian distribution \({\mathcal {D}}_{\mathbb {Z},\sigma }\) for all \(1 \le j,k \le n\) and \(1 \le i \le t\),

  • \(A_i^{j,s}\) and \(A_i^{k,s}\) are independent for every \(1 \le j < k \le n\), \(1 \le s \le n\) and \(1 \le i \le t\),

  • \(A_{1}^{i_1, j_1} , \ldots ,A_{t}^{i_t,j_t}\) are mutually (entrywise) independent for every \(1 \le i_k,j_k \le n \) and all \(k \in [t]\),

and let \({ X }=(X_{i,j})=\prod _{k=1}^t A _k\) be the resulting \(n \times n\) random matrix. Then, for all \(i,j,k \in [n]\) with \(i \ne k\), it holds that

$$\begin{aligned}&E[X_{i,j}]=0, ~Var[X_{i,j}]=n^{t-1}\cdot (\sigma ^2)^t, \\&E[X_{i,j}^4]=3\left( n(n+2)\right) ^{t-1}\cdot (\sigma ^2)^{2t}, \\&E[X_{i,j}^2\cdot X_{k,j}^2]= \left( n(n+2)\right) ^{t-1}\cdot (\sigma ^2)^{2t} \end{aligned}$$
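These identities can be probed empirically. The sketch below is a Monte Carlo estimate under hypothetical parameters \(t=2\), \(n=3\), \(\sigma =1\) (using continuous Gaussians in place of the discrete Gaussian for simplicity); it estimates \(Var[X_{i,j}]\) and \(E[X_{i,j}^4]\) for the entry \(X_{1,1}\), which should be close to \(n^{t-1}\cdot (\sigma ^2)^t = 3\) and \(3(n(n+2))^{t-1}\cdot (\sigma ^2)^{2t} = 45\), respectively.

```python
import random

random.seed(0)
n, t, sigma, N = 3, 2, 1.0, 200000

# Entry (1,1) of X = A_1 * A_2 only involves row 1 of A_1 and column 1 of A_2.
m2 = m4 = 0.0
for _ in range(N):
    row = [random.gauss(0, sigma) for _ in range(n)]
    col = [random.gauss(0, sigma) for _ in range(n)]
    x = sum(r * c for r, c in zip(row, col))
    m2 += x * x
    m4 += x ** 4

var_est = m2 / N       # predicted: n**(t-1) * sigma**(2*t) = 3
fourth_est = m4 / N    # predicted: 3 * (n*(n+2))**(t-1) * sigma**(4*t) = 45
```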

Lemma A.3

Let \( A =(A_{i,j})\) be an \(n\times m\) random matrix whose entries satisfy \(E[A_{i,j}]=0\), \(E[A_{i,j}^2]=\sigma _1^2\) and \(E[A_{i,j}^4]\le C\sigma _1^4\) for all \(i\in [n], j\in [m]\) for some constant C; the entries of A need not be independent. Let \({ v }=[v_1,\ldots ,v_n]\) and \({ w }=[w_1,\ldots ,w_m]\) be n- and m-dimensional random vectors, respectively, whose entries are mutually independent and follow the Gaussian distribution \({\mathcal {D}}_{\mathbb {Z}, \sigma _2}\). If the entries of \( A \) are independent of the entries of \({ v }\) and \({ w }\), then \( Y ={ v }\cdot A \cdot { w }^{T}\) satisfies the following conditions:

$$\begin{aligned} E[ Y ]=0, ~E[ Y ^2]= nm\cdot \sigma _1^2\cdot \sigma _2^4,~~ E[ Y ^4]\le (nm)^4\cdot (C\sigma _1^4)\cdot (3\sigma _2^4)^2. \end{aligned}$$
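The second-moment identity of Lemma A.3 can likewise be checked numerically. The sketch below uses hypothetical parameters \(n=2\), \(m=3\), \(\sigma _1=\sigma _2=1\), with continuous Gaussians standing in for \({\mathcal {D}}_{\mathbb {Z},\sigma _2}\); the estimate of \(E[Y^2]\) should be close to \(nm\cdot \sigma _1^2\cdot \sigma _2^4 = 6\).

```python
import random

random.seed(2)
n, m, N = 2, 3, 200000
s1 = s2 = 1.0

m2 = 0.0
for _ in range(N):
    v = [random.gauss(0, s2) for _ in range(n)]
    w = [random.gauss(0, s2) for _ in range(m)]
    A = [[random.gauss(0, s1) for _ in range(m)] for _ in range(n)]
    # Y = v * A * w^T
    y = sum(v[i] * A[i][j] * w[j] for i in range(n) for j in range(m))
    m2 += y * y

est = m2 / N   # predicted: n * m * s1**2 * s2**4 = 6
```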

B: Proof of Lemmas

In this section, we provide the proofs of the lemmas in Sect. 4.3.

Proof

(rest of Lemma 4.2) Since these equations hold regardless of the value of i, it suffices to treat the case \(i=1\). Let \(E_{u,v}\), \(C_{u,v}\), and \(B_u\) denote the random variables given by the (u, v)-th or u-th entry of \( E _{2,1}\), \( C _1\), and \( B _1\), respectively. Also, the messages \(x_0, x_1\) are the random variables \(r\cdot u\) and \(r\cdot u +1\), where r and u follow the distribution \(D_{\mathbb {Z},\sigma }\).

$$\begin{aligned} {\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1 = \sum _{i=1}^n ru\left( \sum _{j=1}^m E_{i,j}\cdot B_j\right) \end{aligned}$$

When \((i,j)\) and \((i',j')\) are distinct, \(E[(ru)^2\cdot E_{i,j}B_j \cdot E_{i',j'}B_{j'}]\) is zero. Hence, the variance of \({\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1\) satisfies the following equality:

$$\begin{aligned} Var[{\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1]= & {} E\left[ \left( \sum _{i=1}^n ru\left( \sum _{j=1}^m E_{i,j}\cdot B_j\right) \right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n \sum _{j=1}^m r^2u^2 \cdot E_{i,j}^2 \cdot B_j^2\right] = nm\sigma ^4\cdot \sigma ^2\cdot \sigma '^2. \end{aligned}$$
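As an illustration of the computation above, the variance \(nm\sigma ^4\cdot \sigma ^2\cdot \sigma '^2\) can be spot-checked by sampling, here with hypothetical parameters \(n=m=2\) and \(\sigma =\sigma '=1\) (so the prediction is 4), and continuous Gaussians standing in for \(D_{\mathbb {Z},\sigma }\).

```python
import random

random.seed(4)
n, m, N = 2, 2, 200000

acc = 0.0
for _ in range(N):
    r, u = random.gauss(0, 1), random.gauss(0, 1)
    E = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
    B = [random.gauss(0, 1) for _ in range(m)]
    # J . x0 . E_{2,1} . B_1 = sum_i sum_j r*u*E_{i,j}*B_j
    s = sum(r * u * E[i][j] * B[j] for i in range(n) for j in range(m))
    acc += s * s

var_est = acc / N   # predicted: n*m * sigma^6 * sigma'^2 = 4
```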

Proof

(rest of Lemma 4.3) Since these equations hold regardless of the value of i, it suffices to treat the case \(i=1\). Let \(E_{u,v}\) and \(B_u\) denote the random variables given by the (u, v)-th or u-th entry of \( E _{2,1}\) and \( B _1\), respectively. Also, let \(x_0\) be the random variable ru, where r and u follow the distribution \(D_{\mathbb {Z},\sigma }\).

$$\begin{aligned} E[({\mathbf{J}}\cdot {x}_0 \cdot E _{2,1}\cdot B _1)^4]= & {} E\left[ \left( \sum _{i=1}^n\sum _{j=1}^m ruE_{i,j}\cdot B_j\right) ^4\right] \\\le & {} E\left[ (nm)^3\cdot \left( \sum _{i=1}^n\sum _{j=1}^m r^4u^4 E_{i,j}^4 \cdot B_j^4\right) \right] \\= & {} (nm)^3\cdot nm (9\sigma ^8)\cdot 3\sigma '^4\cdot 3\sigma ^4\\= & {} 3^4\cdot (nm)^4 \cdot (\sigma ^8)\sigma ^4\sigma '^4,\\ \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot E _{2,i}\cdot B _i)^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot E _{2,i}\cdot B _i)^2}\right|\le & {} \frac{3^4\cdot (nm)^4(\sigma ^8)\sigma ^4\sigma '^4}{(nm(\sigma ^4)\sigma ^2\sigma '^2)^2} = 3^4 (nm)^2. \end{aligned}$$

Proof

(rest of Lemma 4.4) Let \(E_{u}\) denote the random variable given by the u-th entry of \( E _{3,i}\). Let \(x_0, {x}_1\) be the random variables \(ru, ru+1\), where r, u follow the distribution \(D_{\mathbb {Z},\sigma }\), and let \((c_1,c_2,c_3,c_4)\) be the random vector \((-r_2t_{2,1}+t_{2,2}(r_2t_{1,1}-r_1)+t_{2,3}r_2t_{1,2},-r_2,r_2t_{1,1}-r_1,r_2t_{1,2})\), where \(r_1, r_2, t_{1,1},t_{1,2},t_{2,1},t_{2,2},t_{2,3}\) follow the distribution \(D_{\mathbb {Z},\sigma }\).

Now, we compute the variances \( Var[{\mathbf{J}}\cdot {x}_0\cdot c_i\cdot E _{3,i}]\), and \( Var[{\mathbf{J}}\cdot {x}_1\cdot c_i\cdot E _{3,i}]\) for each \(i \in [4]\).

  • \(Var[{\mathbf{J}}\cdot {x}_0\cdot c_1\cdot E _{3,1}]\): In this case, we can compute as follows.

    $$\begin{aligned}&Var[{\mathbf{J}}\cdot {x}_0\cdot c_1\cdot E _{3,1}]\\&\quad = E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2t_{2,1}+t_{2,2}r_2t_{1,1}-t_{2,2}r_1+t_{2,3}r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\&\quad = E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{2,1}^2+t_{2,2}^2r_2^2t_{1,1}^2+t_{2,2}^2r_1^2+t_{2,3}^2r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\&\quad = n\sigma ^4\cdot (2\sigma ^6+2\sigma ^4)\sigma '^2, \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_0\cdot c_2\cdot E _{3,2}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_0\cdot c_2\cdot E _{3,2}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^2)\sigma '^2 \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_0\cdot c_3\cdot E _{3,3}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_0\cdot c_3\cdot E _{3,3}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{1,1}^2+r_1^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^4+\sigma ^2)\sigma '^2 \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_0\cdot c_4\cdot E _{3,4}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_0\cdot c_4\cdot E _{3,4}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^4)\sigma '^2 \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_1\cdot c_1\cdot E _{3,1}]\): In this case, we can compute as follows.

    $$\begin{aligned}&Var[{\mathbf{J}}\cdot {x}_1\cdot c_1\cdot E _{3,1}]\\&\quad = E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (-r_2t_{2,1}+t_{2,2}r_2t_{1,1}-t_{2,2}r_1+t_{2,3}r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\&\quad = E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2t_{2,1}^2+t_{2,2}^2r_2^2t_{1,1}^2+t_{2,2}^2r_1^2+t_{2,3}^2r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\&\quad = n(\sigma ^4+1)\cdot (2\sigma ^6+2\sigma ^4)\sigma '^2 \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_1\cdot c_2\cdot E _{3,2}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_1\cdot c_2\cdot E _{3,2}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (-r_2)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^2)\sigma '^2 \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_1\cdot c_3\cdot E _{3,3}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_1\cdot c_3\cdot E _{3,3}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2t_{1,1}^2+r_1^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^4+\sigma ^2)\sigma '^2\\ \end{aligned}$$
  • \(Var[{\mathbf{J}}\cdot {x}_1\cdot c_4\cdot E _{3,4}]\): In this case, we can compute as follows.

    $$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_1\cdot c_4\cdot E _{3,4}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^4)\sigma '^2 \end{aligned}$$

    \(\square \)
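The variance computations above can be spot-checked numerically. The sketch below uses hypothetical parameters \(n=3\), \(\sigma =\sigma '=1\), with continuous Gaussians standing in for \(D_{\mathbb {Z},\sigma }\), and estimates the two variances for \(c_3 = r_2t_{1,1}-r_1\): the predictions are \(n\sigma ^4(\sigma ^4+\sigma ^2)\sigma '^2 = 6\) for \(x_0\) and \(n(\sigma ^4+1)(\sigma ^4+\sigma ^2)\sigma '^2 = 12\) for \(x_1\).

```python
import random

random.seed(3)
n, N = 3, 200000

acc0 = acc1 = 0.0
for _ in range(N):
    r, u, r1, r2, t11 = (random.gauss(0, 1) for _ in range(5))
    T = sum(random.gauss(0, 1) for _ in range(n))  # J . E_{3,3}: sum of the n entries
    c3 = r2 * t11 - r1
    acc0 += (r * u * c3 * T) ** 2        # x_0 = r*u
    acc1 += ((r * u + 1) * c3 * T) ** 2  # x_1 = r*u + 1

var0, var1 = acc0 / N, acc1 / N  # predicted: 6 and 12, respectively
```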

Proof

(rest of Lemma 4.5) It is enough to show that each ratio is smaller than \(4^3 3^6n^2\) or \(8^3 3^6 n^2\). Let \(E_{u}\) denote the random variable given by the u-th entry of \( E _{3,i}\). Let \(x_0, {x}_1\) be the random variables \(ru, ru+1\), where r, u follow the distribution \(D_{\mathbb {Z},\sigma }\), and let \((c_1,c_2,c_3,c_4)\) be the random vector \((-r_2t_{2,1}+t_{2,2}(r_2t_{1,1}-r_1)+t_{2,3}r_2t_{1,2},-r_2,r_2t_{1,1}-r_1,r_2t_{1,2})\), where \(r_1, r_2, t_{1,1},t_{1,2},t_{2,1},t_{2,2},t_{2,3}\) follow the distribution \(D_{\mathbb {Z},\sigma }\). Similarly to the above lemma, we can compute the following.

  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_1 E _{3,1})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_1 E _{3,1})^2}\right| \): We first compute an upper bound of \(E[({\mathbf{J}}\cdot {x}_0\cdot c_1\cdot E _{3,1})^4]\).

    $$\begin{aligned}&E[({\mathbf{J}}\cdot {x}_0\cdot c_1\cdot E _{3,1})^4]\\&\quad = E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2t_{2,1}+t_{2,2}r_2t_{1,1}-t_{2,2}r_1+t_{2,3}r_2t_{1,2})\cdot E_{i}\right) ^4\right] \\&\quad \le E\left[ (4n)^3\cdot \left( \sum _{i=1}^n r^4u^4\cdot (r_2^4t_{2,1}^4+t_{2,2}^4r_2^4t_{1,1}^4+t_{2,2}^4r_1^4+t_{2,3}^4r_2^4t_{1,2}^4)\cdot E_{i}^4\right) \right] \\&\quad = (4n)^3\cdot n\cdot 9\sigma ^8\cdot (54\sigma ^{12}+18\sigma ^8)\cdot 3\sigma '^4 \end{aligned}$$

    Then, the above lemma provides the value of the variance, so we can bound the desired ratio.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_1 E _{3,1})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_1 E _{3,1})^2}\right|\le & {} \frac{(4n)^3\cdot n\cdot 9\sigma ^8\cdot (54\sigma ^{12}+18\sigma ^8)\cdot 3\sigma '^4}{(n\sigma ^4(2\sigma ^6+2\sigma ^4)\sigma '^2)^2}\\\le & {} \frac{(4n)^3\cdot n\cdot 9\sigma ^8\cdot 54(\sigma ^{6}+\sigma ^4)^2\cdot 3\sigma '^4}{(n\sigma ^4(2\sigma ^6+2\sigma ^4)\sigma '^2)^2} \le 4^3 3^6 n^2 \end{aligned}$$

    The other cases are handled almost identically.

  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_2 E _{3,2})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_2 E _{3,2})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_0\cdot c_2\cdot E _{3,2})^4]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2)\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ n^3\cdot \left( \sum _{i=1}^n (r^4u^4)\cdot (r_2^4)\cdot E_{i}^4\right) \right] \\= & {} n^4\cdot 9\sigma ^8\cdot 3\sigma ^4\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_2 E _{3,2})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_2 E _{3,2})^2}\right|\le & {} \frac{n^4\cdot 9\sigma ^8\cdot 3\sigma ^4\cdot 3\sigma '^4}{(n\sigma ^6\sigma '^2)^2} = 3^4 n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_3 E _{3,3})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_3 E _{3,3})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_0 \cdot c_3 E _{3,3})^4]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ (2n)^3\cdot \left( \sum _{i=1}^n r^4u^4\cdot (r_2^4t_{1,1}^4+r_1^4)\cdot E_{i}^4\right) \right] \\= & {} (2n)^3\cdot n\cdot 9\sigma ^8 \cdot (9\sigma ^8+3\sigma ^4)\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_3 E _{3,3})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_3 E _{3,3})^2}\right|\le & {} \frac{(2n)^3\cdot n\cdot 9\sigma ^8 \cdot (9\sigma ^8+3\sigma ^4)\cdot 3\sigma '^4}{(n\sigma ^4(\sigma ^4+\sigma ^2)\sigma '^2)^2}\\\le & {} \frac{(2n)^3\cdot n\cdot 9\sigma ^8 \cdot 9(\sigma ^4+\sigma ^2)^2\cdot 3\sigma '^4}{(n\sigma ^4(\sigma ^4+\sigma ^2)\sigma '^2)^2}\\= & {} 2^3 3^5 n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_4 E _{3,4})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_4 E _{3,4})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_0 \cdot c_4 E _{3,4})^4]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,2})\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ n^3\cdot \left( \sum _{i=1}^n r^4u^4\cdot (r_2^4t_{1,2}^4)\cdot E_{i}^4\right) \right] \\= & {} n^3\cdot n\cdot 9\sigma ^8\cdot 9\sigma ^8\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot c_4 E _{3,4})^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot c_4 E _{3,4})^2}\right|\le & {} \frac{n^3\cdot n\cdot 9\sigma ^8\cdot 9\sigma ^8\cdot 3\sigma '^4}{(n\sigma ^4\sigma ^4\sigma '^2)^2}\\= & {} 3^5 n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_1 E _{3,1})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_1 E _{3,1})^2}\right| \): We first bound the numerator.

    $$\begin{aligned}&E[({\mathbf{J}}\cdot {x}_1\cdot c_1\cdot E _{3,1})^4]\\&\quad = E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (-r_2t_{2,1}+t_{2,2}r_2t_{1,1}-t_{2,2}r_1+t_{2,3}r_2t_{1,2})\cdot E_{i}\right) ^4\right] \\&\quad \le E\left[ (8n)^3\cdot \left( \sum _{i=1}^n (r^4u^4+1)\cdot (r_2^4t_{2,1}^4+t_{2,2}^4r_2^4t_{1,1}^4+t_{2,2}^4r_1^4+t_{2,3}^4r_2^4t_{1,2}^4)\cdot E_{i}^4\right) \right] \\&\quad = (8n)^3\cdot n\cdot (9\sigma ^8+1)\cdot (54\sigma ^{12}+18\sigma ^8)\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_1 E _{3,1})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_1 E _{3,1})^2}\right|\le & {} \frac{(8n)^3\cdot n\cdot (9\sigma ^8+1)\cdot (54\sigma ^{12}+18\sigma ^8)\cdot 3\sigma '^4}{(n(\sigma ^4+1)(2\sigma ^6+2\sigma ^4)\sigma '^2)^2}\\\le & {} \frac{(8n)^3\cdot n\cdot 9(\sigma ^4+1)^2\cdot 54(\sigma ^{6}+\sigma ^4)^2\cdot 3\sigma '^4}{(n(\sigma ^4+1)(2\sigma ^6+2\sigma ^4)\sigma '^2)^2} \le 8^3 3^6 n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_2 E _{3,2})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_2 E _{3,2})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_1 \cdot c_2 E _{3,2})^4]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (-r_2)\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ (2n)^3\cdot \left( \sum _{i=1}^n (r^4u^4+1)\cdot (r_2^4)\cdot E_{i}^4\right) \right] \\= & {} (2n)^3\cdot n \cdot (9\sigma ^8+1)\cdot 3\sigma ^4\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_2 E _{3,2})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_2 E _{3,2})^2}\right|\le & {} \frac{(2n)^3\cdot n \cdot (9\sigma ^8+1)\cdot 3\sigma ^4\cdot 3\sigma '^4}{(n(\sigma ^4+1)\sigma ^2\sigma '^2)^2}\\\le & {} \frac{(2n)^3\cdot n \cdot 9(\sigma ^4+1)^2\cdot 3\sigma ^4\cdot 3\sigma '^4}{(n(\sigma ^4+1)\sigma ^2\sigma '^2)^2} = 2^33^4n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_3 E _{3,3})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_3 E _{3,3})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_1 \cdot c_3 E _{3,3})^4]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ (4n)^3\cdot \left( \sum _{i=1}^n (r^4u^4+1)\cdot (r_2^4t_{1,1}^4+r_1^4)\cdot E_{i}^4\right) \right] \\= & {} (4n)^3\cdot n\cdot (9\sigma ^8+1) \cdot (9\sigma ^8+3\sigma ^4)\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_3 E _{3,3})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_3 E _{3,3})^2}\right|\le & {} \frac{(4n)^3\cdot n\cdot (9\sigma ^8+1) \cdot (9\sigma ^8+3\sigma ^4)\cdot 3\sigma '^4}{(n(\sigma ^4+1)(\sigma ^4+\sigma ^2)\sigma '^2)^2}\\\le & {} \frac{(4n)^3\cdot n\cdot 9(\sigma ^4+1)^2 \cdot 9(\sigma ^4+\sigma ^2)^2\cdot 3\sigma '^4}{(n(\sigma ^4+1)(\sigma ^4+\sigma ^2)\sigma '^2)^2} = 4^33^5n^2 \end{aligned}$$
  • \(\left| \dfrac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_4 E _{3,4})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_4 E _{3,4})^2}\right| \): We first bound the numerator.

    $$\begin{aligned} E[({\mathbf{J}}\cdot {x}_1 \cdot c_4 E _{3,4})^4]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,2})\cdot E_{i}\right) ^4\right] \\\le & {} E\left[ (2n)^3\cdot \left( \sum _{i=1}^n (r^4u^4+1)\cdot (r_2^4t_{1,2}^4)\cdot E_{i}^4\right) \right] \\= & {} (2n)^3\cdot n\cdot (9\sigma ^8+1)\cdot 9\sigma ^8\cdot 3\sigma '^4 \end{aligned}$$

    Thus, we can compute the absolute value of the ratio as follows.

    $$\begin{aligned} \left| \frac{E[({\mathbf{J}}\cdot {x}_1 \cdot c_4 E _{3,4})^4]}{Var({\mathbf{J}}\cdot {x}_1 \cdot c_4 E _{3,4})^2}\right|\le & {} \frac{(2n)^3\cdot n\cdot (9\sigma ^8+1)\cdot 9\sigma ^8\cdot 3\sigma '^4}{(n(\sigma ^4+1)\sigma ^4\sigma '^2)^2}\\\le & {} \frac{(2n)^3\cdot n\cdot 9(\sigma ^4+1)^2\cdot 9\sigma ^8\cdot 3\sigma '^4}{(n(\sigma ^4+1)\sigma ^4\sigma '^2)^2} =2^33^5n^2 \end{aligned}$$

\(\square \)

C: Alternative encoding method to instantiate a Lin’s FE from GGH15

The second issue is that the GGH15 multilinear map is a matrix-wise encoding map, not an entry-wise encoding map like GGH13 and CLT13. Thus, we encode a message vector into a diagonal matrix. Indeed, we will use \(23\times 23\) message matrices to obtain the encoded matrices \(\textsf {hCT}^1\), \(\textsf {hCT}^2\) and \(\textsf {hSK}\). More precisely, for \(\textsf {hCT}_{i,\ell }^1=\textsf {enc}_{i}(-r'_{i,l}\Vert r'_{i,l}\cdot {\mathbf{u}}_l + ({\mathbf{X}}_i^1\Vert {\mathbf{r}}_i^1\Vert {{\mathbf{0}}}_{11}))\), \(\textsf {hCT}_{j,\ell }^2 = \textsf {enc}_{j}(\langle {\mathbf{k}}_{j,l},{\mathbf{u}}_l\rangle \Vert {\mathbf{k}}_{j,l})\), and \(\textsf {hSK}=\{hsk_{k,\ell }\}\), the message matrices are defined as follows.

$$\begin{aligned} {\mathbf{M}}_{i,l}^1= & {} \mathbf{diag }(-r'_{i,l}\Vert r'_{i,l}\cdot {\mathbf{u}}_l + ({\mathbf{X}}_i^1\Vert {\mathbf{r}}_i^1\Vert {{\mathbf{0}}}_{11}))\\ {\mathbf{M}}_{j,l}^2= & {} \mathbf{diag }(\langle {\mathbf{k}}_{j,l},{\mathbf{u}}_l\rangle \Vert {\mathbf{k}}_{j,l}),\\ {\mathbf{M}}_{k,l}^3= & {} \mathbf{diag }(hsk_{k,l},hsk_{k,l},\ldots ,hsk_{k,l}) \end{aligned}$$

where \({\mathbf{k}}_{j,l} = ({\mathbf{d}}_l \odot ({\mathbf{X}}^2_n \Vert {\mathbf{r}}^2_n) \Vert \mathbf{0}_{11})\) for every \(i,j \in [n], l\in [23]\), and \(hsk_{k,l}\) is the l-th message of \(\textsf {hSK}_k\).

Therefore, we can obtain \( x_i\cdot x_j\cdot x_k +s_i^1\cdot s_j^2 \cdot s_k^3\cdot r_{ipe}\) by computing the following:

$$\begin{aligned} {\mathbf{J}}\cdot \sum _{l=1}^{23} {\mathbf{M}}_{i,l}^1 \cdot {\mathbf{M}}_{j,l}^2 \cdot {\mathbf{M}}_{k,l}^3\cdot {\mathbf{L}}= x_i\cdot x_j\cdot x_k +s_i^1\cdot s_j^2 \cdot s_k^3\cdot r_{ipe}. \end{aligned}$$
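Since diagonal matrices multiply slot-wise, the products above reduce to entry-wise products of the encoded messages, and the bookend vectors extract a sum of the slots. The following toy sketch (plain Python, with the secret bookend vectors \({\mathbf{J}}\), \({\mathbf{L}}\) simplified to all-ones vectors and none of the actual GGH15 randomness) illustrates why the all-diagonal encoding preserves this arithmetic.

```python
# Toy illustration of the all-diagonal encoding (not the actual GGH15
# encodings): messages sit on the diagonal, so matrix products multiply
# messages slot-wise, and the bookend vectors J, L (simplified here to
# all-ones, whereas the scheme keeps them secret) sum the diagonal slots.

def diag(v):
    n = len(v)
    return [[v[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

M1, M2, M3 = diag([2, 3]), diag([5, 7]), diag([1, -1])
prod = matmul(matmul(M1, M2), M3)   # diag(2*5*1, 3*7*(-1)) = diag(10, -21)

J = [[1, 1]]      # left bookend: 1 x 2 all-ones row
L = [[1], [1]]    # right bookend: 2 x 1 all-ones column
out = matmul(matmul(J, prod), L)[0][0]

# slot-wise products 10 and -21 are summed by J.(...).L
assert out == 10 - 21
```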

Let \({\mathbf{D}}_{i,l}^d\) be a level-d encoding of the message matrix \({\mathbf{M}}_{i,l}^d\) for each i, l, d. Then we obtain the following value by computing \({\mathbf{A}}_{\mathbf{J}}\cdot {\mathbf{D}}_{i,l}^1 \cdot {\mathbf{D}}_{j,l}^2 \cdot {\mathbf{D}}_{k,l}^3\).

$$\begin{aligned}&\sum _{i,j,k=1}^n\sum _{l=1}^{23} {\mathbf{A}}_{\mathbf{J}}\cdot {\mathbf{D}}_{i,l}^1 \cdot {\mathbf{D}}_{j,l}^2 \cdot {\mathbf{D}}_{k,l}^3 \cdot c_{i,j,k}\\&\quad =\sum _{i,j,k=1}^n c_{i,j,k}(x_i\cdot x_j \cdot x_k +s_i^1 \cdot s_j^2 \cdot s_k^3 \cdot r_{ipe}) \cdot {\mathbf{A}}_3+error_1 \pmod q \\&\quad = \left( f({\mathbf{x}})+r_{ipe} \langle \otimes {\mathbf{s}}^{\le 3}, C \rangle \right) \cdot {\mathbf{A}}_3+error_1 \pmod q \end{aligned}$$

Now, we will explain how to generate \(\textsf {tSK}\) and \(\textsf {tCT}\) using GGH15. Let \({\widetilde{\mathbf{M}}}^1\) and \({\widetilde{\mathbf{M}}}^2\) be \(23\times 23\) matrices \(\mathbf{diag }(-{r_{t,1}} \Vert {r_{t,1}}\cdot {\mathbf{t}}_2 + (\langle {\mathbf{a}},{\mathbf{t}}_1 \rangle \Vert {\mathbf{a}}\Vert {\mathbf{0}}))\) and \(\mathbf{diag }(\langle {\mathbf{t}}_2 , {\mathbf{b}}\rangle \Vert {\mathbf{b}}\Vert {\mathbf{0}})\), respectively, where \({\mathbf{a}}= (\langle \otimes {\mathbf{s}}^{\le 3},{\mathbf{c}}\rangle \Vert 0)\) and \({\mathbf{b}}= (-{r_{t,2}}\Vert {r_{t,2}}\cdot {\mathbf{t}}_1 + (-r_{ipe}\Vert 0))\). Similarly, we define

$$\begin{aligned} {\widetilde{\mathbf{D}}}^1 {{:}{=}} \textsf {enc}_{1}({\mathbf{I}}_{23}),~~ {\widetilde{\mathbf{D}}}^2 {{:}{=}} \textsf {enc}_{2}({\widetilde{\mathbf{M}}}^1 ),~~ {\widetilde{\mathbf{D}}}^3 {{:}{=}} \textsf {enc}_{3}({\widetilde{\mathbf{M}}}^2) \end{aligned}$$

Then we get the following modular equation

$$\begin{aligned} {\mathbf{A}}_{{\mathbf{J}}}\cdot {\widetilde{\mathbf{D}}}^1 \cdot {\widetilde{\mathbf{D}}}^2 \cdot {\widetilde{\mathbf{D}}}^3 = -r_{ipe} \langle \otimes {\mathbf{s}}^{\le 3},{\mathbf{c}}\rangle \cdot {\mathbf{A}}_3 + error_2 \pmod q. \end{aligned}$$

Combined with the above computation of \({\mathbf{A}}_{\mathbf{J}}\cdot {\mathbf{D}}_{i,l}^1 \cdot {\mathbf{D}}_{j,l}^2 \cdot {\mathbf{D}}_{k,l}^3\), we have

$$\begin{aligned} \displaystyle \sum _{i,j,k=1}^n\displaystyle \sum _{l=1}^{23} {\mathbf{A}}_{\mathbf{J}}\cdot {\mathbf{D}}_{i,l}^1 \cdot {\mathbf{D}}_{j,l}^2 \cdot {\mathbf{D}}_{k,l}^3 \cdot c_{i,j,k} + {\mathbf{A}}_{{\mathbf{J}}}\cdot {\widetilde{\mathbf{D}}}^1\cdot {\widetilde{\mathbf{D}}}^2 \cdot {\widetilde{\mathbf{D}}}^3 = f({\mathbf{x}})\cdot {\mathbf{A}}_3 + error_{1,{\mathbf{x}}} + error_{2,{\mathbf{x}}} \pmod q \end{aligned}$$

If \(f({\mathbf{x}})\) is zero, then the result consists only of the small error term \(error_1 + error_2\). Therefore, we can determine whether or not \(f({\mathbf{x}})\) is zero by checking the size of the result.

Cryptanalysis We again apply the statistical zeroizing attack, as in the cryptanalysis of Lin’s FE from GGH15 with the simple encoding. As before, our strategy is to obtain many independent and identically distributed samples from the distribution of the value \(error_1 + error_2\) and then to compute the sample variance of this distribution. This variance depends on the encrypted message, and it differs enough between the two messages to distinguish them.

Unfortunately, with the new encoding, the inner product of \(\textsf {tCT}\) and \(\textsf {tSK}\) cannot generate a low-level encoding of zero due to an error term. Thus, our analysis is slightly more complex than for the previous simple encoding. However, the computations are almost the same.

To obtain independent and identically distributed samples from the distributions of the zero-test values of \(\textsf {FE.CT}({\mathbf{x}}_0)\) and \(\textsf {FE.CT}({\mathbf{x}}_1)\) without secret keys for two fixed messages \({\mathbf{x}}_0\) and \({\mathbf{x}}_1\), we fully reconstruct Lin’s FE, sampling all secret and random elements or vectors from the same distributions as in the original scheme. Since FE runs in polynomial time, any adversary can generate such samples. Therefore, by the standard hybrid argument, distinguishing the two distributions given one sample and given polynomially many samples are equivalent problems. We now assume that an adversary has polynomially many samples that follow the distributions of the zero-test values of \(\textsf {FE.CT}({\mathbf{x}}_0)\) and \(\textsf {FE.CT}({\mathbf{x}}_1)\), respectively.
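The sample-variance distinguisher at the heart of the attack can be sketched as follows. This is a toy model, not the scheme itself: the two variances 1.0 and 2.0 are hypothetical stand-ins for the predicted variances of the two error distributions, and Gaussians replace the actual zero-test values.

```python
import random
import statistics

random.seed(0)

# Toy stand-ins for the two zero-test error distributions: hypothetical
# variances 1.0 (message x_0) and 2.0 (message x_1); in the attack the
# gap comes from terms such as m*sigma^2*sigma'^2 in the variance formulas.
N = 20_000
samples_x0 = [random.gauss(0, 1.0) for _ in range(N)]
samples_x1 = [random.gauss(0, 2.0 ** 0.5) for _ in range(N)]

def guess_message(samples, threshold=1.5):
    """Guess which message was encrypted by comparing the sample variance
    to a threshold placed between the two predicted variances."""
    return 0 if statistics.pvariance(samples) < threshold else 1

assert guess_message(samples_x0) == 0
assert guess_message(samples_x1) == 1
```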

Let \(X_{x_b}\) be the random variable that corresponds to the matrix \(f({\mathbf{x}}_b) \cdot {\mathbf{A}}_3 + error_{1,b}+error_{2,b}\). Then, all computations are the same as in the cryptanalysis in Sect. 4.3.

Consider the degree 3 polynomial \(f({\mathbf{x}}) = x_1^3+x_2^3\), which is our target polynomial. Then, we can determine whether or not \(f({\mathbf{x}})\) is zero by computing the operations on random variables below. We again use capital bold letters to denote random matrices whose entries follow the distribution of the corresponding bold matrix.

Let \({\mathbf{x}}_0\) and \({\mathbf{x}}_1\) be (0, 0) and \((1,-1)\), respectively. The two vectors satisfy \(f({\mathbf{x}}_0)=f({\mathbf{x}}_1)=0\). Our goal is to distinguish the two distributions \( X _{{\mathbf{x}}_0,f}\) and \( X _{{\mathbf{x}}_1,f}\) that follow the zero-test values of \(\textsf {FE.CT}({\mathbf{x}}_0)\) and \(\textsf {FE.CT}({\mathbf{x}}_1)\), respectively. Distinguishing them implies that the IND security of Lin’s FE over the GGH15 multilinear map does not hold.

$$\begin{aligned} X _{{\mathbf{x}}_b,f}&{{:}{=}} {\mathbf{A}}_{{\mathbf{J}}}(\sum _{l=1}^{23} D _{1,l,{\mathbf{x}}_b}^1 D _{1,l,{\mathbf{x}}_b}^2 D _{1,l,{\mathbf{x}}_b}^3 +\sum _{l=1}^{23} D _{2,l,{\mathbf{x}}_b}^1 D _{2,l,{\mathbf{x}}_b}^2 D _{2,l,{\mathbf{x}}_b}^3 - {\widetilde{ D }}_{{\mathbf{x}}_b}^1{\widetilde{ D }}_{{\mathbf{x}}_b}^2 {\widetilde{ D }}_{{\mathbf{x}}_b}^3) \\&\equiv _q f({\mathbf{x}}_b) {\mathbf{A}}_3+ {\mathbf{J}}\cdot \sum _{\begin{array}{c} 1\le l \le 23\\ 1\le i \le 2 \\ 1\le j \le 3 \end{array}} \left( \prod _{k=1}^{j-1} M _{i,l,{\mathbf{x}}_b}^k E _{i,l,{\mathbf{x}}_b}^j \prod _{m=j+1}^3 D _{i,l,{\mathbf{x}}_b}^m\right) \\&\quad - {\mathbf{J}}\cdot \sum _{j=1}^{3} \prod _{k=1}^{j-1} {\widetilde{ M }}_{{\mathbf{x}}_b}^k {\widetilde{ E }}_{{\mathbf{x}}_b}^j \prod _{m=j+1}^3 {\widetilde{ D }}_{{\mathbf{x}}_b}^m\\ \end{aligned}$$

For convenience, we introduce new notations \( X _{{\mathbf{x}}_b,f}^{i,j,l}\) and \({\widetilde{ X }}_{{\mathbf{x}}_b,f}^{j}\), which are distributions over \(\mathbb {Z}\) (abbreviated as \( X _{{\mathbf{x}}_b}^{i,j,l}\) and \({\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\)), as follows.

$$\begin{aligned} X _{{\mathbf{x}}_b,f}^{i,j,l}= & {} {\mathbf{J}}\cdot \prod _{k=1}^{j-1} M _{i,l,{\mathbf{x}}_b}^k E _{i,l,{\mathbf{x}}_b}^j \prod _{m=j+1}^3 D _{i,l,{\mathbf{x}}_b}^m\\ {\widetilde{ X }}_{{\mathbf{x}}_b,f}^{j}= & {} {\mathbf{J}}\cdot \prod _{k=1}^{j-1} {\widetilde{ M }}_{{\mathbf{x}}_b}^k {\widetilde{ E }}_{{\mathbf{x}}_b}^j \prod _{m=j+1}^3 {\widetilde{ D }}_{{\mathbf{x}}_b}^m \end{aligned}$$

To prove that \(\mathfrak {P,Q,R}\) in Lemma 4.1 are \(poly(\lambda )\), we state the statistical values of \( X _{{\mathbf{x}}_b}^{i,j,l}\) and \({\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\) for every i, j, l, b. The proofs of these lemmas are given in Appendix E.

Lemma C.1

For every \(l=1,\ldots ,23\), \(i=1,2\), \(j=1,2,3\), \(b=0,1\),

$$\begin{aligned}&E[ X _{{\mathbf{x}}_b}^{i,j,l}]=E[{\mathbf{J}}\prod _{k=1}^{j-1} M _{i,l,{\mathbf{x}}_b}^k E _{i,l,{\mathbf{x}}_b}^j \prod _{m=j+1}^3 D _{i,l,{\mathbf{x}}_b}^m]=0,\\&E[{\widetilde{ X }}_{{\mathbf{x}}_b}^{j}]=E[{\mathbf{J}}\prod _{k=1}^{j-1} {\widetilde{ M }}_{{\mathbf{x}}_b}^k {\widetilde{ E }}_{{\mathbf{x}}_b}^j \prod _{m=j+1}^3 {\widetilde{ D }}_{{\mathbf{x}}_b}^m]=0. \end{aligned}$$

Lemma C.2

For every \(l,l'=1,\ldots ,23\), \(i,i'=1,2\), \(j,j'=1,2,3\), \(b=0,1\),

$$\begin{aligned}&E[ X _{{\mathbf{x}}_b}^{i,j,l}\cdot X _{{\mathbf{x}}_{b}}^{i',j',l'}]=0,E[ X _{{\mathbf{x}}_b}^{i,j,l}\cdot {\widetilde{ X }}_{{\mathbf{x}}_b}^{j'}]=0,E[{\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\cdot {\widetilde{ X }}_{{\mathbf{x}}_{b}}^{j'}]=0 \end{aligned}$$

where (ijl) and \((i',j',l')\) are different.

Lemma C.3

For every \(l=1,\ldots ,23\), \(i=1,2\), \(b=0,1\), it holds that

$$\begin{aligned} Var( X _{{\mathbf{x}}_b}^{i,1,l})=Var({\mathbf{J}} E _{i,l,{\mathbf{x}}_b}^1 D _{i,l,{\mathbf{x}}_b}^2 D _{i,l,{\mathbf{x}}_b}^3)= & {} 23m^2 \sigma ^4\sigma '^2,\\ \left| \frac{E[( X _{{\mathbf{x}}_b}^{i,1,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,1,l})^2} \right| = \left| \frac{E[({\mathbf{J}} E _{i,l,{\mathbf{x}}_b}^1 D _{i,l,{\mathbf{x}}_b}^2 D _{i,l,{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}} E _{i,l,{\mathbf{x}}_b}^1 D _{i,l,{\mathbf{x}}_b}^2 D _{i,l,{\mathbf{x}}_b}^3)^2}\right|\le & {} 3^3\cdot (23m)^2.\\ \end{aligned}$$

Lemma C.4

For every \(l=1,\ldots ,23\), \(i=1,2\), \(b=0,1\), it holds that

$$\begin{aligned} Var( X _{{\mathbf{x}}_0}^{i,2,l})=Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_0}^1 E _{i,l,{\mathbf{x}}_0}^2 D _{i,l,{\mathbf{x}}_0}^3)= & {} (4\sigma ^2+22\sigma ^4)m\cdot \sigma ^2 \cdot \sigma '^2,\\ Var( X _{{\mathbf{x}}_1}^{i,2,l})=Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_1}^1 E _{i,l,{\mathbf{x}}_1}^2 D _{i,l,{\mathbf{x}}_1}^3)= & {} (4\sigma ^2+22\sigma ^4+1)m\cdot \sigma ^2 \cdot \sigma '^2,\\ \left| \frac{E[( X _{{\mathbf{x}}_b}^{i,2,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,2,l})^2}\right| =\left| \frac{E[({\mathbf{J}} M _{i,l,{\mathbf{x}}_b}^1 E _{i,l,{\mathbf{x}}_b}^2 D _{i,l,{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_b}^1 E _{i,l,{\mathbf{x}}_b}^2 D _{i,l,{\mathbf{x}}_b}^3)^2}\right|\le & {} 3^4\cdot 23^4\cdot m^2.\\ \end{aligned}$$

Lemma C.5

For every \(l=1,\ldots ,23\), \(i=1,2\), and \(b=0,1\), with \(Var( X _{{\mathbf{x}}_b}^{i,3,l})=Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_b}^1 M _{i,l,{\mathbf{x}}_b}^2 E _{i,l,{\mathbf{x}}_b}^3)\), it holds that

$$\begin{aligned} Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_0}^1 M _{i,l,{\mathbf{x}}_0}^2 E _{i,l,{\mathbf{x}}_0}^3)= & {} {\left\{ \begin{array}{ll} (\sigma ^6+2\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l= 1\\ (18\sigma ^{10}+15\sigma ^8+3\sigma ^6)\cdot \sigma '^2 ~\text{ for }~ l=2\\ (2\sigma ^8+3\sigma ^6+\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l=3\\ (4\sigma ^8+2\sigma ^6)\cdot \sigma '^2 ~\text{ for }~ l=4,6,\ldots ,12\\ (4\sigma ^8+4\sigma ^6+\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l=5\\ (2\sigma ^8+\sigma ^6)\cdot \sigma '^2 ~\text{ for }~ l=13,\ldots ,23\\ \end{array}\right. }\\ Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_1}^1 M _{i,l,{\mathbf{x}}_1}^2 E _{i,l,{\mathbf{x}}_1}^3)= & {} {\left\{ \begin{array}{ll} (\sigma ^6+2\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l= 1\\ (18\sigma ^{10}+15\sigma ^8+5\sigma ^6+\sigma ^2)\cdot \sigma '^2 ~\text{ for }~ l=2\\ (2\sigma ^8+3\sigma ^6+\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l=3\\ (4\sigma ^8+2\sigma ^6+2\sigma ^4+1)\cdot \sigma '^2 ~\text{ for }~ l=4\\ (4\sigma ^8+4\sigma ^6+\sigma ^4)\cdot \sigma '^2 ~\text{ for }~ l=5\\ (4\sigma ^8+2\sigma ^6)\cdot \sigma '^2 ~\text{ for }~ l=6,\ldots ,12\\ (2\sigma ^8+\sigma ^6)\cdot \sigma '^2 ~\text{ for }~ l=13,\ldots ,23\\ \end{array}\right. }\\ \left| \dfrac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}\right|= & {} \left| \dfrac{E[({\mathbf{J}} M _{i,l,{\mathbf{x}}_b}^1 M _{i,l,{\mathbf{x}}_b}^2 E _{i,l,{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}} M _{i,l,{\mathbf{x}}_b}^1 M _{i,l,{\mathbf{x}}_b}^2 E _{i,l,{\mathbf{x}}_b}^3)^2}\right| \le 3^6\cdot (39\cdot 23)^2\\ \end{aligned}$$

Lemma C.6

For every \(b=0,1\), it holds that

$$\begin{aligned} Var[{\widetilde{ X }}_{{\mathbf{x}}_b}^{1}]= & {} Var({\mathbf{J}}{\widetilde{ E }}_{{\mathbf{x}}_b}^1{\widetilde{ D }}_{{\mathbf{x}}_b}^2{\widetilde{ D }}_{{\mathbf{x}}_b}^3)= 23m^2\sigma ^4\sigma '^2,\\ Var[{\widetilde{ X }}_{{\mathbf{x}}_b}^{2}]= & {} Var({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ E }}_{{\mathbf{x}}_b}^2{\widetilde{ D }}_{{\mathbf{x}}_b}^3)= 23m\sigma ^2\sigma '^2\\ Var[{\widetilde{ X }}_{{\mathbf{x}}_b}^{3}]= & {} Var({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ M }}_{{\mathbf{x}}_b}^2{\widetilde{ E }}_{{\mathbf{x}}_b}^3)=(2\sigma ^6+4\sigma ^4+2\sigma ^2)\cdot \sigma '^2 ~\text{ and } \\ \left| \frac{E[({\mathbf{J}}{\widetilde{ E }}_{{\mathbf{x}}_b}^1{\widetilde{ D }}_{{\mathbf{x}}_b}^2{\widetilde{ D }}_{{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}}{\widetilde{ E }}_{{\mathbf{x}}_b}^1{\widetilde{ D }}_{{\mathbf{x}}_b}^2{\widetilde{ D }}_{{\mathbf{x}}_b}^3)^2}\right|\le & {} 3^3\cdot (23m)^2,\\ \left| \frac{E[({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ E }}_{{\mathbf{x}}_b}^2 {\widetilde{ D }}_{{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ E }}_{{\mathbf{x}}_b}^2{\widetilde{ D }}_{{\mathbf{x}}_b}^3)^2}\right|\le & {} 3^2\cdot 23^2\\ \left| \frac{E[({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ M }}_{{\mathbf{x}}_b}^2 {\widetilde{ E }}_{{\mathbf{x}}_b}^3)^4]}{Var({\mathbf{J}}{\widetilde{ M }}_{{\mathbf{x}}_b}^1{\widetilde{ M }}_{{\mathbf{x}}_b}^2{\widetilde{ E }}_{{\mathbf{x}}_b}^3)^2}\right|\le & {} 3^4\cdot (8\cdot 23)^2 . \end{aligned}$$

To achieve our objective, it is sufficient to show that \(\mathfrak {P},\mathfrak {Q}\), and \(\mathfrak {R}\) in Lemma 4.1 are \(poly(\lambda )\). First, for proving that \(\mathfrak {P}\) is \(poly(\lambda )\), we observe several statistical properties.

  • Since the covariances between the \( X _{x_b}^{i,j,l}\) and \({\widetilde{ X }}_{x_b}^{j}\) are zero, \(Var( X _{{\mathbf{x}}_b})\) is simply the sum of their variances.

  • We find the following upper and lower bounds

    $$\begin{aligned} m\cdot \sigma ^2 \sigma '^2&\le Var( X _{{\mathbf{x}}_1})-Var( X _{{\mathbf{x}}_0}), \\ Var( X _{x_b}^{i,j,l}),Var({\widetilde{ X }}_{x_b}^{j})&\le 23\cdot m^2\cdot (18\sigma ^{10}+15\sigma ^8+5\sigma ^6+\sigma ^2)\cdot \sigma '^2 \end{aligned}$$

    for all i, j, l and \(b \in \{0,1\}\).

Therefore, it holds that

$$\begin{aligned} \dfrac{\max (Var( X _{{\mathbf{x}}_0}),Var( X _{{\mathbf{x}}_1}))}{\left| Var( X _{{\mathbf{x}}_1})-Var( X _{{\mathbf{x}}_0})\right| }&\le \dfrac{140\cdot 23\cdot m^2\cdot (18\sigma ^{10}+15\sigma ^8+5\sigma ^6+\sigma ^2)\cdot \sigma '^2}{m\cdot \sigma ^2 \cdot \sigma '^2}\\&=poly(\lambda ). \end{aligned}$$

Second, we prove that \(\mathfrak {Q}\) and \(\mathfrak {R}\) are bounded by \(poly(\lambda )\) using the above lemmas.

$$\begin{aligned}&\frac{E[( X _{{\mathbf{x}}_b})^4]}{Var( X _{{\mathbf{x}}_b})^2} = \frac{ E\left[ (\displaystyle \sum _{i,j,l} X _{{\mathbf{x}}_b}^{i,j,l} - \displaystyle \sum _{j} {\widetilde{ X }}_{{\mathbf{x}}_b}^{j})^4\right] }{ Var( X _{{\mathbf{x}}_b})^2 }\\&\le 140^3\cdot \left( \frac{\displaystyle \sum _{i,j,l} E[( X _{{\mathbf{x}}_b}^{i,j,l})^4] + \sum _{j} E[({\widetilde{ X }}_{{\mathbf{x}}_b}^{j})^4]}{ Var( X _{{\mathbf{x}}_b})^2 }\right) \\&\le 140^3\cdot \left( \displaystyle \sum _{i,j,l}\frac{ E[( X _{{\mathbf{x}}_b}^{i,j,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,j,l})^2} + \sum _{j} \frac{E[({\widetilde{ X }}_{{\mathbf{x}}_b}^{j})^4]}{Var({\widetilde{ X }}_{{\mathbf{x}}_b}^{j})^2}\right) \\&\le 140^4\cdot ( 3^4\cdot 23^4\cdot m^2 + 3^6 \cdot (39\cdot 23)^2 + 3^3\cdot (23m)^2)\\&= poly(\lambda ). \end{aligned}$$

Finally, \(\mathfrak {P},\mathfrak {Q}\), and \(\mathfrak {R}\) satisfy the condition of Lemma 4.1. Therefore, the sample variances of the error distributions from the zero-test values of \(\textsf {FE.CT}(x_0)\) and \(\textsf {FE.CT}(x_1)\) can be distinguished. In other words, Lin’s FE cannot achieve the desired IND security.
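The requirement that \(\mathfrak {P},\mathfrak {Q}\), and \(\mathfrak {R}\) be \(poly(\lambda )\) translates into a polynomial sample complexity for the attack. The helper below is a rough Chebyshev-style estimate under an i.i.d.-samples assumption (our own simplification for illustration, not the exact statement of Lemma 4.1).

```python
import math

def samples_for_distinguishing(P, Q):
    """Rough Chebyshev-style estimate (a simplification, not the exact
    bound of Lemma 4.1): with P = max(Var)/|Var1 - Var0| and Q an upper
    bound on the kurtosis ratio E[X^4]/Var(X)^2, about 8*Q*P^2 samples
    make the sample variance concentrate within half the variance gap."""
    return math.ceil(8 * Q * P * P)

# Both P and Q are poly(lambda), so polynomially many samples suffice.
assert samples_for_distinguishing(10, 27) == 21600
```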

D: Proof of Lemmas

In this section, we provide the proofs of the lemmas in Sect. 4.

Proof

(rest of Lemma 4.2) Since these equations hold regardless of the value of i, it is sufficient to deal only with the case \(i=1\). Let \(E_{u,v}\), \(C_{u,v}\), and \(B_u\) be the random variables of the (u, v)-th or u-th entry of the random variables \( E _{2,1}\), \( C _1\), and \( B _1\), respectively. Also, the messages \(x_0, x_1\) correspond to the random variables \(r\cdot u, r\cdot u +1\), where r, u follow the distribution \(D_{\mathbb {Z},\sigma }\).

$$\begin{aligned} {\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1 = \sum _{i=1}^n ru\left( \sum _{j=1}^m E_{i,j}\cdot B_j\right) \end{aligned}$$

When \((i,j)\) and \((i',j')\) are different, \(E[(ru)^2\cdot E_{i,j}B_j \cdot E_{i',j'}B_{j'}]\) is zero. Then, the variance of \({\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1\) satisfies the following equality:

$$\begin{aligned} Var[{\mathbf{J}}\cdot x_0\cdot E _{2,1} \cdot B _1]= & {} E\left[ \left( \sum _{i=1}^n ru\left( \sum _{j=1}^m E_{i,j}\cdot B_j\right) \right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n \sum _{j=1}^m r^2u^2 \cdot E_{i,j}^2 \cdot B_j^2\right] = nm\sigma ^4\cdot \sigma ^2\cdot \sigma '^2. \end{aligned}$$

\(\square \)
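The variance formula just derived can be sanity-checked numerically. The sketch below is a Monte Carlo approximation with standard normals standing in for the discrete Gaussians \(D_{\mathbb {Z},\sigma }\) (an assumption made only for this illustration), so with \(\sigma =\sigma '=1\) the predicted variance is simply nm.

```python
import random

random.seed(1)

# Monte Carlo sanity check of Var[J.x0.E_{2,1}.B_1] = n*m*sigma^4*sigma^2*sigma'^2
# with sigma = sigma' = 1, so the prediction is n*m.  Standard normals
# approximate the discrete Gaussians here.
n, m, N = 2, 3, 100_000
total = 0.0
for _ in range(N):
    r, u = random.gauss(0, 1), random.gauss(0, 1)
    B = [random.gauss(0, 1) for _ in range(m)]
    E = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
    v = sum(r * u * sum(E[i][j] * B[j] for j in range(m)) for i in range(n))
    total += v * v

var_hat = total / N  # the mean is 0, so E[v^2] estimates the variance
assert abs(var_hat - n * m) < 0.8
```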

Proof

(rest of Lemma 4.3) Since these equations hold regardless of the value of i, it is sufficient to deal only with the case \(i=1\). Let \(E_{u,v}\) and \(B_u\) be the random variables of the (u, v)-th or u-th entry of the random variables \( E _{2,1}\) and \( B _1\), respectively. Also, let \(x_0\) be the random variable ru, where r, u follow the distribution \(D_{\mathbb {Z},\sigma }\).

$$\begin{aligned} E[({\mathbf{J}}\cdot {x}_0 \cdot E _{2,1}\cdot B _1)^4]= & {} E\left[ \left( \sum _{i=1}^n\sum _{j=1}^m ruE_{i,j}\cdot B_j\right) ^4\right] \\\le & {} E\left[ (nm)^3\cdot \left( \sum _{i=1}^n\sum _{j=1}^m r^4u^4 E_{i,j}^4 \cdot B_j^4\right) \right] \\= & {} (nm)^3\cdot nm\cdot 9\sigma ^8\cdot 3\sigma '^4\cdot 3\sigma ^4\\= & {} 3^4\cdot (nm)^4 \cdot \sigma ^8\sigma ^4\sigma '^4,\\ \left| \frac{E[({\mathbf{J}}\cdot {x}_0 \cdot E _{2,i}\cdot B _i)^4]}{Var({\mathbf{J}}\cdot {x}_0 \cdot E _{2,i}\cdot B _i)^2}\right|\le & {} \frac{3^4\cdot (nm)^4\sigma ^8\sigma ^4\sigma '^4}{(nm\sigma ^4\sigma ^2\sigma '^2)^2} = 3^4 (nm)^2. \end{aligned}$$

\(\square \)

Proof

(rest of Lemma 4.4) Let \(E_{u}\) be the random variable of the u-th element of the random variable \( E _{3,i}\). Let \(x_0, {x}_1\) be the random variables \(ru, ru+1\), where r, u follow the distribution \(D_{\mathbb {Z},\sigma }\), and let \((c_1,c_2,c_3,c_4)\) be the random variable \((-r_2t_{2,1}+t_{2,2}(r_2t_{1,1}-r_1)+t_{2,3}r_2t_{1,2},-r_2,r_2t_{1,1}-r_1,r_2t_{1,2})\), where \(r_1, r_2, t_{1,1},t_{1,2},t_{2,1},t_{2,2},t_{2,3}\) follow the distribution \(D_{\mathbb {Z},\sigma }\). Now, we compute the variances for each case.

$$\begin{aligned} Var[{\mathbf{J}}\cdot {x}_0\cdot c_1\cdot E _{3,1}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2t_{2,1}+t_{2,2}r_2t_{1,1}-t_{2,2}r_1+t_{2,3}r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{2,1}^2+t_{2,2}^2r_2^2t_{1,1}^2+t_{2,2}^2r_1^2+t_{2,3}^2r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (2\sigma ^6+2\sigma ^4)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_0\cdot c_2\cdot E _{3,2}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (-r_2)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^2)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_1\cdot c_2\cdot E _{3,2}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (-r_2)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^2)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_0\cdot c_3\cdot E _{3,3}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{1,1}^2+r_1^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^4+\sigma ^2)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_1\cdot c_3\cdot E _{3,3}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,1}-r_1)\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2t_{1,1}^2+r_1^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^4+\sigma ^2)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_0\cdot c_4\cdot E _{3,4}]= & {} E\left[ \left( \sum _{i=1}^n ru\cdot (r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2)\cdot (r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\= & {} n\sigma ^4\cdot (\sigma ^4)\sigma '^2,\\ Var[{\mathbf{J}}\cdot {x}_1\cdot c_4\cdot E _{3,4}]= & {} E\left[ \left( \sum _{i=1}^n (ru+1)\cdot (r_2t_{1,2})\cdot E_{i}\right) ^2\right] \\= & {} E\left[ \sum _{i=1}^n (r^2u^2+1)\cdot (r_2^2t_{1,2}^2)\cdot E_{i}^2\right] \\= & {} n(\sigma ^4+1)\cdot (\sigma ^4)\sigma '^2. \end{aligned}$$

\(\square \)
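One case of these variance formulas can be sanity-checked numerically. As in the previous check, standard normals stand in for the discrete Gaussians (an approximation assumed only for this sketch), so with \(\sigma =\sigma '=1\) the formula \(Var[{\mathbf{J}}\cdot {x}_1\cdot c_4\cdot E _{3,4}] = n(\sigma ^4+1)\sigma ^4\sigma '^2\) predicts 2n.

```python
import random

random.seed(2)

# Monte Carlo check of one case of Lemma 4.4:
# Var[J.x1.c4.E_{3,4}] = n*(sigma^4+1)*sigma^4*sigma'^2, with
# sigma = sigma' = 1 and standard normals approximating the discrete
# Gaussians, so the prediction is 2n.
n, N = 2, 100_000
total = 0.0
for _ in range(N):
    r, u = random.gauss(0, 1), random.gauss(0, 1)      # x1 = r*u + 1
    r2, t12 = random.gauss(0, 1), random.gauss(0, 1)   # c4 = r2*t12
    E = [random.gauss(0, 1) for _ in range(n)]
    v = sum((r * u + 1) * (r2 * t12) * e for e in E)
    total += v * v

var_hat = total / N  # the mean is 0, so E[v^2] estimates the variance
assert abs(var_hat - 2 * n) < 0.8
```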

Proof

(rest of Lemma 4.5)

\(\square \)

E: Proof of Lemmas for alternative encoding

In this section, we provide the proofs of the lemmas in Appendix C.

Proof

(of Lemmas C.1 and C.2) We observe that \( X _{{\mathbf{x}}_b}^{i,j,l}\) and \({\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\) each contain \( E _{{\mathbf{x}}_b}\) as a factor. Therefore, their expectations are zero.

When (i, l) is equal to \((i',l')\), \( X _{{\mathbf{x}}_b}^{i,j,l}\) and \( X _{{\mathbf{x}}_b}^{i',j',l'}\) are dependent random variables. We assume that \(j<j'\). Note that the random variable \( E _{i,l,{\mathbf{x}}_b}^{j}\) depends only on \( D _{i,l,{\mathbf{x}}_b}^{j}\), and the random variable \( X _{{\mathbf{x}}_b}^{i',j',l'}\) does not contain both of these random variables at the same time. Also, the expectation of \( E _{i,l,{\mathbf{x}}_b}^{j}\) is the zero matrix. Therefore, \(E[ X _{{\mathbf{x}}_b}^{i,j,l}\cdot X _{{\mathbf{x}}_b}^{i',j',l'}]=E[ E _{i,l,{\mathbf{x}}_b}^{j}]\cdot E[*]\) is zero. In the other cases, \( X _{{\mathbf{x}}_b}^{i,j,l}\) and \( X _{{\mathbf{x}}_b}^{i',j',l'}\) are independent random variables, so the covariance \(E[ X _{{\mathbf{x}}_b}^{i,j,l}\cdot X _{{\mathbf{x}}_b}^{i',j',l'}]\) is zero. Similarly, the expectations of \( X _{{\mathbf{x}}_b}^{i,j,l}\cdot {\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\) and \({\widetilde{ X }}_{{\mathbf{x}}_b}^{j}\cdot {\widetilde{ X }}_{{\mathbf{x}}_b}^{j'}\) are zero. \(\square \)

Proof

(of Lemma C.3)

Let \(Z_{i,l,{\mathbf{x}}_b}^u\) and \(Y_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th element of the random vectors \( D _{i,l,{\mathbf{x}}_b}^2\cdot D _{i,l,{\mathbf{x}}_b}^3\) and \( E _{i,l,{\mathbf{x}}_b}^1\cdot D _{i,l,{\mathbf{x}}_b}^2\cdot D _{i,l,{\mathbf{x}}_b}^3\), respectively. Then, we easily obtain that for all u, the random variables \(Z_{i,l,{\mathbf{x}}_b}^{u}\) have variance \(m\sigma ^4\).

Let \(E_{u,v}\) be the random variables of (uv)-th entry of the random variable \( E _{i,l,{\mathbf{x}}_b}^{1}\). Then we can compute the variance and kurtosis of \(Y_{i,l,{\mathbf{x}}_b}^u\).

$$\begin{aligned} E[Y_{i,l,{\mathbf{x}}_b}^u]&=E\left[ \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^j\right] =\sum _{j=1}^m E[E_{u,j}]\cdot E[Z_{i,l,{\mathbf{x}}_b}^j]=0,\\ E[Y_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u'}]&=E\left[ \left( \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^{j}\right) \cdot \left( \sum _{k=1}^m E_{u',k}\cdot Z_{i,l,{\mathbf{x}}_b}^{k}\right) \right] \\&=\sum _{j=1}^m\sum _{k=1}^m E[E_{u,j}\cdot E_{u',k}]\cdot E[Z_{i,l,{\mathbf{x}}_b}^{j}\cdot Z_{i,l,{\mathbf{x}}_b}^{k}]=0 ~\text{ for } \text{ distinct } u,u',\\ Var[Y_{i,l,{\mathbf{x}}_b}^u]&=Var\left[ \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^{j}\right] \\&=E\left[ \left( \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^{j}\right) ^2\right] -E\left[ \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^{j}\right] ^2\\&=E\left[ \sum _{j=1}^m (E_{u,j})^2\cdot (Z_{i,l,{\mathbf{x}}_b}^{j})^2\right] + 2\sum _{1\le j<j'\le m}E[E_{u,j}\cdot E_{u,j'}]\cdot E[Z_{i,l,{\mathbf{x}}_b}^{j} \cdot Z_{i,l,{\mathbf{x}}_b}^{j'}] \\&=m\cdot (m\sigma ^4)\cdot \sigma '^2 = m^{2}\sigma ^4\sigma '^2,\\ E[(Y_{i,l,{\mathbf{x}}_b}^{u})^4]&=E\left[ \left( \sum _{j=1}^m E_{u,j}\cdot Z_{i,l,{\mathbf{x}}_b}^{j}\right) ^4\right] \\&\le E\left[ m^3\cdot \left( \sum _{j=1}^m (E_{u,j})^4\cdot (Z_{i,l,{\mathbf{x}}_b}^{j})^4\right) \right] \\&\le m^3 \cdot m\cdot 3\sigma '^4 \cdot (m\sigma ^4)^2 = 3m^6\sigma ^8\sigma '^4 \end{aligned}$$

We observe \( X _{{\mathbf{x}}_b}^{i,1,l}=\sum _{u=1}^{23}Y_{i,l,{\mathbf{x}}_b}^{u}\). Then,

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,1,l}]&= E\left[ \left( \sum _{u=1}^{23}Y_{i,l,{\mathbf{x}}_b}^{u}\right) ^2\right] \\&= E\left[ \sum _{u=1}^{23} (Y_{i,l,{\mathbf{x}}_b}^{u})^2 \right] = 23 m^{2}\sigma ^4\sigma '^2, \end{aligned}$$

where \(E[Y_{i,l,{\mathbf{x}}_b}^{u} \cdot Y_{i,l,{\mathbf{x}}_b}^{u'}]\) is zero for distinct \(u, u'\).

In addition, the upper bound of \(E[( X _{{\mathbf{x}}_b}^{i,1,l})^4]\) can be computed as follows:

$$\begin{aligned} E[( X _{{\mathbf{x}}_b}^{i,1,l})^4]&=E\left[ \left( \sum _{u=1}^{23} Y_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 23^3\cdot \left( \sum _{u=1}^{23} (Y_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 23^4\cdot 3m^6\sigma ^8\sigma '^4. \end{aligned}$$

Combining them, we obtain the inequality

$$\begin{aligned} \left| \dfrac{E[( X _{{\mathbf{x}}_b}^{i,1,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,1,l})^2}\right| \le 23^2\cdot 3\cdot m^2= poly(\lambda ). \end{aligned}$$

\(\square \)
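The per-entry variance \(m^2\sigma ^4\sigma '^2\) behind Lemma C.3 (the lemma's factor 23 comes from summing 23 such entries) can also be sanity-checked numerically. Standard normals again stand in for the discrete Gaussians (an approximation for this sketch only), so with \(\sigma =\sigma '=1\) the prediction is \(m^2\).

```python
import random

random.seed(3)

# Monte Carlo check of the per-entry variance m^2*sigma^4*sigma'^2 behind
# Lemma C.3: Y_u = sum_j E_{u,j}*Z_j with Z = D^2 . D^3.  With
# sigma = sigma' = 1 the prediction is m^2; the lemma's 23 factor comes
# from summing 23 independent such entries.
m, N = 3, 100_000
total = 0.0
for _ in range(N):
    E_row = [random.gauss(0, 1) for _ in range(m)]                  # one row of E^1
    D2 = [[random.gauss(0, 1) for _ in range(m)] for _ in range(m)]
    D3 = [random.gauss(0, 1) for _ in range(m)]
    Z = [sum(D2[j][k] * D3[k] for k in range(m)) for j in range(m)]
    y = sum(E_row[j] * Z[j] for j in range(m))
    total += y * y

var_hat = total / N  # the mean is 0, so E[y^2] estimates the variance
assert abs(var_hat - m * m) < 0.8
```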

Proof

(of Lemma C.4) We now consider the random variable \({\mathbf{J}}\cdot M _{i,l,{\mathbf{x}}_b}^{1}\cdot E _{i,l,{\mathbf{x}}_b}^{2}\cdot D _{i,l,{\mathbf{x}}_b}^{3}\). Let \(Y_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^2\cdot D _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th diagonal entry of the random variable \( M _{i,l,{\mathbf{x}}_b}^{1}\). By the above analysis, we can get the following equations and inequality.

$$\begin{aligned}&E[Y_{i,l,{\mathbf{x}}_b}^u] = 0, \quad E[Y_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u'}]=0, \quad Var(Y_{i,l,{\mathbf{x}}_b}^u)=m\sigma ^2\sigma '^2,\\&E[(Y_{i,l,{\mathbf{x}}_b}^{u})^4]\le m^2\cdot 3\cdot (m\sigma ^2\sigma '^2)^2 = 3m^4\sigma ^4\sigma '^4. \end{aligned}$$

\(M_{i,l,{\mathbf{x}}_b}^u\) is the u-th element of \((-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert \mathbf{0}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11}))\).

We observe the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,2,l}]=E\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^u\right]= & {} \sum _{u=1}^{23} E[M_{i,l,{\mathbf{x}}_b}^u]\cdot E[Y_{i,l,{\mathbf{x}}_b}^u]=0,\\ E[(M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot Y_{i,l,{\mathbf{x}}_b}^{u'})]= & {} E[(M_{i,l,{\mathbf{x}}_b}^u\cdot M_{i,l,{\mathbf{x}}_b}^{u'})]\cdot E[(Y_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u'})]\\= & {} 0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,2,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,2,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u}\right] \\&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u}\right) ^2\right] -E\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u}\right] ^2\\&=E\left[ \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u})^2\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(Y_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(4\sigma ^2+22\sigma ^4+(x_{i,b})^2)m\sigma ^2\cdot \sigma '^2, \\&E[( X _{{\mathbf{x}}_b}^{i,2,l})^4]=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot Y_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\quad \le E\left[ 23^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (Y_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\quad \le 23^3\cdot (3\sigma ^4+18(9\sigma ^8)+3(9\sigma ^8+3\sigma ^4+6\sigma ^6)+(9\sigma ^8+(x_{i,b})^4\\&\qquad +6(x_{i,b})^2\sigma ^4))\cdot E[(Y_{i,l,{\mathbf{x}}_b}^{u})^4]\\&\quad \le 23^4\cdot (9\sigma ^8+3\sigma ^4+6\sigma ^6)\cdot (m^2\cdot 3\cdot (m\sigma ^2\cdot \sigma '^2)^2)~\text{ since } x_{i,b}\in \{-1,0,1\}. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,2,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,2,l})^2}&\le \frac{23^4\cdot (9\sigma ^8+3\sigma ^4+6\sigma ^6)\cdot (m^2\cdot 3\cdot (m\sigma ^2\cdot \sigma '^2)^2)}{((4\sigma ^2+22\sigma ^4)m\sigma ^2\cdot \sigma '^2)^2}\\&\le \frac{23^4}{22} \cdot m^2 \cdot 3. \end{aligned}$$

\(\square \)

Proof

(of Lemma C.5) For this lemma only, we give the proof in seven cases depending on l.

Case 1: \(l = 1\). We now consider the random variable \({\mathbf{J}}\cdot M _{i,l,{\mathbf{x}}_b}^{1}\cdot M _{i,l,{\mathbf{x}}_b}^{2}\cdot E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th diagonal entry of the random variable \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned} M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\ M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b}) ~\text{ where } {\mathbf{k}}_{i,l,b}=({\mathbf{0}}_9,-r_{i,1,b}^2,{\mathbf{0}}_{12}). \end{aligned}$$

Therefore, \(M_{i,l,{\mathbf{x}}_b}^u\) is the u-th element of \((r_{1,i,l,b}'r_{i,b,1}^2\cdot {\mathbf{u}}_{i,l,b}^{(9)},0,0,-r_{1,i,l,b}'r_{i,b,1}^2\cdot {\mathbf{u}}_{i,l,b}^{(3)}+r_{i,b,1}^1 r_{i,b,1}^2,0,\ldots ,0)\).

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(\sigma ^6+2\sigma ^4)\cdot \sigma '^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \le E\left[ 2^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 2^3\cdot (27\sigma ^{12}+(27\sigma ^{12}+9\sigma ^8+6\sigma ^{10}))\cdot 3\sigma '^4. \end{aligned}$$

In fact, all \(M_{i,l,{\mathbf{x}}_b}^u\) except those with \(u=1\) or \(u=4\) are zero. Thus \((\sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})^4\) is at most \(2^3\cdot (\sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4)\). Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le \frac{2^3\cdot (27\sigma ^{12}+(27\sigma ^{12}+9\sigma ^8+6\sigma ^{10}))\cdot 3\sigma '^4}{((\sigma ^6+2\sigma ^4)\cdot \sigma '^2)^2}\le 2^3\cdot 207. \end{aligned}$$

Case 2: \(l = 2\). We now consider the random variable \({\mathbf{J}}\cdot M _{i,l,{\mathbf{x}}_b}^{1}\cdot M _{i,l,{\mathbf{x}}_b}^{2}\cdot E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) be the random variables of the u-th diagonal entry of the random variable \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=(x_{i,b}{\mathbf{w}}_{1,2}^{(2)},s_{i,b}^2{\mathbf{w}}_{1,2}^{(3)}\Vert \mathbf{0}_7\Vert r_{i,1,b}^2{\mathbf{v}}^{(1)},r_{i,2,b}^2\cdot (-{\mathbf{w}}_{1,2}^{(1)}+\sum _{k=1}^9 {\mathbf{w}}_{1,2}^{(k+1)}{\mathbf{w}}_{1,1}^{(k)})\Vert {\mathbf{0}}_{11}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(18\sigma ^{10}+15\sigma ^8+(3+2(x_{i,b})^2)\sigma ^6+(x_{i,b})^4\sigma ^2)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 5^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 5^3\cdot ( 4806\sigma ^{20} + 702\sigma ^{18}+1611\sigma ^{16}+(72+18(x_{i,b})^2)\sigma ^{14}\\&\quad +(81+54(x_{i,b})^4)\sigma ^{12}+6(x_{i,b})^6\sigma ^8+3(x_{i,b})^8\sigma ^4)\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 5^3\cdot \frac{7353\sigma ^{20}\cdot 3s^4}{(18\sigma ^{10}\cdot s^2)^2}\le 5^3\cdot 69. \end{aligned}$$
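Assuming \(x_{i,b}\in \{0,1\}\) (so every power of \(x_{i,b}\) is at most 1) and \(\sigma \ge 1\), the constant 7353 is simply the sum of the coefficients in the fourth-moment bound, and \(7353\cdot 3/18^2 \approx 68.1 \le 69\). A quick arithmetic check:

```python
# Case 2: coefficients of the fourth-moment bound, with x_{i,b} in {0,1}
coeffs = [4806, 702, 1611, 72, 18, 81, 54, 6, 3]
assert sum(coeffs) == 7353
# kurtosis ratio bound for sigma >= 1, using the leading variance term 18*sigma^10
ratio = sum(coeffs) * 3 / 18 ** 2
assert ratio <= 69
```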

Case 3: \(l = 3\). We now consider the random variable \({\mathbf{J}}\cdot  M _{i,l,{\mathbf{x}}_b}^{1}\cdot  M _{i,l,{\mathbf{x}}_b}^{2}\cdot  E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th diagonal element of the product \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=({\mathbf{0}}_9\Vert r_{i,1,b}^2{\mathbf{v}}^{(2)}, -r_{i,2,b}^2\Vert {\mathbf{0}}_{11}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(2\sigma ^8+3\sigma ^6+\sigma ^4)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 4^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 4^3\cdot (162\sigma ^{16}+12\sigma ^{14}+81\sigma ^{12}+6\sigma ^{10}+9\sigma ^8)\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 4^3\cdot \frac{276\sigma ^{16}\cdot 3s^4}{(2\sigma ^{8}\cdot s^2)^2}\le 4^3\cdot 207. \end{aligned}$$

Case 4: \(l = 4\). We now consider the random variable \({\mathbf{J}}\cdot  M _{i,l,{\mathbf{x}}_b}^{1}\cdot  M _{i,l,{\mathbf{x}}_b}^{2}\cdot  E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th diagonal element of the product \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=(x_{i,b}\Vert {\mathbf{0}}_8\Vert r_{i,1,b}^2{\mathbf{v}}^{(3)}, -r_{i,2,b}^2{\mathbf{w}}_{1,1}^{(1)}\Vert {\mathbf{0}}_{11}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(4\sigma ^8+2\sigma ^6+2(x_{i,b})^2\sigma ^4+(x_{i,b})^2)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 4^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 4^3\cdot (330\sigma ^{16}+12\sigma ^{14}+(54+12(x_{i,b})^2)\sigma ^{12}\\&\quad +18(x_{i,b})^4\sigma ^8+6(x_{i,b})^6\sigma ^4+(x_{i,b})^8)\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 4^3\cdot \frac{433\sigma ^{16}\cdot 3s^4}{(4\sigma ^{8}\cdot s^2)^2}\le 4^3\cdot 82. \end{aligned}$$
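The same coefficient accounting applies here: assuming \(x_{i,b}\in \{0,1\}\) and \(\sigma \ge 1\), the Case 4 coefficients sum to \(330+12+54+12+18+6+1=433\), and \(433\cdot 3/4^2 \approx 81.2 \le 82\); in Case 3 the displayed coefficients sum to 270, comfortably below the 276 used in its bound. A quick check:

```python
# Case 3: displayed coefficients sum to 270, below the 276 used in the bound
case3 = [162, 12, 81, 6, 9]
assert sum(case3) == 270 <= 276
# Case 4 (with x_{i,b} in {0,1}, every power of x is at most 1): sum is 433
case4 = [330, 12, 54, 12, 18, 6, 1]
assert sum(case4) == 433
# ratio bound for sigma >= 1: 433*3 / 4^2 <= 82
assert sum(case4) * 3 / 4 ** 2 <= 82
```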

Case 5: \(l = 5\). We now consider the random variable \({\mathbf{J}}\cdot  M _{i,l,{\mathbf{x}}_b}^{1}\cdot  M _{i,l,{\mathbf{x}}_b}^{2}\cdot  E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th diagonal element of the product \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=(0,s_{i,b}^2\Vert \mathbf{0}_7\Vert r_{i,1,b}^2{\mathbf{v}}^{(4)}, -r_{i,2,b}^2{\mathbf{w}}_{1,1}^{(2)}\Vert \mathbf{0}_{11}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(4\sigma ^8+4\sigma ^6+\sigma ^4)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 4^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 4^3\cdot (330\sigma ^{16}+24\sigma ^{14}+108\sigma ^{12}+6\sigma ^{10}+9\sigma ^8)\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 4^3\cdot \frac{477\sigma ^{16}\cdot 3s^4}{(4\sigma ^{8}\cdot s^2)^2}\le 4^3\cdot 90. \end{aligned}$$

Case 6: \(l = 6,\ldots ,12\). We now consider the random variable \({\mathbf{J}}\cdot  M _{i,l,{\mathbf{x}}_b}^{1}\cdot  M _{i,l,{\mathbf{x}}_b}^{2}\cdot  E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th diagonal element of the product \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=({\mathbf{0}}_9\Vert r_{i,1,b}^2{\mathbf{v}}^{(l-1)}, -r_{i,2,b}^2{\mathbf{w}}_{1,1}^{(l-3)}\Vert {\mathbf{0}}_{11}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(4\sigma ^8+2\sigma ^6)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 3^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 3^3\cdot (330\sigma ^{16}+12\sigma ^{14}+54\sigma ^{12})\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 3^3\cdot \frac{396\sigma ^{16}\cdot 3s^4}{(4\sigma ^{8}\cdot s^2)^2}\le 3^3\cdot 75. \end{aligned}$$

Case 7: \(l = 13,\ldots ,23\). We now consider the random variable \({\mathbf{J}}\cdot  M _{i,l,{\mathbf{x}}_b}^{1}\cdot  M _{i,l,{\mathbf{x}}_b}^{2}\cdot  E _{i,l,{\mathbf{x}}_b}^{3}\). Let \(E_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th element of the random vector \( E _{i,l,{\mathbf{x}}_b}^3\), and let \(M_{i,l,{\mathbf{x}}_b}^u\) denote the random variable for the u-th diagonal element of the product \( M _{i,l,{\mathbf{x}}_b}^{1} M _{i,l,{\mathbf{x}}_b}^{2}\).

$$\begin{aligned}&M _{i,l,{\mathbf{x}}_b}^{1}=\mathbf{diag }(-r_{1,i,l,b}'\Vert r_{1,i,l,b}'\cdot {\mathbf{u}}_{i,l,b}+(x_{i,b}\Vert s_{i,b}^1\Vert {\mathbf{0}}_{7}\Vert r_{i,b,1}^1\Vert r_{i,b,2}^1\Vert {\mathbf{0}}_{11})),\\&M _{i,l,{\mathbf{x}}_b}^{2}=\mathbf{diag }(\langle {\mathbf{k}}_{i,l,b},{\mathbf{u}}_{i,l,b}\rangle \Vert {\mathbf{k}}_{i,l,b})\\&\text{ where } {\mathbf{k}}_{i,l,b}=({\mathbf{0}}_9\Vert r_{i,1,b}^2{\mathbf{v}}^{(l-1)}\Vert {\mathbf{0}}_{12}). \end{aligned}$$

According to the above computation, we get the following equations.

$$\begin{aligned} E[ X _{{\mathbf{x}}_b}^{i,3,l}]=0,\quad E[(M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u})\cdot (M_{i,l,{\mathbf{x}}_b}^{u'}\cdot E_{i,l,{\mathbf{x}}_b}^{u'})]=0 ~\text{ for } \text{ distinct } u,u'. \end{aligned}$$

Then we can compute the variance and kurtosis of \( X _{{\mathbf{x}}_b}^{i,3,l}\).

$$\begin{aligned} Var[ X _{{\mathbf{x}}_b}^{i,3,l}]&=Var\left[ \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right] =\sum _{u=1}^{23} E[(M_{i,l,{\mathbf{x}}_b}^u)^2]\cdot E[(E_{i,l,{\mathbf{x}}_b}^{u})^2]\\&=(2\sigma ^8+\sigma ^6)\cdot s^2, \\ E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]&=E\left[ \left( \sum _{u=1}^{23} M_{i,l,{\mathbf{x}}_b}^u\cdot E_{i,l,{\mathbf{x}}_b}^{u}\right) ^4\right] \\&\le E\left[ 2^3\cdot \left( \sum _{u=1}^{23} (M_{i,l,{\mathbf{x}}_b}^u)^4\cdot (E_{i,l,{\mathbf{x}}_b}^{u})^4\right) \right] \\&\le 2^3\cdot (162\sigma ^{16}+6\sigma ^{14}+27\sigma ^{12})\cdot 3s^4. \end{aligned}$$

Moreover, it holds that

$$\begin{aligned} \frac{E[( X _{{\mathbf{x}}_b}^{i,3,l})^4]}{Var( X _{{\mathbf{x}}_b}^{i,3,l})^2}&\le 2^3\cdot \frac{195\sigma ^{16}\cdot 3s^4}{(2\sigma ^{8}\cdot s^2)^2}\le 2^3\cdot 147. \end{aligned}$$
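As a sanity check on the constant 147 (assuming \(\sigma \ge 1\)): the coefficients in the fourth-moment bound above sum to \(162+6+27=195\), and \(195\cdot 3/2^2 = 146.25 \le 147\).

```python
# Case 7: coefficients from the fourth-moment bound sum to 162 + 6 + 27 = 195
coeffs = [162, 6, 27]
assert sum(coeffs) == 195
# ratio bound for sigma >= 1: 195*3 / 2^2 = 146.25 <= 147
assert sum(coeffs) * 3 / 2 ** 2 <= 147
```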

\(\square \)

We omit the proof of Lemma C.6, since it follows easily from the same method.


Cite this article

Cho, W., Kim, J. & Lee, C. (In)security of concrete instantiation of Lin17’s functional encryption scheme from noisy multilinear maps. Des. Codes Cryptogr. 89, 973–1016 (2021). https://doi.org/10.1007/s10623-021-00854-y

