
1 Introduction

Attribute-Based Encryption (ABE) was introduced by Sahai and Waters [40] in order to realize the vision of fine-grained access control to encrypted data. Using ABE, a user can encrypt a message \(\mu \) with respect to a public attribute-vector \(\mathbf {{x}}\) to obtain a ciphertext \(\mathsf {ct}_\mathbf {{x}}\). Anyone holding a secret key \( \mathsf {sk}_P\), associated with an access policy P, can decrypt the message \(\mu \) if \(P(\mathbf {{x}})=1\). Moreover, the security notion guarantees that no collusion of adversaries holding secret keys \( \mathsf {sk}_{P_1}, \ldots , \mathsf {sk}_{P_t}\) can learn anything about the message \(\mu \) if none of the individual keys allows decryption on its own. Until recently, candidate constructions of ABE were limited to restricted classes of access policies that test for equality (IBE), boolean formulas and inner products [1, 2, 8, 12, 14, 15, 30–32, 41].

In recent breakthroughs, Gorbunov, Vaikuntanathan and Wee [26] and Garg, Gentry, Halevi, Sahai and Waters [20] constructed ABE schemes for arbitrary boolean predicates. The GVW construction is based on the standard Learning With Errors (LWE) problem with sub-exponential approximation factors, whereas GGHSW relies on the hardness of (currently) stronger assumptions over existing multilinear map candidates [16, 18, 21]. In both of these ABE schemes, however, the size of the secret keys has a multiplicative dependence on the size of the predicate: \(|P| \cdot \mathrm{poly}(\lambda ,d)\) (where d is the depth of the circuit representation of the predicate). In a subsequent work, Boneh et al. [10] showed how to construct ABE for arithmetic predicates with short secret keys, of size \(|P| + \mathrm{poly}(\lambda ,d)\), also assuming hardness of LWE with sub-exponential approximation factors. However, in [26], the authors also presented an additional construction for a family of branching programs under a milder and quantitatively better assumption: hardness of LWE with polynomial approximation factors. Basing the security on LWE with polynomial, as opposed to sub-exponential, approximation factors has two main advantages. First, the resulting construction relies on a much milder LWE assumption. Moreover, the resulting instantiation has better parameters, in particular a smaller modulus q, leading directly to practical efficiency improvements.

In this work, we focus on constructing an ABE scheme under milder security assumptions and better performance guarantees. We concentrate on ABE for a family of branching programs which is sufficient for most existing applications such as medical and multimedia data sharing [5, 33, 36].

First, we summarize the two most efficient results based on the learning with errors problem, translated to the setting of branching programs (via the standard Barrington’s theorem [7]). Let L be the length of a branching program P and let \(\lambda \) denote the security parameter. Then,

  • [26]: There exists an ABE scheme for length-L branching programs with large secret keys based on the security of LWE with polynomial approximation factors. In particular, the instantiation has \(| \mathsf {sk}_P| = L \cdot \mathrm{poly}(\lambda )\) and \(q = \mathrm{poly}( L, \lambda )\).

  • [10]: There exists an ABE scheme for length-L branching programs with small secret keys based on the security of LWE with quasi-polynomial approximation factors. In particular, \(| \mathsf {sk}_P| = L + \mathrm{poly}(\lambda , \log L)\), \(q = \mathrm{poly}( \lambda )^{\log L}\).

To advance the state of the art for both theoretical and practical reasons, the natural question that arises is whether we can obtain the best of both worlds and:

Construct an ABE for branching programs with small secret keys based on the security of LWE with polynomial approximation factors?

1.1 Our Results

We present a new efficient construction of ABE for branching programs from a mild LWE assumption. Our result can be summarized in the following theorem.

Theorem 1

(Informal). There exists a selectively-secure Attribute-Based Encryption scheme for the family of length-L branching programs with small secret keys, based on the security of LWE with polynomial approximation factors. More formally, the size of the secret key \( \mathsf {sk}_P\) is \(L + \mathrm{poly}( \lambda , \log L)\) and the modulus is \(q = \mathrm{poly}( L, \lambda )\), where \(\lambda \) is the security parameter.

Furthermore, we can extend our construction to support arbitrary length branching programs by setting q to some small super-polynomial.

As an additional contribution, our techniques lead to a new efficient construction of homomorphic signatures for branching programs. In particular, Gorbunov et al. [28] showed how to construct homomorphic signatures for circuits based on the simulation techniques of Boneh et al. [10] in the context of ABE. Their resulting construction is secure based on the short integer solution (SIS) problem with sub-exponential approximation factors (or quasi-polynomial in the setting of branching programs). Analogously, our simulation algorithm presented in Sect. 3.4 can be used directly to construct homomorphic signatures for branching programs based on SIS with polynomial approximation factors.

Theorem 2

(Informal). There exists a homomorphic signature scheme for the family of length-L branching programs based on the security of SIS with polynomial approximation factors.

High Level Overview. The starting point of our ABE construction is the ABE scheme for circuits with short secret keys by Boneh et al. [10]. At the heart of their construction is a fully key-homomorphic encoding scheme.

It encodes \(a\in \{0,1\}\) with respect to a public key \(\mathbf {A}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}{\mathbb Z}_q^{n \times m}\) in a “noisy” sample:

$$\begin{aligned} \psi _{\mathbf {A},a}=(\mathbf {A}+a\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\end{aligned}$$

where \(\mathbf {{s}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}{\mathbb Z}_q^n\) and \(\mathbf {G}\in {\mathbb Z}_q^{n \times m}\) are fixed across all the encodings and \(\mathbf {{e}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\chi ^m\) (for some noise distribution \(\chi \)) is chosen independently every time. The authors show that one can turn such a key-homomorphic encoding scheme, where homomorphism is satisfied over the encoded values and over the public keys simultaneously, into an attribute based encryption scheme for circuits.

Our first key observation is the asymmetric noise growth in their homomorphic multiplication over the encodings. Consider \(\psi _1,\psi _2\) to be the encodings of \(a_1,a_2\) under public keys \(\mathbf {A}_1,\mathbf {A}_2\). To achieve multiplicative homomorphism, their first step is to achieve homomorphism over \(a_1\) and \(a_2\) by computing

$$\begin{aligned} a_1 \cdot \psi _2 = \left( a_1 \cdot \mathbf {A}_2+(a_1a_2)\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+a_1\mathbf {{e}}_2 \end{aligned}$$
(1)

Now, since homomorphism over the public key matrices must also be satisfied in the resulting encoding independently of \(a_1, a_2\), we must replace \(a_1 \cdot \mathbf {A}_2\) in Eq. 1 with operations over \(\mathbf {A}_1, \mathbf {A}_2\) only. To do this, we can use the first encoding \(\psi _1=\left( \mathbf {A}_1+a_1\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_1\) and replace \(a_1 \cdot \mathbf {G}\) with \(a_1 \cdot \mathbf {A}_2\) as follows. First, compute \(\widetilde{\mathbf {A}}_2\in \{0,1\}^{m\times m}\) such that \(\mathbf {G}\cdot \widetilde{\mathbf {A}}_2 = \mathbf {A}_2\). (Finding such an \(\widetilde{\mathbf {A}}_2\) is possible since the “trapdoor” of \(\mathbf {G}\) is known publicly.) Then compute

$$\begin{aligned} ( \widetilde{\mathbf {A}}_2 )^{\scriptscriptstyle \mathsf {T}}\cdot \psi _1&= \widetilde{\mathbf {A}}_2^{\scriptscriptstyle \mathsf {T}}\cdot \left( (\mathbf {A}_1+a_1\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_1 \right) \nonumber \\&=\left( \mathbf {A}_1\widetilde{\mathbf {A}}_2+a_1\cdot \mathbf {G}\widetilde{\mathbf {A}}_2\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\widetilde{\mathbf {A}}_2^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}_1\nonumber \\&=\left( \mathbf {A}_1\widetilde{\mathbf {A}}_2+a_1\cdot \mathbf {A}_2\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}'_1 \end{aligned}$$
(2)

Subtracting Eq. 2 from 1, we get \(\left( -\mathbf {A}_1\widetilde{\mathbf {A}}_2+(a_1a_2)\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}'\) which is an encoding of \(a_1a_2\) under the public key \(\mathbf {A}^\times :=-\mathbf {A}_1\widetilde{\mathbf {A}}_2\). Thus,

$$\begin{aligned} \psi _{\mathbf {A}^\times ,a^\times }:= a_1 \cdot \psi _2 - \widetilde{\mathbf {A}}_2^{\scriptscriptstyle \mathsf {T}}\cdot \psi _1 \end{aligned}$$

where \(a^\times :=a_1a_2\). Here, \(\mathbf {{e}}'\) remains small enough because \(\widetilde{\mathbf {A}}_2\) has small (binary) entries. We observe that the new noise \(\mathbf {{e}}' = a_1 \mathbf {{e}}_2 - \widetilde{\mathbf {A}}_2 \mathbf {{e}}_1\) grows asymmetrically. That is, the \(\mathrm{poly}(n)\) multiplicative increase always occurs with respect to the first noise \(\mathbf {{e}}_1\). Naïvely evaluating k levels of multiplicative homomorphism results in a noise of magnitude \(\mathrm{poly}(n)^k\). Can we manage the noise growth by some careful design of the order of homomorphic operations?
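To make this question concrete, here is a small back-of-the-envelope sketch (our own illustration, not part of the construction): we track only worst-case \(\ell _\infty \) noise bounds through the rule \(\mathbf {{e}}' = a_1 \mathbf {{e}}_2 - \widetilde{\mathbf {A}}_2^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}_1\), where multiplying by the binary \(m\times m\) matrix \(\widetilde{\mathbf {A}}_2\) inflates a noise bound by at most a factor m. The function names and the simplification \(|a_1|\le 1\) are ours.

```python
def mult_noise(e_left, e_right, m):
    """Worst-case noise bound after one homomorphic multiplication:
    e' = a1 * e_right - Atilde^T * e_left, with Atilde binary (m x m),
    so ||e'||_inf <= ||e_right||_inf + m * ||e_left||_inf."""
    return e_right + m * e_left

def naive_chain(k, m, fresh=1):
    """Feed the accumulated encoding in as the LEFT operand each time:
    its noise is multiplied by m at every step, giving ~ m^(k-1)."""
    e = fresh
    for _ in range(k - 1):
        e = mult_noise(e, fresh, m)
    return e

def sequential_chain(k, m, fresh=1):
    """Feed the accumulated encoding in as the RIGHT operand: only a
    fresh noise is ever multiplied by m, so the bound grows by m per step."""
    e = fresh
    for _ in range(k - 1):
        e = mult_noise(fresh, e, m)
    return e
```

For k = 4 and m = 10 the two bounds are 1111 versus 31: exponential against linear in the number of multiplications, which is exactly the gap that a careful evaluation order can exploit.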

This brings us to our second idea: design evaluation algorithms for a “sequential” representation of a matrix branching program that carefully manage the noise growth, following the Brakerski-Vaikuntanathan paradigm from the context of fully-homomorphic encryption [13].

First, to generate a ciphertext with respect to an attribute vector \(\mathbf {{x}}= (x_1, \ldots , x_\ell )\) we publish encodings of its individual bits:

$$\begin{aligned} \psi _i \approx \left( \mathbf {A}_i+x_i\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}\end{aligned}$$

We also publish an encoding of the initial start state \(v_0\):

$$\begin{aligned} \psi ^v_{0} \approx \left( \mathbf {A}^v_0+v_0\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}\end{aligned}$$

The message \(\mu \) is encrypted under encoding \(\mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ e\) (where \(\mathbf {u}\) is treated as the public key) and during decryption the user should obtain a value \(\approx \mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}\) from \(\{\psi _i\}_{i\in [\ell ]},\psi _0^v\) iff \(P(\mathbf {{x}})=1\).

Now, suppose the user wants to evaluate a branching program P on the attribute vector \(\mathbf {{x}}\). Informally, the evaluation of a branching program proceeds in steps, updating a special state vector. The next state is determined by the current state and one of the input bits (pertaining to this step). Viewing the sequential representation of the branching program allows us to update the state using only a single multiplication and a few additions. Suppose \(v_t\) represents the state of the program P at step t and the user holds its corresponding encoding \(\psi ^v_t\) (under some public key). To obtain \(\psi ^v_{t+1}\) the user needs to use \(\psi _i\) (for some i determined by the program). Leveraging the asymmetry, the state can be updated by multiplying \(\psi _i\) with the matrix \(\widetilde{\mathbf {A}}^v_t\) corresponding to the encoding \(\psi ^v_t\) (and then following a few simple addition steps). Since \(\psi _i\) always carries “fresh” noise (which is never increased as we progress through the evaluation of the program), the noise in \(\psi ^v_{t+1}\) increases over the noise in \(\psi ^v_t\) only by an additive term that is independent of the step number! As a result, after k steps of the evaluation procedure the noise is bounded by \(k\cdot \mathrm{poly}(n)\). Eventually, if \(P(\mathbf {{x}})=1\), the user learns \(\approx \mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}\) and is able to recover \(\mu \) (we refer the reader to the main construction for details).

The main challenge in “riding on asymmetry” for attribute-based encryption is the requirement of satisfying parallel homomorphic properties: we must design separate homomorphic algorithms for operating over the public key matrices and over the encodings that allow for correct decryption. First, we define and design an algorithm for public-key homomorphic operations that works specifically for branching programs. Second, we design a homomorphic algorithm over the encodings that preserves the homomorphism over the public key matrices and the bits, and carefully manages the noise growth as illustrated above. To prove security, we need to argue that no collusion of users is able to learn anything about the message given many secret keys for programs that do not allow for decryption individually. We design a separate public-key simulation algorithm to accomplish this.

1.2 Applications

We summarize some of the known applications of attribute-based encryption. Parno, Raykova and Vaikuntanathan [37] showed how to use ABE to design a (publicly) verifiable two-message delegation scheme with a pre-processing phase. Goldwasser, Kalai, Popa, Vaikuntanathan and Zeldovich [24] showed how to use ABE as a critical building block to construct succinct one-query functional encryption, reusable garbled circuits, token-based obfuscation and homomorphic encryption for Turing machines. Our efficiency improvements for branching programs carry over to all of these applications.

1.3 Other Related Work

A number of works optimized attribute-based encryption for boolean formulas: Attrapadung et al. [6] and Emura et al. [17] designed ABE schemes with constant-size ciphertexts from bilinear assumptions. For arbitrary circuits, Boneh et al. [10] also showed an ABE with constant-size ciphertexts from multilinear assumptions. ABE can also be viewed as a special case of functional encryption [9]. Gorbunov et al. [25] showed functional encryption for arbitrary functions in a bounded-collusion model from any standard public-key encryption scheme. Garg et al. [19] presented a functional encryption scheme for unbounded collusions for arbitrary functions, under a weaker security model, from multilinear assumptions. More recently, Gorbunov et al. exploited the asymmetry of the noise growth in [10] in a different context, to design a predicate encryption scheme based on standard LWE [27].

1.4 Organization

In Sect. 2 we present the lattice preliminaries, definitions for ABE and branching programs. In Sect. 3 we present our main evaluation algorithms and build our ABE scheme in Sect. 4. We present a concrete instantiation of the parameters in Sect. 5. Finally, we outline the extensions in Sect. 6.

2 Preliminaries

Notation. Let PPT denote probabilistic polynomial-time. For any integer \(q \ge 2\), we let \(\mathbb {Z}_q\) denote the ring of integers modulo q and we represent \(\mathbb {Z}_q\) as integers in \((-q/2,q/2]\). We let \(\mathbb {Z}_q^{n \times m}\) denote the set of \(n \times m\) matrices with entries in \(\mathbb {Z}_q\). We use bold capital letters (e.g. \(\mathbf {A}\)) to denote matrices, bold lowercase letters (e.g. \(\mathbf {x}\)) to denote vectors. The notation \(\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\) denotes the transpose of the matrix \(\mathbf {A}\). If \(\mathbf {A}_1\) is an \(n \times m\) matrix and \(\mathbf {A}_2\) is an \(n \times m'\) matrix, then \([\mathbf {A}_1 \Vert \mathbf {A}_2]\) denotes the \(n \times (m + m')\) matrix formed by concatenating \(\mathbf {A}_1\) and \(\mathbf {A}_2\). A similar notation applies to vectors. When doing matrix-vector multiplication we always view vectors as column vectors. Also, [n] denotes the set of numbers \(1,\ldots ,n\).
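The centered representation of \(\mathbb {Z}_q\) can be captured by a small helper (a hypothetical utility of our own, not from the paper); this convention is what makes the magnitude of a noise term well defined later on:

```python
def centered(x, q):
    """Centered representative of x modulo q, in the interval (-q/2, q/2]."""
    r = x % q                      # canonical representative in [0, q)
    return r - q if r > q // 2 else r
```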

2.1 Lattice Preliminaries

Learning with Errors (LWE) Assumption. The LWE problem was introduced by Regev [39], who showed that solving it on the average is as hard as (quantumly) solving several standard lattice problems in the worst case.

Definition 1

( \(\mathsf {LWE}\) ). For an integer \(q = q(n) \ge 2\) and an error distribution \(\chi =\chi (n)\) over \({\mathbb Z}_q\), the learning with errors problem \(\mathsf {dLWE}_{n,m,q,\chi }\) is to distinguish between the following pairs of distributions:

$$\begin{aligned} \{ \mathbf {A}, \mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{x}}\} \quad \text{ and } \quad \{ \mathbf {A}, \mathbf {u}\} \end{aligned}$$

where \(\mathbf {A}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\mathbb {Z}_q^{n \times m}, \mathbf {{s}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\mathbb {Z}_q^n, \mathbf {{x}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\chi ^m, \mathbf {u}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\mathbb {Z}_q^m\).

Connection to Lattices. Let \(B = B(n) \in \mathbb {N}\). A family of distributions \(\chi = \{\chi _n\}_{n\in \mathbb {N}}\) is called B-bounded if

$$\begin{aligned} \Pr [ \chi \in \{ -B,\ldots ,B-1,B\} ] = 1. \end{aligned}$$

There are known quantum [39] and classical [38] reductions between \(\mathsf {dLWE}_{n,m,q,\chi }\) and approximating short vector problems in lattices in the worst case, where \(\chi \) is a B-bounded (truncated) discretized Gaussian for some appropriate B. The state-of-the-art algorithms for these lattice problems run in time nearly exponential in the dimension n [4, 35]; more generally, we can get a \(2^k\)-approximation in time \(2^{\tilde{O}(n/k)}\). Throughout this paper, the parameter \(m = \mathrm{poly}(n)\), in which case we will shorten the notation slightly to \(\mathsf {LWE}_{n,q,\chi }\).
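As a toy illustration of the two distributions in Definition 1 (with deliberately tiny, insecure parameters; the sampler and its naming are ours, and we use a uniform B-bounded error in place of the truncated discrete Gaussian):

```python
import random

def lwe_sample(n, m, q, B, s):
    """Return (A, A^T s + e mod q) with A uniform in Z_q^{n x m} and e
    drawn from a B-bounded distribution (uniform on [-B, B] for simplicity)."""
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    e = [random.randint(-B, B) for _ in range(m)]
    b = [(sum(A[i][j] * s[i] for i in range(n)) + e[j]) % q for j in range(m)]
    return A, b

def uniform_sample(n, m, q):
    """The 'random' side of the dLWE distinguishing game: (A, u)."""
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    u = [random.randrange(q) for _ in range(m)]
    return A, u
```

Nothing here is meant to be secure; in the actual assumption n, q and the error distribution are set so that the lattice attacks above take (nearly) exponential time.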

Trapdoors for Lattices and LWE. Gaussian Distributions. Let \(D_{{\mathbb {Z}^m,\sigma }}\) be the truncated discrete Gaussian distribution over \(\mathbb {Z}^m\) with parameter \(\sigma \); that is, we replace the output by \(\mathbf {0}\) whenever the \(||\,\cdot \,||_{\infty }\) norm exceeds \(\sqrt{m} \cdot \sigma \). Note that \(D_{{\mathbb {Z}^m,\sigma }}\) is \(\sqrt{m} \cdot \sigma \)-bounded.

Lemma 1

(Lattice Trapdoors [3, 22, 34]). There is an efficient randomized algorithm \(\mathsf {TrapSamp}(1^n,1^m,q)\) that, given any integers \(n \ge 1\), \(q \ge 2\), and sufficiently large \(m = \varOmega (n\log q)\), outputs a parity check matrix \(\mathbf {A}\in {\mathbb Z}_q^{n\times m}\) and a ‘trapdoor’ matrix \(\mathbf {T}_\mathbf {A}\in \mathbb {Z}^{m \times m}\) such that the distribution of \(\mathbf {A}\) is \(\mathrm{negl}(n)\)-close to uniform.

Moreover, there is an efficient algorithm \(\mathsf {SampleD}\) that with overwhelming probability over all random choices, does the following: For any \(\mathbf {u}\in {\mathbb Z}_q^n\), and large enough \(s = \varOmega (\sqrt{n\log q})\), the randomized algorithm \(\mathsf {SampleD}(\mathbf {A},\mathbf {T}_\mathbf {A},\mathbf {u},s)\) outputs a vector \(\mathbf {r}\in \mathbb {Z}^m\) with norm \(||\mathbf {r}||_{\infty } \le ||\mathbf {r}||_2 \le s\sqrt{n}\) (with probability 1). Furthermore, the following distributions of the tuple \((\mathbf {A}, \mathbf {T}_\mathbf {A}, \mathbf {U}, \mathbf {R})\) are within \(\mathrm{negl}(n)\) statistical distance of each other for any polynomial \(k \in \mathbb {N}\):

  • \((\mathbf {A}, \mathbf {T}_\mathbf {A}) \leftarrow \mathsf {TrapSamp}(1^n,1^m,q)\); \(\mathbf {U}\leftarrow {\mathbb Z}_q^{n\times k}\); \(\mathbf {R}\leftarrow \mathsf {SampleD}(\mathbf {A},\mathbf {T}_\mathbf {A},\mathbf {U},s)\).

  • \((\mathbf {A}, \mathbf {T}_\mathbf {A}) \leftarrow \mathsf {TrapSamp}(1^n,1^m,q)\); \(\mathbf {R}\leftarrow (D_{\mathbb {Z}^m,s})^k\); \(\mathbf {U}:= \mathbf {A}\mathbf {R}\pmod {q}\).

Sampling Algorithms. We will use the following algorithms to sample short vectors from specific lattices. Looking ahead, the algorithm \(\mathsf {SampleLeft}\) [1, 14] will be used to sample keys in the real system, while the algorithm \(\mathsf {SampleRight}\) [1] will be used to sample keys in the simulation.

Algorithm \(\mathsf {SampleLeft}(\mathbf {A}, \mathbf {B}, \mathbf {T}_{\mathbf {A}}, \mathbf {u}, \alpha )\):

  • Inputs: a full rank matrix \(\mathbf {A}\) in \(\mathbb {Z}_q^{n \times m}\), a “short” basis \(\mathbf {T}_{A}\) of \(\varLambda _q^{\perp }({\mathbf {A}})\), a matrix \(\mathbf {B}\) in \(\mathbb {Z}_q^{n \times m_1}\), a vector \(\mathbf {u}\in \mathbb {Z}_q^n\), and a Gaussian parameter \(\alpha \).

  • Output: Let \(\mathbf {F}:=(\mathbf {A}\ \Vert \ \mathbf {B})\). The algorithm outputs a vector \(\mathbf {{e}}\in \mathbb {Z}^{m+m_1}\) in the coset \(\varLambda _{{\mathbf {F}}+{\mathbf {u}}}\), i.e., satisfying \(\mathbf {F}\cdot \mathbf {{e}}=\mathbf {u}\pmod {q}\).

Theorem 3

([1, Theorem 17], [14, Lemma 3.2]). Let \(q>2,\ m > n\) and \(\alpha > \Vert {\mathbf {T}_\mathbf {A}}\Vert _{\mathsf {GS}} \cdot \omega (\sqrt{\log (m+m_1)})\). Then \(\mathsf {SampleLeft}(\mathbf {A}, \mathbf {B}, \mathbf {T}_\mathbf {A}, \mathbf {u}, \alpha )\) taking inputs as in (3) outputs a vector \(\mathbf {{e}}\in \mathbb {Z}^{m+m_1}\) distributed statistically close to \(D_{{\varLambda _{{\mathbf {F}}+{\mathbf {u}}},\alpha }}\), where \(\mathbf {F}:=(\mathbf {A}\ \Vert \ \mathbf {B})\).

Here \(\Vert {\mathbf {T}}\Vert _{\mathsf {GS}}\) refers to the norm of the Gram-Schmidt orthogonalisation of \(\mathbf {T}\). We refer the reader to [1] for more details.

Algorithm \(\mathsf {SampleRight}(\mathbf {A}, \mathbf {G}, \mathbf {R}, \mathbf {T}_{\mathbf {G}}, \mathbf {u}, \alpha )\):

  • Inputs: matrices \(\mathbf {A}\) in \(\mathbb {Z}_q^{n \times k}\) and \(\mathbf {R}\) in \(\mathbb {Z}^{k \times m}\), a full rank matrix \(\mathbf {G}\) in \(\mathbb {Z}_q^{n \times m}\), a “short” basis \(\mathbf {T}_\mathbf {G}\) of \(\varLambda _q^{\perp }({\mathbf {G}})\), a vector \(\mathbf {u}\in \mathbb {Z}_q^n\), and a Gaussian parameter \(\alpha \).

  • Output: Let \(\mathbf {F}:=(\mathbf {A}\ \Vert \ \mathbf {A}\mathbf {R}+ \mathbf {G})\). The algorithm outputs a vector \(\mathbf {{e}}\in \mathbb {Z}^{m+k}\) in the coset \({\varLambda _{{\mathbf {F}}+{\mathbf {u}}}}\), i.e., satisfying \(\mathbf {F}\cdot \mathbf {{e}}=\mathbf {u}\pmod {q}\).

Often the matrix \(\mathbf {R}\) given to the algorithm as input will be a random matrix in \(\{1,-1\}^{m \times m}\). Let \(S^{m}\) be the m-sphere \(\{\mathbf {x}\in \mathbb {R}^{m+1}\ :\ \left\| {\mathbf {x}} \right\| = 1\}\). We define \(s_{\scriptscriptstyle {R}}:=\left\| {\mathbf {R}} \right\| \) \( := \sup _{\mathbf {x}\in S^{m-1}} \left\| {\mathbf {R}\cdot \mathbf {x}} \right\| \).

Theorem 4

([1, Theorem 19]). Let \(q>2, m > n\) and \(\alpha > \Vert {\mathbf {T}_\mathbf {G}}\Vert _{\mathsf {GS}} \cdot s_{\scriptscriptstyle {R}}\cdot \omega (\sqrt{\log m})\). Then \(\mathsf {SampleRight}(\mathbf {A}, \mathbf {G}, \mathbf {R}, \mathbf {T}_\mathbf {G}, \mathbf {u}, \alpha )\) taking inputs as in (3) outputs a vector \(\mathbf {{e}}\in \mathbb {Z}^{m+k}\) distributed statistically close to \(D_{{\varLambda _{{\mathbf {F}}+{\mathbf {u}}},\alpha }}\), where \(\mathbf {F}:=(\mathbf {A}\ \Vert \ \mathbf {A}\mathbf {R}+ \mathbf {G})\).

Primitive Matrix. We use the primitive matrix \(\mathbf {G}\in {\mathbb Z}_q^{n \times m}\) defined in [34]. This matrix has a trapdoor \(\mathbf {T}_\mathbf {G}\) such that \(\left\| {\mathbf {T}_\mathbf {G}} \right\| _{\infty }=2\).

We also define an algorithm \(\mathsf {invG}:{\mathbb Z}_q^{n\times m}\rightarrow {\mathbb Z}_q^{m\times m}\) which deterministically derives a pre-image \(\widetilde{\mathbf {A}}\) satisfying \(\mathbf {G}\cdot \widetilde{\mathbf {A}} = \mathbf {A}\). From [34], there exists a way to get \(\widetilde{\mathbf {A}}\) such that \(\widetilde{\mathbf {A}}\in \{0,1\}^{m \times m}\).

2.2 Attribute-Based Encryption

An attribute-based encryption scheme \(\mathcal {ABE}\) [30] for a class of circuits \(\mathcal {C}\) with \(\ell \) bit inputs and message space \(\mathcal{M}\) consists of a tuple of p.p.t. algorithms \((\mathsf {Params}, \mathsf {Setup}, \mathsf {Enc}, \mathsf {KeyGen}, \mathsf {Dec})\):

  • \(\mathsf {Params}(1^\lambda ) \rightarrow \mathsf {pp}\): The parameter generation algorithm takes the security parameter \(1^\lambda \) and outputs a public parameter \( \mathsf {pp}\) which is implicitly given to all the other algorithms of the scheme.

  • \(\mathsf {Setup}(1^\ell )\rightarrow ( \mathsf {mpk}, \mathsf {msk})\): The setup algorithm gets as input the length \(\ell \) of the input index, and outputs the master public key \( \mathsf {mpk}\), and the master key \( \mathsf {msk}\).

  • \(\mathsf {Enc}( \mathsf {mpk},\mathbf {{x}},\mu )\rightarrow \mathsf {ct}_\mathbf {{x}}\): The encryption algorithm gets as input \( \mathsf {mpk}\), an index \(\mathbf {{x}} \in \{0,1\}^\ell \) and a message \(\mu \in \mathcal{M}\). It outputs a ciphertext \(\mathsf {ct}_\mathbf {{x}}\).

  • \(\mathsf {KeyGen}( \mathsf {msk},C)\rightarrow \mathsf {sk}_C\): The key generation algorithm gets as input \( \mathsf {msk}\) and a predicate specified by \(C \in \mathcal {C}\). It outputs a secret key \( \mathsf {sk}_C\).

  • \(\mathsf {Dec}(\mathsf {ct}_\mathbf {{x}}, \mathsf {sk}_C) \rightarrow \mu \): The decryption algorithm gets as input \(\mathsf {ct}_\mathbf {{x}}\) and \( \mathsf {sk}_C\), and outputs either \(\perp \) or a message \(\mu \in \mathcal{M}\).

Definition 2

(Correctness). We require that for all \((\mathbf {{x}},C)\) such that \(C(\mathbf {{x}})=1\) and for all \(\mu \in \mathcal{M}\), we have \(\Pr [\mathsf {ct}_\mathbf {{x}}\leftarrow \mathsf {Enc}( \mathsf {mpk},\mathbf {{x}},\mu ); \mathsf {Dec}(\mathsf {ct}_\mathbf {{x}}, \mathsf {sk}_C) = \mu ]= 1\) where the probability is taken over \( \mathsf {pp}\leftarrow \mathsf {Params}(1^\lambda ), ( \mathsf {mpk}, \mathsf {msk}) \leftarrow \mathsf {Setup}(1^\ell )\) and the coins of all the algorithms in the expression above.

Definition 3

(Security). For a stateful adversary \(\mathcal{A}\), we define the advantage function \(\mathsf {Adv}^{\textsc {abe}}_{\mathcal{A}}(\lambda )\) to be

$$\begin{aligned} \Pr \left[ b = b' : \begin{array}{l} \mathbf {{x}}^*\leftarrow \mathcal{A}(1^\lambda ,1^\ell );\\ \mathsf {pp}\leftarrow \mathsf {Params}(1^\lambda );\\ ( \mathsf {mpk}, \mathsf {msk}) \leftarrow \mathsf {Setup}(1^\ell );\\ (\mu _0,\mu _1) \leftarrow \mathcal{A}^{\mathsf {KeyGen}( \mathsf {msk},\cdot )}( \mathsf {mpk}),\\ |\mu _0| = |\mu _1|;\\ b \mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\{0,1\};\\ \mathsf {ct}_{\mathbf {{x}}^*}\leftarrow \mathsf {Enc}( \mathsf {mpk},\mathbf {{x}}^*,\mu _b);\\ b' \leftarrow \mathcal{A}^{\mathsf {KeyGen}( \mathsf {msk},\cdot )}(\mathsf {ct}_{\mathbf {{x}}^*}) \end{array}\right] - \frac{1}{2} \end{aligned}$$

with the restriction that every query C that \(\mathcal{A}\) makes to \(\mathsf {KeyGen}( \mathsf {msk},\cdot )\) satisfies \(C(\mathbf {{x}}^*) = 0\) (that is, \( \mathsf {sk}_C\) does not decrypt the challenge ciphertext). An attribute-based encryption scheme is selectively secure if for all PPT adversaries \(\mathcal{A}\), the advantage \(\mathsf {Adv}^{\textsc {abe}}_{\mathcal{A}}(\lambda )\) is a negligible function in \(\lambda \).

2.3 Branching Programs

We define branching programs similar to [13]. A width-w branching program \(\mathsf {BP}\) of length L with input space \(\{0,1\}^\ell \) and s states (represented by [s]) is a sequence of L tuples of the form \(\left( \mathsf {var}(t),\sigma _{t,0},\sigma _{t,1}\right) \) where

  • \(\sigma _{t,0}\) and \(\sigma _{t,1}\) are injective functions from [s] to itself.

  • \(\mathsf {var}:[L]\rightarrow [\ell ]\) is a function that associates the t-th tuple \(\sigma _{t,0},\sigma _{t,1}\) with the input bit \(x_{\mathsf {var}(t)}\).

The branching program \(\mathsf {BP}\) on input \(\mathbf {{x}}=(x_1,\ldots ,x_\ell )\) computes its output as follows. At step t, we denote the state of the computation by \(\eta _t\in [s]\). The initial state is \(\eta _0=1\). In general, \(\eta _t\) can be computed recursively as

$$\begin{aligned} \eta _t=\sigma _{t,x_{\mathsf {var}(t)}}(\eta _{t-1}) \end{aligned}$$

Finally, after L steps, the output of the computation \(\mathsf {BP}(\mathbf {{x}})=1\) if \(\eta _L=1\) and 0 otherwise.

As done in [13], we represent states with bits rather than numbers to bound the noise growth. In particular, we represent the state \(\eta _t\in [s]\) by a unit vector \(\mathbf {{v}}_t \in \{0,1\}^s\). The idea is that \(\mathbf {{v}}_t[i]=1\) if and only if \(\sigma _{t,x_{\mathsf {var}(t)}}(\eta _{t-1})=i\). Note that we can also write the above expression as \(\mathbf {{v}}_t[i]=1\) if and only if either:

  • \(\mathbf {{v}}_{t-1}\left[ \sigma ^{-1}_{t,0}(i)\right] =1\) and \(x_{\mathsf {var}(t)}=0\)

  • \(\mathbf {{v}}_{t-1}\left[ \sigma ^{-1}_{t,1}(i)\right] =1\) and \(x_{\mathsf {var}(t)}=1\)

This latter form will be useful for us since it can be captured by the following formula. For \(t\in [L]\) and \(i\in [s]\),

$$\begin{aligned} \mathbf {{v}}_t[i]&:=\mathbf {{v}}_{t-1}\left[ \sigma ^{-1}_{t,0}(i)\right] \cdot (1-x_{\mathsf {var}(t)}) + \mathbf {{v}}_{t-1}\left[ \sigma ^{-1}_{t,1}(i)\right] \cdot x_{\mathsf {var}(t)} \\&\, =\mathbf {{v}}_{t-1}\left[ \gamma _{t,i,0}\right] \cdot (1-x_{\mathsf {var}(t)}) + \mathbf {{v}}_{t-1}\left[ \gamma _{t,i,1}\right] \cdot x_{\mathsf {var}(t)} \end{aligned}$$

where \(\gamma _{t,i,0}:=\sigma ^{-1}_{t,0}(i)\) and \(\gamma _{t,i,1}:=\sigma ^{-1}_{t,1}(i)\) can be publicly computed from the description of the branching program. Hence, \(\left\{ \mathsf {var}(t),\{\gamma _{t,i,0},\gamma _{t,i,1}\}_{i\in [s]}\right\} _{t\in [L]}\) is also a valid representation of the branching program \(\mathsf {BP}\).

For clarity of presentation, we will deal with width-5 permutation branching programs, which are known to be equivalent to the circuit class \(\mathcal {NC}^1\) [7]. Hence, we have \(s=w=5\) and the functions \(\sigma _{t,0},\sigma _{t,1}\) are permutations on [5].

3 Our Evaluation Algorithms

In this section we describe the key evaluation and encoding (ciphertext) evaluation algorithms that will be used in our ABE construction. The algorithms are carefully designed to manage the noise growth in the LWE encodings and to preserve parallel homomorphism over the public keys and the encoded values.

3.1 Basic Homomorphic Operations

We first describe basic homomorphic addition and multiplication algorithms over the public keys and encodings (ciphertexts) based on the techniques developed by Boneh et al. [10].

Definition 4

(LWE Encoding). For any matrix \(\mathbf {A}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}{\mathbb Z}_q^{n \times m}\), we define an LWE encoding of a bit \(a\in \{0,1\}\) with respect to a (public) key \(\mathbf {A}\) and randomness \(\mathbf {{s}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}{\mathbb Z}_q^n\) as

$$\begin{aligned} \psi _{\mathbf {A},\mathbf {{s}}, a} = (\mathbf {A}+a\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\in {\mathbb Z}_q^m \end{aligned}$$

for error vector \(\mathbf {{e}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\chi ^m\) and an (extended) primitive matrix \(\mathbf {G}\in {\mathbb Z}_q^{n \times m}\).

In our construction, however, all encodings will be under the same LWE secret \(\mathbf {{s}}\), hence for simplicity we will simply refer to such an encoding as \(\psi _{\mathbf {A}, a}\).

Definition 5

(Noise Function). For every \(\mathbf {A}\in {\mathbb Z}_q^{n \times m}, \mathbf {{s}}\in {\mathbb Z}_q^n\) and encoding \(\psi _{\mathbf {A}, a} \in {\mathbb Z}_q^m\) of a bit \(a \in \{0,1\}\) we define a noise function as

$$\begin{aligned} \mathsf {Noise}_\mathbf {{s}}(\psi _{\mathbf {A},a}) :=|| \psi _{\mathbf {A}, a} - ( \mathbf {A}+ a \cdot \mathbf {G}) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}\mod q ||_\infty \end{aligned}$$

Looking ahead, in Lemma 8 we show that if the noise obtained after homomorphic evaluation is at most q / 4, then our ABE scheme decrypts the message correctly. We now define the basic additive and multiplicative operations on encodings of this form, following [10]. In their context, the matrix \(\mathbf {A}\) is referred to as the “public key” and \(\psi _{\mathbf {A},a}\) as a ciphertext.

Homomorphic Addition This algorithm takes as input two encodings \(\psi _{\mathbf {A}, a},\psi _{\mathbf {A}', a'}\) and outputs an encoding of their sum. Let \(\mathbf {A}^+ = \mathbf {A}+ \mathbf {A}'\) and \(a^+ = a + a'\).

$$\begin{aligned} \mathsf {Add}_\mathsf {en}(\psi _{\mathbf {A}, a},\psi _{\mathbf {A}', a'}): \text{ Output } \psi _{\mathbf {A}^+, a^+}:= \psi _{\mathbf {A}, a} + \psi _{\mathbf {A}', a'}\mod q \end{aligned}$$

Lemma 2

(Noise Growth in \(\mathsf {Add}_\mathsf {en}\) ). For any two valid encodings \(\psi _{\mathbf {A},a}, \psi _{\mathbf {A}',a'} \in {\mathbb Z}_q^m\), let \(\mathbf {A}^+ = \mathbf {A}+ \mathbf {A}'\) and \(a^+ = a + a'\) and \(\psi _{\mathbf {A}^+, a^+}=\mathsf {Add}_\mathsf {en}(\psi _{\mathbf {A}, a},\psi _{\mathbf {A}', a'})\), then we have

$$\begin{aligned} \mathsf {Noise}_{\mathbf {A}^+, a^+}( \psi _{\mathbf {A}^+, a^+} ) \le \mathsf {Noise}_{\mathbf {A}, a}( \psi _{\mathbf {A},a} ) + \mathsf {Noise}_{\mathbf {A}', a'}(\psi _{\mathbf {A}', a'}) \end{aligned}$$

Proof

Given two encodings we have,

$$\begin{aligned} \psi _{\mathbf {A}^+, a^+}= & {} \psi _{\mathbf {A}, a}+\psi _{\mathbf {A}', a'}\\= & {} \left( (\mathbf {A}+a\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\right) + \left( (\mathbf {A}'+a'\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}' \right) \\= & {} \left( (\mathbf {A}+\mathbf {A}')+(a+a')\cdot \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+(\mathbf {{e}}+\mathbf {{e}}')\\= & {} (\mathbf {A}^+ + a^+ \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ (\mathbf {{e}}+\mathbf {{e}}') \end{aligned}$$

Thus, from the definition of the noise function, it follows that

$$\begin{aligned} \mathsf {Noise}_{\mathbf {A}^+, a^+}( \psi _{\mathbf {A},a} + \psi _{\mathbf {A}',a'}) \le \mathsf {Noise}_{\mathbf {A}, a}( \psi _{\mathbf {A},a} ) + \mathsf {Noise}_{\mathbf {A}', a'}(\psi _{\mathbf {A}', a'}) \end{aligned}$$

Homomorphic Multiplication This algorithm takes as input two encodings \(\psi :=\psi _{\mathbf {A}, a}=(\mathbf {A}+a\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\) and \(\psi ' :=\psi _{\mathbf {A}', a'}=(\mathbf {A}'+a'\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}'\) and outputs an encoding \(\psi _{\mathbf {A}^\times , a^\times }\), where \(\mathbf {A}^\times = - \mathbf {A}\widetilde{\mathbf {A}'}\) and \(a^\times = a a'\), as follows:

$$\begin{aligned} \mathsf {Multiply}_\mathsf {en}(\psi _{\mathbf {A}, a},\psi _{\mathbf {A}', a'}): \text{ Output } \psi _{\mathbf {A}^\times , a^\times }:= - \widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\cdot \psi + a \cdot \psi '. \end{aligned}$$

Note that this process requires knowledge of the attribute a in the clear.

Lemma 3

(Noise Growth in \(\mathsf {Multiply}_\mathsf {en}\) ). For any two valid encodings \(\psi _{\mathbf {A},a}, \psi _{\mathbf {A}',a'} \in {\mathbb Z}_q^m\), let \(\mathbf {A}^\times = -\mathbf {A}\widetilde{\mathbf {A}'}\) and \(a^\times = a a'\) and \(\psi _{\mathbf {A}^\times , a^\times }=\mathsf {Multiply}_\mathsf {en}(\psi _{\mathbf {A}, a},\psi _{\mathbf {A}', a'})\) then we have

$$\begin{aligned} \mathsf {Noise}_{\mathbf {A}^\times , a^\times }( \psi _{\mathbf {A}^\times ,a^\times }) \le m \cdot \mathsf {Noise}_{\mathbf {A}, a}( \psi _{\mathbf {A},a} ) + a \cdot \mathsf {Noise}_{\mathbf {A}', a'}(\psi _{\mathbf {A}', a'}) \end{aligned}$$

Proof

Given two valid encodings, we have

$$\begin{aligned} \psi _{\mathbf {A}^\times , a^\times }= & {} -\widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\cdot \psi + a \cdot \psi '\\= & {} -\widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\big ((\mathbf {A}+a\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\big ) + a \cdot \big ((\mathbf {A}'+a'\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}' \big )\\= & {} \bigg ((-\mathbf {A}\widetilde{\mathbf {A}'}-a\cdot \mathbf {A}')^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}- \widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\bigg ) + \bigg ((a \cdot \mathbf {A}'+a a'\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ a \cdot \mathbf {{e}}' \bigg )\\= & {} \big ((\underbrace{-\mathbf {A}\widetilde{\mathbf {A}'}}_{\mathbf {A}^\times }) + \underbrace{a a'}_{a^\times }\cdot \mathbf {G}\big )^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \big (\underbrace{- \widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}+ a \cdot \mathbf {{e}}'}_{\mathbf {{e}}^\times }\big ) \end{aligned}$$

By the definition of the noise function, it suffices to bound the noise term \(\mathbf {{e}}^\times \). Hence,

$$\begin{aligned} \left\| {\mathbf {{e}}^\times } \right\| _{\infty } \le \left\| {\widetilde{\mathbf {A}'}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}} \right\| _{\infty } + a \cdot \left\| {\mathbf {{e}}'} \right\| _{\infty }\le m \cdot \left\| {\mathbf {{e}}} \right\| _{\infty } + a \cdot \left\| {\mathbf {{e}}'} \right\| _{\infty } \end{aligned}$$

where the last inequality holds since \(\widetilde{\mathbf {A}'}\in \{0,1\}^{m \times m}\).
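The two operations and the noise bounds of Lemmas 2 and 3 can be exercised concretely. Below is a minimal Python sketch under assumptions of our own: tiny, insecure parameters, and a power-of-two modulus so that plain bit decomposition plays the role of \(\widetilde{\mathbf {A}'}=\mathbf {G}^{-1}(\mathbf {A}')\). It is an illustration, not the scheme's implementation.

```python
import random

# Toy, insecure parameters chosen for illustration only (not the paper's).
n, k = 2, 8
q = 1 << k        # power-of-two modulus: bit decomposition exactly inverts G
m = n * k         # gadget width

# (Extended) primitive matrix G in Z_q^{n x m}: blocks of powers of two.
G = [[0] * m for _ in range(n)]
for i in range(n):
    for b in range(k):
        G[i][i * k + b] = 1 << b

def ginv(M):
    """Binary decomposition M~ in {0,1}^{m x cols} with G @ M~ = M (mod q)."""
    R = [[0] * len(M[0]) for _ in range(m)]
    for j in range(len(M[0])):
        for i in range(n):
            for b in range(k):
                R[i * k + b][j] = (M[i][j] >> b) & 1
    return R

def encode(A, a, s, e):
    """psi = (A + a*G)^T s + e in Z_q^m (Definition 4)."""
    return [(sum((A[i][j] + a * G[i][j]) * s[i] for i in range(n)) + e[j]) % q
            for j in range(m)]

def noise(A, a, s, psi):
    """||psi - (A + a*G)^T s mod q||_inf with centered representatives (Definition 5)."""
    ref = encode(A, a, s, [0] * m)
    return max(min((x - y) % q, q - (x - y) % q) for x, y in zip(psi, ref))

def add_en(p1, p2):
    """Add_en: coordinate-wise sum; new key A + A', new value a + a'."""
    return [(x + y) % q for x, y in zip(p1, p2)]

def mul_en(psi1, psi2, A2, a1):
    """Multiply_en: -ginv(A')^T psi + a*psi'; needs the value a in the clear."""
    T = ginv(A2)
    return [(-sum(T[j][i] * psi1[j] for j in range(m)) + a1 * psi2[i]) % q
            for i in range(m)]

random.seed(1)
rand_key = lambda: [[random.randrange(q) for _ in range(m)] for _ in range(n)]
small_err = lambda: [random.choice([-1, 0, 1]) for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]
A1, A2, a1, a2 = rand_key(), rand_key(), 1, 1
psi1, psi2 = encode(A1, a1, s, small_err()), encode(A2, a2, s, small_err())

Aplus = [[(x + y) % q for x, y in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
T2 = ginv(A2)
Atimes = [[(-sum(A1[i][t] * T2[t][j] for t in range(m))) % q for j in range(m)]
          for i in range(n)]                      # A^x = -A * ginv(A')
psi_plus = add_en(psi1, psi2)                     # encodes a + a' under Aplus
psi_times = mul_en(psi1, psi2, A2, a1)            # encodes a * a' under Atimes
```

Measuring the noise of \(\psi ^+\) and \(\psi ^\times \) against the derived keys \(\mathbf {A}^+\) and \(\mathbf {A}^\times \) reproduces the bounds of Lemmas 2 and 3.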

Note: This type of homomorphism differs from standard fully homomorphic encryption (FHE) in two main ways.

  • To perform multiplicative homomorphism, here we need one of the input values in the clear, whereas FHE homomorphic operations are performed without knowledge of the input values.

  • The other big difference is that here we require the output public key matrices \(\mathbf {A}^+,\mathbf {A}^\times \) to be independent of the input values \(a,a'\). More generally, given an arbitrary circuit with AND and OR gates along with the matrices corresponding to its input wires, one should be able to determine the matrix corresponding to the output wire without knowledge of the values on the input wires. This property is not present in any of the existing FHE schemes.

3.2 Our Public Key Evaluation Algorithm

We define a (public) key evaluation algorithm \(\mathsf {Eval}_{\mathsf {pk}}\). The algorithm takes as input a description of the branching program \(\mathsf {BP}\), a collection of public keys \(\{\mathbf {A}_i\}_{i \in [\ell ]}\) (one for each attribute bit \(x_i\)), a collection of public keys \(\{\mathbf {V}_{0,i}\}_{i\in [5]}\) for the initial state vector, and an auxiliary matrix \(\mathbf {A}^{\mathsf {c}}\). The algorithm outputs an “evaluated” public key corresponding to the branching program:

$$\begin{aligned} \mathsf {Eval}_{\mathsf {pk}}(\mathsf {BP},\{\mathbf {A}_i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i}\}_{i\in [5]}, \mathbf {A}^{\mathsf {c}})\rightarrow \mathbf {V}_\mathsf {BP}\end{aligned}$$

The auxiliary matrix \(\mathbf {A}^{\mathsf {c}}\) can be thought of as the public key we use to encode a constant 1. We also define \(\mathbf {A}'_i:=\mathbf {A}^{\mathsf {c}}-\mathbf {A}_i\), as a public key that will encode \(1-\mathbf {{x}}_i\). The output \(\mathbf {V}_\mathsf {BP}\in {\mathbb Z}_q^{n \times m}\) is the homomorphically defined public key \(\mathbf {V}_{L,1}\) at position 1 of the state vector at the Lth step of the branching program evaluation.

The algorithm proceeds as follows. Recall the description of the branching program \(\mathsf {BP}\), represented by the tuples \(\left( \mathsf {var}(t),\{\gamma _{t,i,0},\gamma _{t,i,1}\}_{i\in [5]}\right) \) for \(t \in [L]\). The initial state vector is always taken to be \(\mathbf {{v}}_0:=[1,0,0,0,0]\), and for \(t\in [L]\),

$$\begin{aligned} \mathbf {{v}}_t[i]=\mathbf {{v}}_{t-1}\left[ \gamma _{t,i,0}\right] \cdot (1-x_{\mathsf {var}(t)}) + \mathbf {{v}}_{t-1}\left[ \gamma _{t,i,1}\right] \cdot x_{\mathsf {var}(t)} \end{aligned}$$
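The cleartext state update above can be sketched directly. In the following minimal sketch, the representation of a step as a Python tuple \((\mathsf {var}(t),\gamma _{t,\cdot ,0},\gamma _{t,\cdot ,1})\) and the two-step AND program are illustrative assumptions.

```python
def bp_eval(bp, x, v0=(1, 0, 0, 0, 0)):
    """Evaluate a width-5 permutation branching program on attribute vector x.
    Each step is (var, g0, g1) with g0, g1 permutations of {0,...,4};
    the output is the state value at position 1 (index 0) after the last step."""
    v = list(v0)
    for var, g0, g1 in bp:
        bit = x[var]
        # v_t[i] = v_{t-1}[g0(i)] * (1 - x_var) + v_{t-1}[g1(i)] * x_var
        v = [v[g0[i]] * (1 - bit) + v[g1[i]] * bit for i in range(5)]
    return v[0]

# Hypothetical two-step program computing x0 AND x1.
bp_and = [(0, [1, 0, 2, 3, 4], [0, 1, 2, 3, 4]),
          (1, [2, 1, 0, 3, 4], [0, 1, 2, 3, 4])]
```

Because each \(\gamma \) is a permutation on [5], every intermediate state vector is a permutation of \(\mathbf {{v}}_0\), so all state values remain in \(\{0,1\}\).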

Our algorithm calculates \(\mathbf {V}_\mathsf {BP}\) inductively as follows. Assume that at step \(t-1\) (for \(t \in [L]\)), the state public keys \(\{\mathbf {V}_{t-1,i}\}_{i \in [5]}\) have already been assigned. We assign the state public keys \(\{\mathbf {V}_{t,i}\}_{i \in [5]}\) at step t as follows.

  1. 1.

    Let \(\gamma _0 :=\gamma _{t,i,0}\) and \(\gamma _1:=\gamma _{t,i,1}\).

  2. 2.

    Let \(\mathbf {V}_{t,i}=-\mathbf {A}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}-\mathbf {A}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1}\).

It is important to note that the public keys defined at each step for the state vector are independent of the input attribute vector. Now, let \(\mathbf {V}_{L,1}\) be the public key assigned at position 1 at step L of the branching program. We simply output \(\mathbf {V}_\mathsf {BP}:=\mathbf {V}_{L,1}\).
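As a concrete illustration, \(\mathsf {Eval}_{\mathsf {pk}}\) can be sketched as follows. This is a toy sketch under assumptions of our own: tiny insecure parameters, a power-of-two modulus so that bit decomposition realizes \(\widetilde{\mathbf {V}}\), and a hypothetical two-step width-5 program.

```python
import random

# Toy, insecure parameters (assumptions for illustration; not the paper's).
n, k = 2, 8
q = 1 << k      # power-of-two modulus so bit decomposition inverts G exactly
m = n * k

G = [[0] * m for _ in range(n)]     # primitive matrix G in Z_q^{n x m}
for i in range(n):
    for b in range(k):
        G[i][i * k + b] = 1 << b

def ginv(M):
    """Binary decomposition M~ with G @ M~ = M (mod q)."""
    R = [[0] * len(M[0]) for _ in range(m)]
    for j in range(len(M[0])):
        for i in range(n):
            for b in range(k):
                R[i * k + b][j] = (M[i][j] >> b) & 1
    return R

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]

def sub(A, B):
    return [[(x - y) % q for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

def neg(M):
    return [[(-x) % q for x in row] for row in M]

def eval_pk(bp, As, V0, Ac):
    """V_{t,i} = -A'_{var(t)} V~_{t-1,g0(i)} - A_{var(t)} V~_{t-1,g1(i)};
    returns V_{L,1} (position 1 of the final state). Takes no attribute vector."""
    Ap = [sub(Ac, Ai) for Ai in As]          # keys A'_i = A^c - A_i for 1 - x_i
    V = V0
    for var, g0, g1 in bp:
        V = [sub(neg(matmul(Ap[var], ginv(V[g0[i]]))),
                 matmul(As[var], ginv(V[g1[i]]))) for i in range(5)]
    return V[0]

random.seed(0)
rand = lambda: [[random.randrange(q) for _ in range(m)] for _ in range(n)]
As, V0, Ac = [rand(), rand()], [rand() for _ in range(5)], rand()
# hypothetical two-step width-5 BP; each step is (var, gamma_0, gamma_1)
bp = [(0, [1, 0, 2, 3, 4], [0, 1, 2, 3, 4]),
      (1, [2, 1, 0, 3, 4], [0, 1, 2, 3, 4])]
V_BP = eval_pk(bp, As, V0, Ac)
```

Note that `eval_pk` never sees an attribute vector, mirroring the independence property stressed above.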

3.3 Our Encoding Evaluation Algorithm

We also define an encoding evaluation algorithm \(\mathsf {Eval}_{\mathsf {en}}\) which we will use in the decryption algorithm of our ABE scheme. The algorithm takes as input the description of a branching program \(\mathsf {BP}\), an attribute vector \(\mathbf {{x}}\), a set of encodings for the attribute (with corresponding public keys) \(\{\mathbf {A}_i, \psi _{i}:=\psi _{\mathbf {A}_i, x_i}\}_{i \in [\ell ]}\), encodings of the initial state vector \(\{\mathbf {V}_{0,i}, \psi _{0,i}:=\psi _{\mathbf {V}_{0,i},\mathbf {{v}}_0[i]}\}_{i \in [5]}\) and an encoding of a constant “1” \(\psi ^\mathsf {c}:=\psi _{\mathbf {A}^{\mathsf {c}}, 1}\). (From now on, we will use the simplified notations \(\psi _i,\psi _{0,i},\psi ^\mathsf {c}\) for the encodings). \(\mathsf {Eval}_{\mathsf {en}}\) outputs an encoding of the result \(y:=\mathsf {BP}(\mathbf {{x}})\) with respect to a homomorphically derived public key \(\mathbf {V}_{\mathsf {BP}}:=\mathbf {V}_{L,1}\).

$$\begin{aligned} \mathsf {Eval}_{\mathsf {en}}\big (\mathsf {BP},\mathbf {{x}},\{\mathbf {A}_i,\psi _i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i},\psi _{0,i}\}_{i\in [5]}, \mathbf {A}^{\mathsf {c}},\psi ^\mathsf {c}\big )\rightarrow \psi _\mathsf {BP}\end{aligned}$$

Recall that for \(t\in [L]\), we have for all \(i \in [5]\):

$$\begin{aligned} \mathbf {{v}}_t[i]=\mathbf {{v}}_{t-1}\left[ \gamma _{t,i,0}\right] \cdot (1-x_{\mathsf {var}(t)}) + \mathbf {{v}}_{t-1}\left[ \gamma _{t,i,1}\right] \cdot x_{\mathsf {var}(t)} \end{aligned}$$

The evaluation algorithm proceeds inductively, updating the encoding of the state vector at each step of the branching program. The key idea for obtaining the desired noise growth is that we only multiply the fresh encodings of the attribute bits with the binary decomposition of the public keys. The result is then added to update the encoding of the state vector. Hence, at each step of the computation the noise in the state encodings grows only by a fixed additive factor.

The algorithm proceeds as follows. We define \(\psi '_i :=\psi _{\mathbf {A}'_i, (1-x_i)}=(\mathbf {A}'_i+(1-x_i) \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}'_i\) to denote the encoding of \(1-x_i\) with respect to \(\mathbf {A}_i' = \mathbf {A}^{\mathsf {c}}- \mathbf {A}_i\). Note that it can be computed as \(\mathsf {Add}_\mathsf {en}( \psi _{\mathbf {A}^{\mathsf {c}}, 1}, -\psi _{\mathbf {A}_i, x_i})\). Assume at step \(t-1\) we hold encodings of the state vector \(\{ \psi _{\mathbf {V}_{t-1,i}, \mathbf {{v}}_{t-1}[i]} \}_{i \in [5]}\). Now, we compute the encodings of the new state values:

$$\begin{aligned} \psi _{t,i}&=\mathsf {Add}_\mathsf {en}\left( \mathsf {Multiply}_\mathsf {en}(\psi '_{\mathsf {var}(t)}, \psi _{t-1,\gamma _0}), \mathsf {Multiply}_\mathsf {en}(\psi _{\mathsf {var}(t)},\psi _{t-1,\gamma _1})\right) \end{aligned}$$

where \(\gamma _0:=\gamma _{t,i,0}\) and \(\gamma _1:=\gamma _{t,i,1}\). As we show below (in Lemma 4), this new encoding has the form \(\big ( \mathbf {V}_{t,i} + \mathbf {{v}}_t[i] \cdot \mathbf {G}\big )^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}_{t,i}\) (for a small enough noise term \(\mathbf {{e}}_{t,i}\)).

Finally, let \(\psi _{L,1}\) be the encoding obtained at the Lth step, corresponding to the state value at position 1, by this process. As we show in Lemma 5, its noise term \(\mathbf {{e}}_\mathsf {BP}\) has “low” infinity norm, enabling correct decryption (Lemma 8). The algorithm outputs \(\psi _\mathsf {BP}:=\psi _{L,1}\).
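Combining the two evaluation procedures, the following toy sketch (assumptions of our own: tiny insecure parameters, a power-of-two modulus so bit decomposition realizes \(\widetilde{\cdot }\), and a hypothetical two-step program for \(x_0 \wedge x_1\)) checks that the output of \(\mathsf {Eval}_{\mathsf {en}}\) is a valid encoding of \(\mathsf {BP}(\mathbf {{x}})\) under the key produced by \(\mathsf {Eval}_{\mathsf {pk}}\), with noise within the bound proved in Lemma 5 below.

```python
import random

# Toy, insecure parameters (assumptions for illustration; not the paper's).
n, k = 2, 8
q = 1 << k
m = n * k

G = [[0] * m for _ in range(n)]     # primitive matrix G
for i in range(n):
    for b in range(k):
        G[i][i * k + b] = 1 << b

def ginv(M):  # binary M~ with G @ M~ = M (mod q)
    R = [[0] * len(M[0]) for _ in range(m)]
    for j in range(len(M[0])):
        for i in range(n):
            for b in range(k):
                R[i * k + b][j] = (M[i][j] >> b) & 1
    return R

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]
def sub(A, B): return [[(x - y) % q for x, y in zip(r, r2)] for r, r2 in zip(A, B)]
def neg(M): return [[(-x) % q for x in row] for row in M]

def encode(A, a, s, e):
    return [(sum((A[i][j] + a * G[i][j]) * s[i] for i in range(n)) + e[j]) % q
            for j in range(m)]
def noise(A, a, s, psi):
    ref = encode(A, a, s, [0] * m)
    return max(min((x - y) % q, q - (x - y) % q) for x, y in zip(psi, ref))
def add_en(p, p2): return [(x + y) % q for x, y in zip(p, p2)]
def mul_en(psi1, psi2, A2, a1):  # -ginv(A')^T psi + a*psi'; needs a in the clear
    T = ginv(A2)
    return [(-sum(T[j][i] * psi1[j] for j in range(m)) + a1 * psi2[i]) % q
            for i in range(m)]

def eval_pk_en(bp, x, As, psis, V, psiv, Ac, psic):
    """Run Eval_pk and Eval_en in lockstep; return (V_BP, psi_BP)."""
    Ap = [sub(Ac, Ai) for Ai in As]                              # keys for 1 - x_i
    psip = [add_en(psic, [(-v) % q for v in p]) for p in psis]   # encode 1 - x_i
    for var, g0, g1 in bp:
        psiv = [add_en(mul_en(psip[var], psiv[g0[i]], V[g0[i]], 1 - x[var]),
                       mul_en(psis[var], psiv[g1[i]], V[g1[i]], x[var]))
                for i in range(5)]
        V = [sub(neg(matmul(Ap[var], ginv(V[g0[i]]))),
                 matmul(As[var], ginv(V[g1[i]]))) for i in range(5)]
    return V[0], psiv[0]

random.seed(7)
rand = lambda: [[random.randrange(q) for _ in range(m)] for _ in range(n)]
small = lambda: [random.choice([-1, 0, 1]) for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]
x, v0 = [1, 1], [1, 0, 0, 0, 0]
bp = [(0, [1, 0, 2, 3, 4], [0, 1, 2, 3, 4]),   # hypothetical BP for x0 AND x1
      (1, [2, 1, 0, 3, 4], [0, 1, 2, 3, 4])]
As = [rand() for _ in range(2)]
V0 = [rand() for _ in range(5)]
Ac = rand()
psis = [encode(As[i], x[i], s, small()) for i in range(2)]
psiv = [encode(V0[i], v0[i], s, small()) for i in range(5)]
psic = encode(Ac, 1, s, small())
V_BP, psi_BP = eval_pk_en(bp, x, As, psis, V0, psiv, Ac, psic)
```

Decryption correctness (Lemma 8) then only needs this accumulated noise to stay below q / 4.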

Correctness and Analysis

Lemma 4

For any valid set of encodings \(\psi _{\mathsf {var}(t)}, \psi '_{\mathsf {var}(t)}\) for the bits \(x_{\mathsf {var}(t)},(1-x_{\mathsf {var}(t)})\) and \(\{\psi _{t-1,i}\}_{i\in [5]}\) for the state vector \(\mathbf {{v}}_{t-1}\) at step \(t-1\), the output of the function

$$\begin{aligned} \mathsf {Add}_\mathsf {en}\left( \mathsf {Multiply}_\mathsf {en}(\psi '_{\mathsf {var}(t)}, \psi _{t-1,\gamma _0}),\mathsf {Multiply}_\mathsf {en}(\psi _{\mathsf {var}(t)},\psi _{t-1,\gamma _1})\right) \rightarrow \psi _{t,i} \end{aligned}$$

where \(\psi _{t,i}=\big ( \mathbf {V}_{t,i} + \mathbf {{v}}_t[i] \cdot \mathbf {G}\big )^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}_{t,i}\), for some noise term \(\mathbf {{e}}_{t,i}\).

Proof

Given valid encodings \(\psi _{\mathsf {var}(t)},\psi '_{\mathsf {var}(t)}\) and \(\left\{ \psi _{t-1,i}\right\} _{i\in [5]}\), we have:

$$\begin{aligned} \psi _{t,i}= & {} \mathsf {Add}_\mathsf {en}\left( \mathsf {Multiply}_\mathsf {en}(\psi '_{\mathsf {var}(t)}, \psi _{t-1,\gamma _0}),\mathsf {Multiply}_\mathsf {en}(\psi _{\mathsf {var}(t)},\psi _{t-1,\gamma _1})\right) \\= & {} \mathsf {Add}_\mathsf {en}\bigg ( \left[ (-\mathbf {A}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}+(\mathbf {{v}}_{t-1}[\gamma _0]\cdot (1-x_{\mathsf {var}(t)}))\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_{1}\right] ,\\&\qquad \left[ (-\mathbf {A}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1}+(\mathbf {{v}}_{t-1}[\gamma _1]\cdot x_{\mathsf {var}(t)})\cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_{2}\right] \bigg )\\= & {} \Big [\underbrace{\left( -\mathbf {A}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}-\mathbf {A}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1}\right) }_{\mathbf {V}_{t,i}}+ \underbrace{\big (\mathbf {{v}}_{t-1}[\gamma _0]\cdot (1-x_{\mathsf {var}(t)}) + \mathbf {{v}}_{t-1}[\gamma _1]\cdot x_{\mathsf {var}(t)}\big )}_{\mathbf {{v}}_t[i]}\cdot \mathbf {G}\Big ]^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_{t,i} \end{aligned}$$

where the first step follows from the correctness of the \(\mathsf {Multiply}_\mathsf {en}\) algorithm and the last step from that of \(\mathsf {Add}_\mathsf {en}\), with \(\mathbf {{e}}_{t,i} = \mathbf {{e}}_1 + \mathbf {{e}}_2\) where \(\mathbf {{e}}_1 = -\left( \widetilde{\mathbf {V}}_{t-1,\gamma _0}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}'_{\mathsf {var}(t)}+ (1-x_{\mathsf {var}(t)}) \cdot \mathbf {{e}}_{t-1,\gamma _0}\) and \(\mathbf {{e}}_2 = -\left( \widetilde{\mathbf {V}}_{t-1,\gamma _1}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}_{\mathsf {var}(t)}+ x_{\mathsf {var}(t)} \cdot \mathbf {{e}}_{t-1,\gamma _1}\).

Lemma 5

Let \(\mathsf {Eval}_{\mathsf {en}}\big (\mathsf {BP},\mathbf {{x}},\{\mathbf {A}_i,\psi _i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i},\psi _{0,i}\}_{i\in [5]}, \mathbf {A}^{\mathsf {c}},\psi ^\mathsf {c}\big ) \rightarrow \psi _\mathsf {BP}\), where all the noise terms \(\big \{\mathsf {Noise}_{\mathbf {A}_i,x_i}(\psi _i)\big \}_{i\in [\ell ]},\mathsf {Noise}_{\mathbf {A}^{\mathsf {c}},1}(\psi ^\mathsf {c}),\big \{\mathsf {Noise}_{\mathbf {V}_{0,i},\mathbf {{v}}_0[i]}(\psi _{0,i})\big \}_{i\in [5]}\) are bounded by B. Then

$$\begin{aligned} \mathsf {Noise}_{\mathbf {V}_\mathsf {BP},y}(\psi _\mathsf {BP}) \le 3m \cdot L \cdot B + B \end{aligned}$$

Proof

We will prove this lemma by induction. That is, we will prove that at any step t,

$$\begin{aligned} \mathsf {Noise}_{\mathbf {V}_{t,i},\mathbf {{v}}_t[i]}(\psi _{t,i}) \le 3m \cdot t \cdot B+B \end{aligned}$$

for \(i\in [5]\). For the base case, \(t=0\), we operate on fresh encodings for the initial state vector \(\mathbf {{v}}_0\). Hence, we have that, \(\mathsf {Noise}_{\mathbf {V}_{0,i},\mathbf {{v}}_0[i]}(\psi _{0,i}) \le B\), for all \(i\in [5]\). Let \(\{\psi _{t-1,i}\}_{i\in [5]}\) be the encodings of the state vector \(\mathbf {{v}}_{t-1}\) at step \(t-1\) such that

$$\begin{aligned} \mathsf {Noise}_{\mathbf {V}_{t-1,i},\mathbf {{v}}_{t-1}[i]}(\psi _{t-1,i}) \le 3m \cdot (t-1) \cdot B+B \end{aligned}$$

for \(i\in [5]\). We know that \(\psi _{t,i}=\mathsf {Add}_\mathsf {en}\left( \mathsf {Multiply}_\mathsf {en}(\psi '_{\mathsf {var}(t)},\psi _{t-1,\gamma _0}),\mathsf {Multiply}_\mathsf {en}(\psi _{\mathsf {var}(t)},\psi _{t-1,\gamma _1})\right) \). Hence, from Lemmas 2 and 3, we get:

$$\begin{aligned} \mathsf {Noise}_{\mathbf {V}_{t,i},\mathbf {{v}}_t[i]}(\psi _{t,i})\le & {} \left( m \cdot \mathsf {Noise}_{\mathbf {A}'_{\mathsf {var}(t)},(1-x_{\mathsf {var}(t)})}(\psi '_{\mathsf {var}(t)})+ (1-x_{\mathsf {var}(t)})\cdot \mathsf {Noise}_{\mathbf {V}_{t-1,\gamma _0},\mathbf {{v}}_{t-1}[\gamma _0]}(\psi _{t-1,\gamma _0})\right) \\&\qquad +\left( m \cdot \mathsf {Noise}_{\mathbf {A}_{\mathsf {var}(t)},x_{\mathsf {var}(t)}}(\psi _{\mathsf {var}(t)})+ x_{\mathsf {var}(t)}\cdot \mathsf {Noise}_{\mathbf {V}_{t-1,\gamma _1},\mathbf {{v}}_{t-1}[\gamma _1]}(\psi _{t-1,\gamma _1})\right) \\\le & {} \, \left( m \cdot 2B + (1-x_{\mathsf {var}(t)}) \cdot (3m(t-1)B+B) \right) \\&\qquad + \left( m \cdot B + x_{\mathsf {var}(t)} \cdot (3m(t-1)B+B) \right) \\= & {} \, 3m \cdot t \cdot B + B \end{aligned}$$

where

$$\begin{aligned} \mathsf {Noise}_{\mathbf {A}'_{\mathsf {var}(t)},(1-x_{\mathsf {var}(t)})}(\psi '_{\mathsf {var}(t)})\le \mathsf {Noise}_{\mathbf {A}^{\mathsf {c}},1}(\psi ^\mathsf {c})+\mathsf {Noise}_{-\mathbf {A}_{\mathsf {var}(t)},-x_{\mathsf {var}(t)}}(-\psi _{\mathsf {var}(t)}) \le B +B = 2B \end{aligned}$$

by Lemma 2. With \(\psi _\mathsf {BP}\) being an encoding at step L, we have \(\mathsf {Noise}_{\mathbf {V}_\mathsf {BP},y}(\psi _\mathsf {BP}) \le 3m\cdot L \cdot B +B \). Thus, \(\mathsf {Noise}_{\mathbf {V}_\mathsf {BP},y}(\psi _\mathsf {BP}) = O(m\cdot L \cdot B) \).

3.4 Our Simulated Public Key Evaluation Algorithm

During simulation, we use a different procedure for assigning public keys to each wire of the input and the state vector. In particular, \(\mathbf {A}_i = \mathbf {A}\cdot \mathbf {R}_i - x_i \cdot \mathbf {G}\) for a shared public key \(\mathbf {A}\) and a low-norm matrix \(\mathbf {R}_i\). Similarly, the state public keys are \(\mathbf {V}_{t,i} = \mathbf {A}\cdot \mathbf {R}_{t,i} - \mathbf {{v}}_t[i] \cdot \mathbf {G}\). The algorithm thus takes as input the description of the branching program \(\mathsf {BP}\), the attribute vector \(\mathbf {{x}}\), two collections of low-norm matrices \(\{\mathbf {R}_i\}, \{\mathbf {R}_{0,i}\}\) corresponding to the input public keys and the initial state vector, a low-norm matrix \(\mathbf {R}^{\mathsf {c}}\) for the public key of the constant 1, and the shared matrix \(\mathbf {A}\). It outputs a homomorphically derived low-norm matrix \(\mathbf {R}_{\mathsf {BP}}\).

$$\begin{aligned} \mathsf {Eval}_{\mathsf {SIM}}(\mathsf {BP}, \mathbf {{x}},\{\mathbf {R}_i\}_{i\in [\ell ]}, \{\mathbf {R}_{0,i}\}_{i\in [5]}, \mathbf {R}^{\mathsf {c}},\mathbf {A})\rightarrow \mathbf {R}_\mathsf {BP}\end{aligned}$$

The algorithm will ensure that the output \(\mathbf {R}_\mathsf {BP}\) satisfies \(\mathbf {A}\cdot \mathbf {R}_\mathsf {BP}-\mathsf {BP}(\mathbf {{x}})\cdot \mathbf {G}=\mathbf {V}_\mathsf {BP}\), where \(\mathbf {V}_\mathsf {BP}\) is the homomorphically derived public key.

The algorithm proceeds inductively as follows. Assume that at step \(t-1\) we hold a collection of low-norm matrices \(\mathbf {R}_{t-1,i}\) and public keys \(\mathbf {V}_{t-1,i} = \mathbf {A}\cdot \mathbf {R}_{t-1,i} - \mathbf {{v}}_{t-1}[i] \cdot \mathbf {G}\) for \(i \in [5]\) corresponding to the state vector. Let \(\mathbf {R}'_i = \mathbf {R}^{\mathsf {c}}- \mathbf {R}_i\) for all \(i \in [\ell ]\). We show how to derive the low-norm matrices \(\mathbf {R}_{t,i}\) for all \(i \in [5]\):

  1. 1.

    Let \(\gamma _0:=\gamma _{t,i,0}\) and \(\gamma _1:=\gamma _{t,i,1}\).

  2. 2.

    Compute

    $$\begin{aligned} \mathbf {R}_{t,i}=\left( -\mathbf {R}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}+(1-x_{\mathsf {var}(t)}) \cdot \mathbf {R}_{t-1,\gamma _0}\right) + \left( -\mathbf {R}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1} + x_{\mathsf {var}(t)} \cdot \mathbf {R}_{t-1,\gamma _1}\right) \end{aligned}$$

Finally, let \(\mathbf {R}_{L,1}\) be the matrix obtained at the Lth step, corresponding to position 1 of the state vector, by the above algorithm. Output \(\mathbf {R}_\mathsf {BP}:=\mathbf {R}_{L,1}\). Below, we show that the norm of \(\mathbf {R}_\mathsf {BP}\) remains small and that the homomorphically computed public key \(\mathbf {V}_\mathsf {BP}\) obtained via \(\mathsf {Eval}_{\mathsf {pk}}\) satisfies \(\mathbf {V}_\mathsf {BP}= \mathbf {A}\cdot \mathbf {R}_\mathsf {BP}- \mathsf {BP}(\mathbf {{x}}) \cdot \mathbf {G}\).
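A toy sketch of \(\mathsf {Eval}_{\mathsf {SIM}}\) run alongside \(\mathsf {Eval}_{\mathsf {pk}}\) can check both claims. The following assumptions are our own: tiny insecure parameters, a power-of-two modulus so bit decomposition realizes \(\widetilde{\cdot }\), simulated keys set up as \(\mathbf {A}_i=\mathbf {A}\mathbf {R}_i-x_i\mathbf {G}\) and \(\mathbf {A}^{\mathsf {c}}=\mathbf {A}\mathbf {R}^{\mathsf {c}}-\mathbf {G}\), and a hypothetical two-step program.

```python
import random

# Toy, insecure parameters (assumptions for illustration; not the paper's).
n, k = 2, 6
q = 1 << k
m = n * k

G = [[0] * m for _ in range(n)]     # primitive matrix G
for i in range(n):
    for b in range(k):
        G[i][i * k + b] = 1 << b

def ginv(M):  # binary M~ with G @ M~ = M (mod q)
    R = [[0] * len(M[0]) for _ in range(m)]
    for j in range(len(M[0])):
        for i in range(n):
            for b in range(k):
                R[i * k + b][j] = (M[i][j] >> b) & 1
    return R

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % q
             for j in range(len(B[0]))] for i in range(len(A))]
def sub(A, B): return [[(x - y) % q for x, y in zip(r, r2)] for r, r2 in zip(A, B)]
def neg(M): return [[(-x) % q for x in row] for row in M]

# Integer helpers (no reduction): the R matrices live over Z and stay low-norm.
def zmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]
def zcomb(A, B, c):   # -A + c*B over Z
    return [[-x + c * y for x, y in zip(r, r2)] for r, r2 in zip(A, B)]
def zadd(A, B): return [[x + y for x, y in zip(r, r2)] for r, r2 in zip(A, B)]

def eval_pk(bp, As, V0, Ac):
    Ap = [sub(Ac, Ai) for Ai in As]
    V = V0
    for var, g0, g1 in bp:
        V = [sub(neg(matmul(Ap[var], ginv(V[g0[i]]))),
                 matmul(As[var], ginv(V[g1[i]]))) for i in range(5)]
    return V[0]

def eval_sim(bp, x, Rs, R0, Rc, A, As, V0, Ac):
    """Track the low-norm R matrices alongside the simulated state keys V.
    (A is part of the interface but is only needed to state the invariant.)"""
    Rp = [[[rc - r for rc, r in zip(u, w)] for u, w in zip(Rc, Ri)] for Ri in Rs]
    Ap = [sub(Ac, Ai) for Ai in As]
    R, V = R0, V0
    for var, g0, g1 in bp:
        T0 = [ginv(V[g0[i]]) for i in range(5)]
        T1 = [ginv(V[g1[i]]) for i in range(5)]
        R = [zadd(zcomb(zmul(Rp[var], T0[i]), R[g0[i]], 1 - x[var]),
                  zcomb(zmul(Rs[var], T1[i]), R[g1[i]], x[var])) for i in range(5)]
        V = [sub(neg(matmul(Ap[var], T0[i])), matmul(As[var], T1[i]))
             for i in range(5)]
    return R[0]

random.seed(3)
pm1 = lambda: [[random.choice([-1, 1]) for _ in range(m)] for _ in range(m)]
x, v0 = [1, 0], [1, 0, 0, 0, 0]
bp = [(0, [1, 0, 2, 3, 4], [0, 1, 2, 3, 4]),   # hypothetical BP for x0 AND x1
      (1, [2, 1, 0, 3, 4], [0, 1, 2, 3, 4])]
y = 0   # BP(x) = x0 AND x1 = 0 for x = [1, 0]
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
Rs, R0, Rc = [pm1() for _ in range(2)], [pm1() for _ in range(5)], pm1()
aG = lambda a: [[(a * g) % q for g in row] for row in G]
As = [sub(matmul(A, Rs[i]), aG(x[i])) for i in range(2)]   # A_i = A R_i - x_i G
V0 = [sub(matmul(A, R0[i]), aG(v0[i])) for i in range(5)]
Ac = sub(matmul(A, Rc), aG(1))                             # key for the constant 1
R_BP = eval_sim(bp, x, Rs, R0, Rc, A, As, V0, Ac)
V_BP = eval_pk(bp, As, V0, Ac)
```

With these inputs, the relation \(\mathbf {V}_\mathsf {BP}=\mathbf {A}\mathbf {R}_\mathsf {BP}-\mathsf {BP}(\mathbf {{x}})\cdot \mathbf {G}\) of Lemma 6 and the norm bound of Lemma 7 can be checked exactly.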

Lemma 6

(Correctness of \(\mathsf {Eval}_{\mathsf {SIM}}\) ). For any set of valid inputs to \(\mathsf {Eval}_{\mathsf {SIM}}\), we have

$$\begin{aligned} \mathsf {Eval}_{\mathsf {SIM}}(\mathsf {BP}, \mathbf {{x}},\{\mathbf {R}_i\}_{i\in [\ell ]}, \{\mathbf {R}_{0,i}\}_{i\in [5]}, \mathbf {R}^{\mathsf {c}},\mathbf {A})\rightarrow \mathbf {R}_\mathsf {BP}\end{aligned}$$

where \(\mathbf {V}_\mathsf {BP}= \mathbf {A}\mathbf {R}_\mathsf {BP}- \mathsf {BP}(\mathbf {{x}}) \cdot \mathbf {G}\).

Proof

We will prove this lemma by induction. That is, we will prove that at any step t,

$$\begin{aligned} \mathbf {V}_{t,i}=\mathbf {A}\mathbf {R}_{t,i}-\mathbf {{v}}_t[i]\cdot \mathbf {G}\end{aligned}$$

for any \(i\in [5]\). For the base case \(t=0\), since the inputs are valid, we have that \(\mathbf {V}_{0,i}=\mathbf {A}\mathbf {R}_{0,i}-\mathbf {{v}}_0[i]\cdot \mathbf {G}\), for all \(i\in [5]\). Let \(\mathbf {V}_{t-1,i}=\mathbf {A}\mathbf {R}_{t-1,i}-\mathbf {{v}}_{t-1}[i]\cdot \mathbf {G}\) for \({i\in [5]}\). Hence, we get:

$$\begin{aligned} \mathbf {A}\mathbf {R}_{t,i}= & {} \left( -\mathbf {A}\mathbf {R}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}+(1-x_{\mathsf {var}(t)}) \cdot \mathbf {A}\mathbf {R}_{t-1,\gamma _0}\right) + \left( -\mathbf {A}\mathbf {R}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1} + x_{\mathsf {var}(t)} \cdot \mathbf {A}\mathbf {R}_{t-1,\gamma _1}\right) \\= & {} \Big (-\big (\mathbf {A}'_{\mathsf {var}(t)}+(1-x_{\mathsf {var}(t)})\cdot \mathbf {G}\big )\widetilde{\mathbf {V}}_{t-1,\gamma _0}+(1-x_{\mathsf {var}(t)}) \cdot \big (\mathbf {V}_{t-1,\gamma _0}+\mathbf {{v}}_{t-1}[\gamma _0]\cdot \mathbf {G}\big )\Big )\\&\quad +\Big (-\big (\mathbf {A}_{\mathsf {var}(t)}+x_{\mathsf {var}(t)}\cdot \mathbf {G}\big )\widetilde{\mathbf {V}}_{t-1,\gamma _1}+ x_{\mathsf {var}(t)} \cdot \big (\mathbf {V}_{t-1,\gamma _1}+\mathbf {{v}}_{t-1}[\gamma _1]\cdot \mathbf {G}\big )\Big )\\= & {} \Big (-\mathbf {A}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}-(1-x_{\mathsf {var}(t)})\cdot \mathbf {V}_{t-1,\gamma _0}+ (1-x_{\mathsf {var}(t)}) \cdot \mathbf {V}_{t-1,\gamma _0}+ \big ((1-x_{\mathsf {var}(t)})\mathbf {{v}}_{t-1}[\gamma _0]\big )\cdot \mathbf {G}\Big )\\&\quad +\Big (-\mathbf {A}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1}-x_{\mathsf {var}(t)}\cdot \mathbf {V}_{t-1,\gamma _1}+ x_{\mathsf {var}(t)} \cdot \mathbf {V}_{t-1,\gamma _1}+ \big (x_{\mathsf {var}(t)}\mathbf {{v}}_{t-1}[\gamma _1]\big )\cdot \mathbf {G}\Big )\\= & {} \big (\underbrace{-\mathbf {A}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}-\mathbf {A}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1}}_{\mathbf {V}_{t,i}}\big )+ \big (\underbrace{(1-x_{\mathsf {var}(t)})\mathbf {{v}}_{t-1}[\gamma _0]+ x_{\mathsf {var}(t)}\mathbf {{v}}_{t-1}[\gamma _1]}_{\mathbf {{v}}_t[i]}\big )\cdot \mathbf {G}\end{aligned}$$

Hence, we have \(\mathbf {V}_{t,i}=\mathbf {A}\mathbf {R}_{t,i}-\mathbf {{v}}_t[i]\cdot \mathbf {G}\). Thus, at the Lth step, we have by induction that

$$\begin{aligned} \mathbf {V}_\mathsf {BP}=\mathbf {V}_{L,1}=\mathbf {A}\mathbf {R}_{L,1}-\mathbf {{v}}_L[1]\cdot \mathbf {G}=\mathbf {A}\mathbf {R}_\mathsf {BP}-\mathsf {BP}(\mathbf {{x}})\cdot \mathbf {G}\end{aligned}$$

Lemma 7

Let \(\mathsf {Eval}_{\mathsf {SIM}}\big (\mathsf {BP}, \mathbf {{x}},\{\mathbf {R}_i\}_{i\in [\ell ]}, \{\mathbf {R}_{0,i}\}_{i\in [5]}, \mathbf {R}^{\mathsf {c}},\mathbf {A}\big )\rightarrow \mathbf {R}_\mathsf {BP}\), where all the “\(\mathbf {R}\)” matrices are sampled from \(\{-1,1\}^{m \times m}\). Then

$$\begin{aligned} \left\| {\mathbf {R}_\mathsf {BP}} \right\| _{\infty } \le 3m \cdot L + 1 \end{aligned}$$

Proof

This proof is very similar to that of Lemma 5 and also proceeds by induction. That is, we will prove that at any step t,

$$\begin{aligned} \left\| {\mathbf {R}_{t,i}} \right\| _{\infty } \le 3m \cdot t + 1 \end{aligned}$$

for \(i\in [5]\). For the base case \(t=0\), the input matrices \(\mathbf {R}_{0,i}\) satisfy \(\left\| {\mathbf {R}_{0,i}} \right\| _{\infty } = 1\) for all \(i\in [5]\). Assume \(\left\| {\mathbf {R}_{t-1,i}} \right\| _{\infty } \le 3m \cdot (t-1) + 1\) for \(i\in [5]\). We know that

$$\begin{aligned} \mathbf {R}_{t,i}=\left( -\mathbf {R}'_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _0}+(1-x_{\mathsf {var}(t)}) \cdot \mathbf {R}_{t-1,\gamma _0}\right) + \left( -\mathbf {R}_{\mathsf {var}(t)}\widetilde{\mathbf {V}}_{t-1,\gamma _1} + x_{\mathsf {var}(t)} \cdot \mathbf {R}_{t-1,\gamma _1})\right) \end{aligned}$$

Hence, we have:

$$\begin{aligned} \left\| {\mathbf {R}_{t,i}} \right\| _{\infty }\le & {} \left( m \cdot \left\| {\widetilde{\mathbf {V}}_{t-1,\gamma _0}} \right\| _{\infty } \cdot \left\| {\mathbf {R}'_{\mathsf {var}(t)}} \right\| _{\infty } + (1-x_{\mathsf {var}(t)})\cdot \left\| {\mathbf {R}_{t-1,\gamma _0}} \right\| _{\infty }\right) \\&\quad +\left( m \cdot \left\| {\widetilde{\mathbf {V}}_{t-1,\gamma _1}} \right\| _{\infty } \cdot \left\| {\mathbf {R}_{\mathsf {var}(t)}} \right\| _{\infty } + x_{\mathsf {var}(t)}\cdot \left\| {\mathbf {R}_{t-1,\gamma _1}} \right\| _{\infty }\right) \\\le & {} \left( m \cdot 1 \cdot 2 + (1-x_{\mathsf {var}(t)}) \cdot (3m \cdot (t-1)+1) \right) + \left( m \cdot 1 \cdot 1 + x_{\mathsf {var}(t)} \cdot (3m \cdot (t-1)+1) \right) \\= & {} \, 3m \cdot t + 1 \end{aligned}$$

where \( \left\| {\mathbf {R}'_{i}} \right\| _{\infty } = \left\| {\mathbf {R}^{\mathsf {c}}- \mathbf {R}_{i}} \right\| _{\infty } \le \left\| {\mathbf {R}^{\mathsf {c}}} \right\| _{\infty }+\left\| {\mathbf {R}_{i}} \right\| _{\infty } \le 1+1 =2 \). With \(\mathbf {R}_\mathsf {BP}\) being the matrix at step L, we have \(\left\| {\mathbf {R}_\mathsf {BP}} \right\| _{\infty } \le 3m\cdot L +1\). Thus, \(\left\| {\mathbf {R}_\mathsf {BP}} \right\| _{\infty } = O(m\cdot L)\).

4 Our Attribute-Based Encryption

In this section we describe our attribute-based encryption scheme for branching programs. We present the scheme for branching programs of bounded length, but note that we can support unbounded length by setting the modulus q to a small superpolynomial. For a family of branching programs of length bounded by L and input space \(\{0,1\}^\ell \), we define the \(\mathcal {ABE}\) algorithms \((\mathsf {Params}, \mathsf {Setup},\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\) as follows.

  • \(\mathsf {Params}(1^\lambda , 1^L)\): For a security parameter \(\lambda \) and length bound L, let the LWE dimension be \(n=n(\lambda )\), let the LWE modulus be \(q=q(n,L)\), and let \(m=m(n,q)\) be the matrix dimension. Let \(\chi \) be an error distribution over \({\mathbb Z}\) and let \(B=B(n)\) be an error bound. We additionally choose two Gaussian parameters: a “small” Gaussian parameter \(s=s(n)\) and a “large” Gaussian parameter \(\alpha =\alpha (n)\). Both parameters are polynomially bounded (in \(\lambda ,L\)). The public parameters \( \mathsf {pp}=(\lambda ,L,n,q,m,\chi ,B,s,\alpha )\) are implicitly given as input to all the algorithms below.

  • \(\mathsf {Setup}(1^\ell )\): The setup algorithm takes as input the length of the attribute vector \(\ell \).

    1. 1.

      Sample a matrix with a trapdoor: \((\mathbf {A}, \mathbf {T}_\mathbf {A}) \leftarrow \mathsf {TrapSamp}(1^n,1^m,q)\).

    2. 2.

      Let \(\mathbf {G}\in {\mathbb Z}_q^{n \times m}\) be the primitive matrix with the public trapdoor basis \(\mathbf {T}_\mathbf {G}\).

    3. 3.

      Choose \(\ell + 6\) matrices \(\{\mathbf {A}_i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i}\}_{i \in [5]}, \mathbf {A}^{\mathsf {c}}\) at random from \({\mathbb Z}_q^{n \times m}\). The first \(\ell \) matrices form the LWE “public keys” for the bits of the attribute vector, the next 5 form the “public keys” for the initial configuration of the state vector, and the last matrix is the “public key” for the constant 1.

    4. 4.

      Choose a vector \(\mathbf {u}\in {\mathbb Z}_q^n\) at random.

    5. 5.

      Output the master public key

      $$\begin{aligned} \mathsf {mpk}:=\left( \mathbf {A},\mathbf {A}^{\mathsf {c}},\{\mathbf {A}_i\}_{i\in [\ell ]},\{\mathbf {V}_{0,i}\}_{i\in [5]},\mathbf {G},\mathbf {u}\right) \end{aligned}$$

      and the master secret key \( \mathsf {msk}:=\left( \mathbf {T}_\mathbf {A}, \mathsf {mpk}\right) \).

  • \(\mathsf {Enc}( \mathsf {mpk}, \mathbf {{x}}, \mu )\): The encryption algorithm takes as input the master public key \( \mathsf {mpk}\), the attribute vector \(\mathbf {{x}}\in \{0,1\}^\ell \) and a message \(\mu \).

    1. 1.

      Choose an LWE secret vector \(\mathbf {{s}}\in {\mathbb Z}_q^n\) at random.

    2. 2.

      Choose noise vector \(\mathbf {{e}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\chi ^m\) and compute \(\psi _0=\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\).

    3. 3.

      Choose a random matrix \(\mathbf {R}^{\mathsf {c}}\leftarrow \{-1,1\}^{m\times m}\) and let \(\mathbf {{e}}^\mathsf {c}=(\mathbf {R}^{\mathsf {c}})^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\). Now, compute an encoding of a constant 1:

      $$\begin{aligned} \psi ^\mathsf {c}=\left( \mathbf {A}^{\mathsf {c}}+ \mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}^\mathsf {c}\end{aligned}$$
    4. 4.

      Encode each bit \(i\in [\ell ]\) of the attribute vector:

      1. (a)

        Choose random matrices \(\mathbf {R}_i \leftarrow \{-1,1\}^{m\times m}\) and let \(\mathbf {{e}}_i=\mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\).

      2. (b)

        Compute \(\psi _i=(\mathbf {A}_i+x_i \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_i\).

    5. 5.

      Encode the initial state configuration vector \(\mathbf {{v}}_0 = [1,0,0,0,0]\): for all \(i\in [5]\),

      1. (a)

        Choose a random matrix \(\mathbf {R}_{0,i}\leftarrow \{-1,1\}^{m\times m}\) and let \(\mathbf {{e}}_{0,i}=\mathbf {R}_{0,i}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\).

      2. (b)

        Compute \(\psi _{0,i}=(\mathbf {V}_{0,i}+\mathbf {{v}}_0[i] \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_{0,i}\).

    6. 6.

      Encrypt the message \(\mu \) as \(\tau =\mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+e+\left\lfloor q/2 \right\rceil \mu \), where \(e \leftarrow \chi \).

    7. 7.

      Output the ciphertext

      $$\begin{aligned} \mathsf {ct}_\mathbf {{x}}=\left( \mathbf {{x}},\psi _0,\psi ^\mathsf {c},\{\psi _i\}_{i\in [\ell ]},\{\psi _{0,i}\}_{i\in [5]},\tau \right) \end{aligned}$$
  • \(\mathsf {KeyGen}( \mathsf {msk},\mathsf {BP})\): The key-generation algorithm takes as input the master secret key \( \mathsf {msk}\) and a description of a branching program:

    $$\begin{aligned} \mathsf {BP}:=\big ( \mathbf {{v}}_0, \left\{ \mathsf {var}(t),\{\gamma _{t,i,0},\gamma _{t,i,1}\}_{i\in [5]}\right\} _{t\in [L]} \big ) \end{aligned}$$

    The secret key \( \mathsf {sk}_\mathsf {BP}\) is computed as follows.

    1.

      Homomorphically compute a “public key” matrix associated with the branching program:

      $$\begin{aligned} \mathbf {V}_\mathsf {BP}\leftarrow \mathsf {Eval}_{\mathsf {pk}}(\mathsf {BP},\{\mathbf {A}_i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i}\}_{i\in [5]},\mathbf {A}^{\mathsf {c}}) \end{aligned}$$
    2.

      Let \(\mathbf {F}=\left[ \mathbf {A}||(\mathbf {V}_\mathsf {BP}+\mathbf {G})\right] \in {\mathbb Z}_q^{n\times 2m}\). Compute \(\mathbf {r}_\mathrm{{out}}\leftarrow \mathsf {SampleLeft}(\mathbf {A},(\mathbf {V}_\mathsf {BP}+\mathbf {G}),\mathbf {T}_\mathbf {A},\mathbf {u},\alpha )\) such that \(\mathbf {F}\cdot \mathbf {r}_\mathrm{{out}}= \mathbf {u}\).

    3.

      Output the secret key for the branching program as

      $$\begin{aligned} \mathsf {sk}_\mathsf {BP}:=(\mathsf {BP},\mathbf {r}_\mathrm{{out}}) \end{aligned}$$
  • \(\mathsf {Dec}( \mathsf {sk}_\mathsf {BP},\mathsf {ct}_\mathbf {{x}})\): The decryption algorithm takes as input the secret key for a branching program \( \mathsf {sk}_\mathsf {BP}\) and a ciphertext \(\mathsf {ct}_\mathbf {{x}}\). If \(\mathsf {BP}(\mathbf {{x}})=0\), output \(\perp \). Otherwise,

    1.

      Homomorphically compute the encoding of the result \(\mathsf {BP}(\mathbf {{x}})\) associated with the public key of the branching program:

      $$\begin{aligned} \psi _\mathsf {BP}\leftarrow \mathsf {Eval}_{\mathsf {en}}(\mathsf {BP},\mathbf {{x}}, \{\mathbf {A}_i,\psi _i\}_{i\in [\ell ]}, \{\mathbf {V}_{0,i},\psi _{0,i}\}_{i\in [5]}, (\mathbf {A}^{\mathsf {c}},\psi ^\mathsf {c})) \end{aligned}$$
    2.

      Finally, compute \(\phi =\mathbf {r}_\mathrm{{out}}^{\scriptscriptstyle \mathsf {T}}\cdot [\psi _0||\psi _\mathsf {BP}]\). As we show in Lemma 8, \(\phi = \mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ e_\phi \pmod q\), for a short \(e_\phi \).

    3.

      Output \(\mu =0\) if \(|\tau -\phi |<q/4\) and \(\mu =1\) otherwise.
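The final threshold test can be sketched directly. `round_decode` below is a hypothetical helper name (not part of the scheme), and the mask and noise values are toy numbers; it decodes \(\mu \) from the distance of \(\tau -\phi \) from 0 modulo q.

```python
q = 2**16

def round_decode(tau: int, phi: int) -> int:
    """Decode mu from tau - phi: near 0 -> mu = 0, near q/2 -> mu = 1."""
    d = (tau - phi) % q
    dist = min(d, q - d)              # distance of tau - phi from 0 mod q
    return 0 if dist < q // 4 else 1

# tau = u^T s + e + round(q/2) * mu and phi = u^T s + e_phi, so
# tau - phi = round(q/2) * mu + (e - e_phi) mod q.
us, e, e_phi = 12345, 3, -5           # toy values for the shared mask and noise
for mu in (0, 1):
    tau = (us + e + (q // 2) * mu) % q
    phi = (us + e_phi) % q
    assert round_decode(tau, phi) == mu
```

Correctness thus reduces to bounding \(|e - e_\phi |\) below q/4, which is exactly what Lemma 8 establishes.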

4.1 Correctness

Lemma 8

Let \(\mathcal {BP}\) be a family of width-5 permutation branching programs of length at most L and let \(\mathcal {ABE}=(\mathsf {Params},\mathsf {Setup},\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\) be our attribute-based encryption scheme. For an LWE dimension \(n = n(\lambda )\), if the parameters for \(\mathcal {ABE}\) are instantiated as follows (according to Sect. 5):

$$\begin{aligned} \chi&= D_{\mathbb {Z},\sqrt{n}}&B&= O(n)&m&= O(n\log q) \\ q&= \tilde{O}(n^7\cdot L^2)&\alpha&=\tilde{O}(n \log q)^2 \cdot L \end{aligned}$$

then the scheme \(\mathcal {ABE}\) is correct, according to the definition in Sect. 2.2.

Proof

We have to show that the decryption algorithm outputs the correct message \(\mu \), given a secret key and a ciphertext such that \(\mathsf {BP}(\mathbf {{x}})=1\).

From Lemma 4, we have that \(\psi _\mathsf {BP}=(\mathbf {V}_\mathsf {BP}+\mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}_\mathsf {BP}\) since \(\mathsf {BP}(\mathbf {{x}})=1\). Also, from Lemma 5, we know that \(\left\| {\mathbf {{e}}_\mathsf {BP}} \right\| _{\infty }= O(m\cdot L \cdot (m \cdot B))= O(m^2 \cdot L \cdot B)\) since our input encodings have noise terms bounded by \(m \cdot B\). Thus, the noise term in \(\phi \) is bounded by:

$$\begin{aligned} \left\| {e_\phi } \right\| _{\infty }= & {} m \cdot \big (\mathsf {Noise}_{\mathbf {A},0}(\psi )+\mathsf {Noise}_{\mathbf {V}_\mathsf {BP},1}(\psi _\mathsf {BP}) \big ) \cdot \left\| {\mathbf {r}_\mathrm{{out}}} \right\| _{\infty }\\= & {} m \cdot (B+ O(m^2\cdot L \cdot B)) \cdot \tilde{O}(n \log q)^2 \cdot L \sqrt{m}\\= & {} O\big ((n \log q)^6 \cdot L^2 \cdot B \big ) \end{aligned}$$

where \(m=O(n\log q)\) and \(\left\| {\mathbf {r}_\mathrm{{out}}} \right\| _{\infty } \le \alpha \sqrt{m} = \tilde{O}(n \log q)^2 \cdot L \sqrt{m}\) according to Sect. 5. Hence, we have

$$\begin{aligned} |\tau -\phi |\le \left\| {e} \right\| _{\infty }+\left\| {e_\phi } \right\| _{\infty } = O\big ((n \log q)^6 \cdot L^2 \cdot B \big )\le q/4 \end{aligned}$$

Clearly, the last inequality is satisfied when \(q=\tilde{O}(n^7 \cdot L^2)\). Hence, decryption proceeds correctly, outputting the correct \(\mu \).

4.2 Security Proof

Theorem 5

For any \(\ell \) and any length bound L, the \(\mathcal {ABE}\) scheme defined above satisfies selective security (game 3) for any family of branching programs \(\mathsf {BP}\) of length L with \(\ell \)-bit inputs, assuming hardness of \(\mathsf {dLWE}_{n,q,\chi }\) for sufficiently large \(n=\mathrm{poly}(\lambda )\), \(q=\tilde{O}(n^7 \cdot L^2)\) and a \(\mathrm{poly}(n)\)-bounded error distribution \(\chi \). Moreover, the size of the secret keys grows polynomially with L (and is independent of the width of \(\mathsf {BP}\)).

Proof

We define a series of hybrid games, where the first and the last games correspond to the real experiments encrypting the messages \(\mu _0\) and \(\mu _1\), respectively. We show that consecutive games are indistinguishable except with negligible probability. Recall that in a selective security game, the challenge attribute vector \(\mathbf {{x}}^*\) is declared before the \(\mathsf {Setup}\) algorithm is run, and every branching program for which the adversary requests a secret key must satisfy \(\mathsf {BP}(\mathbf {{x}}^*)=0\). First, we define the auxiliary simulated \(\mathcal {ABE}^*\) algorithms.

  • \(\mathsf {Setup}^*(1^\lambda , 1^\ell , 1^L, \mathbf {{x}}^*)\): The simulated setup algorithm takes as input the security parameter \(\lambda \), the challenge attribute vector \(\mathbf {{x}}^*\), its length \(\ell \) and the maximum length of the branching program L.

    1.

      Choose a random matrix \(\mathbf {A}\leftarrow {\mathbb Z}_q^{n \times m}\) and a random vector \(\mathbf {u}\leftarrow {\mathbb Z}_q^n\).

    2.

      Let \(\mathbf {G}\in {\mathbb Z}_q^{n \times m}\) be the primitive matrix with the public trapdoor basis \(\mathbf {T}_\mathbf {G}\).

    3.

      Choose \(\ell +6\) random matrices \(\{\mathbf {R}_i\}_{i\in [\ell ]}, \{\mathbf {R}_{0,i}\}_{i \in [5]}, \mathbf {R}^{\mathsf {c}}\) from \(\{-1,1\}^{m \times m}\) and set

      (a)

        \(\mathbf {A}_i=\mathbf {A}\cdot \mathbf {R}_i-x_i^* \cdot \mathbf {G}\) for \(i\in [\ell ]\),

      (b)

        \(\mathbf {V}_{0,i}=\mathbf {A}\cdot \mathbf {R}_{0,i}-\mathbf {{v}}_0[i]\cdot \mathbf {G}\) for \(i\in [5]\) where \(\mathbf {{v}}_0=[1,0,0,0,0]\),

      (c)

        \(\mathbf {A}^{\mathsf {c}}=\mathbf {A}\cdot \mathbf {R}^{\mathsf {c}}-\mathbf {G}\).

    4.

      Output the master public key

      $$\begin{aligned} \mathsf {mpk}:=\left( \mathbf {A},\mathbf {A}^{\mathsf {c}},\{\mathbf {A}_i\}_{i\in [\ell ]},\{\mathbf {V}_{0,i}\}_{i\in [5]},\mathbf {G}, \mathbf {u}\right) \end{aligned}$$

      and the secret key

      $$\begin{aligned} \mathsf {msk}:=\left( \mathbf {{x}}^*, \mathbf {A}, \mathbf {R}^{\mathsf {c}}, \{\mathbf {R}_i\}_{i\in [\ell ]},\{\mathbf {R}_{0,i}\}_{i\in [5]} \right) \end{aligned}$$
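The primitive matrix \(\mathbf {G}\) with public trapdoor used above is, in its standard instantiation, \(\mathbf {G}= \mathbf {I}_n \otimes (1,2,\ldots ,2^{k-1})\), whose trapdoor supports the bit-decomposition map \(\mathbf {G}^{-1}\). A minimal sketch (assuming q is a power of two), showing \(\mathbf {G}\cdot \mathbf {G}^{-1}(\mathbf {V})=\mathbf {V}\bmod q\):

```python
import numpy as np

n, q = 4, 2**10
k = q.bit_length() - 1            # q = 2^k, so k bits per Z_q entry
m = n * k

# Gadget matrix G = I_n kron (1, 2, 4, ..., 2^{k-1})  (standard instantiation)
g = 2 ** np.arange(k)
G = np.kron(np.eye(n, dtype=np.int64), g)          # n x m

def g_inv(V):
    """Bit-decompose each entry of V (n x c) into a {0,1} matrix of shape m x c."""
    V = np.asarray(V) % q
    bits = (V[:, None, :] >> np.arange(k)[None, :, None]) & 1
    return bits.reshape(m, V.shape[1])

rng = np.random.default_rng(1)
V = rng.integers(0, q, (n, n))
assert np.array_equal((G @ g_inv(V)) % q, V % q)   # G * G^{-1}(V) = V mod q
```

In schemes of this type, the homomorphic evaluation procedures multiply encodings by such binary decompositions, which is one ingredient in keeping the noise growth small.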
  • \(\mathsf {Enc}^*( \mathsf {mpk}, \mathbf {{x}}^*, \mu )\): The simulated encryption algorithm takes as input \( \mathsf {mpk}, \mathbf {{x}}^*\) and the message \(\mu \). It computes the ciphertext using the knowledge of short matrices \(\{\mathbf {R}_i\}, \{\mathbf {R}_{0,i}\}, \mathbf {R}^{\mathsf {c}}\) as follows.

    1.

      Choose a vector \(\mathbf {{s}}\in {\mathbb Z}_q^n\) at random.

    2.

      Choose noise vector \(\mathbf {{e}}\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}\chi ^m\) and compute \(\psi _0=\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}}\).

    3.

      Compute an encoding of the constant 1 as \(\psi ^\mathsf {c}=\left( \mathbf {A}^{\mathsf {c}}+\mathbf {G}\right) ^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+(\mathbf {R}^{\mathsf {c}})^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\).

    4.

      For each bit \(i\in [\ell ]\) of the attribute vector, compute

      $$\begin{aligned} \psi _i=(\mathbf {A}_i+x_i \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\end{aligned}$$
    5.

      For all \(i\in [5]\), encode the bits of the initial state configuration vector \(\mathbf {{v}}_0 = [1,0,0,0,0]\)

      $$\begin{aligned} \psi _{0,i}=(\mathbf {V}_{0,i}+\mathbf {{v}}_0[i] \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {R}_{0,i}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\end{aligned}$$
    6.

      Encrypt the message \(\mu \) as \(\tau =\mathbf {u}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+e+\left\lfloor q/2 \right\rceil \mu \), where \(e \leftarrow \chi \).

    7.

      Output the ciphertext

      $$\begin{aligned} \mathsf {ct}=\left( \mathbf {{x}}^*,\psi _0,\psi ^\mathsf {c},\{\psi _i\}_{i\in [\ell ]},\{\psi _{0,i}\}_{i\in [5]},\tau \right) \end{aligned}$$
  • \(\mathsf {KeyGen}^*( \mathsf {msk},\mathsf {BP})\): The simulated key-generation algorithm takes as input the master secret key \( \mathsf {msk}\) and the description of the branching program \(\mathsf {BP}\). It computes the secret key \( \mathsf {sk}_\mathsf {BP}\) as follows.

    1.

      Obtain a short homomorphically derived matrix associated with the output public key of the branching program:

      $$\begin{aligned} \mathbf {R}_\mathsf {BP}\leftarrow \mathsf {Eval}_{\mathsf {SIM}}\big (\mathsf {BP}, \mathbf {{x}}^*, \{\mathbf {R}_i\}_{i\in [\ell ]},\{\mathbf {R}_{0,i}\}_{i\in [5]},\mathbf {R}^{\mathsf {c}}, \mathbf {A}\big ) \end{aligned}$$
    2.

      By the correctness of \(\mathsf {Eval}_{\mathsf {SIM}}\), we have \(\mathbf {V}_\mathsf {BP}=\mathbf {A}\mathbf {R}_\mathsf {BP}- \mathsf {BP}(\mathbf {{x}}^*)\cdot \mathbf {G}\). Let \(\mathbf {F}=\left[ \mathbf {A}||(\mathbf {V}_\mathsf {BP}+\mathbf {G})\right] \in {\mathbb Z}_q^{n\times 2m}\). Compute \(\mathbf {r}_\mathrm{{out}}\leftarrow \mathsf {SampleRight}(\mathbf {A},\mathbf {G},\mathbf {R}_\mathsf {BP},\mathbf {T}_\mathbf {G},\mathbf {u},\alpha )\) such that \(\mathbf {F}\cdot \mathbf {r}_\mathrm{{out}}= \mathbf {u}\) (this step relies on the fact that \(\mathsf {BP}(\mathbf {{x}}^*) = 0\)).

    3.

      Output the secret key for the branching program

      $$\begin{aligned} \mathsf {sk}_\mathsf {BP}:=(\mathsf {BP},\mathbf {r}_\mathrm{{out}}) \end{aligned}$$

Game Sequence. We now define a series of games and then prove that each pair of consecutive games, Game i and Game i+1, is either statistically or computationally indistinguishable.

  • Game 0: The challenger runs the real ABE algorithms and encrypts message \(\mu _0\) for the challenge index \(\mathbf {{x}}^*\).

  • Game 1: The challenger runs the simulated ABE algorithms \(\mathsf {Setup}^*, \mathsf {KeyGen}^*, \mathsf {Enc}^*\) and encrypts message \(\mu _0\) for the challenge index \(\mathbf {{x}}^*\).

  • Game 2: The challenger runs the simulated ABE algorithms \(\mathsf {Setup}^*, \mathsf {KeyGen}^*\), but chooses a uniformly random element of the ciphertext space for the challenge index \(\mathbf {{x}}^*\).

  • Game 3: The challenger runs the simulated ABE algorithms \(\mathsf {Setup}^*, \mathsf {KeyGen}^*, \mathsf {Enc}^*\) and encrypts message \(\mu _1\) for the challenge index \(\mathbf {{x}}^*\).

  • Game 4: The challenger runs the real ABE algorithms and encrypts message \(\mu _1\) for the challenge index \(\mathbf {{x}}^*\).

Lemma 9

The view of an adversary in Game 0 is statistically indistinguishable from Game 1. Similarly, the view of an adversary in Game 4 is statistically indistinguishable from Game 3.

Proof

We give the proof for Game 0 and Game 1; the other case is identical. First, note the differences between the games:

  • In Game 0, the matrix \(\mathbf {A}\) is sampled using the \(\mathsf {TrapSamp}\) algorithm and the matrices \(\mathbf {A}_i, \mathbf {A}^{\mathsf {c}}, \mathbf {V}_{0,j} \in {\mathbb Z}_q^{n \times m}\) are chosen at random for \(i\in [\ell ],j\in [5]\). In Game 1, the matrix \(\mathbf {A}\in \mathbb {Z}_q^{n \times m}\) is chosen uniformly at random and the matrices are set to \(\mathbf {A}_i = \mathbf {A}\mathbf {R}_i - x_i^* \cdot \mathbf {G}\), \(\mathbf {A}^{\mathsf {c}}= \mathbf {A}\mathbf {R}^{\mathsf {c}}- \mathbf {G}\), \(\mathbf {V}_{0,j} = \mathbf {A}\mathbf {R}_{0,j} - \mathbf {{v}}_0[j] \cdot \mathbf {G}\) for randomly chosen \(\mathbf {R}_i,\mathbf {R}^{\mathsf {c}},\mathbf {R}_{0,j} \in \{-1,1\}^{m \times m}\).

  • In Game 0, each ciphertext component is computed as:

    $$\begin{aligned} \psi _i= & {} (\mathbf {A}_i + x_i^* \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}_i = (\mathbf {A}_i + x_i^* \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\\ \psi ^\mathsf {c}= & {} (\mathbf {A}^{\mathsf {c}}+ \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}^\mathsf {c}= (\mathbf {A}^{\mathsf {c}}+ \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ (\mathbf {R}^{\mathsf {c}})^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\\ \psi _{0,j}= & {} (\mathbf {V}_{0,j} + \mathbf {{v}}_0[j] \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}_{0,j} = (\mathbf {V}_{0,j} + \mathbf {{v}}_0[j] \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {R}_{0,j}^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}\end{aligned}$$

    On the other hand, in Game 1 each ciphertext component is computed as:

    $$\begin{aligned} \psi _i = (\mathbf {A}_i + x_i^* \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}= (\mathbf {A}\mathbf {R}_i)^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}= \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\big ( \mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}\big ) \end{aligned}$$

    Similarly, \(\psi ^\mathsf {c}=(\mathbf {R}^{\mathsf {c}})^{\scriptscriptstyle \mathsf {T}}(\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}})\) and \(\psi _{0,j}=\mathbf {R}_{0,j}^{\scriptscriptstyle \mathsf {T}}(\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+\mathbf {{e}})\).

  • Finally, in Game 0 the vector \(\mathbf {r}_\mathrm{{out}}\) is sampled using the \(\mathsf {SampleLeft}\) algorithm, whereas in Game 1 it is sampled using the \(\mathsf {SampleRight}\) algorithm.

For sufficiently large \(\alpha \) (see Sect. 5), the distributions produced in the two games are statistically indistinguishable. This follows readily from [2, Lemma 4.3] and Theorems 3 and 4. We refer to the full version [29] for a detailed proof.
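The algebraic identity driving this argument — \((\mathbf {A}_i + x_i^* \cdot \mathbf {G})^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\mathbf {{e}}= \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}(\mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}})\) when \(\mathbf {A}_i = \mathbf {A}\mathbf {R}_i - x_i^* \cdot \mathbf {G}\) — can be checked mechanically. A sketch with toy parameters (\(\mathbf {G}\) is a random stand-in here, since the gadget structure plays no role in the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, q = 8, 32, 2**16                  # toy parameters
A = rng.integers(0, q, (n, m))
G = rng.integers(0, q, (n, m))          # random stand-in for the gadget matrix
s = rng.integers(0, q, n)
e = rng.integers(-2, 3, m)              # small noise vector
x_star = 1

R = rng.choice([-1, 1], (m, m))
A_i = (A @ R - x_star * G) % q          # simulated public matrix

# Game 0 form: (A_i + x_i* G)^T s + R^T e
lhs = ((A_i + x_star * G).T @ s + R.T @ e) % q
# Game 1 form: R^T (A^T s + e)
rhs = (R.T @ ((A.T @ s + e) % q)) % q
assert np.array_equal(lhs, rhs)         # identical mod q
```

The two games thus differ only in how \(\mathbf {A}\), the public matrices, and \(\mathbf {r}_\mathrm{{out}}\) are sampled, not in the ciphertext components themselves.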

Lemma 10

If the decisional \(\mathsf {LWE}\) assumption holds, then the view of an adversary in Game 1 is computationally indistinguishable from Game 2. Similarly, if the decisional \(\mathsf {LWE}\) assumption holds, then the view of an adversary in Game 3 is computationally indistinguishable from Game 2.

Proof

Assume there exists an adversary \(\mathrm {Adv}\) that distinguishes between Game 1 and Game 2. We show how to use it to break the \(\mathsf {LWE}\) problem, given a challenge \(\{ (\mathbf {{a}}_i, y_i) \}_{i \in [m+1]}\) where each \(y_i\) is either a random sample in \({\mathbb Z}_q\) or \(\mathbf {{a}}_i^{\scriptscriptstyle \mathsf {T}}\cdot \mathbf {{s}}+ e_i\) (for a fixed, random \(\mathbf {{s}}\in {\mathbb Z}_q^n\) and a noise term \(e_i \leftarrow \chi \) sampled from the error distribution). Let \(\mathbf {A}= [ \mathbf {{a}}_1, \mathbf {{a}}_2, \ldots , \mathbf {{a}}_m ] \in {\mathbb Z}_q^{n \times m}\) and \(\mathbf {u}= \mathbf {{a}}_{m+1}\). Let \(\psi _0^* = [ y_1, y_2, \ldots , y_m ]\) and \(\tau = y_{m+1} + \mu \left\lfloor q/2 \right\rceil \).

Now, run the simulated \(\mathsf {Setup}^*\) algorithm where \(\mathbf {A}, \mathbf {u}\) are as defined above. Run the simulated \(\mathsf {KeyGen}^*\) algorithm. Finally, to simulate the challenge ciphertext set \(\psi _0^*, \tau \) as defined above and compute

$$\begin{aligned} \psi _i = \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\cdot \psi _0^* = \mathbf {R}_i^{\scriptscriptstyle \mathsf {T}}\big ( \mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}}\big ) \end{aligned}$$

for \(i\in [\ell ]\). Similarly, \(\psi ^\mathsf {c}=(\mathbf {R}^{\mathsf {c}})^{\scriptscriptstyle \mathsf {T}}( \mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}})\) and \(\psi _{0,j}=\mathbf {R}_{0,j}^{\scriptscriptstyle \mathsf {T}}( \mathbf {A}^{\scriptscriptstyle \mathsf {T}}\mathbf {{s}}+ \mathbf {{e}})\), for \(j\in [5]\). Note that if the \(y_i\)’s are \(\mathsf {LWE}\) samples, then this corresponds exactly to Game 1. Otherwise, by the leftover hash lemma, the challenge ciphertext is distributed as an independent uniformly random element of the ciphertext space, as in Game 2. Thus, an adversary that distinguishes between Game 1 and Game 2 can be used to break the decisional LWE assumption with almost the same advantage. The computational indistinguishability of Game 3 and Game 2 follows from the same argument.
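The embedding of the challenge can be sketched as follows. The challenge oracle is simulated locally with a flag `b` (LWE samples vs. uniform), purely for illustration, and the dimensions and noise range are toy values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, q, mu = 8, 32, 2**16, 1            # toy parameters; mu is the message bit

def challenge(b):
    """Return {(a_i, y_i)}_{i in [m+1]}: LWE samples if b == 1, uniform if b == 0."""
    a = rng.integers(0, q, (m + 1, n))
    if b:
        s = rng.integers(0, q, n)        # the hidden LWE secret
        e = rng.integers(-2, 3, m + 1)   # small noise, stand-in for chi
        y = (a @ s + e) % q
    else:
        y = rng.integers(0, q, m + 1)
    return a, y

a, y = challenge(b=1)
A, u = a[:m].T, a[m]                     # A = [a_1, ..., a_m], u = a_{m+1}
psi0_star = y[:m]                        # psi_0^* = [y_1, ..., y_m]
tau = (y[m] + mu * (q // 2)) % q         # tau = y_{m+1} + mu * round(q/2)

# The remaining components are derived from psi_0^* with the secret sign matrices:
R = rng.choice([-1, 1], (m, m))
psi_i = (R.T @ psi0_star) % q
```

The reduction never needs the secret \(\mathbf {{s}}\): every ciphertext component is a public linear function of the challenge values.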

Thus, Game 0 and Game 4 are computationally indistinguishable by a standard hybrid argument, and hence no adversary can distinguish between encryptions of \(\mu _0\) and \(\mu _1\) with non-negligible advantage, establishing the selective security of our ABE scheme.

5 Parameter Selection

This section provides a concise description of the selection of parameters for our scheme, so that both correctness (see Lemma 8) and security (see Theorem 5) hold.

For a family of width-5 permutation branching programs \(\mathcal {BP}\) of bounded length L and LWE dimension n, the parameters can be chosen as follows (we start with an arbitrary q and instantiate it later):

  • We set \(m=O(n \log q)\). The error distribution is \(\chi =D_{\mathbb {Z},\sqrt{n}}\) with Gaussian parameter \(\sigma =\sqrt{n}\), and the error bound is \(B=O(\sigma \sqrt{n})=O(n)\).

  • The “large” Gaussian parameter \(\alpha =\alpha (n,L)\) is chosen such that the outputs of the \(\mathsf {SampleLeft}\) and \(\mathsf {SampleRight}\) algorithms are statistically indistinguishable from each other, when provided with the same inputs \(\mathbf {F}\) and \(\mathbf {u}\). The \(\mathsf {SampleRight}\) algorithm (Algorithm 3) requires

    $$\begin{aligned} \alpha > \Vert {\mathbf {T}_\mathbf {G}}\Vert _{\mathsf {GS}} \cdot \left\| {\mathbf {R}_\mathsf {BP}} \right\| \cdot \omega (\sqrt{\log m}) \end{aligned}$$
    (3)

    From Lemma 7, we have that \(\left\| {\mathbf {R}_\mathsf {BP}} \right\| _{\infty }= O(m \cdot L)\). Then, we get:

    $$\begin{aligned} \left\| {\mathbf {R}_\mathsf {BP}} \right\| := \sup _{\mathbf {x}\in S^{m-1}} \left\| {\mathbf {R}_\mathsf {BP}\cdot \mathbf {x}} \right\| \le m \cdot \left\| {\mathbf {R}_\mathsf {BP}} \right\| _{\infty } \le O(m^2 \cdot L) \end{aligned}$$

    Finally, from Eq. 3, the value of \(\alpha \) required for the \(\mathsf {SampleRight}\) algorithm is

    $$\begin{aligned} \alpha \ge O(m^2 \cdot L) \cdot \omega (\sqrt{\log m}) \end{aligned}$$
    (4)

    The value of the parameter \(\alpha \) required for the \(\mathsf {SampleLeft}\) algorithm (Algorithm 3) is

    $$\begin{aligned} \alpha \ge \Vert {\mathbf {T}_\mathbf {A}}\Vert _{\mathsf {GS}} \cdot \omega (\sqrt{\log 2m}) \ge O(\sqrt{n\log q})\cdot \omega (\sqrt{\log 2m}) \end{aligned}$$
    (5)

    Thus, to satisfy both Eqs. 4 and 5, we set the parameter

    $$\begin{aligned} \alpha \ge O(m^2 \cdot L) \cdot \omega (\sqrt{\log m})=\tilde{O}(n\log q)^2 \cdot L \end{aligned}$$

When our scheme is instantiated with these parameters, the correctness (see Lemma 8) of the scheme is satisfied when \(O((n \log q)^6 \cdot L^2 \cdot B)< q/4\). Clearly, this condition is satisfied when \(q=\tilde{O}(n^7 L^2)\).
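The parameter chain of this section can be traced numerically. In the sketch below every hidden constant is set to 1 and the \(\omega (\sqrt{\log m})\) factor is taken as \(\sqrt{\ln m}\); since \(\tilde{O}(\cdot )\) also hides \(\mathrm{poly}(\log )\) factors that are dropped here, the resulting numbers are illustrative only and imply nothing about concrete security.

```python
import math

def choose_params(n, L):
    """Illustrative parameter chain with every hidden constant set to 1."""
    q = n ** 7 * L ** 2                          # q = O~(n^7 * L^2); poly(log)
                                                 # factors of the O~ are dropped
    m = n * math.ceil(math.log2(q))              # m = O(n log q)
    B = n                                        # B = O(sigma * sqrt(n)) = O(n)
    alpha = m ** 2 * L * math.sqrt(math.log(m))  # alpha >= O(m^2 L) * w(sqrt(log m))
    return q, m, B, alpha

q, m, B, alpha = choose_params(n=64, L=16)
```

Note how q grows only polynomially in n and L, which is the source of the efficiency advantage over instantiations requiring a sub-exponential modulus.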

6 Extensions

We note a few possible extensions of our basic construction that lead to further efficiency improvements. First, we can support branching programs of arbitrary width by appropriately increasing the dimension of the state vector in the encryption. Second, we can switch to an arithmetic setting, as was done in [10].