(Inner-Product) Functional Encryption with Updatable Ciphertexts

We propose a novel variant of functional encryption which supports ciphertext updates, dubbed ciphertext-updatable functional encryption (CUFE). Such a feature further broadens the practical applicability of the functional-encryption paradigm and allows for fine-grained access control even after a ciphertext is generated. Updating ciphertexts is carried out via so-called update tokens which a dedicated party can use to convert ciphertexts. However, allowing update tokens requires some care in the security definition. Our contribution is threefold: (a) We define our new primitive with a security notion in the indistinguishability setting. Within CUFE, functional decryption keys and ciphertexts are labeled with tags such that decryption succeeds only if the tags of the decryption key and the ciphertext match. Furthermore, we allow ciphertexts to switch their tag to any other tag via update tokens. Such tokens are generated by the holder of the main secret key and can only be used in the desired direction. (b) We present a generic construction of CUFE for any functionality, as well as for predicates different from equality testing on tags, which relies on the existence of indistinguishability obfuscation (iO). (c) We present a practical construction of CUFE for the inner-product functionality from standard assumptions (i.e., LWE) in the random-oracle model. On the technical level, we build on the recent functional encryption schemes with fine-grained access control and linear operations on encrypted data (Abdalla et al., AC'20) and introduce an additional ciphertext-updatability feature. Proving security for such a construction turned out to be non-trivial, particularly when revealing keys for the updated challenge ciphertext is allowed. Overall, our construction enriches the set of known inner-product functional encryption schemes with the additional updatability feature of ciphertexts.


Introduction
Functional encryption [19,52,55] is an exciting encryption paradigm that allows fine-grained access control over encrypted data. In contrast to conventional encryption, which is all-or-nothing, in functional encryption (FE) there is a main secret key msk that allows to generate constrained functional decryption keys. More precisely, every decryption key sk_f is associated with a function f, and given an encryption Enc(mpk, x) of some message x under the main public key mpk, decryption with sk_f only reveals f(x), but nothing more about x. Since its introduction, FE has been subject to intense study which can broadly be categorized into two areas. Firstly, works that consider general functionalities and thereby mostly focus on feasibility results. This typically results in constructions beyond practical interest, as they rely on indistinguishability obfuscation (iO) or need to impose severe restrictions on the number of keys given to an adversary. Secondly, works that restrict the power by only supporting limited classes of functions that are of particular interest for practical applications, i.e., linear and quadratic functions. Here, the main focus is on concrete and efficient constructions. One such approach that attracted a lot of research are FE schemes for the inner-product functionality (IPFE), i.e., keys are associated with vectors y, messages are vectors x, and decryption reveals ⟨x, y⟩. Initially proposed by Abdalla et al.
[2], a line of work improved the security guarantees [3,13,15,16,24] and extended it to the multi-input [8,12] as well as the decentralized setting [4-6,22,46]. Although this functionality is very simple, it has already been shown to be useful in privacy-preserving machine learning [49], money-laundering detection [31], search in encrypted data streams [18], video data analytics, or data marketplaces [43].

Limitations of large-scale deployment of FE A problem for the practical adoption of FE is that every issued functional decryption key inherently leaks some information. For the inner-product functionality and thus IPFE, this is particularly problematic. Specifically, if n is the dimension of the vectors, then obtaining n decryption keys in general allows to recover the full plaintext. Consequently, as soon as IPFE is deployed in some larger-scale setting, this represents a severe limitation. To mitigate this problem and make IPFE more practical, Abdalla, Catalano, Gay, and Ursu [9] recently introduced the notion of IPFE with fine-grained access control, providing strong security guarantees. Loosely speaking, the idea is that ciphertexts are produced with respect to an access policy (e.g., expressed by monotone span programs) and decryption keys are, in addition to being bound to a function, also associated with an attribute. Decryption then only works if the attribute in the key satisfies the access policy in the ciphertext. It is important to stress that when aiming for reasonable security which allows collusion of functional decryption keys, this approach is non-trivial, as a naive composition of IPFE with attribute-based encryption (ABE) or identity-based encryption (IBE) suffers from simple mix-and-match attacks. Abdalla et al. provide pairing-based attribute-based constructions covering monotone span programs (AB-IPFE) and lattice-based identity-based constructions (IB-IPFE). Nguyen et al.
[51] propose more efficient pairing-based constructions and investigate the approach of Abdalla et al. in a multi-client setting. Recently, Lai et al. [45] as well as Pal and Dutta [53] also presented lattice-based AB-IPFE constructions.
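To make the leakage limitation above concrete: with functional keys for n linearly independent vectors y_1, ..., y_n, the revealed evaluations ⟨x, y_i⟩ determine the plaintext x uniquely, since recovering x is just solving a linear system. A minimal numpy sketch (no encryption involved; it only illustrates the information-theoretic leakage):

```python
import numpy as np

# Plaintext vector that an IPFE ciphertext encrypts (dimension n).
x = np.array([3.0, -1.0, 4.0, 1.5])
n = len(x)

# Suppose an adversary collected functional keys for n linearly
# independent vectors y_1, ..., y_n (the rows of Y) and learned the
# decryption results <x, y_i>.
Y = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
evals = Y @ x  # what the n functional decryptions reveal

# Solving the linear system recovers the full plaintext.
x_recovered = np.linalg.solve(Y, evals)
assert np.allclose(x_recovered, x)
```

This is exactly why partitioning keys per identity/attribute, as in [9], makes the limitation far less severe: the bound of n keys applies within each partition rather than globally.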
This concept of Abdalla et al. firstly mitigates the leakage problem of plain IPFE, as now this inherent limitation on the number of issued functional decryption keys only applies per identity in IB-IPFE (or per attribute policy in AB-IPFE). This can be viewed as partitioning the keys such that the aforementioned limitation applies to each of these partitions, making it much more scalable. Secondly, it more closely reflects the situation in large-scale systems where, even in the case of FE, one wants to enforce more fine-grained control over who is allowed to learn some particular information about the encrypted plaintexts. Thirdly, this concept overcomes the problem of a trivial approach, i.e., encrypting data separately under an IPFE public key for each recipient, which would result in a linear blow-up of the ciphertexts.

Motivation towards more flexibility in fine-grained access control Abdalla et al. [9] make an important step towards applicability of FE in large-scale systems. But it still seems limited when it comes to dynamic aspects. For instance, the medical example used in [9] envisions that doctors in a hospital may be able to compute on a different set of encrypted data than employees of a health insurance company. What happens if the access to data for the insurance company should be expanded? This would either mean encrypting all the data anew under a policy that is satisfied by the insurance company or issuing additional keys to the insurance company. While in this medical setting this might still be manageable, there are other examples where this seems hard to achieve.
Let us therefore consider the emerging domain of data marketplaces. These are platforms that allow customers to buy access to data or statistical analysis on data offered by a potentially huge set of data owners via data brokers. The available data sets can range from business intelligence and research, demographic or health, firmographic, and market data to public data. (IP)FE seems to be an interesting tool for this application. But while the use of IPFE (in a multi-client setting) has recently been proposed in [43] to realize a privacy-aware data marketplace, it does so in a way that reveals the evaluations in plain to the data brokers. Now, one could imagine using the approach in [9] to let data owners encrypt their data under certain policies (or identities), whereas data buyers are given functional keys (with respect to a certain identity or attribute) and data brokers basically only distribute the data (and possibly perform some aggregation tasks). Still, it seems cumbersome to have fine-grained control over what buyers can access if the access policies are fixed in the ciphertexts.
We now envision that, in addition to having fine-grained control, we allow the data brokers to update the policies (attributes/identities) in existing ciphertexts in order to add more flexibility. Let us now focus on the specific case of policies being represented via the equality predicate, and thus ciphertexts and function keys are labeled and decryption yields the function of the message if both labels match. We call those labels tags, and one can also think of these labels as identities (as done in [9]). Data brokers should have the capability to update ciphertexts in a way that they can change the tags in ciphertexts using some additional information (called an update token), but they should not learn the function evaluations, and thus the privacy of the data of the owners is guaranteed. To keep fine-grained control over ciphertext updates in such a broker scenario, we want to restrict the updates of a ciphertext to a single update and the token to only work in one direction, i.e., from tag t to t′ but not vice versa. Thus, already updated ciphertexts cannot be updated anymore. While it is possible to consider schemes that support multiple updates and/or bidirectional tokens, we believe that this is rather dangerous in such applications. For instance, this could allow moving ciphertexts to tags for which they were not intended, e.g., from a tag t to t′ and then to t″ via two updates, whereas it might not be intended that it is possible to move all ciphertexts from t to t″, but rather only ones under t to t′ and ones under t′ to t″.
We note that this functionality goes beyond what is provided by IPFE with fine-grained access control due to Abdalla et al. [9], as in their work ciphertexts are not updatable, i.e., they do not straightforwardly provide the possibility that a tag (identity) in a ciphertext can be changed. But as we will see, the work in [9] can serve as a starting point for our lattice-based construction. We note that a trivial construction based upon [9] that encrypts a message multiple times under different tags (identities) in parallel fails to provide the desired functionality. In particular, it does not allow to dynamically decide to which tag a ciphertext can be updated, as the desired tags would have to be known at the time of producing the ciphertext, something that we want to avoid in our approach to solve the above problem. Consequently, we are looking for a solution where we can potentially switch a ciphertext to any tag from a large (i.e., exponential) tag space.
Since (IP)FE schemes that achieve the desired properties are currently absent from the cryptographic literature, in this work we ask: Can we define and construct (IP)FE schemes with fine-grained access control and ciphertext updatability?

Our Contribution
We answer the above question affirmatively via our threefold contribution. (a) We define a new primitive dubbed ciphertext-updatable functional encryption (CUFE) along with a security notion in the indistinguishability setting. Within CUFE, functional decryption keys and ciphertexts are labeled with tags such that decryption succeeds only if the tag in the decryption key and the ciphertext match. Furthermore, we allow fresh ciphertexts to update their tag t to any other tag t′ via so-called update tokens. An update token from t to t′ is generated by the holder of the main secret key and can only be used in the desired direction, i.e., from t to t′. In a nutshell, the distinguishing feature is that we allow changing the tag after a ciphertext was generated (which is not known to be achieved by existing work). (b) We present a generic construction of CUFE for any functionality and more powerful predicates than equality testing on tags, which relies on the existence of indistinguishability obfuscation (iO). (c) We present a practical construction of CUFE for the inner-product functionality from standard assumptions (i.e., the learning-with-errors (LWE) assumption) in the random-oracle model. Proving security for such a construction turned out to be non-trivial, particularly when revealing keys for the updated challenge ciphertext is allowed. In general, this further enriches the line of work of Abdalla et al. [9] with the updatability feature of ciphertexts. Notably, our construction relies on lattice-based assumptions which are plausibly post-quantum.
Defining ciphertext updatability for FE CUFE can be seen as a tag-based FE scheme with tag space T. As in FE, key generation outputs a main public-secret key pair (mpk, msk), where the decryption keys sk_{f,t} for some function f ∈ F and tag t ∈ T are derived from msk. In CUFE, however, msk is also used to derive update tokens Δ_{t→t′}. Now, encryption takes some tag t and message x and outputs a ciphertext C_t. Then, any honest-but-curious party can use the update token Δ_{t→t′} to update C_t to C_{t′} without learning anything about the encrypted message. Correctness guarantees that if the tags of the function key and the ciphertext match, and only a single update has happened, then decryption succeeds and outputs f(x).
Defining security needs some care, as we want that tokens can update ciphertexts only toward the tag specified in the update token, and an updated ciphertext should not be allowed to be further updated. That is, a token Δ_{t→t′} can only switch tags from t to t′ and not vice versa. As in the work of Abdalla et al. [9], the adversary is allowed to query decryption keys for any functionality f such that the function evaluation on the challenge ciphertext yields f(x_0) = f(x_1), for adversarially chosen messages x_0, x_1, if the policy is fulfilled. In our constructions, we restrict the policy to the equality test on the tags of the functional decryption key and the ciphertext (we discuss extensions in Sect. 4.3), which ensures simple access control for our envisioned applications.
Concerning updated ciphertexts, we have the following situation. Since the concept of update tokens is not foreseen in conventional forms of FE, we need to consider additional aspects for our security notions. We have to deal with the fact that tokens can potentially not only be used to update ciphertexts from some tag t to another tag t′, but could also be used to invert a ciphertext update. This is partly reminiscent of providing adequate and strong security guarantees in proxy re-encryption (PRE) [26,28]. Having those in mind, we define an indistinguishability-based notion, IND-CUFE-CPA, which guarantees that an adversary cannot distinguish ciphertexts for a certain challenge target tag and adversarially chosen messages.
More concretely, as outlined in our motivation, we only want to allow updating the tags of ciphertexts once and only in one direction. In order to capture these properties, we provide the adversary, in addition to a key-generation oracle (as in plain FE), with access to additional oracles. Firstly, we allow the adversary to adaptively query corrupted and honest update tokens, and we also provide encryption and honest-ciphertext-update oracles. Furthermore, we naturally want to allow the adversary to see decryption keys for honestly updated challenge ciphertexts.
We show that we can prove our CUFE construction from LWE secure in such a model for the inner-product functionality. Indeed, the tricky part in the proof is to allow the adversary to retrieve functional decryption keys for honestly updated challenge ciphertexts (i.e., it does not see the update token, but has access to an update oracle; see below for a detailed discussion). We note that our iO-based construction satisfies the security model for any functionality (see below).

CUFE for any function from iO The starting point of our construction is the (semi-adaptively secure) FE construction due to Waters [57], which relies on indistinguishability obfuscation (iO) and the punctured programming approach. The main ingredient of Waters' construction is a primitive called puncturable deterministic encryption (PDE), which can be constructed from puncturable PRFs using the hidden trigger mechanism of Sahai and Waters [56]. A PDE scheme is a symmetric and deterministic encryption scheme which additionally has the feature that, given a key k_pde and a pair of messages m_0, m_1, one can produce a punctured key k_pde^{m_0,m_1} that can decrypt all ciphertexts except for those encrypting either m_0 or m_1.
Using PDE, one can construct a (semi-adaptively secure) FE scheme as follows: The setup algorithm samples a puncturable PRF key k_prf for a function F, which it sets as the main secret key, and generates an obfuscation of the program PInit, which it sets as the main public key. The program PInit takes as input a randomness r, computes a point p = PRG(r), derives a PDE key as k_pde = F(k_prf, p), and outputs the pair (p, k_pde). The encryption algorithm can then use the obfuscated program PInit to encrypt a message m by first sampling a randomness r, running the obfuscated program on r to receive (p, k_pde), and finally computing the ciphertext as C := (p, c := Enc_pde(k_pde, m)). The functional secret key sk_f, for a function f, is also created as an obfuscation of a program PKey, which has f hardcoded. This program takes as input a ciphertext C := (p, c), uses p to derive the key k_pde, decrypts c using k_pde to obtain the message m, and finally outputs f(m). Hence, the decryption algorithm simply involves running the obfuscated program PKey on the ciphertext.
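Ignoring obfuscation entirely, the data flow of this scheme can be mocked up as follows. This is a functionality-only toy of our own making: the "programs" are plain Python closures holding k_prf (hence utterly insecure), SHA-256 stands in for the PRG and PRF, and the "PDE" is a hash-derived one-time pad, which is deterministic but has none of the puncturing security. All names are ours, not the paper's.

```python
import hashlib, os

def prg(r):  # stand-in PRG
    return hashlib.sha256(b"prg" + r).digest()

def F(k, p):  # stand-in (puncturable) PRF
    return hashlib.sha256(b"prf" + k + p).digest()

def enc_pde(k_pde, m):  # toy deterministic encryption: XOR with a
    pad = hashlib.sha256(b"pde" + k_pde).digest()[:len(m)]  # hash pad
    return bytes(a ^ b for a, b in zip(m, pad))  # messages <= 32 bytes

dec_pde = enc_pde  # XOR pad: decryption equals encryption

# Setup: k_prf is the main secret key; PInit would be published as an
# obfuscated program (here: a plain closure, which is why this is a mock).
k_prf = os.urandom(32)

def PInit(r):
    p = prg(r)
    return p, F(k_prf, p)

def Enc(m):
    p, k_pde = PInit(os.urandom(32))
    return (p, enc_pde(k_pde, m))

def PKey(f):  # functional key: "obfuscated" program with f hardcoded
    def run(C):
        p, c = C
        k_pde = F(k_prf, p)          # re-derive the PDE key from p
        return f(dec_pde(k_pde, c))  # output only f(m)
    return run

sk_len = PKey(len)   # key for the function f = len
C = Enc(b"hello")
print(sk_len(C))     # 5
```

The point of the real construction is precisely that iO lets one publish PInit and PKey without exposing k_prf; the mock only shows why decryption with sk_f reveals f(m) and nothing is output beyond that value.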
In order to introduce tags for the ciphertexts, a first step is to extend PDE to a tag-based variant that we dub puncturable tag-based deterministic encryption (PTDE). It works analogously to PDE, except that the ciphertexts are associated with tags and puncturing happens not only at a pair of messages m_0, m_1, but also at a tag t. Hence, a punctured key k_ptde^{t,m_0,m_1} can decrypt all ciphertexts except for those encrypting either m_0 or m_1 under the tag t. Now, the challenging part is to update the ciphertexts. In order to ensure that an updated ciphertext cannot be updated anymore, we use two different puncturable PRF keys as part of the main secret key: k_{prf,o} for the original ciphertexts and k_{prf,u} for the updated ciphertexts. Analogous to the aforementioned construction of Waters [57], these PRF keys are used to derive PTDE keys in our case. For the update operation, we now need to switch a ciphertext encrypted under the key k_ptde (derived from k_{prf,o}) and tag t to a new ciphertext under the key k′_ptde (derived from k_{prf,u}) and tag t′. In order to do this, we introduce a third program, called PUpdate, which, given as input a ciphertext C_t (under a tag t) and a randomness r, first decrypts the input ciphertext C_t, and then re-encrypts it deterministically under the new key k′_ptde and tag t′ to produce the updated ciphertext C_{t′}. Due to the deterministic nature of the used cryptographic primitives, such as PTDE and puncturable PRFs, we can rely solely on (plain) iO for the update operation, instead of requiring probabilistic iO [25].

CUFE for inner-products from standard assumptions The starting point for the construction from standard assumptions is the identity-based inner-product functional encryption scheme from the LWE assumption by Abdalla et al. [9]. Their construction essentially combines the LWE-based inner-product FE scheme from Agrawal et al.
[15] (we will refer to this scheme as ALS) with an LWE-based IBE scheme, e.g., the IBEs from [37] or [1]. The latter is especially of interest for us: Starting from a public key A, it is possible to derive an identity-specific matrix A_id for some identity id. This A_id describes a trapdoor function for which it is hard to compute a short preimage. Yet, given the trapdoor for A, which is stored as part of the main secret key, it is possible to derive sk_id as a trapdoor for A_id. Notably, sk_id is a matrix which can be projected to functional decryption keys for inner products ⟨·, y⟩, hence giving sk_{id,y}.
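The projection step can be illustrated numerically: if sk_id is a short matrix Z satisfying A_id·Z = U (mod q) for some public matrix U, then for any vector y the projection z_y = Z·y is still short and satisfies A_id·z_y = U·y (mod q). The following toy numpy check samples everything in the "easy" direction (short Z first, then U := A_id·Z); real schemes must instead sample a short Z via the trapdoor, and all dimensions and moduli below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

q = 2**13 - 1                  # toy modulus (not a real parameter choice)
rng = np.random.default_rng(0)

n, m, ell = 8, 32, 4                      # illustrative dimensions
A_id = rng.integers(0, q, size=(n, m))    # identity-specific public matrix
Z = rng.integers(-2, 3, size=(m, ell))    # short secret matrix sk_id
U = (A_id @ Z) % q                        # public relation: A_id * Z = U (mod q)

y = rng.integers(-5, 6, size=ell)         # function vector for <., y>
z_y = Z @ y                               # projected functional key sk_{id,y}

# The projection satisfies the same relation with syndrome U*y, and it
# stays short because both Z and y have small entries.
assert np.array_equal((A_id @ z_y) % q, (U @ y) % q)
print(np.max(np.abs(z_y)))  # small compared to q
```

This is exactly the convenient algebra that lets a per-identity (or, later, per-tag) trapdoor matrix be specialized into a decryption key for a single inner product.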
While this idea incidentally gives rise to a tag-based inner-product FE construction, producing update tokens to transform ciphertexts from the source to the target tag is non-obvious. We want to note, however, that this is one of the core challenges that is solved by proxy re-encryption in the public-key encryption setting. It is, however, non-trivial to combine a proxy re-encryption scheme with a functional encryption scheme without running into issues with collusion. Indeed, consider a black-box approach that combines both worlds by encrypting the FE ciphertext with a PRE. Now, consider two colluding users t and t′ who have functional secret keys for distinct functions f and f′. If a ciphertext is re-encrypted to t, they can use their PRE secret key to remove the PRE layer. Then, both t and t′ can evaluate their functions by simply sharing the decapsulated FE ciphertext. Therefore, a CUFE scheme requires tighter intertwining of the two concepts to prevent mix-and-match-style and other attacks.
Still, ideas found in lattice-based proxy re-encryption constructions help us to turn ALS combined with tag-based keys into a secure CUFE. We quickly revisit the construction by Fan and Liu [33] of a tag-based proxy re-encryption scheme. Their idea is to set up the user-specific matrices from a global public matrix A. Given such a fixed matrix A, the matrix for a user u is set to be A_u = [A | A_{u,1} | A_{u,2}], where A_{u,i} = −A·R_{u,i} with R_{u,i}, for i = 1, 2, contained in the secret key. Encryption follows a dual-Regev approach [37] based on the user-dependent matrix A_u and a freshly sampled random tag t ∈ T. Re-encryption keys from user u to user u′ are generated by sampling matrices X_01, X_02, X_11, X_12 using R_{u,1}, R_{u,2} such that a suitable re-encryption relation holds for any matrix B. In their construction, h is a map used to describe the "ciphertext level" (either freshly generated, h(1), or updated, h(2)), whereas B stems from a function producing matrices on input of a tag and the map h. Using as tag space T a large set with the "unit differences" property, as introduced in [47], i.e., for any t_i, t_j ∈ T with t_i ≠ t_j, one has that h(t_i − t_j) = h(t_i) − h(t_j) ∈ Z_q^{n×n} is invertible, Fan and Liu prove their construction secure in the standard model. Their proof strategy crucially relies on the "unit differences" property together with the fact that the scheme is tag-based: The "challenge" tag, i.e., the tag associated with the challenge ciphertext, is randomly sampled at the beginning of the security game, and the public parameters are produced by embedding this "challenge" tag in them. This allows the reduction to correctly answer any allowed adversary query, while at the same time embedding an LWE instance in the challenge ciphertext.
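A simple concrete map with the flavor of the "unit differences" property, given purely as our own illustration (it is not the instantiation from [33,47]): take h(t) = t·I_n over Z_q with q prime. Then h is additive, and h(t_i) − h(t_j) = (t_i − t_j)·I_n is invertible mod q exactly when t_i ≠ t_j (mod q).

```python
import numpy as np

q, n = 97, 4                       # toy prime modulus and dimension

def h(t):
    return (t * np.eye(n, dtype=int)) % q   # h(t) = t * I_n over Z_q

ti, tj = 13, 58
D = (h(ti) - h(tj)) % q            # equals h(ti - tj) by additivity of h
assert np.array_equal(D, h(ti - tj))

# (t_i - t_j) * I_n is invertible mod q iff t_i != t_j (mod q):
det_mod_q = round(np.linalg.det(D)) % q
assert det_mod_q != 0              # invertible, since 13 != 58 (mod 97)
```

Invertibility of these tag differences is what lets the reduction in [33] relate matrices for distinct tags and answer queries consistently.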
The setting of CUFE is, however, vastly different in nature: ciphertexts are not equipped with levels, there are no per-user public keys, and tags have a different meaning; in particular, they are not randomly sampled at encryption time, but are specified by the encryptor. Yet, this method of setting up the matrices such that one can update dual-Regev-style ciphertexts from one matrix to another is helpful to construct the update tokens. Additionally, with dual-Regev-inspired ciphertexts we are also able to set up keys as matrices in such a way that we can first sample a tag-specific trapdoor from the main secret key, which is then projected to a functional secret key. Consequently, our construction intertwines the functional encryption features of ALS with tag-based ciphertext updates in a non-black-box manner.
As the construction is not black-box, neither is the proof. First, we move to the random-oracle model in order to embed the challenge tag in the public parameters, even though in our setting this tag is specified by the encryptor, by crucially exploiting the fact that the reduction can guess the challenge tag among the random-oracle queries made by the adversary. Given this modification, the main technical challenge in the proof comes from having to produce updates of the challenge ciphertext and function keys for the respective target tags. Embedding an ALS instance (as done for the challenge identity in [9]) for each of these tags does not work, as the different instances would have to be related in order to simulate the derived matrices of these tags correctly. On the other hand, using a single ALS instance to simulate function keys for multiple tags leads, if done in the trivial way, to function keys that are related to each other, and thus again to a view for the adversary that is distinguishable from the expected one. However, this drawback can be overcome by "re-randomizing" the function keys in a way that "hides" the function key provided by the ALS challenger (similarly to Lai et al. [45]). In this way, the adversary's view is indistinguishable from that in the real experiment. We remark that since the reduction needs to perform guesses in order to correctly produce public parameters and answer the adversary's queries, one has to make sure that the probability space over which the reduction needs to guess has at most polynomial size. In particular, this constraint will allow us to prove the lattice-based scheme secure, but only against adversaries that can request at most a bounded number of update tokens per tag and honest updates of the challenge ciphertext. We discuss these restrictions in more detail in Sect. 5.
On the other hand, while this is certainly a limitation in general, for the concrete applications we envision one can always set parameters so that these bounds are large enough to accommodate the requirements of real-world scenarios.

Related Work
While we are not aware of any previous work that tries to achieve the desired goals via ciphertext updatability, a related concept is that of controlled functional encryption (C-FE) [50]. This approach enhances FE with an authority that needs to be involved in the decryption process and thus allows fine-grained control over which ciphertexts can be decrypted by the holder of a functional key. Consequently, the access control is enforced by the authority, and by dynamically changing which user is allowed to decrypt which ciphertexts, one can view this as achieving similar goals as with ciphertext updatability. However, the major difference is that C-FE requires an interactive decryption procedure between the user and the authority and thus requires the authority to be online and available all the time. This would potentially hinder scalability in large-scale systems. In contrast, our approach is oblivious to the users. Furthermore, the requirement of an always-online authority that needs to be fully trusted might be problematic and undesirable. This trust issue has recently been addressed by distributing the trust in the authority via the concept of Multi-Authority C-FE [11]; however, this incurs further communication overhead. Another related (but conceptually different) line of work is updating policies in ABE [34,42]. In general, these works combine ciphertext-policy ABE with PRE in order to update the policy associated with the ciphertext. However, these works neither consider (IP)FE schemes nor are they sufficient for our envisioned applications. Our work can be seen as a combination of IBE/ABE with FE augmented by updatability, and, hence, updatability needs to consider and tie both parts together.

Preliminaries
Notation For n ∈ N, let [n] := {1, . . ., n}, and let λ ∈ N be the security parameter. For a finite set S, we denote by s ← S the process of sampling s uniformly from S. Let y ← A(λ, x) be the process of running an algorithm A on input (λ, x) with access to uniformly random coins and assigning the result to y. (We may omit to mention the λ-input explicitly and assume that all algorithms take λ as input.) To make the random coins r explicit, we write A(λ, x; r). We use ⊥ to indicate that an algorithm terminates with an error and A^B when A has oracle access to B, where B may return ⊥ as a distinguished special symbol. We say an algorithm A is probabilistic polynomial time (PPT) if the running time of A is polynomial in λ. Given x ∈ Z^n, we denote by ‖x‖ its Euclidean norm, i.e., ‖x‖ := (Σ_i x_i²)^{1/2}. For a matrix R, we denote by R̃ the result of applying Gram-Schmidt orthogonalization to the columns of R. By ‖R‖ we denote the Euclidean norm of the longest column of R, and by s_1(R) its spectral norm, i.e., its largest singular value. A function f is negligible if its absolute value is smaller than the inverse of any polynomial (i.e., if for every polynomial p it holds that |f(λ)| < 1/p(λ) for all sufficiently large λ). We may write q = q(λ) if we mean that the value q depends polynomially on λ. Given two different distributions X and Y over a countable domain D, we denote their statistical distance as SD(X, Y) := ½ Σ_{d∈D} |Pr[X = d] − Pr[Y = d]|, and say that X and Y are SD(X, Y)-close.
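As a toy illustration, the statistical distance SD(X, Y) = ½ Σ_d |Pr[X = d] − Pr[Y = d]| of two distributions over a small finite domain can be computed directly from their probability tables:

```python
# Statistical distance SD(X, Y) = 1/2 * sum_d |Pr[X = d] - Pr[Y = d]|,
# with distributions given as dicts mapping domain elements to probabilities.
def sd(px, py):
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(d, 0.0) - py.get(d, 0.0)) for d in support)

uniform = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
skewed  = {0: 0.40, 1: 0.20, 2: 0.20, 3: 0.20}

assert abs(sd(uniform, skewed) - 0.15) < 1e-9  # 0.5 * (0.15 + 3 * 0.05)
assert sd(uniform, uniform) == 0.0
```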

Pseudorandom Generators
We recall the definition of a (Boolean) pseudorandom generator (PRG).

Definition 1. (Pseudorandom Generator) A stretch-m(·) pseudorandom generator is a (Boolean) function PRG : {0,1}* → {0,1}* mapping n-bit inputs to m(n)-bit outputs (m(n) is also known as the stretch) that is computable by a uniform PPT machine, such that for any non-uniform PPT adversary A there exists a negligible function negl, such that, for all n ∈ N, the following holds:

|Pr_{r ← {0,1}^n}[A(PRG(r)) = 1] − Pr_{z ← {0,1}^{m(n)}}[A(z) = 1]| ≤ negl(n).

Puncturable Pseudorandom Functions
Puncturable pseudorandom functions (PRFs), introduced by Sahai and Waters [56], are PRFs for which a key can be given out, such that it allows evaluation of the PRF on all inputs, except for a designated polynomial-size set of inputs.

Definition 2. (Puncturable PRFs [56]) A puncturable family of PRFs PRF is given by a triple of algorithms (Gen, F, Puncture) and a pair of computable functions n = n(λ) and m = m(λ), satisfying the following conditions:

Functionality preserved under puncturing For every PPT adversary A that outputs a set S ⊆ {0,1}^n, and for all x ∈ {0,1}^n with x ∉ S, we have that:

Pr[F(k, x) = F(k_S, x) : k ← PRF.Gen(1^λ), k_S ← PRF.Puncture(k, S)] = 1.

Pseudorandom at punctured points For every PPT adversary (A_1, A_2), where A_1 outputs a set S ⊆ {0,1}^n and a state σ, consider an experiment that samples k ← PRF.Gen(1^λ) and k_S ← PRF.Puncture(k, S). Then we have

|Pr[A_2(σ, k_S, S, F(k, S)) = 1] − Pr[A_2(σ, k_S, S, U_{m·|S|}) = 1]| ≤ negl(λ),

where F(k, S) denotes the concatenation of F(k, x_1), ..., F(k, x_{|S|}), such that x_1, ..., x_{|S|} is the enumeration of the elements of S in lexicographic order, and U_ℓ denotes the uniform distribution over ℓ bits.
The GGM tree-based PRF construction [36] from one-way functions yields a puncturable PRF where the punctured key sizes are polynomial in the size of the set S [20].
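A minimal GGM-style puncturable PRF can be sketched as follows (our toy sketch, with SHA-256 standing in for the length-doubling PRG, not a formal construction): evaluation walks a binary tree along the input bits, and the key punctured at a point x consists of the seeds of the siblings along x's path, which suffices to evaluate everywhere except at x itself.

```python
import hashlib

def G(seed, bit):  # stand-in length-doubling PRG, split by output half
    return hashlib.sha256(bytes([bit]) + seed).digest()

def F(k, x_bits):  # GGM evaluation: walk the tree along the input bits
    s = k
    for b in x_bits:
        s = G(s, b)
    return s

def puncture(k, x_bits):
    """Punctured key: for each prefix of x, the sibling node's seed."""
    sib, s = {}, k
    for i, b in enumerate(x_bits):
        sib[tuple(x_bits[:i]) + (1 - b,)] = G(s, 1 - b)
        s = G(s, b)
    return sib

def F_punctured(k_x, x_bits):
    """Evaluate from a punctured key; fails only at the punctured point."""
    for i in range(len(x_bits), 0, -1):
        prefix = tuple(x_bits[:i])
        if prefix in k_x:                 # found a stored sibling seed
            s = k_x[prefix]
            for b in x_bits[i:]:          # descend the rest of the path
                s = G(s, b)
            return s
    raise ValueError("input is the punctured point")

k = b"\x00" * 32
x = (1, 0, 1, 1)
k_x = puncture(k, x)       # key punctured at S = {x}
other = (1, 0, 1, 0)
assert F_punctured(k_x, other) == F(k, other)  # agrees off the punctured point
```

The punctured key stores one sibling seed per input bit, i.e., its size is polynomial in |S| and the input length, matching the statement above.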
In this work, we also make use of injective families of PRFs [56,57]: A statistically injective (puncturable) PRF family with failure probability ε(·) is a family of (puncturable) PRFs PRF such that, with probability 1 − ε(λ) over the random choice of the key k ← PRF.Gen(1^λ), we have that F(k, ·) is injective.
If the failure probability function ε(·) is not specified, then we assume that ε(·) is a negligible function in the security parameter λ. Sahai and Waters [56] showed that, assuming the existence of one-way functions, there exists a statistically injective puncturable PRF family with failure probability 2^{−Ω(λ)}.

Indistinguishability Obfuscation
We recall the definition of indistinguishability obfuscation.

Definition 4. (Indistinguishability Obfuscator [35]) A PPT algorithm iO is an indistinguishability obfuscator (iO) for a circuit class {C_λ}_{λ∈N} if it satisfies the following conditions:

Functionality For any security parameter λ ∈ N, any circuit C ∈ C_λ, and any input x, we have that

Pr[C′(x) = C(x) : C′ ← iO(λ, C)] = 1.

Indistinguishability For any PPT distinguisher D and for any pair of circuits C_0, C_1 ∈ C_λ of the same size that compute the same function, the advantage

|Pr[D(iO(λ, C_0)) = 1] − Pr[D(iO(λ, C_1)) = 1]|

is negligible in λ. We further say that iO is subexponentially secure if for any PPT D the above advantage is smaller than 2^{−λ^ε} for some 0 < ε < 1.

Ciphertext-Updatable Functional Encryption
We present our definitional framework of ciphertext-updatable functional encryption (CUFE). CUFE is a tag-based functional encryption (FE) scheme defined on a functionality F : X → Y and tag space T. Key generation outputs a main public-secret key pair (mpk, msk), from which the function keys sk_{f,t} for some function f ∈ F and tag t ∈ T can be derived. Encryption is done with respect to some tag t ∈ T and message x ∈ X. Now, if the tag of the function key and the ciphertext match, then decryption succeeds and outputs f(x). Furthermore, we want to allow switching the tag of a ciphertext once, i.e., from t to t′, which is carried out via tokens Δ_{t→t′}. Such a token can be used to update a ciphertext C_t to a ciphertext C_{t′} under the tag t′ specified in the token, but not vice versa, i.e., not from t′ to t.

Definition 5. A CUFE scheme CUFE for functionality F : X → Y with message space X and tag space T is a tuple of the following PPT algorithms:

Setup(λ, F): on input security parameter λ ∈ N and a class of functions F, the setup algorithm outputs a main public-secret key pair (mpk, msk).

KeyGen(msk, f, t): on input msk, function f ∈ F, and tag t ∈ T, the key generation algorithm outputs a function key sk_{f,t}.

TokGen(msk, t, t′): on input msk and tags t, t′ ∈ T, the token generation algorithm outputs an update token Δ_{t→t′}.
Enc(mpk, x, t): on input mpk, message x ∈ X, and tag t ∈ T, the encryption algorithm outputs a ciphertext C_t.

Update(Δ_{t→t′}, C_t): on input an update token Δ_{t→t′} and a ciphertext C_t, the update algorithm outputs an updated ciphertext C_{t′}.

Dec(sk_{f,t}, C_t): on input a function key sk_{f,t} and a ciphertext C_t, the decryption algorithm outputs f(x) or ⊥.

Remark 1. Notice that the correctness of the CUFE scheme only guarantees that non-updated ciphertexts for tag t can be updated to tag t′ using the update token Δ_{t→t′} and still be decrypted correctly. Looking ahead to the CPA security notion, this will be the only possible use of the update token. Any other successful use (e.g., updating ciphertexts in the reverse direction or updating already updated ciphertexts) would allow the adversary to win the security experiment (see below). Hence, a secure CUFE construction implies that the update token can only be used to update a non-updated ciphertext to an updated one (assuming the tags match), but not vice versa and not multiple times (i.e., "updating" an already updated ciphertext is not possible, as this would contradict CUFE security).
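To make the syntax concrete, the six algorithms can be sketched as a toy, deliberately insecure Python mock-up for the inner-product functionality. All names are illustrative; "ciphertexts" keep the plaintext in the clear and only model the tag mechanics (match on decryption, one-directional one-time update), not any actual encryption.

```python
import secrets

# Toy, INSECURE mock-up of the CUFE interface for f_y(x) = <x, y>.
# Tags and update tokens are modeled symbolically.

def setup():
    msk = secrets.token_hex(16)      # stands in for the main secret key
    mpk = "mpk"                      # stands in for the main public key
    return mpk, msk

def keygen(msk, y, tag):
    return {"y": y, "tag": tag}      # function key sk_{f,t}

def tokgen(msk, t, t_new):
    return {"from": t, "to": t_new}  # update token Delta_{t -> t'}

def enc(mpk, x, tag):
    return {"x": list(x), "tag": tag, "updated": False}

def update(token, ct):
    # a token works only in its stated direction and only once,
    # on a non-updated ciphertext
    if ct["updated"] or ct["tag"] != token["from"]:
        return None
    return {"x": ct["x"], "tag": token["to"], "updated": True}

def dec(sk, ct):
    if ct is None or sk["tag"] != ct["tag"]:
        return None                  # tags must match
    return sum(a * b for a, b in zip(ct["x"], sk["y"]))
```

A fresh ciphertext under t, updated via the token Δ_{t→t′}, decrypts under a key for t′; the mock-up also refuses reverse-direction and repeated updates, mirroring Remark 1.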
Intuition of our CPA security notion for CUFE. Updating ciphertexts via tokens is closely related to the realm of proxy re-encryption (PRE) [10,17] and, indeed, we start from the recent PRE state-of-the-art security model by Cohen [26] and carefully adapt it to our needs in the chosen-plaintext-attack indistinguishability setting. Moreover, since the updatability of ciphertexts, and thus the concept of update tokens, is not present in plain FE, we need to require additional security guarantees: such tokens could potentially be used to also switch function keys or even invert updated ciphertexts. In that vein, we define an indistinguishability-based notion, dubbed IND-CUFE-CPA, which guarantees that an adversary cannot distinguish ciphertexts for a certain target tag t* and adversarially chosen messages (x*_0, x*_1). We only want to allow updating the tags of ciphertexts via the token, only in one direction, and only from non-updated to updated ciphertexts. In order to capture these properties, we provide the adversary, in addition to KeyGen (as in plain FE), with access to four more oracles. Two of those additional oracles are related to the generation of tokens and the other two are needed to ensure security related to the updatability of honestly generated ciphertexts.
Concerning the oracles for token generation, we allow the adversary to adaptively query corrupted tokens via CorTokGen and honest tokens via HonTokGen. The former mirrors attacks where the adversary gets complete control over tokens, while the latter allows the adversary to query the generation of an honest token without access to the token itself.
Moreover, we also provide Enc and HonUpdate oracles. Thereby, Enc allows generating honest ciphertexts (under mpk) and HonUpdate allows updating ciphertexts which have been honestly generated via Enc, without revealing the update token to the adversary. Note that via HonTokGen, the adversary can query an honest token generation and the experiment can use such a token for the honest update.
The validity of the adversary is checked at the end of the security game. Essentially, the adversary is valid if and only if: (a) the adversary cannot trivially distinguish the challenge ciphertext, (b) the adversary has not received update tokens towards a tag t for the challenge ciphertext where it has queried function keys under t with f(x*_0) ≠ f(x*_1), and (c) the adversary has only queried updated challenge ciphertexts for which it holds function keys that satisfy f(x*_0) = f(x*_1). If the adversary is valid and has correctly guessed which message was encrypted in the challenge ciphertext, the adversary wins the game.

IND-CUFE-CPA security. We say that a CUFE scheme is IND-CUFE-CPA-secure if any PPT adversary succeeds in the following experiment only with probability negligibly larger than 1/2. The experiment starts by computing the initial main public and secret key pair (mpk, msk) ← Setup(λ, F), initializes empty sets K, C, UC, HT, CT to track keys, ciphertexts, updated ciphertexts, honest tokens, and corrupted tokens, respectively, and initializes the counters c, uc, ht, ct for ciphertexts, updated ciphertexts, honest tokens, and corrupted tokens, respectively.
At some point, the adversary outputs a target tag and messages (t*, x*_0, x*_1). Next, the experiment tosses a coin b, computes C* ← Enc(mpk, x*_b, t*), adds (0, C*, t*) to C, and gives C* to the adversary. The adversary eventually outputs a guess b′, and the experiment returns 1 if b′ = b and the adversary is valid. In the adaptive security game the adversary has full access to all oracles from the beginning, whereas in the selective security game the adversary only gets access to the oracles after committing to the target tag t* and challenge messages (x*_0, x*_1). The experiment is defined in Fig. 1.
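In symbols, with the experiment denoted Exp^{ind-cufe-cpa} (notation assumed), the requirement is that the adversary's distinguishing advantage be negligible:

```latex
\mathsf{Adv}^{\mathrm{ind\text{-}cufe\text{-}cpa}}_{\mathsf{CUFE},\mathcal{A}}(\lambda)
  := \left|\Pr\!\left[\mathsf{Exp}^{\mathrm{ind\text{-}cufe\text{-}cpa}}_{\mathsf{CUFE},\mathcal{A}}(\lambda)=1\right]-\frac{1}{2}\right|
  \le \mathsf{negl}(\lambda),
```

where the probability is over the coins of the experiment and of the adversary, and the experiment outputs 1 only for a valid adversary that guesses b correctly.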
Remark 2. We model the experiment semi-adaptively (i.e., the target tag and messages are chosen by the adversary before it has access to oracles, but after it has seen the main public key) as well as adaptively (i.e., the adversary has access to the oracles before specifying the target tag and messages). Note that in Fig. 1, we cover this by setting O_1 = ⊥, i.e., the adversary has no access to oracles in the first phase, or O_1 = O, i.e., the adversary has access to the oracles throughout both phases, respectively. We note that it would also be possible to define the experiment in a weaker selective setting, or to choose only the tag or only the messages in a semi-adaptive sense. This is straightforward to model and we omit it for the sake of simplicity. Moreover, we note that to move from a selective to an adaptive setting, one can utilize the standard technique of complexity leveraging if one is willing to accept that the message and/or tag spaces are polynomially bounded in the security parameter.

Generic Construction of CUFE and Extensions
In this section, we present a generic construction of CUFE for any function from indistinguishability obfuscation that provides semi-adaptive IND-CUFE-CPA security. For the sake of consistency, we opt to present it for the equality predicate on tags and then extend the expressiveness of predicates beyond equality testing on tags. We show that due to the way our construction is built, it easily supports any predicate that can be represented as a circuit of arbitrary polynomial size. Moreover, we conjecture that one can obtain adaptive FE security either by using the black-box transformation of Ananth et al. [7] along with applying complexity leveraging over the tag space, or by directly extending the adaptively secure FE construction of Waters [57] to the CUFE setting.

As a building block, we use a puncturable tag-based deterministic encryption (PTDE) scheme with message space M and tag space T, which consists of the following PPT algorithms:

- Setup(1^λ), on input a security parameter 1^λ, outputs a key k.
- Enc(k, t, m), on input a key k, a tag t ∈ T, and a message m, outputs a ciphertext c.
- Dec(k, t, c), on input a key k, a tag t ∈ T, and a ciphertext c, outputs a message m ∈ M ∪ {⊥}.
- Puncture(k, t, m_0, m_1), on input a key k, a tag t ∈ T, and a pair of messages m_0, m_1 ∈ M, outputs a new key k^{t,m_0,m_1} (the superscript is used to indicate the tag and messages where the key is punctured).
Correctness. We say that a PTDE scheme is correct if there exists a negligible function negl such that for all λ ∈ N, all t ∈ T, all pairs of messages m_0, m_1 ∈ M, all k ← PTDE.Setup(1^λ) and k^{t,m_0,m_1} ← PTDE.Puncture(k, t, m_0, m_1), and all m ≠ m_0, m_1, it holds that Pr[PTDE.Dec(k^{t,m_0,m_1}, t, PTDE.Enc(k, t, m)) = m] ≥ 1 − negl(λ). Moreover, we have that for all m (including m_0, m_1), Pr[PTDE.Dec(k, t, PTDE.Enc(k, t, m)) = m] ≥ 1 − negl(λ). Next, we recall the notion of (selective) indistinguishability security for PTDE.
Definition 8. (Indistinguishability Security for PTDE) A PTDE scheme is indistinguishability secure if, for all PPT adversaries A, the advantage of A in the following experiment is negligible in λ: A commits to a tag t and messages m_0, m_1 ∈ M, receives the punctured key k^{t,m_0,m_1} together with challenge ciphertexts (c_0, c_1), which are PTDE encryptions of (m_b, m_{1−b}) under tag t for a random bit b, and must output a guess b′ for b.

Remark 3. Our definition allows for a key to be punctured at two messages and a tag, which extends the original PDE definition given in [57] with tag puncturing. We note that this differs from the puncturable tag-based encryption of Chvojka et al. [23], which allows puncturing only at tags and constitutes a randomized encryption scheme.

Construction of PTDE
We extend the PDE construction given by Waters [57] to additionally consider tags. Our PTDE scheme has message space M = {0,1}^λ and tag space T = {0,1}^ℓ. We make use of two (puncturable) PRF families, where the first one is an injective puncturable PRF F_1 that maps λ-bit inputs to ℓ = ℓ(λ)-bit outputs, and the second one, F_2, maps ℓ-bit inputs to λ-bit outputs. The construction is as follows.
- Setup(1^λ): Sample keys k_1 ← PRF.Gen_{F_1}(1^λ) and k_2 ← PRF.Gen_{F_2}(1^λ), and output k := (k_1, k_2).

The correctness for non-punctured keys follows by inspection, and correctness for a punctured key k^{t,m_0,m_1} on all messages m ≠ m_0, m_1 holds as long as F_1(k_1, m) ∉ {F_1(k_1, m_0), F_1(k_1, m_1)}, which holds because F_1 is injective. The security follows straightforwardly from the (punctured) PRF security of F_1 and F_2 and is established with the following theorem.
Theorem 1. Let F_1 and F_2 be secure puncturable pseudorandom functions. Then, our construction is a (selectively) indistinguishability-secure PTDE scheme.
Proof. The security proof follows via a sequence of hybrid games. Hereafter, let Game_i ≈ Game_j denote that the adversary's advantage in the two games differs only negligibly.

- Game 0: This corresponds to the honest execution of the (selective) indistinguishability game of PTDE.
- Game 1: This is identical to Game 0 with the exception that the challenger replaces the values z_0 = F_1(k_prf,1, m_0) and z_1 = F_1(k_prf,1, m_1) used in the challenge ciphertexts with uniformly random values.
- Game 2: This is identical to Game 1 with the exception that the challenger additionally replaces the F_2 values used in the challenge ciphertexts with uniformly random values.

Lemma 1. If F_1 is a selectively secure puncturable PRF, then it holds that Game 0 ≈ Game 1.

Proof. We describe a PPT reduction algorithm B that plays the puncturable PRF security game. B receives from the PRF challenger a key punctured at {m_0, m_1} together with values z_0, z_1, which are either F_1(k_prf,1, m_0) and F_1(k_prf,1, m_1), for some PRF key k_prf,1, or uniformly random. B embeds z_0, z_1 in the challenge ciphertexts and otherwise proceeds as the honest challenger. If A wins, i.e., b′ = b, then B outputs 1 to indicate that z_0, z_1 were PRF values, and otherwise, it outputs 0 to indicate that z_0, z_1 were random values.
We observe that if z_0, z_1 are generated as z_0 = F_1(k_prf,1, m_0) and z_1 = F_1(k_prf,1, m_1), then B gives the view of Game 0 to A. Otherwise, if z_0 and z_1 were chosen randomly, then the view is that of Game 1. Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the puncturable PRF security game.

Lemma 2. If F_2 is a selectively secure puncturable PRF, then it holds that Game 1 ≈ Game 2.
Proof. The proof of this lemma follows analogously to that of Lemma 1.

We note that since the values embedded in the challenge ciphertexts are all chosen uniformly at random in Game 2, the challenge ciphertexts c_b := (c_{b,1}, c_{b,2}), for b ∈ {0,1}, information-theoretically hide the bit b. This final information-theoretic argument relies on the fact that the distribution of PRF.Puncture_{F_1}(k_prf,1, {m_0, m_1}) is the same as that of PRF.Puncture_{F_1}(k_prf,1, {m_1, m_0}). This concludes the proof of Theorem 1.
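For intuition, the two-PRF blueprint above can be mocked up in Python with HMAC-SHA256 standing in for both PRFs. Folding the tag into the input of F_2 is our illustrative guess at how tags could enter the scheme, not necessarily the paper's exact construction, and puncturing is omitted entirely:

```python
import hmac, hashlib, os

# Toy deterministic tag-based encryption in the style of Waters' PDE:
#   c1 = F1(k1, m)               (injective "fingerprint" of the message)
#   c2 = F2(k2, c1 || t) XOR m   (pad derived from c1 and the tag; the
#                                 tag position here is an assumption)
# HMAC-SHA256 plays both PRFs; messages are 32 bytes; puncturing omitted.

def _prf(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def setup():
    return os.urandom(32), os.urandom(32)   # (k1, k2)

def enc(k, t, m):
    k1, k2 = k
    c1 = _prf(k1, m)
    pad = _prf(k2, c1 + t)
    c2 = bytes(a ^ b for a, b in zip(pad, m))
    return c1, c2

def dec(k, t, c):
    k1, k2 = k
    c1, c2 = c
    pad = _prf(k2, c1 + t)
    m = bytes(a ^ b for a, b in zip(pad, c2))
    return m if _prf(k1, m) == c1 else None  # reject on tag mismatch
```

Encryption is deterministic, decryption under the correct tag recovers the message, and decryption under a wrong tag fails the consistency check (except with negligible probability).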

Generic CUFE from iO for any Function
The generic construction is inspired by the punctured-programming approach to constructing functional encryption from indistinguishability obfuscation given by Waters [57]. More precisely, the construction makes use of an indistinguishability obfuscator iO, a puncturable tag-based deterministic encryption (PTDE) scheme, a puncturable pseudorandom function F, and a pseudorandom generator PRG. The construction is described below (where the parts in blue in programs PInit:2, PKey:2, and PUpdate:2 highlight the changes with respect to programs PInit:1, PKey:1, and PUpdate:1):

- Setup(1^λ, F): Compute the obfuscated programs and output the main public/secret key pair (mpk, msk).
- Update(Δ_{t→t′}, C_t): Run the obfuscated update program and output the updated ciphertext C_{t′}.
- Dec(sk_{f,t} := P_{f,t}, C_t): Run the obfuscated program f(m) ← P_{f,t}(C_t) and output f(m).
Correctness. The correctness of our construction follows straightforwardly from the correctness of the puncturable tag-based deterministic encryption scheme, the puncturable pseudorandom function F, the pseudorandom generator PRG, the obfuscator iO, and the description of the programs PInit:1, PKey:1, and PUpdate:1. The programs are padded such that PInit has size max{|PInit:1|, |PInit:2|}, PKey has size max{|PKey:1|, |PKey:2|}, and PUpdate has size max{|PUpdate:1|, |PUpdate:2|}. We assume that it is easy to distinguish between updated and fresh ciphertexts. This is without loss of generality, as we can simply append a bit to the ciphertexts to achieve this distinguishability.

Next, we present the proof of IND-CUFE-CPA security of our generic construction.
Theorem 2. Let PTDE be a puncturable tag-based deterministic encryption scheme, F be a secure puncturable pseudorandom function, PRG be a secure pseudorandom generator, and iO be an indistinguishability obfuscator for the circuit class C_λ. Then, our generic construction is a semi-adaptively IND-CUFE-CPA-secure CUFE scheme.
Proof. The proof is organized in a sequence of hybrid games, where initially the challenger encrypts m_b for a random bit b ∈ {0,1}, and we gradually (in multiple steps) change the encryption into an encryption of m_0, which is independent of the bit b. We first define the sequence of games and then show (based on the security of the different primitives) that any PPT adversary's advantage in each game must be negligibly close to that in the previous game. Hereafter, let Game_i ≈ Game_j denote that the adversary's advantage in the two games differs only negligibly.

- Game 0: This corresponds to the honest execution of the semi-adaptive variant of the indistinguishability game given in the previous section.
- Game 1: This is identical to Game 0 with the exception that the value p* used in the challenge ciphertext is chosen uniformly at random from {0,1}^{2λ} instead of being computed as p* = PRG(r*) for random r* ∈ {0,1}^λ.
- Game 2: This is identical to Game 1 with the exception that the challenger publishes as mpk an obfuscation of the program PInit:2 (with the challenge values hardwired) instead of PInit:1.
- Game 3: This is identical to Game 2 with the exception that for answering each secret key query (f, t) ∈ (F × T), the challenger computes the PTDE encryptions c_0, c_1 of the challenge messages under tag t, lets (c_0, c_1) consist of these values in lexicographic order, and responds with P_{f,t} ← iO(PKey:2[...]).
- Game 4: This is identical to Game 3 with the exception that for answering each token generation query (t, t′) ∈ (T × T), the challenger computes c′_0, c′_1 with k′_ptde ← F(k_prf,u, r′) for random r′ ∈ {0,1}^λ, then sorts and orders c_0, c_1, c′_0, c′_1 lexicographically, and responds with the obfuscated update program.
- Game 5: This is identical to Game 4 with the exception that the value z used in the challenge ciphertext is chosen uniformly at random instead of being computed as F(k_prf, p*).
- Game 6: This is identical to Game 5 with the exception that the challenger always encrypts m*_0 in the challenge ciphertext, independently of the bit b.

Lemma 3. If PRG is a secure pseudorandom generator, then it holds that Game 0 ≈ Game 1.

Proof. We describe a PPT reduction algorithm B that plays the PRG security game. First, B creates the main public/secret key pair (mpk, msk) (as in Game 0). Next, B receives a PRG challenge p ∈ {0,1}^{2λ}. Then, B runs the adversary A and executes the CUFE security game (as described in Game 0), with the exception that when computing the challenge ciphertext it sets p* := p. We note that since B generates everything else itself (as in Game 0), it has all the necessary information to answer the oracle queries of A. Lastly, if A wins, i.e., b′ = b, then B outputs 1 to indicate that p was in the image of PRG, and otherwise, it outputs 0 to indicate that p was chosen randomly. We observe that if the PRG challenger generated p = PRG(r), for some r ∈ {0,1}^λ, then B gives the view of Game 0 to A.
Otherwise, if p was chosen randomly, then the view is that of Game 1. Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the PRG security game.

Lemma 4. If iO is an indistinguishability obfuscator for the circuit class C_λ, then it holds that Game 1 ≈ Game 2.
Proof. We construct a distinguisher B for iO. B proceeds as in Game 1, with the exception that it computes the punctured PRF key k_prf(p*) and generates the two circuits C_0 := PInit:1[...] and C_1 := PInit:2[...]. It submits both circuits to the iO challenger and receives back a program P, which it sets as mpk := P_pp := P, and returns it to the CUFE adversary A. The rest of the execution is identical to Game 1. If A wins, i.e., b′ = b, then B outputs 0 to indicate that P was an obfuscation of C_0, and otherwise, it outputs 1 to indicate that P was an obfuscation of C_1.
We observe that if the iO challenger generated P as an obfuscation of C_0, then B gives the view of Game 1 to A. Otherwise, if P was generated as an obfuscation of C_1, then the view is that of Game 2. Moreover, the programs are functionally equivalent with all but negligible probability, because p* lies outside the image of PRG with probability at least 1 − 2^{−λ}. Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the iO security game for the circuit class C_λ.

Lemma 5. If iO is an indistinguishability obfuscator for the circuit class C_λ, then it holds that Game 2 ≈ Game 3.
Proof. To prove this lemma, we consider a hybrid argument. Let Q_k = Q_k(λ) denote the number of secret key queries issued by the CUFE adversary A. For i ∈ [0, Q_k], we define Game 2,i to be equivalent to Game 2 with the exception that the first i secret key queries are handled as in Game 3 and the last Q_k − i are handled as in Game 2. Note that Game 2,0 is the same as Game 2 and Game 2,Q_k is the same as Game 3. Hence, to prove security we need to establish that no adversary can distinguish between Game 2,i and Game 2,i+1, for i ∈ [0, Q_k − 1], with non-negligible advantage.
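The hybrid chain is tied together by the standard triangle inequality (notation as above):

```latex
\big|\Pr[\mathsf{Game}_2 = 1] - \Pr[\mathsf{Game}_3 = 1]\big|
  \;\le\; \sum_{i=0}^{Q_k-1}
  \big|\Pr[\mathsf{Game}_{2,i} = 1] - \Pr[\mathsf{Game}_{2,i+1} = 1]\big|,
```

so if each consecutive pair of hybrids is computationally indistinguishable and Q_k is polynomial in λ, the whole sum remains negligible.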
We construct a distinguisher B for iO. B proceeds as in Game 2, except that the first i secret key queries are answered as in Game 3. For query i + 1, B computes the PTDE encryptions c_0, c_1 of the challenge messages under tag t, where f and t are the queried function and tag, respectively. Then, letting (c_0, c_1) consist of these values in lexicographic order, B generates the two circuits C_0 := PKey:1[...] and C_1 := PKey:2[...] and submits them to the iO challenger, receiving back a program P, which it sets as sk_{f,t} := P_{f,t} := P, and returns it to the CUFE adversary A as the query answer. If A wins, i.e., b′ = b, then B outputs 0 to indicate that P was an obfuscation of C_0, and otherwise, it outputs 1 to indicate that P was an obfuscation of C_1.
We observe that if the iO challenger generated P as an obfuscation of C_0, then B gives the view of Game 2,i to A. Otherwise, if P was generated as an obfuscation of C_1, then the view is that of Game 2,i+1. Moreover, the programs are functionally equivalent with all but negligible probability, because the only difference between the programs is that the response is hardwired for the two inputs (i.e., for the challenge ciphertexts). Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the iO security game for the circuit class C_λ.

Lemma 6. If iO is an indistinguishability obfuscator for the circuit class C_λ, then it holds that Game 3 ≈ Game 4.
Proof. To prove this lemma, we consider a hybrid argument. Let Q_t denote the total number of token generation queries issued by the CUFE adversary A, where Q_ht and Q_ct denote the numbers of honest and corrupted token generation queries, respectively. For i ∈ [0, Q_t], we define Game 3,i to be equivalent to Game 3 with the exception that the first i token generation queries are handled as in Game 4 and the last Q_t − i are handled as in Game 3. Note that Game 3,0 is the same as Game 3 and Game 3,Q_t is the same as Game 4. Hence, to prove security we need to establish that no adversary can distinguish between Game 3,i and Game 3,i+1, for i ∈ [0, Q_t − 1], with non-negligible advantage.
We construct a distinguisher B for iO. B proceeds as in Game 3, except that the first i token generation queries are answered as in Game 4. For query i + 1, B computes c′_γ ← PTDE.Enc(k′_ptde, t′, m*_γ) for γ ∈ {0,1}, with k′_ptde ← F(k_prf,u, r′) and random r′ ∈ {0,1}^λ, where t, t′ are the queried tags. Then, B sorts and orders c_0, c_1, c′_0, c′_1 lexicographically, generates the two circuits C_0 := PUpdate:1[...] and C_1 := PUpdate:2[...], and submits them to the iO challenger, receiving back a program P, which it sets as Δ_{t→t′} := P_{t→t′} := P. If the query was a corrupted token generation query, then B sends Δ_{t→t′} to the CUFE adversary A as the query answer, and otherwise, it stores it locally. If A wins, i.e., b′ = b, then B outputs 0 to indicate that P was an obfuscation of C_0, and otherwise, it outputs 1 to indicate that P was an obfuscation of C_1.
We observe that if the iO challenger generated P as an obfuscation of C_0, then B gives the view of Game 3,i to A. Otherwise, if P was generated as an obfuscation of C_1, then the view is that of Game 3,i+1. Moreover, the programs are functionally equivalent with all but negligible probability, because the only difference between the programs is that the response is hardwired for the two inputs (i.e., for the challenge ciphertexts). Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the iO security game for the circuit class C_λ.

Lemma 7. If F is a selectively secure puncturable PRF, then it holds that Game 4 ≈ Game 5.
Proof. We describe a PPT reduction algorithm B that plays the selective puncturable PRF security game. B proceeds as in Game 4 in its interaction with the CUFE adversary A, except that it chooses a random p* ∈ {0,1}^{2λ} and submits it to the punctured PRF challenger. B receives back a punctured PRF key k_prf(p*) together with a value z, which is either F(k_prf, p*) or random. We observe that if z is generated as F(k_prf, p*), then B gives the view of Game 4 to A. Otherwise, if z was chosen randomly, then the view is that of Game 5. Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the puncturable PRF security game.

Lemma 8. If PTDE is a selectively secure puncturable tag-based deterministic encryption scheme, then it holds that Game 5 ≈ Game 6.
Proof. We note that the only difference between Game 5 and Game 6 is that in Game 6 the CUFE challenger always encrypts m*_0, whereas in Game 5 the encrypted message could be m*_0 or m*_1, depending on the coin flip b. Moreover, when b = 0, the views of the two games are identical. Hence, if there is any difference in adversary A's advantage in guessing b between Game 5 and Game 6, it must be conditioned solely on b = 1.
We describe a PPT reduction algorithm B that plays the selective puncturable tag-based deterministic encryption (PTDE) security game. B proceeds as in Game 5, except that it submits the challenge messages m*_0, m*_1 and tag t* (given by A) to the PTDE challenger, which replies with a punctured PTDE key k*_ptde and challenge ciphertexts c_0, c_1. Let (c_0, c_1) consist of these values in lexicographic order. Then, for answering each secret key query (of the form (f, t)), B uses the punctured PTDE key k*_ptde to construct P_{f,t} := PKey:2[...]. Similarly, for answering each token generation query (of the form (t, t′)), B guesses a γ ∈ {0,1} and computes c′_γ ← PTDE.Enc(k′_ptde, t′, m*_γ) and c′_{1−γ} ← PTDE.Enc(k′_ptde, t′, m*_{1−γ}), for k′_ptde ← F(k_prf,u, r′) and random r′ ∈ {0,1}^λ. Then, B uses the previously computed values and the punctured PTDE key k*_ptde to construct and answer the token generation query. We note here that the guessed γ incurs a 1/2 security loss. Encryption queries are answered in a straightforward way using the program PInit:2[...]. Lastly, if A wins, i.e., b′ = b, then B outputs 1 to indicate that c* := c_0 was an encryption of m*_1, and otherwise, it outputs 0 to indicate that c* := c_0 was an encryption of m*_0. We observe that if c* := c_0 is generated as PTDE.Enc(k*_ptde, t*, m*_1), then B gives the view of Game 5 (conditioned on b = 1) to A. Otherwise, if c* := c_0 is generated as PTDE.Enc(k*_ptde, t*, m*_0), then the view is that of Game 6. Therefore, if A can distinguish between the two games with non-negligible advantage, then B must also have non-negligible advantage against the puncturable tag-based deterministic encryption security game.
This concludes the proof of Theorem 2.

Extending Supported Predicates
For our generic construction, it is easily possible to extend the supported predicate from equality testing on tags to more powerful predicates, i.e., an access-control mechanism as known from ABE, in the terminology of [9].
Let us follow the notation of Gorbunov et al. [38], who construct ABE for any circuit of arbitrary polynomial size. Thus, let ind be an ℓ-bit public index (used for encryption) and P a Boolean predicate (associated with secret keys), where decryption should only work if P(ind) = 1. Now, we can simply associate function keys with more expressive predicates P (encoding them into PKey) instead of tags, and use the public index ind (i.e., the attributes) as the public tag for the PTDE scheme. In the decryption circuit P_{f,P}, one simply checks for label ind and hard-coded P whether P(ind) = 1.
Switching the public index in a ciphertext from ind to some ind′, i.e., changing the attributes in the ciphertext, can simply be done by viewing the public indices as the tags in the current solution. Hence, our original construction is the special case of this generalization in which we only have the equality predicate P_{t′}(t) = 1 if and only if t = t′.
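The generalization amounts to replacing the hard-coded equality check inside the decryption circuit with an arbitrary predicate evaluated on the public index. A minimal Python sketch (all names illustrative, not the paper's circuits):

```python
# Equality on tags is just the special case P_{t'}(t) = [t == t'] of an
# arbitrary predicate P evaluated on the public index ind.

def equality_predicate(expected_tag):
    return lambda ind: ind == expected_tag

def threshold_predicate(attrs, k):
    # a richer policy: at least k of the listed attributes must be present
    return lambda ind: len(set(ind) & set(attrs)) >= k

def may_decrypt(predicate, ind):
    # the gate a decryption circuit would evaluate on the public index
    return predicate(ind)
```

The decryption program simply proceeds when `may_decrypt` returns True, exactly as it previously proceeded on a tag match.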

Lattice-Based CUFE Construction for Inner Products
After recalling the syntax and properties of the main sampling algorithms used in lattice-based constructions, we build in this section a CUFE scheme for inner products from the LWE assumption in the random-oracle model. For a further exposition of lattice preliminaries, we refer the reader to Appendix A.2.

Lattice Definitions and Algorithms
For any matrix A ∈ Z_q^{n×m}, we define the orthogonal q-ary lattice of A as Λ⊥_q(A) := {u ∈ Z^m : A u = 0 mod q}.
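A membership test for Λ⊥_q(A) is a direct transcription of the definition (toy dimensions, plain Python):

```python
def in_orthogonal_lattice(A, u, q):
    """Check u in Lambda^perp_q(A), i.e. A u = 0 (mod q), for A an n x m
    integer matrix (given as a list of rows) and u an integer vector."""
    return all(sum(a_ij * u_j for a_ij, u_j in zip(row, u)) % q == 0
               for row in A)
```

For instance, with A = [[1, 2], [3, 4]] and q = 5, the vector (5, 5) lies in Λ⊥_q(A) while (1, 2) does not.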
The normal Gaussian distribution with mean 0 and variance σ² is the distribution on R with probability density function (1/(σ√(2π))) · e^{−x²/(2σ²)}. The lattice Gaussian distribution with support a lattice Λ ⊆ Z^m, standard deviation σ, and center c ∈ Z^m is defined as D_{Λ,σ,c}(x) := ρ_{σ,c}(x)/ρ_{σ,c}(Λ), where ρ_{σ,c}(x) := e^{−‖x−c‖²/(2σ²)} and ρ_{σ,c}(Λ) := Σ_{x∈Λ} ρ_{σ,c}(x). The following algorithms will be used in the lattice construction, along with the properties needed in the security proof.
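For the one-dimensional case Λ = Z, the lattice Gaussian can be tabulated directly from ρ_σ(x) = e^{−(x−c)²/(2σ²)} by normalizing over a wide truncation window. A small sketch (the truncation bound of 12σ is our illustrative choice; the mass outside it is astronomically small):

```python
import math

def discrete_gaussian_pmf(sigma, center=0.0, tail=None):
    """Probability mass function of D_{Z,sigma,c}, truncated to
    |x - c| <= tail (default 12*sigma)."""
    if tail is None:
        tail = math.ceil(12 * sigma)
    lo, hi = math.floor(center) - tail, math.ceil(center) + tail
    rho = {x: math.exp(-(x - center) ** 2 / (2 * sigma ** 2))
           for x in range(lo, hi + 1)}
    total = sum(rho.values())          # approximates rho_{sigma,c}(Z)
    return {x: r / total for x, r in rho.items()}
```

The resulting table is normalized, symmetric around the center, and decays as expected away from it.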
Lemma 9. ([37] Preimage Sampleable Functions) For any prime q = poly(n), any m ≥ 5n log q, and any s ≥ m^{2.5} · ω(√(log m)), there exist PPT algorithms TrapGen, SampleD, SamplePre such that:
1. TrapGen computes (A, T) ← TrapGen(1^n, 1^m), where A ∈ Z_q^{n×m} is statistically close to uniform and T ⊂ Λ⊥_q(A) is a basis with ‖T‖ ≤ m^{2.5}. The matrix A (and q) is public, while the good basis T is the trapdoor.
2. SampleD samples matrices Z from D_{Z^{m×m},s}.
3. The trapdoor inversion algorithm SamplePre(A, T, D, s), for D ∈ Z_q^{n×m}, outputs a matrix Z ∈ Z^{m×m} such that AZ = D mod q.
In addition, the following distributions D_1, D_2 are statistically close: D_1 samples Z ← D_{Z^{m×m},s} and sets D := AZ mod q, while D_2 samples D ∈ Z_q^{n×m} uniformly at random and sets Z ← SamplePre(A, T, D, s); in both cases, the output is the pair (Z, D). Moreover, there exists a PPT algorithm SampleLeft(A, T_A, B, D, σ) that, given A, B ∈ Z_q^{n×m}, a trapdoor basis T_A for A, a target D, and a sufficiently large σ, outputs a matrix X ∈ Z^{2m×m} with (A|B)X = D mod q, distributed statistically close to D_{Λ^D_q(A|B),σ}.

Lattice Construction
We build on the work of Abdalla et al. [9], who gave the first constructions of a lattice-based identity-based IPFE scheme, one in the standard model (SM) and one in the random-oracle model (ROM), and proved their security under the LWE_{q,α,n} assumption (Definition 10). Their constructions are in turn based on the IPFE scheme of Agrawal et al. [15], ALS, described in Fig. 2.
In our construction, we start from the ROM scheme of Abdalla et al. [9] and enhance their design in order to allow distinguishing fresh and updated ciphertexts. To prove its security, we rely on the programmability of the random oracles H_1, H_2, H_3 : T → Z_q^{n×m}, where T is the tag space. Notice that programmability of the random oracles is required in the security proof to simulate the newly supported functionality, i.e., updating ciphertexts. Thus, even though our construction is only proved secure in the ROM, it also supports a richer class of functionalities than previous works. Our lattice-based CUFE construction is described in Fig. 3. The dimensions of the matrices involved in the construction are presented in Table 1.
Table 1. Matrices, vectors, and their respective dimensions used in the construction.
In order to update ciphertexts, it is therefore necessary to update the two parts of a given ciphertext to the prescribed new tag, while preserving the common randomness and the underlying plaintext, and, at the same time, without increasing the error term too much; the latter would prevent correct decryption of updated ciphertexts. This can be done using techniques inspired by [21,33]. Moreover, since the randomness is given by a uniform vector in Z_q^n and the encryption scheme is additively homomorphic, ciphertexts can easily be re-randomized.
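The re-randomization step rests exactly on this additive homomorphism: adding a fresh encryption of 0 refreshes the randomness without changing the plaintext. A noise-free Regev-style toy illustrates the principle (real LWE ciphertexts carry noise, omitted here for clarity; all parameters are illustrative and far too small to be secure):

```python
import random

# Noise-free Regev-style toy over Z_q: pk = (A, b = A s),
# Enc(m; r) = (A^T r, <b, r> + m * (q // 2)).
# Adding Enc(0) to a ciphertext changes its randomness, not its message.

q, n = 257, 4
rng = random.Random(1)

s = [rng.randrange(q) for _ in range(n)]                       # secret key
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]   # rows a_i
b = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(n)]

def encrypt(m):
    r = [rng.randrange(2) for _ in range(n)]                   # fresh coins
    c1 = [sum(A[i][j] * r[i] for i in range(n)) % q for j in range(n)]
    c2 = (sum(b[i] * r[i] for i in range(n)) + m * (q // 2)) % q
    return c1, c2

def add(ct, ct0):
    (c1, c2), (d1, d2) = ct, ct0
    return [(x + y) % q for x, y in zip(c1, d1)], (c2 + d2) % q

def decrypt(ct):
    c1, c2 = ct
    phase = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 0 if phase < q // 4 or phase > 3 * q // 4 else 1
```

Since decryption only sees the phase m·⌊q/2⌋, the re-randomized ciphertext `add(ct, encrypt(0))` decrypts to the same bit as `ct`.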
To update a ciphertext from t to t′, we want to produce a 2m × 2m update matrix. Conversely, in the security proof, we will leverage the programmability of the random oracles H_1, H_2, and H_3: whenever the source tag t equals the challenge tag t*, the values X_{t,t′}, Y_{t,t′}, and Δ_{t→t′,2} will be sampled from the appropriate distributions, H_2(t′) = B_{t′,2} will be set to equal AX_{t,t′} + B_{t,1}Y_{t,t′}, and H_3(t′) = D_{t′} to H_{t,1} · Δ_{t→t′,2} + D_t. For all other pairs of tags t, t′, the token (Δ_{t→t′,1}, Δ_{t→t′,2}) is produced using the trapdoor of H_1(t) = B_{t,1}: the matrix B_{t,1} is produced using the TrapGen algorithm, and the update token is produced using this trapdoor.
To update a ciphertext (ct_{t,1}, ct_{t,2}), given the appropriate token (Δ_{t→t′,1}, Δ_{t→t′,2}), one uses fresh randomness r′ ← Z_q^n and fresh noise terms. The functional secret keys {sk_{t,ℓ,y}}_{ℓ=1,2} can be produced as follows:
1. for the challenge tag t*: for ℓ = 1, using the ALS challenger, and for ℓ = 2, using the trapdoor of B_{t*,2};
2. for tags t ≠ t* for which no update token of the form (Δ_{t*→t,1}, Δ_{t*→t,2}) was queried but to which the challenge ciphertext was updated: using the trapdoor of B_{t,1} for ℓ = 1, or again the ALS challenger for ℓ = 2;
3. for all other tags: using the trapdoor of B_{t,ℓ} for ℓ = 1, 2.

Parameters and Correctness
In our construction, ciphertexts encode vectors x ∈ {0, ..., P}^m under a tag t. Secret keys correspond to a tag t and a vector y ∈ {0, ..., V}^m. Fuchsbauer et al. [32] and Jafargholi et al. [39] could also offer useful insights on how to overcome the current limitation of our construction.

Theorem 4. (Security) Let λ be the security parameter. Fix parameters q, n, m, α, σ, ρ, ρ_1, ρ_2, μ, and τ as above. Then, under the above restrictions on the adversary, the CUFE scheme described in Fig. 3 is IND-CUFE-CPA-secure in the random-oracle model under the LWE_{q,α,n} assumption.

Proof. We proceed in a series of hybrids; consider A to be a PPT adversary and λ to be the security parameter. We denote by Adv_{Game i}(A) the advantage of A in Game i. Let Q_h be the number of random-oracle queries made by the adversary, Q_t be the maximum number of TokGen-oracle queries of the form (t, t_i) for any fixed tag t, and Q_u be the maximum number of Update-oracle queries on input the challenge ciphertext. We will assume, without loss of generality, that any adversary making key generation queries of the form (y, t), update queries of the form (t, t′, ·, ·), or token generation queries of the form (t, t′) will first query the random oracle H on t and t′. (We can make this assumption because every adversary A can be compiled into an adversary A′ that exhibits this behavior.)

Game 0. This is the original IND-CUFE-CPA game.

Game 1. This is the same as the previous game, except that we guess the tag t* which will be used for the challenge messages. Instead of guessing t* directly from the set of tags T, which would incur an exponential loss, we guess the index of the random-oracle query in which the adversary queries H to get H_{t*,1} and D_{t*}. If the guess is incorrect, we abort. This results in a 1/Q_h security loss.

Game 2. This is the same as the previous game, except that we guess for which tags t′ the adversary will query an update token of the form (Δ_{t*→t′,1}, Δ_{t*→t′,2}). If the guess is incorrect, we abort. As above, instead of guessing the tags t′ directly from the
set of tags T, which would incur an exponential loss, we guess the indices of the random-oracle queries in which the adversary queries H to get H_{t′,2} and D_{t′}. This results in a further security loss polynomial in Q_h.

Game 3. This is the same as the previous game, except that we guess for which tags t′ the adversary will query the Update-oracle on input the challenge ciphertext. As above, instead of guessing the tags t′ directly from the set of tags T, which would incur an exponential loss, we guess the indices of the random-oracle queries in which the adversary queries H to get H_{t′,2} and D_{t′}. If the guess is incorrect, we abort. This results in a further security loss polynomial in Q_h. Let {t_i}_{i∈[Q_h]} be the list of random-oracle queries made by the adversary. Let i* ∈ [Q_h] be the index of the query corresponding to the challenge tag, i.e., t_{i*} = t*. Let QT be the list of indices {i_k}_{k≤Q_t} for which the adversary will query an update token from the challenge tag t*, and let QU be the list of indices {j_k}_{k≤Q_u} for which the adversary will query the Update-oracle for a ciphertext encrypted under the challenge tag t*.

Game 4. This is the same as the previous game, except for the following modifications. For each i_k ∈ QT, we sample X_{t*,t_{i_k}}, Y_{t*,t_{i_k}}, and Δ_{t*→t_{i_k},2} from the appropriate distributions and program the random oracles accordingly. The rest of the game is as before. Notice that, by the invariance under permutation of the Gaussian distribution, we have that Δ_{t_i→t,2} ← D_{Z^{2m×m},ρ}, and the programmed oracle values are distributed as expected.

Game 5. This is the same as the previous game, except that the replies of the CorTokGen-oracle are now produced from the values programmed into the random oracles rather than with the main trapdoor. Applying again Lemma 9, we obtain that the distribution of the CorTokGen-oracle's replies is statistically close to that of Game 4.

Game 6. This is the same as the previous game, except for the following modifications. Now, for all i ∈ {i_k}_{k≤Q_t}, the functional secret keys for the corresponding tags are produced without the respective trapdoors. The rest of the game is as before. Notice that, by the invariance under permutation of the Gaussian distribution, we have that Z_{t_i,ℓ} ← D_{Z^{2m×2m},ρ}, as expected. Therefore, the distribution of the KeyGen-oracle's replies is, by Lemma 9, statistically close to that of Game 5.
Game 7. This is the same as the previous game, except for the following modifications. We modify how the Enc- and HonUpdate-oracles are handled for ciphertexts different from the challenge one. Every time the adversary makes a query to the Enc-oracle of the form (x, t), we return the honestly computed ciphertext (ct_{t,1,1}, ct_{t,1,2}), where we again used Lemma 9 to bound the norm of Δ_{t→t′,1} f and e_2 + e_3 + Δ_{t→t′,2} f. Therefore, the distribution of the Enc- and HonUpdate-oracles' replies is statistically close to that of Game 6.

Game 8. The only queries for which we still need the main secret key T_A are the HonUpdate-oracle queries on input the challenge ciphertext, and the functional-secret-key queries for the challenge tag t* (with ℓ = 1) and the tags t_{j_k} with {j_k}_{k≤Q_u} (for ℓ = 2). We now perform a reduction to the AD-IND security of the ALS [15] encryption scheme. We first obtain from the challenger public keys A_ALS, D_ALS. Now, equipped with the knowledge of t*, we define Game 8 to be the same as Game 7, except for the following changes:
• The matrix A is replaced with A_ALS instead of being generated with TrapGen; the remaining public parameters are computed accordingly and forwarded to the adversary.
In this game, the advantage of the adversary is upper bounded by the advantage of breaking the ALS scheme, i.e., Adv_{Game 8}(A) ≤ Adv_ALS(A). It remains to show that Game 8 is indistinguishable from Game 7. We show that the update of the challenge ciphertext and the function keys for tags t_{j_k}, with k ∈ [Q_u], are statistically close to those obtained in Game 7. An identical argument to that used in [9] proves the same for the challenge tag t*. We start by considering the function keys. Since the parameter of the Gaussian distribution from which R_{t_{j_k}} is sampled is superpolynomially bigger than the norm of Z_ALS, by the Smudging Lemma 12 we have that sk_y + R_{t_{j_k}} y is distributed statistically close to D_{Z^m,ρ_2}. Moreover, we have that

H_{t_{j_k},2} · sk_{t_{j_k},2,y} = (A | A S_{t_{j_k}}) · (sk_y + R_{t_{j_k}} y ; Z_{t_{j_k}} y) = A sk_y + A R_{t_{j_k}} y + A S_{t_{j_k}} Z_{t_{j_k}} y = D_{t_{j_k}} y,

as expected. As far as the update of the challenge ciphertext is concerned, as before, since the parameter of the distribution from which g_2 is drawn is superpolynomially bigger than the norm of the other error terms in the expression of ct_{t_{j_k},2}, again by the Smudging Lemma 12 we obtain that the distribution of the ciphertext so obtained is statistically close to that of Game 7.
Putting everything together, we obtain that Adv_{Game 0}(A) ≤ poly(Q_h, Q_t, Q_u) · Adv_ALS(A) + negl(λ).

^17 Notice that it is possible to rely on the Smudging Lemma here as well. To simplify the proof, we use the properties of NoiseGen, as done by [9], and directly refer to their security proof.
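The smudging argument invoked repeatedly above (Lemma 12) can be checked numerically: shifting a discrete Gaussian over Z by a small fixed error changes the distribution by a statistical distance on the order of |shift|/σ, which vanishes once the smudging parameter σ is superpolynomially larger than the shift. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def dgauss_pmf(sigma, support):
    """Discrete Gaussian rho_sigma(x) = exp(-pi x^2 / sigma^2),
    normalized over the given finite support."""
    w = {x: math.exp(-math.pi * x * x / sigma ** 2) for x in support}
    total = sum(w.values())
    return {x: v / total for x, v in w.items()}

def statistical_distance(sigma, shift):
    """SD between D_{Z,sigma} and the same distribution shifted by `shift`,
    computed over a truncated support (the tails are negligible)."""
    bound = int(12 * sigma) + abs(shift)
    support = range(-bound, bound + 1)
    p = dgauss_pmf(sigma, support)
    q = {x + shift: v for x, v in dgauss_pmf(sigma, support).items()}
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in keys)

# The shift (the "small" error term) stays fixed; the smudging parameter grows.
for sigma in (10, 100, 10000):
    print(sigma, statistical_distance(sigma, shift=5))
```

As σ grows, the computed distance shrinks roughly linearly in 1/σ, matching the qualitative statement of the lemma.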

Conclusion
In this work, we proposed ciphertext-updatable functional encryption (CUFE), a variant of functional encryption which allows switching a ciphertext produced with respect to one tag to one under another tag using an update token for this tag pair. We have provided practical motivation for such a primitive and then defined an (adaptive) security notion in the indistinguishability setting for CUFE. We presented two constructions: the first is a generic construction of CUFE for any functionality, which can also be extended to predicates other than equality testing on tags. This construction is based on indistinguishability obfuscation (iO) and is proven to achieve semi-adaptive security. The second construction is a (plausibly) post-quantum CUFE for the inner-product functionality that relies on standard assumptions from lattices. The lattice-based construction achieves the stronger adaptive security notion, albeit with certain restrictions on the validity of the adversary and a bound on the number of oracle queries. We leave several questions as interesting open problems. Firstly, to construct a CUFE scheme that satisfies our adaptive security model without any further restrictions or bound on the number of oracle queries. Secondly, to construct practical CUFE schemes for a richer class of functionalities, e.g., quadratic functions, which can further broaden the scope of applications. Thirdly, we consider it an interesting direction to study multi-input as well as multi-client extensions of CUFE, similarly to what has been done for IB- and AB-IPFE in [9] and [51], respectively.
Noise Re-randomization. The following procedure NoiseGen(R, s) for noise re-randomization was described in [44]. NoiseGen(R, s): given a matrix R ∈ Z^{m×t} and s ∈ R^+ such that s ≥ s_1(R); the full sampling procedure is recalled together with Lemma 13 below.
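The continuous first step of NoiseGen can be sketched as follows. This is our own simplified variant (same Gaussian parameter for both noise components, and the final discrete rounding step e_2 is omitted); the helper `psd_sqrt` is ours:

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric PSD square root via eigendecomposition: returns S with S @ S = M."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)  # clamp tiny negative eigenvalues from rounding
    return (V * np.sqrt(w)) @ V.T

def noisegen_continuous(R, s, sigma, rng):
    """Continuous sketch of e_1 := R e' + (s^2 I - R R^T)^(1/2) e'' so that the
    combined noise R e + e_1 (with e Gaussian of parameter sigma) is spherical
    with covariance sigma^2 s^2 I; the discrete rounding step is omitted."""
    m, t = R.shape
    assert s ** 2 > np.linalg.norm(R, 2) ** 2, "need s^2 > s_1(R R^T)"
    e_prime = rng.normal(0.0, sigma, size=t)
    e_dprime = rng.normal(0.0, sigma, size=m)
    return R @ e_prime + psd_sqrt(s ** 2 * np.eye(m) - R @ R.T) @ e_dprime

rng = np.random.default_rng(0)
R = rng.integers(-1, 2, size=(8, 4)).astype(float)
s = 2.0 * np.linalg.norm(R, 2)  # spectral norm s_1(R)
e1 = noisegen_continuous(R, s, sigma=3.0, rng=rng)
```

The covariance computation behind the sketch: cov(R e + e_1) = σ²RR^T + σ²(s²I − RR^T) = σ²s²I, i.e., the correlated noise R e is "flooded" into a spherical Gaussian, which is exactly the role NoiseGen plays in the challenge-ciphertext simulation.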

Fig. 1. The IND-CUFE-CPA security notion for CUFE. If O_1 = ⊥, then we call the experiment semi-adaptive, and if O_1 = O, then we call it adaptive. If O_1 = ⊥ and mpk is not initially given to A, then we call the experiment selective.

4.1 Puncturable Tag-Based Deterministic Encryption

Our generic construction relies on a primitive called puncturable tag-based deterministic encryption (PTDE), which can be seen as a tag-based variant of the puncturable deterministic encryption (PDE) introduced by Waters [57].

Definition 7. (Puncturable Tag-Based Deterministic Encryption) A puncturable tag-based deterministic encryption (PTDE) scheme with message space M and tag space T consists of the following algorithms: (possibly) randomized algorithms Setup and Puncture, along with deterministic algorithms Enc and Dec.
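To make the syntax concrete, here is a toy, insecure sketch of the four PTDE algorithms. The construction (HMAC as the deterministic "encryption", and puncturing realized by simply blacklisting the punctured tag/message pairs) is ours, purely to illustrate the interface; it provides none of the cryptographic guarantees of a real PTDE:

```python
import hmac, hashlib, os
from dataclasses import dataclass, field

MSG_LEN = 16  # fixed message length, an arbitrary choice for this sketch

def _prf(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

@dataclass
class ToyPTDE:
    # Setup: sample a fresh key.
    key: bytes = field(default_factory=lambda: os.urandom(32))
    punctured: frozenset = frozenset()  # set of (tag, message) pairs

    def puncture(self, tag: bytes, m0: bytes, m1: bytes) -> "ToyPTDE":
        """Return a key that refuses to operate on (tag, m0) and (tag, m1).
        A real PTDE punctures the key cryptographically; blacklisting is a toy."""
        return ToyPTDE(self.key, self.punctured | {(tag, m0), (tag, m1)})

    def enc(self, tag: bytes, m: bytes):
        """Deterministic encryption of m under tag."""
        assert len(m) == MSG_LEN
        if (tag, m) in self.punctured:
            return None  # punctured keys cannot encrypt these pairs
        c1 = _prf(self.key, b"1", tag, m)
        pad = _prf(self.key, b"2", tag, c1)[:MSG_LEN]
        return c1, bytes(a ^ b for a, b in zip(pad, m))

    def dec(self, tag: bytes, ct):
        """Deterministic decryption; fails on a wrong tag or punctured pair."""
        c1, c2 = ct
        pad = _prf(self.key, b"2", tag, c1)[:MSG_LEN]
        m = bytes(a ^ b for a, b in zip(pad, c2))
        if (tag, m) in self.punctured or c1 != _prf(self.key, b"1", tag, m):
            return None
        return m
```

Note that Enc is deterministic (encrypting the same tag/message pair twice yields identical ciphertexts) and that decryption under the wrong tag fails, mirroring the correctness requirements of the definition.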

- Game 1: This is identical to Game 0 with the exception that the challenger randomly chooses c_1^b, c_1^{1−b} (when computing the challenge ciphertext) instead of computing c_1^b = F_1(k_1, m_b) and c_1^{1−b} = F_1(k_1, m_{1−b}).
- Game 2: This is identical to Game 1 with the exception that the challenger randomly chooses c_2^b, c_2^{1−b} (when computing the challenge ciphertext) instead of computing c_2^b and c_2^{1−b} honestly.

More precisely, the adversary is given the main public key mpk, then the adversary selects a challenge tag t* and a challenge message pair m*_0, m*_1, and the challenger chooses a bit b ∈ {0, 1} and encrypts m*_b in the challenge ciphertext.
- Game 1: This is identical to Game 0 with the exception that the challenger chooses a random p* ∈ {0, 1}^{2λ} during the computation of the challenge ciphertext, instead of choosing a random r* ∈ {0, 1}^λ and computing p* ← PRG(r*).
- Game 2: This is identical to Game 1 with the exception that the challenger computes the punctured key k^{p*}_{prf,o} ← PRF.Puncture_F(k_{prf,o}, p*) and sets P_pp ← iO(PInit:2[k^{p*}_{prf,o}]).
B receives back a punctured PRF key k^{p*}_{prf} and a challenge value z. B sets k*_{ptde} := z and uses the punctured PRF key k^{p*}_{prf} to compute the challenge ciphertext and answer the oracle queries of A as in Game 4. If A wins, i.e., b′ = b, then B outputs 1 to indicate that z = F(k_{prf}, p*) for some PRF key k_{prf}; otherwise, it outputs 0 to indicate that z was a random value.
a) there is no (f, t*) ∈ K with f(x*_0) ≠ f(x*_1) (i.e., the adversary cannot trivially distinguish the challenge ciphertext), and b) there is no (f, t) ∈ K with f(x*_0) ≠ f(x*_1) and (·, t*, t) ∈ CT (i.e., the adversary has not received update tokens towards t for the challenge ciphertexts where it has queried function keys under t with f(x*_0) ≠ f(x*_1)).

If F_1 is a selectively secure puncturable PRF, then it holds that Game 0 ≈ Game 1.

Proof. We describe a PPT reduction algorithm B that plays the selective puncturable tag-based deterministic encryption (PTDE) security game. B receives (t, m_0, m_1) from A and proceeds as in Game 0, except that it samples a bit b ∈ {0, 1} and submits m_b, m_{1−b} to the punctured PRF challenger. B receives back a punctured PRF key k^{m_b,m_{1−b}}_{prf} and challenge values z_0, z_1. B sets k_{ptde}.

- Game 5: This is identical to Game 4 with the exception that the challenger samples a random k*_{ptde} instead of computing it as k*_{ptde} ← F(k_{prf,o}, p*).
- Game 6: This is identical to Game 5 with the exception that the challenger encrypts m*_0, i.e., the challenger computes c* ← Enc(k*_{ptde}, t*, m*_0) and outputs (p*, c*).

Lemma 3. If PRG is a secure pseudorandom generator, then it holds that Game 0 ≈ Game 1.
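The punctured PRF keys used by the reductions above can be realized with the classic GGM tree construction: the key holder releases the seeds of all siblings along the path to the punctured point. A minimal sketch (ours, not code from the paper):

```python
import hashlib

def _prg(seed: bytes, bit: int) -> bytes:
    """Length-doubling PRG modeled by a hash: one child seed per output bit."""
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf_eval(seed: bytes, x: str) -> bytes:
    """GGM PRF: walk the binary tree along the bitstring x (e.g., '0110')."""
    for b in x:
        seed = _prg(seed, int(b))
    return seed

def puncture(seed: bytes, x: str) -> dict:
    """Release the sibling seed at every level of the path to x; the resulting
    key evaluates the PRF everywhere except at x."""
    key = {}
    for i, b in enumerate(x):
        sib = x[:i] + ("1" if b == "0" else "0")
        key[sib] = prf_eval(seed, sib)
    return key

def punctured_eval(pkey: dict, y: str):
    """Evaluate from the unique released sibling whose prefix covers y."""
    for prefix, s in pkey.items():
        if y.startswith(prefix):
            return prf_eval(s, y[len(prefix):])
    return None  # y equals the punctured point
```

Every y ≠ x diverges from x at exactly one level, so exactly one released sibling seed covers it; the punctured point x itself matches no released prefix.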
X_{t*,t_{i_k}}, Y_{t*,t_{i_k}} ← D_{Z^{m×m},ρ}, and Δ_{t*→t,2} ← D_{Z^{2m×m},ρ}. Then, we set H_2(t_{i_k}) := B_{t_{i_k},2} := A X_{t*,t_{i_k}} + B_{t*,1} Y_{t*,t_{i_k}} and H_3(t_{i_k}) := D_{t_{i_k}} := H_{t*,1} Δ_{t*→t,2} + D_{t*}. When the adversary queries the CorTokGen-oracle on input (t, t′), we return Δ_{t*→t,1} := [I_m, X_{t*,t_{i_k}}; 0, Y_{t*,t_{i_k}}] and Δ_{t*→t,2} to the adversary. The rest of the game is as before. By Lemma 9, each of the tokens (Δ_{t*→t,1}, Δ_{t*→t,2}) is distributed statistically close to the previous game.

Game 5. This is the same as the previous game, except for the following modifications. For all i ∈ [Q_h], i ≠ i*, we sample (B_{t_i,1}, T_{B_{t_i,1}}) ← TrapGen(1^n, 1^m) and set H_1(t_i) := B_{t_i,1}. Whenever the adversary makes a query to the CorTokGen-oracle of the form (t_i, t), we reply using T_{B_{t_i,1}} instead of T_A: we sample X_{t_i,t} ← D_{Z^{m×m},ρ}, run Y_{t_i,t} ← SamplePre(B_{t_i,1}, T_{B_{t_i,1}}, B_{t,2} − A X_{t_i,t}, ρ), along with R_{t_i→t,2} ← SampleLeft(B_{t_i,1}, T_{B_{t_i,1}}, A, D_t − D_{t_i}, ρ), and return the resulting token.

Every time the adversary makes a query to the Enc-oracle of the form (x, t), we compute (ct_{t,1,1}, ct_{t,1,2}) ← Enc(mpk, t, x), add (c, ct_t, t, x) to C, and increment c. Whenever the adversary makes a query to the HonUpdate-oracle of the form (t, t′, i, ·), we check if (·, t, t′, ·) is in HT and if (i, ·, t, x) is in C for some x ∈ Z_q^m. If so, we sample r ← Z_q^n, g_1 ← D_{Z^{2m},τ}, g_2 ← D_{Z^m,τ}, and return (ct_{t′,2,1}, ct_{t′,2,2}), where ct_{t′,2,1} := H_{t′,2}^T r + g_1 and ct_{t′,2,2} := D_{t′}^T r + g_2 + ⌊q/K⌋ · x; otherwise we return ⊥. By the Smudging Lemma 12, since the parameter of the Gaussian distribution from which f_1 and f_2 are sampled is superpolynomially bigger than the norm of Δ_{t→t′,1} f and e_2 + e_3 + Δ_{t→t′,2} f, we get that SD(D_{Z^n,τ}, D_{Z^n,τ,Δ_{t→t′,1} f}) and SD(D_{Z^n,τ}, D_{Z^n,τ,e_2+e_3+Δ_{t→t′,2} f}) are at most 1/λ^{ω(1)}.

• We sample S_{t*} ← {±1}^{m×m} and Z_{t*} ← D_{Z^{m×m},ρ_1}, program H_1(t*) := A S_{t*}, and set H_3(t*) := D_{t*} := D_ALS + A S_{t*} Z_{t*}.
• Similarly, for each k ∈ [Q_u], we sample S_{t_{j_k}} ← {±1}^{m×m} and R_{t_{j_k}}, Z_{t_{j_k}} ← D_{Z^{m×m},ρ_2}, program H_2(t_{j_k}) := A S_{t_{j_k}}, and set H_3(t_{j_k}) := D_{t_{j_k}} := D_ALS + A R_{t_{j_k}} + A S_{t_{j_k}} Z_{t_{j_k}}.
• For key queries of the form (t, y), we forward y to the challenger of the AD-IND security of ALS, which replies with sk_y = Z_ALS · y, where Z_ALS is the main secret key of the ALS scheme. If t = t*, we set sk_{t*,1,y} := (sk_y; Z_{t*} y), and using T_{B_{t*,2}} we compute sk_{t*,2,y}. If t = t_{j_k} for some k ∈ [Q_u], then we set sk_{t_{j_k},2,y} := (sk_y + R_{t_{j_k}} y; Z_{t_{j_k}} y), and using T_{B_{t_{j_k},1}} we compute sk_{t_{j_k},1,y}. We forward both to the adversary.
• When the adversary finally submits a challenge (x_0, x_1), we forward it to the ALS challenger, which replies with ct = (ct^ALS_1, ct^ALS_2). We compute ct_{t*,1} = (ct^ALS_1 | (S_{t*})^T · ct^ALS_1) and ct_{t*,2} = ct^ALS_2 + (R_{t*} + S_{t*} Z_{t*})^T · ct^ALS_1 + NoiseGen((R_{t*} + S_{t*} Z_{t*})^T, s), and forward (ct_{t*,1}, ct_{t*,2}) back to the adversary. (The properties of the algorithm NoiseGen are recalled in Lemma 13 from Appendix A.2.)^17
• Whenever the adversary queries the HonUpdate-oracle on input the challenge ciphertext (ct_{t*,1}, ct_{t*,2}) and target tag t_{j_k}, we compute ct_{t_{j_k},1} = (ct^ALS_1 | (S_{t_{j_k}})^T · ct^ALS_1) + H_{t_{j_k},2}^T r + g_1 and ct_{t_{j_k},2} = ct^ALS_2 + (R_{t_{j_k}} + S_{t_{j_k}} Z_{t_{j_k}})^T · ct^ALS_1 + D_{t_{j_k}}^T r + g_2.

NoiseGen(R, s): given a matrix R ∈ Z^{m×t} and s ∈ R^+ such that s^2 > s_1(R R^T), it first samples e_1 := R e′ + (s^2 I_m − R R^T)^{1/2} e″, where I_m ∈ Z^{m×m} denotes the identity matrix, and e′ ← D^t_σ and e″ ← D^m_{√2σ} are independent spherical continuous Gaussian noises. Then, it samples e_2 ← D_{Z^m − e_1, s√2σ}, and returns e_1 + e_2 ∈ Z_q^m. We have the following lemmas.

Lemma 13. ([44] Noise Distribution) Let R ∈ Z^{m×t} and s ≥ s_1(R). The following distributions are statistically close: Distribution 1: e ← D_{Z^t,σ} and e′ ← NoiseGen(R, s); output R e + e′. Distribution 2: output e ← D_{Z^m,2sσ}.

Lemma 14. ([1] Bounding Norm of a {±1}^{k×m} Matrix) Let R be a matrix chosen uniformly at random from {±1}^{k×m}. There exists a universal constant C such that Pr[s_1(R) > C √(k + m)] < e^{−(k+m)}.

Lemma 15.
([29] Bounding Spectral Norm of a Gaussian Matrix) Let Z ∈ R^{n×m} be a sub-Gaussian random matrix with parameter ρ. There exists a universal constant C such that for any t ≥ 0, we have s_1(Z) ≤ C · ρ(√n + √m + t), except with probability at most 2e^{−πt²}.
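Lemma 15 is easy to check empirically: the largest singular value of a Gaussian matrix concentrates around ρ(√n + √m). A quick numerical check (the constant C = 1 and the slack t = 5 are our choices for this sketch, not values from [29]):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, rho, t, C = 200, 300, 1.5, 5.0, 1.0

violations = 0
for _ in range(50):
    # A matrix with i.i.d. N(0, rho^2) entries is sub-Gaussian with parameter O(rho).
    Z = rng.normal(0.0, rho, size=(n, m))
    s1 = np.linalg.norm(Z, 2)  # largest singular value
    if s1 > C * rho * (np.sqrt(n) + np.sqrt(m) + t):
        violations += 1

print("violations:", violations)
```

With the failure probability 2e^{−πt²} being astronomically small for t = 5, no trial should exceed the bound.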