
1 Introduction

Indistinguishability obfuscation (\(i\mathcal {O}\)) [5, 16] has emerged as a powerful cryptographic primitive in the past few years. It has proven sufficient to construct a plethora of cryptographic primitives, many of them for the first time [4, 8, 10, 12, 30]. Recently, \(i\mathcal {O}\) also proved instrumental in proving the hardness of the complexity class \({\mathsf {PPAD}}\) [7].

A major challenge in this direction of research stems from the fact that \(i\mathcal {O}\) by itself is “too weak” to work with. The standard security of \(i\mathcal {O}\) may not even hide any secrets present in the underlying programs. Therefore, the crucial part of most \(i\mathcal {O}\)-based constructions lies in developing novel techniques for using \(i\mathcal {O}\) to obfuscate “programs with secrets”.

Despite its enormous power, we only know of a limited set of techniques for working with \(i\mathcal {O}\). In complex situations, these techniques often run into what we call the sub-exponential barrier. More specifically, the security proof of many \(i\mathcal {O}\)-based constructions end up considering an exponential number of hybrid experiments in order to make just one change in the underlying obfuscation. The goal is usually to eliminate all “troublesome” inputs, one at a time, that may be affected by the change. There are often exponentially many such inputs, resulting in a sub-exponential loss in the security reduction.

To make matters worse, a sub-exponential loss seems inherent to achieving \(i\mathcal {O}\) from “simple” assumptions, such as those based on multilinear maps (Footnote 1). Indeed, all known security proofs for \(i\mathcal {O}\) relative to “simple” assumptions (Footnote 2) iterate over all (exponentially-many) inputs anyway, and there are reasons to believe that this loss may be necessary [18] (Footnote 3). Intuitively, any reduction from \(i\mathcal {O}\) to a simple assumption would need to work for equivalent programs, but should fail for inequivalent programs (since inequivalent programs can be distinguished). Thus, such a reduction would seemingly need to decide if two programs compute equivalent functions; assuming \(\mathcal {P}\ne \mathcal {NP}\), this in general cannot be done in polynomial time. This exponential loss would then carry over to any application of \(i\mathcal {O}\), even if the \(i\mathcal {O}\)-to-application security reduction only incurred a polynomial loss. On the other hand, this exponential loss does not seem inherent to the vast majority of \(i\mathcal{O}\) applications. This leaves us in an undesirable situation where the only way we know to instantiate an application from “simple” assumptions requires sub-exponential hardness assumptions, even though sub-exponential hardness is not inherent to the application.

One application for which an exponential loss does not appear inherent is functional encryption (\(\mathcal {FE}\)), and indeed, starting from the work of Garg et al. [17], it has been shown in [26, 27] how to build \(\mathcal {FE}\) from progressively simpler assumptions on multilinear maps with only a polynomial loss. Therefore, to bypass the difficulties above, we ask the following:

Can applications of \(i\mathcal {O}\) be based instead on

\(\mathcal {FE}\) with a polynomial security reduction?

There are two results that give us hope in this endeavor. First, it is known that \(\mathcal {FE}\) is actually equivalent to \(i\mathcal {O}\), except that the \(\mathcal {FE}\)-to-\(i\mathcal {O}\) reduction [3, 9] incurs an exponential loss. This hints at the possibility that, perhaps, specializing the \(\mathcal {FE}\)-to-\(i\mathcal {O}\)-to-application reduction to particular applications can alleviate the need for sub-exponential hardness.

Second and very recently, Garg et al. [19] took up the issue of sub-exponential loss in \(i\mathcal {O}\)-based constructions in the context of \({\mathsf {PPAD}}\) hardness. They developed techniques to eliminate the sub-exponential loss in the work of Bitansky et al. [7] and reduced the hardness of \({\mathsf {PPAD}}\) to the hardness of standard, polynomially-secure \(i\mathcal {O}\) (and injective one-way functions). More importantly for us, they also presented a new reduction which bases the hardness of \({\mathsf {PPAD}}\) on standard polynomially-secure functional encryption, thus giving essentially the first non-trivial instance of using \(\mathcal {FE}\) to build applications with only a polynomial loss.

This Work. Our goal is to develop techniques to break the sub-exponential barrier in cryptographic constructions based on \(i\mathcal {O}\) and \(\mathcal {FE}\). Towards this goal, we build upon and significantly extend the techniques in [19]. Our techniques are applicable, roughly, to any \(i\mathcal {O}\) setting where the computation is changed on just a polynomial number of points; on all other points, the exact same circuit is used to compute the outputs. Notice that for such settings there exists an efficient procedure for checking functional equivalence. This enables us to argue indistinguishability based only on polynomial hardness assumptions. As it turns out, for many applications of \(i\mathcal {O}\), the hybrid arguments involve circuits with the above specified structure. In this work, we focus on two such applications: trapdoor permutations and universal samplers.

We start with the construction of trapdoor permutations of Bitansky et al. [8] based on sub-exponentially secure \(i\mathcal {O}\). We improve their work by constructing trapdoor permutations based only on polynomially-secure \(i\mathcal {O}\) (and one-way permutations). We further extend our results and obtain a construction based on standard, polynomially hard functional encryption (instead of \(i\mathcal {O}\)). Together with the results of [17, 26, 27], this gives us trapdoor permutations based on simple polynomially-hard assumptions on multilinear maps.

We then consider universal samplers, a notion put forward by Hofheinz et al. [23]. It allows for a single trusted setup which can be used to sample common parameters for any protocol. Hofheinz et al. construct universal samplers from \(i\mathcal {O}\). They also show how to use them to construct multi-party non-interactive key-exchange (NIKE) and broadcast encryption.

We consider the task of constructing universal samplers from the weaker notion of only polynomially-secure functional encryption. As noted earlier, we cannot use the generic reduction of [3, 9] between \(\mathcal {FE}\) and \(i\mathcal {O}\) since it incurs sub-exponential loss. Intuitively, a fresh approach that is not powerful enough to imply \(i\mathcal {O}\) is essential to obtaining a polynomial-time reduction for this task.

We present a new construction of universal samplers directly from \(\mathcal {FE}\). We also consider the task of constructing multiparty NIKE for an unbounded number of users based on \(\mathcal {FE}\). As detailed later, this turns out to be non-trivial even given the work of Hofheinz et al. This is because the definitions presented in [23] are not completely suitable to deal with an unbounded number of users. To support an unbounded number of users, we devise a new security notion for universal samplers called interactive simulation. We present a construction of universal samplers based on \(\mathcal {FE}\) that achieves this notion and gives us multiparty NIKE for an unbounded number of users.

Remark 1

Our construction of TDP from FE is weaker than our construction from iO (and the construction of Bitansky et al. in [8]). In particular, given the random coins used to sample the function and the trapdoor, the output of the sampler is no longer pseudorandom. This property is important for some applications of TDPs, such as the construction of oblivious transfer (OT).

An Overview of Our Approach. In the following sections, we present a detailed overview of our approach to constructing universal samplers and NIKE for an unbounded number of parties. The techniques we use for constructing trapdoor permutations are closely related to the techniques developed in the \({\mathsf {PPAD}}\)-hardness proof of Garg et al. [19]. However, constructing trapdoor permutations poses additional challenges, namely the design of an efficient sampling algorithm that samples a domain element. Solving this problem requires the development of new techniques, which we elaborate on in the full version [20].

1.1 Universal Samplers and Multiparty Non-interactive Key Exchange from \(\mathcal {FE}\)

Multiparty Non-Interactive Key Exchange (multiparty NIKE) was one of the early applications of multilinear maps and \(i\mathcal {O}\). In multiparty NIKE, n parties simultaneously post a single message to a public bulletin board. Each party then reads off the contents of the board and derives a shared key K, which is hidden from any adversary that does not engage in the protocol but is able to see the contents of the public bulletin board.

Boneh and Silverberg [11] show that multilinear maps imply multiparty NIKE. However, (1) their protocol requires an a priori bound on the number of users n, and (2) due to limitations with current multilinear map candidates [13, 15], the protocol requires a trusted setup. The party that runs the trusted setup can also learn the shared key K, even if that party does not engage in the protocol.

Boneh and Zhandry [12] show how to use \(i\mathcal {O}\) to remove the trusted setup. Later, Ananth et al. [1] show how to remove the bound on the number of users by using the very strong notion of differing-inputs obfuscation. Khurana et al. [25] further modify the Boneh-Zhandry protocol to get unbounded users with just \(i\mathcal {O}\). In [12, 25], \(i\mathcal {O}\) is invoked on programs which are guaranteed to be equivalent; however, it is computationally infeasible to actually verify this equivalence. Thus, following the arguments of [18], it would appear that any reduction to a few simple assumptions, no matter how specialized to the particular programs being obfuscated, would need to incur an exponential loss. Hence, these approaches do not seem suitable for achieving secure multiparty NIKE from polynomially secure \(\mathcal {FE}\).

Universal Samplers. Instead, we follow an alternate approach given by Hofheinz et al. [23] using universal samplers. A universal sampler is an algorithm that takes as input the description of a sampling procedure (say, the sampling procedure for the common parameters of some protocol) and outputs a sample from that procedure (a set of parameters for that protocol). The algorithm is deterministic, so that anyone running the protocol on a given sampling procedure gets the same sampled parameters. Yet the generated parameters should be “as good as” a freshly generated set of parameters. Therefore, the only set of common parameters needed for all protocols is just a single universal sampler. When a group of users wish to engage in a protocol involving a trusted setup, they can each feed the setup procedure of that protocol into the universal sampler, and use the output as the common parameters.

Unfortunately, defining a satisfactory notion of “as good as” above is non-trivial. Hofheinz et al. give two definitions: a static definition, which only remains secure for a bounded number of generated parameters, as well as an adaptive definition, which is inherently tied to the random oracle model but allows for an unbounded number of generated parameters. They show how to use the stronger adaptive definition to realize primitives such as adaptively secure multiparty non-interactive key exchange (NIKE) and broadcast encryption.

In this work, we focus on the standard model, and here we review the static standard-model security definition for universal samplers. Fix some bound k on the number of generated parameters. Intuitively, the k-time static security definition says that up to k freshly generated parameters \(s_1,\dots ,s_k\) for sampling algorithms \(C_1,\dots ,C_k\) can be embedded into the universal sampler without detection. Thus, if the sampler is used on any of the sampling algorithms \(C_i\), the generated output will be the fresh sample \(s_i\). Formally, there is a simulator \({\mathsf {Sim}}\) that takes as input up to k sampler/sample pairs \((C_i,s_i)\), and outputs a simulated universal sampler \({\mathsf {Sampler}}\), such that \({\mathsf {Sampler}}(C_i)=s_i\). As long as the \(s_i\) are fresh samples from \(C_i\), the simulated universal sampler will be indistinguishable from an honestly generated sampler.

Fortunately for us, the \(i\mathcal {O}\)-based construction of [23] only invokes \(i\mathcal {O}\) on programs for which it is trivial to verify equivalence. Thus, there seems hope that universal samplers can be based on simple assumptions without an exponential loss. In particular, there is hope to base universal samplers on the polynomial hardness of functional encryption.

Application to Multiparty NIKE. From the static definition above, it is straightforward to obtain a statically secure multiparty NIKE protocol analogous to the adaptive protocol of Hofheinz et al. [23]. Each party simply publishes a public key \({\mathsf {pk}}_i\) for a public key encryption scheme, and keeps the corresponding secret key \({\mathsf {sk}}_i\) hidden. Then to generate the shared group key, all parties run \({\mathsf {Sampler}}\) on the sampler \(C_{{\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n}\). Here, \(C_{{\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n}\) is the randomized procedure that generates a random string K, and encrypts K under each of the public keys \({\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n\), resulting in n ciphertexts \(c_1,\dots ,c_n\) which it outputs. Then party i decrypts \(c_i\) using \({\mathsf {sk}}_i\). The result is that all parties in the protocol learn K.
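The information flow of this protocol can be sketched in Python. The snippet below is a toy model only: ElGamal over a small prime stands in for the public-key scheme, SHA-256 stands in for the sampler's PRF, the obfuscation itself is elided, and all names and parameters are illustrative (and insecure):

```python
import hashlib
import secrets

# Toy ElGamal group (illustrative only, not secure)
P = 2 ** 61 - 1   # a Mersenne prime
G = 3

def keygen():
    sk = 1 + secrets.randbelow(P - 2)
    return sk, pow(G, sk, P)

def enc(pk, m, r):
    # ElGamal: (g^r, m * pk^r)
    return pow(G, r, P), (m % P) * pow(pk, r, P) % P

def dec(sk, ct):
    a, b = ct
    return b * pow(a, P - 1 - sk, P) % P

def prf(key, data, i):
    # Stand-in PRF (assumption: SHA-256 as a PRF, for illustration)
    h = hashlib.sha256(key + data + i.to_bytes(4, "big")).digest()
    return 1 + int.from_bytes(h, "big") % (P - 2)

def make_sampler(K):
    """Universal sampler: run the circuit on coins PRF_K(C)."""
    def sampler(desc, circuit):
        return circuit([prf(K, desc, i) for i in range(8)])
    return sampler

def nike_circuit(pks):
    """C_{pk_1,...,pk_n}: sample a key K and encrypt it under every pk_i."""
    def circuit(coins):
        key = coins[0]
        cts = [enc(pk, key, coins[i + 1]) for i, pk in enumerate(pks)]
        return key, cts
    return circuit

# Three parties derive a shared key from the public sampler
parties = [keygen() for _ in range(3)]
pks = [pk for _, pk in parties]
sampler = make_sampler(secrets.token_bytes(32))
desc = ("nike|" + "|".join(map(str, pks))).encode()
shared, cts = sampler(desc, nike_circuit(pks))
```

Each party i recovers the same key by decrypting its own ciphertext `cts[i]`, while the sampler itself is deterministic, so all parties agree on the transcript.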

Meanwhile, an eavesdropper who does not know any of the secret keys will only have the public keys, the sampler, and thus the ciphertexts \(c_i\) outputted by the sampler. The proof that the eavesdropper will not learn K is as follows. First, we consider a hybrid experiment where K is generated uniformly at random, and the universal sampler is simulated on sampler \(C_{{\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n}\), and sample \(s=(c_1,\dots ,c_n)\), where \(c_i\) are fresh encryptions of K under each of the public keys \({\mathsf {pk}}_i\). 1-time static security of the universal sampler implies that this hybrid is indistinguishable to the adversary from the real world. Next, we change each of the \(c_i\) to encrypt 0. Here, indistinguishability follows from the security of the public key encryption scheme. In this final hybrid, the view of the adversary is independent of the shared secret key K, and security follows.

Unbounded Multiparty NIKE. One limitation of the protocol above is that the number of users must be a priori bounded. There are several reasons for this, the most notable being that in order to simulate, the universal sampler must be as large as the sample \(s=(c_1,\dots ,c_n)\), which grows with n. Thus, once the universal sampler is published, the number of users is capped. Unfortunately, the only prior protocols achieving an unbounded number of users, [1, 25], seem inherently tied to the Boneh-Zhandry approach, and it is not clear that their techniques can be adapted to universal samplers.

In order to get around this issue, we change the sampling procedure \(C_{{\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n}\) fed into the universal sampler. Instead, we feed in circuits of the form \(D_{{\mathsf {pk}},{\mathsf {pk}}'}\), which generate a new secret and public key \(({\mathsf {sk}}'',{\mathsf {pk}}'')\), encrypt \({\mathsf {sk}}''\) under both \({\mathsf {pk}}\) and \({\mathsf {pk}}'\), and output both encryptions as well as the new public key \({\mathsf {pk}}''\). A group of users with public keys \({\mathsf {pk}}_1,\dots ,{\mathsf {pk}}_n\) then generates the shared key in an iterative fashion as follows. Run the universal sampler on \(D_{{\mathsf {pk}}_1,{\mathsf {pk}}_2}\), obtaining a new public key \({\mathsf {pk}}_3'\), as well as encryptions of the corresponding secret key \({\mathsf {sk}}_3'\) under both \({\mathsf {pk}}_1,{\mathsf {pk}}_2\). Notice that users 1 and 2 can both recover \({\mathsf {sk}}_3'\) using their secret keys. Then run the universal sampler on \(D_{{\mathsf {pk}}_3,{\mathsf {pk}}'_3}\), obtaining a new public key \({\mathsf {pk}}_4'\) and encryptions of the corresponding secret key \({\mathsf {sk}}_4'\). Notice that user 3 can recover \({\mathsf {sk}}_4'\) by decrypting the appropriate ciphertext using \({\mathsf {sk}}_3\), and users 1 and 2 can recover \({\mathsf {sk}}_4'\) by decrypting the other ciphertext using \({\mathsf {sk}}_3'\). Continue in this way until public key \({\mathsf {pk}}_{n+1}'\) is generated, and all users 1 through n recover the corresponding secret key \({\mathsf {sk}}_{n+1}'\). Set \({\mathsf {sk}}_{n+1}'\) to be the shared secret key.
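A self-contained toy Python model of this iterative derivation (again with an illustrative ElGamal scheme and a hash in place of the sampler's PRF; the obfuscation is elided and nothing here is secure) may help fix the information flow:

```python
import hashlib
import secrets

P, G = 2 ** 61 - 1, 3   # toy ElGamal group (illustrative only)

def keygen(sk=None):
    sk = sk if sk is not None else 1 + secrets.randbelow(P - 2)
    return sk, pow(G, sk, P)

def enc(pk, m, r):
    return pow(G, r, P), (m % P) * pow(pk, r, P) % P

def dec(sk, ct):
    a, b = ct
    return b * pow(a, P - 1 - sk, P) % P

def coins(K, desc, i):
    # Stand-in for the sampler's PRF coins (assumption: SHA-256 as PRF)
    h = hashlib.sha256(K + desc + i.to_bytes(4, "big")).digest()
    return 1 + int.from_bytes(h, "big") % (P - 2)

def sample_D(K, pk_a, pk_b):
    """Sampler output on D_{pk,pk'}: a fresh keypair (sk'', pk''), with
    sk'' encrypted under both input public keys."""
    desc = f"D|{pk_a}|{pk_b}".encode()
    sk_new, pk_new = keygen(coins(K, desc, 0))
    ct_a = enc(pk_a, sk_new, coins(K, desc, 1))
    ct_b = enc(pk_b, sk_new, coins(K, desc, 2))
    return pk_new, ct_a, ct_b

def group_key(K, parties):
    """Run the chain; every party ends up with the chain secret sk'_{n+1}."""
    pks = [pk for _, pk in parties]
    chain_pk, ct_1, ct_2 = sample_D(K, pks[0], pks[1])
    chain_sk = dec(parties[0][0], ct_1)          # party 1 decrypts
    assert dec(parties[1][0], ct_2) == chain_sk  # party 2 agrees
    for i in range(2, len(parties)):
        chain_pk, ct_i, ct_c = sample_D(K, pks[i], chain_pk)
        new_sk = dec(parties[i][0], ct_i)        # party i+1 decrypts
        assert dec(chain_sk, ct_c) == new_sk     # earlier parties agree
        chain_sk = new_sk
    return chain_sk
```

Note that each step consumes only two public keys, so the circuit fed to the sampler has a fixed size independent of n, which is exactly what removes the a priori bound.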

For security, since an eavesdropper does not know any of the secret keys and the ciphertexts are “as good as” fresh ciphertexts, he should not be able to decrypt any of the ciphertexts in the procedure above. However, turning this intuition into a security proof using the static notion of security is problematic. The straightforward approach requires constructing a simulated \({\mathsf {Sampler}}\) where the outputs on each of the circuits \(D_{{\mathsf {pk}}_i,{\mathsf {pk}}_i'}\) are fresh samples. Then, each of the ciphertexts in the samples are replaced with encryptions of 0 (instead of the correct secret decryption key). However, as there are n such circuits, a standard incompressibility argument shows that \({\mathsf {Sampler}}\) must grow linearly in n. Thus again, once the universal sampler is published, the number of users is capped.

Simulating at Fewer Points. To get around this issue, we devise a sequence of hybrids where in each hybrid we only need to replace \(\log n\) outputs of the sampler with fresh samples. The core idea is the following. Say that a circuit \(D_{{\mathsf {pk}}_i,{\mathsf {pk}}_i'}\) has been “treated” if the public key \({\mathsf {pk}}_{i+1}'\) outputted by the universal sampler is freshly sampled and the corresponding ciphertexts are changed to encrypt 0 (instead of the secret key \({\mathsf {sk}}_{i+1}'\)). We observe that to switch circuit \(D_{{\mathsf {pk}}_i,{\mathsf {pk}}_i'}\) from untreated to treated, circuit \(D_{{\mathsf {pk}}_{i-1},{\mathsf {pk}}_{i-1}'}\) needs to currently be treated so that the view of the adversary is independent of the secret key \({\mathsf {sk}}_i'\). However, the status of all the other circuits is irrelevant. Moreover, once we have treated \(D_{{\mathsf {pk}}_i,{\mathsf {pk}}_i'}\), we can potentially “untreat” \(D_{{\mathsf {pk}}_{i-1},{\mathsf {pk}}_{i-1}'}\) and reset its ciphertexts to the correct values, assuming \(D_{{\mathsf {pk}}_{i-2},{\mathsf {pk}}_{i-2}'}\) is currently treated. Our goal is to start from no treated circuits, and arrive at a hybrid where \(D_{{\mathsf {pk}}_n,{\mathsf {pk}}'_n}\) is treated, which implies that the view of the adversary is independent of the shared secret \({\mathsf {sk}}_{n+1}\).

This gives rise to an interesting algorithmic problem. The goal is to get a pebble to position n, where the only valid moves are (1) placing or removing a pebble at position 1, or (2) placing or removing a pebble at position i provided there is currently a pebble at position \(i-1\). We wish to get a pebble to position n while minimizing the number of pebbles in use at any time. The trivial solution is to place a pebble at 1, then 2, and so on, requiring n pebbles. We show a pebbling scheme that gets a pebble to position n using only \(\approx \log n\) pebbles by removing certain pebbles as we go. Interestingly, the pebbling scheme is exactly the same as the one used in [6] in the context of reversible computation. The pebbling scheme is also efficient: the number of moves is polynomial in n.
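The recursive strategy can be sketched in a few lines of Python. This is an illustrative implementation (the function names are ours), together with a simulator that checks every move is legal and tracks the peak number of pebbles:

```python
def advance(k, base, out):
    """Emit toggles that add a pebble at base + 2**(k-1), assuming a
    pebble already sits at `base` (position 0 acts as a virtual,
    always-present pebble)."""
    if k == 1:
        out.append(base + 1)
        return
    mid = base + 2 ** (k - 2)
    advance(k - 1, base, out)    # place a pebble at the midpoint
    advance(k - 1, mid, out)     # extend from the midpoint to the target
    undo = []
    advance(k - 1, base, undo)   # replaying phase one in reverse
    out.extend(reversed(undo))   # removes the midpoint pebble again

def pebble_at(n):
    """Toggle sequence that gets a pebble to position n: advance by one
    power of two per set bit of n, leaving a pebble at each waypoint."""
    out, base = [], 0
    for bit in reversed(range(n.bit_length())):
        if (n >> bit) & 1:
            advance(bit + 1, base, out)
            base += 2 ** bit
    return out

def simulate(moves):
    """Replay the moves, checking the legality rule, and track the peak."""
    pebbles, peak = set(), 0
    for p in moves:
        assert p == 1 or (p - 1) in pebbles, "illegal move"
        pebbles ^= {p}             # toggle: place or remove
        peak = max(peak, len(pebbles))
    return pebbles, peak
```

Running `simulate(pebble_at(n))` for, say, n = 100 confirms that a pebble reaches position n with a peak of roughly \(2\log n\) pebbles and polynomially many moves.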

Using our pebbling algorithm, we derive a sequence of hybrids corresponding to each move in the algorithm. Thus we show that the number of circuits that need simulating can be taken to be \(\approx \log n\).

A New Universal Sampler Definition. Unfortunately, we run into a problem when trying to base security on the basic static sampler definition of Hofheinz et al. [23]. The issue stems from the fact that the simulator in the static definition requires knowing all of the circuits \(D_{{\mathsf {pk}}_{i},{\mathsf {pk}}'_{i}}\) up front. However, in our pebbling approach, some of the \({\mathsf {pk}}'_i\) (and thus the \(D_{{\mathsf {pk}}_{i},{\mathsf {pk}}'_{i}}\)) are determined by the sampler \({\mathsf {Sampler}}\), namely all the \({\mathsf {pk}}'_i\) for which \(D_{{\mathsf {pk}}_{i-1},{\mathsf {pk}}'_{i-1}}\) is “untreated”. Thus we encounter a circularity where we need to know \({\mathsf {Sampler}}\) to compute the circuit \(D_{{\mathsf {pk}}_{i},{\mathsf {pk}}'_{i}}\), but we need \(D_{{\mathsf {pk}}_{i},{\mathsf {pk}}'_{i}}\) in order to simulate \({\mathsf {Sampler}}\).

To get around this issue, we devise a new security notion for universal samplers that allows for interactive simulation. That is, before the simulator outputs \({\mathsf {Sampler}}\), we are allowed to query it on various inputs, learning what the output of the sampler will be on each input (called a read query). Moreover, we are allowed to feed in circuit/sample pairs \((C,s)\) (called write queries) interactively, potentially after seeing some of the sample outputs, and the simulator will guarantee that the simulated \({\mathsf {Sampler}}\) outputs s on C. For security, we require that for a statically chosen query index \(i^*\) and circuit \(C^*\), the simulator’s outputs in the following two cases are computationally indistinguishable:

  1. The \({i^*}^{th}\) query is a read query on \(C^*\).

  2. The \({i^*}^{th}\) query is a write query on \((C^*,s^*)\), where \(s^*\) is a fresh sample from \(C^*\).

This new definition allows us to avoid the circularity above and complete the security proof for our NIKE protocol.

Construction. Before we describe our construction of universal samplers from \(\mathcal {FE}\), we first describe a construction from \(i\mathcal {O}\) that satisfies the above definition of interactive simulation.

The universal sampler is an obfuscation of a circuit that has a puncturable \(\mathsf {PRF}\) key K hardwired in its description and on input C outputs \(C(;\mathsf {PRF}_K(C))\), i.e., it uses the \(\mathsf {PRF}\) key to generate the random coins. This is precisely the same construction as given by Hofheinz et al. [23] for the static security case. To prove that this construction satisfies the stronger definition of interactive simulation we construct a simulator that works as follows. It first samples a fresh \(\mathsf {PRF}\) key \(K'\) and answers the read queries using it. At the end of the simulation, it outputs an obfuscation of a circuit that has the \(\mathsf {PRF}\) key \(K'\), as well as \((C_i,s_i)\) for every write query made by the adversary, hardwired in its description. When run on an input C that matches one of the write queries, it outputs the corresponding s. On all other inputs, it outputs \(C(;\mathsf {PRF}_{K'}(C))\).
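The sampler and its simulator are easy to model once the obfuscation is elided. In the sketch below, HMAC-SHA256 stands in for the puncturable PRF and the sampling procedures are plain Python functions; this illustrates only the functional behavior, not the security:

```python
import hashlib
import hmac
import os

def prf(key, data):
    # Stand-in for a puncturable PRF (assumption: HMAC-SHA256)
    return hmac.new(key, data, hashlib.sha256).digest()

def honest_sampler(K):
    """Real sampler: on circuit C, output C(; PRF_K(C))."""
    def sampler(desc, circuit):
        return circuit(prf(K, desc))
    return sampler

def simulated_sampler(K_fresh, writes):
    """Simulator's sampler: hardwired (C_i, s_i) pairs answer the write
    queries; all other inputs use the fresh key K'."""
    def sampler(desc, circuit):
        if desc in writes:
            return writes[desc]
        return circuit(prf(K_fresh, desc))
    return sampler

# Toy "sampling procedure": interpret the first coins as the sample
toy_circuit = lambda coins: coins[:8]

real = honest_sampler(os.urandom(32))
sim = simulated_sampler(os.urandom(32), {b"C*": b"forced s"})
```

The honest sampler is deterministic, and the simulated one agrees with a freshly keyed honest sampler everywhere except the hardwired write points.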

The security is shown via a hybrid argument. The initial hybrid corresponds to the output of the simulator when the challenge query (made at index \(i^*\)) is a write query on \((C_{i^*},s_{i^*})\) where \(s_{i^*}\) is a fresh random sample from \(C_{i^*}\). We first change the obfuscated circuit to have the \(\mathsf {PRF}\) key \(K'\) punctured at \(C_{i^*}\). This is possible since the circuit does not use \(K'\) to compute the output on \(C_{i^*}\). Relying on the security of the puncturable \(\mathsf {PRF}\), we change \(s_{i^*}\) from \(C_{i^*}(;r)\), where r is a random string, to \(C_{i^*}(;\mathsf {PRF}_{K'}(C_{i^*}))\). We then unpuncture the key \(K'\) and finally remove \((C_{i^*},s_{i^*})\) from the hardwired list.

We adapt the above construction from \(i\mathcal {O}\) to the \(\mathcal {FE}\) setting using techniques from [9, 19]. Recall that the “obfuscated” universal sampler consists of \(\ell + 1\) (\(\ell \) is the maximum size of the input circuit) function keys (where each function key computes a bit extension function) along with an initial ciphertext \(c_{\phi }\) that encrypts the empty string \({\phi }\) and a prefix constrained \(\mathsf {PRF}\) key K (Footnote 4). These bit extension functions form a natural binary tree structure and “parsing” an input circuit C corresponds to traveling along the path from the root to the leaf labeled C. Each node x along the path from the root to C contains the key K prefix constrained at x. The prefix constrained \(\mathsf {PRF}\) key appearing at the leaf C is precisely equal to the \(\mathsf {PRF}\) value at C and we use this to generate a “pseudorandom” sample from C.

We are now ready to describe the construction of our simulator. As in the \(i\mathcal {O}\) case, the simulator samples a random prefix constrained \(\mathsf {PRF}\) key \(K'\) and uses it to answer the read queries made by the adversary. Recall that for every write query \((C_i,s_i)\) the adversary makes, the simulator must ensure that the sampler on \(C_i\) outputs \(s_i\). The simulator accomplishes this by “tunneling” the underlying binary tree along path \(C_i\). In a bit more detail, the simulator “forces” the function key at each level i to output a precomputed value, say \(\mathsf {V}_i\) (instead of the bit-extension), if the input to the function matches a prefix of \(C_i\). At the leaf level, if the input matches \(C_i\), then the function outputs \(s_i\). An illustration of “tunneling” is given in Fig. 1. We now explain how this “tunneling” is done.

Fig. 1. Illustration of “tunneling” on \(C_i = 010\) and \(\kappa = 3\).

At a high level, the “tunneling” is achieved by triggering a hidden “trapdoor” thread in the function keys using techniques illustrated in [2, 19]. This technique proceeds by first encrypting a set of precomputed values under a symmetric key sk and hardwiring them in the description of the bit-extension function at each level. The symmetric key sk is encrypted in the initial ciphertext \(c_{\phi }\) along with the empty string and the prefix constrained \(\mathsf {PRF}\) key. The trapdoor thread (which is triggered only along the write query paths) uses this secret key sk to decrypt the hardcoded ciphertext and outputs the appropriate precomputed value.

To complete the security proof, we want to show that we can indistinguishably “tunnel” the binary tree along a new path \(C^*_i\) and output \(s^*_i\), a fresh random sample from \(C^*_i\), at the leaf. Recall that in the construction of Garg et al. [19], a single secret key sk is used for computing the encryptions of precomputed values along multiple paths. But having a single secret key does not allow us to “tunnel” along a new path \(C^*_i\), as this secret key already appears in the initial ciphertext \(c_{\phi }\). Hence, we cannot rely on the semantic security of symmetric key encryption to augment the precomputed values to include values along the new path \(C^*_i\). To get around this issue, we use multiple secret keys, one for each write query (Footnote 5), which enables us to “tunnel” along a new path \(C^*_i\).

2 Preliminaries

\(\kappa \) denotes the security parameter. A function \(\mu (\cdot ) : \mathbb {N} \rightarrow \mathbb {R}^+\) is said to be negligible if for every polynomial \(\mathsf {poly}(\cdot )\) we have \(\mu (k) < \frac{1}{\mathsf {poly}(k)}\) for all large enough k. We use PPT to denote probabilistic polynomial-time algorithms. We use [k] to denote the set \(\{1,\cdots ,k\}\). We will use \(\mathsf {negl}(\cdot )\) to denote an unspecified negligible function and \(\mathsf {poly}(\cdot )\) to denote an unspecified polynomial. We denote the identity polynomial by \(I(\cdot )\), i.e., \(I(x) = x\). All adversarial functions are modeled as polynomial-sized circuits. We assume that all cryptographic algorithms take the security parameter in unary as input and do not always mention it explicitly. We assume without loss of generality that the length of the random tape used by all cryptographic algorithms is \(\kappa \).

A binary string \(x \in \{0,1\}^{k}\) is represented as \(x_1 \cdots x_{k}\). \(x_1\) is the most significant (or highest-order) bit and \(x_{k}\) is the least significant (or lowest-order) bit. The i-bit prefix \(x_1 \cdots x_i\) of the binary string x is denoted by \({x}_{[i]}\). We use |x| to denote the length of the binary string \(x \in \{0,1\}^*\). We use \(x \Vert y\) to denote the concatenation of binary strings x and y. We say that a binary string y is a prefix of x if and only if there exists a string \(z \in \{0,1\}^{*}\) such that \(x = y \Vert z\).

We assume the reader’s familiarity with standard cryptographic primitives such as injective pseudorandom generators, puncturable pseudorandom functions, indistinguishability obfuscation, functional encryption, and symmetric and public-key encryption. Below, we give the definition of a Prefix Constrained Pseudorandom Function [19].

Prefix Constrained Pseudorandom Function. A \(\mathsf {PCPRF}\) is a tuple of algorithms \((\mathsf {KeyGen}_{\mathcal {PCPRF}},\mathsf {PrefixCons})\) with the following syntax. \(\mathsf {KeyGen}_{\mathcal {PCPRF}}\) takes the security parameter (encoded in unary) and descriptions of two polynomials \(p_{in}\) and \(p_{out}\) as input and outputs a \(\mathsf {PCPRF}\) key \(S \in \{0,1\}^{\kappa }\). \(\mathsf {PrefixCons}\) is a deterministic algorithm and has two modes of operation:

  1. Normal Mode: In the normal mode, \(\mathsf {PrefixCons}\) takes a \(\mathsf {PCPRF}\) key S and a string \(y \in \cup _{k=0}^{p_{in}(\kappa )}{\{0,1\}^{k}}\) and outputs a prefix constrained key \(S_y \in \{0,1\}^{\kappa }\) if \(|y| < p_{in}(\kappa )\); else it outputs \(S_y \in \{0,1\}^{p_{out}(\kappa )}\). We assume that \(S_y\) contains implicit information about |y|.

  2. Repeated Constraining Mode: In the repeated constraining mode, \(\mathsf {PrefixCons}\) takes a prefix constrained key \(S_y\) and a string \(z \in \cup _{k=0}^{p_{in}(\kappa )}{\{0,1\}^{k}}\) as input and works as follows. If \(|y| + |z| > p_{in}(\kappa )\), it outputs \(\bot \); else if \(|y| + |z| < p_{in}(\kappa )\), it outputs the prefix constrained key \(S_{y \Vert z} \in \{0,1\}^{\kappa }\); else it outputs \(S_{y \Vert z} \in \{0,1\}^{p_{out}(\kappa )}\).

Henceforth, we will not explicitly mention whether \(\mathsf {PrefixCons}\) is in the normal mode or the repeated constraining mode when this is directly evident from the context. We note that there is no explicit evaluation procedure for \(\mathsf {PCPRF}\): the output of \(\mathsf {PCPRF}\) on an input \(x \in \{0,1\}^{p_{in}(\kappa )}\) is given by \(\mathsf {PrefixCons}(S,x) \in \{0,1\}^{p_{out}(\kappa )}\).

We now give the formal definition of \(\mathsf {PCPRF}\).

Definition 1

A prefix constrained pseudorandom function \(\mathcal {PCPRF}\) is a tuple of PPT algorithms \((\mathsf {KeyGen}_{\mathcal {PCPRF}},\mathsf {PrefixCons})\) satisfying the following properties:

  • Functionality is preserved under repeated constraining: For all \(\kappa \), polynomials \(p_{in}(\cdot ), p_{out}(\cdot )\) and for all \(x \in \cup _{k \in [p_{in}({\kappa })]}{\{0,1\}^{k}}\), \(y,z \in \{0,1\}^*\) s.t. \(x = y \Vert z\),

    $$\begin{aligned} \Pr [\mathsf {PrefixCons}(\mathsf {PrefixCons}(S,y),z) = \mathsf {PrefixCons}(S,x)] = 1 \end{aligned}$$

    where \(S \leftarrow \mathsf {KeyGen}_{\mathcal {PCPRF}}(1^{\kappa },p_{in}(\cdot ),p_{out}(\cdot ))\).

  • Pseudorandomness at constrained prefix: For all \({\kappa }\), polynomials \(p_{in}(\cdot ), p_{out}(\cdot )\), for all \(x \in \cup _{k \in [p_{in}({\kappa })]}{\{0,1\}^{k}}\), and for all polynomial-sized adversaries \(\mathcal {A}\),

    $$\begin{aligned} |\Pr [\mathcal {A}(\mathsf {PrefixCons}(S,x),\mathsf {Keys}) = 1] - \Pr [\mathcal {A}(U_{{\ell }},\mathsf {Keys}) = 1]| \le \mathsf {negl}({\kappa }) \end{aligned}$$

    where \(S \leftarrow \mathsf {KeyGen}_{\mathcal {PCPRF}}(1^{\kappa },p_{in}(\cdot ),p_{out}(\cdot ))\), \(\ell = |\mathsf {PrefixCons}(S,x)|\) and \(\mathsf {Keys} = \{\mathsf {PrefixCons}(S,{x}_{[i-1]}\Vert (1-x_i))\}_{i \in [|x|]}\).

The above properties are satisfied by the construction of the pseudorandom function in [21].
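Since the pseudorandom function of [21] is the classic tree-based construction, the two modes of \(\mathsf {PrefixCons}\) can be sketched as a walk down a GGM tree. The sketch below is illustrative only: SHA-256 stands in for a length-doubling PRG, and we take \(p_{out}(\kappa ) = \kappa \) so that the fully constrained key is simply the GGM leaf.

```python
import hashlib

def prg(seed):
    """Length-doubling PRG stand-in (assumption: SHA-256 models a PRG)."""
    return (hashlib.sha256(b"L" + seed).digest(),
            hashlib.sha256(b"R" + seed).digest())

def prefix_cons(key, bits, p_in):
    """Constrain a (possibly already constrained) key by the bit-string `bits`.

    `key` is a pair (seed, depth): the current GGM node and the length of the
    prefix consumed so far.  Returns the constrained key S_{y||z}, or None
    (standing in for bot) if the combined prefix would exceed p_in.
    """
    seed, depth = key
    if depth + len(bits) > p_in:
        return None
    for b in bits:                 # walk down the GGM tree, one bit at a time
        left, right = prg(seed)
        seed = left if b == "0" else right
    return (seed, depth + len(bits))
```

Functionality under repeated constraining is immediate here: constraining by y and then by z traverses the same root-to-node path as constraining by \(y \Vert z\) directly.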

Notation. For a key \(S_i\) (indexed by i), we will use \(S_{i,y}\) to denote \(\mathsf {PrefixCons}(S_i,y)\).

3 TDP from \(i\mathcal {O}\) with Polynomial Loss

We consider trapdoor permutations with pseudorandom sampling, a weaker notion than the traditional uniform sampling. We refer the reader to [8] for a formal definition.

3.1 Construction of Trapdoor Permutations

In this section, we give a construction of trapdoor permutations and prove its one-wayness assuming the existence of polynomially hard \(i\mathcal {O}\), a puncturable pseudorandom function \(\mathcal {PRF}\) and an injective \(\mathcal {PRG}\) (used only in the proof).

Theorem 1

Assuming the existence of one-way permutations and indistinguishability obfuscation against polynomial-time adversaries, there exists a trapdoor permutation with pseudorandom sampling.

Our Construction. Our construction uses the following primitives:

  1.

    An indistinguishability obfuscator \(i\mathcal {O}\).

  2.

    A puncturable pseudorandom function \(\mathcal {PRF}= (\mathsf {KeyGen}_{\mathcal {PRF}},\mathsf {PRF},\mathsf {Punc})\).

  3.

    A length doubling pseudorandom generator \(\mathsf {PRG}: \{0,1\}^{\kappa /2} \rightarrow \{0,1\}^{\kappa }\).

  4.

    Additionally, in the proof of security, we use a length doubling injective pseudorandom generator \(\mathsf {InjPRG}: [2^{\kappa /4}] \rightarrow [2^{\kappa /2}]\).

The formal description of our construction appears in Fig. 2.

Due to lack of space, we give the proof of security in the full version of our paper [20].

Fig. 2. Construction of trapdoor permutation

Fig. 3. Public key, sampler and the inverter for the trapdoor permutations

4 Trapdoor Permutation from FE

We start by defining a weaker (with respect to pseudorandom sampling) notion of trapdoor permutation.

Definition 2

An efficiently computable family of functions:

$$\begin{aligned} \mathcal {TDP} = \{\mathsf {TDP}_{PK}: D_{PK} \rightarrow D_{PK}\}_{PK \in \{0,1\}^{\mathsf {poly}(\kappa )}} \end{aligned}$$
Fig. 4. \(\mathsf {Exp}_{\mathcal {A},b,\mathsf {wPRS}}\)

over the domain \(D_{PK}\) with associated (probabilistic) \(({\mathsf {KeyGen}},\mathsf {SampGen})\) algorithms is a weakly samplable trapdoor permutation if it satisfies:

  • Trapdoor Invertibility: For any \((PK,SK) \leftarrow {\mathsf {KeyGen}}(1^{\kappa })\), \(\mathsf {TDP}_{PK}\) is a permutation over \(D_{PK}\). For any \(y \in D_{PK}\), \(\mathsf {TDP}^{-1}_{SK}(y)\) is efficiently computable given the trapdoor SK.

  • Weak Pseudorandom Sampling: For any \((PK,SK) \leftarrow {\mathsf {KeyGen}}(1^{\kappa })\) and \(\mathsf {Samp}\leftarrow \mathsf {SampGen}(SK)\), \(\mathsf {Samp}(\cdot )\) samples pseudorandom points in the domain \(D_{PK}\). Formally, for any polynomial-sized distinguisher \(\mathcal {A}\),

    $$\begin{aligned} \left| \Pr \left[ \mathsf {Exp}_{\mathcal {A},0,\mathsf {wPRS}} = 1\right] - \Pr \left[ \mathsf {Exp}_{\mathcal {A},1,\mathsf {wPRS}} = 1 \right] \right| \le \mathsf {negl}(\kappa ) \end{aligned}$$

    where \(\mathsf {Exp}_{\mathcal {A},b,\mathsf {wPRS}}\) is described in Fig. 4.

  • One-wayness: For all polynomial-sized adversaries \(\mathcal {A}\), the probability that \(\mathcal {A}\) inverts \(\mathsf {TDP}_{PK}\) on a point output by \(\mathsf {Samp}\) is negligible in \(\kappa \).

Remark 2

The requirement of pseudorandom sampling considered in Bitansky et al.’s work [8] is stronger than the one considered here, in the sense that they require the pseudorandomness property to hold even when given the random coins used by the \({\mathsf {KeyGen}}\) and \(\mathsf {SampGen}\) algorithms. We do not achieve this stronger notion in this work. In particular, given the random coins used in \(\mathsf {SampGen}\), the sampler’s output is no longer pseudorandom. Therefore, our trapdoor permutations can only be used in applications where an honest party runs the \({\mathsf {KeyGen}}\) and \(\mathsf {SampGen}\) algorithms. They cannot, for example, be used to achieve receiver privacy in the EGL oblivious transfer protocol [14].

In this section, we construct a trapdoor permutation satisfying Definition 2 from polynomially hard public key functional encryption, a prefix constrained pseudorandom function, a left half injective pseudorandom generator, a strong randomness extractor and public key encryption with random public keys.

Theorem 2

Assuming the existence of one-way permutations, single-key, selectively secure public key functional encryption and public key encryption with (pseudo) random public keys, there exists a weakly samplable trapdoor permutation.

We now recall the special key structure [19] which forms a crucial part of our construction of trapdoor permutation.

Notation. We treat \(1^i +1\) as \(0^i\) and \(\phi +1\) as \(\phi \). Let \(\mathsf {LeftInjPRG}\) be a left half injective pseudorandom generator. Let \(\tau \) be the size of the public key output by \(\mathsf {PK.KeyGen}(1^{\kappa })\). Below, for every \(i \in [{\kappa +\tau }]\), \(S_i \leftarrow \mathsf {KeyGen}_{\mathcal {PCPRF}}(1^{\kappa },C_i(\cdot ),I(\cdot ))\) where \(C_i(\kappa ) = i\) and \(I(\kappa ) = \kappa \). Recall that \(S_{i,x}\) denotes the prefix constrained PRF key \(S_i\) constrained at a prefix x.

Special Key Structure.

The construction maintains three families of key sets \(\mathsf {U}_x, \mathsf {V}_x, \mathsf {W}_x\), indexed by a prefix x. For the empty string \(x =\phi \), these sets are initialized as follows.

$$\begin{aligned} \mathsf {U}_{\phi } = \bigcup _{i \in [{\tau +\kappa }]} \mathsf {U}_\phi ^i\quad \quad \quad&\mathsf {U}_\phi ^i = \{S_i\} \\ \mathsf {V}_{\phi } = \bigcup _{i \in [{\tau +\kappa }]} \mathsf {V}_\phi ^i\quad \quad \quad&\mathsf {V}_\phi ^i = \{S_i\} \\ \mathsf {W}_{\phi } = \bigcup _{i \in [{\tau +\kappa }]} \mathsf {W}_\phi ^i\quad \quad \quad&\mathsf {W}_{\phi }^i = \emptyset \end{aligned}$$

Jumping ahead, the set of keys in \(\mathsf {U}_x\) will be used by the sampler to generate the set of associated signatures on the sampled point. The set \(\mathsf {W}_x\) (called the vestigial set in [19]) is used to check the validity of an input, i.e., whether the input belongs to the domain. The set \(\mathsf {V}_x\) is used to generate the associated signatures on the “next” point as defined by the permutation.

Our Construction. The construction of weakly samplable trapdoor permutation uses the following primitives:

  1.

    A single-key, selective secure public key functional encryption scheme \(\mathcal {FE}\).

  2.

    A prefix constrained pseudorandom function \(\mathcal {PCPRF}\).

  3.

    An injective length doubling pseudorandom generator \(\mathsf {InjPRG}: \{0,1\}^{\kappa /8} \rightarrow \{0,1\}^{\kappa /4}\).

  4.

    A length doubling left half injective pseudorandom generator \(\mathsf {LeftInjPRG}: \{0,1\}^{\kappa } \rightarrow \{0,1\}^{2\kappa }\).

In the construction, we write \(\mathsf {SK.Enc}_{sk_1,\cdots ,sk_{n}}(m)\) to denote \(\mathsf {SK.Enc}_{sk_n}(\mathsf {SK.Enc}_{sk_{n-1}}(\cdots \mathsf {SK.Enc}_{sk_1}(m)))\). The formal description of our construction appears in Fig. 5.
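The nested (onion) encryption notation can be made concrete with a short sketch. The toy stream cipher below is purely illustrative (assumption: an SHA-256 keystream stands in for the actual symmetric scheme \(\mathsf {SK.Enc}\)); only the layering order, innermost layer under \(sk_1\) and outermost under \(sk_n\), matters here.

```python
import hashlib

def stream(key, n):
    """Toy keystream (illustration only): SHA-256 in counter mode."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def sk_enc(key, m):
    return bytes(a ^ b for a, b in zip(m, stream(key, len(m))))

sk_dec = sk_enc  # an XOR stream cipher is its own inverse

def nested_enc(keys, m):
    """SK.Enc_{sk_1,...,sk_n}(m): apply sk_1 innermost, sk_n outermost."""
    for k in keys:
        m = sk_enc(k, m)
    return m

def nested_dec(keys, ct):
    """Peel the layers in the reverse order, sk_n first."""
    for k in reversed(keys):
        ct = sk_dec(k, ct)
    return ct
```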

Setting \(\mathsf {rand}(\cdot )\). We set \(\mathsf {rand}(\kappa )\) to be the maximum number of random bits needed to generate \({\tau +\kappa }\) encryptions under \(\gamma _1,\cdots ,\gamma _{\kappa }\) as well as \({\tau +\kappa }+1\) encryptions under the public keys pk.

Due to lack of space, we defer the proof of Theorem 2 to the full version of the paper [20].

5 Universal Samplers

Intuitively, a universal sampler, defined by Hofheinz et al. [23], is a box that takes as input the description of a sampling procedure and outputs a fresh-looking sample according to that procedure. The difficulty is that we want the box to be public code, such that every user who runs the sampler on a particular procedure gets the same result. Moreover, we want the sample to appear as if it were a fresh random sample.

Fig. 5. Construction of TDP from \(\mathcal {FE}\)

Fig. 6. Circuits for simulating public key.

Fig. 7. Circuits for simulating sampler

5.1 Definition

A Universal Sampler consists of an algorithm \({\mathsf {Setup}}\) that takes as input a security parameter \(\kappa \) (encoded in unary), a size bound \(\ell (\cdot )\), a random tape size \(r(\cdot )\) and an output size \(t(\cdot )\). It outputs a program \({\mathsf {Sampler}}\). \({\mathsf {Sampler}}\) takes as input a circuit of size at most \(\ell (\kappa )\) that uses \(r(\kappa )\) bits of randomness, and outputs a \(t(\kappa )\)-bit string.

Intuitively, \({\mathsf {Sampler}}(C)\) will be a pseudorandom sample from C: \({\mathsf {Sampler}}(C)=C(s)\) for some s pseudorandomly chosen based on C. We will actually not formalize a standalone correctness requirement, but instead correctness will follow from our security notion.
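The intended behaviour \({\mathsf {Sampler}}(C)=C(s)\) can be sketched in a few lines. Everything below is a stand-in: the secret material hidden inside the actual (obfuscated or FE-based) program is modeled by a hash key, and circuits are identified by a description string.

```python
import hashlib

# Hypothetical secret baked into the sampler program (illustration only).
SAMPLER_KEY = b"secret-hidden-inside-the-program"

def sampler(circuit, desc, r_len):
    """Return circuit(s), where the r_len-byte seed s is derived
    pseudorandomly, and deterministically, from the circuit description."""
    s, ctr = b"", 0
    while len(s) < r_len:
        s += hashlib.sha256(SAMPLER_KEY + desc + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return circuit(s[:r_len])
```

Every party running the sampler on the same circuit description obtains the same, fresh-looking sample; distinct descriptions yield independent-looking seeds.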

For security, we ask that the sample output by \({\mathsf {Sampler}}(C)\) actually looks like a fresh random sample from C. Unfortunately, formalizing this requirement is tricky. Hofheinz et al. [23] defined two notions: the first is a “static” and “bounded” security notion, while the second stronger notion is “adaptive” and “unbounded”. The latter definition requires random oracles, so it is unfortunately uninstantiable in the standard model. We will provide a third definition which strikes some middle ground between the two, and is still instantiable in the standard model.

Definition 3

A Universal Sampler given by \({\mathsf {Setup}}\) is n-time statically secure with interactive simulation if there exists an efficient randomized simulator \({\mathsf {Sim}}\) such that the following hold.

  • \({\mathsf {Sim}}\) takes as input \(\kappa \) (encoded in unary) and three polynomials \(\ell (\cdot ),r(\cdot ),t(\cdot )\) (for ease of notation, we denote \(\ell = \ell (\kappa )\), \(r = r(\kappa )\) and \(t = t(\kappa )\)), and ultimately will output a simulated sampler \({\mathsf {Sampler}}\). However, before doing so, \({\mathsf {Sim}}\) provides the following interface for additional input:

    • Read queries: here the user submits an input circuit C of size at most \(\ell \), that uses r bits of randomness and has output length t. \({\mathsf {Sim}}\) will respond with a sample s that will ultimately be the output of the simulated sampler on C. \({\mathsf {Sim}}\) supports an unbounded number of Read queries.

    • Set queries: here the user submits an input circuit C of size at most \(\ell \), that uses r bits of randomness and has output length t, as well as a sample s of length t. \({\mathsf {Sim}}\) will record (C, s), and set the output of the simulated sampler on C to be s. \({\mathsf {Sim}}\) supports up to n Set queries. We require that there is no overlap between circuits C in Read and Set queries, and that all Set queries are for distinct circuits.

    • Finish query: here, the user submits nothing, and \({\mathsf {Sim}}\) closes its interfaces, terminates, and outputs a sampler \({\mathsf {Sampler}}\).

    • \({\mathsf {Sim}}\) must be capable of taking the queries above in any order.

  • Correctness. \({\mathsf {Sampler}}\) is consistent with any queries made. That is, if a Read query was made on C and the response was s, then \({\mathsf {Sampler}}(C)=s\). Similarly, if a Set query was made on (C, s), then \({\mathsf {Sampler}}(C)=s\).

  • Indistinguishability from honest generation. Roughly, this requirement says that, in the absence of any Set queries, an honest and a simulated sampler are indistinguishable. More precisely, the advantage of any polynomial-time algorithm A is negligible in the following experiment:

    • The challenger flips a random bit b. If \(b=0\), the challenger runs \({\mathsf {Sampler}}\leftarrow {\mathsf {Setup}}(1^\kappa ,\ell ,r,t)\). If \(b=1\), the challenger initiates \({\mathsf {Sim}}(1^\kappa ,\ell ,r,t)\).

    • A is allowed to make Read queries on arbitrary circuits C of size at most \(\ell \), using r bits of randomness and output length t. If \(b=0\), the challenger runs \(s\leftarrow {\mathsf {Sampler}}(C)\) and responds with s. If \(b=1\), the challenger forwards C to \({\mathsf {Sim}}\) as a Read query, and when \({\mathsf {Sim}}\) responds with s, the challenger forwards s to A.

    • Finally, A sends a Finish query. If \(b=0\), the challenger then sends \({\mathsf {Sampler}}\) to A. If \(b=1\), the challenger sends a Finish query to \({\mathsf {Sim}}\), gets \({\mathsf {Sampler}}\) from \({\mathsf {Sim}}\), and forwards \({\mathsf {Sampler}}\) to A.

    • A then tries to guess b. The advantage of A is the advantage A has in guessing b.

  • Pseudorandomness of samples. Roughly, this requirement says that, in the simulated sampler, if an additional Set query is performed on (C, s) where s is a fresh sample from C, then the simulated sampler is indistinguishable from the case where the Set query was not performed. More precisely, the advantage of any polynomial-time algorithm B is negligible in the following experiment:

    • The challenger flips a random bit b. It then initiates \({\mathsf {Sim}}(1^\kappa ,\ell ,r,t)\).

    • B first makes a Challenge query on circuit \(C^*\) of size at most \(\ell \), using r bits of randomness and output length t, as well as an integer \(i^*\).

    • B is allowed to make arbitrary Read and Set queries, as long as the number of Set queries is at most \(n-1\), and the queries are all on distinct circuits that are different from \(C^*\). The Read and Set queries can occur in any order; the only restriction is that the Challenge query comes before all Read and Set queries.

    • After \(i^*-1\) Read and Set queries, the challenger does the following:

      • \({*}\) If \(b=0\), the challenger makes a Read query to \({\mathsf {Sim}}\), and forwards the response \(s^*\) to B.

      • \({*}\) If \(b=1\), the challenger computes a fresh random sample \(s^*\leftarrow C^*(r)\), and makes a Set query to \({\mathsf {Sim}}\) on \((C^*,s^*)\). Then it gives \(s^*\) to B.

      • Thus the \(i^*\)th query made to \({\mathsf {Sim}}\) is on circuit \(C^*\), and the only difference between \(b=0\) and \(b=1\) is whether the output of the simulated sampler will be a pseudorandom sample or a fresh random sample from \(C^*\).

    • B is allowed to continue making arbitrary Read and Set queries, as long as the number of Set queries is at most \(n-1\) and the queries are all on distinct circuits that are different from \(C^*\).

    • Finally B makes a Finish query, at which point the challenger makes a Finish query to \({\mathsf {Sim}}\). It obtains a simulated sampler \({\mathsf {Sampler}}\), which it then gives to B.

    • B then tries to guess b. The advantage of B is the advantage B has in guessing b.

5.2 Construction from FE

In this section, we construct Universal Samplers satisfying Definition 3 from polynomially hard, compact Functional Encryption and a Prefix Constrained Pseudorandom Function (which is implied by Functional Encryption).

Theorem 3

Assuming the existence of selectively secure, single-key, compact public key functional encryption, there exists a Universal Sampler scheme satisfying Definition 3.

Our Construction. The formal description of our construction appears in Fig. 8.

Fig. 8. Setup and evaluating the sampler

Fig. 9. Circuits for simulating public key.

Due to lack of space, we give the proof of security in the full version of the paper [20].

6 Multiparty Non-interactive Key Exchange

In this section, we build multiparty non-interactive key exchange for an unbounded number of users. Moreover, in contrast to the original multilinear map protocols [15], our protocol has no trusted setup.

6.1 Definition

A multiparty key exchange protocol consists of:

  • \({\mathsf {Publish}}(\kappa )\) takes as input the security parameter and outputs a user secret \({\mathsf {sv}}\) and public value \({\mathsf {pv}}\). \({\mathsf {pv}}\) is posted to the bulletin board.

  • \({\mathsf {KeyGen}}(\{{\mathsf {pv}}_j\}_{j\in S},{\mathsf {sv}}_i,i)\) takes as input the public values of a set S of users, plus one of the users’ secrets \({\mathsf {sv}}_i\) (together with its index i). It outputs a group key \(k\in {\mathcal {K}}\).

For correctness, we require that all users generate the same key:

$$\begin{aligned} {\mathsf {KeyGen}}(\{{\mathsf {pv}}_j\}_{j\in S},{\mathsf {sv}}_i,i)={\mathsf {KeyGen}}(\{{\mathsf {pv}}_j\}_{j\in S},{\mathsf {sv}}_{i'},i') \end{aligned}$$

for all \(({\mathsf {sv}}_j,{\mathsf {pv}}_j)\leftarrow {\mathsf {Publish}}(\kappa )\) and \(i,i'\in S\). For security, we have the following:

Definition 4

A non-interactive multiparty key exchange protocol is statically secure if the following distributions are indistinguishable for any polynomial-sized set S:

$$\begin{aligned}\{{\mathsf {pv}}_j\}_{j\in S},k&\text { where }({\mathsf {sv}}_j,{\mathsf {pv}}_j)\leftarrow {\mathsf {Publish}}(\kappa )\ \forall j\in S,\ k\leftarrow {\mathsf {KeyGen}}(\{{\mathsf {pv}}_j\}_{j\in S},{\mathsf {sv}}_1,1)\text { and }\\ \{{\mathsf {pv}}_j\}_{j\in S},k&\text { where }({\mathsf {sv}}_j,{\mathsf {pv}}_j)\leftarrow {\mathsf {Publish}}(\kappa )\ \forall j\in S,\ k\leftarrow {\mathcal {K}}\end{aligned}$$

Notice that our syntax does not allow a trusted setup, as the original constructions based on multilinear maps [11, 13, 15] require. Boneh and Zhandry [12] give the first multiparty key exchange protocol without trusted setup, based on obfuscation. A construction of obfuscation from a finite set of assumptions with polynomial security appears implausible due to an argument of [18]. Notice as well that our syntax does not allow the key generation to depend on the number of users who wish to share a group key. To date, prior key exchange protocols satisfying this property relied on strong knowledge variants of obfuscation [1]. Recently, Khurana, Rao and Sahai [25] constructed a key exchange protocol supporting an unbounded number of users based on indistinguishability obfuscation and a tool called somewhere statistically binding hash functions [24]. Here, we get an unbounded protocol based on functional encryption only, and without using complexity leveraging.

6.2 Construction

Our construction will use the universal samplers built in Sect. 5, as well as any public key encryption scheme.

  • \({\mathsf {Publish}}(\kappa )\). Run \(({\mathsf {sk}},{\mathsf {pk}})\leftarrow \mathsf {PK.KeyGen}(\kappa )\). Also run the universal sampler setup algorithm \({\mathsf {Sampler}}\leftarrow {\mathsf {Setup}}(\kappa ,\ell ,t)\), where the output size \(\ell \) and the circuit size bound t will be decided later. Output \({\mathsf {pv}}=({\mathsf {pk}},{\mathsf {Sampler}})\) as the public value and keep \({\mathsf {sv}}={\mathsf {sk}}\) as the secret value.

  • \({\mathsf {KeyGen}}(\{({\mathsf {pk}}_j,{\mathsf {Sampler}}_j)\}_{j\in S},{\mathsf {sk}}_i,i)\). Interpret S as the set [1, n] for \(n=|S|\), choosing some canonical ordering for the users in S (say, the lexicographic order of their public values). Define \({\mathsf {Sampler}}={\mathsf {Sampler}}_1\). Define \(C_{{\mathsf {pk}},{\mathsf {pk}}'}\), for two public keys \({\mathsf {pk}},{\mathsf {pk}}'\), to be the circuit that samples a random \(({\mathsf {sk}}'',{\mathsf {pk}}'')\leftarrow \mathsf {PK.KeyGen}(\kappa )\), encrypts \({\mathsf {sk}}''\) under both \({\mathsf {pk}}\) and \({\mathsf {pk}}'\), obtaining encryptions c and \(c'\) respectively, and then outputs \(({\mathsf {pk}}'',c,c')\). Let \(D_{{\mathsf {pk}},{\mathsf {pk}}'}\) be a similar circuit that samples a uniformly random string \({\mathsf {sk}}''\) in the key space of \(\mathcal {PKE}\), encrypts \({\mathsf {sk}}''\) to get \(c,c'\) as before, and outputs \((0,c,c')\), where 0 is a string of zeros with the same length as a public key for \(\mathcal {PKE}\). Let \(\ell \) be the length of \(({\mathsf {pk}}'',c,c')\) and let t be the size of \(C_{{\mathsf {pk}},{\mathsf {pk}}'}\) (which we will assume is at least as large as \(D_{{\mathsf {pk}},{\mathsf {pk}}'}\)). Next, define \({\mathsf {pk}}_2'={\mathsf {pk}}_1\), and recursively define \(({\mathsf {pk}}'_{j+1},c_j,c'_j)={\mathsf {Sampler}}(C_{{\mathsf {pk}}_j,{\mathsf {pk}}_j'})\) for \(j=2,\dots ,n-1\). Define \({\mathsf {sk}}'_{j+1}\) to be the secret key corresponding to \({\mathsf {pk}}'_{j+1}\), which is also the secret key encrypted in \(c_j,c'_j\). Finally, define \((0,c_n,c_n')={\mathsf {Sampler}}(D_{{\mathsf {pk}}_n,{\mathsf {pk}}_n'})\), and define \({\mathsf {sk}}_{n+1}'\) to be the secret key encrypted in \(c_n,c_n'\).
First, it is straightforward to see that, given \(\{{\mathsf {pk}}_j\}_{j\in [n]}\) and \({\mathsf {Sampler}}\), it is possible to compute \({\mathsf {pk}}'_{j},c_j,c_j'\) for all \(j\in [2,n]\). Thus anyone, including an eavesdropper, can compute these values. Next, we claim that, if additionally given the secret key \({\mathsf {sk}}_j\) or \({\mathsf {sk}}_j'\), it is possible to compute \({\mathsf {sk}}_{j+1}'\). Indeed, \({\mathsf {sk}}_{j+1}'\) can be computed by decrypting \(c_j\) (using \({\mathsf {sk}}_j\)) or decrypting \(c_j'\) (using \({\mathsf {sk}}_j'\)). By iterating, it is possible to compute \({\mathsf {sk}}_{k}'\) for every \(k>j\). This implies that all users in [n] can compute \({\mathsf {sk}}'_{n+1}\), which is taken as the group key.
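The chained key derivation above can be sketched end to end. Everything in this sketch is a toy stand-in chosen only to make the chaining logic concrete: a hypothetical ElGamal-style \(\mathcal {PKE}\) over a small Mersenne prime (not secure), and the universal sampler modeled as a keyed hash; none of this reflects the actual instantiation from functional encryption.

```python
import hashlib

# Toy group for a hypothetical ElGamal-style PKE (illustration only).
P = 2**127 - 1   # Mersenne prime
G = 3

def h_int(*parts):
    """Hash byte strings to an integer mod P-1 (deterministic 'randomness')."""
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big") % (P - 1)

def keygen(seed):
    sk = h_int(b"kg", seed)
    return sk, pow(G, sk, P)

def enc(pk, m, seed):
    r = h_int(b"enc", seed)
    return (pow(G, r, P), (m * pow(pk, r, P)) % P)

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, P - 1 - sk, P)) % P   # c2 / c1^sk

SAMPLER_KEY = b"public-sampler-program"   # stands in for Sampler = Sampler_1

def C(pk, pk2):
    """Sampler output on C_{pk,pk'}: fresh keypair, sk'' encrypted twice."""
    s = hashlib.sha256(b"C:%d,%d:" % (pk, pk2) + SAMPLER_KEY).digest()
    sk_new, pk_new = keygen(s)
    return pk_new, enc(pk, sk_new, s + b"0"), enc(pk2, sk_new, s + b"1")

def D(pk, pk2):
    """Sampler output on D_{pk,pk'}: random sk'' encrypted twice, pk slot is 0."""
    s = hashlib.sha256(b"D:%d,%d:" % (pk, pk2) + SAMPLER_KEY).digest()
    sk_new = h_int(b"msg", s)
    return 0, enc(pk, sk_new, s + b"0"), enc(pk2, sk_new, s + b"1")

def group_key(pks, i, sk_i):
    """Derive the group key as user i (1-indexed) holding sk_i."""
    n = len(pks)
    pkp = {2: pks[0]}                      # pk'_2 = pk_1
    chain = {}                             # chain[j] = (c_j, c'_j)
    for j in range(2, n):
        pk_next, c, cp = C(pks[j - 1], pkp[j])
        pkp[j + 1] = pk_next
        chain[j] = (c, cp)
    _, c, cp = D(pks[n - 1], pkp[n])
    chain[n] = (c, cp)
    # User 1 already holds sk'_2 = sk_1; user i >= 2 decrypts c_i with sk_i.
    if i == 1:
        skp, j = sk_i, 2
    else:
        skp, j = dec(sk_i, chain[i][0]), i + 1
    while j <= n:                          # peel the rest of the chain via c'_j
        skp = dec(skp, chain[j][1])
        j += 1
    return skp                             # sk'_{n+1}, the shared group key
```

Note how correctness mirrors the argument in the text: the chain of sampler outputs is publicly computable, and holding any one secret key lets a user enter the chain and decrypt forward to \({\mathsf {sk}}'_{n+1}\).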

Security. We now argue that an eavesdropper cannot learn any information about the group key \({\mathsf {sk}}'_{n+1}\). Our theorem is the following:

Theorem 4

If \(\mathcal {PKE}\) is a secure public key encryption scheme and \({\mathsf {Setup}}\) is an m-time statically secure universal sampler with interactive simulation, then the construction above is a statically secure NIKE for up to \(2^m\) users. In particular, by setting \(m=\kappa \), the scheme is secure for an unbounded number of users.

Due to lack of space, we give the proof of Theorem 4 in the full version of the paper [20].