Thanks to our new \((t_\ell ,t_r,n)\)-RGTSSS, we do not need a VOPRF as in [23], which comes at the cost of complex zero-knowledge proofs. We can now describe the general structure of our PPSS protocol, using an OPRF as a black box. We then provide two instantiations, with two appropriate OPRFs, in the same vein as the ones proposed in [23], using similar computational assumptions (see the full version [1]):
- the first OPRF relies on the \(\mathsf {CDH}\) evaluation, similar to the protocol 2HashDH, but without NIZKs. The PPSS construction is then quite similar to [24];
- the second OPRF is an oblivious evaluation of the Naor-Reingold PRF [25]. Then, in the PPSS, the gain from avoiding the zero-knowledge proofs by the server is quite significant.
5.1 General Description
As already presented in the high-level description, our protocols consist of two phases: the initialization phase, which is assumed to be executed in a safe environment, and the reconstruction phase, during which only the password is assumed correct, while all the other inputs can be faked by the adversary.
Initialization. We assume that each server \(S_i\) owns a key pair \((\mathsf {sk}_i,\mathsf {pk}_i)\) that defines a PRF \(F_i\), with public parameters defined by \(\mathsf {pk}_i\) and a secret key defined by \(\mathsf {sk}_i\), that admits an OPRF protocol to allow a user with input m to evaluate \(F_i(m)\) without leaking any information on m to the server.
We additionally use a \((t_\ell ,t_r,n)\)-robust gap threshold secret sharing scheme and a non-malleable commitment scheme (see the full version [1]). Since we are already in the random-oracle model for the PRF, we can implement the commitment scheme with a simple second-preimage-resistant hash function \(H_\mathsf {Com}\), which allows for better efficiency. The user U first chooses a secret password \(\mathsf {pw}\):
1. the user interacts with n servers to obliviously evaluate \(\pi _i = F_i(\mathsf {pw})\), and \(\varPi = (\mathsf {pk}_i)_i\) is the tuple of the public keys of the involved servers;
2. the user chooses a random value \(R = K \Vert r\), where K is the random secret key the user wants to reconstruct and r consists of random coins for the commitment, and generates \((s_1, \ldots , s_n, \mathsf {SSInfo})\leftarrow \textsf {ShareGen} (R)\), so that any subset of \(t_r\) shares among \(\{s_1, \ldots , s_n\}\) can efficiently recover R;
3. then, the user builds \(\sigma _i = \pi _i \oplus s_i\), for \(i=1,\ldots ,n\), and sets \(\varSigma = (\sigma _i)_i\);
4. the user generates \(\mathsf {Com}= H_\mathsf {Com}(\mathsf {pw},\varPi ,\varSigma ,\mathsf {SSInfo},K;r)\). We denote by \(\mathsf {PInfo}= (\varPi ,\varSigma ,\mathsf {SSInfo},\mathsf {Com})\) the public information that the user will need later to recover his secret K;
5. the user thus gives \(\mathsf {PInfo}\) to all the servers.
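The five steps above can be sketched as follows. This is a minimal Python sketch under stated assumptions: the helpers `share_gen` (standing in for \(\textsf {ShareGen}\) of the RGTSSS) and the pre-computed OPRF outputs `prf_outputs` are hypothetical placeholders, and the commitment is a plain hash as described above; it is not the paper's actual implementation.

```python
import hashlib
import os

def H_com(pw, Pi, Sigma, ssinfo, K, r):
    # Commitment as a simple hash H_Com (random-oracle model).
    h = hashlib.sha256()
    for part in (pw, *Pi, *Sigma, ssinfo, K, r):
        h.update(part)
    return h.digest()

def initialize(pw, prf_outputs, pks, share_gen):
    """Toy version of the five initialization steps.
    prf_outputs: the obliviously evaluated pi_i = F_i(pw) (step 1);
    share_gen:   hypothetical stand-in for ShareGen of the RGTSSS (step 2)."""
    K, r = os.urandom(32), os.urandom(32)                # R = K || r
    shares, ssinfo = share_gen(K + r)                    # (s_1..s_n, SSInfo)
    Sigma = [bytes(a ^ b for a, b in zip(p_i, s_i))      # sigma_i = pi_i XOR s_i
             for p_i, s_i in zip(prf_outputs, shares)]
    com = H_com(pw, pks, Sigma, ssinfo, K, r)            # step 4
    PInfo = (pks, Sigma, ssinfo, com)                    # step 5: sent to servers
    return K, PInfo
```

Note that the servers only ever see \(\mathsf {PInfo}\); the key K and the coins r never leave the user.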
We stress that during this initialization phase, all the values in \(\varPi \) are the real public keys and the \((\pi _i)_i\) are the correct evaluations of the PRFs. By contrast, during the reconstruction phase, all the values in \(\mathsf {PInfo}\) will be provided by the servers, but through the adversary, who might alter them.
Reconstruction. For the reconstruction, the user interacts with at least \(t_r\) servers, who provide him with \(\mathsf {PInfo}= (\varPi ,\varSigma ,\mathsf {SSInfo},\mathsf {Com})\) and help him compute \(\pi _i = F_i(\mathsf {pw})\) for several values of i, using \(\mathsf {pk}_i\) from \(\varPi \). No information is trusted anymore, and so the reconstruction phase performs several verifications:
1. the user first limits the oblivious evaluations of \(\pi _i = F_i(\mathsf {pw})\) to the servers that sent the same majority tuple \(\mathsf {PInfo}= (\varPi ,\varSigma ,\mathsf {SSInfo},\mathsf {Com})\). If the number of such servers is less than \(t_r\), the user aborts with \(K \leftarrow \perp \);
2. for all these \(\pi _i\) (or equivalently, all the indices i he kept), the user computes \(s_i = \sigma _i \oplus \pi _i\), using \(\sigma _i\) from \(\varSigma \) (from \(\mathsf {PInfo}\));
3. using these \(\{s_i\}\), at least \(t_r\) of which are correct shares, and \(\mathsf {SSInfo}\) (from \(\mathsf {PInfo}\)), the user reconstructs the shared secret R with the RGTSSS (or aborts with \(K \leftarrow \perp \) if the reconstruction fails);
4. the user parses the secret R as \(K \Vert r\) and checks, from \(\mathsf {PInfo}\), whether \(\mathsf {Com}= H_\mathsf {Com}(\mathsf {pw},\varPi ,\varSigma ,\mathsf {SSInfo},K;r)\);
5. if the verification succeeds, K is the expected secret key; otherwise, the user aborts with \(K \leftarrow \perp \).
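The five verification steps can be sketched as follows. This is a minimal Python sketch under stated assumptions: `oprf_eval`, `recover` (standing in for the RGTSSS reconstruction), and `H_com` are hypothetical helpers, not the paper's actual building blocks.

```python
from collections import Counter

def reconstruct(pw, answers, t_r, oprf_eval, recover, H_com):
    """Toy version of the reconstruction phase.
    answers:   {server index i: PInfo_i}, as relayed (possibly tampered);
    oprf_eval: oprf_eval(i, pw) obliviously evaluates pi_i = F_i(pw);
    recover:   stands in for the RGTSSS reconstruction, returns R or None."""
    # Step 1: keep only the servers that sent the same majority PInfo.
    majority, count = Counter(answers.values()).most_common(1)[0]
    if count < t_r:
        return None                                   # K <- bottom
    servers = [i for i, p in answers.items() if p == majority]
    pks, Sigma, ssinfo, com = majority
    # Step 2: unmask the shares with the PRF outputs: s_i = sigma_i XOR pi_i.
    shares = {i: bytes(a ^ b for a, b in zip(Sigma[i], oprf_eval(i, pw)))
              for i in servers}
    # Step 3: reconstruct the shared secret R (abort on failure).
    R = recover(shares, ssinfo)
    if R is None:
        return None                                   # K <- bottom
    # Step 4: parse R as K || r and recompute the commitment.
    K, r = R[:32], R[32:]
    # Step 5: accept K only if the commitment matches.
    return K if H_com(pw, pks, Sigma, ssinfo, K, r) == com else None
```

With a wrong password, the unmasked values are not valid shares, so either the reconstruction or the commitment check fails and the user ends with \(\perp \).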
5.2 Protocol I: One-More-Gap-Diffie-Hellman-Based PRF
Our first instantiation is based on \(\mathsf {CDH}\)-like assumptions in the random-oracle model. The arithmetic is in a finite cyclic group \(\mathbb {G}= \langle g \rangle \) of prime order q. We need a full-domain hash function \(H_1\) onto \(\mathbb {G}\), and another hash function \(H_2\) onto \({\{0,1\}}^{\ell _2}\). The commitment scheme uses a simple hash function \(H_\mathsf {Com}= H_3\) onto \({\{0,1\}}^{\ell _3}\).
For a private key \(\mathsf {sk}= x \in \mathbb {Z}_q\), we consider the pseudorandom function \(F_{x}(m) = H_2(m, g^{x}, H_1(m)^{x})\), for any bitstring \(m\in {\{0,1\}}^*\), where the public key is \(\mathsf {pk}= y = g^{x}\). In the full version [1], we prove this is indeed a PRF, as already shown in [23].
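To make the algebra concrete, here is a minimal Python sketch of this PRF and of a blind-exponentiation oblivious evaluation in the 2HashDH style. The parameters p, q, g and the exponent-based \(H_1\) are illustrative assumptions only: a real instantiation needs a proper full-domain hash onto the group (here the discrete log of \(H_1\)'s output is known, which is insecure) and cryptographic group sizes.

```python
import hashlib
import secrets

# Toy subgroup of Z_p^* of prime order q, with p = 2q + 1.
q = 1019
p = 2 * q + 1   # 2039, prime
g = 4           # generator of the order-q subgroup (4 = 2^2 is a square mod p)

def H1(m):
    # Hash onto the group (toy exponent hashing; NOT a true full-domain hash).
    return pow(g, int.from_bytes(hashlib.sha256(m).digest(), "big") % q, p)

def H2(m, y, z):
    return hashlib.sha256(m + y.to_bytes(2, "big") + z.to_bytes(2, "big")).digest()

def F(x, m):
    # F_x(m) = H2(m, g^x, H1(m)^x), with public key y = g^x.
    return H2(m, pow(g, x, p), pow(H1(m), x, p))

# Oblivious evaluation: the user blinds H1(pw), the server exponentiates,
# the user unblinds -- the server never sees pw.
def user_blind(pw):
    rho = secrets.randbelow(q - 1) + 1
    return pow(H1(pw), rho, p), rho

def server_eval(x, A):
    return pow(A, x, p)

def user_finalize(pw, y, B, rho):
    rho_inv = pow(rho, -1, q)          # invert the blinding in the exponent
    return H2(pw, y, pow(B, rho_inv, p))
```

Since \(B = H_1(\mathsf {pw})^{\rho x}\) and the group has order q, raising B to \(\rho ^{-1} \bmod q\) recovers \(H_1(\mathsf {pw})^{x}\), so the user obtains exactly \(F_x(\mathsf {pw})\).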
In addition, it admits an oblivious evaluation that does not leak any information, thanks to the three simulators \(\mathcal{Sim}\), \(\mathcal{Sim}_U\) and \(\mathcal{Sim}_S\), as presented in Fig. 2: \(\mathcal{Sim}\) simulates an honest transcript, \(\mathcal{Sim}_U\) simulates an honest user interacting with a malicious server, and \(\mathcal{Sim}_S\) simulates an honest server interacting with a malicious user. These simulators will be used by our simulator in the full security proof. They generate views that are perfectly indistinguishable to the adversary, but they require \(\mathsf {CDH}_g(y,\cdot )\) and \(\mathsf {DDH}_g(y,\cdot ,\cdot )\) evaluations, and thus oracle access when the secret keys are not known. Since the indistinguishability of the PRF relies on the \(\mathsf {CDH}_g(y,\cdot )\) assumption, the overall security relies on the One-More Gap Diffie-Hellman (\(\mathsf {OMGDH}\)) assumption (see the full version [1]), as shown in the last step of the proof.
Theorem 2
For any adversary \(\mathcal {A}\) against Protocol I that corrupts no more than \(q_c\) servers, involves at most \(q_s\) instances of the servers and \(q_u\) instances of the user, and asks at most \(q_1\), \(q_2\), \(q_3\) queries to \(H_1\), \(H_2\), \(H_3\), respectively, we have
$$\mathsf {Adv}(\mathcal {A}) \le \left( q_u+ \frac{4q_s}{ n - 4q_c} \right) \times \frac{1}{\#\mathcal {D}} + \varepsilon ,$$
where \(\varepsilon = n \times \mathsf {Succ}^\mathsf {omgdh}(q_1,q_s,t,n\cdot q_u+ q_2) + (q_3^2+2) \cdot 2^{-\ell _3}/4\).
Security Proof. The complete and detailed proof of the theorem is given in the full version [1]. The rough idea is the following: in the real attack game, we focus on a unique user, against a static adversary (the corrupted servers are known right after the initialization, and before any reconstruction attempt). All the parameters are honestly generated, the simulator knows the secret information to answer the queries, and two keys \(K_0\) (random) and \(K_1\) (real), as well as a bit b, are selected randomly to answer \(\mathsf {Test}\)-queries. In the final game, we simulate all the answers to the adversary without using the password. A random value is chosen at the very end of the simulation and used as the password in order to decide whether some bad events should have occurred, which immediately upper-bounds the advantage of the adversary.
We first modify the way \(\mathsf {Execute}\)-queries are answered, using \(\mathcal{Sim}\), which perfectly simulates honest transcripts between the user and the servers, and we set the user's key to \(K_1\).
Then, we deal with \(\mathsf {Send}\)-queries to the honest user, trying to exclude the cases of fake public information \(\mathsf {PInfo}'\) (sent by the majority of servers): first, if the commitment \(\mathsf {Com}'\) in \(\mathsf {PInfo}'\) differs from the expected value C generated during the initialization, we proceed as before but eventually set \(K\leftarrow \perp \). This makes a difference for the adversary only if \(\mathsf {Com}'\) indeed contains the correct password \(\mathsf {pw}\), which we define as the event \(\mathsf {PWinC}\). This event can be evaluated using the list of queries asked to \(H_3\). A similar argument applies when a wrong \(\mathsf {PInfo}'\) is sent, but with a correct \(\mathsf {Com}\), under the binding property of the commitment \(H_3\).
Once this is fixed and the public values can be trusted, we can use \(\mathcal{Sim}_U\), which perfectly simulates a flow A from the user to a server, and can decide on the honesty of the servers' behavior. Then \(\mathcal{Sim}_U\) accepts with \(K\leftarrow K_1\) in the honest case, or aborts with \(K\leftarrow \perp \) otherwise. We thus answer \(\mathsf {Send}\)-queries without calling the \(H_1\) or \(H_2\) oracles, just using \(K_1\), and no secret sharing reconstruction is needed anymore.
The next step is to replace all the shares in the initialization phase by random and independent values. As long as the adversary does not get more than \(t_\ell =n/4\) of these shares, it cannot detect whether they are random or correct. We define \(\mathsf {PWinF}\) as the bad event that the adversary has enough evaluations of the PRF to notice the change. Again, our simulator is able to decide the event \(\mathsf {PWinF}\) by checking whether \(\mathsf {pw}\) has been queried with the right inputs to \(H_2\), and how many times. We eventually replace the hash value \(\mathsf {Com}\) in the initialization phase by a random value.
One can note that, in the end, the password \(\mathsf {pw}\) is no longer used during the simulation, except to determine whether the events \(\mathsf {PWinC}\) or \(\mathsf {PWinF}\) happened. In addition, \(K_1\) no longer appears during the initialization phase, hence one cannot distinguish \(K_0\) from \(K_1\): \(\mathsf {Succ}_{\mathcal {A}} = 1/2\) in the last game. As a consequence, \(\mathsf {Adv}(\mathcal {A}) \le \Pr [\mathsf {PWinC}] + \Pr [\mathsf {PWinF}] + \varepsilon \), where \(\varepsilon \) comes from the collisions or guesses in the random oracles. To evaluate the probabilities of the events \(\mathsf {PWinC}\) and \(\mathsf {PWinF}\), we choose a random password \(\mathsf {pw}\) at the very end only: \(\Pr [\mathsf {PWinC}]\) is clearly upper-bounded by \(q_u/\#\mathcal {D}\), since \(q_u\) is the maximal number of fake commitment attempts containing the right \(\mathsf {pw}\) that can differ from the expected ones; \(\mathsf {PWinF}\) means that the adversary managed to get \(n/4-q_c\) evaluations of the PRFs under the chosen \(\mathsf {pw}\), since it can evaluate on its own the values under the \(q_c\) corrupted servers. But unless the adversary gets more evaluations than the number \(q_s\) of queries asked to the servers (which can be excluded under the \(\mathsf {OMGDH}\) assumption), the number of bad passwords (for which it knows at least \(n/4-q_c\) evaluations of the PRFs) is less than \(q_s/(n/4-q_c)\). So the probability that the chosen \(\mathsf {pw}\) is such a bad password is less than \(q_s/(n/4-q_c) \times 1/\#\mathcal {D}\).
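Putting the pieces together, the bounds on the two bad events combine exactly into the statement of Theorem 2:
$$\mathsf {Adv}(\mathcal {A}) \le \Pr [\mathsf {PWinC}] + \Pr [\mathsf {PWinF}] + \varepsilon \le \frac{q_u}{\#\mathcal {D}} + \frac{q_s}{(n/4-q_c)\,\#\mathcal {D}} + \varepsilon = \left( q_u + \frac{4q_s}{n - 4q_c} \right) \times \frac{1}{\#\mathcal {D}} + \varepsilon .$$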
5.3 Protocol II: DDH-Based PRF
Our second instantiation makes use of the Naor-Reingold [25] pseudorandom function. We consider the group \(\mathbb {G}=\langle g \rangle \) of prime order q, where q is a safe prime: \(q=2s+1\). In the multiplicative group of scalars \(\mathbb {Z}^*_q\), we consider the cyclic group \(\mathbb {G}_s\) of order s (this is the group of elements in \(\mathbb {Z}^*_q\) with Jacobi symbol equal to +1). In both groups, the \(\mathsf {DDH}\) assumption can be made.
The PRF key is a tuple \(a = (a_0,a_1, \dots , a_\ell ) {\,\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}\,}(\mathbb {G}_s\backslash \{1\})^{\ell +1}\), and \(F_a(x) = g^{a_0\prod a_i^{x_i}}\), where \(x =(x_1, x_2, \dots , x_\ell )\in {\{0,1\}}^\ell \). This function has been proven to be a PRF on \(\ell \)-bit inputs under the \(\mathsf {DDH}\) assumption [25]. It also admits a simple oblivious evaluation (just the messages C and G from Fig. 3), using a multiplicatively homomorphic encryption scheme \((\mathsf {Enc}_\mathsf {pk}, \mathsf {Dec}_\mathsf {sk})\) in \(\mathbb {G}_s\), such as ElGamal, which allows the computation of C from x, \(\alpha \), and the ciphertexts \(\{c_i\}_i\). Unfortunately, without additional proofs, this is not secure against malicious users, since it works only for honest inputs \(x\in {\{0,1\}}^\ell \). Hence the more involved protocol presented in Fig. 3, which makes use of a zero-knowledge proof of knowledge of \((x_i)_i\in {\{0,1\}}^\ell \) and \(\alpha \in \mathbb {G}_s\). This can be done efficiently under the sole \(\mathsf {DDH}\) assumption. Whereas our oblivious evaluation of the PRF is in the standard model, the overall PPSS protocol based on this OPRF is in the random-oracle model, as it makes use of the RGTSSS. As a consequence, one could replace the interactive ZK proofs by NIZK proofs “à la Schnorr”, which would reduce the number of flows to only 2. The full proof of our Protocol II (including the DDH-based OPRF) can be found in the full version [1].
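As an illustration of the key structure and the (non-oblivious) evaluation, here is a minimal Python sketch over toy parameters, assuming \(q = 23\) (so \(s = 11\)) and the order-q subgroup of \(\mathbb {Z}^*_{47}\). The sizes are illustrative only, and the oblivious evaluation with ElGamal and the ZK proofs of Fig. 3 is not sketched here.

```python
import secrets

# Toy parameters: q = 2s + 1 = 23 is a safe prime (s = 11); the PRF values
# live in the order-q subgroup <g> of Z_p^*, with p = 2q + 1 = 47 and g = 2.
s, q, p, g = 11, 23, 47, 2

def keygen(ell):
    # Key a = (a_0, ..., a_ell), each a_i drawn from G_s \ {1}:
    # the subgroup of squares mod q (Jacobi symbol +1).
    key = []
    while len(key) < ell + 1:
        a = pow(secrets.randbelow(q - 1) + 1, 2, q)
        if a != 1:
            key.append(a)
    return key

def naor_reingold(a, x_bits):
    # F_a(x) = g^(a_0 * prod_{x_i = 1} a_i), exponents multiplied mod q.
    e = a[0]
    for bit, a_i in zip(x_bits, a[1:]):
        if bit:
            e = (e * a_i) % q
    return pow(g, e, p)
```

Since every \(a_i\) lies in the order-s subgroup of \(\mathbb {Z}^*_q\), the exponent stays in \(\mathbb {G}_s\), and every output is an element of \(\langle g \rangle \).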