1 Introduction

Password-based authenticated key exchange (PAKE) protocols [1] allow users who share only a short, low-entropy password to agree on a cryptographically strong session key. PAKE protocols are fascinating from a theoretical perspective, as they can be viewed as a means of “bootstrapping” a common cryptographic key from the (essentially) minimal setup assumption of a short, shared secret. PAKE protocols are also important in practice, since passwords are perhaps the most common and widely-used means of authentication. In this paper, we consider PAKE protocols in the group setting where the number of users involved in the computation of a common session key can be large.

The difficulty in designing password-based protocols is to prevent off-line dictionary attacks, whereby an eavesdropping adversary exhaustively enumerates candidate passwords, attempting to match one to the eavesdropped session. On the other hand, the adversary can always determine the correct password via an on-line dictionary attack, in which it tries to impersonate one of the parties using each candidate password in turn. Although on-line dictionary attacks cannot be prevented, the damage they cause can be mitigated by other means, such as limiting the number of failed login attempts. Roughly, a secure password-based protocol guarantees that an exhaustive on-line dictionary attack is the “best” possible strategy for an adversary.

1.1 Related Work

Group Key Exchange Protocols. Bresson et al. [2] introduced a formal security model for group key exchange protocols and proposed the first provably secure protocol for this setting. Their protocol uses a ring communication structure, in which each user has to wait for the message from his predecessor before producing his own. Unfortunately, this structure makes their protocol quite impractical for large groups, since the number of communication rounds is linear in the number of group users. Later, Burmester and Desmedt [3, 4] proposed a more efficient and practical group key exchange protocol, in which the number of communication rounds is constant. Their protocol was formally analyzed by Katz and Yung [5], who also proposed the first constant-round, fully scalable authenticated group key exchange protocol provably secure in the standard model. Recently, Boneh and Zhandry [6] constructed the first multiparty non-interactive key exchange protocol requiring no trusted setup, and gave formal security proofs in the static and semi-static models.

Password-Based Group Key Exchange Protocols. Adding password authentication to a group key exchange protocol is not trivial, since redundancy in the flows of the protocol can open the door to dictionary attacks. Bresson et al. [7] proposed the first solution to the group Diffie-Hellman key exchange problem in the password-based scenario. However, their protocol requires a number of rounds linear in the number of group users, which is impractical for large groups, and its security analysis relies on ideal models. Later, two password-based versions [8, 9] of the Burmester-Desmedt protocol were proposed; unfortunately, neither of them is secure [10]. Abdalla et al. [10] then proposed the first password-based group key exchange protocol with a constant number of rounds. Their protocol is provably secure in the random oracle and ideal cipher models.

To date, there are only a few general approaches for constructing password-based group key exchange protocols in the standard model (i.e., without random oracles). Abdalla and Pointcheval [11] constructed the first such protocol with a proof of security in the standard model. Their protocol combines smooth projective hash functions with the construction of Burmester and Desmedt [3, 4] and requires only 5 rounds of communication, but relies on a common reference string. Later, Abdalla et al. [12] presented a compiler that transforms any provably secure (password-based) two-party key exchange protocol into a provably secure (password-based) group key exchange protocol with two additional rounds of communication. Their compiler uses non-interactive, non-malleable commitment schemes as its main technical tool and also requires a common reference string.

1.2 Technical Contributions

Round complexity is a central measure of efficiency for any interactive protocol. In this paper, our main goal is to further improve the bounds on the round complexity of password-based group key exchange protocols.

Towards this goal, we propose the first one-round password-based group key exchange protocol that is provably secure in the standard model. Our main tool is indistinguishability obfuscation, for which a candidate construction was recently proposed by Garg et al. [14]. The essential idea is the following: the public parameters consist of an obfuscated program for a pseudorandom function PRF that requires knowledge of the password pw to operate, so that each user in the group can independently evaluate the obfuscated program to obtain the session key. To prevent off-line dictionary attacks, we additionally require the random value \(r_i\) used to generate the ciphertext \(c_i\) as an input to the obfuscated program.

Our second contribution is a two-round password-based group key exchange protocol without any setup. The existing constructions require a trusted setup to publish public parameters, which means that whoever generates the parameters can obtain all group users’ passwords and compute the agreed session key. This may be less appealing than the “plain” model, in which there is no additional setup. Motivated by this observation, we propose a completely new approach to password-based group key exchange with no trusted setup. The resulting scheme is the first secure password-based group key exchange protocol that relies on neither a random oracle nor a setup, and it requires only two rounds of communication. Our central challenge is to let each group user run the setup for himself securely. At first glance, it seems that letting each user publish an obfuscated program might fully resolve this problem. However, such an approach fails because a published program can be replaced by the adversary: in particular, the adversary may publish a malicious program that simply outputs the input password. To prevent such attacks, we extend the Burmester-Desmedt protocol framework [3, 4] to the password setting, replacing the Diffie-Hellman key exchanges by indistinguishability obfuscation and letting each user generate two obfuscated programs. The first obfuscated program is used to obtain the other users’ random values, and the second is used to generate a key shared with the user’s neighbors, where the output of the first program serves only as input to the second. Moreover, each user’s partial message broadcast for computing the session key is generated by his own program and cannot be replaced. Thus, even if the adversary replaces some programs, no password information is disclosed.

1.3 Outline of the Paper

The rest of this paper is organized as follows. Section 2 recalls the security model commonly used for password-based group key exchange protocols, and Sect. 3 recalls the definitions of the cryptographic primitives essential for our study. We then propose two round-optimal constructions for password-based group key exchange in Sects. 4 and 5, respectively. Section 6 concludes.

2 Password-Based Group Key Exchange

In this section, we briefly recall the formal security model for password-based group key exchange protocols as presented in [10] (which is based on the model by Bresson [13]).

In a password-based group key exchange protocol, we assume for simplicity a fixed, polynomial-size set \(\mathcal {U} = \{U_1,\ldots , U_l\}\) of potential users. Each user \(U \in \mathcal {U}\) may belong to several subgroups \(\mathcal {G}\subseteq \mathcal {U}\), each of which has a unique password \(pw_\mathcal {G}\) associated with it. The password \(pw_\mathcal {G}\) is known to all the users \(U_i\in \mathcal {G}\) wishing to establish a common session key.

Let \(U^{\langle i\rangle }\) denote the i-th instance of a participant U and b be a bit chosen uniformly at random. During the execution of the protocol, an adversary \(\mathcal {A}\) may interact with the protocol participants via several oracle queries, which model the adversary’s possible attacks in a real execution. The possible oracle queries are listed below:

  • Execute(\(U_1^{\langle i_1\rangle }, \ldots , U_n^{\langle i_n\rangle }\)): This query models passive attacks in which the attacker eavesdrops on honest executions among the user instances \(U_1^{\langle i_1\rangle }, \ldots , U_n^{\langle i_n\rangle }\). It returns the messages that were exchanged during an honest execution of the protocol.

  • Send(\(U^{\langle i\rangle }, m\)): This oracle query is used to simulate active attacks, in which the adversary may tamper with the message being sent over the public channel. It returns the message that the user instance \(U^{\langle i\rangle }\) would generate upon receipt of message m.

  • Reveal(\(U^{\langle i\rangle }\)): This query models the possibility that an adversary gets session keys. It returns to the adversary the session key of the user instance \(U^{\langle i\rangle }\).

  • Test(\(U^{\langle i\rangle }\)): This query tries to capture the adversary’s ability to tell apart a real session key from a random one. It returns the session key for instance \(U^{\langle i\rangle }\) if \(b=1\) or a random number of the same size if \(b=0\). This query is called only once.

Besides the above oracle queries, we define the following terminology.

  • Partnering: Let the session identifier sid\(^i\) of a user instance \(U^{\langle i\rangle }\) be a function of all the messages sent and received by \(U^{\langle i\rangle }\) as specified by the protocol. Let the partner identifier pid\(^i\) of a user instance \(U^{\langle i\rangle }\) be the set of all participants with whom \(U^{\langle i\rangle }\) wishes to establish a common session key. Two instances \(U_1^{\langle i_1\rangle }\) and \(U_2^{\langle i_2\rangle }\) are said to be partnered if and only if pid\(_1^{i_1} =\ \)pid\(_2^{i_2}\) and sid\(_1^{i_1} =\ \)sid\(_2^{i_2}\).

  • Freshness: We say an instance \(U^{\langle i\rangle }\) is fresh if the following conditions hold: (1) \(U^{\langle i\rangle }\) has accepted the protocol and generated a valid session key; (2) No Reveal queries have been made to \(U^{\langle i\rangle }\) or to any of its partners.

Correctness. A password-based group key exchange protocol is correct if, whenever two instances \(U_1^{\langle i_1\rangle }\) and \(U_2^{\langle i_2\rangle }\) are partnered and have accepted, both instances hold the same non-null session key.

Security. For any adversary \(\mathcal {A}\), let \({ Succ}(\mathcal {A})\) be the event that \(\mathcal {A}\) makes a single Test query directed to some fresh instance \(U^{\langle i\rangle }\) at the end of a protocol P and correctly guesses the bit b used in the Test query. Let \(\mathcal {D}\) be the user’s password dictionary (i.e., the set of all possible candidate passwords). The advantage of \(\mathcal {A}\) in violating the semantic security of the protocol P is defined as:

$$Adv _{P,\mathcal {D}}(\mathcal {A}) = |2Pr[Succ(\mathcal {A})]-1|.$$

Definition 1

(Security). A password-based group key exchange protocol P is said to be secure if for every dictionary \(\mathcal {D}\) and every (non-uniform) polynomial-time adversary \(\mathcal {A}\),

$$Adv _{P,\mathcal {D}}(\mathcal {A})< \mathcal {O}(q_s)/|\mathcal {D}|+negl(\lambda ),$$

where \(q_s\) is the number of Send oracle queries made by the adversary to different protocol instances and \(\lambda \) is a security parameter.

3 Preliminaries

In this section we start by briefly recalling the definition of different cryptographic primitives essential for our study. Let \(x\leftarrow S\) denote a uniformly random element drawn from the set S.

3.1 Indistinguishability Obfuscation

We will start by recalling the notion of indistinguishability obfuscation (iO) recently realized in [14] using candidate multilinear maps [15].

Definition 2

(Indistinguishability Obfuscation). An indistinguishability obfuscator iO for a circuit class \(\mathcal {C}_\lambda \) is a uniform PPT algorithm satisfying the following conditions:

  • iO(\(\lambda , C\)) preserves the functionality of C. That is, for any \(C\in \mathcal {C}_\lambda \), if we compute \(C'=\)iO\((\lambda , C)\), then \(C'(x)= C(x)\) for all inputs x.

  • For any \(\lambda \) and any two circuits \(C_0, C_1\in \mathcal {C}_\lambda \) with the same functionality, the circuits iO(\(\lambda , C_0\)) and iO(\(\lambda , C_1\)) are indistinguishable. More precisely, for all pairs of PPT adversaries (Samp, D) there exists a negligible function \(\alpha \) such that, if

    $$\text {Pr}[\forall x, C_0(x)=C_1(x): (C_0, C_1, \tau )\leftarrow \text {Samp}(\lambda )]>1- \alpha (\lambda )$$

    then

    $$|\text {Pr}[\text {D}(\tau , \text {iO}(\lambda , C_0))=1]-\text {Pr}[\text {D}(\tau , \text {iO}(\lambda , C_1))=1]|<\alpha (\lambda )$$

In this paper, we will make use of such indistinguishability obfuscators for all polynomial-size circuits:

Definition 3

(Indistinguishability Obfuscation for P/ poly ). A uniform PPT machine iO is called an indistinguishability obfuscator for P/poly if the following holds: Let \(\mathcal {C}_\lambda \) be the class of circuits of size at most \(\lambda \). Then iO is an indistinguishability obfuscator for the class \(\{\mathcal {C}_\lambda \}\).

3.2 Constrained Pseudorandom Functions

A pseudorandom function (PRF) [16] is a function PRF: \(\mathcal {K}\times \mathcal {X}\rightarrow \mathcal {Y}\) such that PRF(\(k,\cdot \)) is indistinguishable from a random function for a randomly chosen key k. Following Boneh and Waters [17], we recall the definition of a constrained pseudorandom function.

Definition 4

(Constrained Pseudorandom Function). A PRF F: \(\mathcal {K}\times \mathcal {X}\rightarrow \mathcal {Y}\) is said to be constrained with respect to a set system \(\mathcal {S}\subseteq 2^{\mathcal {X}}\) if there is an additional key space \(\mathcal {K}_\mathcal {C}\) and two additional algorithms:

  • F.constrain(k, S): On input a PRF key \(k\in \mathcal {K}\) and the description of a set S\(\in \mathcal {S}\) (so that S \(\subseteq \mathcal {X}\)), the algorithm outputs a constrained key \(k_S \in \mathcal {K}_\mathcal {C}\).

  • F.eval(\(k_S, x\)): On input \(k_S \in \mathcal {K}_\mathcal {C}\) and \(x \in \mathcal {X}\), the algorithm outputs

    $$\begin{aligned} F.eval(k_S, x) = \left\{ \begin{array}{ll} F(k,x) &{} \text { if } x \in S\\ \bot &{} \text {otherwise} \end{array} \right. \end{aligned}$$

For ease of notation, we write \(F(k_S,x)\) to represent \(F.eval(k_S, x)\).
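To make the interface above concrete, the following is a minimal sketch of the classical GGM-style prefix-constrained PRF, where the set system \(\mathcal {S}\) consists of all sets of bit strings sharing a common prefix. The hash-based PRG and all names are illustrative assumptions made only for this sketch and are not part of our constructions.

```python
import hashlib

def _prg(seed):
    # Length-doubling PRG built from a hash (an idealized assumption for this sketch).
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def prf(key, x):
    # F(k, x): walk the GGM tree according to the bits of the input string x.
    node = key
    for bit in x:
        node = _prg(node)[int(bit)]
    return node

def constrain(key, prefix):
    # k_S for S = {x : x has the given prefix} is simply the tree node at `prefix`.
    node = key
    for bit in prefix:
        node = _prg(node)[int(bit)]
    return (prefix, node)

def eval_constrained(k_S, x):
    # F.eval(k_S, x): agrees with F(k, x) on S and outputs bottom (None) elsewhere.
    prefix, node = k_S
    if not x.startswith(prefix):
        return None
    for bit in x[len(prefix):]:
        node = _prg(node)[int(bit)]
    return node

# The constrained key agrees with the master key inside S and is useless outside it.
k = hashlib.sha256(b"master key").digest()
k_S = constrain(k, "01")
assert eval_constrained(k_S, "0110") == prf(k, "0110")
assert eval_constrained(k_S, "10") is None
```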

Security. Intuitively, we require that even after obtaining several constrained keys, no polynomial time adversary can distinguish a truly random string from the PRF evaluation at a point not queried. This intuition can be formalized by the following security game between a challenger and an adversary \(\mathcal {A}\).

Let F: \(\mathcal {K}\times \mathcal {X}\rightarrow \mathcal {Y}\) be a constrained PRF with respect to a set system \(\mathcal {S}\subseteq 2^{\mathcal {X}}\). The security game consists of three phases:

  • Setup Phase. The challenger chooses a random key \(K\leftarrow \mathcal {K}\) and a random bit \(b\leftarrow \{0,1\}\).

  • Query Phase. In this phase, \(\mathcal {A}\) is allowed to ask for the following queries:

    • Evaluation Query: On input \(x\in \mathcal {X}\), it returns F(K, x).

    • Key Query: On input \(S\in \mathcal {S}\), it returns F.constrain(K, S).

    • Challenge Query: \(\mathcal {A}\) sends \(x\in \mathcal {X}\) as a challenge query. If \(b = 0\), the challenger outputs F(K, x). Else, the challenger outputs a random element \(y\leftarrow \mathcal {Y}\).

  • Guess Phase. \(\mathcal {A}\) outputs a guess \(b'\) of b.

Let \(E\subseteq \mathcal {X}\) be the set of evaluation queries, \(C\subseteq \mathcal {S}\) the set of constrained key queries and \(Z\subseteq \mathcal {X}\) the set of challenge queries. \(\mathcal {A}\) wins if \(b = b'\), \(E\cap Z = \emptyset \), and \(S\cap Z = \emptyset \) for every \(S\in C\). The advantage of \(\mathcal {A}\) is defined to be Adv\(_{\mathcal {A}}^F(\lambda ) = |\text {Pr}[\mathcal {A}\ \text {wins}]-1/2|\).
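The game above can be summarized by the following minimal harness, a sketch assuming abstract callables keygen(), F(K, x), constrain(K, S) and sample_Y(); the adversary is modelled as a callback receiving the three oracles, and all names are illustrative.

```python
import secrets

def constrained_prf_game(keygen, F, constrain, sample_Y, adversary):
    K = keygen()
    b = secrets.randbits(1)
    E, Z, queried_sets = set(), set(), []

    def eval_oracle(x):            # Evaluation Query
        E.add(x)
        return F(K, x)

    def key_oracle(S):             # Key Query (S must support membership tests)
        queried_sets.append(S)
        return constrain(K, S)

    def challenge_oracle(x):       # Challenge Query: real value if b = 0, random otherwise
        Z.add(x)
        return F(K, x) if b == 0 else sample_Y()

    b_guess = adversary(eval_oracle, key_oracle, challenge_oracle)
    trivial = bool(E & Z) or any(x in S for S in queried_sets for x in Z)
    return (b_guess == b) and not trivial   # True iff the adversary wins
```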

Definition 5

The PRF F is a secure constrained PRF with respect to \(\mathcal {S}\) if for all probabilistic polynomial time adversaries \(\mathcal {A}\), Adv\(_{\mathcal {A}}^F(\lambda )\) is negligible in \(\lambda \).

3.3 CCA Secure Encryption

Definition 6

(Public-Key Encryption). A public-key encryption scheme \(\Sigma \) consists of three algorithms:

  • Gen: (randomized) key generation algorithm. It outputs a pair (pk, sk) consisting of a public key and a secret key, respectively.

  • Enc: (randomized) encryption algorithm. It outputs a ciphertext \(c = Enc_{pk}(m)\) for any message m and a valid public key pk.

  • Dec: deterministic decryption algorithm. It outputs \(m = Dec_{sk}(c)\) or \(\bot = Dec_{sk}(c)\) for a ciphertext c and a secret key sk.

In order to make the randomness used by Enc explicit, we write \(Enc_{pk}(m;r)\) to highlight the fact that random coins r are used to encrypt the message m.

Perfect Correctness. We say that the encryption scheme has perfect correctness if, for an overwhelming fraction of the randomness used by the key generation algorithm, for all messages m we have Pr\([Dec_{sk}(Enc_{pk}(m)) = m] = 1\).

CCA Security [18]. The CCA security of \(\Sigma = (Gen, Enc, Dec)\) is defined via the following security game between a challenger and an adversary \(\mathcal {A}\):

  1. The challenger generates \((pk, sk)\leftarrow Gen(1^\lambda )\) and \(b\leftarrow \{0,1\}\), and gives pk to \(\mathcal {A}\).

  2. The adversary \(\mathcal {A}\) asks decryption queries c, which are answered with the message \(Dec_{sk}(c)\).

  3. The adversary \(\mathcal {A}\) sends \((m_0,m_1)\) with \(|m_0| = |m_1|\) to the challenger, and receives a challenge ciphertext \(c^* =Enc_{pk}(m_b)\).

  4. The adversary \(\mathcal {A}\) asks further decryption queries \(c\ne c^*\), which are answered with the message \(Dec_{sk}(c)\).

  5. The adversary \(\mathcal {A}\) outputs a bit \(b'\), and wins the game if \(b' = b\).

We say that a PKE scheme \(\Sigma \) is CCA secure if for all (non-uniform) probabilistic polynomial time adversaries \(\mathcal {A}\), \(|\text {Pr}[b' = b]-1/2|\) is negligible.
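For concreteness, the following is a minimal sketch of this game as a harness over abstract callables gen(), enc(pk, m), dec(sk, c), with the adversary split into one callback per phase; all names are illustrative assumptions.

```python
import secrets

def cca_game(gen, enc, dec, adversary_phase1, adversary_phase2):
    pk, sk = gen()
    b = secrets.randbits(1)

    # Phase 1: decryption queries, after which the adversary outputs (m0, m1) and its state.
    m0, m1, state = adversary_phase1(pk, lambda c: dec(sk, c))
    assert len(m0) == len(m1)
    c_star = enc(pk, m0 if b == 0 else m1)

    # Phase 2: further decryption queries, except on the challenge ciphertext.
    def dec_oracle(c):
        if c == c_star:
            raise ValueError("decryption of the challenge ciphertext is not allowed")
        return dec(sk, c)

    b_guess = adversary_phase2(pk, c_star, dec_oracle, state)
    return b_guess == b   # the scheme is CCA secure if this holds with probability ~1/2
```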

4 One-Round Password-Based Group Key Exchange Protocol

In this section we present our construction of a one-round password-based group key exchange protocol. The idea is the following: each user broadcasts a ciphertext \(c_i\) of the password pw computed with randomness \(r_i\). In the setup phase, a key K is chosen for a constrained pseudorandom function PRF. The shared session key will be the function PRF evaluated at the concatenation of the ciphertexts \(c_i\) and pw. To allow each user to compute the session key, the setup publishes an obfuscated program for PRF which requires knowledge of the password pw to operate. However, the adversary may obtain the obfuscated program for PRF and then mount an off-line dictionary attack: it guesses a password \(pw^*\), feeds it to the obfuscated program, and, by observing whether the program outputs \(\bot \), can find the correct password. Therefore, besides the password pw, the randomness \(r_i\) is also required as an input to the obfuscated program. In this way, all group users can compute the session key, while anyone without the password is unable to do so.
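The following is a minimal sketch of this logic, in which enc, prf and the user interfaces are abstract callables and all names are illustrative; in the real protocol only an obfuscation of the program is published (see Figs. 1 and 2).

```python
import secrets

def make_program(K, pk, enc, prf):
    # Un-obfuscated version of the program published in the setup phase;
    # the protocol publishes iO(P_PGKE) rather than P_PGKE itself.
    def P_PGKE(ciphertexts, pw, i, r_i):
        # Re-encrypt the claimed password with the claimed randomness and
        # compare it against the i-th broadcast ciphertext.
        if ciphertexts[i] != enc(pk, pw, r_i):
            return None                       # bottom: caller does not know (pw, r_i)
        return prf(K, tuple(ciphertexts) + (pw,))
    return P_PGKE

def user_round(pk, pw, enc):
    # The single communication round: user i broadcasts c_i and keeps r_i secret.
    r_i = secrets.token_bytes(16)
    c_i = enc(pk, pw, r_i)
    return r_i, c_i

def derive_session_key(program, ciphertexts, pw, i, r_i):
    # Each group member evaluates the published (obfuscated) program locally.
    return program(ciphertexts, pw, i, r_i)
```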

A formal description appears in Fig. 1. The correctness is trivial by inspection. For security, we have the following theorem.

Fig. 1. An honest execution of the password-based group key exchange protocol

Theorem 1

If \(\Sigma \) is a CCA-secure public-key encryption scheme, PRF a secure constrained PRF, and iO a secure indistinguishability obfuscator, then the protocol in Fig. 1 is a secure password-based group key exchange protocol.

Proof. Fix a PPT adversary \(\mathcal {A}\) attacking the password-based group key exchange protocol. We use a hybrid argument to bound the advantage of \(\mathcal {A}\). Let \(\mathbf {Hyb}_0\) represent the initial experiment, in which \(\mathcal {A}\) interacts with the real protocol as defined in Sect. 2. We define a sequence of experiments \(\mathbf {Hyb}_1,\ldots ,\mathbf {Hyb}_5\), and denote the advantage of adversary \(\mathcal {A}\) in experiment \(\mathbf {Hyb}_i\) as:

$$\text {Adv}_i(\mathcal {A})\overset{\text {def}}{=} 2\cdot \text {Pr} [\mathcal {A} \ \text {succeeds in}\ \mathbf {Hyb}_i]-1.$$

We bound the difference between the adversary’s advantage in successive experiments, and then bound the adversary’s advantage in the final experiment. Finally, combining all the above results gives the desired bound on Adv\(_0(\mathcal {A})\), the adversary’s advantage when attacking the real protocol.

Fig. 2. The program \(P_{PGKE}\)

Experiment \(\mathbf {Hyb}_1\) . In this experiment, whenever a session key needs to be computed by an honestly simulated user instance \(U^{\langle i \rangle }\), we compute it directly as \(sk_U^{\langle i \rangle } = PRF(K,c_1,\cdots ,c_n,pw_i)\) instead of by calling the obfuscated program \(P_{iO}(c_1,\cdots ,c_n,pw_i,i,r_i)\).

Lemma 1

\(\text {Adv}_{0}(\mathcal {A}) = \text {Adv}_{1}(\mathcal {A}).\)

Proof

Notice that, for an honestly simulated instance, the verification \(c_i = \mathbf {Enc}_{pk}(pw;r_i)\) in program \(P_{PGKE}\) always holds. Therefore, this verification step can be omitted without changing the adversary’s view or advantage.

Experiment \(\mathbf {Hyb}_2\) . For each honest simulated user instance \(U_i^{\langle s\rangle }\), which is involved in either an \(\mathbf {Execute}\) or a \(\mathbf {Send}\) query, we compute \(c_i = \mathbf {Enc}_{pk}(pw_0;r_i)\) instead of \(c_i = \mathbf {Enc}_{pk}(pw_i;r_i)\), where \(pw_0\) represents some dummy password not in the dictionary \(\mathbf {Dict}\) but in the plaintext space of the encryption scheme \(\Sigma \).

Lemma 2

\(| \text {Adv}_{1}(\mathcal {A}) - \text {Adv}_{2}(\mathcal {A}) | < negl(\lambda )\).

Proof

First note that, with respect to the honestly simulated users, the verification procedure in program \(P_{PGKE}\) was removed in the previous experiment. Let \(q_{es} = q_{exe}+q_{send}\). We define \(\mathbf {Hyb}_1^{(\eta )}\) (\(0 \le \eta \le n\cdot q_{es}\)) to be a sequence of hybrid variants of experiment \(\mathbf {Hyb}_1\) such that, for every \(\eta = n\cdot \xi + \gamma , 0 \le \xi < q_{es}, 0 \le \gamma \le n\), the first \(\xi \) \(\mathbf {Execute}\) or \(\mathbf {Send}\) queries are answered according to experiment \(\mathbf {Hyb}_2\) and the last \(q_{es}-\xi -1\) queries are answered as in experiment \(\mathbf {Hyb}_1\); when the \((\xi +1)\)-th \(\mathbf {Execute}\) or \(\mathbf {Send}\) query is asked, the first \(\gamma \) ciphertexts of \({(c_1,c_2,\cdots ,c_n)}\) are computed according to experiment \(\mathbf {Hyb}_2\) and the remaining \(n-\gamma \) ciphertexts are computed as in experiment \(\mathbf {Hyb}_1\). As one can easily verify, the hybrids \(\mathbf {Hyb}_1^{(0)}\) and \(\mathbf {Hyb}_1^{(n\cdot q_{es})}\) are equivalent to the experiments \(\mathbf {Hyb}_1\) and \(\mathbf {Hyb}_2\), respectively.

In this case, if there is an adversary \(\mathcal {A}\) whose advantage gap between \(\mathbf {Hyb}_1\) and \(\mathbf {Hyb}_2\) is non-negligible in the security parameter, then there exists an \(\eta \) such that the adversary’s advantage gap between \(\mathbf {Hyb}_1^{(\eta -1)}\) and \(\mathbf {Hyb}_1^{(\eta )}\) is non-negligible. We would then be able to build an adversary \(\mathcal {B}\) violating the CPA security of the encryption scheme \(\Sigma \) with non-negligible advantage from the adversary \(\mathcal {A}\), as follows.

Upon receiving the public key pk of the encryption scheme \(\Sigma \) from his challenger, the adversary \(\mathcal {B}\) initializes the public parameters for the group key exchange protocol. It selects a random \(K \in \mathcal {K}\), chooses a password \(pw_i\) for every user \(U_i\in \mathbf {U}\), and picks a bit \(b \in \{0,1\}\) for answering the \(\mathbf {Test}\) oracle. Then, for \(\eta = n\cdot \xi + \gamma \), it simulates the \(\mathbf {Execute}, \mathbf {Send}, \mathbf {Reveal}\) and \(\mathbf {Test}\) oracles exactly as in hybrid \(\mathbf {Hyb}_1^{(\eta )}\), except for the \(\gamma \)-th ciphertext of \({(c_1,c_2,\cdots ,c_n)}\) in the \((\xi +1)\)-th \(\mathbf {Execute}\) or \(\mathbf {Send}\) query. In this case, the adversary \(\mathcal {B}\) gives the real password pw and the dummy password \(pw_0\) to its challenger to obtain a challenge ciphertext \(c_i^{*}\) that is either \(\mathbf {Enc}_{pk}(pw)\) or \(\mathbf {Enc}_{pk}(pw_0)\), and it uses this ciphertext in place of \(c_i\) when answering the \((\xi +1)\)-th query. Finally, \(\mathcal {B}\) checks whether \(\mathcal {A}\) succeeds. If \(\mathcal {A}\) succeeds in this hybrid game, then \(\mathcal {B}\) outputs 1; otherwise, it outputs 0.

The distinguishing advantage of \(\mathcal {B}\) is exactly equal to the adversary \(\mathcal {A}\)’s advantage gap between \(\mathbf {Hyb}_1^{(\eta -1)}\) and \(\mathbf {Hyb}_1^{(\eta )}\). The lemma then follows by noting that the CCA-secure encryption scheme \(\Sigma \) is, in particular, CPA secure.

Experiment \(\mathbf {Hyb}_3\) . In this experiment, we first let the simulator record the corresponding decryption key sk when generating the public key pk. Then, we define the following event:

\(\texttt {PwdGuess}:\) During the experiment, an honest user instance \(U^{\langle i \rangle }\) with password \(pw_i\) is activated by some input message \((c_1,\cdots ,c_{i-1},\perp ,c_{i+1},\cdots ,c_n)\), such that there exists some index \(j\in [n]\) and \(j \ne i\) satisfying \(Dec_{sk}(c_j) = pw_i\).

Whenever the event \(\texttt {PwdGuess}\) happens, the adversary is declared successful and the experiment ends; otherwise, the experiment is simulated in the same way as in the previous experiment.

Lemma 3

\(\text {Adv}_{2}(\mathcal {A}) \le \text {Adv}_{3}(\mathcal {A}) \).

Proof

Even when the event \(\texttt {PwdGuess}\) happens in experiment \(\mathbf {Hyb}_2\), the adversary does not necessarily succeed. Hence, the modification made in experiment \(\mathbf {Hyb}_3\) only introduces a new way for the adversary to succeed, and the advantage can only increase.

Experiment \(\mathbf {Hyb}_4\). Replace the \(PRF(\cdot )\) in \(P_{PGKE}\) by a constrained pseudorandom function \(PRF^{C}(\cdot )\), arriving at the program \(P^{\prime }_{PGKE}\) given in Fig. 3. The constrained set C is defined as \( C =\mathcal {M}^n \times \mathbf {Dict}\setminus \{ (c_1,c_2,\cdots ,c_n,pw):pw\in \mathbf {Dict},\forall i\in [n], c_i \notin Enc_{pk}(pw),\) and \(\exists j \in [n], c_j \in Enc_{pk}(pw_0)\}. \)

Lemma 4

\(| \text {Adv}_{3}(\mathcal {A}) - \text {Adv}_{4}(\mathcal {A}) | < negl(\lambda )\).

Proof

Because the dummy password \(pw_0\) in experiment \(\mathbf {Hyb}_2\) is drawn from the plaintext space at random, with overwhelming probability (in fact, at least \(1-n/(|\mathbf {Dict}|\cdot 2^{\lambda })\)) the input to the pseudorandom function PRF in program \(P_{PGKE}\) belongs to the set C defined above. Therefore, with overwhelming probability, the modified program \(P^{\prime }_{PGKE}\) has the same functionality as the original program \(P_{PGKE}\). The security of the indistinguishability obfuscator iO implies that the adversary’s advantage gap between experiments \(\mathbf {Hyb}_4\) and \(\mathbf {Hyb}_3\) is no more than the probability that \(P^{\prime }_{PGKE}\) differs from \(P_{PGKE}\), and is thus negligible. The lemma follows.

Fig. 3. The program \(P'_{PGKE}\)

Experiment \(\mathbf {Hyb}_5\) . Recall that, since \(\mathbf {Hyb}_3\), we only consider the situation in which the event \(\texttt {PwdGuess}\) does not happen (i.e., \((c_1,c_2,\cdots ,c_n,pw_i) \notin C\)). Then, whenever a session key needs to be computed by an honest user instance, we set \(sk_U^{\langle i \rangle } \leftarrow _{R} \{0,1\}^{\lambda }\) instead of \(sk_U^{\langle i \rangle } = PRF(K,c_1,c_2,\cdots ,c_n,pw_i)\).

Lemma 5

\(| \text {Adv}_{4}(\mathcal {A}) - \text {Adv}_{5}(\mathcal {A}) | < negl(\lambda )\).

Proof

We reduce the problem of distinguishing the experiments \(\mathbf {Hyb}_4\) and \(\mathbf {Hyb}_5\) to the security of the constrained PRF presented above. Assume that \(\mathcal {A}\) is a protocol adversary as defined in \(\mathbf {Hyb}_4\). We construct a PRF adversary \(\mathcal {B}\) against the security of the constrained pseudorandom function PRF as follows. When the adversary \(\mathcal {B}\) receives the constrained key \(k_{C}\) of PRF with respect to the constrained set C, it simulates the protocol execution for \(\mathcal {A}\) as in \(\mathbf {Hyb}_4\). Note that the program \(P^{\prime }_{PGKE}\) is used in this experiment, and all the queries asked by \(\mathcal {A}\) can be answered with overwhelming probability by using it. However, when an honestly simulated user instance needs to generate a session key, \(\mathcal {B}\) asks its own challenge query, getting back either a value computed from the function PRF or a value selected uniformly at random, and uses it as the session key. Finally, \(\mathcal {B}\) checks whether \(\mathcal {A}\) succeeds. If \(\mathcal {A}\) succeeds, then \(\mathcal {B}\) outputs 1; otherwise, it outputs 0.

It follows that the advantage of \(\mathcal {B}\) is exactly equal to the adversary \(\mathcal {A}\)’s advantage gap between \(\mathbf {Hyb}_4\) and \(\mathbf {Hyb}_5\).

Bounding the Advantage in Hyb \(_5\) . Consider the different ways for the adversary to succeed in Hyb\(_5\):

  1. The event \(\texttt {PwdGuess}\) happens;

  2. The adversary successfully guesses the bit used by the Test oracle.

Since all oracle instances are simulated using dummy passwords, the adversary’s view is independent of the passwords that are chosen for each group of users. Then we have Pr[PwdGuess]\( \le Q({\lambda })/D_{\lambda }\), where \(Q({\lambda })\) denotes the number of Send oracle queries and \(D_{\lambda }\) denotes the dictionary size. Conditioned on \(\texttt {PwdGuess}\) not occurring, the adversary can succeed only in case 2. But since all session keys defined throughout the experiment are chosen uniformly and independently at random, the probability of success in this case is exactly 1/2. Then, we have

$$\begin{aligned} Pr[Success] &\le Pr[\texttt {PwdGuess}]+Pr[Success\mid \overline{\texttt {PwdGuess}}]\cdot (1-Pr[\texttt {PwdGuess}]) \\ &= \frac{1}{2} + \frac{1}{2}\cdot Pr[\texttt {PwdGuess}] \\ &\le \frac{1}{2} + \frac{Q({\lambda })}{2\cdot D_{\lambda }} \end{aligned}$$

and so Adv\(_5(\mathcal {A})\le \frac{Q({\lambda })}{D_{\lambda }}\). Taken together, Lemmas 1–5 imply that Adv\(_0(\mathcal {A})\le \frac{Q({\lambda })}{D_{\lambda }}+negl({\lambda })\) as desired. \(\Box \)

5 Two-Round Password-Based Group Key Exchange Protocol with No Setup

In this section, we show how to remove the trusted setup and common reference string (CRS) from the password-based group key exchange protocol of the previous section. Intuitively, letting each user publish an obfuscated program and run the setup for himself might fully resolve this problem. However, unlike the protocol in the previous section, the obfuscated programs generated by group users are susceptible to a “replace” attack: the adversary may replace a program with a malicious program that simply outputs the input password. The message broadcast by an honest user may then disclose information about the password. With this message, the adversary can mount an off-line dictionary attack and obtain the password, thus breaking the security of the protocol. We believe that such attacks are the principal reason that the existing constructions require a trusted setup to publish public parameters.

To overcome the above difficulties, we present a new methodology for constructing password-based group key exchange protocol with no setup. The basic idea of our construction follows the Burmester-Desmedt [3, 4] construction where the Diffie-Hellman key exchanges are replaced by indistinguishability obfuscation. As in the Burmester-Desmedt protocol, our protocol assumes a ring structure for the users so that we can refer to the predecessor and successor of a user. Each user in the group will run setup for himself and his neighbors (predecessor and successor), and generate two obfuscated programs.

  • The first obfuscated program \(P^{iO-dec}\) is used to obtain the other users’ random values s. This program takes as input a “ciphertext” c and the user password pw, and outputs the corresponding “plaintext” s.

  • The second obfuscated program \(P^{iO}\) is used to generate the shared key with the user’s neighbors. This program takes as input two random values \(s_i\) and \(s_{i+1}\), generated by the user \(U_i\) and its neighbor \(U_{i+1}\) respectively, and outputs the shared key \(K_i\). However, to ensure that the key \(K_i\) is shared only between \(U_i\) and its neighbor \(U_{i+1}\) (i.e., other users cannot obtain \(K_i\)), the obfuscated program \(P^{iO}\) for PRF requires knowledge of a seed r to operate. More precisely, each user generates a seed \(r_i\) and computes \(s_i = PRG(r_i)\), where PRG is a pseudorandom generator.

In our protocol, each group user \(U_i\) executes two correlated instances to obtain \(K_{i-1}\) and \(K_i\), one with his predecessor and one with his successor, so that each user can authenticate his neighbors, and then computes and broadcasts \(X_i = K_i/K_{i-1}\). After this round, each user is capable of computing the group session key SK; a sketch of this recombination step appears below. For the message \(X_i = K_i/K_{i-1}\) broadcast by \(U_i\) in the second round, \(K_i\) is generated by \(U_i\)’s own program \(P_i\), which cannot be replaced. Moreover, the output of the first program \(P^{iO-dec}\) serves only as input to the second program \(P^{iO}\). Thus, even if the adversary replaces \(P^{iO-dec}\) and \(P^{iO}\) with malicious programs, he cannot obtain any information about the password or the session key from the broadcast messages.
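The following minimal sketch illustrates the recombination step, with the pairwise keys modelled as elements of \(\mathbb {Z}_p^*\) for an illustrative prime p and users indexed from 0; these modelling choices are assumptions made only for this sketch. User \(U_i\), holding \(K_{i-1}\) and \(K_i\), recovers \(MSK = \prod _{j}K_j\) from the broadcast values \(X_j\).

```python
P = 2**61 - 1                     # illustrative prime modulus, not part of the protocol

def inv(x):
    return pow(x, P - 2, P)       # modular inverse in Z_p^*

def broadcast_value(K_prev, K_cur):
    # Second-round message of user i: X_i = K_i / K_{i-1}.
    return K_cur * inv(K_prev) % P

def recover_msk(i, K_prev, K_cur, X, n):
    # Chain K_{i+1} = X_{i+1} * K_i, K_{i+2} = X_{i+2} * K_{i+1}, ... around the ring,
    # then multiply all n pairwise keys together to obtain MSK.
    keys = {i: K_cur, (i - 1) % n: K_prev}
    k = K_cur
    for step in range(1, n - 1):
        j = (i + step) % n
        k = X[j] * k % P
        keys[j] = k
    msk = 1
    for j in range(n):
        msk = msk * keys[j] % P
    return msk
```

For example, with three users and keys \(K_0, K_1, K_2\), every user recovers the same \(MSK = K_0K_1K_2\) regardless of its position in the ring.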

Fig. 4. An honest execution of the no-setup password-based group key exchange protocol

A formal description appears in Fig. 4. In an honest execution of the protocol, it is easy to verify that all honest users in the protocol will terminate by accepting and computing the same \(MSK = \prod _{j=1}^{n}K_j\) and the same session key SK. Therefore, the correctness of the protocol follows directly. For the security, we have the following theorem.

Fig. 5. The program \(P_i^{dec}\)

Fig. 6. The program \(P_i\)
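For concreteness, the following is a minimal, un-obfuscated sketch of the two per-user programs. Since Figs. 5 and 6 only outline them, the exact input formats below are assumptions reconstructed from the description above and from the masking \(c_i^L = s_i^L + F_1(pw, U_i, U_{i-1})\) used later in the proof; in the protocol only the obfuscations \(P_i^{iO-dec}\) and \(P_i^{iO}\) are published.

```python
LAMBDA = 128                         # illustrative security parameter
MOD = 2 ** (2 * LAMBDA)              # the values s live in {0,1}^{2*lambda}

def make_P_dec(F1):
    # Sketch of P_i^{dec}: unmask a broadcast value c = s + F1(pw, U_from, U_to).
    def P_dec(c, pw, U_from, U_to):
        return (c - F1(pw, U_from, U_to)) % MOD
    return P_dec                     # published as iO(P_dec)

def make_P(F2, PRG):
    # Sketch of P_i: release the pairwise key only to a caller who knows a seed r
    # with PRG(r) equal to one of the two public values s1, s2.
    def P(s1, s2, U1, U2, r):
        if PRG(r) not in (s1, s2):
            return None              # bottom: the caller holds neither seed
        return F2(s1, s2, U1, U2)
    return P                         # published as iO(P)
```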

Theorem 2

If PRG is a secure pseudorandom generator, PRF a secure constrained PRF, and iO a secure indistinguishability obfuscator, then the protocol in Fig. 4 is a secure password-based group key exchange protocol with no setup.

Proof. Fix a PPT adversary \(\mathcal {A}\) attacking the password-based group key exchange protocol. We construct a sequence of experiments \(\mathbf {Hyb}_0, \ldots , \mathbf {Hyb}_{13}\), with the original experiment corresponding to \(\mathbf {Hyb}_0\). Let Adv\(_i(\mathcal {A})\) denote the advantage of \(\mathcal {A}\) in experiment \(\mathbf {Hyb}_i\). To prove the desired bound on Adv\((\mathcal {A})=\) Adv\(_0(\mathcal {A})\), we bound the effect of each change in the experiment on the advantage of \(\mathcal {A}\), and then show that Adv\(_{13}(\mathcal {A})\le \frac{Q(\lambda )}{D_{\lambda }}\) (where, recall, \(Q(\lambda )\) denotes the number of on-line attacks made by \(\mathcal {A}\), and \(D_{\lambda }\) denotes the dictionary size).

Experiment Hyb \(_1\) . Here we change the way Execute queries are answered. Specifically, for \(i=1, \ldots , n\), we choose random \(s_i^L, s_i^R \in \{0,1\}^{2\lambda }\) instead of generating them from PRG. Let \(\widehat{S}= \{s_i^L, s_i^R \mid i=1, \ldots , n\}\). The security of PRG yields Lemma 6.

Lemma 6

\(\mid \text {Adv}_0(\mathcal {A})- \text {Adv}_1(\mathcal {A})\mid \le negl(\lambda ).\)

Experiment Hyb \(_2\) . In this experiment, we constrain the PRF \(F_2\) so that it can only be evaluated at points \((s_1, s_2, U_1, U_2)\) where \(s_1 \notin \widehat{S}\) or \(s_2 \notin \widehat{S}\). Then we replace \(F_2\) with \(F_2^C\) in the program \(P_i\), arriving at the program \(P'_i\) given in Fig. 7. In response to an Execute query, we output \(P_i^{iO}\) = iO (\(P'_i\)).

Lemma 7

\(\mid \text {Adv}_1(\mathcal {A})- \text {Adv}_2(\mathcal {A})\mid \le negl(\lambda ).\)

Proof. Note that, with overwhelming probability, none of the values \(s \in \widehat{S}\) in experiment Hyb\(_1\) has a pre-image under PRG. Therefore, with overwhelming probability, there is no input to \(P_i^{iO}\) that causes the PRF \(F_2\) to be evaluated on points of the form \((s_1, s_2, U_1, U_2)\) where \(s_1 \in \widehat{S}\) and \(s_2\in \widehat{S}\). We conclude that the programs \(P_i\) and \(P'_i\) are functionally equivalent, and the indistinguishability obfuscation property then implies that the hybrids Hyb\(_1\) and Hyb\(_2\) are computationally indistinguishable. Thus, the security of iO yields the lemma.

Fig. 7. The program \(P'_i\)

Experiment Hyb \(_3\) . In this experiment, we change once again the simulation of the Execute queries so that the values \(K_i\) for \(i=1, \ldots , n\) are chosen as random strings of the appropriate length.

Lemma 8

\(\mid \text {Adv}_2(\mathcal {A})- \text {Adv}_3(\mathcal {A})\mid \le negl(\lambda ).\)

Proof. This follows from the security of PRF as a constrained PRF (as in Definition 4). We construct a PRF adversary \(\mathcal {B}\) that breaks the security of PRF as a constrained PRF as follows: adversary \(\mathcal {B}\) simulates the entire experiment for \(\mathcal {A}\). In response to an Execute(\(U_1^{\langle i_1\rangle }, \ldots , U_n^{\langle i_n\rangle }\)) query, \(\mathcal {B}\) computes \(c_i^L, c_i^R\) with the correct password pw exactly as in experiment Hyb\(_2\). \(\mathcal {B}\) also asks its PRF \(F_2\) oracle and thus always reveals the correct key. At the end of the experiment, \(\mathcal {B}\) makes a real-or-random challenge query for the constrained function PRF\(^C\) as defined above. One can easily see that, if \(\mathcal {B}\) is given a real PRF value or a random value, then its simulation is performed exactly as in experiment Hyb\(_2\) or experiment Hyb\(_3\), respectively. Thus, the distinguishing advantage of \(\mathcal {B}\) is exactly \(\mid \text {Adv}_2(\mathcal {A})- \text {Adv}_3(\mathcal {A})\mid \).

Experiment Hyb \(_4\) . In this experiment, we change once again the simulation of the Execute queries so that the value MSK is chosen as a random string of the appropriate length.

Lemma 9

\(\text {Adv}_3(\mathcal {A})= \text {Adv}_4(\mathcal {A}).\)

Proof. Note that in the simulation of Execute oracle in experiment Hyb\(_3\), the values \(K_i\) for \(i=1, \ldots , n\) are chosen at random. Then, from the transcript \(T = \{ X_1, \ldots , X_n \}\) that the adversary receives as output for an Execute query, the values \(K_i\) are constrained by the following n equations.

$$\begin{aligned} X_1 &= K_1 / K_n \\ &\ \vdots \\ X_n &= K_n / K_{n-1} \end{aligned}$$

Of these equations, only \(n-1\) are linearly independent. Furthermore, we have

$$MSK = \prod _{i=1}^{n} K_i.$$

Since the last equation is linearly independent of the previous ones, the MSK that each user computes in an Execute query is independent of the transcript T that the adversary sees. Thus, even a computationally unbounded adversary cannot tell experiment Hyb\(_3\) apart from Hyb\(_4\), i.e., \(\text {Adv}_3(\mathcal {A})= \text {Adv}_4(\mathcal {A})\).
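A small numerical sanity check of this argument, under the same illustrative modelling of the keys in \(\mathbb {Z}_p^*\) as in the earlier sketch: scaling every \(K_i\) by a common constant leaves the transcript \(X_1,\ldots ,X_n\) unchanged but changes MSK, so the transcript cannot determine MSK.

```python
import secrets

P = 2**61 - 1                                  # illustrative prime modulus
def inv(x): return pow(x, P - 2, P)

n = 5
K  = [secrets.randbelow(P - 1) + 1 for _ in range(n)]
c  = secrets.randbelow(P - 2) + 2              # common scaling factor
K2 = [c * k % P for k in K]

X  = [K[i]  * inv(K[i - 1])  % P for i in range(n)]   # X_i = K_i / K_{i-1}, ring indices
X2 = [K2[i] * inv(K2[i - 1]) % P for i in range(n)]
assert X == X2                                 # identical transcripts ...

msk, msk2 = 1, 1
for k, k2 in zip(K, K2):
    msk, msk2 = msk * k % P, msk2 * k2 % P
assert msk != msk2                             # ... but (except with tiny probability) different MSK
```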

Experiment Hyb \(_5\) . In this experiment, we change once more the simulation of the Execute queries so that the session key SK is chosen uniformly at random. The security of PRG guarantees that its output is computationally indistinguishable from uniform when it is given a uniformly random seed, which yields Lemma 10.

Lemma 10

\(\mid \text {Adv}_4(\mathcal {A})- \text {Adv}_5(\mathcal {A})\mid \le negl(\lambda ).\)

Experiment Hyb \(_6\) . In this experiment, we change the simulation of the Execute queries one last time. Specifically, in response to a query Execute(\(U_1^{\langle i_1\rangle }, \ldots , U_n^{\langle i_n\rangle }\)) we now compute \(c_i^L = s_i^L + F_1(pw_0, U_i, U_{i-1})\) and \(c_i^R = s_i^R + F_1(pw_0, U_{i+1}, U_{i})\) for \(i = 1, \ldots , n\), where \(pw_0\) represents some dummy password not in the dictionary \(\mathbf {Dict}\). We note that in the simulation of the Execute oracle in experiment Hyb\(_1\), the values \(s_i^L, s_i^R\) for \(i=1, \ldots , n\) are chosen at random, and the function \(F_1\) is pseudorandom, so Lemma 11 holds.

Lemma 11

\(\mid \text {Adv}_5(\mathcal {A})- \text {Adv}_6(\mathcal {A})\mid \le negl(\lambda ).\)

Experiment Hyb \(_7\) . In this experiment we begin to modify the Send oracle. Let Send\(_0(\Pi _{U_i}^j, \{U_1,\cdots ,U_n\})\) denote a “prompt” message that causes the user instance \(\Pi _{U_i}^j\) to initiate the protocol in a group \(\mathcal {G} = \{U_1,\cdots ,U_n\}\) that contains user \(U_i\); let Send\(_1(\Pi _{U_i}^j,\{(c_1^L, c_1^R, P_1^{iO-dec}, P_1^{iO}), \ldots , (c_n^L, c_n^R, P_n^{iO-dec}, P_n^{iO})\})\) denote sending the message \(\{(c_1^L, c_1^R, P_1^{iO-dec}, P_1^{iO}), \ldots , (c_n^L, c_n^R, P_n^{iO-dec}, P_n^{iO})\}\) to user instance \(\Pi _{U_i}^j\); let Send\(_2(\Pi _{U_i}^j,\{X_1, \ldots , X_n\})\) denote sending the message \(\{X_1, \ldots , X_n\}\) to user instance \(\Pi _{U_i}^j\).

In experiment Hyb\(_7\) we modify the way the Send \(_0\) query is handled. In response to a query Send \(_0(\Pi _{U_i}^j, \{U_1,\cdots ,U_n\})\), \(\Pi _{U_i}^j\) chooses random \(s_i^L, s_i^R \in \{0,1\}^{2\lambda }\) instead of generating them from PRG, and computes \(c_i^L = s_i^L + F_1(pw_0, U_i, U_{i-1})\) and \(c_i^R = s_i^R + F_1(pw_0, U_{i+1}, U_{i})\), where \(pw_0\) represents some dummy password not in the dictionary \(\mathbf {Dict}\).

Lemma 12

\(\mid \text {Adv}_6(\mathcal {A})- \text {Adv}_7(\mathcal {A})\mid \le negl(\lambda ).\)

Proof. The proof is similar to those of Lemmas 6 and 11, and follows easily from the security of PRG and PRF.

Experiment Hyb \(_8\) . In this experiment, we change again the simulation of the Send \(_0\) query. We constrain the PRF \(F_1\) so that it can only be evaluated at points \((pw, U_1, U_2)\) where pw is contained in the dictionary \(\mathbf {Dict}\). Then we replace \(F_1\) with \(F_1^C\) in the program \(P_i^{dec}\), arriving at the program \(\hat{P}_i^{dec}\) given in Fig. 8. In response to a Send \(_0\) query, we output \(P_i^{iO-dec}\) = iO (\(\hat{P}_i^{dec}\)).

Fig. 8. The program \(\hat{P}_i^{dec}\)

Lemma 13

\(\mid \text {Adv}_7(\mathcal {A})- \text {Adv}_8(\mathcal {A})\mid \le negl(\lambda ).\)

Proof. Since the group users share a password chosen uniformly at random from the dictionary \(\mathbf {Dict}\), we can conclude that the programs \(P_i^{dec}\) and \(\hat{P}_i^{dec}\) are functionally equivalent. Then based on the indistinguishability obfuscation property, it is easy to see that the hybrids Hyb\(_7\) and Hyb\(_8\) are computationally indistinguishable. Thus, security of iO yields the lemma.

Experiment Hyb \(_9\) . In this experiment, we change the simulation of the Send \(_1\) query. In response to a query Send \(_1\), if \(\{(c_1^L, c_1^R, P_1^{iO-dec}, P_1^{iO}), \ldots , (c_n^L, c_n^R, P_n^{iO-dec}, P_n^{iO})\}\) was output by a previous query of the form Send \(_0\), the values \(K_i\) and \(K_{i-1}\) are chosen uniformly at random. As the lemma below shows, the difference in advantage between Hyb\(_8\) and Hyb\(_9\) is negligible. The proof of Lemma 14 is omitted here since it follows easily from the security of the PRF F\(_1\) as a constrained PRF, noting that the outputs of Send \(_0\) always use the dummy password.

Lemma 14

\(\mid \text {Adv}_8(\mathcal {A})- \text {Adv}_9(\mathcal {A})\mid \le negl(\lambda ).\)

Experiment Hyb \(_{10}\) . In this experiment, we change again the simulation of the Send \(_1\) query. In response to a query Send \(_1\), if \(\{(c_1^L, c_1^R, P_1^{iO-dec}, P_1^{iO}), \ldots , (c_n^L, c_n^R, P_n^{iO-dec}, P_n^{iO})\}\) was generated by the adversary using the correct password pw, the adversary is declared successful and the experiment ends.

Lemma 15

\( \text {Adv}_{9}(\mathcal {A})\le \text {Adv}_{10}(\mathcal {A}).\)

Proof. The only situation in which Hyb\(_{10}\) proceeds differently from Hyb\(_{9}\) occurs when the adversary correctly guesses the password. All this does is introduce a new way for the adversary to succeed, so \(\text {Adv}_{9}(\mathcal {A})\le \text {Adv}_{10}(\mathcal {A}).\)

Experiment Hyb \(_{11}\) . In this experiment, we change once more the simulation of the Send \(_1\) query. In response to a query Send \(_1\), if \(\{(c_1^L, c_1^R, P_1^{iO-dec}, P_1^{iO}), \ldots , (c_n^L, c_n^R, P_n^{iO-dec}, P_n^{iO})\}\) was generated by the adversary using an incorrect password, the value \(K_i\) is chosen uniformly at random.

Lemma 16

\(\mid \text {Adv}_{10}(\mathcal {A})- \text {Adv}_{11}(\mathcal {A})\mid \le negl(\lambda ).\)

Proof. Note that, for user instance \(\Pi _{U_i}^j\), the program \(P_i^{iO}\) cannot be altered by the adversary. Moreover, the input \(s_i^R\) of \(P_i^{iO}\) is generated by the instance itself, so the adversary can neither obtain nor change it. The proof then follows easily from the security of PRF \(F_2\) as a constrained PRF. We construct a PRF adversary \(\mathcal {B}\) that breaks the security of PRF as a constrained PRF as follows: adversary \(\mathcal {B}\) simulates the entire experiment for \(\mathcal {A}\). In response to Send queries, \(\mathcal {B}\) responds using the correct password pw exactly as in experiment Hyb\(_{10}\). \(\mathcal {B}\) also asks its PRF \(F_2\) oracle and thus always reveals the correct key. At the end of the experiment, \(\mathcal {B}\) makes a real-or-random challenge query for the constrained function PRF\(^C\) as defined above. One can easily see that, if \(\mathcal {B}\) is given a real PRF value or a random value, then its simulation is performed exactly as in experiment Hyb\(_{10}\) or experiment Hyb\(_{11}\), respectively. Thus, the distinguishing advantage of \(\mathcal {B}\) is exactly \(\mid \text {Adv}_{10}(\mathcal {A})- \text {Adv}_{11}(\mathcal {A})\mid \).

Experiment Hyb \(_{12}\) . In this experiment, we change once more the simulation of the Send \(_2\) query. In response to a query Send \(_2(\Pi _{U_i}^j,\{X_1, \ldots , X_n\})\), the value MSK is chosen uniformly at random.

Lemma 17

\( \text {Adv}_{11}(\mathcal {A})= \text {Adv}_{12}(\mathcal {A}).\)

Proof. The proof of Lemma 17 uses arguments similar to those in the proof of Lemma 9 and is omitted.

Experiment Hyb \(_{13}\) . In this experiment, we change again the simulation of the Send \(_2\) query so that the session key SK is chosen uniformly at random. The security of PRG guarantees that its output is computationally indistinguishable from uniform when it is given a uniformly random seed, which yields Lemma 18.

Lemma 18

\(\mid \text {Adv}_{12}(\mathcal {A})- \text {Adv}_{13}(\mathcal {A})\mid \le negl(\lambda ).\)

Bounding the Advantage in Hyb \(_{13}\) . We now bound the adversary’s advantage in experiment Hyb\(_{13}\). First, the session keys of all accepting instances are chosen at random. Second, all oracle instances are simulated using dummy passwords, so the adversary’s view of the protocol is independent of the passwords chosen for each group of users. Finally, the probability that the adversary guesses the correct password is at most \(\frac{Q(\lambda )}{D_\lambda }\). Similarly to the proof of Theorem 1, we have Adv\(_{13}(\mathcal {A})\le \frac{Q(\lambda )}{D_\lambda }\). Taken together, Lemmas 6–18 imply that Adv\(_0(\mathcal {A})\le \frac{Q(\lambda )}{D_\lambda }+negl(\lambda )\) as desired. \(\Box \)

6 Conclusion

In this paper, we proposed two round-optimal constructions for password-based group key exchange. In particular, we obtained a one-round protocol in the common reference string model and a two-round protocol in the “plain” model, where there is no additional setup. Both protocols are provably secure in the standard model. It remains an interesting open problem to further reduce the computational costs of the group users while maintaining the optimal number of communication rounds.