Abstract
We propose the first tight security proof for the ordinary two-message signed Diffie–Hellman key exchange protocol in the random oracle model. Our proof is based on the strong computational Diffie–Hellman assumption and the multi-user security of a digital signature scheme. With our security proof, the signed DH protocol can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate any security loss. We abstract our approach with a new notion called verifiable key exchange. In contrast to a known tight three-message variant of the signed Diffie–Hellman protocol (Gjøsteen and Jager, in: Shacham, Boldyreva (eds) CRYPTO 2018, Part II. LNCS, Springer, Heidelberg, 2018), we do not require any modification to the original protocol, and our tightness result is proven in the “Single-Bit-Guess” model, which we know can be tightly composed with symmetric cryptographic primitives to establish a secure channel. Finally, we extend our approach to the group setting and construct the first tightly secure group authenticated key exchange protocol.
1 Introduction
Authenticated key exchange (AKE) protocols are protocols where two users can securely share a session key in the presence of active adversaries. Beyond passively observing, adversaries against an AKE protocol can modify messages, adaptively corrupt users’ long-term keys, or reveal established session keys between users. Hence, it is very challenging to construct a secure AKE protocol.
The signed Diffie–Hellman (DH) key exchange protocol is a classical AKE protocol. It is a two-message (namely, two message-moves or one-round) protocol and can be viewed as a generic method to transform a passively secure Diffie–Hellman key exchange protocol [19] into a secure AKE protocol using digital signatures. Figure 1 visualizes the protocol. The origin of signed DH is unclear to us, but its idea has been used in and serves as a solid foundation for many well-known AKE protocols, including the Station-to-Station protocol [20], the IKE protocol [26], the one in TLS 1.3 [42] and many others [7, 24, 29, 30, 33].
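As a concrete reference point, the two message-moves of Fig. 1 can be sketched in code. The following toy Python sketch is entirely illustrative: it uses a 64-bit prime (far too small for real use), SHA-256 in place of \(\mathsf {H}\), and a symmetric placeholder in place of a real signature scheme, so it shows only the message flow and key derivation, not the paper's exact specification.

```python
# Toy sketch of the two-message signed DH flow: initiator sends a signed
# g^x, responder answers with a signed (g^x, g^y), both derive H(ctxt, g^xy).
import hashlib
import secrets

p = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime; real deployments use large groups
g = 5

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return hashlib.sha256(data).hexdigest()

def toy_sign(sk, msg):
    # placeholder standing in for a real signature scheme (e.g. Schnorr);
    # verification below reuses sk only to keep the sketch short
    return H("sig", sk, msg)

def toy_verify(sk, msg, sig):
    return sig == toy_sign(sk, msg)

# long-term (toy) signing keys of initiator and responder
sk_i, sk_r = secrets.randbelow(p), secrets.randbelow(p)

# Initiator: first message
x = secrets.randbelow(p - 2) + 1
X = pow(g, x, p)
sigma_i = toy_sign(sk_i, X)

# Responder: verifies, answers, derives its session key
assert toy_verify(sk_i, X, sigma_i)
y = secrets.randbelow(p - 2) + 1
Y = pow(g, y, p)
sigma_r = toy_sign(sk_r, (X, Y))
K_r = H("ctxt", X, Y, pow(X, y, p))

# Initiator: verifies and derives the same session key
assert toy_verify(sk_r, (X, Y), sigma_r)
K_i = H("ctxt", X, Y, pow(Y, x, p))
assert K_i == K_r
```

Both parties compute \(g^{xy}\) from their own exponent and the peer's group element, so the two derived keys agree.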
Tight Security. Security of a cryptographic scheme is usually proven by constructing a reduction. Asymptotically, a reduction reduces any efficient adversary \(\mathcal {A}\) against the scheme into an adversary \(\mathcal {R}\) against the underlying computational problem. Concretely, a reduction provides a security bound for the scheme, \( \varepsilon _\mathcal {A}\le \ell \cdot \varepsilon _\mathcal {R}\), where \(\varepsilon _\mathcal {A}\) is the success probability of \(\mathcal {A}\) and \(\varepsilon _\mathcal {R}\) is that of \(\mathcal {R}\). We say a reduction is tight if \(\ell \) is a small constant and the running time of \(\mathcal {A}\) is approximately the same as that of \(\mathcal {R}\). For the same scheme, it is more desirable to have a tight security proof than a non-tight one, since a tight security proof enables implementations without the need to compensate a security loss with increased parameters.
Multi-Challenge Security for AKE. An adversary against an AKE protocol has full control of the communication channel and, additionally, it can adaptively corrupt users’ long-term keys and reveal session keys. The goal of an adversary is to distinguish between a (non-revealed) session key and a random bit string of the same length, which is captured by the \(\textsc {Test}\) query. We follow the Bellare–Rogaway (BR) model [5] to capture these capabilities, but formalize it in the game-based style of [28]. Instead of weak perfect forward secrecy, our model captures (full) perfect forward secrecy.
Unlike the BR model, our model captures multi-challenge security, where an adversary can make \(T\) many \(\textsc {Test}\) queries which are all answered with a single random bit. This is a standard and well-established multi-challenge notion, and [28] called it “Single-Bit-Guess” (SBG) security. Another multi-challenge notion is “Multi-Bit-Guess” (MBG) security, where each \(\textsc {Test}\) query is answered with a different random bit. Although several tightly secure AKE protocols [2, 24, 36, 46] are proven in the MBG model, we stress that the SBG model is well-established and allows tight composition of the AKE with symmetric cryptographic primitives, which is not the case for the non-standard MBG model. Thus, the SBG multi-challenge model is more desirable than the MBG model. More details have been provided by Jager et al. [28, Introduction] and Cohn-Gordon et al. [14, Section 3].
The Non-Tight Security of Signed DH. Many existing security proofs of signed-DH-like protocols [7, 29, 30] lose a quadratic factor, \(O(\mu ^2 S^2)\), where \(\mu \) and \(S\) are the maximum numbers of users and sessions, respectively. In the SBG model with \(T\) many \(\textsc {Test}\) queries, these proofs lose an additional multiplicative factor \(T\).
At CRYPTO 2018, Gjøsteen and Jager [24] proposed a tightly secure variant of signed DH by introducing an additional message move into the ordinary protocol. They showed that if the signature scheme is tightly secure in the multi-user setting, then their protocol is tightly secure. They required the underlying signature scheme to be strongly unforgeable against adaptive Corruption and Chosen-Message Attacks (\(\mathsf {StCorrCMA}\)), a multi-user notion in which an adversary can adaptively corrupt some of the honest users to learn their secret keys. Moreover, they constructed a tightly multi-user secure signature scheme based on the decisional Diffie–Hellman (DDH) assumption in the random oracle model [4]. Combining these two results, they obtained a practical, fully tight three-message AKE. We note that their tight security is proven in the less desirable MBG model, and, to the best of our knowledge, MBG security only non-tightly implies SBG security [28]. Due to the “commitment problem”, the additional message is crucial for the tightness of their protocol. In particular, the “commitment problem” seems to be the reason why most security proofs for AKE protocols are non-tight.
1.1 Our Contribution
In this paper, we propose a new tight security proof of the ordinary two-message signed Diffie–Hellman key exchange protocol in the random oracle model. More precisely, we prove the security of the signed DH protocol tightly based on the multi-user security of the underlying signature scheme in the random oracle model. Our proof improves upon the work of Gjøsteen and Jager [24] in the sense that we do not require any modification to the signed DH protocol, and our tight multi-challenge security is in the SBG model. This implies that our analysis supports the optimal implementation of the ordinary signed DH protocol with theoretically sound security in a meaningful model.
Our technique is a new approach to resolve the “commitment problem”. At the core of it is a new notion called verifiable key exchange protocols. We first briefly recall the “commitment problem” and give an overview of our approach.
Technical Difficulty: The “Commitment Problem”. As explained in [24], this problem is the reason why almost all proofs of classical AKE protocols are non-tight. In a security proof of an AKE protocol, the reduction needs to embed a hard problem instance into the protocol messages of \(\textsc {Test}\) sessions so that, in the end, the reduction can extract a solution to the hard problem from the adversary \(\mathcal {A}\). At the time the instance is embedded, \(\mathcal {A}\) has not yet committed itself to the sessions it will query to \(\textsc {Test}\), and, for instance, \(\mathcal {A}\) can ask the reduction for \(\textsc {Reveal}\) queries on sessions with an embedded problem instance to get the corresponding session keys. At this point, the reduction cannot respond to these \(\textsc {Reveal}\) queries. A natural way to resolve this is to guess which sessions \(\mathcal {A}\) will query to \(\textsc {Test}\), and to embed a hard problem instance in those sessions only. However, this introduces an extremely large security loss. To resolve this “commitment problem”, a tight reduction should be able to answer both \(\textsc {Test}\) and \(\textsc {Reveal}\) queries for every session without any guessing. Gjøsteen and Jager achieved this for signed DH by adding an additional message.
In this paper, we show that this additional message is not necessary for tight security.
Our Approach: Verifiable Key Exchange. In this work, for simplicity, we use the signed Diffie–Hellman protocol based on the plain Diffie–Hellman protocol [19] (as described in Fig. 1) to explain our approach. In the technical part, we abstract and present our idea with a new notion called verifiable key exchange protocols.
Let \(\mathbb {G}:=\langle g \rangle \) be a cyclic group of prime order p where the computational Diffie–Hellman (CDH) problem is hard. Let \((g^\alpha ,g^\beta )\) (where \(\alpha ,\beta \leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p\)) be an instance of the CDH problem. By its random self-reducibility, we can efficiently randomize it into multiple independently distributed instances \((g^{\alpha _i},g^{\beta _i})\), and given some \(g^{\alpha _i \beta _i}\), we can extract the solution \(g^{\alpha \beta }\).
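The re-randomization and extraction steps can be made concrete. The following Python sketch uses our own toy parameters (a safe prime \(p=227\) whose order-113 subgroup is generated by \(g=4\)) and additive re-randomization \(\alpha _i = \alpha + a_i\), \(\beta _i = \beta + b_i\); the extraction identity \(g^{\alpha \beta } = g^{\alpha _i\beta _i} / (A^{b_i} B^{a_i} g^{a_i b_i})\) is the standard one.

```python
# Random self-reducibility of CDH in a toy group: re-randomize (g^a, g^b)
# into a fresh-looking instance, then recover g^(ab) from its solution.
import secrets

p, q, g = 227, 113, 4  # toy safe prime p = 2q + 1; g generates order-q subgroup

def inv(x):
    # modular inverse via Fermat's little theorem
    return pow(x, p - 2, p)

alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
A, B = pow(g, alpha, p), pow(g, beta, p)       # the original CDH instance
cdh_solution = pow(g, alpha * beta % q, p)     # the value we want to extract

# re-randomize: alpha_i = alpha + a_i, beta_i = beta + b_i  (mod q)
a_i, b_i = secrets.randbelow(q), secrets.randbelow(q)
A_i = A * pow(g, a_i, p) % p
B_i = B * pow(g, b_i, p) % p

# suppose the adversary solves the randomized instance (A_i, B_i):
Z_i = pow(g, (alpha + a_i) * (beta + b_i) % q, p)

# extract g^(alpha*beta) = Z_i / (A^{b_i} * B^{a_i} * g^{a_i * b_i})
denom = pow(A, b_i, p) * pow(B, a_i, p) % p * pow(g, a_i * b_i, p) % p
extracted = Z_i * inv(denom) % p
assert extracted == cdh_solution
```

Since \((\alpha + a_i)(\beta + b_i) = \alpha \beta + \alpha b_i + \beta a_i + a_i b_i\) in the exponent, dividing out \(A^{b_i} B^{a_i} g^{a_i b_i}\) leaves exactly \(g^{\alpha \beta }\).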
For preparation, we assume that a \(\textsc {Test}\) session does not contain any valid forgeries. This can be tightly justified by the \(\mathsf {StCorrCMA}\) security of the underlying signature scheme, which can be instantiated with the recent scheme in [17].
After that, an adversary can only observe the protocol transcripts or forward the honestly generated transcripts in arbitrary orders. This is the most important step for bounding such an adversary tightly without running into the “commitment problem”. Our reduction embeds the randomized instance \((g^{\alpha _i},g^{\beta _i})\) into each session. Now it seems we can answer neither \(\textsc {Test}\) nor \(\textsc {Reveal}\) queries: the answer has the form \(K:=\mathsf {H}(\text {ctxt},g^{xy})\), but the term \(g^{xy}\) cannot be computed by the reduction, since \(g^x\) and \(g^y\) are from the CDH challenge and the reduction knows neither x nor y. However, our reduction can answer such queries by carefully simulating the random oracle \(\mathsf {H}\) and keeping the adversary’s view consistent. More precisely, we answer \(\textsc {Test}\) and \(\textsc {Reveal}\) queries with a random K, and we carefully program the random oracle \(\mathsf {H}\) so that adversary \(\mathcal {A}\) cannot detect this change. To achieve this, when we receive a random oracle query \(\mathsf {H}(\text {ctxt},Z)\), we answer it consistently if the secret element Z corresponds to the context \(\text {ctxt}\) and \(\text {ctxt}\) belongs to one of the \(\textsc {Test}\) or \(\textsc {Reveal}\) queries. This check can be done efficiently using the strong DH oracle [1]. Our approach is motivated by the two-message AKE in [14].
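The programming step above can be sketched as follows. In this toy Python sketch, `ddh_check` stands in for the strong DH oracle the reduction is given: here it "cheats" by using a known exponent, which a real reduction of course does not have. All names and parameters are illustrative.

```python
# Random-oracle programming: Test/Reveal answers are fresh random keys, and
# H-queries are patched, via a DDH-check, so the adversary's view stays
# consistent even though the reduction never computes g^xy itself.
import secrets

p, q, g = 227, 113, 4       # toy group as before
H_table, session_keys = {}, {}

x, y = secrets.randbelow(q), secrets.randbelow(q)
X, Y = pow(g, x, p), pow(g, y, p)
ctxt = (X, Y)               # context of one embedded session

def ddh_check(U, Z):
    # stand-in for the strong DH oracle: does Z equal CDH(U, Y) = U^y?
    # (uses the known exponent y only because this is a toy sketch)
    return pow(U, y, p) == Z

# Reveal/Test on this session: hand out a random key without knowing g^xy
session_keys[ctxt] = secrets.token_hex(16)

def H(c, Z):
    # if Z is the DH secret matching a revealed/tested context, return the
    # key we already handed out for that session; otherwise lazy-sample
    if c in session_keys and ddh_check(c[0], Z):
        return session_keys[c]
    if (c, Z) not in H_table:
        H_table[(c, Z)] = secrets.token_hex(16)
    return H_table[(c, Z)]

# the adversary later computes g^xy itself and queries H: views agree
assert H(ctxt, pow(X, y, p)) == session_keys[ctxt]
```

The point is that consistency is restored at query time: the reduction never needs \(g^{xy}\) up front, only the ability to recognize it.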
The approach described above can be abstracted by a notion called verifiable key exchange (VKE) protocols. Roughly speaking, a VKE protocol is, first of all, passively secure, namely a passive observer cannot compute the secret session key. Additionally, a VKE provides an oracle that allows an adversary to check whether a session key belongs to some honestly generated session, and to forward messages in a different order to create non-matching sessions. This VKE notion gives rise to a tight security proof of the signed DH protocol. We believe it is of independent interest.
On the Strong CDH Assumption. Our techniques require the Strong CDH assumption [1] for the security of our VKE protocol. We refer to [15, Appendix C] for a detailed analysis of this assumption in the generic group model (GGM). Without using the GGM, we can use the twinning technique [13] to remove this strong assumption and base the VKE security tightly on the (standard) CDH assumption. This approach will double the number of group elements. Alternatively, we can use the group of signed Quadratic Residues (QR) [27] to instantiate our VKE protocol, and then the VKE security is tightly based on the factoring assumption (by [27, Theorem 2]).
Real-World Impacts. As mentioned earlier, the signed DH protocol serves as a solid foundation for many real-world protocols, including the one in TLS 1.3 [42], IKE [26], and the Station-to-Station [20] protocols. We believe our approach can naturally be extended to tighten the security proofs of these protocols. In particular, our notion of VKE protocols can abstract some crucial steps in a recent tight proof of TLS 1.3 [15].
Another practical benefit of our tight security proof is that, even if we implement the underlying signature with a standardized, non-tight scheme (such as Ed25519 [8] or RSA-PKCS \(\#\)1 v1.5 [40]), our implementation does not need to lose the additional factor that is linear in the number of sessions. In today’s Internet, there can easily be \(2^{30}\) sessions per year. For instance, Facebook has about \(2^{30}\) monthly active users. Assuming that each user only logs in once a month, this already leads to \(2^{30}\) sessions.
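To see what a quadratic loss costs at this scale, a back-of-the-envelope computation helps. The numbers below are our own illustrative choices, not figures from the paper.

```python
# Bits of security lost under a quadratic loss O(mu^2 * S^2), for
# illustrative deployment sizes: 2^30 users and 2^30 sessions.
import math

mu = 2**30            # number of users
S = 2**30             # number of sessions
loss = mu**2 * S**2   # quadratic security loss factor

bits_lost = math.log2(loss)
assert bits_lost == 120
# a scheme instantiated for 128-bit security would retain only 8 bits
# under this loss; a tight proof retains (essentially) all 128 bits
```

This is exactly why a tight proof lets the protocol be deployed with optimal parameters, independent of \(\mu \) and \(S\).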
1.2 Protocol Comparison
We compare the instantiation of signed DH according to our tight proof with the existing explicitly authenticated key exchange protocols in Fig. 2. For complete tightness, all these protocols require tight multiuser security of their underlying signature scheme. We implement the signature scheme in all protocols with the recent efficient scheme from Diemert et al. [17] whose signatures contain 3 \(\mathbb {Z}_p\) elements, and whose security is based on the DDH assumption. The implementation of TLS is according to the recent tight proofs in [15, 18], and we instantiate the underlying signature scheme with the same DDHbased scheme from [17].
We note that the non-tight protocol from Cohn-Gordon et al. [14], whose security loss is linear in the number of users, has better communication efficiency (2, 0, 0). However, its security is weaker than that of all protocols listed in Fig. 2, since their protocol is only implicitly authenticated and achieves only weak perfect forward secrecy.
We detail the comparison with \(\textsf {JKRS}\) [28]. Using the DDH-based signature scheme in [17], the communication complexity of our signed DH protocol is (2, 0, 6), while that of \(\textsf {JKRS}\) is (5, 1, 3). Hence, the efficiency of our protocol is comparable to that of \(\textsf {JKRS}\).
Our main weakness is that our security model is weaker than that of \(\textsf {JKRS}\). Namely, ours does not allow adversaries to corrupt any internal secret state. We highlight that our proof does not inherently rely on any decisional assumption. In particular, if there is a tightly multiuser secure signature scheme based on only search assumptions, our proof directly gives a tightly secure AKE based on search assumptions only, which is not the case for [28].
1.3 An Extension and Open Problems
We extend our approach to group AKE (GAKE) protocols, where a group of users agrees on a session key, and construct the first tightly secure GAKE protocol. Research on GAKE has a long history, and several GAKE protocols have been constructed in the literature [9,10,11, 25, 31]. However, none of the existing GAKE protocols enjoys a tight security proof. We argue that tight security is even more desirable for GAKE than for AKE, since many applications of GAKE protocols (such as online audio–video conferencing systems and instant messaging [43]) operate in a truly large-scale setting.
Similar to the two-party setting, we propose the notion of verifiable group key exchange (VGKE) protocols and transform a VGKE protocol into a GAKE protocol using a signature scheme. Our transformation is tightness-preserving. As an instantiation of our approach, we prove that, under the strong CDH assumption, the classical Burmester–Desmedt protocol [12] is a tightly secure VGKE protocol. Combined with a tightly \(\mathsf {StCorrCMA}\)-secure signature scheme (for instance, [17]), this yields the first tightly secure GAKE protocol. Alternatively, our transformation can be viewed as a tight improvement on the (non-tight) generic compiler of Katz and Yung [31], where we additionally require the underlying non-authenticated group key exchange protocol to be verifiable.
Open Problems. We do not know of any tightly multi-user secure signature scheme with corruptions based on a search assumption, and the schemes in [39] based on search assumptions do not allow any corruption. They are therefore insufficient for our purpose, and we leave constructing a tightly secure AKE based purely on search assumptions as an open problem.
1.4 History of This Paper
This is the full version of a paper published at CT-RSA 2021 [38]. The main change here is to extend our approach to the group key exchange setting and propose the first tightly secure GAKE protocol (cf. Sect. 6). Due to this main extension, we (slightly) change the title to the current one. Moreover, we give a detailed proof for the multi-user security of Schnorr’s signature scheme in the generic group model (cf. Appendix A).
2 Preliminaries
For \( n \in \mathbb {N}\), let \([n] = \{1, \ldots , n\}\). For a finite set \(\mathcal {S}\), we denote the sampling of a uniform random element x by \(x \leftarrow _{\scriptscriptstyle \$}\mathcal {S}\). By \(\llbracket B\rrbracket \), we denote the bit that is 1 if the evaluation of the Boolean statement B is true and 0 otherwise.
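For readers who prefer executable notation, the conventions above translate directly into small helpers. This is an illustrative Python rendering of the notation, not part of the formal preliminaries.

```python
# The notation of the preliminaries as Python helpers: [n], uniform
# sampling x <-$ S from a finite set, and the Iverson bracket [[B]].
import secrets

def interval(n):
    # [n] = {1, ..., n}
    return set(range(1, n + 1))

def sample(finite_set):
    # x <-$ S: a uniformly random element of a finite set
    return secrets.choice(sorted(finite_set))

def iverson(B):
    # [[B]] = 1 if the Boolean statement B is true, else 0
    return 1 if B else 0

assert interval(3) == {1, 2, 3}
assert sample({4, 5, 6}) in {4, 5, 6}
assert (iverson(2 > 1), iverson(1 > 2)) == (1, 0)
```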
Algorithms. For an algorithm \(\mathcal {A}\) which takes x as input, we denote its computation by \(y \leftarrow \mathcal {A}(x)\) if \(\mathcal {A}\) is deterministic, and by \(y \leftarrow _{\scriptscriptstyle \$}\mathcal {A}(x)\) if \(\mathcal {A}\) is probabilistic. We assume all algorithms (including adversaries) in this paper to be probabilistic unless stated otherwise. We denote an algorithm \(\mathcal {A}\) with access to an oracle \(\textsc {O}\) by \(\mathcal {A}^\textsc {O}\). In terms of running time, if a reduction’s running time \(t'\) is dominated by that of an adversary t (more precisely, \(t'=t+s\) where \(s \ll t\)), we write \(t'\approx t\).
Games. We use code-based games [6] to present our definitions and proofs. We implicitly assume all Boolean flags to be initialized to 0 (false), numerical variables to 0, sets to \(\emptyset \) and strings to \(\bot \). We make the convention that a procedure terminates once it has returned an output. \(G^\mathcal {A}\Rightarrow b\) denotes the final (Boolean) output b of game G running adversary \(\mathcal {A}\), and if \(b=1\) we say \(\mathcal {A}\) wins G. The randomness in \(\mathrm{Pr}[G^\mathcal {A}\Rightarrow 1]\) is over all the random coins in game G. Within a procedure, “abort” means that we terminate the run of an adversary \(\mathcal {A}\).
Digital signatures. We recall the syntax and security of a digital signature scheme. Let \(\mathsf {par}\) be some system parameters shared among all participants.
Definition 1
(Digital Signature) A digital signature scheme \({\mathsf {SIG}}:= ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Ver}})\) is defined as follows.

The key generation algorithm \( {\mathsf {Gen}}(\mathsf {par})\) returns a public key and a secret key \((\textsf {pk},\textsf {sk})\). We assume that \(\textsf {pk}\) implicitly defines a message space \(\mathcal {M}\) and a signature space \(\Sigma \).

The signing algorithm \({\mathsf {Sign}}(\textsf {sk}, m \in \mathcal {M})\) returns a signature \(\sigma \in \Sigma \) on \( m \).

The deterministic verification algorithm \({\mathsf {Ver}}(\textsf {pk}, m ,\sigma )\) returns 1 (accept) or 0 (reject).
\({\mathsf {SIG}}\) is perfectly correct, if for all \((\textsf {pk},\textsf {sk})\in {\mathsf {Gen}}(\mathsf {par})\) and all messages \( m \in \mathcal {M}\), \({\mathsf {Ver}}(\textsf {pk}, m ,{\mathsf {Sign}}(\textsf {sk}, m ))=1\).
In addition, we say that \({\mathsf {SIG}}\) has \(\alpha \) bits of (public) key min-entropy if an honestly generated public key \(\textsf {pk}\) is chosen from a distribution with at least \(\alpha \) bits of min-entropy. Formally, for all bit strings \(\textsf {pk}'\) we have \( \mathrm{Pr}[\textsf {pk}= \textsf {pk}': (\textsf {pk},\textsf {sk}) \leftarrow _{\scriptscriptstyle \$}{\mathsf {Gen}}(\mathsf {par})] \le 2^{-\alpha }.\)
We include the definition of entropy here because our proofs require an estimate on the probability of a collision in the public keys.
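The collision estimate in question is a plain union bound: any fixed pair of honestly generated public keys collides with probability at most \(2^{-\alpha }\), so a collision among \(\mu \) keys occurs with probability at most \(\binom{\mu }{2} \cdot 2^{-\alpha }\). The numbers below are illustrative.

```python
# Union-bound collision estimate for public keys with alpha bits of
# min-entropy: Pr[collision] <= mu*(mu-1)/2 * 2^-alpha.
mu = 2**30       # number of users (illustrative)
alpha = 256      # bits of key min-entropy (illustrative)

pairs = mu * (mu - 1) // 2
bound = pairs * 2.0 ** (-alpha)

# with these parameters the bound is below 2^-196: negligible
assert bound < 2.0 ** (-196)
```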
Definition 2
(\(\mathsf {StCorrCMA}\) Security [17, 24]) A digital signature scheme \({\mathsf {SIG}}\) is \((t,\varepsilon ,\mu , {Q}_{s}, {Q}_{\textsc {Cor}})\text {-}\mathsf {StCorrCMA}\) secure (Strongly unforgeable against Corruption and Chosen-Message Attacks), if for all adversaries \(\mathcal {A}\) running in time at most t, interacting with \(\mu \) users, making at most \({Q}_{s}\) queries to the signing oracle \(\textsc {Sign}\), and at most \({Q}_{\textsc {Cor}}\) (\({Q}_{\textsc {Cor}}<\mu \)) queries to the corruption oracle \(\textsc {Corr}\) as in Fig. 3, we have \(\mathrm{Pr}[\mathsf {StCorrCMA}_{{\mathsf {SIG}}}^{\mathcal {A}} \Rightarrow 1] \le \varepsilon .\)
Security in the random oracle model. A common approach to analyzing the security of signature schemes that involve a hash function is the random oracle model [4], in which hash queries are answered by an oracle \(\mathsf {H}\) defined as follows: on input x, it first checks whether \(\mathsf {H}(x)\) has previously been defined. If so, it returns \(\mathsf {H}(x)\). Otherwise, it sets \(\mathsf {H}(x)\) to a uniformly random value in the range of \(\mathsf {H}\) and then returns \(\mathsf {H}(x)\). We parameterize the maximum number of hash queries in our security notions. For instance, we define \((t,\varepsilon ,\mu , {Q}_{s}, {Q}_{\textsc {Cor}},Q_{\mathsf {H}})\text {-}\mathsf {StCorrCMA}\) security as security against any adversary that makes at most \(Q_{\mathsf {H}}\) queries to \(\mathsf {H}\) in the \(\mathsf {StCorrCMA}\) game. Furthermore, we make the standard convention that any random oracle query that is asked as a result of a query to the signing oracle in the \(\mathsf {StCorrCMA}\) game is also counted as a query to the random oracle. This implies that \({Q}_{s}\le Q_{\mathsf {H}}\).
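The lazy-sampling behavior described above is mechanical, and a short Python sketch makes it concrete (range size and bookkeeping are our own illustrative choices).

```python
# Lazy sampling of a random oracle H: each fresh input gets a uniform
# value from the range; repeated queries are answered consistently.
import secrets

class RandomOracle:
    def __init__(self, range_bits=128):
        self.table = {}           # previously defined values of H
        self.range_bits = range_bits
        self.queries = 0          # tracked so Q_H can be bounded

    def H(self, x):
        self.queries += 1
        if x not in self.table:
            # first query on x: sample H(x) uniformly from the range
            self.table[x] = secrets.randbits(self.range_bits)
        return self.table[x]      # repeated queries: same answer

ro = RandomOracle()
a, b = ro.H("m1"), ro.H("m1")
assert a == b                     # consistency
assert ro.queries == 2            # both queries count toward Q_H
assert 0 <= a < 2**128
```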
Signature schemes. The tight security of our authenticated key exchange (AKE) protocols is established based on the \(\mathsf {StCorrCMA}\) security of the underlying signature schemes. To obtain a completely tight AKE, we use the recent signature scheme from [17] to implement our protocols.
By adapting the non-tight proof in [23], the standard unforgeability against chosen-message attacks (\(\mathsf {UF}\text {-}\mathsf {CMA}\)) notion for signature schemes implies the \(\mathsf {StCorrCMA}\) security of the same scheme non-tightly (with security loss \(\mu \)). Thus, many widely used signature schemes (such as the Schnorr [44], Ed25519 [8] and RSA-PKCS \(\#\)1 v1.5 [40] signature schemes) are non-tightly \(\mathsf {StCorrCMA}\) secure. We do not know any better reductions for these schemes. We leave proving the \(\mathsf {StCorrCMA}\) security of these schemes without losing a linear factor of \(\mu \) as a future direction. However, our tight proof for the signed DH protocol strongly indicates that the aforementioned non-tight reduction is optimal for these practical schemes: if we could prove these schemes tightly secure, we could combine them with our tight proof to obtain a tightly secure AKE with unique and verifiable private keys, which may contradict the impossibility result from [14].
For the Schnorr signature, we analyze its \(\mathsf {StCorrCMA}\) security in the generic group model (GGM) [37, 45]. We recall the Schnorr signature scheme below and show the GGM bound of its \(\mathsf {StCorrCMA}\) security in Theorem 1.
Let \(\mathsf {par}= (p,g,\mathbb {G})\), where \(\mathbb {G}= \langle g\rangle \) is a cyclic group of prime order p with a hard discrete logarithm problem. Let \(\mathsf {H}:\{0,1\}^* \rightarrow \mathbb {Z}_p\) be a hash function. Schnorr’s signature scheme, \(\mathsf {Schnorr}:=({\mathsf {Gen}},{\mathsf {Sign}},{\mathsf {Ver}})\), is defined as follows:
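As a concrete companion to the description of \(\mathsf {Schnorr}\), the following Python sketch implements one common presentation of the scheme (challenge–response signatures \((h,s)\) with \(s = r + h\cdot x\)); the paper's exact variant may differ in details, and the toy group below is far too small for any real use.

```python
# One common presentation of Schnorr's signature scheme, in a toy group:
# sk = x, pk = g^x; Sign: R = g^r, h = H(R, m), s = r + h*x mod q;
# Ver: recompute R = g^s * pk^{-h} and check H(R, m) == h.
import hashlib
import secrets

P, q, g = 227, 113, 4   # toy safe prime P = 2q + 1; g has order q

def H(R, m):
    # hash into Z_q (toy range; real schemes hash into a large Z_p)
    digest = hashlib.sha256(f"{R}|{m}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def gen():
    x = secrets.randbelow(q)
    return pow(g, x, P), x                       # (pk, sk)

def sign(sk, m):
    r = secrets.randbelow(q)
    R = pow(g, r, P)                             # commitment
    h = H(R, m)                                  # challenge
    s = (r + h * sk) % q                         # response
    return (h, s)

def verify(pk, m, sig):
    h, s = sig
    R = pow(g, s, P) * pow(pk, (-h) % q, P) % P  # R = g^s * pk^{-h}
    return H(R, m) == h

pk, sk = gen()
assert verify(pk, "hello", sign(sk, "hello"))
```

Correctness follows since \(g^s \cdot \textsf {pk}^{-h} = g^{r + hx} \cdot g^{-hx} = g^r = R\).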
Theorem 1
(\(\mathsf {StCorrCMA}\) Security of \(\mathsf {Schnorr}\) in the GGM) Schnorr’s signature scheme \({\mathsf {SIG}}\) is \((t, \varepsilon , \mu , {Q}_{s}, {Q}_{\textsc {Cor}}, {Q}_\mathsf {H})\text {-}\mathsf {StCorrCMA}\)-secure in the \(\mathrm {GGM}\) and in the programmable random oracle model, where
Here, \(Q_\mathbb {G}\) is the number of group operations queried by the adversary.
The proof of Theorem 1 follows the approach of [3, 32]: we first define an algebraic interactive assumption, \({\mathsf {CorrIDLOG}}\) (Interactive Discrete Logarithm with Corruption), which is tightly equivalent to the \(\mathsf {StCorrCMA}\) security of \(\mathsf {Schnorr}\), and then we analyze the hardness of \({\mathsf {CorrIDLOG}}\) in the GGM. It is motivated by the \({\mathsf {IDLOG}}\) (Interactive Discrete Logarithm) assumption in [32], and is a stronger assumption than \({\mathsf {IDLOG}}\) in the sense that it allows an adversary to corrupt the secret exponents of some public keys. Details are given in Appendix A.
3 Security Model for Two-Message Authenticated Key Exchange
In this section, we use the security model in [28] to define the security of two-message authenticated key exchange protocols. This section is almost verbatim from Section 4 of [28]. We highlight the difference we make for our protocols: since our protocols are not secure against (ephemeral) state reveal attacks (as in the extended Canetti–Krawczyk (eCK) model [34]), we do not consider state reveals in our model.
A two-message key exchange protocol \(\mathsf {AKE}:= (\mathsf {Gen_{AKE}}, \mathsf {Init}_{\textsc {I}}, \mathsf {Der_{R}}, \mathsf {Der}_{\textsc {I}})\) consists of four algorithms which are executed interactively by two parties as shown in Fig. 4. We denote the party which initiates the session by \(\mathsf {P}_i\) and the party which responds to the session by \(\mathsf {P}_r\). The key generation algorithm \(\mathsf {Gen_{AKE}}\) outputs a key pair \((\textsf {pk},\textsf {sk})\) for one party. The initialization algorithm \(\mathsf {Init}_{\textsc {I}}\) takes as input the initiator’s long-term secret key \(\textsf {sk}_i\) and the responder’s long-term public key \(\textsf {pk}_r\), and outputs a message \(m_{i}\) and a state \(\text {st}\). The responder’s derivation algorithm \(\mathsf {Der_{R}}\) takes as input the responder’s long-term secret key \(\textsf {sk}_r\), the initiator’s public key \(\textsf {pk}_i\) and a message \(m_{i}\); it computes a message \(m_{r}\) and a session key K. The initiator’s derivation algorithm \(\mathsf {Der}_{\textsc {I}}\) takes as input the initiator’s long-term secret key \(\textsf {sk}_i\), the responder’s long-term public key \(\textsf {pk}_r\), the responder’s message \(m_{r}\) and the state \(\text {st}\), and outputs the session key K. Note that the responder is not required to save any internal state information besides the session key K.
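The four-algorithm syntax can be written down as a skeleton and exercised end to end. The Python sketch below instantiates it with a toy unauthenticated DH purely to show the data flow of Fig. 4 (which argument is which, and that the responder keeps no state besides K); a real AKE such as signed DH would of course also use the long-term keys.

```python
# Skeleton of the four-algorithm AKE syntax, exercised with a toy
# unauthenticated-DH instantiation (illustrative only).
import secrets

p, q, g = 227, 113, 4  # toy group

def Gen_AKE():
    sk = secrets.randbelow(q)
    return pow(g, sk, p), sk                 # (pk, sk)

def Init_I(sk_i, pk_r):
    x = secrets.randbelow(q)
    return pow(g, x, p), x                   # message m_i and state st

def Der_R(sk_r, pk_i, m_i):
    y = secrets.randbelow(q)
    m_r, K = pow(g, y, p), pow(m_i, y, p)    # responder keeps only K
    return m_r, K

def Der_I(sk_i, pk_r, m_r, st):
    return pow(m_r, st, p)                   # session key K

(pk_i, sk_i), (pk_r, sk_r) = Gen_AKE(), Gen_AKE()
m_i, st = Init_I(sk_i, pk_r)                 # first message move
m_r, K_r = Der_R(sk_r, pk_i, m_i)            # second message move
K_i = Der_I(sk_i, pk_r, m_r, st)
assert K_i == K_r
```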
We give a security game written in pseudocode. We define a model for explicit authenticated protocols achieving (full) forward secrecy instead of weak forward secrecy. Namely, an adversary in our model can be active and corrupt the user who owns the \(\textsc {Test}\) session \(\text {sID}^*\), and the only restriction is that if there is no matching session to \(\text {sID}^*\), then the peer of \(\text {sID}^*\) must not be corrupted before the session finishes.
Here, explicit authentication means entity authentication in the sense that a party can explicitly confirm that he is talking to the actual owner of the recipient’s public key. The key confirmation property is only implicit [21], where a party is assured that the other identified party can compute the same session key. The game \(\mathsf {IND}\text {}\mathsf {FS}\) is given in Figs. 5 and 6. We refer readers to [16] for more details on different types of authentication for key exchange protocols.
Execution Environment. We consider \(\mu \) parties \(\mathsf {P}_1,\dotsc ,\mathsf {P}_\mu \) with long-term key pairs \((\textsf {pk}_n,\textsf {sk}_n)\), \(n\in [\mu ]\). When two parties A and B want to communicate, the initiator, say A, first creates a session. To identify this session, we increase the global identification number \(\text {sID}\) and assign its current value to the session owned by A; the value of \(\text {sID}\) increases after every assignment. Moreover, a message is sent to the responder. The responder then similarly creates a corresponding session, which is assigned the current value of \(\text {sID}\). Hence, each conversation includes two sessions. We then define variables in relation to the identifier \(\text {sID}\):

\(\text {init}[\text {sID}]\in [\mu ]\) denotes the initiator of the session.

\(\text {resp}[\text {sID}]\in [\mu ]\) denotes the responder of the session.

\(\text {type}[\text {sID}]\in \{\text {``In''},\text {``Re''}\}\) denotes the session’s view, i.e., whether the initiator or the responder computes the session key.

\(\mathsf {Msg_I}[\text {sID}]\) denotes the message that was computed by the initiator.

\(\mathsf {Msg_R}[\text {sID}]\) denotes the message that was computed by the responder.

\(\text {state}[\text {sID}]\) denotes the (secret) state information, i.e., ephemeral secret keys.

\(\text {sKey}[\text {sID}]\) denotes the session key.
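The bookkeeping above amounts to one record per session identifier. The following Python sketch mirrors the variables (field names follow the text; the record layout is our own illustration):

```python
# Per-session bookkeeping of the execution environment: a global counter
# sID and one record per session with the variables defined in the text.
sessions = {}
sID_counter = 0

def new_session(init, resp, type_):
    global sID_counter
    sID_counter += 1           # sID increases after every assignment
    sessions[sID_counter] = {
        "init": init,          # init[sID]
        "resp": resp,          # resp[sID]
        "type": type_,         # "In" or "Re"
        "MsgI": None,          # MsgI[sID]
        "MsgR": None,          # MsgR[sID]
        "state": None,         # state[sID]
        "sKey": None,          # sKey[sID]
    }
    return sID_counter

# one conversation between parties 1 and 2 yields two sessions
sA = new_session(init=1, resp=2, type_="In")   # initiator's view
sB = new_session(init=1, resp=2, type_="Re")   # responder's view
assert (sA, sB) == (1, 2)
assert sessions[sA]["type"] != sessions[sB]["type"]
```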
To establish a session between two parties, the adversary is given access to oracles \(\textsc {Session}_{\mathrm{I}}\) and \(\textsc {Session}_{\mathsf {R}}\), where the first one starts a session of type \(\text {``In''}\) and the second one of type \(\text {``Re''}\). The \(\textsc {Session}_{\mathsf {R}}\) oracle also runs the \(\mathsf {Der_{R}}\) algorithm to compute its session key and complete the session, as it has access to all the required variables. In order to complete the initiator’s session, the oracle \(\textsc {Der}_{\textsc {I}}\) has to be queried.
Following [28], we do not allow the adversary to register adversarially controlled parties by providing long-term public keys, as the registered keys would be treated no differently than regular corrupted keys. If we included a key registration oracle, our proof would require a stronger notion of security for the signature scheme, in the sense that the signature challenger must generate the system parameters with some trapdoor. With the trapdoor, the challenger can simulate a valid signature under the adversarially registered public keys. This is the case for the Schnorr signature and the tight scheme in [17], since they are honest-verifier zero-knowledge and the aforementioned property can be achieved by programming the random oracles.
Finally, the adversary has access to oracles \(\textsc {Corr}\) and \(\textsc {Reveal}\) to obtain secret information. We use the following Boolean values to keep track of which queries the adversary made:

\(\text {corrupted}[n]\) denotes whether the long-term secret key of party \(\mathsf {P}_n\) was given to the adversary.

\(\text {revealed}[\text {sID}]\) denotes whether the session key was given to the adversary.

\(\text {peerCorrupted}[\text {sID}]\) denotes whether the peer of the session was corrupted and its long-term key was given to the adversary at the time the owner’s session key was computed, which is important for forward secrecy.
The adversary can forward messages between sessions or modify them. By that, we can define the relationship between two sessions:

Matching Session: Two sessions \(\text {sID}\) and \(\text {sID}'\) match if the same parties are involved (\(\text {init}[\text {sID}]=\text {init}[\text {sID}']\) and \(\text {resp}[\text {sID}]=\text {resp}[\text {sID}']\)), the messages sent and received are the same (\(\mathsf {Msg_I}[\text {sID}]=\mathsf {Msg_I}[\text {sID}']\) and \(\mathsf {Msg_R}[\text {sID}]=\mathsf {Msg_R}[\text {sID}']\)) and they are of different types (\(\text {type}[\text {sID}]\ne \text {type}[\text {sID}']\)).
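The matching-session relation is a simple predicate over the bookkeeping records. The Python sketch below uses illustrative field names mirroring the text:

```python
# Matching sessions: same parties, same messages, different types.
def matching(s1, s2):
    return (s1["init"] == s2["init"] and s1["resp"] == s2["resp"]
            and s1["MsgI"] == s2["MsgI"] and s1["MsgR"] == s2["MsgR"]
            and s1["type"] != s2["type"])

a = {"init": 1, "resp": 2, "MsgI": "X", "MsgR": "Y", "type": "In"}
b = {"init": 1, "resp": 2, "MsgI": "X", "MsgR": "Y", "type": "Re"}
c = dict(b, MsgR="Y2")   # responder message modified by the adversary
assert matching(a, b)
assert not matching(a, c)   # a modified message breaks the match
assert not matching(a, a)   # same type: the two views never match
```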
Our protocols use signatures to preserve integrity, so that any successful no-match attack as described in [35] will lead to a signature forgery and thus can be excluded.
Finally, the adversary is given access to oracle \(\textsc {Test}\), which can be queried multiple times and which will return either the session key of the specified session or a uniformly random key. We use one bit b for all test queries, and store test sessions in a set \(\mathcal {S}\). The adversary can obtain information on the interactions between two parties by querying the longterm secret keys and the session key. However, for each test session, we require that the adversary does not issue queries such that the session key can be trivially computed. We define the properties of freshness and validity which all test sessions have to satisfy:

Freshness: A (test) session is called fresh if the session key was not revealed. Furthermore, if there exists a matching session, we require that this session’s key is not revealed and that this session is not also a test session.

Validity: A (test) session is called valid if it is fresh and the adversary performed an attack defined in the security model. We capture this with attack Table 2.
Attack Tables. We define validity of different attack strategies. All attacks are defined using variables to indicate which queries the adversary may (not) make. We consider three dimensions:

whether the test session is on the initiator’s (\(\text {type}[\text {sID}^*]=\)“In”) or the responder’s side (\(\text {type}[\text {sID}^*]=\)“Re”),

all combinations of long-term secret key reveals, taking into account when a corruption happened (\(\text {corrupted}\) and \(\text {peerCorrupted}\) variables),

the number of matching sessions, i.e., whether the adversary acted passively (matching session) or actively (no matching session).
The purpose of these tables is to make our proofs precise by listing all the possible attacks.
How to read the tables. Table 1 lists all possible attacks from an adversary. By excluding trivial attacks and merging similar attacks, we obtain Table 2 from Table 1. If the set of variables corresponding to a test session is set as in any row of Table 2, this row will evaluate to \({\textbf {true}}\) in line 10 in Fig. 6. We now describe the different attacks in Table 1 in more detail:
 Row 0.:

If the protocol does not use appropriate randomness, it should not be considered secure. In this case, there can be multiple sessions matching a test session, which an adversary can exploit. For an honest run of the protocol, the underlying min-entropy ensures that this attack will only happen with negligible probability.
 Row 1.:

Here, the tested session has one matching session, is of type \(\text {``In''}\), and both parties might be corrupted. Since there is a matching session, the adversary has acted passively during the execution of the protocol. Thus, even if both parties were corrupted during the execution, the adversary cannot break the \(\mathsf {AKE}\) security without breaking the passive security of the underlying protocol. Hence, it should make no difference if the parties were corrupted before or after the key was computed, and the \(\text {corrupted}\) and \(\text {peerCorrupted}\) columns can take any value.
 Row 2.:

This attack is similar to the one above, the only difference is the session type.
 Row 3.:

Here, the responder of the session was corrupted when the initiator computed its key, and there is no matching session. This means that the adversary has performed an active attack, changing or reordering the messages being sent. This can lead to a trivial attack, because the adversary can impersonate the responder using the corrupted secret key. Knowing the underlying message, he can compute the same session key as the initiator and test the initiator's session. Whether the adversary corrupts the initiator makes no difference, and hence this column can take any value.
 Row 4.:

Similar to the attack above, with the types switched, and hence the initiator was corrupted by the time the responder computed the key. This leads to a trivial attack in the same way.
 Row 5.:

Here, there is no matching session, but we specify that the responder was not corrupted when the initiator computed its key. The adversary can choose whether or not to corrupt the initiator before the responder computes its key. The key point is that whether or not he can impersonate the initiator, he does not know the internal state of the initiator, and to break security he must either break the underlying key exchange protocol, or impersonate the responder and break the authentication directly. Hence, this column can take any value. After the initiator's key is computed, it should not matter whether the responder gets corrupted, and hence this column can also take any value.
 Row 6.:

Similar to above, but with the types changed so that the initiator was not corrupted when the responder computed its key.
Of the six attacks presented in Table 1, rows (3.) and (4.) are trivial wins for the adversary and can thus be excluded. Note that rows (1.) and (2.) denote similar attacks against initiator and responder sessions. Since the session's type does not change the queries the adversary is allowed to make in this case, we can merge these rows. For the same reason, we can also merge rows (5.) and (6.). The resulting table is given in Table 2.
The attacks covered in our model capture forward secrecy (FS) and resistance against key compromise impersonation (KCI) attacks.
Note that we do not include reflection attacks, where the adversary makes a party run the protocol with himself. For the \(\mathsf {KE}_{\mathsf {DH}}\) protocol, we could include these and create an additional reduction to the square Diffie–Hellman assumption (given \(g^x\), to compute \(g^{x^2}\)), but for simplicity of our presentation we will not consider reflection attacks in this paper.
For all test sessions, at least one attack has to evaluate to true. Then, the adversary wins if he distinguishes the session keys from uniformly random keys which he obtains through queries to the \(\textsc {Test}\) oracle.
Definition 3
(Key Indistinguishability of AKE) We define game \(\mathsf {IND}\text {-}\mathsf {FS}\) as in Figs. 5 and 6. A protocol \(\mathsf {AKE}\) is \((t,\varepsilon , \mu , S, T, {Q}_{\textsc {Cor}})\text {-}\mathsf {IND}\text {-}\mathsf {FS}\)-secure if for all adversaries \(\mathcal {A}\) attacking the protocol in time t with \(\mu \) users, \(S\) sessions, \(T\) test queries and \({Q}_{\textsc {Cor}}\) corruptions, we have
$$\begin{aligned} \left| \mathrm{Pr}[\mathsf {IND}\text {-}\mathsf {FS}^{\mathcal {A}} \Rightarrow 1] - \frac{1}{2}\right| \le \varepsilon . \end{aligned}$$
Note that if there exists a session which is neither fresh nor valid, the game outputs the bit b, which implies that \(\mathrm{Pr}[\mathsf {IND}\text {-}\mathsf {FS}^{\mathcal {A}} \Rightarrow 1] = 1/2\), giving the adversary an advantage equal to 0. This captures that an adversary will not gain any advantage by performing a trivial attack.
4 Verifiable Key Exchange Protocols
A key exchange protocol \(\mathsf {KE}:=(\mathsf {Init}_{\textsc {I}}, \mathsf {Der_{R}}, \mathsf {Der}_{\textsc {I}})\) can be run between two (unauthenticated) parties \(i\) and \(r\), and can be visualized as in Fig. 4, with two differences: (1) parties do not hold any public or private keys, and (2) public and private keys in the algorithms \(\mathsf {Init}_{\textsc {I}}, \mathsf {Der_{R}}, \mathsf {Der}_{\textsc {I}}\) are replaced with the corresponding users' (public) identities.
The standard signed Diffie–Hellman (DH) protocol can be viewed as a generic way to transform a passively secure key exchange protocol into an actively secure AKE protocol using digital signatures. Our tight transformation does not modify the construction of the signed DH protocol, but requires a security notion (namely, One-Wayness against Honest and key Verification attacks, or \(\mathsf {OW\text {-}HV}\)) that is (slightly) stronger than passive security: In addition to passive attacks, an adversary is allowed to check whether a key corresponds to some honestly generated transcript and to forward transcripts in a different order to create non-matching sessions. Here, we require that all the involved transcripts are honestly generated by the security game and not by the adversary. This is formally defined by Definition 4 with security game \(\mathsf {OW\text {-}HV}\) as in Fig. 7.
Definition 4
(One-Wayness against Honest and key Verification attacks (\(\mathsf {OW\text {-}HV}\))) A key exchange protocol \(\mathsf {KE}\) is \((t, \varepsilon , \mu , S, Q_{V})\text {-}\mathsf {OW\text {-}HV}\)-secure, where \(\mu \) is the number of users, \(S\) is the number of sessions and \(Q_{V}\) is the number of calls to \(\textsc {KVer}\), if for all adversaries \(\mathcal {A}\) attacking the protocol in time at most t, we have
$$\begin{aligned} \mathrm{Pr}[\mathsf {OW\text {-}HV}^{\mathcal {A}} \Rightarrow 1] \le \varepsilon . \end{aligned}$$
We require that a key exchange protocol \(\mathsf {KE}\) has \(\alpha \) bits of min-entropy, i.e., that for all messages \(m'\) we have \(\mathrm{Pr}[m = m'] \le 2^{-\alpha },\) where m is output by either \(\mathsf {Init}_{\textsc {I}}\) or \(\mathsf {Der_{R}}\).
4.1 Example: Plain Diffie–Hellman Protocol
We show that the plain Diffie–Hellman (DH) protocol over a prime-order group [19] is an \(\mathsf {OW\text {-}HV}\)-secure key exchange protocol under the strong computational DH (\(\mathsf {StCDH}\)) assumption [1]. We use our syntax to recall the original DH protocol \(\mathsf {KE}_{\mathsf {DH}}\) in Fig. 8.
Let \(\mathsf {par}=(p, g ,\mathbb {G})\) be a set of system parameters, where \(\mathbb {G}:=\left\langle g\right\rangle \) is a cyclic group of prime order p.
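To make the syntax concrete, the following Python sketch (our own illustration, not from the paper) instantiates \(\mathsf {KE}_{\mathsf {DH}}\) over a toy group: the subgroup of \(\mathbb {Z}_{23}^{*}\) of prime order \(p = 11\) generated by \(g = 2\). A real deployment would of course use a cryptographically sized group; only the algorithm names mirror Fig. 8.

```python
import secrets

# Toy parameters par = (p, g, G): G is the subgroup of Z_23^* of prime
# order p = 11 generated by g = 2. Illustration only -- not secure.
N, p, g = 23, 11, 2

def Init_I():
    """Initiator: sample x and send X = g^x; keep x as state."""
    x = secrets.randbelow(p)
    return pow(g, x, N), x

def Der_R(X):
    """Responder: sample y, send Y = g^y, and derive the key K = X^y."""
    y = secrets.randbelow(p)
    return pow(g, y, N), pow(X, y, N)

def Der_I(x, Y):
    """Initiator: derive the key K = Y^x from the responder's message."""
    return pow(Y, x, N)

X, x = Init_I()          # first message
Y, K_resp = Der_R(X)     # second message and responder key
K_init = Der_I(x, Y)     # initiator key
assert K_init == K_resp  # both sides share K = g^{xy}
```

Since messages are uniform in the order-p subgroup, this toy instance has \(\log _2 11 \approx 3.46\) bits of min-entropy, matching Lemma 1's \(\alpha = \log _2 p\).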
Definition 5
(Strong CDH Assumption) The strong CDH (\(\mathsf {StCDH}\)) assumption is said to be \((t, \varepsilon , Q_{\textsc {Dh}})\)-hard in \(\mathsf {par}=(p, g ,\mathbb {G})\), if for all adversaries \(\mathcal {A}\) running in time at most t and making at most \(Q_{\textsc {Dh}}\) queries to the DH predicate oracle \(\textsc {Dh}_{a}\), we have:
$$\begin{aligned} \mathrm{Pr}[\mathcal {A}^{\textsc {Dh}_{a}}(A,B) = B^{a} \mid a,b \leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p, A:=g^{a}, B:=g^{b}] \le \varepsilon , \end{aligned}$$
where the DH predicate oracle \(\textsc {Dh}_{a}(C,D)\) outputs 1 if \(D=C^{a}\) and 0 otherwise.
Lemma 1
Let \(\mathsf {KE}_{\mathsf {DH}}\) be the DH key exchange protocol as in Fig. 8. Then \(\mathsf {KE}_{\mathsf {DH}}\) has \(\alpha = \log _2{p}\) bits of min-entropy, and for every adversary \(\mathcal {A}\) that breaks the \((t, \varepsilon , \mu , S, Q_{V})\text {-}\mathsf {OW\text {-}HV}\)-security of \(\mathsf {KE}_{\mathsf {DH}}\), there is an adversary \(\mathcal {B}\) that breaks the \((t',\varepsilon ', Q_{\textsc {Dh}})\text {-}\mathsf {StCDH}\) assumption with
$$\begin{aligned} \varepsilon ' = \varepsilon , \quad t' \approx t \quad \text {and} \quad Q_{\textsc {Dh}} = Q_{V} + 1. \end{aligned}$$
Proof
The min-entropy assertion is straightforward, as the DH protocol generates messages by drawing exponents \(x,y \leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p\) uniformly at random.
We prove the rest of the lemma by constructing a reduction \(\mathcal {B}\) which takes the \(\mathsf {StCDH}\) challenge \((A, B)\) as input and is given access to the decision oracle \(\textsc {Dh}_{a}\). \(\mathcal {B}\) simulates the \(\mathsf {OW\text {-}HV}\) security game for the adversary \(\mathcal {A}\), answering \(\mathcal {A}\)'s oracle queries as in Fig. 9. More precisely, \(\mathcal {B}\) uses the random self-reducibility of \(\mathsf {StCDH}\) to simulate the whole security game, instead of using the \(\mathsf {Init}_{\textsc {I}}\) and \(\mathsf {Der_{R}}\) algorithms. The most relevant code lines are highlighted with bold line numbers.
We show that \(\mathcal {B}\) simulates the \(\mathsf {OW\text{ }HV}\) game for \(\mathcal {A}\) perfectly:

Since \(X\) generated in line 26 and \(Y\) generated in line 37 are uniformly random, the outputs of \(\textsc {Session}_{\mathrm{I}}\) and \(\textsc {Session}_{\mathsf {R}}\) are distributed as in the real protocol. Note that the output of \(\textsc {Der}_{\textsc {I}}\) does not get modified.

For \(\textsc {KVer}(\text {sID},K)\), if K is a valid key that corresponds to session \(\text {sID}\), then there must exist sessions \(\text {sID}_1\) and \(\text {sID}_2\) such that \(\text {type}[\text {sID}_1]=\text {``In''}\) (defined in line 24) and \(\text {type}[\text {sID}_2]=\text {``Re''}\) (defined in line 34) and
$$\begin{aligned} K = (B\cdot g^{\alpha [\text {sID}_2]})^{(a+\alpha [\text {sID}_1])} =Y^{a} \cdot Y^{\alpha [\text {sID}_1]}, \end{aligned}$$(2)
where \(\mathsf {Msg_I}[\text {sID}]=\mathsf {Msg_I}[\text {sID}_1]=A\cdot g^{\alpha [\text {sID}_1]}\) (defined in line 26) and \(\mathsf {Msg_R}[\text {sID}]=\mathsf {Msg_R}[\text {sID}_2]=Y:=B\cdot g^{\alpha [\text {sID}_2]}\) (defined in line 37). Thus, the output of \(\textsc {KVer}(\text {sID},K)\) is the same as that of \(\textsc {Dh}_{a}(Y, K/Y^{\alpha [\text {sID}_1]})\).
Finally, \(\mathcal {A}\) returns \(\text {sID}^* \in [\text {cnt}_{\text {S}}]\) and a key \(K^*\). If \(\mathcal {A}\) wins, then \(\textsc {KVer}(\text {sID}^*, K^*)=1\), which means that there exist sessions \(\text {sID}_1\) and \(\text {sID}_2\) such that \(\text {type}[\text {sID}_1] = \text {``In''}\), \(\text {type}[\text {sID}_2]=\text {``Re''}\) and
$$\begin{aligned} K^* = Y^{(a+\alpha [\text {sID}_1])} = Y^{a} \cdot Y^{\alpha [\text {sID}_1]}, \end{aligned}$$
where \(Y=\mathsf {Msg_R}[\text {sID}_2]=B\cdot g^{\alpha [\text {sID}_2]}\). This means that \(\mathcal {B}\) breaks \(\mathsf {StCDH}\) by computing \(g^{ab}=K^*/(Y^{\alpha [\text {sID}_1]}\cdot A^{\alpha [\text {sID}_2]})\) as in line 08, if \(\mathcal {A}\) breaks the \(\mathsf {OW\text {-}HV}\) security of \(\mathsf {KE}_{\mathsf {DH}}\). Hence, \(\varepsilon =\varepsilon '\). The running time of \(\mathcal {B}\) is that of \(\mathcal {A}\) plus one exponentiation for every call to \(\textsc {Session}_{\mathrm{I}}\) and \(\textsc {Session}_{\mathsf {R}}\), so we get \(t \approx t'\). The number of calls to \(\textsc {Dh}_{a}\) is the number of calls to \(\textsc {KVer}\), plus one additional call to verify the adversary's forgery, and hence \(Q_{\textsc {Dh}}=Q_{V}+1\). \(\square \)
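The re-randomization and extraction steps of this reduction can be checked numerically. The following Python sketch is our own (toy group as in Sect. 4.1; the variables \(a, b\) are sampled only so the script can verify the identity, whereas the actual reduction never knows them):

```python
import secrets

# Toy group: subgroup of Z_23^* of prime order p = 11, generator g = 2.
N, p, g = 23, 11, 2

# An StCDH challenge (A, B) = (g^a, g^b). We sample a, b only to be able
# to verify the extraction identity below.
a, b = secrets.randbelow(p), secrets.randbelow(p)
A, B = pow(g, a, N), pow(g, b, N)

# Re-randomized session messages, in the style of lines 26 and 37 of Fig. 9.
alpha1, alpha2 = secrets.randbelow(p), secrets.randbelow(p)
X = A * pow(g, alpha1, N) % N   # X = g^{a + alpha1}
Y = B * pow(g, alpha2, N) % N   # Y = g^{b + alpha2}

# The honest session key for (X, Y), i.e., Eq. (2): K = Y^{a + alpha1}.
K = pow(Y, a + alpha1, N)

# Extraction in the style of line 08: g^{ab} = K / (Y^alpha1 * A^alpha2).
denom = pow(Y, alpha1, N) * pow(A, alpha2, N) % N
g_ab = K * pow(denom, -1, N) % N   # modular inverse (Python >= 3.8)
assert g_ab == pow(g, a * b % p, N)
```

The final assertion is exactly the algebraic identity \(K^*/(Y^{\alpha _1}\cdot A^{\alpha _2}) = g^{ab}\) used by \(\mathcal {B}\).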
Group of Signed Quadratic Residues Our construction of a key exchange protocol in Fig. 8 is based on the \(\mathsf {StCDH}\) assumption over a prime-order group. Alternatively, we can instantiate our \(\mathsf {VKE}\) protocol in a group of signed quadratic residues \(\mathbb {QR}_N^+\) [27]. As the \(\mathsf {StCDH}\) assumption in \(\mathbb {QR}_N^+\) groups is tightly implied by the factoring assumption (by [27, Theorem 2]), our \(\mathsf {VKE}\) protocol is secure based on the classical factoring assumption.
5 Signed Diffie–Hellman, revisited
Following the definition in Sect. 3, we want to construct an \(\mathsf {IND}\text {-}\mathsf {FS}\)-secure authenticated key exchange protocol \(\mathsf {AKE}=(\mathsf {Gen_{AKE}}, \mathsf {Init}_{\textsc {I}}, \mathsf {Der}_{\textsc {I}}, \mathsf {Der_{R}})\) by combining a \(\mathsf {StCorrCMA}\)-secure signature scheme \({\mathsf {SIG}}=({\mathsf {Gen}},{\mathsf {Sign}},{\mathsf {Ver}})\), an \(\mathsf {OW\text {-}HV}\)-secure key exchange protocol \(\mathsf {KE}=(\mathsf {Init}_{\textsc {I}}', \mathsf {Der}_{\textsc {I}}',\mathsf {Der_{R}}')\), and a random oracle \(\mathsf {H}\). The construction is given in Fig. 10 and follows the execution order from Fig. 4.
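The message flow of this transformation can be sketched as follows. The Python below is our own illustration: the DH group is the toy group from Sect. 4.1, the "signature scheme" is only an insecure placeholder (a MAC under the long-term key) standing in for any \(\mathsf {StCorrCMA}\)-secure scheme such as Schnorr, and we assume the responder signs the full transcript \((X, Y)\); none of these concrete choices are prescribed by the paper.

```python
import hashlib, hmac, secrets

N, p, g = 23, 11, 2   # toy prime-order group, as in Sect. 4.1

# Placeholder for SIG = (Gen, Sign, Ver). NOT a real signature scheme:
# "signing" is a MAC under the long-term key; it only shows the flow.
def Gen():
    sk = secrets.token_bytes(16)
    return sk, sk                       # (pk, sk) -- placeholder only

def Sign(sk, m):
    return hmac.digest(sk, m, "sha256")

def Ver(pk, m, sig):
    return hmac.compare_digest(Sign(pk, m), sig)

def H(*parts):                          # random oracle for the session key
    return hashlib.sha256(repr(parts).encode()).hexdigest()

pk_i, sk_i = Gen()                      # long-term keys of initiator i
pk_r, sk_r = Gen()                      # and of responder r

# First message: X together with a signature on it.
x = secrets.randbelow(p); X = pow(g, x, N)
sig_i = Sign(sk_i, repr(X).encode())

# Responder: verify, answer with Y signed over the transcript (X, Y).
assert Ver(pk_i, repr(X).encode(), sig_i)
y = secrets.randbelow(p); Y = pow(g, y, N)
sig_r = Sign(sk_r, repr((X, Y)).encode())
K_r = H(pk_i, pk_r, X, Y, pow(X, y, N))

# Initiator: verify and derive the same key.
assert Ver(pk_r, repr((X, Y)).encode(), sig_r)
K_i = H(pk_i, pk_r, X, Y, pow(Y, x, N))
assert K_i == K_r
```

The point of the sketch is the order of operations: each side verifies the peer's signature before deriving, and the session key is the hash of the context together with the DH value, matching the role of \(\mathsf {H}\) in Fig. 10.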
We now prove that this construction is in fact a secure \(\mathsf {AKE}\) protocol.
Theorem 2
For every adversary \(\mathcal {A}\) that breaks the \((t,\varepsilon , \mu , S,T, Q_\mathsf {H}, {Q}_{\textsc {Cor}})\text {-}\mathsf {IND}\text {-}\mathsf {FS}\)-security of a protocol \(\mathsf {AKE}\) constructed as in Fig. 10, we can construct an adversary \(\mathcal {B}\) against the \((t',\varepsilon ', \mu , {Q}_{s}, {Q}_{\textsc {Cor}}')\text {-}\mathsf {StCorrCMA}\)-security of a signature scheme \({\mathsf {SIG}}\) with \(\alpha \) bits of key min-entropy, and an adversary \(\mathcal {C}\) against the \((t'',\varepsilon '', \mu , S', Q_{V})\text {-}\mathsf {OW\text {-}HV}\)-security of a key exchange protocol \(\mathsf {KE}\) with \(\beta \) bits of min-entropy, such that
$$\begin{aligned} \varepsilon \le \varepsilon ' + \varepsilon '' + \frac{\mu ^2}{2^{\alpha +1}} + \frac{S^2}{2^{\beta +1}}. \end{aligned}$$
Proof
We will prove this by using the following hybrid games, which are illustrated in Fig. 11.
Game \(G_0\): This is the \(\mathsf {IND}\text {-}\mathsf {FS}\) security game for the protocol \(\mathsf {AKE}\). We assume that all long-term keys, and all messages output by \(\mathsf {Init}_{\textsc {I}}\) and \(\mathsf {Der_{R}}\), are distinct. If a collision happens, the game aborts. To bound the probability of this happening, we use that \({\mathsf {SIG}}\) has \(\alpha \) bits of key min-entropy, and \(\mathsf {KE}\) has \(\beta \) bits of min-entropy. We can upper bound the probability of a collision happening in the keys as \(\mu ^2/2^{\alpha +1}\) for \(\mu \) parties, and the probability of a collision happening in the messages as \(S^2/2^{\beta +1}\) for \(S\) sessions, as each session computes one message. Thus, we have
$$\begin{aligned} \left| \mathrm{Pr}[G_0^{\mathcal {A}} \Rightarrow 1] - \mathrm{Pr}[\mathsf {IND}\text {-}\mathsf {FS}^{\mathcal {A}} \Rightarrow 1]\right| \le \frac{\mu ^2}{2^{\alpha +1}} + \frac{S^2}{2^{\beta +1}}. \end{aligned}$$
Game \(G_1\): In this game, when the oracles \(\textsc {Der}_{\textsc {I}}\) and \(\textsc {Session}_{\mathsf {R}}\) try to derive a session key, they abort if the input message does not correspond to a previously sent message (namely, \(\mathcal {A}\) generated the message itself), yet the corresponding signature is valid w.r.t. an uncorrupted party.
This step excludes the active attacks in which an adversary creates its own messages. An adversary cannot notice this change, since doing so requires forging a signature for the underlying \(\mathsf {StCorrCMA}\)-secure signature scheme; we will formally prove this below. Moreover, this is the preparation step for reducing an \(\mathsf {IND}\text {-}\mathsf {FS}\) adversary against \(\mathsf {AKE}\) to an \(\mathsf {OW\text {-}HV}\) adversary against \(\mathsf {KE}\). Note that in this game we do not exclude all non-matching \(\textsc {Test}\) sessions, but this is already enough for the “\(\mathsf {IND}\text {-}\mathsf {FS}\)-to-\(\mathsf {OW\text {-}HV}\)” reduction. For instance, \(\mathcal {A}\) can still force some responder session to be non-matching by reusing one of the previous initiator messages to query \(\textsc {Session}_{\mathsf {R}}\), and then use the non-matching responder session to query \(\textsc {Test}\).
The only way to distinguish \(G_0\) and \(G_1\) is to trigger the new abort event in either line 19 (i.e., \(\mathsf {AbortDer_{\mathsf {R}}}\)) or line 39 (i.e., \(\mathsf {AbortDer_{\mathsf {I}}}\)) of Fig. 11. We define the event \(\mathsf {AbortDer}:=\mathsf {AbortDer_{\mathsf {I}}}\vee \mathsf {AbortDer_{\mathsf {R}}}\) and have that
$$\begin{aligned} \left| \mathrm{Pr}[G_1^{\mathcal {A}} \Rightarrow 1] - \mathrm{Pr}[G_0^{\mathcal {A}} \Rightarrow 1]\right| \le \mathrm{Pr}[\mathsf {AbortDer}]. \end{aligned}$$
To bound this probability, we construct an adversary \(\mathcal {B}\) against the \((t',\varepsilon ', \mu , {Q}_{s}, {Q}_{\textsc {Cor}}') \text{ }\mathsf {StCorrCMA}\)security of \({\mathsf {SIG}}\) in Fig. 12.
We note that \(\mathsf {AbortDer}\) is true only if \(\mathcal {A}\) performs attacks (5.)+(6.) in Table 2, which may lead to a session without any matching session. If \(\mathsf {AbortDer}={\textbf {true}}\), then \(\Sigma \) is defined in lines 26 and 42 of Fig. 12 and \(\Sigma \) is a valid \(\mathsf {StCorrCMA}\) forgery for \({\mathsf {SIG}}\). We only show this for the case \(\mathsf {AbortDer_{\mathsf {R}}}={\textbf {true}}\); the argument is similar for the case \(\mathsf {AbortDer_{\mathsf {I}}}={\textbf {true}}\). Given that \(\mathsf {AbortDer_{\mathsf {R}}}\) happens, we have that \({\mathsf {Ver}}(\textsf {pk}_i,X,\sigma _{i})=1\) and \(\text {peerCorrupted}[\text {sID}] = {\textbf {false}}\). Due to the criteria in line 40, the pair \((X,\sigma _{i})\) has not been output by \(\textsc {Session}_{\mathrm{I}}\) on input \((i, r)\) for any \(r\), and hence \((i, X)\) has never been queried to the \(\textsc {Sign}'\) oracle. Therefore, \(\mathcal {B}\) aborts the simulation for \(\mathcal {A}\) in the \(\mathsf {IND}\text {-}\mathsf {FS}\) game and returns \((i,X,\sigma _{i})\) to the \(\mathsf {StCorrCMA}\) challenger to win the \(\mathsf {StCorrCMA}\) game. Therefore, we have
$$\begin{aligned} \mathrm{Pr}[\mathsf {AbortDer}] \le \varepsilon ', \end{aligned}$$
which implies that
$$\begin{aligned} \left| \mathrm{Pr}[G_1^{\mathcal {A}} \Rightarrow 1] - \mathrm{Pr}[G_0^{\mathcal {A}} \Rightarrow 1]\right| \le \varepsilon '. \end{aligned}$$
The running time of \(\mathcal {B}\) is the same as that of \(\mathcal {A}\), plus the time used to run the key exchange algorithms \(\mathsf {Init}_{\textsc {I}}', \mathsf {Der_{R}}', \mathsf {Der}_{\textsc {I}}'\) and the signature verification algorithm \({\mathsf {Ver}}\). This gives \(t' \approx t\). For the number of signature queries, we have \({Q}_{s}\le S\), since \(\textsc {Session}_{\mathsf {R}}\) can abort before it queries the signature oracle, and the adversary can reuse messages output by \(\textsc {Session}_{\mathrm{I}}\). For the number of corruptions, we have \({Q}_{\textsc {Cor}}' = {Q}_{\textsc {Cor}}\).
Game \(G_2\): Intuitively, since in \(G_1\) an adversary \(\mathcal {A}\) is not allowed to create its own messages to attack the protocol, \(\mathcal {A}\) can only use the honestly generated messages, although it may forward them in a different order. The \(\mathsf {OW\text {-}HV}\) security of the underlying \(\mathsf {KE}\) allows us to tightly prove that such an \(\mathcal {A}\) cannot distinguish a real session key from a random one, which concludes our security proof. To formally prove this, in \(G_2\) the \(\textsc {Test}\) oracle always returns a uniformly random key, independently of the bit b (Fig. 13).
Since we have excluded collisions in the messages output by the experiment, it is impossible to create two sessions of the same type that compute the same session key. Hence, an adversary must query the random oracle \(\mathsf {H}\) on the correct input of a test session to detect the change between \(G_1\) and \(G_2\) (which is only in case \(b=0\)). More precisely, we have \(\mathrm{Pr}[G_2^\mathcal {A}\Rightarrow 1\mid b=1]=\mathrm{Pr}[G_1^\mathcal {A}\Rightarrow 1\mid b=1] \) and
To bound Eq. (7), we construct an adversary \(\mathcal {C}\) that \((t'',\varepsilon '', \mu , S', Q_{V})\)-breaks the \(\mathsf {OW\text {-}HV}\) security of \(\mathsf {KE}\). The input to \(\mathcal {C}\) is the number of parties \(\mu \) and the system parameters \(\mathsf {par}\). In addition, \(\mathcal {C}\) has access to oracles \(\textsc {Session}_{\mathrm{I}}', \textsc {Session}_{\mathsf {R}}',\textsc {Der}_{\textsc {I}}'\) and \( \textsc {KVer}\).
We first show that the outputs of \(\textsc {Session}_{\mathrm{I}}\), \(\textsc {Session}_{\mathsf {R}}\) and \(\textsc {Der}_{\textsc {I}}\) (simulated by \(\mathcal {C}\)) are distributed the same as in \(G_1\). Due to the abort conditions introduced in \(G_1\), for all sessions that have finished computing a key without making the game abort, their messages are honestly generated, although they may arrive in a different order and produce non-matching sessions. Hence, \(\textsc {Session}_{\mathrm{I}}\), \(\textsc {Session}_{\mathsf {R}}\) and \(\textsc {Der}_{\textsc {I}}\) can be perfectly simulated using \(\textsc {Session}_{\mathrm{I}}'\), \(\textsc {Session}_{\mathsf {R}}'\) and \(\textsc {Der}_{\textsc {I}}'\) of the \(\mathsf {OW\text {-}HV}\) game and the signing key.
It is also easy to see that the random oracle \(\mathsf {H}\) simulated by \(\mathcal {C}\) has the same output distribution as in \(G_1\). We stress that if line 66 is executed, then adversary \(\mathcal {A}\) may use the corresponding \(\text {sID}\) to distinguish \(G_2\) from \(G_1\) for \(b=0\), which is the only way for \(\mathcal {A}\) to see the difference. At the same time, we obtain a valid attack \(\Sigma :=(\text {sID}, K^*)\) against the \(\mathsf {OW\text {-}HV}\) security. Thus, we have
$$\begin{aligned} \left| \mathrm{Pr}[G_2^{\mathcal {A}} \Rightarrow 1 \mid b=0] - \mathrm{Pr}[G_1^{\mathcal {A}} \Rightarrow 1 \mid b=0]\right| \le \varepsilon ''. \end{aligned}$$
As before, the running time of \(\mathcal {C}\) is that of \(\mathcal {A}\), plus generating and verifying signatures, and we have \(t'' \approx t\). Furthermore, \(S' = S\), as the counter for the \(\mathsf {OW\text{ }HV}\) game increases once for every call to \(\textsc {Session}_{\mathrm{I}}\) and \(\textsc {Session}_{\mathsf {R}}\).
At last, for game \(G_2\) we have \(\mathrm{Pr}[G_{2}^\mathcal {A}\Rightarrow 1] =\frac{1}{2}\), as the response from the \(\textsc {Test}\) oracle is independent of the bit b. Summing up all the equations, we obtain
$$\begin{aligned} \varepsilon \le \varepsilon ' + \varepsilon '' + \frac{\mu ^2}{2^{\alpha +1}} + \frac{S^2}{2^{\beta +1}} \end{aligned}$$
and \(t' \approx t, \quad {Q}_{s}\le S, \quad {Q}_{\textsc {Cor}}'={Q}_{\textsc {Cor}}, \quad t'' \approx t, \quad S' = S, \quad Q_{V}\le Q_\mathsf {H}\).
\(\square \)
6 An Extension: Tightly Secure Group Authenticated Key Exchange
6.1 Security Model for Group Authenticated Key Exchange
We consider two-round broadcast group authenticated key exchange protocols that are executed interactively between \(\mu >2\) parties. Each round corresponds to one broadcast of messages. Formally, such a protocol is defined as a tuple of four algorithms \({\mathsf {GAKE}}= ({\mathsf {Gen}}_{{\mathsf {GAKE}}}, {\mathsf {Init}}, {\mathsf {Res}}, {\mathsf {Der}})\), visualized in Fig. 14. We denote the set of potential participants by \(\mathsf {P}= (\mathsf {P}_{1}, \ldots , \mathsf {P}_{\mu })\). Before the protocol is executed for the first time, each party \(\mathsf {P}_{i} \in \mathsf {P}\) runs the algorithm \({\mathsf {Gen}}_{{\mathsf {GAKE}}}(\mathsf {par})\) to generate his own long-term public and private keys \((\textsf {pk}_{i}, \textsf {sk}_{i})\).
Our tworound \({\mathsf {GAKE}}\) protocol allows all parties in a group \(\mathcal {Q}\subseteq \mathsf {P}\) to establish a common secret key. For a party \(\mathsf {P}_{i}\), we say that \(\mathcal {P}_i\) is the rest of the group from \(\mathsf {P}_{i}\)’s view, and we can write \(\mathcal {Q}= \{\mathsf {P}_{i}\} \cup \mathcal {P}_i\). By a slight abuse of notation, we will often write \(j \in \mathcal {P}_i\) instead of \(\mathsf {P}_{j} \in \mathcal {P}_i\).
In the first round, each party \(\mathsf {P}_{i} \in \mathcal {Q}\) starts the session \(\text {sID}\) by executing the initialization algorithm \({\mathsf {Init}}(\textsf {sk}_{i}, \{\textsf {pk}_{j}\}_{j \in \mathcal {P}_i})\) which outputs a message \(m_i\) and a state \(\text {st}\). The party \(\mathsf {P}_{i}\) broadcasts \((i,m_i)\) and keeps the internal state \(\text {st}\).
In the second round, let \(\mathcal {M}_{i}\) denote the set of all pairs \((j, m_j)\) received by \(\mathsf {P}_{i}\) in the first round. Then, each party \(\mathsf {P}_{i} \in \mathcal {Q}\) executes the response algorithm \({\mathsf {Res}}(\textsf {sk}_{i}, \{\textsf {pk}_{j}\}_{j \in \mathcal {P}_i}, \text {st}, \mathcal {M}_{i})\) to compute a message \(\hat{m}_i\) and an updated state \(\text {st}\). As in the first round, \(\mathsf {P}_{i}\) broadcasts \((i,\hat{m}_i)\) and keeps the state \(\text {st}\).
In the final phase, let \(\hat{\mathcal {M}}_{i}\) denote the set of all pairs \((j,\hat{m}_j)\) received by party \(\mathsf {P}_{i}\) in the second round. To obtain the common group session key, each party \(\mathsf {P}_{i}\) can execute \({\mathsf {Der}}(\textsf {sk}_{i}, \{\textsf {pk}_{j}\}_{j \in \mathcal {P}_i}, \text {st}, \mathcal {M}_{i}, \hat{\mathcal {M}}_{i})\) which outputs the key \( K _{}\). An illustration is given in Fig. 14.
Similar to our two-party key exchange protocol, our security game is written in pseudocode. In our model, \({\mathsf {GAKE}}\) achieves forward secrecy and has both explicit authentication and implicit key confirmation. In the group key exchange setting, explicit authentication means entity authentication for every message transmitted, in the sense that each party can explicitly confirm that the initial message was issued by the actual owner of the associated public key. Moreover, key confirmation is implicit for our \({\mathsf {GAKE}}\): every party in a group \(\mathcal {Q}\) is assured implicitly that all members of the group will derive the same session key. The security game is given in Figs. 15 and 16. Our model can be viewed as a careful extension of our two-party model to \(\mu \) parties. Moreover, we note that Poettering et al. [41] proposed a general framework for defining security of GAKE protocols. To the best of our knowledge, our model can be viewed as a specific use case of their framework. For instance, we do not consider Expose queries to reveal the local session state.
Execution environment. We consider \(\mu \) parties \(\mathsf {P}= (\mathsf {P}_{1}, \ldots , \mathsf {P}_{\mu })\) with long-term key pairs \((\textsf {pk}_i, \textsf {sk}_i), i \in [\mu ]\). For each group key exchange, each party in a group \(\mathcal {Q}\) has its own session with a unique identification number \(\text {sID}\), and variables which are defined relative to \(\text {sID}\):

\(\text {owner}[\text {sID}] \in [\mu ]\) denotes the owner of the session.

\(\text {peer}[\text {sID}] \subseteq [\mu ]\) denotes the peers of the session.

\(\mathcal {Q}[\text {sID}]\) denotes all the participants of the session.

\(\mathsf {Msg_I}[\text {sID}]\) denotes the message sent by the owner during the first round.

\(\mathcal {M}[\text {sID}]\) denotes the messages received by the owner during the first round.

\(\mathsf {Msg_R}[\text {sID}]\) denotes the message sent by the owner during the second round.

\(\hat{\mathcal {M}}[\text {sID}]\) denotes the messages received by the owner during the second round.

\(\text {state}[\text {sID}]\) denotes the (secret) state information, i.e., ephemeral secret keys.

\(\text {sKey}[\text {sID}]\) denotes the session key.
Adversary model. Similar to the \(\mathsf {AKE}\) security notion, we do not allow the adversary to register adversarially controlled parties by providing long-term public keys, and the adversary has access to oracles \(\textsc {Corr}\) and \(\textsc {Reveal}\) as described in Fig. 15. We use the following Boolean values to store which queries the adversary made:

\(\text {corrupted}[i]\) denotes whether the long-term secret key of party \(\mathsf {P}_{i}\) was given to the adversary.

\(\text {revealed}[\text {sID}]\) denotes whether the group session key was given to the adversary.

\(\text {peerCorrupted}[\text {sID}]\) denotes whether one of the peers in the group was corrupted and its long-term key was given to the adversary at the time when the session key was derived.
Matching sessions. Extending the definition of matching sessions from the two-party case, we define matching sessions in the \({\mathsf {GAKE}}\) setting as follows.

Matching Sessions: Two sessions \(\text {sID}_i, \text {sID}_j\) are matching if:
$$\begin{aligned}&\text {owner}[\text {sID}_i] \ne \text {owner}[\text {sID}_j]&\text {(Different owners)}\\&\mathcal {Q}[\text {sID}_i] = \mathcal {Q}[\text {sID}_j]&\text {(Identical participants)} \\&\{\mathsf {Msg_I}[\text {sID}_i]\} \cup \mathcal {M}[\text {sID}_i] \\&= \{\mathsf {Msg_I}[\text {sID}_j]\} \cup \mathcal {M}[\text {sID}_j]&\text {(Identical messages in the first round)}\\&\{\mathsf {Msg_R}[\text {sID}_i]\} \cup \hat{\mathcal {M}}[\text {sID}_i] \\&= \{\mathsf {Msg_R}[\text {sID}_j]\}\cup \hat{\mathcal {M}}[\text {sID}_j]&\text {(Identical messages in the second round)} \end{aligned}$$
As in the \(\mathsf {AKE}\) setting, our protocols in the full \({\mathsf {GAKE}}\) model will use signatures, and hence any successful no-match attack as described in [35] will lead to a signature forgery.
Test session. The adversary is given access to the test oracle \(\textsc {Test}\). This oracle can be queried multiple times and depending on a randomly chosen bit \(b \leftarrow _{\scriptscriptstyle \$}\{0,1\}\) (which is shared between all test queries), it outputs either a uniformly random key, or the specified session key.
6.2 Verifiable Group Key Exchange
To achieve tight security, we extend the verifiable key exchange from the two-party setting to \(\mu \) parties. As for the regular two-party \(\mathsf {AKE}\), we construct our tightly secure group authenticated key exchange based on a verifiable (non-authenticated) group key exchange (GKE) that has One-Wayness against Honest and key Verification attacks (a.k.a. \(\mathsf {OW\text {-}G\text {-}HV}\) security). As in the two-party case, the adversary can perform passive attacks, forward messages in a different order to create non-matching sessions, and check whether a key corresponds to some honestly generated transcripts. We require that all the involved messages are honestly generated by the security game and not by the adversary. A (non-authenticated) group key exchange protocol consists of a tuple of algorithms \({\mathsf {GKE}}:=({\mathsf {Init}}, {\mathsf {Res}}, {\mathsf {Der}})\), where parties do not hold any public or private keys and the \({\mathsf {Init}}\) algorithm now takes users' identities \((i, \mathcal {P}_i)\) as input.
The \(\mathsf {OW\text{ }G\text{ }HV}\) security is formally defined by Definition 6 with the security game \(\mathsf {OW\text{ }G\text{ }HV}\) as in Fig. 17.
Definition 6
(Group One-Wayness against Honest and Key Verification Attacks (\(\mathsf {OW\text{ }G\text{ }HV}\))) A group key exchange protocol \({\mathsf {GKE}}\) is \((t, \varepsilon , \mu , S, Q_{V})\text{ }\mathsf {OW\text{ }G\text{ }HV}\)-secure, where \(\mu \) is the number of users, \(S\) is the number of sessions and \(Q_{V}\) is the number of calls to \(\textsc {KVer}\), if for all adversaries \(\mathcal {A}\) attacking the protocol in time at most t, we have:
We require that a group key exchange protocol \({\mathsf {GKE}}\) has \(\alpha \) bits of min-entropy, namely that for all messages \(m'\) we have \(\mathrm{Pr}[m = m'] \le 2^{-\alpha }\), where \(m\) is output by either \({\mathsf {Init}}\) or \({\mathsf {Res}}\).
6.3 Instantiation of \(\mathsf {OW\text{ }G\text{ }HV}\) with Burmester–Desmedt
We show that the Burmester–Desmedt group key exchange protocol [12] is \(\mathsf {OW\text{ }G\text{ }HV}\) secure. We begin by describing the protocol in our framework, and then prove its security based on the strong computational Diffie–Hellman assumption.
Let \(\mathsf {par}=(p, g ,\mathbb {G})\) define a prime-order cyclic group \(\mathbb {G}:=\left\langle g\right\rangle \). We choose a group of users \(\mathcal {Q}\) with \(\left| \mathcal {Q}\right| =n\), and order the participants as \(\mathsf {P}_{1}\) to \(\mathsf {P}_{n}\) in a cycle. Messages \(m_i\) and \(\hat{m}_i\) are sent by \(\mathsf {P}_{i}\). Indices are taken cyclically, so that \(\mathsf {P}_{n+1}=\mathsf {P}_{1}\), and for the messages \(m_{n+1}\) and \(\hat{m}_{n+1}\) we have \(m_{n+1} = m_{1}\) and \(\hat{m}_{n+1} = \hat{m}_{1}\).
The Burmester–Desmedt protocol is described in Fig. 18, and for correctness we show that all parties compute the key
Recall that for user \(i\), we have \(\text {st}:=r_i\). We define the following values:
It then follows that for the key computed in line 06 of Fig. 18, we have
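The correctness computation above can be exercised concretely. The following Python sketch runs the Burmester–Desmedt rounds over a toy subgroup and checks that every party derives the same key \(g^{r_1 r_2 + r_2 r_3 + \cdots + r_n r_1}\); the parameters and function names are ours and far too small to be secure:

```python
# Toy sketch of the Burmester-Desmedt group key exchange (illustrative
# parameters only, NOT secure): group <g> of prime order q inside Z_p^*.
import random

p, q, g = 23, 11, 4          # assumed toy parameters: g has order q mod p

def bd_round1(n):
    """Each party i broadcasts m_i = g^{r_i}."""
    r = [random.randrange(1, q) for _ in range(n)]
    m = [pow(g, ri, p) for ri in r]
    return r, m

def bd_round2(r, m):
    """Each party i broadcasts mhat_i = (m_{i+1}/m_{i-1})^{r_i}."""
    n = len(r)
    return [pow(m[(i + 1) % n] * pow(m[(i - 1) % n], -1, p) % p, r[i], p)
            for i in range(n)]

def bd_key(i, r, m, mhat):
    """Party i derives K = m_{i-1}^{n r_i} * mhat_i^{n-1} * ... * mhat_{i+n-2}^1."""
    n = len(r)
    K = pow(m[(i - 1) % n], n * r[i], p)
    for k in range(1, n):
        K = K * pow(mhat[(i + k - 1) % n], n - k, p) % p
    return K

n = 4
r, m = bd_round1(n)
mhat = bd_round2(r, m)
keys = [bd_key(i, r, m, mhat) for i in range(n)]
# All parties agree on K = g^{r_1 r_2 + r_2 r_3 + ... + r_n r_1}
expected = pow(g, sum(r[i] * r[(i + 1) % n] for i in range(n)) % q, p)
assert all(K == expected for K in keys)
```

The telescoping of the \(\hat{m}\) exponents in `bd_key` is exactly the computation behind the correctness claim for line 06 of Fig. 18.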
Lemma 2
Let \({\mathsf {GKE}}_{\mathsf {BD}}\) be the Burmester–Desmedt group key exchange protocol as in Fig. 18. Then, \({\mathsf {GKE}}_{\mathsf {BD}}\) has \(\alpha = \log _2 p\) bits of min-entropy, and for every adversary \(\mathcal {A}\) that breaks the \((t,\varepsilon ,\mu , S, Q_{V})\)-security of \({\mathsf {GKE}}_{\mathsf {BD}}\), there exists an adversary \(\mathcal {B}\) which breaks the \((t', \varepsilon ', Q_{V}')\)-security of \(\mathsf {StCDH}\) with
Proof
The entropy statement is again straightforward, since \(r_i\) being drawn uniformly at random implies that both \(m_{i}\) and \(\hat{m}_{i}\) are uniformly random as well.
We now construct a simulator \(\mathcal {B}\), which on input \((g^x, g^y)\) breaks the \(\mathsf {StCDH}\) assumption by simulating the \(\mathsf {OW\text{ }G\text{ }HV}\) game to \(\mathcal {A}\).
To simulate \(\textsc {Session}_\mathsf {I}( i\in [\mu ], \mathcal {P}_i\subseteq [\mu ])\), \(\mathcal {B}\) proceeds as in Fig. 17, but instead of running the \({\mathsf {Init}}\) algorithm in line 10, it does the following:
- if \(i\) is odd, \(\mathcal {B}\) draws an element \(a_i\leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p\) and sets and returns \(m_i:=g^x g^{a_i}\);
- if \(i\) is even, \(\mathcal {B}\) draws an element \(a_i\leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p\) and sets and returns \(m_i:=g^y g^{a_i}\).
All \(m_i\)’s are uniformly distributed, exactly as in the original protocol.
To simulate \(\textsc {Session}_\mathsf {R}\), note that \(\mathcal {B}\) does not know the discrete logarithms of the \(m_i\), but it can compute \(\hat{m}_{i} \) in the following way: if \(i\) is even, \(\mathcal {B}\) computes \(\hat{m}_{i}:=m_{i}^{a_{i+1}-a_{i-1}}\), since we have
Simulation of \(\hat{m}_{i}\) for odd \(i\) is similar. Equation (10) shows that the simulated \(\hat{m}_{i}\) are distributed identically to those in the real protocol.
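The identity underlying this simulation can be checked numerically. In the sketch below (toy parameters and variable names are ours), the "real" value \((m_{i+1}/m_{i-1})^{r_i}\) for an even \(i\) is compared against the simulator's value \(m_i^{a_{i+1}-a_{i-1}}\), which requires no knowledge of \(y\):

```python
# Numeric check (toy parameters, NOT secure) of the simulation identity:
# with m_{i-1} = g^{x+a_{i-1}}, m_i = g^{y+a_i}, m_{i+1} = g^{x+a_{i+1}},
# we have (m_{i+1}/m_{i-1})^{r_i} = m_i^{a_{i+1}-a_{i-1}}.
import random

p, q, g = 23, 11, 4                            # toy group: ord(g) = q mod p
x, y = random.randrange(q), random.randrange(q)
a_prev, a_i, a_next = (random.randrange(q) for _ in range(3))
m_prev = pow(g, (x + a_prev) % q, p)           # m_{i-1} (odd index)
m_i    = pow(g, (y + a_i) % q, p)              # m_i (even index)
m_next = pow(g, (x + a_next) % q, p)           # m_{i+1} (odd index)
real = pow(m_next * pow(m_prev, -1, p) % p, (y + a_i) % q, p)
sim  = pow(m_i, (a_next - a_prev) % q, p)      # computable without y
assert real == sim
```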
To simulate \(\textsc {Der}\), \(\mathcal {B}\) follows the steps in Fig. 17, but skips the key derivation in line 34 and leaves the corresponding session key empty. Since there is no session-key-reveal oracle in this game, \(\mathcal {A}\) will not notice this, and the simulation is perfect from \(\mathcal {A}\)'s viewpoint.
To simulate the \(\textsc {KVer}\) oracle on input \((\text {sID}, K)\), for readability, we label \(r_i:=x+a_i\) for odd \(i\), \(r_i:=y+a_i\) for even \(i\), and \(m_{i} = g^{r_i}\) for all \(i\). Recall that the derived session key in \({\mathsf {GKE}}_{\mathsf {BD}}\) is \( K =g^{r_1r_2 + r_2r_3 + \cdots + r_{n-1}r_n+ r_nr_1}\). We then write
for odd \(i\), and
for even \(i\). Note that all \(a_i\)’s are known. If \( K _{}\) is valid for an \(\text {sID}\), we have
This implies that we can compute
If \( K _{}\) is valid for an \(\text {sID}\), we have \({\tilde{ K _{}}}=g^{xy}\). Hence, \(\mathcal {B}\) queries \(\textsc {Dh}_{x}\left( g^y, {\tilde{ K _{}}}\right) \) to verify the key, and returns the answer. This completes the simulation.
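For the small case \(n=4\), the extraction of \(g^{xy}\) from a valid key can be checked numerically. In this sketch (toy parameters; the explicit coefficients are worked out by us for \(n=4\) and are not from the paper), \(\mathcal {B}\) strips the part of \(K\) it can compute from \(g^x\), \(g^y\) and the \(a_i\) alone, then takes an \(n\)-th root in the exponent (the "inversion of \(n\)"):

```python
# Sanity check (n = 4, toy parameters, NOT secure) of the KVer simulation:
# B removes the known part of K and recovers g^{xy} without knowing x or y.
import random

p, q, g = 23, 11, 4                     # toy group: ord(g) = q mod p
x, y = random.randrange(1, q), random.randrange(1, q)
gx, gy = pow(g, x, p), pow(g, y, p)     # B's StCDH challenge
a = [random.randrange(q) for _ in range(4)]
# r_i = x + a_i for odd i, r_i = y + a_i for even i (1-indexed)
r = [(x + a[0]) % q, (y + a[1]) % q, (x + a[2]) % q, (y + a[3]) % q]
K = pow(g, sum(r[i] * r[(i + 1) % 4] for i in range(4)) % q, p)
# Known part of the exponent: 2(a_2+a_4)x + 2(a_1+a_3)y + sum_i a_i a_{i+1}
known = (pow(gx, 2 * (a[1] + a[3]) % q, p)
         * pow(gy, 2 * (a[0] + a[2]) % q, p)
         * pow(g, sum(a[i] * a[(i + 1) % 4] for i in range(4)) % q, p)) % p
# K / known = g^{4xy}; raising to 4^{-1} mod q recovers g^{xy}
tilde = pow(K * pow(known, -1, p) % p, pow(4, -1, q), p)
assert tilde == pow(g, (x * y) % q, p)
```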
If \(\mathcal {A}\) is able to compute a valid session key, then \(\mathcal {B}\) wins the \(\mathsf {StCDH}\) game, and hence \(\varepsilon \le \varepsilon '\). The running time of \(\mathcal {B}\) is that of \(\mathcal {A}\), plus one exponentiation for each \(\textsc {Session}_\mathsf {I}\) and \(\textsc {Session}_\mathsf {R}\) call, and six exponentiations and one inversion (disregarding the inversion of \(n\), which is essentially free) for each call to \(\textsc {KVer}\), since we can sum the various exponents together before performing the exponentiations in the denominator. The total number of queries \(Q_{V}'\) to \(\textsc {Dh}_{x}\) is \(Q_{V}' = Q_{V}+1\), since verifying the adversary's forgery requires one additional call to \(\textsc {Dh}_{x}\). This completes the proof of the lemma. \(\square \)
6.4 Our Generic Transformation for GAKE
Following the construction from Sect. 5, we construct an \(\mathsf {IND\text{ }G\text{ }FS}\)-secure authenticated group key exchange protocol \({\mathsf {GAKE}}= ({\mathsf {Gen}}_{{\mathsf {GAKE}}}, {\mathsf {Init}}, {\mathsf {Res}}, {\mathsf {Der}})\) by combining a \(\mathsf {StCorrCMA}\)-secure signature scheme \({\mathsf {SIG}}= ({\mathsf {Gen}}, {\mathsf {Sign}}, {\mathsf {Ver}})\), an \(\mathsf {OW\text{ }G\text{ }HV}\)-secure group key exchange protocol \({\mathsf {GKE}}= ({\mathsf {Init}}', {\mathsf {Res}}', {\mathsf {Der}}')\), and a random oracle \(\mathsf {H}\). The construction is given in Fig. 19.
Theorem 3
For every adversary \(\mathcal {A}\) that breaks the \((t,\varepsilon , \mu , S, Q_\mathsf {H}, {Q}_{\textsc {Cor}})\text{ }\mathsf {IND\text{ }G\text{ }FS}\) security of a protocol \({\mathsf {GAKE}}\) constructed as in Fig. 19, we can construct an adversary \(\mathcal {B}\) that breaks the \((t',\varepsilon ', \mu , {Q}_{s},{Q}_\mathsf {H}, {Q}_{\textsc {Cor}}') \text{ }\mathsf {StCorrCMA}\) security of the underlying signature scheme \({\mathsf {SIG}}\) with \(\alpha \) bits of key min-entropy, or breaks the \((t'',\varepsilon '', \mu , S', Q_{V}) \text{ }\mathsf {OW\text{ }G\text{ }HV}\) security of the underlying group key exchange protocol \({\mathsf {GKE}}\) with \(\beta \) bits of message min-entropy, such that
Proof
We will prove this by using the following hybrid games, which are illustrated in Fig. 20.
Game \(G_0\): This is the original \(\mathsf {IND\text{ }G\text{ }FS}\) game for the protocol \({\mathsf {GAKE}}\). We assume that all long-term keys and all messages generated by \({\mathsf {Init}}\) and \({\mathsf {Res}}\) are distinct; the security game aborts if a collision happens. Using the fact that \({\mathsf {SIG}}\) has \(\alpha \) bits of key min-entropy and \({\mathsf {GKE}}\) has \(\beta \) bits of message min-entropy, a collision in the keys happens with probability at most \(\mu ^2/2^{\alpha +1}\), and a collision in the messages happens with probability at most \(S^2/2^{\beta +1}\). Here, \(\mu \) is the number of users and \(S\) is the number of sessions. Thus, we have:
Game \(G_1\): In this game, \(\textsc {Session}_\mathsf {R}\) and \(\textsc {Der}\) abort upon receiving a session id and a message set that does not correspond to a previously broadcast message set, while all signatures with respect to the non-corrupted parties in the group are valid. (Thus, in \(G_1\) all processed messages are honestly generated via the given oracles; however, there may still be non-matching sessions.) This step excludes active attacks in which the adversary creates its own messages. The change goes unnoticed by the adversary, since noticing it would require forging at least one valid signature against the underlying \(\mathsf {StCorrCMA}\)-secure signature scheme. We will give a formal proof of the indistinguishability of \(G_0\) and \(G_1\) in Lemma 3. We denote the abort event by \(\mathsf {AbortGAKE}:= \mathsf {AbortSessR}\cup \mathsf {AbortDer}\), where \(\mathsf {AbortSessR}\) and \(\mathsf {AbortDer}\) correspond to the abort events in line 29 and line 47 of Fig. 20, respectively. Since the only difference between \(G_0\) and \(G_1\) is the abort event \(\mathsf {AbortGAKE}\), using Lemma 3 we have
Game \(G_2\): Intuitively, since in \(G_1\) an adversary \(\mathcal {A}\) is not allowed to create its own messages for active attacks against the protocol, \(\mathcal {A}\) can only observe the protocol execution or forward the honestly generated messages in a different order. We will use the \(\mathsf {OW\text{ }G\text{ }HV}\) security to tightly argue the indistinguishability of a real session key and a uniformly random one. Formally, in \(G_2\) the \(\textsc {Test}\) oracle always returns a uniformly random key, independent of the bit b. Since we already assumed in \(G_0\) that all messages generated by \({\mathsf {Init}}\) and \({\mathsf {Res}}\) are distinct, and we are in the random oracle model, the only way for \(\mathcal {A}\) to compute a valid session key \( K \) is to query \(\mathsf {H}\) on the correct input. Therefore, by Lemma 4 we can reduce the difference between \(G_2\) and \(G_1\) to the \(\mathsf {OW\text{ }G\text{ }HV}\) security of \({\mathsf {GKE}}\), and we have
In summary, we have
\(\square \)
Lemma 3
For every adversary \(\mathcal {A}\) running in time \(t_{0,1}\) that distinguishes \(G_0\) from \(G_1\) with probability \(\varepsilon _{0,1}\), we can construct an adversary \(\mathcal {B}\) against the \((t', \varepsilon ', \mu , {Q}_\mathsf {H}, {Q}_{\textsc {Cor}}')\)-\(\mathsf {StCorrCMA}\) security of the underlying signature scheme \({\mathsf {SIG}}\), where
Proof
The only difference between \(G_0\) and \(G_1\) is given by the abort events \(\mathsf {AbortSessR}\) and \(\mathsf {AbortDer}\). To bound the probability of these, we build an adversary \(\mathcal {B}\) against the \(\mathsf {StCorrCMA}\) security of the underlying signature scheme \({\mathsf {SIG}}\) as in Fig. 21. The adversary successfully generates a valid forgery if and only if \(\mathsf {AbortSessR}\) or \(\mathsf {AbortDer}\) happens.
More precisely, if \(\mathsf {AbortGAKE}\) is \({\textbf {true}}\), then the signatures in line 31 and in line 54 of Fig. 21 are valid forgeries against the \(\mathsf {StCorrCMA}\) security of \({\mathsf {SIG}}\). Here, we only prove the case where \(\mathsf {AbortSessR}= {\textbf {true}}\); the case where \(\mathsf {AbortDer}= {\textbf {true}}\) follows the same idea. Given that \(\mathsf {AbortSessR}\) happens, we have that for all \(j \in \mathcal {P}\), \({\mathsf {Ver}}(\textsf {pk}_j, m_j, \sigma _j) = 1\) and \(\text {peerCorrupted}[\text {sID}] = {\textbf {false}}\). Moreover, due to the criterion in line 30, there exists \(j^*\in \mathcal {P}\) such that \((j^*, (m_{j^*}, \sigma _{j^*}))\) has never been output by \(\textsc {Session}_\mathsf {I}\). Therefore, \((m_{j^*}, \sigma _{j^*})\) is a valid forgery against the \(\mathsf {StCorrCMA}\) security of \({\mathsf {SIG}}\), and we have
Similarly, we also have \(\mathrm{Pr}[\mathsf {AbortDer}] \le \varepsilon '\). Overall, we have
\(\square \)
Lemma 4
For every \(\mathsf {PPT}\) adversary \(\mathcal {A}\) running in time \(t_{1,2}\) that distinguishes \(G_1\) from \(G_2\) with probability \(\varepsilon _{1,2}\), we can construct an adversary \(\mathcal {B}\) against the \((t'', \varepsilon '', \mu , S', Q_{V})\)-\(\mathsf {OW\text{ }G\text{ }HV}\) security of the underlying group key exchange protocol, where
Proof
Notice that when \(b =1\), the \(\textsc {Test}\) oracle always returns a uniformly random key in both \(G_2\) and \(G_1\); therefore, the only difference between \(G_2\) and \(G_1\) occurs when \(b = 0\). Hence, we have \(\mathrm{Pr}[G_2^\mathcal {A}\Rightarrow 1 \mid b = 1] = \mathrm{Pr}[G_1^\mathcal {A}\Rightarrow 1 \mid b = 1] \), and
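Spelled out, since \(\mathrm{Pr}[b=0]=\mathrm{Pr}[b=1]=1/2\) and the two games coincide conditioned on \(b=1\), the difference collapses to the \(b=0\) case; the omitted display (Equation (15)) can be reconstructed as

$$\begin{aligned} \left| \mathrm{Pr}[G_1^\mathcal {A}\Rightarrow 1] - \mathrm{Pr}[G_2^\mathcal {A}\Rightarrow 1] \right| = \frac{1}{2}\left| \mathrm{Pr}[G_1^\mathcal {A}\Rightarrow 1 \mid b = 0] - \mathrm{Pr}[G_2^\mathcal {A}\Rightarrow 1 \mid b = 0] \right| . \end{aligned}$$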
To bound Equation (15), we construct an adversary \(\mathcal {B}\) that breaks the \((t'', \varepsilon '', \mu , S', Q_{V}) \text{ }\mathsf {OW\text{ }G\text{ }HV}\) security of the underlying \({\mathsf {GKE}}\) as in Fig. 22.
Firstly, we remark that the outputs of \(\textsc {Session}_\mathsf {I}'\), \(\textsc {Session}_\mathsf {R}'\) and \(\textsc {Der}'\) are distributed identically to those in \(G_1\). For all sessions that have finished computing a key without making the game abort, all messages must have been honestly generated due to the abort conditions introduced in \(G_1\), although they may arrive in a different order and there may be non-matching sessions. Hence, \(\textsc {Session}_\mathsf {I}\), \(\textsc {Session}_\mathsf {R}\) and \(\textsc {Der}\) are perfectly simulated using \(\textsc {Session}_\mathsf {I}'\), \(\textsc {Session}_\mathsf {R}'\) and \(\textsc {Der}'\) of the \(\mathsf {OW\text{ }G\text{ }HV}\) game and the signing keys.
We note that the random oracle \(\mathsf {H}\) simulated by \(\mathcal {B}\) has the same output distribution as in \(G_1\). When \(b = 0\) and line 72 is executed, we obtain a valid attack \((\text {sID}, K _{}^*)\) against the \(\mathsf {OW\text{ }G\text{ }HV}\) security. In summary, we have
\(\square \)
References
M. Abdalla, M. Bellare, P. Rogaway, The oracle Diffie-Hellman assumptions and an analysis of DHIES, in Naccache, D. (ed.) CT-RSA 2001. LNCS, vol. 2020 (Springer, Heidelberg, 2001), pp. 143–158
C. Bader, D. Hofheinz, T. Jager, E. Kiltz, Y. Li, Tightly-secure authenticated key exchange, in Dodis, Y., Nielsen, J.B. (eds.) TCC 2015, Part I. LNCS, vol. 9014 (Springer, Heidelberg, 2015), pp. 629–658
M. Bellare, W. Dai, The multi-base discrete logarithm problem: Tight reductions and non-rewinding proofs for Schnorr identification and signatures, in Bhargavan, K., Oswald, E., Prabhakaran, M. (eds.) INDOCRYPT 2020. LNCS, vol. 12578 (Springer, Heidelberg, 2020), pp. 529–552
M. Bellare, P. Rogaway, Random oracles are practical: A paradigm for designing efficient protocols, in Denning, D.E., Pyle, R., Ganesan, R., Sandhu, R.S., Ashby, V. (eds.) ACM CCS 93 (ACM Press, 1993), pp. 62–73
M. Bellare, P. Rogaway, Entity authentication and key distribution, in Stinson, D.R. (ed.) CRYPTO'93. LNCS, vol. 773 (Springer, Heidelberg, 1994), pp. 232–249
M. Bellare, P. Rogaway, The security of triple encryption and a framework for code-based game-playing proofs, in Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004 (Springer, Heidelberg, 2006), pp. 409–426
F. Bergsma, T. Jager, J. Schwenk, One-round key exchange with strong security: An efficient and generic construction in the standard model, in Katz, J. (ed.) PKC 2015. LNCS, vol. 9020 (Springer, Heidelberg, 2015), pp. 477–494
D.J. Bernstein, N. Duif, T. Lange, P. Schwabe, B.-Y. Yang, High-speed high-security signatures, in Preneel, B., Takagi, T. (eds.) CHES 2011. LNCS, vol. 6917 (Springer, Heidelberg, 2011), pp. 124–142
E. Bresson, O. Chevassut, D. Pointcheval, Provably authenticated group Diffie-Hellman key exchange—the dynamic case, in Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248 (Springer, Heidelberg, 2001), pp. 290–309
E. Bresson, O. Chevassut, D. Pointcheval, Dynamic group Diffie-Hellman key exchange under standard assumptions, in Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332 (Springer, Heidelberg, 2002), pp. 321–336
E. Bresson, O. Chevassut, D. Pointcheval, J.-J. Quisquater, Provably authenticated group Diffie-Hellman key exchange, in Reiter, M.K., Samarati, P. (eds.) ACM CCS 2001 (ACM Press, 2001), pp. 255–264
M. Burmester, Y. Desmedt, A secure and efficient conference key distribution system (extended abstract), in Santis, A.D. (ed.) EUROCRYPT'94. LNCS, vol. 950 (Springer, Heidelberg, 1995), pp. 275–286
D. Cash, E. Kiltz, V. Shoup, The twin Diffie-Hellman problem and applications, in Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965 (Springer, Heidelberg, 2008), pp. 127–145
K. Cohn-Gordon, C. Cremers, K. Gjøsteen, H. Jacobsen, T. Jager, Highly efficient key exchange protocols with optimal tightness, in Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019, Part III. LNCS, vol. 11694 (Springer, Heidelberg, 2019), pp. 767–797
H. Davis, F. Günther, Tighter proofs for the SIGMA and TLS 1.3 key exchange protocols, in Sako, K., Tippenhauer, N.O. (eds.) ACNS 21, Part II. LNCS, vol. 12727 (Springer, Heidelberg, 2021), pp. 448–479
C. de Saint Guilhem, M. Fischlin, B. Warinschi, Authentication in key-exchange: Definitions, relations and composition, in Jia, L., Küsters, R. (eds.) CSF 2020 Computer Security Foundations Symposium (IEEE Computer Society Press, 2020), pp. 288–303
D. Diemert, K. Gellert, T. Jager, L. Lyu, More efficient digital signatures with tight multi-user security, in Garay, J. (ed.) PKC 2021, Part II. LNCS, vol. 12711 (Springer, Heidelberg, 2021), pp. 1–31
D. Diemert, T. Jager, On the tight security of TLS 1.3: Theoretically sound cryptographic parameters for real-world deployments, J. Cryptol. 34(3), 30 (2021)
W. Diffie, M.E. Hellman, New directions in cryptography, IEEE Trans. Inf. Theory 22(6), 644–654 (1976)
W. Diffie, P.C. van Oorschot, M.J. Wiener, Authentication and authenticated key exchanges, Designs Codes Cryptography 2(2), 107–125 (1992)
M. Fischlin, F. Günther, B. Schmidt, B. Warinschi, Key confirmation in key exchange: A formal treatment and implications for TLS 1.3, in 2016 IEEE Symposium on Security and Privacy (IEEE Computer Society Press, 2016), pp. 452–469
N. Fleischhacker, T. Jager, D. Schröder, On tight security proofs for Schnorr signatures, in Sarkar, P., Iwata, T. (eds.) ASIACRYPT 2014, Part I. LNCS, vol. 8873 (Springer, Heidelberg, 2014), pp. 512–531
S.D. Galbraith, J. Malone-Lee, N.P. Smart, Public key signatures in the multi-user setting, Inf. Process. Lett. 83(5), 263–266 (2002). https://doi.org/10.1016/S0020-0190(01)00338-6
K. Gjøsteen, T. Jager, Practical and tightly-secure digital signatures and authenticated key exchange, in Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018, Part II. LNCS, vol. 10992 (Springer, Heidelberg, 2018), pp. 95–125
M.C. Gorantla, C. Boyd, J.M. González Nieto, Modeling key compromise impersonation attacks on group key exchange protocols, in Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443 (Springer, Heidelberg, 2009), pp. 105–123
D. Harkins, D. Carrel, The internet key exchange (IKE). RFC 2409 (1998). https://www.ietf.org/rfc/rfc2409.txt
D. Hofheinz, E. Kiltz, The group of signed quadratic residues and applications, in Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677 (Springer, Heidelberg, 2009), pp. 637–653
T. Jager, E. Kiltz, D. Riepel, S. Schäge, Tightly-secure authenticated key exchange, revisited, in EUROCRYPT 2021 (2021). https://ia.cr/2020/1279
T. Jager, F. Kohlar, S. Schäge, J. Schwenk, On the security of TLS-DHE in the standard model, in Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417 (Springer, Heidelberg, 2012), pp. 273–293
T. Jager, F. Kohlar, S. Schäge, J. Schwenk, Authenticated confidential channel establishment and the security of TLS-DHE, J. Cryptol. 30(4), 1276–1324 (2017)
J. Katz, M. Yung, Scalable protocols for authenticated group key exchange, in Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729 (Springer, Heidelberg, 2003), pp. 110–125
E. Kiltz, D. Masny, J. Pan, Optimal security proofs for signatures from identification schemes, in Robshaw, M., Katz, J. (eds.) CRYPTO 2016, Part II. LNCS, vol. 9815 (Springer, Heidelberg, 2016), pp. 33–61
H. Krawczyk, SIGMA: The "SIGn-and-MAc" approach to authenticated Diffie-Hellman and its use in the IKE protocols, in Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729 (Springer, Heidelberg, 2003), pp. 400–425
B.A. LaMacchia, K. Lauter, A. Mityagin, Stronger security of authenticated key exchange, in Susilo, W., Liu, J.K., Mu, Y. (eds.) ProvSec 2007. LNCS, vol. 4784 (Springer, Heidelberg, 2007), pp. 1–16
Y. Li, S. Schäge, No-match attacks and robust partnering definitions: Defining trivial attacks for security protocols is not trivial, in Thuraisingham, B.M., Evans, D., Malkin, T., Xu, D. (eds.) ACM CCS 2017 (ACM Press, 2017), pp. 1343–1360
X. Liu, S. Liu, D. Gu, J. Weng, Two-pass authenticated key exchange with explicit authentication and tight security, in Moriai, S., Wang, H. (eds.) ASIACRYPT 2020, Part II. LNCS, vol. 12492 (Springer, Heidelberg, 2020), pp. 785–814
U.M. Maurer, Abstract models of computation in cryptography (invited paper), in Smart, N.P. (ed.) 10th IMA International Conference on Cryptography and Coding. LNCS, vol. 3796 (Springer, Heidelberg, 2005), pp. 1–12
J. Pan, C. Qian, M. Ringerud, Signed Diffie-Hellman key exchange with tight security, in Paterson, K.G. (ed.) CT-RSA 2021. LNCS, vol. 12704 (Springer, Heidelberg, 2021), pp. 201–226
J. Pan, M. Ringerud, Signatures with tight multi-user security from search assumptions, in Chen, L., Li, N., Liang, K., Schneider, S.A. (eds.) ESORICS 2020, Part II. LNCS, vol. 12309 (Springer, Heidelberg, 2020), pp. 485–504
PKCS #1: RSA cryptography standard. RSA Data Security, Inc. (1991)
B. Poettering, P. Rösler, J. Schwenk, D. Stebila, SoK: Game-based security models for group key exchange, in Paterson, K.G. (ed.) CT-RSA 2021. LNCS, vol. 12704 (Springer, Heidelberg, 2021), pp. 148–176
E. Rescorla, The Transport Layer Security (TLS) Protocol Version 1.3. RFC 8446 (Proposed Standard) (2018). https://tools.ietf.org/html/rfc8446
P. Rösler, C. Mainka, J. Schwenk, More is less: On the end-to-end security of group chats in Signal, WhatsApp, and Threema, in 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 415–429 (2018)
C.P. Schnorr, Efficient signature generation by smart cards, J. Cryptol. 4(3), 161–174 (1991)
V. Shoup, Lower bounds for discrete logarithms and related problems, in Fumy, W. (ed.) EUROCRYPT'97. LNCS, vol. 1233 (Springer, Heidelberg, 1997), pp. 256–266
Y. Xiao, R. Zhang, H. Ma, Tightly secure two-pass authenticated key exchange protocol in the CK model, in Jarecki, S. (ed.) CT-RSA 2020. LNCS, vol. 12006 (Springer, Heidelberg, 2020), pp. 171–198
Acknowledgements
We thank the anonymous reviewers of CT-RSA 2021 for their many insightful suggestions to improve our paper. We are also grateful to the anonymous reviewers of the Journal of Cryptology for their valuable comments to clarify our security model and make our security proofs more understandable. Parts of Pan's work were done while he was supported by the Research Council of Norway under Project No. 324235.
Funding
Open access funding provided by NTNU Norwegian University of Science and Technology (incl. St. Olavs Hospital, Trondheim University Hospital).
Communicated by Marc Fischlin
Appendices
A Security of Schnorr in the Generic Group Model
We show the \(\mathsf {StCorrCMA}\) security of Schnorr’s signature scheme in the generic group model (GGM) which has been formally stated in Theorem 1. This section also gives a proof of the theorem.
We proceed as follows: Firstly, we propose a variant of the \({\mathsf {IDLOG}}\) assumption [32], \({\mathsf {CorrIDLOG}}\), by introducing an additional corruption oracle. Secondly, by using a slightly different version of [32, Lemma 5.8], we prove that Schnorr’s signature is tightly \(\mathsf {StCorrCMA}\)secure based on the \({\mathsf {CorrIDLOG}}\) assumption. Finally, we prove the hardness of \({\mathsf {CorrIDLOG}}\).
Note that in [32] it has been proven that \({\mathsf {IDLOG}}\) tightly implies the multi-user security of \(\mathsf {Schnorr}\) without corruptions, which does not necessarily give us tight multi-user security with corruptions. However, our new \({\mathsf {CorrIDLOG}}\) assumption tightly implies the multi-user security of \(\mathsf {Schnorr}\) with corruptions. We believe that our \({\mathsf {CorrIDLOG}}\) assumption is of independent interest.
Let \(\mathsf {par}=(p, g , \mathbb {G})\) be a set of system parameters. The \({\mathsf {CorrIDLOG}}\) assumption is defined as follows:
Definition 7
(\({\mathsf {CorrIDLOG}}\)) The \({\mathsf {CorrIDLOG}}\) problem is \((t, \varepsilon , \mu , {Q}_{\textsc {Ch}}, Q_{\textsc {Dl}})\)-hard in \(\mathsf {par}\), if for all adversaries \(\mathcal {A}\) interacting with \(\mu \) users, running in time at most t and making at most \({Q}_{\textsc {Ch}}\) queries to the challenge oracle \(\textsc {Ch}\) and \(Q_{\textsc {Dl}}\) queries to the corruption oracle \(\textsc {Dl}\), we have:
where on the \(j\)th challenge query \(\textsc {Ch}(R_j \in \mathbb {G})\) (for \(j \in [{Q}_{\textsc {Ch}}]\)), the oracle \(\textsc {Ch}\) returns \(h_j \leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p\) to \(\mathcal {A}\), and on a corruption query \(\textsc {Dl}(i)\) for \(i \in [\mu ]\), \(\textsc {Dl}\) returns \(x_i\) to \(\mathcal {A}\) and adds \(i\) to the corruption list \(\mathcal {L}_{\mathcal {C}}\) (namely, \(\mathcal {L}_{\mathcal {C}}:=\mathcal {L}_{\mathcal {C}}\cup \{i \}\)).
Before proving the hardness of \({\mathsf {CorrIDLOG}}\) in the GGM, Lemma 5 shows that \({\mathsf {CorrIDLOG}}\) tightly implies the \(\mathsf {StCorrCMA}\) security of \(\mathsf {Schnorr}\) in the random oracle model (without using the GGM). Note that this lemma does not contradict the impossibility result of [22], since our assumption is interactive. In fact, following the framework in [32, Section 3], one can easily prove that the standard \(\mathsf {DLOG}\) assumption non-tightly implies the \({\mathsf {CorrIDLOG}}\) assumption in the standard model.
Lemma 5
(\({\mathsf {CorrIDLOG}}\xrightarrow {{ \text {tight}}}\mathsf {StCorrCMA}\)) If \({\mathsf {CorrIDLOG}}\) is \((t, \varepsilon , \mu , {Q}_{\textsc {Ch}}, Q_{\textsc {Dl}})\)-hard in \(\mathsf {par}\), then Schnorr's signature scheme \(\mathsf {Schnorr}\) is \((t', \varepsilon ', \mu , {Q}_{s},Q_{\textsc {Dl}}, {Q}_\mathsf {H})\)-\(\mathsf {StCorrCMA}\)-secure in the programmable random oracle model, where
Proof
This proof is straightforward by [32], but for completeness we prove it in detail here. Let \(\mathcal {A}\) be an adversary against \(\mathsf {StCorrCMA}\) security. We construct \(\mathcal {B}\) against \({\mathsf {CorrIDLOG}}\) (Fig. 23).
Firstly, we argue that \(\mathcal {B}\) perfectly simulates the experiment \(\mathsf {StCorrCMA}\) unless \(\mathcal {B}\) aborts in line 14, namely \(( R, m )\) collides with a previous hash query. Since \(R\) is distributed uniformly at random, by the union bound the probability that \(\mathcal {B}\) aborts in line 14 is bounded by \({Q}_\mathsf {H}{Q}_{s}/p\).
Secondly, we show that \(\mathcal {B}\)'s forgery \(s^*\) is a valid \({\mathsf {CorrIDLOG}}\) forgery. Given \((h^*,s^*)\) from \(\mathcal {A}\), we have \(R^*= g^{s^*} \cdot X_{i^*}^{-h^*}\) and \(\textsc {Hash}(R^*, m ^*)=h^*\). We make our argument in the following steps:
1. With high probability, there exists \((( R^*, m ^*), h^*) \in \mathcal {L}_{\mathsf {H}}\). Otherwise, \(\mathcal {A}\) was able to guess the hash value of \(( R^*, m ^*)\) without querying \(\textsc {Hash}\). This event happens with probability at most 1/p.
2. If \((( R^*, m ^*), h^*)\) was added to \(\mathcal {L}_{\mathsf {H}}\) by the signing oracle \(\textsc {Sign}\), then \(\textsc {Sign}\) must have chosen an \(s'\) such that \(g^{s'} \cdot X_{i^*}^{-h^*}= R^*= g^{s^*} \cdot X_{i^*}^{-h^*} \), which means \(s'=s^*\). However, if \((h^*,s^*)\) from \(\mathcal {A}\) is a valid \(\mathsf {StCorrCMA}\) forgery, then \(s' = s^*\) cannot happen.
3. Now \((( R^*, m ^*), h^*)\) can only have been added to \(\mathcal {L}_{\mathsf {H}}\) by the hashing oracle \(\textsc {Hash}\). This is equivalent to \(R^*= R_j\) and \(h^*= h_j\) for some \(j\in [{Q}_{\textsc {Ch}}]\). Thus \( g^{s^*}=R^*\cdot X_{i^*}^{h^*}=R_j\cdot X_{i^*}^{h_j}, \) and \(s^*\) is a valid attack in the \({\mathsf {CorrIDLOG}}\) security game.
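The verification equation \(R = g^{s} \cdot X^{-h}\) used throughout this argument can be exercised with a minimal Schnorr sketch (toy parameters and helper names are ours, not from the paper, and the group is far too small to be secure):

```python
# Minimal Schnorr signature sketch over a toy subgroup (illustrative
# parameters, NOT secure); the random oracle is instantiated with SHA-256.
import hashlib, random

p, q, g = 23, 11, 4                          # toy parameters: ord(g) = q mod p

def H(R, m):
    """Hash (R, m) into Z_q, standing in for the random oracle."""
    d = hashlib.sha256(f"{R}|{m}".encode()).digest()
    return int.from_bytes(d, "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                   # (sk, pk = g^x)

def sign(x, m):
    r = random.randrange(1, q)
    R = pow(g, r, p)
    h = H(R, m)
    return h, (r + h * x) % q                # s = r + h*x mod q

def verify(X, m, sig):
    h, s = sig
    R = pow(g, s, p) * pow(X, (-h) % q, p) % p   # R = g^s * X^{-h}
    return H(R, m) == h

x, X = keygen()
assert verify(X, "msg", sign(x, "msg"))
```

Recomputing \(R\) as \(g^s X^{-h}\) recovers the commitment \(g^r\), which is exactly the relation exploited in steps 2 and 3 above.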
This concludes the proof of Lemma 5. \(\square \)
Combining Lemma 5 and Lemma 6 (namely, the generic hardness of \({\mathsf {CorrIDLOG}}\)), we can conclude the \(\mathsf {StCorrCMA}\) security of Schnorr’s signature in Theorem 1.
1.1 A.1 Generic Hardness of \({\mathsf {CorrIDLOG}}\)
Generic Group Model. In the \(\mathrm {GGM}\) for prime-order groups \(\mathbb {G}\) [37, 45], operations in \(\mathbb {G}\) can only be carried out via black-box access to the group oracle \(\textsc {O}_{\mathbb {G}}(\cdot , \cdot )\), and adversaries only get non-random handles to the group elements. Since the groups \((\mathbb {G}, \cdot )\) and \((\mathbb {Z}_p, +)\) are isomorphic, every element of \(\mathbb {G}\) is internally identified with a \(\mathbb {Z}_p\) element. To consistently simulate the group operations, the simulator internally maintains a list \(\mathcal {L}_{\mathbb {G}}\) and a counter \(\text {cnt}\) that keeps track of the number of entries in \(\mathcal {L}_{\mathbb {G}}\). \(\mathcal {L}_{\mathbb {G}}\) contains entries of the form \((z(\mathbf {x}), C _{z})\), where \(z(\mathbf {x}) \in \mathbb {Z}_p[\mathbf {x}]\) represents a group element and the positive integer \( C _{z}\) is its counter.
We assume \(\mathcal {A}\) can make at most \({Q}_{\mathbb {G}}\) queries to \(\textsc {O}_{\mathbb {G}}\).
Lemma 6
For any adversary \(\mathcal {A}\) that \((t, \varepsilon , \mu , {Q}_{\textsc {Ch}}, Q_{\textsc {Dl}})\)-breaks the \({\mathsf {CorrIDLOG}}\) assumption, we have
We recall the Schwartz–Zippel Lemma that is useful for proving Lemma 6.
Lemma 7
(Schwartz–Zippel Lemma) Let \(f(x_1, \ldots , x_n)\) be a nonzero multivariate polynomial of maximum degree \(d \ge 0\) over a field \(\mathbb {F}\). Let \(\mathcal {S}\) be a finite subset of \(\mathbb {F}\), and let \(a_1, \dots , a_n\) be chosen uniformly at random from \(\mathcal {S}\). Then, we have \(\mathrm{Pr}[f(a_1, \ldots , a_n) = 0] \le d/\left| \mathcal {S}\right| \).
Proof of Lemma 6
\(\mathcal {A}\) is an adversary against the \({\mathsf {CorrIDLOG}}\) assumption, and \(\mathcal {B}\) is a simulator that simulates the \({\mathsf {CorrIDLOG}}\) security game in the \(\mathrm {GGM}\) and interacts with \(\mathcal {A}\). The simulation is described in Fig. 24.
\(\mathcal {B}\) simulates the \({\mathsf {CorrIDLOG}}\) game in a symbolic way using degree-1 polynomials. The internal list \(\mathcal {L}_{\mathbb {G}}\) stores entries of the form \((f(\mathbf {x}), C _{f(\mathbf {x})})\), where \(f(\mathbf {x}) \in \mathbb {Z}_p[x_1,\ldots , x_\mu ]\) is a degree-1 polynomial and \( C _{f(\mathbf {x})} \in \mathbb {N}\) identifies which entry it is. \(\mathcal {B}\) also keeps track of the size of \(\mathcal {L}_{\mathbb {G}}\) with \(\text {cnt}\). After \(\mathcal {A}\) outputs an attack, the variables \((x_1, \ldots , x_{\mu })\) are assigned a value \((a_1, \ldots , a_\mu ) \leftarrow _{\scriptscriptstyle \$}\mathbb {Z}_p^\mu \) chosen uniformly at random.
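The symbolic bookkeeping can be sketched as follows. This is a simplified model of the simulation (names and toy parameters are ours, not from Fig. 24): a group element is a degree-1 polynomial over \(\mathbb {Z}_p\) stored as a coefficient tuple, and a group operation on handles becomes addition of polynomials.

```python
# Sketch of the symbolic bookkeeping in the GGM proof: a group element is a
# degree-1 polynomial c0 + c1*x1 + ... + cmu*xmu over Z_p, stored as a
# coefficient tuple; multiplying group elements adds the polynomials.
p, mu = 101, 3                        # toy modulus and number of users

def elem(c0, *coeffs):
    """Build the polynomial c0 + sum_i coeffs[i]*x_i as a coefficient tuple."""
    c = [c0] + list(coeffs) + [0] * (mu - len(coeffs))
    return tuple(v % p for v in c)

def op(z1, z2):
    """Group operation on handles: add the exponent polynomials."""
    return tuple((a + b) % p for a, b in zip(z1, z2))

def evaluate(z, a):
    """Assign x_i := a_i at the end of the game."""
    return (z[0] + sum(c * ai for c, ai in zip(z[1:], a))) % p

gen  = elem(1)                        # g       ~ constant polynomial 1
X1   = elem(0, 1)                     # g^{x_1} ~ polynomial x_1
prod = op(gen, X1)                    # g * g^{x_1} ~ 1 + x_1
assert prod == elem(1, 1)
# Two distinct degree-1 polynomials collide on a random point with
# probability at most 1/p (Schwartz-Zippel), the source of the GGM loss.
```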
We remark that \(\mathcal {B}\) perfectly simulates the \({\mathsf {CorrIDLOG}}\) security game in the \(\mathrm {GGM}\) if none of the distinct polynomials \(z_i\) and \(z_j\) stored in \( \mathcal {L}_{\mathbb {G}}\) collide when evaluated at the random vector \(\mathbf {a}\) over \(\mathbb {Z}_p\). Applying the union bound over all pairs of distinct polynomials in \(\mathcal {L}_{\mathbb {G}}\), we have:
where the factor \(\frac{1}{p}\) comes from Lemma 7 and the facts that \(\mathcal {L}_{\mathbb {G}}\) contains only degree-1 polynomials and that \((a_1, \ldots , a_\mu )\) is chosen uniformly at random from \(\mathbb {Z}_p^\mu \).
We give an upper bound of the success probability of \(\mathcal {A}\) as follows:
The second term \(\frac{(\mu - Q_{\textsc {Dl}})}{p}\) comes from the fact that for each \(i^* \in [\mu ]\setminus \mathcal {L}_{\mathcal {C}}\), \(\mathcal {A}\) has no information about \(x_{i^*}\). Thus, for a fixed \(i^* \in [\mu ]\setminus \mathcal {L}_{\mathcal {C}}\), we get that \(x_{i^*} h^*+ r^*(\mathbf {x}) - s^*\) is a degree-1 polynomial, and by Lemma 7
By the union bound, we have
\(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Pan, J., Qian, C. & Ringerud, M. Signed (Group) Diffie–Hellman Key Exchange with Tight Security. J Cryptol 35, 26 (2022). https://doi.org/10.1007/s0014502209438y