1 Introduction

Besides encryption systems and digital signatures, key exchange protocols are among the most important building blocks of cryptography. It is well known that the famous Diffie-Hellman (DH) protocol [15] only provides security against passive attackers. This is why, since its introduction in 1976, many works have focused on upgrading the DH protocol to also shield it against active attacks while keeping the overall efficiency as close as possible to that of the original protocol. An important step in that direction is the family of authenticated DH-based protocols like \(\textsf {MQV}\) [25] and its successor \(\textsf {HMQV}\) [24]. As in the basic unauthenticated DH protocol, each message consists of only a single group element and messages can be sent in any order. An important feature of these DH-based protocols is that no long-term secret is required when computing the protocol messages; the long-term secrets only come into play when the session key is derived. This generally makes the computation of protocol messages very efficient. Protocols that compute messages in this way (without the use of the long-term secrets) are called “implicitly-authenticated” protocols [24]. Unfortunately, in 2005 Krawczyk presented an attack showing that implicitly-authenticated protocols inherently cannot provide forward secrecy against active attackers [24] (see Appendix E for a summary). Only if the attacker remains passive with respect to the test session can implicitly-authenticated protocols provide perfect forward secrecy. This passive form of PFS is commonly called weak PFS. We stress that weak forward secrecy is not a satisfying definition of security in practice (see Appendix A for a brief example of such a situation). Ultimately, there is no reason to assume that an otherwise unrestricted adversary (with respect to network control) will simply refrain from using its full power.
Arguably, weak forward secrecy has rather been defined to capture what protocols like \(\textsf {HMQV}\) can achieve. This is why Krawczyk proposes an extension of \(\textsf {HMQV}\), termed \(\textsf {HMQV-C}\), which comprises three message flows (the second flow now also consisting of more than 160 bits) and adds explicit key confirmation to the protocol. This guarantees full-PFS security but decreases the protocol's overall efficiency.

Fig. 1

Overview of \(\textsf {TOPAS}\). The key generation center maintains public parameters \(\textit{mpk}\) containing \(g_1,g_2,h_2,g_2^z,h_2^{z}\), prime p, a description of the pairing e, and descriptions of two hash functions \(H:\{0,1\}^*\rightarrow G_1\) and \(H':\{0,1\}^*\rightarrow \{0,1\}^*\). These parameters are available to all parties. We also assume that the identities of all communication partners are publicly available. The master secret \(\textit{msk}\) consists of z and is used by the key generation center to derive the user secret keys as \(\textit{sk}_i=(H(\textit{id}_i))^{1/z}\). \(K_A\) (resp. \(K_B\)) is the session key computed by Alice (resp. Bob). The pairing operations in the denominator are message-independent and can be pre-computed (in times of low workload) and stored for later use. If Alice also pre-computes \(a=g_1^x\textit{sk}_A\), \((g_2^z)^x,(h_2^{z})^x\) and \(e(H(\textit{id}_B),g_2)^{-x},e(H(\textit{id}_B),h_2)^{-x}\) the computation of \(k,k'\) will require two pairing operations and two multiplications in \(G_T\) per key exchange. Messages can be sent in any order. Without loss of generality we assume that lexically \(\textit{id}_A\le \textit{id}_B\). (This is helpful in case the chronological order of the messages is otherwise unclear.)
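The algebra behind the matching session keys can be sanity-checked in a toy model in which every group element is represented by its discrete logarithm, so that the pairing becomes multiplication of exponents mod p. The key-derivation formula used below, \(k=e(b,g_2^z)^x\cdot e(H(\textit{id}_B),g_2)^{-x}\) (and symmetrically for Bob; the \(k'\) part with \(h_2\) is analogous and omitted), is reconstructed from the pre-computation hints in the caption and should be treated as an assumption of this sketch; the simulation is of course insecure and only verifies the algebra, not the protocol.

```python
import random

p = 1_000_003  # toy prime group order; real parameters use a ~160-bit p

# Toy model: an element g1^a of G1 (or g2^a of G2, e(g1,g2)^a of GT)
# is represented by its exponent a mod p, so the pairing
# e(g1^a, g2^b) = e(g1,g2)^(a*b) becomes multiplication of exponents.
def pair(a, b):
    return a * b % p

z = random.randrange(1, p)                 # msk of the key generation center
inv_z = pow(z, -1, p)

hA, hB = random.randrange(1, p), random.randrange(1, p)  # H(id_A), H(id_B)
skA, skB = hA * inv_z % p, hB * inv_z % p  # sk_i = H(id_i)^(1/z)

x, y = random.randrange(1, p), random.randrange(1, p)    # ephemeral secrets
a = (x + skA) % p                          # Alice's message g1^x * sk_A
b = (y + skB) % p                          # Bob's message   g1^y * sk_B

# Alice: k = e(b, g2^z)^x * e(H(id_B), g2)^(-x)   (reconstructed formula)
kA = (pair(b, z) * x - pair(hB, 1) * x) % p
# Bob:   k = e(a, g2^z)^y * e(H(id_A), g2)^(-y)
kB = (pair(a, z) * y - pair(hA, 1) * y) % p

assert kA == kB == x * y * z % p           # both sides derive e(g1,g2)^(xyz)
```

The cancellation works because \(e(\textit{sk}_B,g_2^z)=e(H(\textit{id}_B)^{1/z},g_2^z)=e(H(\textit{id}_B),g_2)\), so the long-term component of the partner's message is stripped away by the pre-computed denominator.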

We stress that (full) perfect forward secrecy is an important security property for key exchange protocols and that it is naturally well-supported by the original, unauthenticated Diffie-Hellman protocol. As pointed out in [20], the support of PFS is an important advantage over simple, public-key based session key transport and the main reason for the prevalence of DH-like protocols in protocol suites like SSH, IPsec, and TLS.

The only two-message protocol we are aware of that provides truly satisfactory security guarantees against active attackers while maintaining high efficiency is the modified Okamoto–Tanaka (\(\textsf {mOT}\)) protocol by Gennaro, Krawczyk, and Rabin (GKR) [20] (depicted in Fig. 4 of Appendix C). Basically, \(\textsf {mOT}\) is an enhanced variant of the classical Okamoto–Tanaka protocol [26] from 1989 that introduces additional hashing operations to protect it against several practical attacks and allows a rigorous proof of security. Like the original Okamoto–Tanaka protocol, \(\textsf {mOT}\) is defined in groups of hidden order and its security relies on the RSA assumption. Unfortunately, group elements consist of at least 1024 bits, so that the overall number of transmitted bits for the two messages is 2048, which is much more than what is possible with the basic DH protocol and protocols like \(\textsf {HMQV}\) when defined over prime-order elliptic curves. However, the protocol has very good computational efficiency.

It is considered an important open problem to design a protocol with full security (including full PFS) against active attackers and optimal communication complexity, i.e. where each message consists of only about 160 bits and there are only two messages in total. This is of course optimal, since the birthday bound requires messages to be at least 160 bits for 80-bit security.

Fig. 2

Overview of \(\textsf {TOPAS+}\). The key generation center maintains public parameters \(\textit{mpk}\) containing \(g_1,g_2,g_2^z,p\), a description of the pairing e, and descriptions of two hash functions \(H:\{0,1\}^*\rightarrow G_1\) and \(H':\{0,1\}^*\rightarrow \{0,1\}^*\). These parameters are available to all parties. The master secret \(\textit{msk}\) consists of z and is used by the key generation center to derive the user secret keys as \(\textit{sk}_i=(H(\textit{id}_i))^{1/z}\)

Contribution. As our main result, we present \(\textsf {TOPAS}\) (short for Transmission Optimal Protocol with Active Security), the first two-message key exchange protocol that provides full perfect forward secrecy and optimal communication complexity (Fig. 1). To achieve this, the design of \(\textsf {TOPAS}\) relies on new ideas and techniques. Key indistinguishability, security against key-compromise impersonation (KCI) attacks, and security against reflection attacks are proven under generalizations of the Computational Bilinear Diffie-Hellman Inversion assumption. At the same time, \(\textsf {TOPAS}\) is weakly PFS-secure under the Computational Bilinear Diffie-Hellman assumption. In Appendix D, we show that all our assumptions are concrete instantiations of the Uber-assumption introduced by Boyen in 2008 and therefore inherit its security in the generic bilinear group model [8]. We stress that for none of our assumptions does the input size grow with the number of adversarial queries (i.e. they do not constitute so-called q-type assumptions). Full-PFS security is shown under two new knowledge-type (or extraction-type) assumptions that are related to the difficulty of inverting bilinear pairings. (Traditional knowledge-type assumptions are usually related to the difficulty of inverting the modular exponentiation function, i.e. computing discrete logarithms.) Our protocol is defined over asymmetric (Type-3) bilinear groups and all our proofs rely on random oracles. In this work, we assume that the bilinear group supports the ideal ratio of 2b-bit group elements for b-bit security. For more conservative choices, the parameters have to be increased correspondingly. When instantiated with our aggressive parameter choices, each message thus consists of only about 160 bits for 80-bit security, resulting in the first key exchange protocol achieving full PFS with an overall communication complexity of only 320 bits.
Moreover, our protocol is identity-based, which allows two parties to securely agree on a common session key without a prior exchange of their certificates.

With respect to computational efficiency, we note that all protocol messages can be computed very efficiently, virtually as efficiently as in the original DH key exchange. In particular, each message consists of a single ephemeral DH public key that is additionally multiplied by the user's long-term secret key. No additional exponentiation is required. Thus the computational overhead compared to protocols like \(\textsf {HMQV}\) is minimal. However, session key derivation in our scheme is comparably slow. This is due to the application of a bilinear pairing in the key derivation process. We note that half of the required pairing operations need only be performed once per communication partner, as they depend solely on the identity of that partner. Finally, we remark that the size of the secret keys derived by the key generation center (KGC) is also only 160 bits under aggressive parameter choices and thus optimal as well.

We also present \(\textsf {TOPAS+}\) (Fig. 2), a slightly modified version of \(\textsf {TOPAS}\) whose security proofs additionally rely on a variant of the Strong Diffie-Hellman assumption [1]. Basically, we require that our generalizations of the Computational Bilinear Diffie-Hellman Inversion assumption remain valid even when the adversary is also given access to an oracle that checks, given input k and \(\overline{k}\), whether \(k^{z^2}=\overline{k}\) for z unknown to the adversary. The resulting protocol requires fewer public parameters and only half as many pairings to compute the session key. When pre-computing message-independent values offline, key derivation requires only a single pairing operation online. The cost of this modification is that we have to rely on interactive security assumptions even when proving key indistinguishability and security against KCI and reflection attacks.
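The checking oracle in the strengthened assumption is easy to picture. The sketch below models it in an insecure toy fashion, with \(G_T\) elements again represented by their exponents modulo a small prime; the concrete numbers are placeholders chosen for illustration only.

```python
# Toy sketch of the checking oracle from the strengthened assumption:
# given k and k_bar in GT, it answers whether k^(z^2) == k_bar for the
# hidden z.  GT elements are represented by exponents mod p (insecure).
p, z = 1_000_003, 424_242          # z is known only to the oracle

def check(k, k_bar):
    # exponent form: raising k to z^2 multiplies its exponent by z^2
    return k * z * z % p == k_bar % p

k = 777
assert check(k, k * z * z % p)         # a matching pair is accepted
assert not check(k, (k * z * z + 1) % p)  # anything else is rejected
```

The oracle never outputs z itself; it only answers yes/no queries, which is why the resulting assumption is interactive rather than a plain computational one.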

As our last result, we present a new protocol that provides full security against active attackers in groups of hidden order (Fig. 3). \(\textsf {FACTAS}\) is a variant of the \(\textsf {mOT}\) protocol where computations are performed in the group of signed quadratic residues [22]. In contrast to the RSA-based \(\textsf {mOT}\) protocol, \(\textsf {FACTAS}\) features security reductions to the factoring problem in all proofs (except for full PFS). At the same time, the computation of the session key is slightly more efficient than in [20].

Fig. 3

Overview of \(\textsf {FACTAS}\). The key generation center maintains public parameters \(\textit{mpk}\) containing the safe RSA modulus \(N=p_1p_2\), \(g\in \mathcal {SQR}_N \), and descriptions of two hash functions \(H:\{0,1\}^*\rightarrow \mathcal {SQR}_N \) and \(H':\{0,1\}^*\rightarrow \{0,1\}^*\). These parameters are available to all parties. The master secret \(\textit{msk}\) consists of the factorization of N. The set of exponents can be set to \(S=\left[ 1\ldots (N-1)/4\right] \). It can be shown that this distribution is indistinguishable from \(S=\left[ 1\ldots \left\lfloor \sqrt{N}/2\right\rfloor \right] \) under the factoring assumption [21]. In [20], the authors recommend a more aggressive choice of S with \(S=\left[ 1\ldots 2^{2\kappa }\right] \) and additionally assume that this distribution is indistinguishable from the previous ones. The master secret \(\textit{msk}\) is used by the key generation center to derive the user secret keys as \(\textit{sk}_i=(H(\textit{id}_i))^{\underline{1/2}}\), where \(g^{\underline{a}}=g^{\underline{a} \bmod |\mathcal {SQR}_N |}=|g^a \underline{\bmod } N|\) is the absolute value of \(g^a\, \underline{\bmod } N \in (-N/2,N/2)\) and elements in \(\mathbb {Z}^*_N\) are represented as signed integers in the symmetric interval \((-N/2,N/2)\)
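The signed-quadratic-residue representation and the KGC's derivation of \(\textit{sk}_i=(H(\textit{id}_i))^{\underline{1/2}}\) can be illustrated with tiny, insecure parameters. The hash value below is a hypothetical stand-in for \(H(\textit{id})\); the sketch only checks that the absolute-value representative behaves as described in the caption.

```python
# Toy demo (insecure parameters) of the signed-quadratic-residue
# representation: elements of Z*_N are represented as signed integers
# in (-N/2, N/2), and SQR_N works with their absolute values.
p1, p2 = 23, 47        # safe primes: 23 = 2*11 + 1, 47 = 2*23 + 1
N = p1 * p2            # public modulus; msk is the factorization (p1, p2)

def absN(x):
    """|x mod N|: absolute value of the signed representative in (-N/2, N/2)."""
    r = x % N
    return r if r <= N // 2 else N - r

def sqrt_sqr(a):
    """Square root in SQR_N, computed with msk (both primes are 3 mod 4)."""
    if pow(a % p1, (p1 - 1) // 2, p1) != 1:
        a = N - a      # of {a, -a}, exactly one is a quadratic residue mod N
    r1 = pow(a, (p1 + 1) // 4, p1)          # root mod p1
    r2 = pow(a, (p2 + 1) // 4, p2)          # root mod p2
    return (r1 + p1 * ((r2 - r1) * pow(p1, -1, p2) % p2)) % N  # CRT combine

h = absN(pow(1234, 2, N))   # hypothetical stand-in for H(id), mapped into SQR_N
sk = absN(sqrt_sqr(h))      # KGC derives sk = H(id)^(1/2) in signed-QR form
assert absN(sk * sk) == h   # squaring the key recovers |H(id)|
assert 0 < sk <= N // 2     # sk lies in the representative range
```

Note that taking absolute values may replace a value by its negative mod N; since \(-1\) is a non-residue modulo a product of safe primes, the square-root routine first normalizes to whichever of \(\{a,-a\}\) is a genuine quadratic residue.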

We note that, due to the application of the bilinear pairing in \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\), \(\textsf {mOT}\) and \(\textsf {FACTAS}\) have a computationally more efficient key derivation. Concrete numbers can, for example, be obtained from [28], which compares the properties of a variety of signature schemes on two platforms. (For more details on the exact setup we refer to [28].) The computational costs of the key derivation of \(\textsf {mOT}\) are dominated by a variable-base exponentiation with (large) exponents. This is equivalent to the costs incurred by RSA-1024 signing. In \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\), the dominant term in the key derivation is the application of the bilinear pairing. The costs are thus comparable to those of BLS signature verification. The comparison given in [28] suggests that the \(\textsf {TOPAS+}\) protocol has key derivation times roughly 3–4 times slower than \(\textsf {mOT}\).

As mentioned before, the identity-based nature of our protocols removes the need to exchange additional information like certificates between previously unknown communication partners, in contrast to PKI-based protocols like, for example, \(\textsf {HMQV}\). This guarantees that in \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\) the size of each message indeed never exceeds 160 bits. Also, key derivation is not slowed down by the additional verification of a received certificate. In general, identity-based protocols greatly pay off in highly dynamic settings where membership in some eligible group of parties is usually demonstrated via short-lived certificates that are renewed on a regular basis (for example after some weekly or monthly payment).

We also remark that although message computation involves the secret key, all our protocols provide the strong form of deniability defined in [14]. This means that Bob or any other party cannot convince a third party that he once talked to Alice (provided no additional side information is available to Bob that proves this fact in another way). This is a valuable privacy feature of our protocols that makes them suitable for implementing “off-the-record” communication over (insecure) digital networks. We remark that, as with forward secrecy, the basic unauthenticated Diffie-Hellman protocol naturally supports this strong form of deniability (simply because the session key relies entirely on ephemeral parameters). On the other hand, protocols based on digital signatures (like signed Diffie-Hellman) do not provide such deniability features.

Finally, we note that our proofs of \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\) heavily exploit the programmability of the random oracle model. Using a separation technique that was introduced by Fischlin and Fleischhacker [18] and applied to identity-based non-interactive key exchange by Chen, Huang, and Zhang [11], we can show that, in some sense, the programmability of the random oracle model is actually necessary for our reductions. More concretely, we show in Sect. 6 that, under a one-more-type security assumption, the programmability of the random oracle model is necessary for all security proofs that call the adversary once and in a black-box manner, which is the most common type of reduction in cryptography. Unfortunately, the results of [11] cannot directly be applied to \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\), so we have to rely on new ideas.

Interpretation. From a more theoretical point of view, we continue the work of GKR, who analyzed how far the boundaries (in terms of strong security properties) of DH-like protocols can be pushed while maintaining, as far as possible, the efficiency of the original DH protocol. All our protocols add to this body of work as they provide higher efficiency or rely on weaker security assumptions than \(\textsf {mOT}\). Among all other two-message protocols with comparable efficiency that we are aware of (except for the \(\textsf {mOT}\) protocol), ours are the only ones that provide full security against active attackers. In particular, they show for the first time that protocols in prime-order groups with full security against active adversaries can have optimal message size and optimal round complexity, and do not necessarily have to rely on costly modifications like the addition of key confirmation in \(\textsf {HMQV-C}\).

From a rather orthogonal perspective, our work is strongly motivated by the problem of finding the strongest security properties that two-message key exchange protocols with optimal communication complexity can achieve. We show for the first time that these protocols can protect against fully active adversaries.

We admit that the feature of full PFS security comes at the cost of relying on (highly) non-standard security assumptions. However, we stress that the existing two-message key exchange protocols with 160-bit messages are implicitly authenticated and therefore cannot provide full PFS under any security assumption.

Application scenarios Our protocols are particularly interesting for networks where the transmission of data is very expensive. Important examples are satellite-based communication networks and communication over battery-powered wireless (sensor) networks. Satellite-based communication is expensive since all communication flows over a shared medium with a fixed bandwidth. Using a higher proportion of that bandwidth for larger cryptographic values costs more and restricts other useful applications. In wireless sensor networks, node and network lifetimes are highly dependent on energy consumption, which is dominated by the cost of radio transmission [27, 30, 31]. Besides that, our protocol is of general interest since, due to its low message size, it is less wasteful in terms of network load. Message size has always been an important metric for designers of cryptographic protocols [24] and an important criterion for the selection of modern cryptographic algorithms as new standards. We note that since our protocols are identity-based, they ensure that the optimal bound of 160 bits per key exchange message is always met.

Security Model To prove security of our protocols we extend and strengthen the security model of \(\textsf {mOT}\) [20]. Indistinguishability of session keys from random keys is shown in a variant of the Canetti-Krawczyk (CK) model [9] that is restricted to two-message protocols. This variant was first introduced for the analysis of \(\textsf {HMQV}\) [24]. The \(\textsf {mOT}\) model further adapted the \(\textsf {HMQV}\) model to the identity-based setting. Our model captures security against reflection attacks, key-compromise impersonation attacks, and forward secrecy. There are two noteworthy ways in which our model differs from [20]. The first is that we provide an explicit \(\textsf {Register} \) query to register new users. The second is that we introduce a strengthened definition of weak PFS, called enhanced weak PFS, which allows the adversary to obtain the secret keys of all parties and the KGC at protocol start-up, i.e. even before the session key is computed. We note that, like the \(\textsf {mOT}\) protocol, our protocols require that intermediate values computed in the generation of the protocol messages and the derivation of the session key cannot be revealed by the adversary. Formally, we therefore do not consider state reveal attacks. Technically, this is enforced by requiring that the intermediate values remain in the same protected memory as the long-term key. This is similar, for example, to DSA signatures, where the random exponent used in the signing procedure must not be revealed to the adversary. Although this seems like a severe restriction, it is, in some sense, the best we can hope for when using two-message protocols. For completeness, in Appendix F we generalize a result by Krawczyk and show that any protocol which allows the adversary to reveal ephemeral keys cannot provide full PFS. This argument has already been given in [7].

Discussion In the literature, a well-known controversy developed as a reaction to Krawczyk's arguments [24] showing that implicitly-authenticated protocols inherently cannot provide forward secrecy against active attackers. In particular, there is no consensus on how this result should be interpreted [7]. First, as Cremers and Feltz [16] pointed out in 2014, the result is only applicable to stateless protocols. Second, beginning with Cremers and Feltz [12], some authors argue for a different way to deal with the inherent technical obstacles present in Krawczyk's argument. In a nutshell, instead of giving up on revealing ephemeral keys entirely, [12] proposes another, more fine-grained notion of full PFS. Intuitively, their notion only forbids certain combinations of queries, which guarantees not to run into the same technical obstacles. This means that although ephemeral key reveals are generally allowed, the combinations of queries that are most useful for the attacker (since they provide most of the information) are forbidden. In their approach, the combination of queries used in Krawczyk's impossibility result is one such forbidden combination. Accordingly, the model by Cremers and Feltz is theoretically stronger in this sense.

In this paper, we take a more pragmatic approach that, like Krawczyk's, largely favors a model that entirely does without ephemeral key reveals. We point out that, as Cremers and Feltz have shown, although revealing some ephemeral keys might not compromise security, exactly which ephemeral keys are unproblematic depends highly on the dynamic behavior of the attacker and cannot be planned in advance when setting up parties in real-world systems. From a practical standpoint, it thus seems useful to simply protect all ephemeral keys of the system sufficiently from being revealed. Now, if this protection mechanism is reliable enough to protect ephemeral keys from being revealed in any problematic combination of highly dynamic attacks, we can just as well assume that it rules out ephemeral key reveals entirely. As a benefit, we end up with a simpler security model that unambiguously communicates to developers that additional measures to protect ephemeral keys from being revealed need to be taken.

Related Work It is well known how to design two-message protocols that are secure against active adversaries. One way is to add to each (Diffie-Hellman) message a signature that authenticates the originator of that message and protects its integrity [29]. This approach has been generalized in [3, 12]. Another solution is to additionally exchange two encrypted nonces that, when combined, give rise to a symmetric key used to protect the integrity of the remaining messages (as in SKEME [23]). However, all these methods require sending additional information besides the Diffie-Hellman shares. For example, consider the signature scheme with the shortest known signatures, due to Boneh, Lynn, and Shacham (BLS), where each signature consists of roughly 160 bits. Using the signature-based method with BLS signatures, each party has to exchange considerably more than the optimal amount of bits, namely the key exchange messages plus the size of the signatures (which alone account for 160 bits). This does not even consider the costs for certificates that are required when two parties communicate for the first time. At the same time, since these protocols use digital signatures, they cannot provide the strong form of deniability given in [14]. Moreover, we remark that any two-message protocol that provides full PFS requires a security model that does not allow the adversary to reveal the ephemeral secrets, as formally shown in Appendix F. So, as the protocols in [3, 12] allow the adversary to reveal ephemeral keys, they cannot be shown to provide full PFS in the strong sense of [20].

Another interesting approach is to make practical two-message protocols like \(\textsf {MQV}\) and \(\textsf {HMQV}\) identity-based while keeping their overall efficiency. Most noteworthy, Fiore and Gennaro presented a protocol that features (computational) performance comparable to \(\textsf {MQV}\) [17]. However, since it is identity-based, there is no need for transmitting certificates as in the original \(\textsf {MQV}\) protocol. There are two drawbacks of their protocol. First, each message consists of two values, thus exceeding the optimum of 160 bits. Second, their protocol only provides weak PFS, not full PFS. Thus it lacks protection against fully active adversaries. As an advantage, their protocol offers very high computational efficiency.

Identity-based vs. PKI-based Protocols Finally, we would like to comment on the fact that our protocol is identity-based. Our main target is to obtain messages that are as short as possible while providing high security guarantees. It is interesting to note that when using protocols that provide enhanced weak PFS, the introduction of a KGC does not increase the vulnerability to long-term attacks as compared to relying on classical certification authorities (CAs). As for authentication, any KGC can of course impersonate its users as it can compute their secret keys. However, this is no different from classical CAs, which can always create a certificate that binds the identity of the user to a public key chosen by the CA (such that the CA has access to the corresponding secret key). Now, when it comes to the secrecy of keys of past sessions where the adversary did not actively intervene, our notion of enhanced weak PFS guarantees that even with the help of all user secret keys, and even that of the KGC, no adversary can obtain the session key. This is exactly what is guaranteed by weak PFS for PKI-based protocols. Indeed, we can show that, similar to \(\textsf {mOT}\), our identity-based protocol can easily be turned into a PKI-based one. Of course, we then lose the advantage that users do not need to exchange certificates before communicating for the first time. In Appendix B, we briefly sketch this transformation.

Open problems As stated above, although message computation is quite fast in our first protocol, the computation of the session key involves costly pairing operations. We leave as an open problem the design of a key exchange protocol with optimal communication complexity and more efficient key derivation. We also find it very interesting to design a protocol with similar efficiency whose proof of full PFS relies on standard assumptions. Finally, we consider it very interesting to design a protocol that provides full PFS while being based on post-quantum security assumptions. Given state-of-the-art techniques, such a protocol is, however, highly likely to exceed the optimal message size that \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\) achieve.

2 Preliminaries

Let \(\kappa \) be the security parameter. Let \(G_1\) and \(G_2\) be groups of prime order p with generators \(g_1\) and \(g_2\) such that \(\log _2(p)\) is polynomial in \(\kappa \). Let \(e:G_1\times G_2\rightarrow G_T\) be a non-degenerate bilinear pairing. We call \(G=(p,g_1,g_2,e)\) a bilinear group. We will base our protocol on asymmetric bilinear groups of prime order where no efficiently computable isomorphism between \(G_2\) and \(G_1\) is known (Type-3 pairings) [19]. When using asymmetric bilinear groups, we assume that \(\log _2(p)\approx 160\) (effectively having \(\log _2(p)=2\kappa \)) and that elements of \(G_1\) can be implemented with roughly 160 bits for a security level of approximately 80 bits [2, 4]. In the following we may also refer to what we call an extended bilinear group.

2.1 Security assumptions

In the following, we present the complexity assumptions on which the security analysis of our first protocol relies. Our main proof assumes the hardness of a generalization of the Computational Bilinear Diffie-Hellman Inversion problem. In Appendix D we show that all our assumptions are covered by the Uber-assumption introduced in [8] and thus hold in the generic (bilinear) group model. The generic group model is a restricted computational model that idealizes a group G so that only group operations on the group elements are allowed. In such a model it is easier to derive lower bounds on certain computational tasks since only group operations have to be considered. The underlying assumption is that having access to the concrete representation of group elements in G does not provide additional benefits when attempting to break the computational task. Moreover, a proof in the generic group model is independent of the concrete group instantiation used in practice. The idea is that the proof holds as long as the group ultimately used behaves like a generic group. Groups over elliptic curves are often modeled as generic groups since typically the best-known attacks on complexity problems in elliptic curves are generic and could be applied to any other group as well. For the proof of full PFS security we rely on two new “knowledge-type” (extraction-type) assumptions, for which we give a brief motivation below.

\((k,l)\)-Computational Bilinear Diffie-Hellman Inversion (\((k,l)\)-\(\text {CBDHI }\)) Assumption Let \(k=k(\kappa )\) and \(l=l(\kappa )\) be polynomials. Assume \(G=(p,g_1,g_2,e)\) is a bilinear group. The \((k,l)\)-Computational Bilinear Diffie-Hellman Inversion problem is, given G, the values \(g_1^{z},g_1^{z^2},\ldots ,g_1^{z^k}\), and \(g_2^{z},g_2^{z^2},\ldots ,g_2^{z^l}\) for some random \(z\in \mathbb {Z}_p \), to compute \(e(g_1,g_2)^{1/z}\). This is a generalization of the Computational Bilinear Diffie-Hellman Inversion problem introduced by Boneh and Boyen in [6], where k is fixed to \(k=2\).

Definition 1

(CBDHI Assumption) We say that attacker \(\mathcal {A}\) breaks the \((k,l)\)-\(\text {CBDHI }\) assumption if \(\mathcal {A}\) succeeds in solving the \((k,l)\)-Computational Bilinear Diffie-Hellman Inversion problem (where the probability is over the random coins of \(\mathcal {A}\) and the random choices of G and z). We say that the \((k,l)\)-\(\text {CBDHI }\) assumption holds if no PPT attacker \(\mathcal {A}\) can solve the \((k,l)\)-\(\text {CBDHI }\) problem.

Looking ahead, in our proof of KCI security we reduce security to the (2, 3)-\(\text {CBDHI }\) assumption while in our proof of full PFS security we rely on the (3, 3)-\(\text {CBDHI }\) assumption.

\((k,l)\)-Generalized Computational Bilinear Diffie-Hellman Inversion (\((k,l)\)-\(\text {GCBDHI }\)) Assumption Let again \(k=k(\kappa )\) and \(l=l(\kappa )\) be polynomials in \(\kappa \) and \(G=(p,g_1,g_2,e)\) be a bilinear group. The \((k,l)\)-Generalized Computational Bilinear Diffie-Hellman Inversion problem is, given G, random \(w\in \mathbb {Z}_p \), \(g_1^{z},g_1^{z^2},\ldots ,g_1^{z^k}\), and \(g_2^{z},g_2^{z^2},\ldots ,g_2^{z^l}\) for some random \(z\in \mathbb {Z}_p \), to compute \(e(g_1,g_2)^{\frac{z+w}{z^2}}\).

Definition 2

(GCBDHI Assumption) We say that attacker \(\mathcal {A}\) breaks the \((k,l)\)-\(\text {GCBDHI }\) assumption if \(\mathcal {A}\) succeeds in solving the \((k,l)\)-Generalized Computational Bilinear Diffie-Hellman Inversion problem (where the probability is over the random coins of \(\mathcal {A}\) and the random choices of G, z and w). We say that the \((k,l)\)-\(\text {GCBDHI }\) assumption holds if no PPT attacker \(\mathcal {A}\) can solve the \((k,l)\)-\(\text {GCBDHI }\) problem.

We will rely on this assumption for \(k=2\) and \(l=3\) to prove security of our protocol under reflection attacks [24], where the adversary is also allowed to make parties communicate with themselves. We stress that since k and l are constant, the challenge size of both of our assumptions does not grow with the security parameter (and so they do not constitute “q-type” assumptions).

Computational Bilinear Diffie-Hellman (\(\text {CBDH }\)) Assumption in \(G_1\) Assume \(G=(p,g_1,g_2,e)\) is a bilinear group. The \(\text {CBDH }\) problem is, given G and \(g_1^{x},g_1^{y}\), to compute \(e(g_1,g_2)^{xy}\).

Definition 3

(CBDH Assumption) We say that attacker \(\mathcal {A}\) breaks the \(\text {CBDH }\) assumption if \(\mathcal {A}\) succeeds in solving the \(\text {CBDH }\) problem (where the probability is over the random coins of \(\mathcal {A}\) and the random choices for G and x, y). We say that the \(\text {CBDH }\) assumption holds if no PPT attacker \(\mathcal {A}\) can break the \(\text {CBDH }\) problem.

Later we will use this assumption to prove that our protocol guarantees weak PFS. Observe that the \(\text {CBDH }\) assumption implies that the classical Computational Diffie-Hellman assumption holds in \(G_1\).
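The implication in the last sentence is the usual one-pairing reduction: a CDH solver in \(G_1\) turns the CBDH challenge \((g_1^x, g_1^y)\) into \(g_1^{xy}\), and one pairing with \(g_2\) yields \(e(g_1,g_2)^{xy}\). A minimal sketch in a toy model that writes every group element as its discrete logarithm modulo a small prime (the prime, seed, and oracle below are illustrative stand-ins, not part of the paper):

```python
# Toy reduction: CDH solver in G1 => CBDH solver, via e(g1^{xy}, g2) = e(g1,g2)^{xy}.
# Elements are written as their discrete logs mod p (illustrative only).
import random

p = 1_000_003
random.seed(4)
x, y = random.randrange(1, p), random.randrange(1, p)

def cdh_oracle_G1(gx_log, gy_log):
    """Hypothetical CDH oracle in G1: returns the log of g1^{xy}."""
    return (gx_log * gy_log) % p

def pair_with_g2(u_log):
    """e(u, g2) for u = g1^{u_log}: in logs of e(g1, g2) this is u_log itself."""
    return u_log

cbdh_solution = pair_with_g2(cdh_oracle_G1(x, y))
assert cbdh_solution == (x * y) % p   # equals e(g1,g2)^{xy}
```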

Knowledge of (Pairing) Pre-Image Assumption (\(\text {KPA }\)) Recall the knowledge of exponent assumption for Diffie-Hellman pairs. It states that for any adversary \(\mathcal {A} \) which, given group G (of prime-order p) and two generators \(X,Y\in G\) outputs \(X',Y'\in G\) such that there is \(s\in \mathbb {Z}_p \) with \(X'=X^s\) and \(Y'=Y^s\), there exists another adversary \(\mathcal {A} '\) which given the same inputs additionally outputs the exponent s. However, when working in the target group \(G_T\) of a bilinear group this assumption can be false. For example, assume \(X=e(A,g_2)\) and \(Y=e(B,g_2)\) for some \(A,B\in G_1\). Then, an adversary that is given \(A,B\in G_1\) and \(g_2\in G_2\) can simply output \(X'=e(A,g'_2)\) and \(Y'=e(B,g'_2)\) for some \(g'_2\in G_2\) without knowing the discrete logarithm s between \(X'\) and \(Y'\).

The following assumption states that although the adversary may not know the discrete logarithm s between \(X',Y'\), it must at least know a suitable \(g'_2\). Observe that if the adversary does indeed know the discrete logarithm s, it can easily compute \(g'_2\) as \(g'_2=g_2^s\). In some sense our new assumption can be viewed as a variant of the knowledge of exponent assumption (which in its original form is related to the problem of inverting modular exponentiations). However, it is rather a “knowledge of group element” assumption that is related to the difficulty of inverting bilinear pairings.

Formally, security is defined via the following security experiment played between challenger \(\mathcal {C} \) and adversary \(\mathcal {A} \):

  1.

    \(\mathcal {C}\) sends a bilinear group \(G=(p,g_1,g_2,e)\) to \(\mathcal {A}\) together with \(A,B\in G_1\). Let \(X=e(A,g_2)\) and \(Y=e(B,g_2)\).

  2.

    \(\mathcal {A}\) outputs \(X',Y'\ne 1_T\).

We say that \(\mathcal {A}\) wins if there is some \(t\in \mathbb {Z}_p \) with \(X'=X^t\) and \(Y'=Y^t\).

Definition 4

(Knowledge of Pairing Pre-Image Assumption) We say that the Knowledge of Pairing Pre-Image assumption (\(\text {KPA }\)) holds if for every PPT algorithm \(\mathcal {A}\) in the above security game there exists another PPT algorithm \(\mathcal {A} '\) that, given the same inputs and random coins as \(\mathcal {A}\), behaves exactly like \(\mathcal {A}\) (having the same input and output behaviour) while additionally outputting \(g'_2=g_2^t\) besides \(X',Y'\) such that \(X'=e(A,g'_2)\) and \(Y'=e(B,g'_2)\) whenever \(\mathcal {A}\) wins.

Modified Knowledge of Co-CDH Assumption The next security assumption we rely on is based on the following problem in bilinear group \(G=(p,g_1,g_2,e)\). Assume we provide attacker \(\mathcal {A}\) with \(A\in G_1\) (such that \(A=g_1^a\) for some \(a\in \mathbb {Z}_p \)) and let \(X=e(A,g_2)\). Intuitively, the task of \(\mathcal {A}\) is to compute W such that \(X=e(A,g_2)=e(g_1,W)\) (i.e. \(W=g_2^a\)). This is equivalent to solving the \(\text {Co-CDH }\) problem [5] in G with challenge \(A,g_2,g_1\). However, in our security experiment we will also give \(\mathcal {A}\) access to a \(\text {Co-CDH }\) oracle. To this end, \(\mathcal {A}\) may, after receiving A, specify \(Y\in G_T\). As a response, \(\mathcal {A}\) obtains \(U\in G_2\) such that \(XY=e(g_1,U)\). The attacker is successful if it can now compute W. We observe that by appropriate choices of Y, \(\mathcal {A}\) can easily compute W.

  • One way to do this is to have \(Y=X^i\) for some \(i\ne -1\). We then have that \(XY=X^{i+1}=e(g_1,U)\). Therefore, W can simply be computed from U as \(W=U^{1/(i+1)}\).

  • Another way is to set \(Y=e(g_1,T)\) for some \(T\in G_2\) known to \(\mathcal {A}\). We then get that \(XY=X\cdot e(g_1,T)=e(g_1,U)\) which is equivalent to \(X=e(g_1,U/T)\). Thus \(W=U/T\) is a correct solution to the problem.
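Both strategies are easy to verify in a toy model of the bilinear group in which every element is represented by its discrete logarithm modulo a small prime (purely illustrative; the prime, seed, and oracle below are our own stand-ins):

```python
# Toy model: every group element is identified with its discrete log mod p,
# so e(g1^a, g2^b) = gT^(a*b) becomes multiplication of logs (illustrative only).
import random

p = 1_000_003                 # stand-in prime group order
random.seed(1)
a = random.randrange(1, p)    # A = g1^a; X = e(A, g2) = gT^a
X = a

def cocdh_oracle(y):
    """On Y = gT^y, return (the log of) U in G2 with e(g1, U) = X * Y."""
    return (a + y) % p

# Strategy 1: Y = X^i for some i != -1, then W = U^{1/(i+1)}
i = 5
U = cocdh_oracle((X * i) % p)          # log(U) = a * (i + 1)
W = (U * pow(i + 1, -1, p)) % p
assert W == a                          # e(A, g2) == e(g1, W)

# Strategy 2: Y = e(g1, T) for known T, then W = U / T
t = random.randrange(1, p)
U = cocdh_oracle(t)                    # log(U) = a + t
W = (U - t) % p
assert W == a
```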

Basically, our new assumption states that every successful adversary must follow one of these strategies, or a combination of both. Intuitively this should still hold if the adversary is, besides U, also provided with \(A'=A^{1/z}\in G_1\) (such that \(e(A,g_2)=e(A',g^z_2)\)) since knowing the z-th root of A for some otherwise unrelated z should not help to break the \(\text {Co-CDH }\) assumption.

The entire security experiment consists of four steps:

  1.

    \(\mathcal {C}\) sends a bilinear group \(G=(p,g_1,g_2,e)\) and \(g_2^{z},h_2,h_2^{z}\) for uniformly random \(z\in \mathbb {Z}_p \) to \(\mathcal {A}\) together with uniformly random \(A\in G_1\).

  2.

    \(\mathcal {A}\) outputs \(Y\in G_T\).

  3.

    \(\mathcal {C}\) outputs \(A^{1/z}\in G_1\) and \(U\in G_2\) such that \(e(A,g_2)\cdot Y=e(g_1,U)\).

  4.

    \(\mathcal {A}\) outputs \(W\in G_2\).

\(\mathcal {A}\) wins if \(e(A,g_2)=e(g_1,W)\).

Definition 5

(MKCoCDH Assumption) We say that the Modified Knowledge of Co-CDH Assumption (\(\text {MKCoCDH }\)) holds if for every PPT algorithm \(\mathcal {A}\) there exists another algorithm \(\mathcal {A} '\) that, given the same inputs and random coins as \(\mathcal {A}\), behaves exactly like \(\mathcal {A}\) (having the same input and output behaviour) while in the second step of the above security experiment additionally outputting \(i\in \mathbb {Z}_p \) and \(T\in G_2\) such that \(Y=e(A,g_2)^i\cdot e(g_1,T)\) whenever \(\mathcal {A}\) wins.

Now consider a simplified game where fewer setup parameters are produced by the challenger. (This will later be used in the proof of \(\textsf {TOPAS+}\).)

  1.

    \(\mathcal {C}\) sends a bilinear group \(G=(p,g_1,g_2,e)\) and \(g_2^{z}\) for uniformly random \(z\in \mathbb {Z}_p \) to \(\mathcal {A}\) together with uniformly random \(A\in G_1\).

  2.

    \(\mathcal {A}\) outputs \(Y\in G_T\).

  3.

    \(\mathcal {C}\) outputs \(A^{1/z}\in G_1\) and \(U\in G_2\) such that \(e(A,g_2)\cdot Y=e(g_1,U)\).

  4.

    \(\mathcal {A}\) outputs \(W\in G_2\).

\(\mathcal {A}\) wins if \(e(A,g_2)=e(g_1,W)\).

Definition 6

(MKCoCDH’ Assumption) We say that the Simple Modified Knowledge of Co-CDH Assumption (\(\text {MKCoCDH }'\)) holds if for every PPT algorithm \(\mathcal {A}\) there exists another algorithm \(\mathcal {A} '\) that, given the same inputs and random coins as \(\mathcal {A}\), behaves exactly like \(\mathcal {A}\) (having the same input and output behaviour) while in the second step of the above security experiment additionally outputting \(i\in \mathbb {Z}_p \) and \(T\in G_2\) such that \(Y=e(A,g_2)^i\cdot e(g_1,T)\) whenever \(\mathcal {A}\) wins.

2.2 Hash functions

Definition 7

(Hash Function) Consider a set \(\mathcal {H} =\{H_t\}^{2^\kappa }_{t=1}\) of hash functions indexed by t where each \(H_t\) maps from \(\{0,1\}^*\) to the hash space T. We require that \(\log _2(|T|)\) is a polynomial in \(\kappa \). We say that \(\mathcal {H} \) is collision-resistant if for uniformly random t no PPT attacker can output two distinct strings \(m_1,m_2\) such that \(H_t(m_1)=H_t(m_2)\), except with negligible probability.

In the following we will always implicitly assume that t is chosen uniformly at random at the beginning of the setup phase. We will then drop t and simply write H (and \(H'\)). In the security proofs we model hash functions as random oracles.
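A minimal sketch of such an indexed family, with SHA-256 standing in for \(H_t\) (this instantiation is our illustrative assumption; the proofs in the paper model the hash functions as random oracles):

```python
# Indexed hash family {H_t}: the index t is drawn once at setup and then
# suppressed in the notation. SHA-256 is an illustrative stand-in for H.
import hashlib
import os

def H(t: bytes, m: bytes) -> bytes:
    # length-prefix the index to separate it unambiguously from the message
    return hashlib.sha256(len(t).to_bytes(2, "big") + t + m).digest()

t = os.urandom(16)                  # uniformly random index, fixed at setup
assert H(t, b"m1") == H(t, b"m1")   # deterministic for a fixed index t
assert H(t, b"m1") != H(t, b"m2")   # a collision here would break SHA-256
```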

2.3 Security model

Let us very briefly recall the basic features of the security model we use. For a more detailed exposition we refer to [20].

Protocol Framework We consider a set of up to \(n=n(\kappa )\) parties \(P_1\) to \(P_n\), each of which is identified via a unique (identity) string \(\textit{id}_i\) for \(i=1,\ldots ,n\), and a 2-pass key exchange protocol \(\Pi \) that can be run between two parties that we typically denote as \(\textit{id}_A\) and \(\textit{id}_B\), or Alice and Bob. Unless stated explicitly otherwise we always assume that \(\textit{id}_A\ne \textit{id}_B\). Each instance of the protocol run at party \(\textit{id}_i\) is called a session, and \(\textit{id}_i\) is also called the holder of that session. A session can either complete, which involves processing incoming messages and computing outgoing messages until it computes a session key K, or it can abort, in which case no session key will be computed. Additionally, we consider expired sessions, which are completed sessions where the session key and all ephemeral values used to compute it have been erased.

The party with which the session key is intended to be shared after the protocol run is called the peer. (More technically, Bob is the peer of one of Alice’s sessions if that session uses \(\textit{id}_B\) to derive the session key.) The session identifier \((z_1,z_2,z_3,z_4)\) of a session is a combination of the identity string of the holder \(z_1\), the identity of the peer \(z_2\), the message sent by the session \(z_3\), and the message received by the session \(z_4\). We say that two sessions match if their session identifiers \((z_1,z_2,z_3,z_4)\) and \((z'_1,z'_2,z'_3,z'_4)\) satisfy \(z_1=z'_2\), \(z_2=z'_1\), \(z_3=z'_4\), and \(z_4=z'_3\). There is also a special party called the key generation center (KGC) that holds a master secret key \(\textit{msk}\) and publishes a corresponding master public key \(\textit{mpk}\). The \(\textit{msk}\) is used to derive secret keys \(\textit{sk}_i\) for \(i=1,\ldots ,n\) for each of the parties from their corresponding identity strings. We assume that each party \(\textit{id}_i\) receives its \(\textit{sk}_i\) from the KGC (in an authentic and confidential way that is out of the scope of this paper). The master public key contains all public information required by the parties to run the protocol. We assume that each party knows all identity strings of the other parties.
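The session-identifier bookkeeping can be sketched directly (the names and types below are illustrative, not from the paper):

```python
# Session identifiers (z1, z2, z3, z4) = (holder, peer, sent, received);
# two sessions match when these fields are pairwise swapped.
from typing import NamedTuple

class SessionID(NamedTuple):
    holder: str      # z1: identity of the session holder
    peer: str        # z2: identity of the intended peer
    sent: bytes      # z3: message sent by the session
    received: bytes  # z4: message received by the session

def matches(s: SessionID, t: SessionID) -> bool:
    return (s.holder == t.peer and s.peer == t.holder
            and s.sent == t.received and s.received == t.sent)

alice = SessionID("Alice", "Bob", b"a-msg", b"b-msg")
bob   = SessionID("Bob", "Alice", b"b-msg", b"a-msg")
assert matches(alice, bob) and matches(bob, alice)
assert not matches(alice, alice)
```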

Attacker We consider an attacker \(\mathcal {A}\) that controls the entire network and is able to intercept, modify, drop, replay, and insert messages in transit. To model this, all outgoing messages are delivered to the adversary. If \(\mathcal {A}\) only relays all the messages that are sent to some session by its peer, it is called passive with respect to that session; otherwise it is called active. \(\mathcal {A}\) can also activate sessions of parties to make them engage in a protocol run with peers of \(\mathcal {A}\)’s choice. To model attack capabilities that grant the adversary access to the secret information of sessions, parties, or the KGC, we allow \(\mathcal {A}\) to send different types of queries to sessions. The following two queries can be used by the attacker to reveal secret values.

  • A \(\textsf {Reveal} \) query reveals the session key of a complete session.

  • A \(\textsf {Corrupt} \) query returns all information in the memory of the holder of a session. This includes the secret keys of the party as well as the state information of all its sessions. If a query has been asked to a session with holder \(\textit{id}_i\) we also say that \(\textit{id}_i\) is corrupted.

We say that a session is exposed if its holder has been corrupted or its session key has been revealed. Additionally, sessions are considered exposed if there exists a matching session that is exposed.

Additionally, the following two queries are granted to the attacker.

  • A \(\textsf {Test} \) query can only be asked once and only to a complete session that is not exposed. Depending on the outcome of a randomly tossed coin \(c\in \{0,1\}\), the output of this query is either the session key K stored at that session (in this context also called the test-session) in case \(c=0\) or a random key uniformly drawn from the space of session keys in case \(c=1\).

  • The adversary may also make (up to n) \(\textsf {Register} \) queries. On input the j-th identity \(\textit{id}_j\) with \(\textit{id}_j\notin \{\textit{id}_1,\ldots ,\textit{id}_{j-1}\}\) for \(1\le j\le n\), this query creates party \(P_j\) and assigns identity \(\textit{id}_j\) to it. Also the secret key \(sk_j\) corresponding to identity \(\textit{id}_j\) is given to \(P_j\). We assume that parties are initially uncorrupted.

Observe that in contrast to [20] we have formally introduced a \(\textsf {Register} \) query. This models that the adversary may also adaptively choose the identities of the honest (uncorrupted) parties. This is much stronger than in the \(\textsf {mOT}\) model, where the identities of the uncorrupted parties are fixed at start-up. (We consider it an essential feature of identity-based cryptography that the adversary may choose the identities of the honest parties. This is in fact not possible in classical key exchange, where we cannot rule out that an adversary that registers a new public key knows the corresponding secret key.) Also, via a combination of \(\textsf {Register} \) and \(\textsf {Corrupt} \) queries the adversary may obtain secret keys for identities of its choice. The original model in [24] also specifies queries that reveal the secret state information of sessions. However, as stated before, like \(\textsf {mOT}\), our protocol will not be secure against \(\textsf {StateReveal} \) queries (not even when only the ephemeral public keys \(g_1^x\) and \(g_2^y\) are revealed). As in [20], we instead require protection of these values to be at the same level as that of \(\textit{sk}_i\). As mentioned before, we show in Appendix F that any protocol which allows the adversary to obtain ephemeral secret keys cannot provide full PFS. In general, we require that except for session keys, all internal information of parties and sessions can only be revealed via full party corruptions.

Security Definitions Let \({{\textbf {SG}} }\) (short for security game) denote the following security game between a challenger \(\mathcal {C}\) and an attacker \(\mathcal {A}\).

  1.

    \(\mathcal {C}\) gives to \(\mathcal {A}\) the master public key \(\textit{mpk}\).

  2.

    \(\mathcal {A}\) may activate sessions and issue \(\textsf {Reveal} \), \(\textsf {Corrupt} \), and \(\textsf {Register} \) queries to its liking. Also, \(\mathcal {A}\) may use its control of the network to modify messages in transit.

  3.

    \(\mathcal {A}\) may ask the \(\textsf {Test} \) query to some completed, unexposed session with holder \(\textit{id}_A\) and peer \(\textit{id}_B\) such that \(\textit{id}_A\ne \textit{id}_B\). Let K be the response and c the internal random coin generated by the test session when answering the query.

  4.

    \(\mathcal {A}\) may activate sessions, issue \(\textsf {Reveal} \), \(\textsf {Corrupt} \), and \(\textsf {Register} \) queries, and use its control of the network to modify messages in transit.

  5.

    \(\mathcal {A}\) outputs \(c'\in \{0,1\}\).

We say that an attacker \(\mathcal {A}\) succeeds in a distinguishing attack if \(c'=c\), the test-session is not exposed, and the peer of the test-session has not been corrupted.

Definition 8

(Security of Identity-based Key Agreement Protocol) An identity-based key agreement protocol \(\Pi \) is secure if for all PPT attackers \(\mathcal {A}\) that are given the above attack capabilities, it holds that i) if two matching sessions of uncorrupted parties complete, the probability that the corresponding session keys differ is negligibly close to zero, and ii) the success probability of \(\mathcal {A}\) in a distinguishing attack is negligibly close to 1/2.

Definition 9

(Weak PFS) We say that \(\Pi \) is secure with weak perfect forward secrecy if in \({{\textbf {SG}} }\) attacker \(\mathcal {A}\) is also allowed to corrupt the peer and the holder of the test-session after the test-session has expired, provided \(\mathcal {A}\) has remained passive (only) with respect to the test-session and its matching session(s).

We stress that in our security proof of weak forward secrecy, security even holds when the attacker knows the long-term secret keys (but no other session-specific secret information) of the peer, the holder, and the KGC before the session key is computed. This can be interesting when dealing with devices where long-term keys and session-specific information are stored separately in two different memories, possibly at different locations, but both with approximately the same level of protection against unauthorized access. Thus corruptions would not reveal session-specific information. In these scenarios the \(\textsf {Corrupt} \) query would only reveal the long-term secrets. Next, we present a formal definition that captures this strengthened form of weak PFS. Essentially, it reflects the intuition that forward secrecy should only rely on the secrecy of the ephemeral keys but not of any long-term secret.

Definition 10

(Enhanced Weak PFS) We say that \(\Pi \) provides enhanced weak perfect forward secrecy if \(\Pi \) is secure with weak perfect forward secrecy even if \(\mathcal {A}\) is additionally given the secret keys of all parties and the secret key of the KGC at the beginning of the security game and we allow that \(\textit{id}_A=\textit{id}_B\) for the test-session and its matching session.

Let us now define full PFS. In contrast to the previous definitions we do not require the attacker to remain passive with respect to the test-session.

Definition 11

(Full PFS) We say that \(\Pi \) is secure with full perfect forward secrecy if in \({{\textbf {SG}} }\) attacker \(\mathcal {A}\) is additionally allowed to i) obtain the (long-term) secret key of the holder of the test session at the beginning of the security experiment and ii) corrupt the peer of the test session after the test-session key expired.

Key Compromise Impersonation Attacks We also cover key compromise impersonation (KCI) attacks [24]. In a KCI attack, \(\mathcal {A}\), after obtaining the secret key of party \(\textit{id}_A\), may make \(\textit{id}_A\) falsely believe that it is communicating with some other uncorrupted party \(\textit{id}_B\) although it actually is not. (Obviously, impersonating Alice to other parties with the help of Alice’s secret key is trivial.)

Definition 12

(KCI Security) We say that \(\Pi \) is secure against KCI attacks if in \({{\textbf {SG}} }\) attacker \(\mathcal {A}\) is additionally allowed to reveal the secret key of the holder of the test session at the beginning of the security experiment (Step 2).

Obviously, KCI security implies security under Definition 8 (Security of Identity-based Key Agreement Protocol) since the adversary is only given additional information to mount its attack.

Reflection Attacks We additionally cover reflection attacks in which an attacker makes two sessions of the same party communicate with each other. As pointed out by [20], these attacks are relevant in real-life scenarios when Alice wants to establish a connection between two of her computers (for example access to a home computer via her laptop).

Definition 13

(Security against Reflection Attacks) We say that \(\Pi \) is secure against reflection attacks if in \({{\textbf {SG}} }\) attacker \(\mathcal {A}\) may also choose a test-session whose peer is equal to its holder, i.e. allowing \(\textit{id}_A=\textit{id}_B\).

3 Main result

A detailed description of \(\textsf {TOPAS}\) is given in Fig. 1. We remark that the challenge in designing a protocol which provides optimal message size and full PFS is that any such protocol must provide two key properties. First, it must include an exchange of ephemeral public keys as otherwise we cannot have any meaningful form of forward secrecy. (Otherwise the session key can be derived by Alice solely from her long-term key and any adversary that obtains this key in a PFS experiment can also compute the session key.) On the other hand, the protocol must also somehow make the parties ‘authenticate’ their ephemeral public keys using their corresponding long-term secrets as otherwise, by the impossibility result of Krawczyk (Appendix E), we cannot have full PFS. The difficulty when designing a protocol with optimal message length now lies in the fact that we need to combine the two requirements into a single short value.

In \(\textsf {TOPAS}\), Alice and Bob exchange blinded versions of their long-term keys. In particular, in each message, the long-term secret is multiplied by a fresh ephemeral Diffie-Hellman key. Each long-term key in turn is a unique signature on the identity of its holder under the master secret. The verification equation for this signature relies on the bilinear pairing and can be re-written as

$$\begin{aligned} e(\textit{sk}_A,g_2^z)/e(H(\textit{id}_A),g_2){\mathop {=}\limits ^{?}}1. \end{aligned}$$

The crucial feature of the key derivation of \(\textsf {TOPAS}\) is that, due to the bilinearity of the pairing, Bob can remove the signature \(\textit{sk}_A\) (and thus any identity-specific information) from the message \(a=g_1^x\textit{sk}_A\) such that the shared key is independent of \(\textit{sk}_A\). However, the result lies in the target group and has an additional exponent z:

$$\begin{aligned} e(a,g_2^z)/e(H(\textit{id}_A),g_2)= & {} e(g_1^x,g_2^z)e(\textit{sk}_A,g_2^z)/e(H(\textit{id}_A),g_2)\\= & {} e(g_1^x,g_2^z)=e(g_1,g_2)^{xz}. \end{aligned}$$

By symmetry, Alice computes \(e(g_1,g_2)^{yz}\). Together with their own secret ephemeral key, each party can now compute \(e(g_1,g_2)^{xyz}\). In the same way, we can obtain

$$\begin{aligned} e(a,h_2^z)/e(H(\textit{id}_A),h_2)= & {} e(g_1^x,h_2^z)e(\textit{sk}_A,h_2^z)/e(H(\textit{id}_A),h_2)\\= & {} e(g_1^x,h_2^z)=e(g_1,h_2)^{xz} \end{aligned}$$

in \(\textsf {TOPAS}\).
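Both cancellation identities can be checked mechanically in a toy model that writes each group element as its discrete logarithm modulo a small prime (illustrative only; real logs are of course hidden):

```python
# Check the TOPAS cancellation: e(a, g2^z) / e(H(id_A), g2) = e(g1,g2)^{xz},
# with every element written as its discrete log mod p (toy model).
import random

p = 1_000_003
random.seed(2)
z   = random.randrange(1, p)        # master secret msk = z
h_A = random.randrange(1, p)        # log of H(id_A)
x   = random.randrange(1, p)        # Alice's ephemeral secret

sk_A = (h_A * pow(z, -1, p)) % p    # sk_A = H(id_A)^{1/z}, a "signature" on id_A
a    = (x + sk_A) % p               # message a = g1^x * sk_A (logs add)

# Bob's computation, in logs of e(g1, g2):
lhs = (a * z - h_A) % p             # e(a, g2^z) / e(H(id_A), g2)
assert lhs == (x * z) % p           # = e(g1,g2)^{xz}

# Same cancellation with h2 = g2^v in place of g2:
v = random.randrange(1, p)
lhs_h = (a * v * z - h_A * v) % p   # e(a, h2^z) / e(H(id_A), h2)
assert lhs_h == (x * v * z) % p     # = e(g1,h2)^{xz}
```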

In the rest of this section, we present a security analysis of our new protocol. We start by showing that \(\textsf {TOPAS}\) provides security against KCI and reflection attacks, as well as enhanced weak PFS under non-interactive security assumptions. Next, we provide a proof of full PFS security.

3.1 Proof of security against KCI and reflection attacks

Before we begin with the formal security proof, let us provide some intuition for the overall strategy. We present a general exposition that is valid for all the protocols presented in this paper and depicted in Figs. 1, 2, and 3. To this end, we introduce some new variables \(k^*\) and \(\overline{k}\) that will be initialized differently for each of our protocols. As a consequence of this notation, in each protocol, session keys are derived by querying \(\hat{k}=(k^*,\textit{id}_A,\textit{id}_B,a,b)\) to the random oracle \(H'\). For \(\textsf {TOPAS}\), we define \(k^*=(k,k')\) and \(\overline{k}=k^{z^2}\). In \(\textsf {TOPAS+}\), we use \(k^*=k\) and \(\overline{k}=k^{z^2}\), while in \(\textsf {FACTAS}\) we define \(k^*=k\) and \(\overline{k}=k^{4}\). We note that k is always part of \(k^*\).

First, as we use \(H'\) to compute the final session key, and since \(H'\) is modeled as a random oracle, any successful attacker needs to query \(\hat{k}=(k^*,\textit{id}_A,\textit{id}_B,a,b)\) to the random oracle \(H'\) such that \(H'(\hat{k})=K\) (where K is the real session key computed by the test-session) before outputting its guess \(c'\). Informally, the attacker has no information on the session key of a particular session unless it has either asked a \(\textsf {Reveal} \) query to that session (or its matching session) or asked \(\hat{k}\) to the random oracle \(H'\).

Our overall simulation strategy can be outlined as follows. The simulator chooses setup parameters such that it

  1.

    can compute \(\overline{k}\) for any session solely from the session-id \((\textit{id}_A,\textit{id}_B,a,b)\), even for the test-session.

  2.

    cannot compute k for the test-session. (Actually, for the proof to go through it is not even necessary that the simulator can compute k for any session at all.)

  3.

    can check whether for any \(H'\) query \(\hat{k}=(k^*,\textit{id}_A,\textit{id}_B,a,b)\), \(k^*\) is indeed the unique intermediate value that occurs in the computation of the session key for session-id \((\textit{id}_A,\textit{id}_B,a,b)\). In this context we also say that \(k^*\) is correct (with respect to \((\textit{id}_A,\textit{id}_B,a,b)\)). In particular, the simulator (but not necessarily the attacker) can evaluate an efficient algorithm that given \(\overline{k}\) and \(\hat{k}\), outputs 1 if \(k^*\) is correct with respect to \((\textit{id}_A,\textit{id}_B,a,b)\) and 0 otherwise.

  4.

    can extract a solution to the complexity challenge when given the \(H'\) query \(\hat{k}=(k^*,\textit{id}_A,\textit{id}_B,a,b)\) of the test-session, in particular k.

We remark that condition 3 above is crucial and constitutes the main challenge in the security proofs. The simulator must always be able to recognize that a given value k (as part of \(k^*\)) is exactly the value that a real session would compute. On the other hand, it must not (at least in the test-session) be able to compute the intermediate value k on its own. We need this to recognize if and when the adversary queries a correct \(k^*\) to the random oracle. If \(k^*\) is correct with respect to the test-session, the simulator can use k to extract a solution to the complexity challenge. However, it is also important to check if an \(H'\) query \(k^*\) is correct with respect to some other session, as we then have to put additional effort into ensuring that the outputs of the \(\textsf {Reveal} \) and \(H'\) queries remain consistent. (This is exemplified in Appendix H.3.) Otherwise the adversary could tell the real security game (in which each session can always compute k) apart from the simulated one (which in contrast sometimes cannot compute k). In essence, we need the underlying problem to behave like a gap problem, where computing a solution is hard even when given access to a corresponding decision oracle.

In each of our protocols we use a different approach to check whether \(k^*\) is correct with respect to some session. In \(\textsf {TOPAS}\), this is ensured via a trapdoor test that was used by Cash, Kiltz, and Shoup to reduce the strong twin Diffie-Hellman problem to the ordinary decisional Diffie-Hellman problem [10]. To this end, we require the adversary (and the honest user) to compute, besides k, an additional value \(k'\) when deriving the session key. Using the trapdoor test and \(k'\), the simulator can check whether \(k^{z^2}=\overline{k}\). As a result, we can base all security proofs of our first protocol, except for the proof of full PFS, on non-interactive security assumptions. In \(\textsf {TOPAS+}\), we do without the additional value \(k'\) at the cost of stronger security assumptions. Here we rely on variants of the Strong Diffie-Hellman assumption. We modify two of the security assumptions that we rely on, the \(\text {CBDHI }\) assumption and the \(\text {GCBDHI }\) assumption, to also give the adversary access to an oracle \(O_{z^2}(\cdot ,\cdot )\) that checks whether for a given pair \((\tilde{k},\tilde{k}^*)\) it holds that \(\tilde{k}^*=\tilde{k}^{z^2}\). In this way the simulator can directly use its access to the Strong Diffie-Hellman oracle to check if \(k^*\) is correct. As a result, our second protocol has a more efficient key derivation procedure than our first. However, the downside of this approach is that our second protocol relies on interactive security assumptions in all proofs of security (except for the proof of enhanced weak PFS). In \(\textsf {FACTAS}\), the simulator will simply check whether \((k^*)^{4}=\overline{k}\). We stress that for all of our protocols we need that no other value \(\tilde{k}^*\ne k^*\), besides the correct intermediate value \(k^*\), can pass the check.
Observe that for all of our protocols this is guaranteed since exponentiation by \(z^2\) constitutes a permutation of the prime-order group \(G_T\), while exponentiation by 4 constitutes a permutation of the group of signed quadratic residues \(\mathcal {SQR}_N\).
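The first claim is quick to sanity-check on a tiny prime-order group (parameters below are ours, for illustration): exponentiation by \(z^2\) is a bijection because \(z^2\) is invertible modulo the prime group order.

```python
# Check that x -> x^(z^2) permutes a group of prime order q: here G is the
# order-23 subgroup of Z_47^* (tiny parameters, purely illustrative).
P, q = 47, 23
G = {pow(4, i, P) for i in range(q)}   # subgroup generated by 4 (order 23)
assert len(G) == q

for z in range(1, q):                  # any z != 0 mod q
    image = {pow(x, z * z, P) for x in G}
    assert image == G                  # exponentiation by z^2 is a bijection
```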

3.2 Basic security properties

Theorem 1

In the random oracle model, \(\textsf {TOPAS}\) is secure against KCI attacks under the (2, 3)-\(\text {CBDHI }\) assumption, and secure against reflection attacks under the (2, 3)-\(\text {GCBDHI }\) assumption.

Proof

It is straightforward to show that two matching sessions compute the same key. Since they are matching, they compute the same session identifier. Also, as shown above, they compute the same values \(k,k'\). Thus all inputs to \(H'\) are identical for both sessions, and the session keys are equal, too.

In the next step, we show that real session keys are indistinguishable from random keys. Assume we are given the random input \(G=(p,\hat{g}_1,\hat{g}_2,e)\) and \((\hat{g}_1^{t},\hat{g}_1^{t^2}\!\!,\hat{g}_2^{t},\hat{g}_2^{t^2}\!\!,\hat{g}_2^{t^3})\) to the \(\text {CBDHI }\)/\(\text {GCBDHI }\) challenge. First, we let the simulator set \(g_1=\hat{g}_1^{t}\) and \(g^{z^i}_2=\hat{g}^{t^i}_2\) for \(i=1,2,3\). This implicitly sets \(\textit{msk}=z=t\). Next, the simulator draws random \(r,s\in \mathbb {Z}_p \) and sets \(h_2=g_2^s/(g^{z^2}_2)^r=g_2^v\) and \(h_2^z=(g_2^z)^s/(g^{z^3}_2)^r=g_2^{vz}\) for some \(v\in \mathbb {Z}_p \). This implicitly sets \(v=s-rz^2\). Observe that all values are distributed exactly as in the original security game.

The simulator will randomly choose one party, Bob, to be the peer of the test-session. Since there is only a polynomial number of peers, the simulator’s guess is correct with non-negligible probability. Throughout the following, we therefore assume that Bob will not be corrupted by the adversary. Similarly, the simulator will guess the test-session with non-negligible probability.

We will consider two different types of attack strategies. Either the attacker tries to launch a KCI attack or a reflection attack. We exploit that security under KCI attacks implies security in the sense of Definition 8 (Security of Identity-based Key Agreement Protocol). (The only difference is that the attacker may in a KCI attack additionally request the secret key of Alice.) The proofs for both attack types are slightly distinct in the extraction phase. For ease of exposition, we describe a simulation strategy which is for the most part valid for both attack types. We clearly mark when and how the simulation strategies differ in the extraction phase. For better overview, in both cases we always ensure that the peer (Bob) of the test-session which is either held by Alice\(\ne \)Bob (in case of KCI attacks) or Bob himself (when dealing with reflection attacks) remains uncorrupted. Let us now present the general setup.

Setup and simulation of queries We will first show how the simulator sets up all parameters to be able to answer \(\textsf {Corrupt} \) queries for any party except Bob. To this end, the simulator programs the outputs of the random oracle H for all inputs except for \(\textit{id}_B\) as follows: given input \(\textit{id}_i\) it chooses a random value \(r_i\in \mathbb {Z}_p \) and outputs \(H(\textit{id}_i):=g^{r_i}_1=\hat{g}^{zr_i}_1\). In this way, the simulator can always compute a corresponding secret key as \(\textit{sk}_i=\hat{g}^{r_i}_1\) and simulate the \(\textsf {Register} \) and \(\textsf {Corrupt} \) queries. However, for \(\textit{id}_B\) it sets \(H(\textit{id}_B)=\hat{g}^{-r_B}_1\) for some random \(r_B\in \mathbb {Z}_p \). Observe that the simulator does not know the corresponding secret key of Bob. In almost all protocol runs the simulator makes sessions (except for those whose holder is Bob) compute their messages and keys as specified in the protocol description. In this way it can also answer all \(\textsf {Reveal} \) queries (because the simulator knows the secret key of any party except for Bob).

To compute messages in sessions where Bob is the session holder (we denote the message produced by such a session by b), the simulator does the following. It chooses a random \(b'\in \mathbb {Z}_p \) and computes \(b=\hat{g}^{b'}_1\). It then holds that \(b^z/H(\textit{id}_B)=\hat{g}^{zb'+r_B}_1={g}^{b'+r_B/z}_1\). Observe that now the secret exponent y in \(b={g}_1^{y} \left( H(\textit{id}_B)\right) ^{1/z}\) is not known to Bob (i.e. the simulator that simulates Bob) as

$$\begin{aligned} y =b'/z+r_B/z^2 \end{aligned}$$

and z is unknown. Observe that, as a consequence, the simulator cannot compute k on behalf of Bob anymore when only given message a in case a is produced by the adversary in an active attack.

Simulating Reveal Queries for Bob Let us now show how the simulator can successfully simulate sessions (and in particular \(\textsf {Reveal} \) queries) involving Bob (and the adversary). To this end we first show that, although the simulator cannot compute k, it can nevertheless always compute \(\overline{k}=k^{z^2}\), even when the adversary \(\mathcal {A}\) makes Bob engage in a communication with Bob himself. Recall that

$$\begin{aligned} \overline{k}_B=\left( \frac{e\left( a,g_2^{z^3}\right) }{e\left( H(\textit{id}_A),g^{z^2}_2\right) }\right) ^y. \end{aligned}$$

Now, independent of whether a has been computed by Bob (when considering reflection attacks), a session of any other party, or the adversary, the simulator can compute \(\overline{k}_B\) for \(y=b'/z+r_B/z^2\) as

$$\begin{aligned} \overline{k}_B= \frac{e\left( a\,,\,g_2^{yz^3}\right) }{e\left( H(\textit{id}_A)\,,\,g^{yz^2}_2\right) } = \frac{e\left( a\,,\left( g_2^{z^2}\right) ^{b'}\!\!\!\!\cdot \!\left( g_2^{z}\right) ^{r_B}\right) }{e\left( H(\textit{id}_A)\,,\,\left( g_2^{z}\right) ^{b'}\!\!\!\cdot \!\left( g_2\right) ^{r_B}\right) }\ . \end{aligned}$$
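The key point is that the pairing exponents \(yz^3\) and \(yz^2\) collapse to expressions in \(b'\), \(r_B\), and the published powers of z, so the simulator never needs y or z itself. This can be verified in a toy exponent-arithmetic model (all parameters below are hypothetical):

```python
# Sanity check: with y = b'/z + r_B/z^2, the exponents y*z^3 and y*z^2 used in
# the computation of kbar_B reduce to combinations of b', r_B and the powers
# z, z^2 only.  Exponents are taken mod a hypothetical prime order P.
import secrets

P = 2**61 - 1

def inv(x):
    return pow(x, P - 2, P)

z  = secrets.randbelow(P - 2) + 2
b_ = secrets.randbelow(P - 1) + 1    # b' chosen by the simulator
rB = secrets.randbelow(P - 1) + 1

y = (b_ * inv(z) + rB * inv(z * z % P)) % P   # y = b'/z + r_B/z^2

# g2^{y z^3} = (g2^{z^2})^{b'} * (g2^z)^{r_B}:
assert (y * pow(z, 3, P)) % P == (b_ * z % P * z + rB * z) % P
# g2^{y z^2} = (g2^z)^{b'} * g2^{r_B}:
assert (y * z % P * z) % P == (b_ * z + rB) % P
print("kbar_B computable from z-powers only")
```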

In the next step, we show that the simulator which knows \(\overline{k}\) can check, given \(k,k'\), if indeed \(\overline{k}=k^{z^2}\) and \(k'=k^{v}\). To this end we apply a variant of the trapdoor test that was introduced in [10]. Recall that we have \(h_2=g_2^v=g_2^s/(g^{z^2}_2)^r\) and \(h_2^z=g_2^{vz}=(g_2^z)^s/(g^{z^3}_2)^r\) for \(v=s-rz^2\) unknown to the simulator. We will now show that with overwhelming probability \(k^{z^2}=\overline{k} \wedge k^{v}=k'\) iff \(\overline{k}^rk'=k^s\). First assume that \(k^{z^2}=\overline{k} \wedge k^{v}=k'\). Then \(\overline{k}^rk'=\left( k^{z^2}\right) ^rk^{v}=\left( k^{z^2}\right) ^rk^{s-rz^2}=k^s\) which shows the first direction. Next assume that \(\overline{k}^rk'=k^s\). Observe that since \(s=v+rz^2\) we get that

$$\begin{aligned} k^s=k^{v+rz^2}=k^{v}\left( k^{z^2}\right) ^r= k'\overline{k}^r \end{aligned}$$

and thus

$$\begin{aligned} \left( \overline{k}/k^{z^2}\right) ^r=k^{v}/k' \end{aligned}$$
(1)

while r is information-theoretically hidden from the adversary. Now if \(\overline{k}=k^{z^2}\) this must imply \(k^{v}=k'\). In case \(\overline{k}\ne k^{z^2}\), \(\left( \overline{k}/k^{z^2}\right) ^r\) is uniformly distributed in \(G_T\) (for random r) while \(k^{v}/k'\) is fixed. Thus the success probability of an adversary to produce \(k,k'\) such that Eq. 1 is fulfilled is upper bounded by 1/p which is negligible.
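Both directions of the trapdoor test can be exercised in the same toy model, writing \(G_T\) elements as exponents so that \(k^{z^2}\) becomes \(z^2\cdot k\) and \(\overline{k}^rk'\) becomes \(r\overline{k}+k'\). The parameters are hypothetical; the point is that a correct pair always passes and a tampered \(\overline{k}\) fails except with probability 1/p over the hidden r:

```python
# Trapdoor-test sketch in a discrete-log toy model (hypothetical parameters).
import secrets

P = 2**61 - 1
rnd = lambda: secrets.randbelow(P - 1) + 1   # nonzero samples

z, r, s, k = rnd(), rnd(), rnd(), rnd()
v = (s - r * z % P * z) % P        # v = s - r z^2, unknown to the simulator

kbar    = z * z % P * k % P        # correct kbar = k^{z^2}
k_prime = v * k % P                # correct k'   = k^{v}

# Direction 1: a correct pair always satisfies kbar^r * k' == k^s,
# since r z^2 k + v k = (v + r z^2) k = s k.
assert (r * kbar + k_prime) % P == s * k % P

# Direction 2: a wrong kbar shifts the left side by r * (error term), which
# cancels only for one value of the hidden r; here the error is fixed to 1,
# so with nonzero r the check always fails.
kbar_bad = (kbar + 1) % P
assert (r * kbar_bad + k_prime) % P != s * k % P
print("trapdoor test behaves as claimed")
```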

So far we have shown that the simulator can always compute \(\overline{k}\) and can always check whether a given pair \(k,k'\) happens to be “correct” (with respect to some session) in the sense of \(k^{z^2}=\overline{k} \wedge k^{v}=k'\). Let us next describe the strategy of the simulator to program the second random oracle, \(H'\), and answer \(\textsf {Reveal} \) queries to sessions involving \(\textit{id}_B\). The main problem is to keep the outputs of the random oracle and the outputs to the \(\textsf {Reveal} \) queries consistent. The simulator maintains two lists R and S which are initially both empty. In R we store queries to the random oracle \(H'\) and the corresponding answers. In S we simply store session-ids. Let us first describe the basic strategy. Whenever the attacker queries the random oracle with input \(x_i\) we look up whether there is some entry \((x_i,y_i)\) already in R. In case there is not, we generate and output a new random string \(y_i\) and add \((x_i,y_i)\) to R. If \((x_i,y_i)\) is already in R we output \(y_i\). To compute session-keys for session-id \(\textit{id}_A,\textit{id}_B,a,b\) we proceed as follows. We look up whether there is some entry \((u_i,v_i)\) with \(u_i=(\textit{id}_A,\textit{id}_B,a,b)\) already in S. In case there is not, we generate and output a new random string \(v_i\) and store \((u_i,v_i)\) in S. If \((u_i,v_i)\) is already in S we output \(v_i\). The challenge now is that we have to make sure that the answers stored in S and R remain consistent. In particular, sometimes the outputs stored in S and R must be identical. (For example, imagine an adversary that successfully computes the values \(k,k'\) of some session with session-id \(\textit{id}_A,\textit{id}_B,a,b\). Obviously, querying \(x_j=(k,k',\textit{id}_A,\textit{id}_B,a,b)\) to the random oracle must produce the same output as when asking the \(\textsf {Reveal} \) query to session \(\textit{id}_A,\textit{id}_B,a,b\).) 
To cope with such situations we need to perform additional checks. So whenever we receive a query \(x_i=(k,k',\textit{id}_A,\textit{id}_B,a,b)\) we additionally check whether there is a corresponding query in S with \(u_j=(\textit{id}_A,\textit{id}_B,a,b)\) such that \(k^{z^2}=\overline{k}\wedge k^v=k'\) for the corresponding \(\overline{k}\) value of that session. On success we output \(y_i=v_j\) as stored in S. Otherwise we output a random \(y_i\). On the other hand, whenever we encounter a \(\textsf {Reveal} \) query for some session held by Bob we can always compute \(u_i=(\textit{id}_A,\textit{id}_B,a,b)\) and \(\overline{k}\). Next we also check whether there is some entry \((x_j,y_j)\) with \(x_j=(k,k',\textit{id}_A,\textit{id}_B,a,b)\) such that again \(k^{z^2}=\overline{k}\wedge k^v=k'\). On success, we output \(v_i=y_j\) as stored in R, otherwise we output a random \(v_i\).
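The list bookkeeping itself is independent of the algebra; it can be sketched with two dictionaries, where a hypothetical `matches` predicate stands in for the trapdoor test \(k^{z^2}=\overline{k}\wedge k^v=k'\) (here it is a direct comparison against the session's intermediate values, purely so the consistency logic can be exercised):

```python
# Bookkeeping sketch for keeping H' answers (list R) and Reveal answers
# (list S) consistent.  `matches` is a stand-in for the trapdoor test.
import secrets

R = {}   # H' queries:  x   -> y
S = {}   # sessions:    sid -> v

def fresh():
    return secrets.token_hex(16)

def matches(x, sid, session_kk):
    k, kp, *rest = x
    return tuple(rest) == sid and (k, kp) == session_kk

def h_prime(x, session_secrets):
    if x in R:
        return R[x]
    for sid, kk in session_secrets.items():     # look for a matching session
        if matches(x, sid, kk) and sid in S:
            R[x] = S[sid]
            return R[x]
    R[x] = fresh()
    return R[x]

def reveal(sid, session_secrets):
    if sid in S:
        return S[sid]
    for x in R:                                 # look for a matching H' query
        if matches(x, sid, session_secrets[sid]):
            S[sid] = R[x]
            return S[sid]
    S[sid] = fresh()
    return S[sid]

# One session between id_A and id_B with intermediate values (k, k'):
sid = ("id_A", "id_B", "a", "b")
session_secrets = {sid: ("k", "k'")}

y1 = h_prime(("k", "k'", *sid), session_secrets)  # adversary derived k, k'
v1 = reveal(sid, session_secrets)                 # then asks Reveal
assert y1 == v1                                   # outputs must coincide
assert h_prime(("wrong", "k'", *sid), session_secrets) != v1
print("R/S bookkeeping consistent")
```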

Extraction Now that we have shown how to simulate all attack queries, let us proceed to showing how the simulator extracts a solution to the \(\text {CBDHI }\) challenge. From this point on, we cover KCI attacks and reflection attacks separately: the test-session is held either by Alice\(\ne \)Bob or by Bob himself.

First we show how the simulator can extract a solution if the test-session is held by Alice. For this session we deviate in the simulation of the test-session from the general simulation strategy that is described above. Instead of generating a honestly as \(a=g_1^{x}H(\textit{id}_A)^{1/z}\) the simulator computes a as \(a=\hat{g}_1^{a'}H(\textit{id}_A)^{1/z}\) for some random \(a'\in \mathbb {Z}_p \). Observe that now the discrete logarithm x in \(a=g_1^{x}H(\textit{id}_A)^{1/z}\) is implicitly set to \(x=a'/z\).

Assume the adversary has non-negligible success probability when issuing the \(\textsf {Test} \) query to this session. In particular, it can decide whether the key provided by the \(\textsf {Test} \) query is the real session key or a random key from the same key space. We know that the attacker must ask the correct \(k,k'\) values with respect to the test-session to \(H'\). With \(y =b'/z+r_B/z^2\) and \(x=a'/z\) the simulator in this way obtains k such that

$$\begin{aligned} k= & {} e(g_1,g_2)^{xyz} =e(\hat{g}_1,\hat{g}_2)^{xyz^2} =e(\hat{g}_1,\hat{g}_2)^{a'/z\cdot (b'/z+r_B/z^2)\cdot z^2}\\= & {} e(\hat{g}_1,\hat{g}_2)^{a'b'+a'r_B/z}. \end{aligned}$$

From this we can easily compute a solution d to the \(\text {CBDHI }\) assumption as

$$\begin{aligned} d=\left( ke(\hat{g}_1,\hat{g}_2)^{-a'b'}\right) ^{1/a'r_B}=e(\hat{g}_1,\hat{g}_2)^{1/z}. \end{aligned}$$
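The exponent arithmetic behind this extraction can be checked numerically in the same toy model (hypothetical parameters): with \(x=a'/z\) and \(y=b'/z+r_B/z^2\), the exponent of k over \(e(\hat{g}_1,\hat{g}_2)\) is \(a'b'+a'r_B/z\), and stripping \(a'b'\) and dividing by \(a'r_B\) leaves exactly 1/z:

```python
# Exponent-arithmetic check of the KCI-case extraction (toy model).
import secrets

P = 2**61 - 1

def inv(n):
    return pow(n, P - 2, P)

rnd = lambda: secrets.randbelow(P - 2) + 2
z, a_, b_, rB = rnd(), rnd(), rnd(), rnd()

x = a_ * inv(z) % P                           # x = a'/z
y = (b_ * inv(z) + rB * inv(z * z % P)) % P   # y = b'/z + r_B/z^2

k_exp = x * y % P * z % P * z % P             # xyz^2 over e(ghat1, ghat2)
assert k_exp == (a_ * b_ + a_ * rB % P * inv(z)) % P

d_exp = (k_exp - a_ * b_) % P * inv(a_ * rB % P) % P
assert d_exp == inv(z)                        # d = e(ghat1, ghat2)^{1/z}
print("KCI extraction recovers e^{1/z}")
```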

Let us now show how to extract a solution to the \(\text {GCBDHI }\) challenge in case the test-session is held by Bob. Recall that the \(\text {GCBDHI }\) challenge also contains \(w\in \mathbb {Z}_p \) and the task is to compute the value \(e(\hat{g}_1,\hat{g}_2)^{\frac{z+w}{z^2}}\). In this case we already have that each message output by Bob is constructed as \(b=\hat{g}_1^{b'}\) for random \(b'\). Now for the test-session we slightly deviate and set \(a=\hat{g}_1^{a'}\) for \(a'\in \mathbb {Z}_p \) with \(a'= r_B/w - b' \) (i.e. such that \(r_B/(a'+b')=w\)). Recall that \(H(\textit{id}_B)=\hat{g}^{-r_B}_1\). This sets x in \(a=g_1^{x}H(\textit{id}_B)^{1/z}=g_1^{x}\cdot \hat{g}^{-r_B/z}_1\) to \(x =a'/z+r_B/z^2\). Also assume that \(b=\hat{g}_1^{b'}\) for random \(b'\), implying \(y =b'/z+r_B/z^2\). This time the simulator obtains the value k from the queries to the random oracle such that

$$\begin{aligned} k= & {} e(g_1,g_2)^{xyz} =e(\hat{g}_1,\hat{g}_2)^{xyz^2}\\= & {} e(\hat{g}_1,\hat{g}_2)^{(a'/z+r_B/z^2)\cdot (b'/z+r_B/z^2)\cdot z^2}\\= & {} e(\hat{g}_1,\hat{g}_2)^{a'b'+(a'+b')r_B/z+(r_B)^2/z^2}. \end{aligned}$$

We now easily get a solution to the \(\text {GCBDHI }\) assumption as

$$\begin{aligned} \left( ke(\hat{g}_1,\hat{g}_2)^{-a'b'}\right) ^{1/(a'+b')r_B}= & {} e(\hat{g}_1,\hat{g}_2)^{\frac{z+r_B/(a'+b')}{z^2}}\\= & {} e(\hat{g}_1,\hat{g}_2)^{\frac{z+w}{z^2}}.\end{aligned}$$
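This extraction, too, can be checked numerically (hypothetical parameters): starting from a k whose exponent is \(a'b'+(a'+b')r_B/z+r_B^2/z^2\) and choosing a' so that \(r_B/(a'+b')=w\), the published formula yields the exponent \((z+w)/z^2\):

```python
# Exponent-arithmetic check of the reflection-case extraction (toy model).
import secrets

P = 2**61 - 1

def inv(n):
    return pow(n, P - 2, P)

rnd = lambda: secrets.randbelow(P - 2) + 2
z, b_, rB, w = rnd(), rnd(), rnd(), rnd()

a_ = (rB * inv(w) - b_) % P                   # a' = r_B/w - b'
assert rB * inv((a_ + b_) % P) % P == w       # so r_B/(a'+b') = w

# Exponent of k over e(ghat1, ghat2): a'b' + (a'+b') r_B / z + r_B^2 / z^2
k_exp = (a_ * b_
         + (a_ + b_) * rB % P * inv(z)
         + rB * rB % P * inv(z * z % P)) % P

sol = (k_exp - a_ * b_) % P * inv((a_ + b_) * rB % P) % P
assert sol == (z + w) * inv(z * z % P) % P    # e(ghat1, ghat2)^{(z+w)/z^2}
print("reflection extraction recovers e^{(z+w)/z^2}")
```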

This concludes the proof of security.\(\square \)

Enhanced Weak PFS Let us now show that \(\textsf {TOPAS}\) provides enhanced weak PFS. The proof is relatively straightforward.

Theorem 2

\(\textsf {TOPAS}\) provides enhanced weak perfect forward secrecy under the \(\text {CBDH }\) assumption.

Proof

Except for the generation of the two messages a and b, the simulator can set up everything as specified in the protocol description. As before, with non-negligible probability the simulator correctly guesses the test-session; a is the message sent by the test-session and b is the message received by the test-session. (In contrast to the previous proof, the simulator will now also know the secret key of Bob and the master secret z.) Since almost everything is computed as specified in the protocol description and since the session key has expired, the simulator can answer all queries of the attacker. We exploit that for enhanced weak PFS security we may assume that a and b are neither produced nor modified by the adversary. Let \(g_1^{x},g_1^{y}\) be the \(\text {CBDH }\) challenge. The simulator computes

$$\begin{aligned} a=g_1^{x}H(\textit{id}_A)^{1/z} \text { and } b=g_1^{y}H(\textit{id}_B)^{1/z}. \end{aligned}$$

We now have that \(k=e(g_1,g_2)^{xyz}\). As before, any successful adversary must query this value to the random oracle \(H'\) before answering the test-query. The simulator can guess with non-negligible success probability which of the values queried to \(H'\) is equal to k. Then it can simply compute the answer to the \(\text {CBDH }\) challenge as \(k^{1/z}=e(g_1,g_2)^{xy}\). \(\square \)

3.3 Proof of full PFS security

Theorem 3

\(\textsf {TOPAS}\) provides full PFS under the (3, 3)-\(\text {CBDHI }\), the \(\text {KPA }\), and the \(\text {MKCoCDH }\) assumptions.

In contrast to the previous security proof of enhanced weak PFS, the adversary can also modify the messages sent and received by the test-session in the security experiment for full PFS.

Proof

Assume there exists an adversary \(\mathcal {A} _0\) that breaks the full PFS security of the protocol. In the following we will construct, step by step, a chain of adversaries \(\mathcal {A} _0\) to \(\mathcal {A} _6\) such that \(\mathcal {A} _6\) breaks the (3, 3)-\(\text {CBDHI }\) assumption. Each adversary \(\mathcal {A} _i\) for \(i=1,2,3,4,5,6\) is based on the existence of the previous one, \(\mathcal {A} _{i-1}\).

Let us first recall the essence of the security experiment when proving full PFS security. Besides the setup parameters, the adversary \(\mathcal {A} _0\) is also given \(a=g_1^xH(\textit{id}_A)^{1/z},\textit{sk}_A,H(\textit{id}_B)\) (but not \((H(\textit{id}_B))^{1/z}\)). In response, the adversary computes \(b\in G_1\). Let \(Y\in G_1\) be the value such that \(b=Y(H(\textit{id}_B))^{1/z}\). Next, the challenger provides the adversary with \(\textit{sk}_B=(H(\textit{id}_B))^{1/z}\). Now since the adversary can distinguish K from a random key it must query the corresponding

$$\begin{aligned} k=\left( e(b,g^z_2)/e(H(\textit{id}_B),g_2)\right) ^{x}=\left( e(a,g^z_2)/e(H(\textit{id}_A),g_2)\right) ^{y} \end{aligned}$$

to the random oracle \(H'\). In the following we always assume, for simplicity, that k is directly given to the challenger.

Attacker \(\mathcal {A} _6\) will simulate the real security game to \(\mathcal {A} _5\) using a similar setup as in the proofs before. Assume we are given the random \(\text {CBDHI }\) challenge consisting of \(G=(p,\hat{g}_1,\hat{g}_2,e)\) and \((\hat{g}_1^{t},\hat{g}_1^{t^2},\hat{g}_1^{t^3}, \hat{g}_2^{t},\hat{g}_2^{t^2},\hat{g}_2^{t^3})\). Let us first show how the simulator will construct the first part of the public parameters in \(\textit{mpk}\) that are to be given to \(\mathcal {A} _5\). Again we let the simulator output \(g_2^{z}=\hat{g}_2^{t}\) as part of \(\textit{mpk}\). Internally, it will also set \(g_2^{z^i}=\hat{g}_2^{t^i}\) for \(i=2,3\). This implicitly sets \(\textit{msk}=z=t\). The simulator draws random \(r,s\in \mathbb {Z}_p \) and sets \(h_2=g_2^s/(g^{z^2}_2)^r=g_2^v\) and \(h_2^z=(g_2^z)^s/(g^{z^3}_2)^r=g_2^{vz}\) for some \(v\in \mathbb {Z}_p \). This implicitly sets \(v=s-rz^2\). Observe that all values are distributed exactly as in the original security game.

Next, the simulator draws a random coin \(q\in \{0,1\}\) and a uniformly random \(r_B\in \mathbb {Z}_p \). Depending on q, the remaining setup values will slightly differ. That is, the simulator sets

$$\begin{aligned} H(\textit{id}_B)=(\hat{g}^{t^q}_1)^{r_B}=(\hat{g}^{z^q}_1)^{r_B}\text { and } g_1=\hat{g}_1^{t^{q+1}}=\hat{g}_1^{z^{q+1}}. \end{aligned}$$

Observe that the simulator does not know \(\textit{sk}_B\) in case \(q=0\). However, in case \(q=1\) the simulator knows \(\textit{sk}_B=(H(\textit{id}_B))^{1/z}=\hat{g}_1^{r_B}\).

For \(q=0\) and \(q=1\), the simulator programs the outputs of the random oracle H for all inputs except for \(\textit{id}_B\) as follows: given input \(\textit{id}_i\) (regardless of it being chosen by the adversary as part of a \(\textsf {Register} \) query or not) it chooses a random value \(r_i\in \mathbb {Z}_p \) and outputs \(H(\textit{id}_i):=g^{r_i}_1=\hat{g}^{z^{q+1}r_i}_1\). In this way, the simulator can always compute a corresponding secret key as \(\textit{sk}_i=\hat{g}^{z^{q} r_i}_1\) and answer the \(\textsf {Corrupt} \) query. As in the previous proofs, all sessions can be simulated with this setup except for the test-session.

Our next goal is to construct attacker \(\mathcal {A} _5\) step by step. It behaves like \(\mathcal {A} _0\) but outputs some additional values in case the simulator correctly guesses the test-session (and its peer). We stress that \(\mathcal {A} _5\) is an attacker against the full PFS security just like \(\mathcal {A} _0\). Let us begin our formal analysis. Assume we have a successful adversary \(\mathcal {A} _0\).

Attacker \(\mathcal {A} _1\) Attacker \(\mathcal {A} _1\) works exactly like \(\mathcal {A} _0\) except that it outputs \(k^{1/x}=e(b,g^z_2)/e(H(\textit{id}_B),g_2)\) together with b and \(g_1,X=a/\textit{sk}_A=g_1^x\) at the end of the security game. Observe that these values can easily be computed from public values alone. In this way, \(\mathcal {A} _1\) can easily be constructed from \(\mathcal {A} _0\).

Attacker \(\mathcal {A} _2\) Now, since \(\mathcal {A} _1\) outputs \(k,X,k^{1/x},g_1\), by the security of the Knowledge of Pairing Pre-Image assumption there also exists an adversary \(\mathcal {A} _2\) that works exactly like \(\mathcal {A} _1\) except that it also outputs \(g^*_2\) together with k such that \(k=e(X,g^*_2)\) and \(k^{1/x}=e(g_1,g^*_2)\). Since \(k=e(g_1,g_2)^{xyz}\), we must have \(g^*_2=g_2^{yz}\).

Attacker \(\mathcal {A} _3\) Next, we show that if \(\mathcal {A} _2\) wins the security game against a PFS challenger we can construct an attacker \(\mathcal {A} _3\) that breaks the \(\text {MKCoCDH }\) assumption.

Let us recall the security game of the Modified Knowledge of Co-CDH Assumption. First \(\mathcal {A} _3\) receives G, \({g}_2^{z},h_2,h_2^{z}\) and \(B'\in G_1\). Next, \(\mathcal {A} _3\) outputs \(Y'\in G_T\). As a response, the challenger outputs \(B'^{1/z}\) and \(U'\in G_2\) with \(e(B',g_2)\cdot Y'=e(g_1,U')\). Finally, \(\mathcal {A} _3\) outputs \(W'\). It wins if \(e(B',g_2)=e(g_1,W')\).

We will now describe how \(\mathcal {A} _3\) works by using \(\mathcal {A} _2\) as a black-box. First \(\mathcal {A} _3\) uses the first part of its input \(G = (p, g_1, g_2, e)\), \(g_2^{z},h_2,h_2^{z}\) as the public key \(\textit{mpk}\) used in the PFS security game when simulating the challenger to \(\mathcal {A} _2\). Next, it programs the random oracle H as described before such that the secret keys of all identities \(\textit{id}_A\) are known to \(\mathcal {A} _3\), except for \(\textit{id}_A=\textit{id}_B\). Moreover, \(\mathcal {A} _3\) sets \(H(\textit{id}_B):=B'\). In this way, \(\mathcal {A} _3\) can construct all the values \(G,g_2^z,h_2,h_2^{z},a=g_1^xH(\textit{id}_A)^{1/z},\textit{sk}_A,H(\textit{id}_B)\) and simulate the full PFS challenger to \(\mathcal {A} _2\).

When \(\mathcal {A} _2\) outputs \(b,k^{1/x}\), attacker \(\mathcal {A} _3\) sends \(Y'=k^{1/x}\) to its challenger. In response \(\mathcal {A} _3\) receives \((H(\textit{id}_B))^{1/z}\) and \(U'\) from its challenger. The value \((H(\textit{id}_B))^{1/z}\) is used as input to \(\mathcal {A} _2\). The final output of \(\mathcal {A} _2\) is k and \(g_2^*=g_2^{yz}\). \(\mathcal {A} _3\) can now compute \(W'=U'/g_2^{yz}\). Observe that \(W'\) is correct since

$$\begin{aligned} e(B',g_2)= & {} e(g_1,U')/Y'=e(g_1,U')/e(g_1,g^*_2)\\= & {} e(g_1,U'/g^*_2)=e(g_1,W'). \end{aligned}$$

So whenever \(\mathcal {A} _2\) succeeds in a security game with the PFS challenger so will \(\mathcal {A} _3\) in the security game of the Modified Knowledge of Co-CDH Assumption.

Attacker \(\mathcal {A} _4\) Now, by the security of the \(\text {MKCoCDH }\) assumption, as \(\mathcal {A} _3\) succeeds there exists another adversary \(\mathcal {A} _4\) that works exactly like \(\mathcal {A} _3\) except that it also outputs \(i\in \mathbb {Z}_p \) and \(T\in G_2\) together with \(Y'\) such that

$$\begin{aligned} Y'=e(B',g_2)^i\cdot e(g_1,T). \end{aligned}$$

We stress again that in the above series of attackers we have that if \(\mathcal {A} _0\) wins so will \(\mathcal {A} _4\).

Attacker \(\mathcal {A} _5\) By the previous arguments, we are guaranteed that attackers \(\mathcal {A} _4\) and \(\mathcal {A} _2\) exist. Moreover, \(\mathcal {A} _4\) can use \(\mathcal {A} _2\) as a black-box to win in the \(\text {MKCoCDH }\) security game whenever \(\mathcal {A} _2\) wins.

We will now show an attacker \(\mathcal {A} _5\) that uses \(\mathcal {A} _4\) and \(\mathcal {A} _2\) as black-boxes to win against a PFS challenger while outputting additional values besides what is required by definition. In the following, \(\mathcal {A} _5\) plays the role of the \(\text {MKCoCDH } \) challenger towards \(\mathcal {A} _4\) and the role of the PFS challenger towards \(\mathcal {A} _2\). \(\mathcal {A} _5\) receives the setup parameters G, \({g}_2^{z},{h}_2,{h}_2^{z}\), \(\textit{sk}_A\), \(H(\textit{id}_B)\), and a as input. It relays G, \({g}_2^{z}\), \({h}_2\), \({h}_2^{z}\), and \(B':=H(\textit{id}_B)\) to \(\mathcal {A} _4\). In response, \(\mathcal {A} _4\) outputs \(k^{1/x}\) together with (i, T) to \(\mathcal {A} _5\). At the same time \(\mathcal {A} _5\) sends all values G, \({g}_2^{z},{h}_2,{h}_2^{z}\), \(\textit{sk}_A\), \(H(\textit{id}_B)\), a to \(\mathcal {A} _2\). In response, \(\mathcal {A} _2\) outputs b and \(k^{1/x}\) to \(\mathcal {A} _5\).

The attacker \(\mathcal {A} _5\) will output b and (i, T), i.e. a mix of the outputs of \(\mathcal {A} _4\) and \(\mathcal {A} _2\). Next, \(\mathcal {A} _5\) receives \((H(\textit{id}_B))^{1/z}\) from its PFS challenger. Attacker \(\mathcal {A} _5\) simply relays this value to \(\mathcal {A} _2\). As a response \(\mathcal {A} _2\) outputs k and \(g_2^*=g_2^{yz}\). These two values are finally output by \(\mathcal {A} _5\). Observe that we have not completed the run of \(\mathcal {A} _4\). However, we know by our previous analysis that if \(\mathcal {A} _2\) is successful, so will \(\mathcal {A} _4\) be (if we complete the run of \(\mathcal {A} _4\)). Moreover, at this point it is hidden from \(\mathcal {A} _4\)’s view that we abort, as all values given to \(\mathcal {A} _4\) are distributed exactly as in the real security game. Nevertheless, already at this point the values (i, T) must be such that \(Y'=e(B',g_2)^i\cdot e(g_1,T)\) (otherwise \(\mathcal {A} _4\) could not win in case we completed the run with a winning \(\mathcal {A} _2\)). In all of this, \(\mathcal {A} _5\) deals with any attack queries made by \(\mathcal {A} _2\) to its PFS environment by simply relaying them to its own PFS challenger and relaying the corresponding answers back to \(\mathcal {A} _2\).

Attacker \(\mathcal {A} _6\) We will now present an attacker \(\mathcal {A} _6\) that can break the \(\text {CBDHI }\) assumption by using attacker \(\mathcal {A} _5\). \(\mathcal {A} _6\) will, using the \(\text {CBDHI }\) challenge, simulate all sessions (except for the test-session) as described before. Let us now turn our attention to the test-session. We have to consider two cases: either the value i output by \(\mathcal {A} _5\) satisfies \(i\ne -1\) or \(i= -1\).

Let us first consider the case where \(i\ne -1\). With probability at least 1/2 we have that \(q=0\). In this case, it holds that \(H(\textit{id}_B)=\hat{g}_1^{r_B}\). We also have that

$$\begin{aligned} \frac{e(b,g_2^z)}{e(H(\textit{id}_B),g_2)}=e(Y,g^z_2)=e(H(\textit{id}_B),g_2)^i\cdot e(g_1,T). \end{aligned}$$

This directly gives

$$\begin{aligned} \left( \frac{e(b,g_2^z)}{e(g_1,T)}\right) ^{1/r_B(i+1)}=e(\hat{g}_1,\hat{g}_2)^{1/z}. \end{aligned}$$

It is important to observe that in case \(i\ne -1\) the simulator does not need to know \(H(\textit{id}_B)^{1/z}\). It can already break the \(\text {CBDHI }\) assumption just after receiving b and (i, T).

Now let us turn our attention to the case where \(i=-1\). In this case, with probability at least 1/2 we have that \(q=1\), \(H(\textit{id}_B)=(\hat{g}_1^{t})^{r_B}\), and \(\mathcal {A} _6\) knows \(\textit{sk}_B=\hat{g}_1^{r_B}\). It can thus successfully send \(\textit{sk}_B\) to the adversary \(\mathcal {A} _5\) and receive back \(g_2^{zy}\). Since \(i=-1\) we have \(e(b,g_2^z)=e(g_1,T)\). It holds that

$$\begin{aligned} e(b,g_2^z)=e(Yg_1^{r_B/z^2},g_2^z)=e(g_1,g_2^{zy+r_B/z})=e(g_1,T). \end{aligned}$$

This immediately shows that \(T/g_2^{zy}=g_2^{r_B/z}\). We finally get a solution to the complexity challenge as

$$\begin{aligned} e(\hat{g}_1,T/g_2^{zy})^{1/r_B}=e(\hat{g}_1,\hat{g}_2)^{1/z}. \end{aligned}$$

This completes the proof of security. \(\square \)

4 Higher efficiency

\(\textsf {TOPAS+}\) is a variant of our protocol that features higher efficiency in the key derivation process (Fig. 2). Essentially, it is equivalent to our first protocol except that now only one intermediate value k is computed and fed into the hash function \(H'\). As a consequence we can have a shorter master public key. More importantly, when computing K each party only needs to apply two pairings, one of which is message-independent and only needs to be computed once for every communication partner. The security proof of this variant will additionally rely on a variant of the so-called Strong Diffie-Hellman (SDH) assumption. Basically, it states that the assumptions used in the proofs of key indistinguishability, security against reflection, KCI, and full PFS attacks remain valid even if the adversary has access to an oracle \(O_{z^2}(\cdot ,\cdot )\) with the following property: given \(\tilde{k}\in G_T\) and \(\tilde{k}^*\in G_T\), \(O_{z^2}(\cdot ,\cdot )\) outputs 1 iff \(\tilde{k}^{z^2}=\tilde{k}^*\) and 0 otherwise.

Definition 14

(CBDHI’ and GCBDHI’ Assumptions) We say that the (k, l)-\(\text {CBDHI }\) ’ assumption holds if the (k, l)-\(\text {CBDHI }\) assumption holds even when the adversary is additionally given access to oracle \(O_{z^2}(\cdot ,\cdot )\) in the \(\text {CBDHI }\) security game. Likewise we say that the (k, l)-\(\text {GCBDHI }\) ’ assumption holds if the (k, l)-\(\text {GCBDHI }\) assumption holds even when the adversary is additionally given access to oracle \(O_{z^2}(\cdot ,\cdot )\) in the \(\text {GCBDHI }\) security game.

We are now ready to state our result on the security of \(\textsf {TOPAS+}\).

Theorem 4

\(\textsf {TOPAS+}\) (Fig. 2) has the same security properties under the same security assumptions as \(\textsf {TOPAS}\) (Fig. 1), except that it relies on the (2, 3)-\(\text {CBDHI }\) ’, (3, 3)-\(\text {CBDHI }\) ’, (2, 3)-\(\text {GCBDHI }\) ’, and \(\text {MKCoCDH }\) ’ assumptions instead of the (2, 3)-\(\text {CBDHI }\), (3, 3)-\(\text {CBDHI }\), (2, 3)-\(\text {GCBDHI }\), and \(\text {MKCoCDH }\) assumptions.

The security proof remains virtually unchanged. The only difference is that we no longer need a trapdoor test to maintain consistency when simulating the random oracle. Instead we can directly use the oracle \(O_{z^2}\) to check whether a value \(k^*\) submitted by the adversary (as part of an \(H'\) query) actually equals the intermediate value k computed by some session in the real security game. As before, the simulator is only able to compute \(\overline{k}=k^{z^2}\) for each session, but using \(O_{z^2}(\cdot ,\cdot )\) it can check whether \(O_{z^2}(k^*,\overline{k})=1\). These modifications affect all proofs except for the proof of enhanced weak PFS.
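How \(O_{z^2}\) replaces the trapdoor test can be illustrated in the same toy discrete-log model used above (hypothetical parameters): the simulator stores \(\overline{k}=k^{z^2}\) per session and asks the oracle whether a candidate \(k^*\) satisfies \((k^*)^{z^2}=\overline{k}\):

```python
# Sketch of the O_{z^2} consistency check (toy model: group elements are
# exponents mod a hypothetical prime P; k^{z^2} becomes z^2 * k).
import secrets

P = 2**61 - 1
z = secrets.randbelow(P - 2) + 2

def O_z2(k_tilde, k_star):
    """Return 1 iff k_tilde^{z^2} == k_star (in exponent form)."""
    return 1 if k_tilde * z % P * z % P == k_star else 0

k    = secrets.randbelow(P - 1) + 1
kbar = k * z % P * z % P            # the simulator can compute kbar = k^{z^2}

assert O_z2(k, kbar) == 1           # correct candidate accepted
assert O_z2((k + 1) % P, kbar) == 0 # wrong candidate rejected
print("O_{z^2} consistency check works")
```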

5 Deniability

Deniable key exchange protocols protect Alice against the unwanted disclosure of her participation in a protocol run with Bob. This can be used to implement a digital variant of “off-the-record” communication over insecure networks. Intuitively, a key exchange protocol provides deniability if Bob cannot convince a judge, Judy, that Alice once talked to him. To show deniability, it suffices to show that every transcript and corresponding session key that Bob presents to Judy could equally have been produced by a public simulation algorithm that has no access to Alice. More formally, for every PPT Bob that communicates with the PPT Alice, there exists a PPT simulator which, when given the same inputs (including the same random coins) as Bob, produces transcripts and corresponding session keys that are indistinguishable from those produced by Bob. For a formal treatment of deniability in key exchange protocols see [14]. Implicitly authenticated DH-based protocols like \(\textsf {HMQV}\) trivially fulfill this strong form of deniability.

Lemma 1

Any implicitly authenticated 2-pass protocol meets the strong notion of deniability of [14].

Proof

To see this, observe that Alice’s message \(a=g^x\) can be computed using public information only (Alice does not use her secret key when computing a). In particular, Bob can compute it himself. Together with his own message b and his secret key, Bob can then easily derive the secret session key. \(\square \)

In 2-message key exchange protocols where the computation of the exchanged messages involves the secret keys, it may be impossible to achieve deniability. As an example, consider exchanging signed DH shares where the signature involves the identities of both parties. Of course, when Bob receives such a signature from Alice and presents it to Judy, this immediately proves that Alice once talked to Bob. Fortunately, \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\) provide a very strong form of deniability, although the computation of a involves Alice’s secret key.

Theorem 5

\(\textsf {TOPAS}\), \(\textsf {TOPAS+}\), and \(\textsf {FACTAS}\) meet the strong notion of deniability of [14].

Proof

Observe that a is uniformly distributed in the underlying group since x is uniform. Therefore the simulator can simulate Alice’s message a by just choosing a random group element. Recall that by definition the simulator is also given the same random coins as Bob. Because of this, and because the simulator also knows Bob’s secret key, it can compute y, b, and the corresponding session key K in exactly the same way as Bob. \(\square \)

6 On the necessity of the programmable random oracle model

Our proofs of \(\textsf {TOPAS}\) and \(\textsf {TOPAS+}\) heavily exploit the programmability of the random oracle model. Using a separation technique that was introduced by Fischlin and Fleischhacker [18], and transferred to the identity-based setting by Chen, Huang, and Zhang [11] to analyze the Sakai–Ohgishi–Kasahara non-interactive key exchange, we can show that, in some sense, the programmability of the random oracle model is actually necessary for our reductions.

To be precise, let us first introduce the notion of simple reductions.

Definition 15

(Simple Reduction) Let R be an efficient security reduction that uses a successful adversary (which is successful in some security game) to break a security assumption. We say that R is simple if it i) only calls the adversary once, ii) only uses the adversary in a black-box way, and iii) does not rewind the adversary to a point before R has output its first values to the adversary.

Although this restricts the class of possible security reductions, we remark that most reductions in cryptography are simple in the above sense.

Before we can present our impossibility result, we first need to introduce a new security assumption, the \(\text {CBDHI-2 }\) assumption, which is based on the \(\text {CBDHI }\) assumption. Consider a discrete logarithm oracle \(O_{DL}(\cdot ,\cdot )\) which, given two group elements \(G,H\) in \(G_T\), returns the discrete logarithm \(x\in \mathbb {Z}_p \) such that \(G^x=H\). Basically, our new assumption states that it is hard to solve two instances of the \(\text {CBDHI }\) problem when given one-time access to \(O_{DL}(\cdot ,\cdot )\). Similarly, we define \(\text {CBDHI-2 }\) ’ to be the corresponding assumption for \(\text {CBDHI }\) ’.

Assume \(G=(p,g_1,g_2,e)\) is a bilinear group. The (k, l)-\(\text {CBDHI-2 }\) (respectively (k, l)-\(\text {CBDHI-2 }\) ’) problem is to solve two random instances (both defined over G) of the (k, l)-\(\text {CBDHI }\) ((k, l)-\(\text {CBDHI }\) ’) problem when additionally given one-time access to \(O_{DL}(\cdot ,\cdot )\).

Definition 16

(CBDHI-2 and CBDHI-2’ Assumptions) We say that attacker \(\mathcal {A}\) breaks the (k, l)-\(\text {CBDHI-2 }\) (respectively the (k, l)-\(\text {CBDHI-2 }\) ’) assumption if \(\mathcal {A}\) succeeds in solving the (k, l)-\(\text {CBDHI-2 }\) ((k, l)-\(\text {CBDHI-2 }\) ’) problem with non-negligible probability (where the probability is over the random coins of \(\mathcal {A}\) and the random choices of the bilinear group G and the problem instances z). We say that the (k, l)-\(\text {CBDHI-2 }\) ((k, l)-\(\text {CBDHI-2 }\) ’) assumption holds if no PPT attacker \(\mathcal {A}\) can break it.

We remark that these assumptions do not constitute a classical one-more assumption. In particular, the adversary is not provided with an oracle that solves the (k, l)-\(\text {CBDHI }\) (or (k, l)-\(\text {CBDHI }\) ’) problem but rather with a full-fledged discrete logarithm oracle. Observe that it is very easy to solve a single instance of the \(\text {CBDHI }\) (\(\text {CBDHI }\) ’) problem when given one-time access to \(O_{DL}(\cdot ,\cdot )\). However, even with one-time access to such an oracle there seems to be no way to solve two instances of the (k, l)-\(\text {CBDHI }\) ((k, l)-\(\text {CBDHI }\) ’) problem. We are now able to state our impossibility result.
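As an aside, the observation that one DL query suffices for a single instance can be replayed in the toy discrete-log model (hypothetical parameters): query \(O_{DL}\) on the pair \((e(\hat{g}_1,\hat{g}_2),e(\hat{g}_1^z,\hat{g}_2))\) to learn z, then exponentiate by its inverse. Two independent instances would need two queries, which the game does not grant:

```python
# Breaking a *single* CBDHI instance with one DL query (toy model:
# G_T elements are exponents over e(ghat1, ghat2) mod a hypothetical prime P).
import secrets

P = 2**61 - 1
z = secrets.randbelow(P - 2) + 2     # instance secret, hidden behind group elements

def O_DL(base_exp, target_exp):
    """One-time DL oracle: return x with base^x = target (toy: x = target/base)."""
    return target_exp * pow(base_exp, P - 2, P) % P

e_exp  = 1        # exponent of e(ghat1, ghat2) relative to itself
ez_exp = z        # e(ghat1^z, ghat2) is computable from the instance

x = O_DL(e_exp, ez_exp)              # the single oracle call recovers z
assert x == z
solution_exp = pow(z, P - 2, P)      # exponent of e(ghat1, ghat2)^{1/z}
assert solution_exp * z % P == 1
print("single CBDHI instance broken with one DL query")
```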

Theorem 6

Assume the \((k,l)-\text {CBDHI-2 } \) (respectively \((k,l)-\text {CBDHI-2 } ')\) assumption holds in the bilinear group G. Then there exists no simple reduction R that reduces the security of \(\textsf {TOPAS}\) \(( \textsf {TOPAS+})\) to the \((k,l)-\text {CBDHI } \) \(((k,l)-\text {CBDHI } ')\) assumption in the non-programmable random oracle model.

The proof of this theorem is similar to that of [11]. The main difference is that we cannot combine two master public keys to form an input to the discrete logarithm oracle such that the output can be used to efficiently compute secret long-term keys. Instead, we solve this problem by combining the two long-term secret keys that the two runs of the reduction compute for the same random identity. The result is then used as input to the discrete logarithm oracle.

Proof

The idea is to use a so-called meta-reduction that runs the reduction R twice. We denote the two runs of R by \(R_1\) and \(R_2\). The meta-reduction simulates the attacker against \(R_1\) and \(R_2\); its goal is to break the \((k,l)\text {-CBDHI-2}\) (\((k,l)\text {-CBDHI-2}'\)) assumption. From the reduction's point of view, the simulations must be indistinguishable from real adversaries \(\mathcal {A} _1\) and \(\mathcal {A} _2\). \(R_1\) is given the first and \(R_2\) the second instance of the \(\text {CBDHI}\) (\(\text {CBDHI}'\)) problem.

In the following, we assume that in the two runs, reduction \(R_i\) for \(i\in \{1,2\}\) computes the master secret \(z_i\). We can now describe two (inefficient) adversaries \(\mathcal {A} _1\) and \(\mathcal {A} _2\), such that \(\mathcal {A} _i\) communicates with the reduction \(R_i\) in the security game. Essentially, \(\mathcal {A} _i\) only issues a few \(\textsf {Register} \) and \(\textsf {Corrupt} \) queries that are simulated by \(R_i\), modifies message b, and then directly breaks the key indistinguishability of \(\textsf {TOPAS}\) (\(\textsf {TOPAS+}\)). In particular, after being provided with the master public key, each adversary \(\mathcal {A} _i\) calls the \(\textsf {Register} \) query and the random oracle on the same five random identities \(\textit{id}_0\), \(\textit{id}_1\), \(\textit{id}_2\), \(\textit{id}_3\), and \(\textit{id}_4\). Next, \(\mathcal {A} _i\) calls \(\textsf {Corrupt} \) on \(\textit{id}_0\), \(\textit{id}_{2i-1}\), and \(\textit{id}_{2i}\) and obtains back \(\textit{sk}^{(i)}_0\), \(\textit{sk}^{(i)}_{2i-1}\), and \(\textit{sk}^{(i)}_{2i}\). Using the public parameters, \(\mathcal {A} _i\) tests whether these secret keys are indeed correct by checking if \(e(\textit{sk}^{(i)}_{j},g_2^{z_i})=e(H(\textit{id}_j),g_2)\) for all \(j\in \{0,2i-1,2i\}\). On failure, \(\mathcal {A} _i\) simply aborts. Finally, \(\mathcal {A} _i\) computes the secret keys \(\textit{sk}^{(i)}_A:=\textit{sk}^{(i)}_{5-2i}\) and \(\textit{sk}^{(i)}_B:=\textit{sk}^{(i)}_{6-2i}\) of the remaining uncorrupted parties \(\textit{id}^{(i)}_A:=\textit{id}_{5-2i}\) and \(\textit{id}_B^{(i)}:=\textit{id}_{6-2i}\) by brute-force search. After that, \(\mathcal {A} _i\) makes the uncorrupted parties \(\textit{id}^{(i)}_A\) and \(\textit{id}^{(i)}_B\) run the protocol. We assume that the oracle of \(\textit{id}^{(i)}_A\) sends the first message a.
Next, \(\mathcal {A} _i\) intercepts the message b of the oracle of \(\textit{id}^{(i)}_B\) and modifies it to \(b'=g^{y_i}\textit{sk}^{(i)}_B\) for some random but known \(y_i\in \mathbb {Z}_p \). In the next step, \(\mathcal {A} _i\) calls the \(\textsf {Test} \) query on the oracle of \(\textit{id}^{(i)}_A\) to obtain a key candidate. Finally, \(\mathcal {A} _i\) breaks the key indistinguishability of the session key by checking whether this candidate equals the session key derived from a, \(y_i\), and the public parameters as given in the key derivation equation. This ends the description of our inefficient adversaries. Recall that, towards a contradiction, we assume that the reduction R exists in the non-programmable random oracle model; since H is not programmable, queries to it are always answered in the same way for both adversaries. By definition, the reduction must then break the \(\text {CBDHI}\) (\(\text {CBDHI}'\)) assumption in both runs.
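To illustrate the key-correctness test \(e(\textit{sk}_j,g_2^{z})=e(H(\textit{id}_j),g_2)\) performed by the adversaries, the following toy Python sketch models group elements by their discrete logarithms modulo a small prime p, so that the pairing \(e(g_1^a,g_2^b)=e(g_1,g_2)^{ab}\) becomes multiplication of exponents. The prime and all sampled values are purely illustrative and do not correspond to a real bilinear group.

```python
# Toy sanity check of the adversary's key-verification equation
#     e(sk_j, g2^z) == e(H(id_j), g2).
# Elements g1^a and g2^b are represented by their exponents a, b, so the
# pairing e(g1^a, g2^b) = e(g1, g2)^(a*b) becomes a*b mod a toy prime p.
# All concrete values are illustrative only.
import random

p = 1_000_003  # toy prime group order

def pairing(a, b):
    """Toy pairing on dlog representatives: e(g1^a, g2^b) -> a*b mod p."""
    return a * b % p

z = random.randrange(1, p)        # master secret of the current run
h_j = random.randrange(1, p)      # dlog of H(id_j)
sk_j = h_j * pow(z, -1, p) % p    # dlog of sk_j = H(id_j)^(1/z)

# g2^z has dlog z, g2 has dlog 1; a correct key passes the check:
assert pairing(sk_j, z) == pairing(h_j, 1)

# A malformed key (here shifted by one) fails it:
assert pairing((sk_j + 1) % p, z) != pairing(h_j, 1)
print("pairing check passed")
```

In the proof, \(\mathcal {A} _i\) runs this check with the run-specific master secret exponent; a malformed key handed out by the reduction makes the adversary abort.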

We now show how to exploit the reduction to break the \(\text {CBDHI-2}\) (\(\text {CBDHI-2}'\)) assumption. To this end, the two \(\text {CBDHI}\) (\(\text {CBDHI}'\)) instances of the \(\text {CBDHI-2}\) (\(\text {CBDHI-2}'\)) challenge are distributed among the two runs of the reduction. Next, we show that we can describe a meta-reduction M that manages to efficiently simulate the two adversaries using the \(O_{DL}(\cdot ,\cdot )\) oracle. M essentially implements two adversaries \(\mathcal {A} '_1\) and \(\mathcal {A} '_2\) that proceed exactly like the \(\mathcal {A} _i\), except that they do not compute \(\textit{sk}^{(i)}_A\) and \(\textit{sk}^{(i)}_B\) via brute-force search. Instead, M calls the discrete logarithm oracle with base \(H(\textit{id}_0)\) on the quotient

$$\begin{aligned} \textit{sk}^{(2)}_0/\textit{sk}^{(1)}_0=H(\textit{id}_0)^{1/z_2}/H(\textit{id}_0)^{1/z_1}=H(\textit{id}_0)^{1/z_2-1/z_1}. \end{aligned}$$

As a result, M obtains \(c=1/z_2 - 1/z_1\). With this value, M can now compute the missing secret keys efficiently as

$$\begin{aligned} \textit{sk}^{(i)}_A&=\textit{sk}^{(i)}_{5-2i}=\textit{sk}^{(3-i)}_{5-2i}\cdot H(\textit{id}_{5-2i})^{(-1)^{i}c},\\ \textit{sk}^{(i)}_B&=\textit{sk}^{(i)}_{6-2i}=\textit{sk}^{(3-i)}_{6-2i}\cdot H(\textit{id}_{6-2i})^{(-1)^{i}c}. \end{aligned}$$

Essentially, M uses the answers to the \(\textsf {Corrupt} \) queries obtained by \(\mathcal {A} '_i\) and the value c to compute the missing secret keys. Observe that the distributions produced by the \(\mathcal {A} '_i\) are identical to those of the \(\mathcal {A} _i\). In conclusion, M obtains from \(R_1\) and \(R_2\) the solutions to both instances of the \(\text {CBDHI-2}\) (\(\text {CBDHI-2}'\)) challenge, while having used the discrete logarithm oracle only once. This contradicts the \(\text {CBDHI-2}\) (\(\text {CBDHI-2}'\)) assumption. Therefore, no such simple reduction R can exist when H is modeled as a non-programmable random oracle. This ends the proof. \(\square \)
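The algebra of the quotient and recombination steps can be sanity-checked numerically by modeling each group element by its discrete logarithm modulo a toy prime p, so that \(\textit{sk}^{(i)}_j=H(\textit{id}_j)^{1/z_i}\) becomes \(h_j\cdot z_i^{-1} \bmod p\). All concrete values below are illustrative only.

```python
# Toy numerical check of the meta-reduction's key-recombination step.
# Group elements are modeled by their discrete logs modulo a toy prime p,
# so sk_j^(i) = H(id_j)^(1/z_i) becomes h_j * (1/z_i) mod p.
import random

p = 1_000_003  # toy prime group order (illustrative only)

def inv(x):
    return pow(x, -1, p)  # modular inverse mod p (Python 3.8+)

z1 = random.randrange(1, p)  # master secret computed in run R_1
z2 = random.randrange(1, p)  # master secret computed in run R_2
h = [random.randrange(1, p) for _ in range(5)]  # dlogs of H(id_0)..H(id_4)

# Long-term keys handed out in run i: sk_j^(i) = h_j / z_i mod p
sk = {(i, j): h[j] * inv(z) % p
      for i, z in ((1, z1), (2, z2)) for j in range(5)}

# The single DL-oracle call O_DL(H(id_0), sk_0^(2)/sk_0^(1)) returns the
# dlog of the quotient to base H(id_0), i.e. c = 1/z2 - 1/z1 mod p.
c = (sk[(2, 0)] - sk[(1, 0)]) * inv(h[0]) % p
assert c == (inv(z2) - inv(z1)) % p

# Recombination: sk_j^(i) = sk_j^(3-i) * H(id_j)^((-1)^i * c)
for i in (1, 2):
    for j in (5 - 2 * i, 6 - 2 * i):  # the uncorrupted identities in run i
        rec = (sk[(3 - i, j)] + (-1) ** i * c * h[j]) % p
        assert rec == sk[(i, j)]
print("recombination check passed")
```

The loop confirms that, given the keys corrupted in the other run and the single oracle output c, both missing keys per run follow by cheap exponent arithmetic, exactly as the displayed recombination equations claim.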