1 Introduction

Tight security. In modern cryptography it is standard to propose new cryptographic constructions along with a proof of security. The provable security paradigm, which goes back to a seminal work of Goldwasser and Micali [27], becomes increasingly relevant for “real-world” cryptosystems today. For instance, the upcoming TLS version 1.3Footnote 1 is the first version of this important protocol where formal security proofs were used as a basis for several fundamental design decisions [44].

A security proof usually describes a reduction (in a complexity-theoretic sense), which turns an efficient adversary \(\mathcal {A}\) breaking the considered cryptosystem into an efficient adversary \(\mathcal {B}\) breaking some underlying complexity assumption, such as the discrete logarithm problem, for example. If \(\mathcal {B}\) can be shown to have about the same running time and success probability as \(\mathcal {A}\) (up to a constant factor), then the reduction is said to be tight. However, many security proofs are not tight. For example, we are often only able to show that if \(\mathcal {A}\) runs in time \(t_\mathcal {A} \) and has success probability \(\epsilon _\mathcal {A} \), then \(\mathcal {B}\) runs in time \(t_\mathcal {B} \approx t_\mathcal {A} \), but we can bound its success probability only as \(\epsilon _\mathcal {B} \ge \epsilon _\mathcal {A} /Q\), where Q is the security loss. Q can often be “large”, e.g., linear or even quadratic in the number of users.

If Q is polynomially bounded, then this still yields a polynomial-time reduction in the sense of classical asymptotic complexity theory. However, if we want to deploy the cryptosystem in practice, then the size of cryptographic parameters (like for instance the size of an underlying algebraic group, where the discrete logarithm problem is assumed to be hard) must be determined. If we want to choose these parameters in a theoretically-sound way, then a larger loss Q must be compensated by larger parameters, which in turn has a direct impact on efficiency. For example, in the discrete logarithm setting, this would typically require an increase in the group order by a factor \(Q^2\). As a concrete example, \(2^{32}\) users with \(2^{32}\) sessions each and quadratic security loss would force us to use 521 bit elliptic curves instead of 256 bit elliptic curves, which more than quintuples the cost of an exponentiation on one typical platform (as measured by openssl speed). Thus, in order to be able to instantiate the cryptosystem with “optimal” parameters, we need a tight security proof.

The possibility and impossibility of tight security proofs has been considered for many primitives, including symmetric encryption [29, 31, 36], public-key encryption [3, 5, 24, 32], (hierarchical) identity-based encryption [11, 16], digital signatures [22, 23, 32, 33, 37, 43, 45, 47], authenticated key exchange [2], and more. It also becomes increasingly relevant in “real-world” cryptography. For instance, most recently, Gueron and Lindell [29] improved the tightness of the AES-GCM-SIV nonce misuse-resistant encryption scheme, with a new nonce-based key derivation method. This construction is now part of the current draft of the corresponding RFC proposal,Footnote 2 won the best paper award at ACM CCS 2017, and is already used in practice in Amazon’s AWS key management scheme.Footnote 3

In many important areas with high real-world relevance, including digital signature schemes and authenticated key exchange protocols, we still do not have any tightly-secure constructions that are suitable for practical deployment. Known schemes either have a security loss which is at least linear in the number of users (typical for digital signatures) or even quadratic in the number of protocol sessions (typical for authenticated key exchange), if a real-world multi-user security model is considered. This huge security loss often makes it impossible to choose deployment parameters in a theoretically-sound way, because such parameters would have to be unreasonably large and thus impractical.

1.1 Tightly-Secure Digital Signatures

In the domain of digital signatures, there are two relevant dimensions for tightness: (i) the number of signatures issued per public key, and (ii) the number of users (=public keys).

For some important “real-world” schemes, such as Schnorr signatures, impossibility results suggest that current proof techniques are not sufficient to achieve tightness [22, 23, 43, 47], not even if only the first dimension is considered in a single-user setting. Some other schemes have a tight security proof in the first dimension [10, 32, 37, 45]. However, in a more realistic multi-user setting with adaptive corruptions, which appears to be the “right” real-world security notion for applications of signatures such as key exchange, cryptocurrencies, secure instant messaging, or e-mail signatures, there are only very few constructions with tight security in both dimensions.

One construction is due to Bader [1]. It is in the random oracle model [6], but this seems reasonable, given the objective of constructing simple and efficient real-world cryptosystems. However, the construction requires bilinear maps (aka. pairings). Even though bilinear maps have become significantly more efficient in the past years, their practical usability is still not comparable to schemes over classical algebraic groups, such as elliptic curves without pairings. Furthermore, bilinear maps involve rather complex mathematics, and are therefore rather difficult to implement, and not yet available on many platforms and software libraries, in particular not for resource-constrained lightweight devices or smartcards. Finally, recent advances in solving the discrete logarithm problem [4] restrain the applicability of bilinear maps in settings with high performance and security requirements.

The other two known constructions are due to Bader et al. [2]. Both have a security proof in the standard model (i.e., without random oracles), but are also based on bilinear maps. The first one uses a simulation-sound Groth-Sahai [28] proof system, which internally uses a tree-based signature scheme to achieve tightness. Thus, a signature of the resulting construction consists of hundreds of group elements, and is therefore not suitable for practical deployment. The second scheme is more efficient, but here public keys consist of hundreds of group elements, which is much larger than the public key size of any other schemes currently used in practice, and seems too large for many applications.

In summary, the construction of a practical signature scheme without bilinear maps, which provides tight security in a realistic multi-user setting with corruptions and in the standard sense of existential unforgeability under chosen-message attacks, is an important open problem. A solution would provide a very useful building block for applications where the multi-user setting with adaptive corruptions appears to be the “right” real-world security notion, such as those mentioned above.

The difficulty of constructing tightly-secure signatures. Constructing a tightly-secure signature scheme in a real-world multi-user security model with adaptive corruptions faces the following technical challenge. In the \(\mu \)-user setting, the adversary \(\mathcal {A}\) receives as input a list \( pk _1, \ldots , pk _\mu \) of public keys. We denote the corresponding secret keys with \( sk _1, \ldots , sk _\mu \). \(\mathcal {A}\) is allowed to ask two types of queries. It may either output a tuple (mi), to request a signature for a chosen message m under secret key \( sk _i\). The security experiment responds with \(\sigma \,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathsf {Sign} ( sk _i,m)\). Or it may “corrupt” keys. To this end, it outputs an index i, and the experiment responds with \( sk _i\). Adversary \(\mathcal {A}\) breaks the security, if it outputs \((i^*, m^*, \sigma ^*)\) such that \(\sigma ^*\) verifies correctly for a new message \(m^*\) and with respect to an uncorrupted key \( pk _{i^*}\). Note that this is the natural extension of existential unforgeability under chosen-message attacks (EUF-CMA) to the multi-user setting with corruptions. Following [2], we will call it \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-security. Security in this sense is implied by the standard EUF-CMA security definition, by a straightforward reduction that simply guesses the index \(i^*\) of the uncorrupted key, which incurs a security loss of \(Q = 1/\mu \) that is linear in the number of users.

Now let us consider the difficulty of constructing a security reduction \(\mathcal {B}\) which does not lose a factor \(Q = 1/\mu \). On the one hand, in order to avoid the need to guess an uncorrupted key, \(\mathcal {B}\) must “know” all secret keys \( sk _1, \ldots , sk _\mu \), in order to be able to answer key corruption queries.

On the other hand, however, the reduction \(\mathcal {B}\) must be able to extract the solution to a computationally hard problem from the forgery \((i^*, m^*, \sigma ^*)\). If \(\mathcal {B}\) “knows” \( sk _{i^*}\), then it seems that it could compute \((m^*, \sigma ^*)\) even without the help of the adversary. Now, if \(\mathcal {B}\) is then able to extract the solution of a “hard” computational problem from \((m^*, \sigma ^*)\), then this means that the underlying hardness assumption must be false, and thus the reduction \(\mathcal {B}\) is not meaningful.

The above argument seems to suggests that achieving tight \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-security is impossible. One can even turn this intuition into a formal impossibility result, as done in [3]. However, it turns out that the impossibility holds only for schemes where the distribution of signatures that are computable with a secret key \( sk _{i^*}\) known to reduction \(\mathcal {B}\) is identical to the distribution of signatures \((m^*, \sigma ^*)\) output by adversary \(\mathcal {A}\). This provides a leverage to overcome the seeming impossibility. Indeed, the known constructions of tightly \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-secure schemes [1, 2] circumvent the impossibility result. As we describe below in more detail, these constructions essentially ensure that with sufficiently high probability the adversary \(\mathcal {A}\) will output a message-signature pair \((m^*, \sigma ^*)\) such that \(\sigma ^*\) is not efficiently computable, even given \( sk _{i^*}\).

The main technical challenge in constructing signature schemes with tight security in a real-world multi-user security model with corruptions is therefore to build the scheme in a way that makes it possible to argue that the reduction \(\mathcal {B}\) is able to extract the solution to a computationally hard problem from the forged signature computed by \(\mathcal {A}\), even though \(\mathcal {B}\) knows secret keys for all users.

On constructing tightly-secure signatures without bilinear maps. All previously known tightly \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-secure signature schemes [1, 2] essentially work as follows. A public key \( pk \) consists of two public keys \( pk = ( pk _0, pk _1)\) of a “base” signature scheme, which is tightly-secure in a multi-user setting without corruptions. The secret key \( sk \) consists of a random secret key \( sk = sk _b\), \(b\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\{0,1\}\), for either \( pk _0\) or \( pk _1\). A signature consists of a Groth-Sahai-based [28] witness-indistinguishable OR-proof of knowledge, proving knowledge of a signature that verifies either under \( pk _0\) OR under \( pk _1\). In the security proof, the reduction \(\mathcal {B}\) basically knows \( sk _b\) (and thus is able to respond to all corruption-queries of \(\mathcal {A}\)), but it hopes that \(\mathcal {A}\) produces a proof of knowledge of a signature under \( pk _{1-b}\), which can then be extracted from the proof of knowledge and be used to break the instance corresponding to \( pk _{1-b}\).

A natural approach to adopt this technique to signatures without pairings would be to use a Fiat-Shamir-like proof of knowledge [21], in combination with the very efficient OR-proofs of Cramer-Damgård-Schoenmakers (CDS) [18]. However, here we face the following difficulties. First, existing signature schemes that use a Fiat-Shamir-like proof of knowledge, such as for the Schnorr scheme [46], are already difficult to prove tightly secure in the single-user setting, due to known impossibility results [22, 23, 43, 47]. Second, its tightly-secure variants, such as the DDH-based scheme of Katz-Wang [37] and the CDH-based schemes of Goh-Jarecki [25] or Chevallier-Mames [17], do not use a proof of knowledge, but actually a proof of language membership, where we cannot extract a witness along the lines of [1, 2]. Thus, adopting the approach of [1, 2] to efficient signature schemes without pairings requires additional ideas and new techniques.

Our contributions. We construct the first tightly \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-secure signature scheme which does not require bilinear maps. We achieve this by describing a new way of combining the efficient EDL signature scheme considered in [25] with Cramer-Damgård-Schoenmakers proofs [18], in order to obtain tightly-secure signatures in the multi-user setting with adaptive corruptions.

The scheme is very efficient, in particular in comparison to previous schemes with tight multi-user security. A public key consists of only two group elements, while the secret key consists of a bit and one integer smaller than the group order. A signature consists of a random nonce, two group elements and four integers smaller than the group order. The computational cost of the algorithms is dominated by exponentiations. Key generation costs a single exponentiation. Signing costs a single exponentiation plus the generation of a proof, for a total of seven exponentiations. Verification costs eight exponentiations.

1.2 Tightly-Secure Authenticated Key Exchange

Modern security models for authenticated key exchange consider very strong adversaries, which control the entire communication network, may adaptively corrupt parties to learn their long-term secret keys, or reveal session keys of certain sessions. This includes all security models that follow the classical Bellare-Rogaway [7] or Canetti-Krawczyk [14] approach, for instance. The adversary essentially breaks the security, if it is able to distinguish a non-revealed session key from random. Furthermore, in order to achieve desirable properties like forward security (aka. perfect forward secrecy) [30], the attacker is even allowed to attack session keys belonging to sessions where one or both parties are corrupted, as long as these corruptions do not allow the adversary to trivially win the security experiment (e.g., because it is able to corrupt a communication partner before the key is computed, such that the attacker can impersonate this party).

The huge complexity of modern security models for authenticated key exchange makes it difficult to construct tightly-secure protocols. Most examples of modern key exchange protocols even have a quadratic security loss in the total number of protocol sessions, which stems from the fact that a reduction has to guess two oracles in the security experiment that belong to the protocol session “tested” by the adversary (cf. the discussion of the “commitment problem” below).

Despite the huge practical importance of authenticated key exchange protocols, we currently know only a single example of a protocol that achieves tight security [2], but it requires complex building blocks, such as one of the tightly-secure signature schemes sketched above, as well as a special key encapsulation mechanisms that is composed of two public-key encryption schemes. Given the huge demand for efficient key exchange protocol in practice, the construction of a simple and efficient, yet tightly-secure, authenticated key exchange protocol without these drawbacks is an important open problem.

Our contributions. We describe the first truly practical key exchange protocol with tight security in a standard security model for authenticated key exchange. The construction (but not the security proof) is very simple, which makes the protocol easy to implement and ready to use, even on simple devices.

Our protocol is able to establish a key with very low latency, in three messages and within a single round-trip time (1-RTT) in a standard client-server setting. This holds even in a typical real-world situation where both communication partners are initially not in possession of their communication partner’s public keys, and therefore have to exchange their certified public keys within the protocol. Furthermore, the protocol provides full forward security.

In Sect. 5 we analyse the computational efficiency of our protocol, instantiated with our signature scheme, by comparing it to the simple “signed Diffie-Hellman” protocol, instantiated with EC-DSA. The analysis is based on the benchmark for ECC arithmetic of OpenSSL, and considers a theoretically-sound choice of cryptographic parameters based on the tightness of security proofs. Even though our protocol requires a larger absolute number of exponentiations, already in small-to-medium-scale settings this is quickly compensated by the fact that arithmetic in large groups is significantly more costly than in small groups. In a large-scale setting, our protocol even outperforms signed Diffie-Hellman with EC-DSA with respect to computational performance. This comes at the cost of moderately increased communication complexity, when compared to the (extremely communication-efficient) EC-DSA-signed Diffie-Hellman protocol.

Sketch of our construction and technical difficulties. Our starting point is the standard “signed Diffie-Hellman” protocol, instantiated with our tightly-secure multi-user signature scheme. However, we stress that this is not yet sufficient to achieve tight security, due to the “commitment problem” described below. We resolve this problem with an additional message, which is important to achieve tight security, but does not add any additional latency to the protocol.

More precisely, consider the standard “signed Diffie-Hellman” protocol, executed between Alice and Bob, where Bob first sends \(v = (g^b, \sigma _B)\), where \(\sigma _B\) is a signature under Bob’s secret key over \(g^b\), Alice responds with \(w = (g^a, \sigma _A)\), where \(\sigma _A\) is Alice’s signature over \(g^a\), and the resulting key is \(k = g^{ab}\). Let us sketch why this protocol seems not to allow for a tight security proof.

In a Bellare-Rogaway-style security model, such as the one that we describe in Sect. 4.1, each session of each party is modelled by an oracle \(\pi _i^s\), where \((i,s) \in [\mu ] \times [\ell ]\), \(\mu \) is the number of parties and \(\ell \) is the number of sessions per party. Now, consider a reduction \(\mathcal {B}\) which receives as input a DDH challenge \((g,g^x,g^y,g^z)\), and now wants to embed these values into the view of the key exchange adversary \(\mathcal {A}\). One way to do this, which is used in most existing security proofs of “signed Diffie-Hellman-like” protocols (such as [34, 35, 39], for instance) is to guess two oracles \(\pi _i^s\) and \(\pi _j^t\), embed \(g^x\) into the message sent by \(\pi _i^s\), \(g^y\) into the message sent by \(\pi _j^t\), and then hope that the adversary will forward \(g^y\) to \(\pi _i^s\) and “test” the key of oracle \(\pi _i^s\), where the \(g^z\)-value from the DDH challenge is then embedded. Note that the need to guess two out of \(\mu \ell \) oracles correctly incurs a quadratic security loss of \(O(\mu ^2\ell ^2)\), which is extremely non-tight.

A natural approach to improve tightness is to use the well-known random self-reducibility of DDH [5], and embed randomised versions of \(g^x\) and \(g^y\) into the messages of more than one oracle. However, here we face the following difficulty. As soon as an oracle \(\pi _i^s\) has output a Diffie-Hellman share \(g^a\), the reduction \(\mathcal {B}\) has essentially committed itself to whether it “knows” the discrete logarithm a.

  • If oracle \(\pi _i^s\) outputs a randomised version of \(g^x\), \(g^a = g^{x + e_i^s}\) where \(e_i^s\) is the randomization, then \(\mathcal {B}\) does not “know” the discrete logarithm \(a = x + e_i^s\). Now it may happen that the adversary \(\mathcal {A}\), which controls the network and possibly also some parties, sends a value h to oracle \(\pi _i^s\) (on behalf of some third party), such that h is not controlled by the reduction \(\mathcal {B}\). If then \(\mathcal {A}\) asks the reduction to reveal the key of oracle \(\pi _i^s\), then the reduction fails, because it is not able to efficiently compute \(k = h^a\).

  • This problem does not occur, if \(g^a = g^{e_i^s}\) such that \(\mathcal {B}\) “knows” the discrete logarithm a. However, if it now happens that the adversary \(\mathcal {A}\) decides to distinguish the key k of oracle \(\pi _i^s\) from random, then again the reduction fails, because it is not able to embed \(g^z\) into k.

This “commitment problem” is the reason why many classical security proofs, in particular for signed Diffie-Hellman protocols, have a quadratic security loss. They embed a DDH challenge into the view of adversary \(\mathcal {A}\) by guessing two out of \(\mu \ell \) oracles, and the reduction will fail if the guess is incorrect.

We resolve the commitment problem in a novel way by adding an additional message. We change the protocol such that Alice initiates the protocol with a message \(u = G(g^a)\), where G is a cryptographic hash function (cf. Fig. 3). This message serves as a commitment by Alice to \(g^a\). Then the protocol proceeds as before: Bob sends \(v = (g^b, \sigma _B)\), Alice responds with \(w = (g^a, \sigma _A)\), and the resulting key is \(k = g^{ab}\).Footnote 4 However, Bob will additionally check whether the first message u received from Alice matches the third protocol message, that is, \(u = G(g^a)\), and abort if not.

As we will prove formally in Sect. 4.2, the additional message u resolves the commitment problem as follows. We will model G as a random oracle. This guarantees that from the point of view of the adversary \(\mathcal {A}\), a value G(h) forms a binding and hiding commitment to h. However, for the reduction \(\mathcal {B}\), u is not binding, because \(\mathcal {B}\) controls the random oracle. We will construct \(\mathcal {B}\) such that whenever an oracle \(\pi _i^s\) outputs a first protocol message u, then receives back a message \(v = (g^b, \sigma _B)\), and now has to send message \(w = (g^a, \sigma _A)\), then \(\mathcal {B}\) it is able to retroactively decide to embed the element \(g^x\) from the DDH challenge into u such that \(u = G(g^{x + e_i^s})\), or not and it holds that \(u = G(g^{e_i^s})\). This is possible by re-programming the random oracle in a suitable way.Footnote 5

We will explain in Sect. 4.2 that the additional message u does not increase latency to the protocol, when used in a standard client-server setting. This is essentially because Alice can send cryptographically protected payload immediately after receiving message \(v = (g^b, \sigma _B)\) from Bob, along with message \(w = (g^a, \sigma _A)\). Thus, in a typical client-server setting, where the client initiates the protocol and then sends data to the server, the overhead required to establish a key is only 1 RTT, exactly like for ordinary signed Diffie-Hellman.

Outline. Section 2 recalls the necessary background and standard definitions. The signature scheme is described and proven secure in Sect. 3, the AKE protocol is considered in Sect. 4.

2 Background

In this section, we recap some background and standard definitions of Diffie-Hellman problems, the Fiat-Shamir heuristic, and digital signatures.

Diffie-Hellman Problems. Let denote a cyclic group of prime order \(p\) and let \(g\) be a generator. Let \(\mathcal {DDH}\) be the set of DDH tuples \(\{ (g^{a}, g^{b}, g^{ab}) \mid a, b\in \{ 0, 1, \dots , p-1 \}\).

Definition 1

Let \(\mathcal {A} \) be an algorithm that takes two group elements as input and outputs a group element. The success probability af \(\mathcal {A} \) against the Computational Diffie-Hellman (CDH) problem is

We say that \(\mathcal {A} \) \((t,\epsilon )\)-breaks CDH if \(\mathcal {A} \) runs in time \(t\) and its success probability is at least \(\epsilon \).

Definition 2

Let \(\mathcal {A} \) be an algorithm that takes three group elements as input and outputs 0 or 1. The advantage of \(\mathcal {A} \) against the Decision Diffie-Hellman (DDH) problem [12] is

We say that \(\mathcal {A} \) \((t,\epsilon )\)-breaks DDH if \(\mathcal {A} \) runs in time \(t\) and its advantage is at least \(\epsilon \).

In proofs, it is often convenient to consider an adversary that sees multiple CDH/DDH problems. The \(n\)-CDH adversary must solve a CDH problem, but it gets to choose the group elements from two lists of randomly chosen group elements. The \(n\)-DDH adversary gets \(n\) tuples, all of which are either DDH tuples or random tuples. Again, it is often convenient if some of these DDH tuples share coordinates.

Definition 3

Let \(\mathcal {A} \) be an algorithm that takes as input \(2n\) group elements and outputs two integers and a group element. The success probability of \(\mathcal {A} \) against the \(n\)-CDH problem is

Definition 4

Let \(\mathcal {A} \) be an algorithm that outputs 0 or 1. \(\mathcal {A} \) has access to an oracle that on input of an integer i returns three group elements. If \(i>0\), then the first group element returned will be the same as the first group element in the oracle’s ith response. Let \(\mathcal {O} _0\) be such an oracle that returns randomly chosen DDH tuples. Let \(\mathcal {O} _1\) be such an oracle that returns randomly chosen triples of group elements.

The advantage of the algorithm \(\mathcal {A} \) against the \(n\)-DDH problem is

It is clear that 1-CDH and 1-DDH correspond to the ordinary problems. Likewise, it is clear that we can embed a CDH or DDH problem in a \(n\)-CDH or \(n\)-DDH problem, so a hybrid argument would relate their advantage. However, the DH problems are random self-reducible, which means that we can create better bounds.

Theorem 1

Let \(\mathcal {A} \) be an adversary against \(n\)-CDH. Then there exists an adversary \(\mathcal {B} \) against CDH such that

The difference in running time is linear in \(n\).

Theorem 2

Let \(\mathcal {A} \) be an adversary against \(n\)-DDH. Then there exists an adversary \(\mathcal {B} \) against DDH such that

The difference in running time is linear in \(n\).

The proof of the first theorem is straight-forward. A proof of the second theorem can be found in e.g. Bellare, Boldyreva and Micali [5].

Proofs of equality of discrete logarithms. Sigma protocols are special three-move protocols originating in the Schnorr identification protocol [46]. We shall need a proof of equality for discrete logarithms [15] together with the techniques for creating a witness-indistinguishable OR-proof [18].

Let be such that \(x= g^{a}\) and \(z= y^{a}\). The standard sigma protocol [15] for proving that \(\log _{g} x= \log _{y} z\) works as follows:

 

Commitment. :

Sample \(\rho \leftarrow \{ 0, 1, \dots , p-1 \}\). Compute \(\alpha _0 = g^{\rho }\) and \(\alpha _1 = y^{\rho }\). The commitment is \((\alpha _0, \alpha _1)\).

Challenge. :

Sample \(\beta \leftarrow \{ 0, 1, \dots , p-1 \}\). The challenge is \(\beta \).

Response. :

Compute \(\gamma \leftarrow \rho - \beta a\bmod {p}\). The response is \(\gamma \).

Verification. :

The verifier accepts the response if

$$\begin{aligned} \alpha _0&= g^{\gamma } x^{\beta }&\alpha _1&= y^{\gamma } z^{\beta } \text{. } \end{aligned}$$

 

The usual special honest verifier zero knowledge simulator producing a simulated conversation on public input \((x, y, z)\) and challenge \(\beta \) is denoted by \(\mathsf {ZSim}_{\mathrm {eq}}(x, y, z; \beta )\), and it is a perfect simulator. The cost of generating a proof is dominated by the two exponentiations, while the simulation cost is dominated by four exponentiations.

We turn the proofs non-interactive using the standard Fiat-Shamir [21] heuristic, in which case the proof is a pair of integers \((\beta , \gamma )\). We denote the algorithm for generating a non-interactive proof \(\pi _{\mathrm {eq}}\) that \(\log _{g} x= \log _{y} z\) by \(\mathsf {ZPrv}_{\mathrm {eq}}(a; x, y, z)\). The algorithm for verifying that \(\pi _{\mathrm {eq}}\) is a valid proof of this claim is denoted by \(\mathsf {ZVfy}_{\mathrm {eq}}(\pi _{\mathrm {eq}}; x, y, z)\), which outputs 1 if and only if the proof is valid.

Based on this proof of equality for a pair of discrete logarithms, an OR-proof for the equality of one out of two pairs of discrete logarithms can be constructed using standard techniques [18].

Briefly, the prover chooses a random challenge \(\beta _{1-b}\) and uses the perfect simulator \(\mathsf {ZSim}_{\mathrm {eq}}(\dots )\) to generate a simulated proof for the inequal pair. It then runs the equal d.log. prover which produces a commitment. When the verifier responds with a challenge \(\beta \), the prover completes the proof for the equal pair using the challenge \(\beta _{b}= \beta - \beta _{1-b}\). It then responds with both challenges and both responses. The verifier checks that the challenges sum to \(\beta \).

We denote the special honest verifier simulator by

$$ \mathsf {ZSim}_{\mathrm {eq}, \mathrm {or}}(x_0, x_1, y_0, y_1, z_0, z_1; \beta _0, \beta _1) $$

We note that for any given challenge pair \((\beta _0, \beta _1)\), the simulator generates a particular transcript with probability \(1/p^2\).

Again, we can turn these proofs non-interactive using Fiat-Shamir and a hash function \(H_2\). In this case, the proof is a tuple \((\beta _0, \beta _1, \gamma _0, \gamma _1)\) of integers, and the verifier additionally checks that the hash value equals the sum of \(\beta _0\) and \(\beta _1\). The non-interactive algorithms for generating and verifying proofs are denoted by \(\mathsf {ZPrv}_{\mathrm {eq}, \mathrm {or}}(b, a_{b}; x_0, x_1, y_0, y_1, z_0, z_1)\) and \(\mathsf {ZVfy}_{\mathrm {eq}, \mathrm {or}}(\pi _{\mathrm {eq}, \mathrm {or}};x_0, x_1, y_0, y_1, z_0, z_1)\). The cost of generating a proof is dominated by the two exponentiations for the real equality proof and the four exponentiations for the fake equality proof.

As usual, the simulator is perfect. In addition, these proofs have very strong properties in the random oracle model.

Theorem 3

Let \(\mathcal {A} \) be an algorithm in the random oracle model, making at most \(l\) hash queries, that outputs a tuple \((x_0, x_1, y_0, y_1, z_0, z_1)\) of group elements and a proof \(\pi _{\mathrm {eq}, \mathrm {or}}\). The probability that \(\mathsf {ZVfy}_{\mathrm {eq}, \mathrm {or}}(\pi _{\mathrm {eq}, \mathrm {or}}; x_0, x_1, y_0, y_1, z_0, z_1) = 1\), but \((x_0, y_0, z_0) \not \in \mathcal {DDH}\) and \((x_1, y_1, z_1) \not \in \mathcal {DDH}\), is at most \(\frac{l+1}{p}\).

The proof of the theorem is straightforward and is implicit in e.g. Goh and Jarecki [25].

Digital Signatures. A digital signature scheme consists of a triple \((\mathsf {Gen}, \mathsf {Sign}, \mathsf {Vfy})\) of algorithms. The key generation algorithm \(\mathsf {Gen} \) (possibly taking a set of parameters \(\varPi \) as input) outputs a key pair \(( vk , sk )\). The signing algorithm \(\mathsf {Sign} \) takes a signing key \( sk \) and a message \(m\) as input and outputs a signature \(\sigma \). The verification algorithm \(\mathsf {Vfy} \) takes a verification key \( vk \), a message \(m\) and a signature \(\sigma \) as input and outputs 0 or 1. For correctness, we require that for all \(( vk , sk ) \leftarrow \mathsf {Gen} \) we have that \( \Pr [ \mathsf {Vfy} ( vk , m, \mathsf {Sign} ( sk , m)) ] = 1 \text{. } \)

3 Signatures with Tight Multi-User Security

Now we are ready to describe our signature scheme with tight multi-user security in a “real-world” security model with adaptive corruptions.

3.1 Security Definition

We define multi-user existential unforgeability under adaptive chosen-message attacks with adaptive corruptions, called \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\) security in [2]. Consider the following game between a challenger \(\mathcal {C}\) and an adversary \(\mathcal {A}\), which is parametrized by the number of public keys \(\mu \).

  1. 1.

    For each \(i \in [\mu ]\), it computes \(( pk ^{(i)}, sk ^{(i)})\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathsf {Gen} (\varPi )\). Furthermore, it initializes a set \(\mathcal {S}^\textsf {corr}\) to keep track of corrupted keys, and \(\mu \) sets \(\mathcal {S}_1,\ldots ,\mathcal {S}_\mu \), to keep track of chosen-message queries. All sets are initially empty. Then it outputs \(( pk ^{(1)},\ldots , pk ^{(\mu )})\) to \(\mathcal {A} \).

  2. 2.

    \(\mathcal {A} \) may now issue two different types of queries. When \(\mathcal {A} \) outputs an index \(i \in [\mu ]\), then \(\mathcal {C}\) updates \(\mathcal {S}^\textsf {corr} := \mathcal {S}^\textsf {corr} \cup \{i\}\) and returns \( sk ^{(i)}\). When \(\mathcal {A}\) outputs a tuple (mi), then \(\mathcal {C}\) computes \(\sigma := \mathsf {Sign} (sk_i,m)\), adds \((m,\sigma )\) to \(\mathcal {S}_i\), and responds with \(\sigma \).

  3. 3.

    Eventually \(\mathcal {A}\) outputs a triple \((i^*,m^*,\sigma ^*)\).

Definition 5

Let \(\mathcal {A} \) be a \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr} \)-adversary against a signature scheme \(\varSigma = (\mathsf {Gen}, \mathsf {Sign}, \mathsf {Vfy})\). The advantage of \(\mathcal {A} \) is

$$\begin{aligned} \mathrm{Adv}^{\mathsf {euf}\text {-}\mathsf {cma}}_{\varSigma }{(\mathcal {A})} = \Pr \left[ (m^*,i^*,\sigma ^*) \leftarrow \mathcal {A} ^{\mathcal {C}} : \begin{aligned}&i^*\not \in \mathcal {S}^\textsf {corr} \wedge (m^*,\cdot ) \not \in \mathcal {S}_{i^*}\\ \wedge&\mathsf {Vfy} (vk^{(i^*)},m^*,\sigma ^*) = 1 \end{aligned} \right] \text{. } \end{aligned}$$

We say that \(\mathcal {A}\) \((t,\epsilon ,\mu )\)-breaks the \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr} \)-security of \(\varSigma \) if \(\mathcal {A} \) runs in time t and \(\mathrm{Adv}^{\mathsf {euf}\text {-}\mathsf {cma}}_{\varSigma }{(\mathcal {A})} \ge \epsilon \). Here, we include the running time of the security experiment into the running time of \(\mathcal {A}\).

Remark 1

We include the running time of the security experiment into the running time t of \(\mathcal {A}\), because this makes it slightly simpler to analyse the running time of our reduction precisely. Let \(t_\mathsf {Exp}\) denote the time required to run the security experiment alone, and let \(t_\mathcal {A} \) be the running time of the adversary alone. Given that the experiment can be implemented very efficiently, we may assume \(t_\mathcal {A} \ge t_\mathsf {Exp}\) for any conceivable adversary \(\mathcal {A}\), so this increases the running time at most by a small constant factor. It allows us to make the analysis of our reduction more rigorous.

3.2 Construction

Let be a hash function from a randomness set \(R\) and a message space \(\{0,1\}^*\) to the group . The digital signature scheme \(\varSigma _{\text {mu}}\) works as follows:

 

Key generation. :

Sample \(b\leftarrow \{ 0, 1 \}\), \(a_{b}\leftarrow \{ 0, 1, \dots , p-1 \}\) and . Compute \(x_{b}\leftarrow g^{a_{b}}\). The signing key is \( sk = (b, a_{b})\) and the verification key is \( vk = (x_0, x_1)\).

Signing. :

To sign a message \(m\) using signing key \( sk = (b, a_{b})\), sample \(t\leftarrow R\) and , let \(y= H_1(t, m)\) and compute \(z_{b}\leftarrow y^{a_{b}}\). Then create a non-interactive zero knowledge proof

$$ \pi _{\mathrm {eq}, \mathrm {or}}\leftarrow \mathsf {ZPrv}_{\mathrm {eq}, \mathrm {or}}(b, a_{b}; x_0, x_1, y, y, z_0, z_1) $$

proving that \(\log _{g} x_0= \log _{y} z_0\) or \(\log _{g} x_1= \log _{y} z_1\). The signature is \(\sigma = (t, z_0, z_1, \pi _{\mathrm {eq}, \mathrm {or}})\).

Verification. :

To verify a signature \(\sigma = ( t, z_0, z_1, \pi _{\mathrm {eq}, \mathrm {or}})\) on a message \(m\) under verification key \( vk = (x_0, x_1)\), compute \(y= H_1(t, m)\) and verify that \(\pi _{\mathrm {eq}, \mathrm {or}}\) is a proof of the claim that \(\log _{g} x_0= \log _{y} z_0\) or \(\log _{g} x_1= \log _{y} z_1\) by checking that \(\mathsf {ZVfy}_{\mathrm {eq}, \mathrm {or}}(\pi _{\mathrm {eq}, \mathrm {or}}; x_0, x_1, y, y, z_0, z_1) = 1\).

 

The correctness of the scheme follows directly from the correctness of the non-interactive zero knowledge proof.

Theorem 4

Let \(\mathcal {S}\) be a forger for the signature scheme \(\varSigma _{\text {mu}}\) in the random oracle model, making at most \(l\) hash queries (with no repeating queries), interacting with at most \(\mu \) users and asking for at most \(n\) signatures. Then there exists adversaries \(\mathcal {B}\) and \(\mathcal {C}\) against DDH and CDH, respectively, such that

The difference in running time is linear in \(\mu +l+n\).

3.3 Proof of Theorem 4

The proof proceeds as a sequence of games between a simulator and a forger for the signature scheme. For each game \(G_i\), there is an event \(E_i\) corresponding to the adversary “winning” the game. We prove bounds on the differences \(\Pr [E_i] - \Pr [E_{i+1}]\) for consecutive games, and finally bound the probability \(\Pr [E_5]\) for the last game. Our claim follows directly from these bounds in the usual fashion.

figure a

Let \(E_0\) be the event that the adversary produces a valid forgery (and let \(E_i\) be the corresponding event for the remaining games). We have that

$$\begin{aligned} \mathrm{Adv}^{\mathsf {euf}\text {-}\mathsf {cma}}_{\varSigma _\mathrm{mu}}{(\mathcal {S})} = \Pr [E_0] \text{. } \end{aligned}$$
(1)
figure b

Since the challenge in the simulated conversation has been chosen uniformly at random, this change is not observable unless the random oracle \(H_2\) had been queried at this exact position before the reprogramming, and the reprogramming attempt fails.

As discussed in Sect. 2, the simulator will choose any particular proof with probability at most \(1/p^2\), so the probability that any reprogramming attempt fails is at most \(l/p^2\). The probability of the exceptional event, that at least one of the \(n\) attempts fail, is then upperbounded by \(nl/p^2\), giving us

$$\begin{aligned} | \Pr [E_1] - \Pr [E_0] | \le nl/p^2 \text{. } \end{aligned}$$
(2)
figure c

Since \(t\) is sampled from a set \(R\) with \(|R|\) elements, if there are at most \(l\) hash queries in the game, the probability that any one reprogramming attempt fails is at most \(l/|R|\). The probability of the exceptional event, that at least one of the \(n\) attempts fail, is then upperbounded by \(nl/|R|\), giving us

$$\begin{aligned} | \Pr [E_2] - \Pr [E_1] | \le \frac{nl}{|R|} \text{. } \end{aligned}$$
(3)
figure d

In the original key generation algorithm, \(x_{1-b}\) is sampled from the uniform distribution on the group. The key value \(a_{1-b}\) is sampled from the uniform distribution on \(\{ 0, 1, \dots , p-1 \}\), so \(x_{1-b}\) will also be sampled from the same distribution in this game. Since \(a_{1-b}\) is never used and never revealed, this game is indistinguishable from the previous game and

$$\begin{aligned} \Pr [E_3] = \Pr [E_2] \text{. } \end{aligned}$$
(4)
figure e
Fig. 1.
figure 1

\(\mu +n\)-DDH distinguisher \(\mathcal {B}'\) used in the proof of Theorem 4.

To bound the difference between this game and the previous one, we need the auxillary \(\mu +n\)-DDH distinguisher \(\mathcal {B}'\) given in Fig. 1.

Regardless of which oracle \(\mathcal {B}'\) interacts with, the verification key element \(x_{1-b}\) and \(y\) are sampled from the uniform distribution on , just like it is in both this game and the previous game.

When the adversary \(\mathcal {B}'\) interacts with the oracle \(\mathcal {O} _1\) which returns random tuples, then the oracle samples its third coordinate from the uniform distribution on , and this value is independent of all other values. Thus \(z_{1-b}\) is sampled from the uniform distribution on , just like in Game 3.

When the adversary \(\mathcal {B}'\) interacts with the oracle \(\mathcal {O} _0\) which returns DDH tuples, then \((x_{1-b}, y, z_{1-b})\) is a DDH tuple, just like in Game 4.

We conclude that \(\mathcal {B}'\) perfectly simulates the two games, depending on which oracle it has access to, and by Theorem 2 it follows that there exists a DDH adversary \(\mathcal {B}\) such that

figure f

At this point, we observe that in this game, the adversary has no information about \(b\) for any of the unrevealed keys.

figure g

Since \(y^{a_{1-b}} = (g^{\xi })^{a_{1-b}} = (g^{a_{1-b}})^{\xi } = x_{1-b}^{\xi }\), the adversary cannot detect this change. Therefore

$$\begin{aligned} \Pr [E_5] = \Pr [E_4] \text{. } \end{aligned}$$
(6)

Note that in this game, the fake signing key \(a_{1-b}\) introduced in Game 3 is no longer actually used for anything except computing \(x_{1-b}\).

Fig. 2.
figure 2

\(l\)-CDH adversary \(\mathcal {C}'\) used in the proof of Theorem 4.

Suppose the adversary wins Game 5 by outputting a signature \((t, z_0, z_1, \pi _{\mathrm {eq}, \mathrm {or}})\) for a message \(m\) and hash \(y= H_1(t, m)\) under the verification key \((x_0, x_1)\) with signature key \((b, a_0, a_1)\).

Since we can recover a tuple \((x_0, x_1, y, y, z_0, z_1)\) and a proof \(\pi _{\mathrm {eq}, \mathrm {or}}\), we would like to apply Theorem 3. But this is tricky because we simulate proofs and reprogram the random oracle involved in the theorem. However, since the adversary’s forgery must be on a message that has not been signed by our signature oracle, the forgery cannot involve any value for which we have reprogrammed the random oracle, unless the adversary has found a collision in \(H_1\). This collision must involve a \((t,m)\) pair from a signing query, which means that the probability of a collision is at most \(ln/2p\).

When there is no such collision, Theorem 3 applies and we know that either \(\log _{y} z_0= \log _{g} x_0\) or \(\log _{y} z_1= \log _{g} x_1\) (or both), except with probability \((l+1)/p\).

Since the forger \(\mathcal {S}\) has no information about \(b\), it follows that if equality holds for one of the discrete logarithm pairs, then \(\log _{y} z_{1-b}= \log _{g} x_{1-b}\) at least half the time.

Consider the \(l\)-CDH adversary \(\mathcal {C}'\) given in Fig. 2. It is clear that it perfectly simulates Game 5 with the adversary \(\mathcal {S}\). Furthermore, when the output signature satisfies \(\log _{y} z_{1-b}= \log _{g} x_{1-b}\), the \(l\)-CDH adversary outputs the correct answer. By Theorem 1 there exists a CDH adversary \(\mathcal {C}\) such that

(7)

Theorem 4 now follows from Eqs. (1)–(7).

4 Key Exchange

Now we describe our construction of a tightly-secure key exchange protocol, which uses the signature scheme presented above as a subroutine and additionally resolves the “commitment-problem” sketched in the introduction. This yields the first authenticated key exchange protocol which does not require a trusted setup, has tight security, and truly practical efficiency. The security proof is in the Random Oracle Model [6].

4.1 Security Model

Up to minor notational changes and clarifications, our security model is identical to the model from [2], except that we use the recent approach of Li and Schäge [41] to define “partnering” of oracles. Furthermore, we include a “sender identifier” into the \(\mathsf {Send}\) query (its relevance is discussed below). As in [2], we let the adversary issue more than one \(\mathsf {Test}\)-query, in order to achieve tightness in this dimension, too.

Execution Environment. We consider \(\mu \) parties \(P_1,\ldots ,P_\mu \). Each party \(P_i\) is represented by a set of \(\ell \) oracles, \(\{\pi _i^1, \ldots , \pi _i^{\ell }\}\), where each oracle corresponds to a single protocol execution, and \(\ell \in \mathbb {N}\) is the maximum number of protocol sessions per party. Each oracle is equipped with a randomness tape containing random bits, but is otherwise deterministic. Each oracle \(\pi _i^s\) has access to the long-term key pair \((sk^{(i)},pk^{(i)})\) of party \(P_i\) and to the public keys of all other parties, and maintains a list of internal state variables that are described in the following:

  • \(\rho _i^s\) is the randomness tape of \(\pi _i^s\).

  • \(\mathsf {Pid}_i^s\) stores the identity of the intended communication partner.

  • \(\varPsi _i^s \in \{\mathtt {accept},\mathtt {reject} \}\) indicates whether oracle \(\pi _i^s\) has successfully completed the protocol execution and “accepted” the resulting key.

  • \(k_i^s\) stores the session key computed by \(\pi _i^s\).

For each oracle \(\pi _i^s\) these variables are initialized as \((\mathsf {Pid}_i^s,\varPsi _i^s,k_i^s) = (\emptyset ,\emptyset ,\emptyset )\), where \(\emptyset \) denotes the empty string. The computed session key is assigned to the variable \(k_i^s\) if and only if \(\pi _i^s\) reaches the \(\mathtt {accept} \) state, that is, we have \( k_i^s \ne \emptyset \iff \varPsi _i^s = \mathtt {accept} \).

Attacker Model. The attacker \(\mathcal {A} \) interacts with these oracles through queries. Following the classical Bellare-Rogaway approach [7], we consider an active attacker that has full control over the communication network, and to model further real world capabilites of an attacker, we provide additionally queries. The \(\mathsf {Corrupt}\)-query allows the adversary to compromise the long-term key of a party. The \(\mathsf {Reveal}\)-query may be used to obtain the session key that was computed in a previous protocol instance. The \(\mathsf {RegisterCorrupt}\) enables the attacker to register maliciously-generated public keys, and we do not require the adversary to know the corresponding secret key. The \(\mathsf {Test}\)-query does not correspond to any real world capability of an adversary, but it is used to evaluate the advantage of \(\mathcal {A} \) in breaking the security of the key exchange protocol. However, we do not allow reveals of ephemeral randomness, as in [8, 14]. More precisely:

  • \(\mathsf {Send}(i,s,j,m)\): \(\mathcal {A} \) can use this query to send any message m of its choice to oracle \(\pi _i^s\) on behalf of party \(P_j\). The oracle will respond according to the protocol specification and depending on its internal state.

    If \((\mathsf {Pid}_i^s, \varPsi _i^s) = (\emptyset , \emptyset )\) and \(m = \emptyset \), then this means that \(\mathcal {A}\) initiates a protocol execution by requesting \(\pi _i^s\) to send the first protocol message to party \(P_j\). In this case, \(\pi _i^s\) will set \(\mathsf {Pid}_i^s = j\) and respond with the first message according to the protocol specification.

    If \((\mathsf {Pid}_i^s, \varPsi _i^s) = (\emptyset , \emptyset )\) and \(m \ne \emptyset \), then this means that \(\mathcal {A}\) sends a first protocol message from party \(P_j\) to \(\pi _i^s\). In this case, \(\pi _i^s\) will set \(\mathsf {Pid}_i^s = j\) and respond with the second message according to the protocol specification. This is the only reason why we include the “partner identifier” j in the \(\mathsf {Send}\) query.

    If \(\mathsf {Pid}_i^s = j' \ne \emptyset \) and \(j \ne j'\), then this means that the partner id of \(\pi _i^s\) has already been set to \(j'\), but the adversary issues a \(\mathsf {Send}\)-query with \(j \ne j'\). In this case, \(\pi _i^s\) will abort by setting \(\varPsi _i^s = \mathtt {reject} \) and responding with \(\bot \).

    Finally, if \(\pi _i^s\) has already rejected (that is, it holds that \(\varPsi _i^s = \mathtt {reject} \)), then \(\pi _i^s\) always responds with \(\bot \).

    If \(\mathsf {Send}(i,s,j,m)\) is the \(\tau \)-th query asked by \(\mathcal {A}\), and oracle \(\pi _i^s\) sets variable \(\varPsi _i^s = \mathtt {accept} \) after this query, then we say that \(\pi _i^s\) has \(\tau \)-accepted.

  • \(\mathsf {Corrupt}(i)\): This query returns the long-term secret key \(sk_i\) of party \(P_i\). If the \(\tau \)-th query of \(\mathcal {A}\) is \(\mathsf {Corrupt}(i)\), then we call \(P_i\) \(\tau \)-corrupted, or simply corrupted. If \(P_i\) is corrupted, then all oracles \(\pi _i^1, \ldots , \pi _i^\ell \) respond with \(\bot \) to all queries.

    We assume without loss of generality that \(\mathsf {Corrupt}(i)\) is only asked at most once for each i. If \(\mathsf {Corrupt}(i)\) has not yet been issued by \(\mathcal {A}\), then we say that party i is currently \(\infty \)-corrupted.

  • \(\mathsf {RegisterCorrupt}(i,pk^{(i)})\): This query allows \(\mathcal {A} \) to register a new party \(P_i\), \(i > \mu \), with public key \(pk^{(i)}\). If the same party \(P_i\) is already registered (either via \(\mathsf {RegisterCorrupt}\)-query or \(i \in [\mu ]\)), a failure symbol \(\bot \) is returned to \(\mathcal {A} \). Otherwise, \(P_i\) is registered, the pair \((P_i,pk^{(i)})\) is distributed to all other parties.

    Parties registered by this query are called adversarially-controlled. All parties controlled by the adversary are defined to be 0-corrupted. Furthermore, there are no oracles corresponding to these parties.

  • \(\mathsf {Reveal}(i,s)\): In response to this query \(\pi _i^s\) returns the contents of \(k^s_i\). Recall that we have \(k^s_i \ne \emptyset \) if and only if \(\varPsi ^s_i = \mathtt {accept} \). If \(\mathsf {Reveal}(i,s)\) is the \(\tau \)-th query issued by \(\mathcal {A}\), we call \(\pi _i^s\) \(\tau \)-revealed. If \(\mathsf {Reveal}(i,s)\) has not (yet) been issued by \(\mathcal {A}\), then we say that oracle \(\pi _i^s\) is currently \(\infty \)-revealed.

  • \(\mathsf {Test}(i,s)\): If \(\varPsi ^s_i \ne \mathtt {accept} \), then a failure symbol \(\bot \) is returned. Otherwise \(\pi _i^s\) flips a fair coin \(b_i^s\), samples \(k_0\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathcal {K}\) at random, sets \(k_1 = k^s_i\), and returns \(k_{b_i^s}\).

    The attacker may ask many \(\mathsf {Test}\)-queries to different oracles, but not more than one to each oracle. Jumping slightly ahead, we note that there exists a trivial adversary that wins with probability 1 / 4, if we allow \(\mathsf {Test}\)-queries of the above form to “partnered” oracles. In order to address this, we have to define partnering first. Then we will disallow \(\mathsf {Test}\)-queries to partnered oracles in the AKE security definition (Definition 7).

Partnering and original keys. In order to exclude trivial attacks, we need a notion of “partnering” of two oracles. Bader et al. [2] base their security definition on the classical notion of matching conversations of Bellare and Rogaway [7]. However, Li and Scháge [41] showed recently that this notion is error-prone and argued convincingly that it captures the cryptographic intuition behind “secure authenticated key exchange” in a very conservative way. This is because the strong requirement of matching conversation even rules out theoretical attacks based on “benign malleability” (e.g., efficient re-randomizability of signatures), which does not match any practical attacks, but breaks matching conversations, and thus seems stronger than necessary. This may hinder the design of simple and efficient protocols.

The new idea of [41] is to based “partnering” on an original key of a pair of oracles \((\pi _i^s,\pi _j^t)\). Recall that we consider an oracle \(\pi _i^s\) as a deterministic algorithm, but with access to a fixed randomness tape \(\rho _i^s\). The original key \(K_0 (\pi _i^s,\pi _j^t)\) of a pair of oracles \((\pi _i^s,\pi _j^t)\) consists of the session key that both oracles would have computed by executing the protocol with each other, and where \(\pi _i^s\) sends the first message. Note that \(K_0 (\pi _i^s,\pi _j^t)\) depends deterministically on the partner identities i and j and the randomness \(\rho _i^s\) and \(\rho _j^t\) of both oracles. Note also that for certain protocols it may not necessarily hold that \(K_0 (\pi _i^s,\pi _j^t) = K_0 (\pi _j^t,\pi _i^s)\), thus the order of oracles in the \(K_0 \) function matters.

Definition 6

(Partnering). We say that oracle \(\pi _i^s\) is partnered to oracle \(\pi _j^t\), if at least one of the following two condition holds.

  1. 1.

    \(\pi _i^s\) has sent the first protocol message and it holds that \(k_i^s = K_0 (\pi _i^s,\pi _j^t)\)

  2. 2.

    \(\pi _i^s\) has received the first protocol message and it holds that \(k_i^s = K_0 (\pi _j^t,\pi _i^s)\)

Security experiment. Consider the following game, played between an adversary \(\mathcal {A} \) and a challenger \(\mathcal {C} \). The game is parameterized by two numbers \(\mu \) (the number of honest identities) and \(\ell \) (the maximum number of protocol executions per party).

  1. 1.

    \(\mathcal {C}\) generates \(\mu \) long-term key pairs \((sk^{(i)},pk^{(i)}), i \in [ \mu ]\). It provides a \(\mathcal {A} \) with all public keys \(pk^{(1)},\ldots ,pk^{(\mu )}\).

  2. 2.

    The challenger \(\mathcal {C} \) provides \(\mathcal {A}\) with the security experiment, by implementing a collection of oracles \(\{\pi _i^s : i \in [\mu ], s \in [\ell ]\}\). \(\mathcal {A}\) may adaptively issue \(\mathsf {Send}\), \(\mathsf {Corrupt}\), \(\mathsf {Reveal}\), \(\mathsf {RegisterCorrupt}\) and \(\mathsf {Test}\) queries to these oracles in arbitrary order.

  3. 3.

    At the end of the game, \(\mathcal {A} \) terminates and outputs \((i,s,b')\), where (is) specifies an oracle \(\pi _i^s\) and \(b'\) is a guess for \(b_i^s\).

We write \(G_\varPi (\mu ,\ell )\) to denote this security game, carried out with parameters \(\mu ,\ell \) and protocol \(\varPi \).

Definition 7

(AKE Security). An attacker \(\mathcal {A}\) breaks the security of protocol \(\varPi \), if at least one of the following two events occurs in \(G_\varPi (\mu ,\ell )\):

Attack on authentication. Event \(\mathsf {break} _\mathrm {A} \) denotes that at any point throughout the security experiment there exists an oracle \(\pi _i^s\) such that all the following conditions are satisfied.

  1. 1.

    \(\pi ^s_i\) has accepted, that is, it holds that \(\varPsi _i^s = \mathtt {accept} \).

  2. 2.

    It holds that \(\mathsf {Pid}_i^s = j\) for some \(j \in [\mu ]\) and party \(P_j\) is \(\infty \)-corrupted.

  3. 3.

    There exists no unique oracle \(\pi _j^t\) that \(\pi ^s_i\) is partnered to.

Attack on key indistinguishability. We assume without loss of generality that \(\mathcal {A}\) issues a \(\mathsf {Test}(i,s)\)-query only to oracles with \(\varPsi _i^s = \mathtt {accept} \), as otherwise the query returns always returns \(\bot \). We say that event \(\mathsf {break} _\mathrm {KE} \) occurs if \(\mathcal {A} \) outputs \((i,s,b')\) and all the following conditions are satisfied.

  1. 1.

    \(\mathsf {break} _\mathrm {A} \) does not occur throughout the security experiment.

  2. 2.

    The intended communication partner of \(\pi ^s_i\) is not corrupted before the \(\mathsf {Test}(i,s)\)-query. Formally, if \(\mathsf {Pid}_i^s = j\) and \(\pi ^s_i\) is \(\tau \)-tested, then it holds that \(j \le \mu \) and party \(P_j\) is \(\tau '\)-corrupted with \(\tau ' \ge \tau \).

  3. 3.

    The adversary never asks a \(\mathsf {Reveal}\)-query to \(\pi _i^s\). Formally, we require that \(\pi _i^s\) is \(\infty \)-revealed throughout the security experiment.

  4. 4.

    The adversary never asks a \(\mathsf {Reveal}\)-query to the partner oracle of \(\pi _i^s\).Footnote 6 Formally, we demand that \(\pi _j^t\) is \(\infty \)-revealed throughout the security experiment.

  5. 5.

    \(\mathcal {A}\) answers the \(\mathsf {Test}\)-query correctly. That is, it holds that \(b_i^s = b'\), and if there exists an oracle \(\pi _j^t\) that \(\pi ^s_i\) is partnered to, then \(\mathcal {A}\) must not have asked \(\mathsf {Test}(j,t)\).

The advantage of the adversary \(\mathcal {A} \) against AKE security of \(\varPi \) is

$$\begin{aligned} \mathrm{Adv}^{\mathsf {AKE}}_{\varPi }{(\mathcal {A})} = \max \left\{ \Pr \left[ \mathsf {break} _\mathrm {A} \right] , |\Pr \left[ \mathsf {break} _\mathrm {KE} \right] - 1/2| \right\} \text{. } \end{aligned}$$

We say that \(\mathcal {A} \) \((\epsilon _\mathcal {A} , t, \mu , \ell )\)-breaks \(\varPi \) if its running time is t and \(\mathrm{Adv}^{\mathsf {AKE}}_{\varPi }{(\mathcal {A})} \ge \epsilon _\mathcal {A} \). Again, we include the running time of the security experiment into the running time of \(\mathcal {A}\) (cf. Remark 1).

Remark 2

Note that Definition 7 defines event \(\mathsf {break} _\mathrm {KE} \) such that it occurs only if \(\mathsf {break} _\mathrm {A} \) does not occur. We stress that this is without loss of generality. It makes the two possible ways to break the security of the protocol mutually exclusive, which in turn makes the reasoning in a security proof slightly simpler.

Remark 3

Note that an oracle \(\pi _i^s\) may be corrupted before the \(\mathsf {Test}(i,s)\)-query. This provides security against key-compromise impersonation attacks. Furthermore, the communication partner \(\pi _j^t\) may be corrupted as well, but only after \(\pi _i^s\) has accepted (to prevent the trivial impersonation attack), which provides forward security (aka. perfect forward secrecy).

4.2 Construction

In this section, we construct our protocol, based on a digital signature scheme \(\varSigma = (\mathsf {Gen}, \mathsf {Sign}, \mathsf {Vfy})\), a prime-order group \(\mathbb {G} \) of order p with generator g, and cryptographic hash functions \(G : \{0,1\}^* \rightarrow \{0,1\}^\kappa \) and \(H : \{0,1\}^* \rightarrow \{0,1\}^d \) for some \(d \in \mathbb {N} \).

Protocol description. Let us consider a protocol execution between two parties Alice and Bob. The protocol is essentially the classical “signed Diffie-Hellman” with hashed session key, except that there is an additional first message which contains a cryptographic commitment to the Diffie-Hellman share \(g^a\) of the initiator of the protocol. This adds another message to the protocol, but is an important ingredient to achieve tightness, along the lines sketched in the introduction. We stress that this additional message does not increase the latency of the protocol. That is, the protocol initiator is able to send cryptographically-protected payload data after one round-trip time (RTT), exactly as with ordinary signed Diffie-Hellman. A minimal code sketch of the message flow is given after the numbered steps below.

Fig. 3. Basic protocol outline.

Each party is in possession of a long-term key pair \(( pk , sk )\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathsf {Gen} (1^\kappa )\) for signature scheme \(\varSigma \). We write \(( pk _A, sk _A)\) and \(( pk _B, sk _B)\) to denote the key pair of Alice and Bob, respectively. If Alice initiates a key exchange, then both parties proceed as follows.

  1.

    Alice chooses a random exponent \(a\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathbb {Z} _p\), computes \(u := G(g^a)\), and sends u to Bob.

  2.

    When Bob receives u, he picks \(b\,{\mathop {\leftarrow }\limits ^{{\scriptscriptstyle \$}}}\,\mathbb {Z} _p\) and defines his local transcript of messages as \(T_B = u||g^b||\mathtt {sr}\), where \(\mathtt {sr}\) is a constant that indicates that Bob acts as a server in this session. Then he computes \(\sigma _B := \mathsf {Sign} ( sk _B,T_B)\), and responds with \(v := (g^b, \sigma _B)\) to Alice.

  3.

    When Alice receives \(v := (g^b, \sigma _B)\), she first defines her local view of Bob’s transcript as \(T_B' = u||g^b||\mathtt {sr}\) and checks \(\mathsf {Vfy} ( pk _B,T_B', \sigma _B) {\mathop {=}\limits ^{{\scriptscriptstyle ?}}}1\). If not, then she terminates the protocol execution and sets \(\varPsi _A := \mathtt {reject} \). Otherwise, she defines her local transcript as \(T_A = u||v||g^a||\mathtt {cl}\), where \(\mathtt {cl} \ne \mathtt {sr}\) is a constant indicating that Alice acts as a client. Then she computes \(\sigma _A := \mathsf {Sign} ( sk _A,T_A)\) and sends \(w := (g^a, \sigma _A)\) to Bob. Furthermore, she first computes an “internal Diffie-Hellman key” \(\hat{k}_A = g^{ab}\), and then the actual session key as \(k_A = H(\hat{k}_A)\), and sets \(\varPsi _A := \mathtt {accept} \).

  4.

    When Bob receives \(w := (g^a, \sigma _A)\), he first defines his local view of Alice’s transcript as \(T_A' = u||v||g^a||\mathtt {cl}\) and checks whether \(\mathsf {Vfy} ( pk _A,T_A', \sigma _A) = 1\) and whether \(g^a\) matches the commitment from the first message, that is, whether \(u = G(g^a)\). If one of these checks fails, then he sets \(\varPsi _B := \mathtt {reject} \) and terminates. Otherwise he first computes his “internal Diffie-Hellman key” \(\hat{k}_B = g^{ab}\), and then the actual session key \(k_B = H(\hat{k}_B)\), and sets \(\varPsi _B := \mathtt {accept} \).
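
The following is a minimal executable sketch of this message flow. It is illustrative only: the modulus is a toy stand-in for the prime-order group \(\mathbb {G} \), the placeholder sign/verify (an HMAC with pk = sk) is not the signature scheme of this paper, and both G and H are instantiated with SHA-256.

```python
# Toy sketch of the three-message flow. NOT secure: illustrative modulus,
# placeholder signature, hash functions built from SHA-256.
import hashlib
import hmac
import secrets

p = 2**255 - 19   # toy modulus; a real instantiation uses a standard prime-order group
g = 5

def G(x: bytes) -> bytes:      # commitment hash G : {0,1}* -> {0,1}^kappa
    return hashlib.sha256(b"G" + x).digest()

def H(x: bytes) -> bytes:      # key-derivation hash H : {0,1}* -> {0,1}^d
    return hashlib.sha256(b"H" + x).digest()

def sign(sk: bytes, msg: bytes) -> bytes:               # placeholder for Sign(sk, .)
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(pk: bytes, msg: bytes, sig: bytes) -> bool:  # placeholder for Vfy(pk, ., .)
    return hmac.compare_digest(sign(pk, msg), sig)

def enc(x: int) -> bytes:                               # fixed-length element encoding
    return x.to_bytes(32, "big")

sk_A = pk_A = secrets.token_bytes(32)                   # long-term keys (placeholder)
sk_B = pk_B = secrets.token_bytes(32)

# 1. Alice -> Bob: u = G(g^a)
a = secrets.randbelow(p - 2) + 1
u = G(enc(pow(g, a, p)))

# 2. Bob -> Alice: v = (g^b, Sign(sk_B, T_B)) with T_B = u || g^b || "sr"
b = secrets.randbelow(p - 2) + 1
gb = pow(g, b, p)
sigma_B = sign(sk_B, u + enc(gb) + b"sr")
v = (gb, sigma_B)

# 3. Alice -> Bob: w = (g^a, Sign(sk_A, T_A)) with T_A = u || v || g^a || "cl";
#    Alice accepts and derives k_A = H(g^ab).
gb_recv, sig_B = v
assert verify(pk_B, u + enc(gb_recv) + b"sr", sig_B)
ga = pow(g, a, p)
sigma_A = sign(sk_A, u + enc(gb_recv) + sig_B + enc(ga) + b"cl")
w = (ga, sigma_A)
k_A = H(enc(pow(gb_recv, a, p)))

# Bob checks the signature and the commitment, then accepts with k_B = H(g^ab).
ga_recv, sig_A = w
assert verify(pk_A, u + enc(gb) + sigma_B + enc(ga_recv) + b"cl", sig_A)
assert u == G(enc(ga_recv))
k_B = H(enc(pow(ga_recv, b, p)))

assert k_A == k_B
```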

Remark 4

We make the “internal Diffie-Hellman key” explicit in the above description, because it will be useful to refer to it in order to define a certain event in the security proof.

Remark 5

We point out that the signatures \(\sigma _A\) and \(\sigma _B\) over \(T_A = u||v||g^a||\mathtt {cl}\) and \(T_B = u||g^b||\mathtt {sr}\) protect the whole message transcripts, which is more than actually necessary for our security proof (for which signing \(g^a\) and \(g^b\), respectively, would actually be sufficient). However, this is not only a more conservative design, but also facilitates a future security proof of the protocol in a security model based on matching conversations, such as the one from [2].

This seems easily possible, by instantiating the protocol with a strongly \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-secure signature scheme in the sense of [13]. Indeed, our signature scheme can easily be made tightly strongly-unforgeable by applying the generic transformation of [49], but this would increase the size of signatures by one group element and one exponent. We leave it as an interesting open problem to prove tight strong \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr}\)-security directly for our signature scheme, without increasing the size of signatures.

Correctness. It is straightforward to verify that this protocol is correct.

Efficiency and latency. At a first glance, our protocol seems less efficient than ordinary signed Diffie-Hellman, because the additional message u adds another protocol round and thus latency. We stress that this is actually not the case, for typical applications. Consider a setting where Alice (a client) wants to send cryptographically protected payload data to a server (Bob). To this end, she initiates the protocol by sending message u. Then she waits for message v, which takes about 1 RTT (round trip time). Finally, she computes message w. At this time Alice has already accepted the key exchange, in particular she has computed the key \(k_A\). This means that she can immediately send cryptographically protected payload data along with message w. Thus, the latency overhead of our protocol, defined as the time that Alice has to wait before she can send cryptographically protected payload, is only 1 RTT.

Now let us compare this to standard signed Diffie-Hellman, which essentially corresponds to our protocol restricted to messages v and w, without the additional commitment message u. In the same setting as above, the client Alice would now send the first protocol message v and then wait for w, which again takes 1 RTT. Only then is Alice able to compute the session key, and use it to send cryptographically protected payload. Thus, even though one message less is sent, it still takes 1 RTT before the session key can be used by Alice.

Thus, while our tightly-secure protocol uses an additional message u, this message does not increase the latency of key establishment at all. Furthermore, message u can be as small as 20–32 bytes in practice, so that the total communication overhead incurred by the key exchange protocol is not significantly increased. At the same time, the best known security proof of signed Diffie-Hellman even has a quadratic security loss. In contrast, our protocol achieves tightness with only constant security loss, without significantly increasing latency or communication complexity.

Efficiency in real-world PKI settings. As usual in cryptographic theory, our security model considers a setting where each party “magically” has access to all public keys of all other parties. In practice, this is not realistic. Instead, in real-world protocols like TLS [20], public keys are typically exchanged within the protocol, along with certificates attesting their authenticity. Often this requires additional protocol rounds, and thus adds further messages and latency to the protocol.

We point out that our protocol does not require any such additional protocol rounds when used in a real-world PKI setting. Concretely, we could simply extend message v to \(v = (g^b, \sigma _B, pk _B, \mathtt {cert}_B)\), where \(( pk _B, \mathtt {cert}_B)\) is the certified public key of Bob. Message w would be adapted accordingly to \(w = (g^a, \sigma _A, pk _A, \mathtt {cert}_A)\), where \(( pk _A, \mathtt {cert}_A)\) is the certified public key of Alice.

Preventing unknown key-share (UKS) attacks. Blake-Wilson and Menezes [9] introduced UKS attacks, where a party Alice can be tricked into believing that she shares a key with Eve, even though the key is actually shared with a different party Bob. A simple generic method to prevent such attacks in protocols that use digital signatures for authentication (such as ours) is to include user identities in the signatures. In a real-world setting where certified public keys are exchanged during the protocol, one could sign the certificates along with all other messages.

Server-only authentication. Another important real-world application scenario is where only the server is authenticated cryptographically, while the client is not in possession of a long-term cryptographic key pair, and thus the protocol can only achieve unilateral authentication. This setting has been considered e.g. in [38] for TLS, and in [26, 42, 48] for more general key exchange protocols. While we do not model and prove it formally, we expect that our protocol achieves tight security also for server-only authentication, by adapting the security model from Sect. 4 and the proof to the unilateral setting. More precisely, in this setting we would consider a security model where we distinguish between client oracles (which are not in possession of a cryptographic long-term key) and server oracles in possession of long-term signature keys. For authentication, the proof is identical, except that event \(\mathsf {break} _\mathrm {A} \) is restricted to accepting client oracles. For key indistinguishability, we would allow \(\mathsf {Test}\)-queries only for sessions that involve a Diffie-Hellman share that originates from a client oracle controlled by the experiment (as otherwise the adversary is trivially able to win). In this case, we are able to embed a DDH challenge exactly as in the proof for mutual authentication.

Theorem 5

Consider protocol \(\varPi \) as defined above, where hash functions G and H are modeled as random oracles. Let \(\mathcal {A}\) be an adversary that \((\epsilon _\mathcal {A} ,t,\mu ,\ell )\)-breaks \(\varPi \). Then we can construct adversaries \(\mathcal {B} _\mathrm {A}\) and \(\mathcal {B} _\mathrm {KE}\) such that:

  1.

    Either \(\mathcal {B} _\mathrm {A}\) \((t',\epsilon ',\mu )\)-breaks the \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr} \)-security of \((\mathsf {Gen},\mathsf {Sign},\mathsf {Vfy})\) with \(t' = O(t)\) and \(\epsilon ' \ge \epsilon _\mathcal {A} - \mu ^2 \ell ^2/p\).

  2.

    Or \(\mathcal {B} _\mathrm {KE}\) \((t',\epsilon ')\)-breaks the decisional Diffie-Hellman assumption in \(\mathbb {G} \) with \(t' = O(t)\) and \(\epsilon ' \ge \epsilon _\mathcal {A} - t^2/2^{d} - \mu ^2\ell ^2/p - \mu ^2\ell ^2/2^{d} - \mu \ell t/p\).

The proof of Theorem 5 consists of two parts. First, we prove that any adversary breaking authentication in the sense of Definition 7 implies an algorithm breaking the \(\mathsf {MU\text {-}EUF\text {-}CMA}^\mathsf {corr} \)-security of the signature scheme. This part is standard, with a straightforward reduction. Then we prove key indistinguishability. This result contains the main novelty of our proof. It follows the approach sketched in the introduction very closely. Due to space limitations, the full proofs are given only in the full version, which can be found at the Cryptology ePrint Archive at https://eprint.iacr.org/2018/.

5 Efficiency Analysis

Let us compare our protocol from Sect. 4.2, instantiated with our signature scheme from Sect. 3.2, to plain “signed Diffie-Hellman” instantiated with EC-DSA. The latter is currently the most efficient practical instantiation of an authenticated key exchange protocol over simple groups with explicit authentication (in contrast, some protocols, such as NAXOS [40], do not provide explicit authentication via digital signatures, but only implicit authentication via indistinguishability of keys).

We consider a setting where both the signature scheme and the Diffie-Hellman key exchange are instantiated over the same group. This is desirable in practice for many different reasons. Most importantly, it reduces the size of the implementation. This makes the protocol not only faster to implement, but also easier to implement securely (e.g., constant-time and resilient to other side-channels) and easier to maintain, which are very desirable properties, from a real-world security point of view.

Furthermore, an implementation requiring a small codebase or circuit size is particularly desirable for resource-constrained devices, such as IoT devices, where tightness is particularly relevant due to the large number of devices in use.

Computational efficiency. In order to compare the efficiency of protocols, we count the number of exponentiations, as this is the most expensive computation to be performed. Below we will also briefly discuss the potential impact of optimisations.

 

Our protocol.

Each party running our protocol has to perform two exponentiations to perform the Diffie-Hellman key exchange, seven exponentiations to sign a message, and eight exponentiations to verify a signature. In total, this amounts to 17 exponentiations.

Signed Diffie-Hellman.

Executing the signed Diffie-Hellman protocol with EC-DSA takes two exponentiations to perform the Diffie-Hellman key exchange, one exponentiation to compute an EC-DSA signature, and two exponentiations to verify a signature. In total, this amounts to 5 exponentiations.

 

Thus, our protocol requires 3.4 times more exponentiations than signed Diffie-Hellman.

Theoretically-sound instantiations. Let us consider a desired security level equivalent to a 128-bit symmetric key.

  • Our protocol. The tightness of our security proof allows us to instantiate our protocol on a 256-bit elliptic curve group, such as the NIST P-256 curve, independent of the number of users or sessions.

  • Signed Diffie-Hellman. When instantiating plain “signed Diffie-Hellman”, we have to compensate for the quadratic security loss of \(Q = \mu ^2\ell ^2\) of the security proof, depending on the number of users \(\mu \) and the number of sessions \(\ell \), by choosing a larger group (a small helper computing the required group size is sketched after this list). For instance:

    • In a small-to-medium-scale setting with \(\mu = 2^{16}\) and \(\ell = 2^{16}\), the security loss already amounts to a factor of \(Q = 2^{64}\). In order to compensate for this with larger parameters, we have to increase the group size by a factor of \(Q^2=2^{128}\). We can do this by using the NIST P-384 curve.

    • In a large-scale setting with \(\mu = 2^{32}\) and \(\ell = 2^{32}\), the security loss even amounts to a factor of \(Q = 2^{128}\). In order to compensate for this with larger parameters, we have to increase the group size by a factor of \(Q^2=2^{256}\), e.g., by using the NIST P-521 curve.
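
To make the parameter arithmetic explicit, here is a small back-of-the-envelope helper (an illustration of the reasoning above, not part of the formal analysis): since generic-group discrete-logarithm security is roughly the square root of the group order, a loss of Q must be absorbed by multiplying the group order by \(Q^2\), i.e. by adding \(2\log _2 Q\) bits to the curve size.

```python
# Required curve size for a target symmetric-equivalent security level,
# assuming a security loss of Q = mu^2 * ell^2 absorbed by a larger group.
import math

def required_curve_bits(target_bits: int, mu: int, ell: int) -> int:
    log2_Q = 2 * (math.log2(mu) + math.log2(ell))
    return 2 * target_bits + int(2 * log2_Q)   # group order grows by Q^2

print(required_curve_bits(128, 1, 1))          # 256: tight proof, NIST P-256
print(required_curve_bits(128, 2**16, 2**16))  # 384: NIST P-384
print(required_curve_bits(128, 2**32, 2**32))  # 512: next standard curve is P-521
```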

Remark 6

To justify the numbers chosen above, let us consider Facebook as an example. Facebook lists 2.13 billion active users in December 2017, see https://newsroom.fb.com/company-info/. Even if we assume that each user performs only a single TLS handshake (that is, only a single login) per month, this amounts to about \(2^{31}\) executions of the TLS protocol per month, and about \(2^{34}\) per year (the lifetime of the certified public key). Since known security proofs for TLS have a quadratic security loss, we thus have a security loss of \(2^{68}\) already in the single-user setting where only Facebook is considered.

Comparison of computational efficiency. In order to estimate the time required for one exponentiation for different curves, we consider OpenSSL as an example. OpenSSL is a very widely-used and stable cryptographic library with good performance properties. The benchmark tests of elliptic curve Diffie-Hellman, which analyse the performance of different elliptic curves implemented by OpenSSL, can be run on a system where OpenSSL is installed by executing the command openssl speed ecdh.

We ran this benchmark on a MacBook Pro computer with 3.3 GHz Intel Core i7 CPU and 16 GB RAM, running Mac OS Version 10.13.2. Table 1 summarises the results for the considered NIST curves (P256, P384, P521), as well as suitable alternatives. Note that one ECDH operation for the P384 curve takes about 2.7 times longer than for P256, while for P521 it is even about 7.7 times longer. The results for other families of curves (K233/409/571 and B283/409/571) are comparable.

Table 1. OpenSSL Benchmark Results for NIST Curves

Comparison of communication complexity. Now let us compare the amount of data to be transmitted for a key exchange. Again, we consider “128-bit security”. We assume that each element of an n-bit elliptic curve group takes \(n+1\) bits, which can be achieved via standard point compression. The arithmetic below is reproduced in a short code sketch after this list.

  • Our protocol. This protocol requires the transmission of two group elements for the Diffie-Hellman key exchange, each consisting of 257 bits, plus two signatures (each consisting of a random 256-bit nonce, two group elements, and four 256-bit exponents, which yields 1794 bits), plus the first protocol message, which corresponds to one 256-bit value, if SHA-256 is used.

    In total, this yields \(2 \cdot 257 + 2 \cdot 1794 + 256 = 4358\) bits, which corresponds to \(\approx 545\) bytes.

  • Signed Diffie-Hellman. When instantiating plain “signed Diffie-Hellman” with EC-DSA, each party sends one group element plus one signature consisting of two exponents. This yields:

    • When using the NIST P-384 curve, this amounts to \(2 \cdot 385 + 4 \cdot 384 = 2306\) bits, which corresponds to \(\approx 289\) bytes.

    • In a large-scale setting with the NIST P-521 curve, this amounts to \(2 \cdot 522 + 4 \cdot 521 = 3128\) bits, or \(\approx 391\) bytes.
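
The byte counts above can be reproduced with the following short calculation (a sketch of the arithmetic in the text; the \(n+1\)-bit element size assumes point compression):

```python
# Reproducing the communication figures; sizes in bits, rounded up to bytes.
def to_bytes(bits: int) -> int:
    return -(-bits // 8)   # ceiling division

# Our protocol on P-256: commitment u (SHA-256 output) + two compressed group
# elements + two signatures (256-bit nonce, 2 group elements, 4 exponents each).
sig = 256 + 2 * 257 + 4 * 256            # 1794 bits per signature
ours = 256 + 2 * 257 + 2 * sig           # 4358 bits
print(ours, to_bytes(ours))              # 4358 bits, ~545 bytes

# EC-DSA-signed Diffie-Hellman: per party one compressed group element
# plus a two-exponent signature.
sdh_384 = 2 * 385 + 4 * 384              # 2306 bits, ~289 bytes
sdh_521 = 2 * 522 + 4 * 521              # 3128 bits, ~391 bytes
print(to_bytes(sdh_384), to_bytes(sdh_521))
```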

Conclusion. Even though the absolute number of exponentiations required to run our protocol is larger than for simple signed Diffie-Hellman, it turns out that for small-to-medium-scale settings the overall computational efficiency is already comparable to signed Diffie-Hellman, if the group order is chosen in a theoretically-sound way. For large-scale settings, it is even significantly better. Concretely, the fact that our protocol requires 3.4 times more exponentiations is already almost compensated by the fact that an exponentiation is about 2.7-times more expensive in the small-to-medium-scale setting. Furthermore, given that in the large-scale setting an exponentiation is about 7.7 times more expensive, it turns out that our protocol is even significantly more efficient by a factor greater than 2.25. We note that this pencil-and-paper analysis considers naïve exponentiation, and does not yet involve optimisations, such as pre-computations, which usually tend to be more effective if more exponentiations are performed.
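
The figures in this paragraph follow from combining the exponentiation counts with the benchmark ratios; the short sketch below assumes the 2.7x and 7.7x per-exponentiation cost ratios quoted from Table 1.

```python
# Back-of-the-envelope total cost, in units of one P-256 exponentiation.
exp_ours, exp_sdh = 17, 5                          # exponentiation counts from above
cost_p256, cost_p384, cost_p521 = 1.0, 2.7, 7.7    # relative ECDH cost (Table 1)

ours = exp_ours * cost_p256        # 17.0 (tight proof: P-256 suffices)
sdh_medium = exp_sdh * cost_p384   # 13.5 (theoretically sound: P-384)
sdh_large = exp_sdh * cost_p521    # 38.5 (theoretically sound: P-521)

print(ours / sdh_medium)   # ~1.26: comparable in the small-to-medium setting
print(sdh_large / ours)    # ~2.26: more than 2.25x faster in the large-scale setting
```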

The improved computational efficiency comes at only a very moderate cost in communication complexity, amounting to an additional 256 bytes for the entire protocol in the small-to-medium-scale setting, and 154 bytes in the large-scale setting. This holds in comparison to the very minimalistic EC-DSA-signed Diffie-Hellman protocol, which is of course extremely communication-efficient in comparison to any other protocol with similar properties.

Given that our protocol is the first proposal for a truly practical and tightly-secure key exchange protocol, we expect that future work building upon our techniques will be able to improve this further.