
1 Introduction

Polynomial-based universal hash functions [dB93, Tay93, BJKS93] are simple and fast. They map inputs to polynomials, which are then evaluated on keys to produce output. When used to provide data authenticity as Message Authentication Code (MAC) algorithms or in Authenticated Encryption (AE) schemes, they often take the form of Wegman-Carter (WC) authenticators [WC81], which add the polynomial output to randomly generated values.

Part of the appeal of such polynomial-based WC authenticators is that if the polynomial keys and random values are generated independently and uniformly for each message, then information-theoretic security is achieved, as initially explored by Gilbert, MacWilliams, and Sloane [GMS74], following pioneering work by Simmons as described in [Sim91]. However, in the interest of speed and practicality, tweaks were introduced to WC authenticators, seemingly not affecting security.

Wegman and Carter [WC81] introduced one of the first such tweaks, holding polynomial keys constant across messages, which maintains security as long as the polynomial outputs are still added to fresh random values each time. Further work then instantiated the random values via a pseudorandom number generator [Bra82], pseudorandom function (PRF), and then pseudorandom permutation (PRP) outputs [Sho96], the latter being dubbed Wegman-Carter-Shoup (WCS) authenticators by Bernstein [Ber05b]. Uniqueness of the PRF and PRP outputs is guaranteed using a nonce. With m the message and n the nonce, the resulting constructions take the form \((n,m)\mapsto \pi (n) + \rho (m)\), with \(\pi \) the PRF or PRP, and \(\rho \) the universal hash function.

The switch to using PRFs and PRPs means that information-theoretic security is replaced by complexity-theoretic security. Furthermore, switching to PRPs in WCS authenticators results in security bound degradation, impacting the amount of data that can be processed per key (as, for example, exploited by the Sweet32 attacks [BL16]). Naïve analysis uses the fact that PRPs are indistinguishable from PRFs up to the birthday bound; however, this imposes stringent limits. Shoup [Sho96], and then Bernstein [Ber05b], improved this analysis significantly using advanced techniques, yet did not remove the birthday bound limit. Regardless, despite the data limits, the use of PRPs enables practical and fast instantiations of MAC and AE algorithms, such as Poly1305-AES [Ber05c] and GCM [MV04a, MV04b], the latter of which has seen widespread adoption in practice [VM06, SMC08, IS09].

As a result of the increased significance of WCS authenticator schemes like GCM, more recent work has focused on trying to understand their fragility when deployed in the real world. The history of attacks against WC and WCS authenticators consists of work exploring the consequences of fixing the polynomial key across all messages—once the polynomial key is known, all security is lost.

Joux [Jou] and Handschuh and Preneel [HP08] exhibit attacks which recover the polynomial key the moment a nonce is repeated. Ferguson [Fer05] explores attacks when tags are too short, further improved by Mattsson and Westerlund [MW16]. A long line of work initiated by Handschuh and Preneel [HP08] illustrates how to efficiently exploit verification attempts to eliminate false keys, by systematically narrowing the set of potential polynomial keys and searching for so-called “weak” keys [Saa12, PC15, ABBT15, ZW17, ZTG13].

However, interestingly, in the case of polynomial-based WCS authenticators, none of the nonce-respecting attacks match the success of the predicted worst-case attacks by Bernstein [Ber05b]. Furthermore, the gap in success between the predicted worst-case and best-known attacks grows quadratically in the number of queries made to the authenticator. Naturally, one is led to question whether Bernstein’s analysis is in fact the best one can do, or whether there actually is an attack, forcing us to abide by the data limits.

1.1 Contributions

We exhibit novel nonce-respecting attacks against polynomial-based WCS authenticators (Sect. 3), and show how they naturally arise from a new, simplified proof (Sect. 4). We prove that both our attack and Bernstein’s bound [Ber05b] are optimal, by showing they match (Sect. 5).

Unlike other birthday bound attacks, our attacks work by establishing quadratically many polynomial systems of equations from the tagging queries. They apply to polynomial-based WCS authenticators such as Poly1305-AES, as well as GCM and the variant SGCM [Saa11]. We achieve optimality in a chosen-plaintext setting; however, the attacks can also be mounted passively, using just known plaintext for MACs and ciphertext for AE schemes.

1.2 Related Work

Our introduction provides only a narrow view of the history of universal hash functions, targeted to ones based on polynomials. Bernstein [Ber05c] provides a genealogy of polynomial-based universal hash functions and Wegman-Carter authenticators, and both Procter and Cid [PC15, PC13] and Abdelraheem et al. [ABBT15] provide detailed overviews of the past attacks against polynomial-based Wegman-Carter MACs and GCM.

Zhu, Tan, and Gong [ZTG13] and Ferguson [Fer05] have pointed out that non-96-bit-nonce GCM suffers from birthday bound attacks which lead to immediate recovery of the polynomial key. Such attacks use the fact that the nonce is processed by the universal hash function before being used, resulting in block cipher call collisions. These attacks are not applicable to the most widely deployed version of GCM, which uses 96-bit nonces, nor to polynomial-based WCS authenticators in general.

Iwata et al. [IOM12] identify and correct issues with GCM’s original analysis [MV04a]. Niwa et al. find further improvements in GCM’s bounds [NOMI15]. Their proofs do not improve over Bernstein’s analysis [Ber05b].

New constructions using universal hash functions, like EWCDM [CS16], achieve full security [MN17] in the nonce-respecting setting, and maintain security under nonce misuse.

McGrew and Fluhrer [MF05] and Black and Cochran [BC09] explore how easy it is to find multiple forgeries once a single forgery has been performed.

A long line of research seeks attacks and proofs of constructions which match each other, such as the generic attack by Preneel and van Oorschot [PvO99], tight analysis for CBC-MAC [BPR05, Pie06], keyed sponges and truncated CBC [GPT15], and HMAC [GPR14], and new attacks for PMAC [LPSY16, GPR16].

2 Preliminaries

2.1 Basic Definitions and Notation

The notation used throughout the paper is summarized in Appendix C. Unless specified otherwise, all sets are assumed to be finite. Vectors are denoted \(\varvec{x}\in \mathsf {X}^q\), with corresponding components \((x_1,x_2,\ldots ,x_q)\). Given a set \(\mathsf {X}\), \(\mathsf {X}^{\le \ell }\) denotes the set of non-empty sequences of elements of \(\mathsf {X}\) with length not greater than \(\ell \).

A random function \(\rho :\mathsf {M}\rightarrow \mathsf {T}\) is a random variable distributed over the set of all functions from \(\mathsf {M}\) to \(\mathsf {T}\). A uniformly distributed random permutation (URP) \(\varphi :\mathsf {N}\rightarrow \mathsf {N}\) is a random variable distributed over the set of all permutations on \(\mathsf {N}\), where \(\mathsf {N}\) is assumed to be finite. When we write \(\varphi :\mathsf {N}\rightarrow \mathsf {T}\) is a URP, we implicitly assume that \(\mathsf {N}= \mathsf {T}\).

The symbol \(\mathbb {P}\) denotes a probability measure, and \(\mathbb {E}\) expected value.

We make the following simplifications when discussing the algorithms. We analyze block cipher-based constructions by replacing each block cipher call with a URP call. This commonly used technique allows us to focus on the constructions’ security without worrying about the underlying block cipher’s quality. See for example [Ber05b]. Furthermore, although our analysis uses information-theoretic adversaries, the attacks we describe are efficient, but require large storage.

We also implicitly include key generation as part of the oracles. For example, consider a construction \(E:\mathsf {K}\times \mathsf {M}\rightarrow \mathsf {T}\), where E is stateless and deterministic, and \(\mathsf {K}\) is its “key” input. In the settings we consider, E-queries are only actually made to \(E(k,\cdot )\), where the key input is fixed to some random variable k chosen uniformly at random from \(\mathsf {K}\). Hence, rather than each time talking of \(E(k,\cdot )\), we simplify notation by considering the random function \(\rho (m) {\mathop {=}\limits ^{\mathrm {def}}}E(k,m)\), with the uniform random variable k implicitly part of \(\rho \)’s description.

2.2 Polynomial-Based WCS Authenticators

Although not necessary, for simplicity we fix tags to lie in a commutative group. The following definition is from Bernstein [Ber05b].

Definition 2.1

(WCS Authenticator). Let \(\mathsf {T}\) be a commutative group with operation \(+\). Let \(\pi :\mathsf {N}\rightarrow \mathsf {T}\) be a URP, and \(\rho :\mathsf {M}\rightarrow \mathsf {T}\) a random function. The Wegman-Carter-Shoup (WCS) authenticator maps elements \((n,m)\in \mathsf {N}\times \mathsf {M}\) to \(\pi (n)+\rho (m)\).

We take the following definition from Procter and Cid [PC15].

Definition 2.2

(Polynomial-Based Universal Hash). Let \(\mathsf {X}\) be a field and \(\ell \) a positive integer. Given \(\varvec{x} = (x_1,x_2,\ldots ,x_l)\in \mathsf {X}^{\le \ell }\), define the polynomial \(p_{\varvec{x}}(\alpha )\) by

$$\begin{aligned} p_{\varvec{x}}(\alpha ){\mathop {=}\limits ^{\mathrm {def}}}\sum _{i=1}^l x_i\cdot \alpha ^i. \end{aligned}$$
(1)

Then the polynomial-based universal hash function \(\rho :\mathsf {X}^{\le \ell }\rightarrow \mathsf {X}\) is the random function \(\rho (\varvec{x}) {\mathop {=}\limits ^{\mathrm {def}}}p_{\varvec{x}}(\kappa )\), where \(\kappa \) is a uniform random variable over \(\mathsf {X}\), and \(\varvec{x}\in \mathsf {X}^{\le \ell }\).

We say that the input messages \(\mathsf {X}^{\le \ell }\) to the polynomial-based universal hash consist of blocks, with the block length of the messages being at most \(\ell \).

When a WCS authenticator uses a polynomial-based universal hash function, we call the resulting construction a polynomial-based WCS authenticator.
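For concreteness, Definitions 2.1 and 2.2 combine into the following toy instance over a small prime field. This is an illustrative sketch only: the modulus \(p = 257\), the seeded permutation, and the function names are our own choices, not parameters of any deployed scheme.

```python
import random

p = 257  # toy field size; deployed schemes use fields such as GF(2^128)

def poly_hash(key, blocks):
    """The hash of Definition 2.2: p_x(key) = sum_i x_i * key^i mod p."""
    return sum(x * pow(key, i, p) for i, x in enumerate(blocks, start=1)) % p

def make_wcs(seed=0):
    """Toy polynomial-based WCS authenticator gamma(n, m) = pi(n) + rho(m)."""
    rng = random.Random(seed)
    kappa = rng.randrange(p)              # uniform polynomial key of rho
    pi = list(range(p)); rng.shuffle(pi)  # URP on {0, ..., p-1}
    def gamma(n, m):
        return (pi[n] + poly_hash(kappa, m)) % p
    return gamma, kappa
```

A nonce-respecting adversary may query gamma on arbitrary pairs (n, m), as long as no nonce n repeats.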

Let \(\gamma :\mathsf {N}\,\times \,\mathsf {M}\rightarrow \mathsf {T}\) be a WCS authenticator. An adversary \(\mathbf {A}\) interacting with \(\gamma \) is said to be nonce-respecting if it never repeats \(\mathsf {N}\)-input to \(\gamma \). Furthermore, the verification oracle associated to \(\gamma \), \(V:\mathsf {N}\times \mathsf {M}\times \mathsf {T}\rightarrow \left\{ 0,1\right\} \), is defined as

$$\begin{aligned} V(n,m,t) = {\left\{ \begin{array}{ll} 1 &{} \text {if}\,\,\gamma (n,m) = t\\ 0 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$
(2)

Nonce-respecting adversaries may repeat nonce-input to V.

Definition 2.3

(Authenticity Advantage). Let \(\mathbf {A}\) be a nonce-respecting adversary interacting with WCS authenticator \(\gamma :\mathsf {N}\times \mathsf {M}\rightarrow \mathsf {T}\) and associated verification oracle V. Then \(\mathbf {A}\)’s authenticity advantage, denoted \(\mathsf {Auth}_\gamma (\mathbf {A})\), is the probability that \(\mathbf {A}\) makes a V-query \((n^*,m^*,t^*)\) for which V outputs 1, where \((n^*,m^*,t^*)\) was not a previous query-response pair from \(\gamma \).

In our analysis we will also need the following definition.

Definition 2.4

(Single-Forgery Advantage). Let \(\mathbf {A}\) be a nonce-respecting adversary interacting with WCS authenticator \(\gamma :\mathsf {N}\times \mathsf {M}\rightarrow \mathsf {T}\), resulting in queries \(\gamma (n_i,m_i) = t_i\) for \(i = 1,\ldots , q\). Say that \(\mathbf {A}\) outputs \((n^*,m^*,t^*)\) after its interaction. Then \(\mathbf {A}\)’s single-forgery advantage is

$$\begin{aligned} \mathsf {sAuth}_\gamma (\mathbf {A}) {\mathop {=}\limits ^{\mathrm {def}}}\mathbb {P}_{}\left[ \gamma (n^*,m^*) = t^*\,\,\text {and}\,\,(n^*,m^*,t^*)\ne (n_i,m_i,t_i)\,\,\text {for all}\,\,i\right] . \end{aligned}$$
(3)

The maximum over all adversaries making at most q queries is denoted \(\mathsf {sAuth}_\gamma (q)\).

Bernstein connects \(\mathsf {Auth}\) and \(\mathsf {sAuth}\) as follows.

Theorem 2.1

([Ber05a]). Let \(\mathbf {A}\) be an authenticity adversary making at most q \(\gamma \) queries and v verification queries, then

$$\begin{aligned} \mathsf {Auth}_\gamma (\mathbf {A}) \le v\cdot \mathsf {sAuth}_\gamma (q). \end{aligned}$$
(4)

Bellare et al. prove a similar result for different constructions [BGM04].

2.3 GCM

We present those details of GCM [MV04a, MV04b] necessary to describe our attacks. GCM takes nonce, associated data, and plaintext input. It operates by first encrypting the plaintext using CTR mode [Nat80] into a ciphertext c. Then it processes the ciphertext and associated data using a WCS authenticator into a tag.

GCM only uses one key, namely a block cipher key; as explained before, we view the keyed block cipher as a URP \(\pi \) over the set of 128-bit strings, hence the block cipher key is implicit in our description. An authentication key L is computed as the output of \(\pi \) under the all-zero string, which we denote 0: \(L{\mathop {=}\limits ^{\mathrm {def}}}\pi (0)\).

GCM’s WCS authenticator views the set of 128-bit strings as a finite field with \(2^{128}\) elements. Once the ciphertext c has been computed using CTR mode, its length is encoded in a 64-bit string and the ciphertext is padded with zeros to have length a multiple of 128 bits. The associated data is processed in the same way. Let \(a_1,a_2,\ldots ,a_{l}\) and \(c_1,c_2,\ldots ,c_{l'}\) denote the padded associated data and ciphertext, respectively, where the length of all blocks \(a_i\) and \(c_i\) is 128 bits. Let \(x_0\) denote the concatenation of the encoded lengths of the associated data and ciphertext. Then, if \(\varvec{x} = (x_0, a_l, a_{l-1},\ldots ,a_1, c_{l'},\ldots ,c_1)\), GCM computes its tag as

$$\begin{aligned} p_{\varvec{x}}(L) + \pi (n)\quad \text { with } L = \pi (0), \end{aligned}$$
(5)

where n is a value deduced from the nonce.

All \(\pi \)-input in GCM can be derived from the nonce and L, and no two \(\pi \)-inputs are the same, unless some unlikely event happens, in which case GCM loses all security [Jou, Fer05, ZTG13]. In more detail, the nonce is converted into distinct counters for CTR mode, as well as an additional, distinct input, which is used for the URP input in GCM’s WCS authenticator, denoted n in (5). In 96-bit nonce GCM, n is equal to the nonce concatenated with a string consisting of 31 zeroes, followed by a 1, and the counters used in CTR mode increment the last 32 bits of n.
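The nonce-to-input schedule just described can be sketched as follows for 96-bit nonces (a sketch of the input derivation only; the function name and integer representation are our own choices):

```python
def gcm_pi_inputs(nonce12: bytes, num_ctr_blocks: int):
    """96-bit-nonce GCM: the URP input n of (5) is nonce || 0^31 || 1, and the
    CTR-mode blocks increment the last 32 bits of n modulo 2^32."""
    assert len(nonce12) == 12
    n = int.from_bytes(nonce12 + b"\x00\x00\x00\x01", "big")
    hi, lo = n >> 32, n & 0xFFFFFFFF
    ctrs = [(hi << 32) | ((lo + i) & 0xFFFFFFFF)
            for i in range(1, num_ctr_blocks + 1)]
    return n, ctrs
```

Together with the all-zero input used for L, these \(\pi \)-inputs are all distinct as long as the 32-bit counter does not wrap.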

In our attacks and analysis below we mostly focus on plain WCS authenticators, however everything translates nearly verbatim over to GCM’s WCS authenticator.

3 Key Recovery Attacks

Most of the previously published attacks aim to recover the polynomial key of the WCS authenticator in order to be able to construct arbitrary forgeries. All known key recovery attacks focus either on reducing the set of candidate keys \(\mathcal {T}\), which contains the actual key, or, equivalently, increasing \(\mathcal {T}\)’s complement \(\mathcal {F}\), the set of “false” keys. The former can be achieved through nonce misuse [Jou, HP08], which allows one to obtain a polynomial for which the key is a root, thereby reducing \(\mathcal {T}\) to the set of all roots of the polynomial. Although nonce misuse attacks are important for understanding the fragility of these schemes, we focus on attacks which stay in the nonce-respecting model.

In contrast, the nonce-respecting attacks reduce \(\mathcal {T}\) via repeated verification attempts [HP08, PC15, ABBT15]. Their goal is to construct a forgery polynomial which evaluates to zero on the key. The forgery polynomial is then combined with a previous tagging query into a verification attempt in such a way that if the verification attempt fails, then one knows that the key is not one of the roots of the forgery polynomial. If the forgery polynomial has degree \(\ell \), then at most \(\ell \) false keys can be removed for each verification attempt, resulting in a success probability of at most

$$\begin{aligned} \frac{1}{\left|\mathsf {T}\right|-v\ell }, \end{aligned}$$
(6)

where v is the number of verification attempts.

Our attacks differ from the previous nonce-respecting attacks in two ways: they do not require verification attempts in order to increase \(\mathcal {F}\), and \(\mathcal {F}\) increases quadratically as a function of the number of tagging queries, q, giving a success probability of roughly

$$\begin{aligned} \frac{1}{\left|\mathsf {T}\right|-q^2}. \end{aligned}$$
(7)

We describe chosen-plaintext attacks which perfectly match the bounds for both polynomial-based WCS MACs and GCM. The attacks can also be applied passively, where adversaries do not have chosen-plaintext control. Success then depends in a non-trivial way on the message distribution, which in turn depends on the application under consideration; we leave further detailed analysis of the known-plaintext attacks for future work. In Sect. 5 we show that our chosen-plaintext attacks are optimal.

3.1 WCS Authenticator Attacks

Constructing the False-Key Set. Let \(\gamma (n,m) = \pi (n) + \rho (m)\) be a polynomial-based WCS authenticator, with \(\pi \) a URP and \(\rho \) a polynomial-based universal hash function. Say that we somehow know that the queries \(\gamma (n_i,m_i)=t_i\) for \(i = 1,\ldots , q\) were made. This means

$$\begin{aligned} \pi (n_i) + \rho (m_i) = t_i \quad \text { or }\quad \pi (n_i) = t_i - \rho (m_i),\quad \text { for } i = 1,\ldots , q. \end{aligned}$$
(8)

Since \(\pi \) is a permutation, this means

$$\begin{aligned} t_i - \rho (m_i) \ne t_j - \rho (m_j),\quad \text { for } i\ne j. \end{aligned}$$
(9)

In particular, we know that the real key \(\kappa \) does not satisfy the polynomial equations

$$\begin{aligned} \rho (m_i) - \rho (m_j) + t_j - t_i = 0,\quad \text { for }i\ne j. \end{aligned}$$
(10)

Therefore, each query to \(\gamma \) might allow us to increase the set of false keys. In fact, the jth query to \(\gamma \) gives an additional \(j-1\) equations which can be used to discard keys.

Known-Plaintext Attack. Given \((n_i,m_i,t_i)\) for \(i = 1,\ldots ,q\), perform the following:

  1.

    Construct

    $$\begin{aligned} \mathcal {F}{\mathop {=}\limits ^{\mathrm {def}}}\left\{ k\in \mathsf {T}: p_{m_i}(k) - p_{m_j}(k) + t_j - t_i = 0\,\,\text {for some}\,\,i\ne j\right\} . \end{aligned}$$
    (11)
  2.

    Pick any \(k^*\not \in \mathcal {F}\), output \(k^*\).

Analysis of the known-plaintext attack is complicated by the choice of distribution for the messages \(m_i\). We focus instead on analyzing the chosen-plaintext attack below.
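Steps 1 and 2 can be sketched by brute force over a toy prime field (the modulus, transcript format, and helper names are illustrative assumptions; a real attack over GF(2^128) would organize the root collection more cleverly):

```python
p = 257  # toy prime field standing in for the tag group

def poly_eval(key, blocks):
    """p_x(key) = sum_i x_i * key^i mod p, as in Eq. (1)."""
    return sum(x * pow(key, i, p) for i, x in enumerate(blocks, 1)) % p

def false_key_set(transcript):
    """transcript: list of (n_i, m_i, t_i) observed from gamma. Collect every
    k with p_{m_i}(k) - p_{m_j}(k) + t_j - t_i = 0 for some i != j (Eq. (10))."""
    F = set()
    for i, (_, mi, ti) in enumerate(transcript):
        for j, (_, mj, tj) in enumerate(transcript):
            if i == j:
                continue
            F.update(k for k in range(p)
                     if (poly_eval(k, mi) - poly_eval(k, mj) + tj - ti) % p == 0)
    return F
```

Any \(k^*\) outside the returned set is a valid candidate for the real key.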

Chosen-Plaintext Attack. Choose q distinct messages of length one block, \(m_1\), \(m_2,\ldots ,m_q\), and q nonces \(n_1,n_2,\ldots ,n_q\). For example, one could pick \(m_i = n_i = i\), for some encoding of i. Then conclude with the known-plaintext attack described above. The resulting false-key set is

$$\begin{aligned} \mathcal {F}= \left\{ \frac{t_i-t_j}{m_i-m_j}, i\ne j\right\} . \end{aligned}$$
(12)
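Simulated against a toy authenticator, the attack looks as follows; the real key provably never lands in \(\mathcal {F}\) because \(\pi \) is a permutation, by Eq. (9). Field size, seed, and query count are illustrative choices.

```python
import random

p = 257  # toy field; real tags live in sets such as GF(2^128)

def attack(q=10, seed=0):
    """Chosen-plaintext attack with one-block messages m_i = n_i = i."""
    rng = random.Random(seed)
    kappa = rng.randrange(p)
    pi = list(range(p)); rng.shuffle(pi)
    gamma = lambda n, m: (pi[n] + m * kappa) % p  # rho(m) = m * kappa for one block
    tags = {i: gamma(i, i) for i in range(1, q + 1)}
    inv = lambda x: pow(x, p - 2, p)              # inverse modulo the prime p
    F = {((tags[i] - tags[j]) * inv(i - j)) % p   # Eq. (12)
         for i in tags for j in tags if i != j}
    return kappa, F
```

The q tagging queries yield q(q-1)/2 unordered pairs, so the false-key set grows roughly quadratically in q.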

The following proposition establishes the expected size of \(\mathcal {F}\) for this attack. In Sect. 5 we connect the expected size of \(\mathcal {F}\) with the success of key recovery attacks and forgeries.

Proposition 3.1

Let \(N = \left|\mathsf {T}\right|\), and say that \(q\le \sqrt{N-3}\), then

$$\begin{aligned} \mathbb {E}\left( \left|\mathcal {F}\right|\right) \ge \frac{q(q-1)}{4}, \end{aligned}$$
(13)

where \(\mathcal {F}\) is from (12) and \(\left|\mathcal {F}\right|\) denotes its cardinality.

Proof

Let \(\kappa \) denote the real key, then

$$\begin{aligned} \mathcal {F}&= \left\{ \frac{\pi (n_i)-\pi (n_j)+\kappa m_i - \kappa m_j}{m_i-m_j}, i\ne j\right\} \end{aligned}$$
(14)
$$\begin{aligned}&= \left\{ \frac{\pi (n_i)-\pi (n_j)+\kappa (m_i - m_j)}{m_i-m_j}, i\ne j\right\} \end{aligned}$$
(15)
$$\begin{aligned}&= \left\{ \frac{\pi (n_i)-\pi (n_j)}{m_i-m_j} + \kappa , i\ne j\right\} . \end{aligned}$$
(16)

Let \(S = \left\{ (\pi (n_i)-\pi (n_j))/(m_i-m_j), i\ne j\right\} \), so that \(\left|S\right| = \left|\mathcal {F}\right|\).

By Markov’s inequality,

$$\begin{aligned} \mathbb {E}\left( \left|S\right|\right) \ge \frac{q(q-1)}{2}\cdot \mathbb {P}_{}\left[ \left|S\right| \ge \frac{q(q-1)}{2}\right] , \end{aligned}$$
(17)

and \(\left|S\right| \ge q(q-1)/2\) only if none of the \((\pi (n_i)-\pi (n_j))/(m_i-m_j)\) collide. By applying a union bound we know that the probability there is such a collision is at most \(q(q-1)/(2(N-3))\), hence

$$\begin{aligned} \mathbb {E}\left( \left|S\right|\right) \ge \frac{q(q-1)}{2}\cdot \left( 1 - \frac{q(q-1)}{2(N-3)}\right) . \end{aligned}$$
(18)

If \(q\le \sqrt{N-3}\), then

$$\begin{aligned} 1 - \frac{q(q-1)}{2(N-3)} \ge \frac{1}{2}, \end{aligned}$$
(19)

and we have our desired bound.    \(\square \)
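As an empirical sanity check of Proposition 3.1 on the toy instance, averaging \(\left|\mathcal {F}\right|\) over independent keys and permutations stays well above \(q(q-1)/4\) (a Monte Carlo sketch; field size and trial count are arbitrary):

```python
import random

p, q, trials = 257, 10, 200  # toy field, query count, Monte Carlo trials

def avg_false_key_set_size():
    """Average |F| over fresh toy WCS instances, with m_i = n_i = i."""
    inv = lambda x: pow(x, p - 2, p)
    total = 0
    for seed in range(trials):
        rng = random.Random(seed)
        kappa = rng.randrange(p)
        pi = list(range(p)); rng.shuffle(pi)
        tags = {m: (pi[m] + m * kappa) % p for m in range(1, q + 1)}
        F = {((tags[i] - tags[j]) * inv(i - j)) % p
             for i in tags for j in tags if i != j}
        total += len(F)
    return total / trials
```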

3.2 GCM Attacks

With a known-plaintext attack against GCM it is possible to increase \(\mathcal {F}\) without resorting to verification attempts or polynomial equations. Since the authentication key is computed as \(\pi (0)\), and all inputs to \(\pi \) are distinct, each URP output from CTR mode, which is easily computed given the plaintext and ciphertext, eliminates a candidate key. However, such an attack still requires known plaintext, potentially making it more difficult to implement in practice.

In contrast, if we apply our WCS authenticator attacks described above to GCM, by replacing messages with ciphertexts, then we arrive at an attack which potentially only requires ciphertext. In a passive setting, the steps are identical: create a false-key set \(\mathcal {F}\) as in (11), except the polynomials are replaced by GCM’s, from (5).

The optimal chosen-plaintext attack changes slightly for GCM, since we need to deal with the encoded lengths of the ciphertexts in the polynomials of (5). Instead of choosing q distinct plaintexts \(m_i\), we now set all plaintexts to be the all-zero string of length one block. This results in polynomials

$$\begin{aligned} xL + c_iL^2\,, \end{aligned}$$
(20)

where x is the encoding of the length of a one-block length ciphertext, and the \(c_i\) are the ciphertexts, all distinct from each other. The resulting false-key set is as follows:

$$\begin{aligned} \left\{ \sqrt{\frac{t_i-t_j}{c_i-c_j}}, i\ne j\right\} . \end{aligned}$$
(21)

Since the square root is bijective in finite fields of characteristic two, we have that the above set contains the same number of elements as

$$\begin{aligned} \left\{ \frac{t_i-t_j}{c_i-c_j}, i\ne j\right\} , \end{aligned}$$
(22)

and the analysis made for WCS authenticators holds with little modification.
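The bijectivity of the square-root map is easy to confirm in a small binary field. The sketch below uses GF(2^8) with the AES reduction polynomial as an illustrative stand-in for GCM's GF(2^128); in GF(2^n), squaring is the Frobenius automorphism, so its inverse is \(x\mapsto x^{2^{n-1}}\).

```python
def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11b)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
    return r

def gf_sqrt(x):
    """sqrt(x) = x^(2^7) in GF(2^8): square seven times, since squaring
    is a bijection and x^(2^8) = x for every field element."""
    for _ in range(7):
        x = gf_mul(x, x)
    return x
```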

4 Bounding Authenticity with Key Recovery

4.1 Bernstein’s Analysis

Bernstein analyzes a generalization of Wegman-Carter and WCS MACs, namely those of the form \((n,m)\mapsto \rho (m) + \varphi (n)\), where \(\rho :\mathsf {M}\rightarrow \mathsf {T}\) and \(\varphi :\mathsf {N}\rightarrow \mathsf {T}\) are independent random functions. Wegman-Carter authenticators fix \(\varphi \) to be a uniformly distributed random function, and WCS authenticators fix \(\varphi \) to be a URP. As part of his analysis, Bernstein uses differential probability [Ber05b], more commonly known as \(\epsilon \)-almost (XOR) universal, given by

$$\begin{aligned} \varDelta _\rho {\mathop {=}\limits ^{\mathrm {def}}}\max _{\begin{array}{c} m\ne m' \\ t\in \mathsf {T} \end{array}}\mathbb {P}_{}\left[ \rho (m) = \rho (m')+t\right] . \end{aligned}$$
(23)

Various papers [dB93, Tay93, BJKS93] establish that for a polynomial-based universal hash function \(\rho :\mathsf {M}\rightarrow \mathsf {T}\), \(\varDelta _\rho \le \ell /\left|\mathsf {T}\right|\), where \(\mathsf {M}= \mathsf {T}^{\le \ell }\).
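For a toy field this bound can be checked exhaustively. The sketch below uses \(p = 11\) and fixed-length two-block messages, so that the message-to-polynomial encoding is injective; all parameters are illustrative.

```python
from itertools import product

p, ell = 11, 2  # toy field size and block length; the claim is Delta_rho <= ell/p

def poly_eval(key, blocks):
    return sum(x * pow(key, i, p) for i, x in enumerate(blocks, 1)) % p

def max_differential():
    """max over m != m', t of P_kappa[rho(m) = rho(m') + t], kappa uniform."""
    msgs = [list(m) for m in product(range(p), repeat=ell)]
    worst = 0
    for a in msgs:
        for b in msgs:
            if a == b:
                continue
            hits = {}
            for k in range(p):
                t = (poly_eval(k, a) - poly_eval(k, b)) % p
                hits[t] = hits.get(t, 0) + 1
            worst = max(worst, max(hits.values()))
    return worst / p
```

The maximum count never exceeds \(\ell \), since each difference polynomial is nonzero of degree at most \(\ell \).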

Bernstein also introduces the concept of interpolation probabilities of a random function \(\varphi \): the probability that \(\varphi (x_i) = y_i\) for \(i = 1,\ldots ,q\), for given values \(x_1,\ldots ,x_q\) and \(y_1,\ldots ,y_q\). Bernstein establishes that \(\rho (m)+\varphi (n)\) is secure if \(\rho \)’s differential and \(\varphi \)’s interpolation probabilities are small. Ultimately, when applied to polynomial-based WCS authenticators, we get the following.

Theorem 4.1

Let \(\gamma :\mathsf {N}\times \mathsf {M}\rightarrow \mathsf {T}\) be a polynomial-based WCS authenticator with \(\mathsf {M}= \mathsf {T}^{\le \ell }\) and let \(\mathbf {A}\) be a nonce-respecting adversary against \(\gamma \) making at most q \(\gamma \) queries and v verification queries, then

$$\begin{aligned} \mathsf {Auth}_\gamma (\mathbf {A})\le v\cdot \frac{\ell }{\left|\mathsf {T}\right|}\cdot \left( 1 - \frac{q}{\left|\mathsf {T}\right|}\right) ^{-\frac{q+1}{2}}. \end{aligned}$$
(24)
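To get a feel for the bound, (24) can be evaluated numerically; the logarithmic rewriting below is our own, used only to keep the floating-point arithmetic stable for large \(\left|\mathsf {T}\right|\).

```python
import math

def bernstein_bound(v, q, ell, N=2**128):
    """Right-hand side of (24): v * (ell/N) * (1 - q/N)^(-(q+1)/2),
    computed as v * (ell/N) * exp(-(q+1)/2 * log(1 - q/N))."""
    return v * (ell / N) * math.exp(-(q + 1) / 2 * math.log1p(-q / N))
```

The last factor stays close to 1 until q approaches the birthday bound, after which the bound degrades quickly.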

4.2 Reshaping Authenticity Advantage

Although Bernstein’s analysis is general and applies to more than just polynomial-based WCS MACs, a targeted analysis will elucidate the gap between currently known attacks and the bound given by Bernstein.

Whereas Bernstein proves bounds for \(\varphi (n) + \rho (m)\) in terms of \(\varphi \)’s interpolation and \(\rho \)’s differential probability, we instead rework the bounds to \(\varphi \)’s unpredictability (Sect. 4.3) and key recovery against \(\rho \) (Sect. 4.4), the latter only applying to polynomial-based MACs. The concepts introduced in this section will allow us to prove that the CPA attacks introduced in Sect. 3 are in fact optimal.

Instrumental to our analysis is the fact that an adversary’s single-forgery advantage can be split in two, according to whether its attempted forgery \((n^*,m^*,t^*)\) uses a nonce \(n^*\) that was never used before, or not. We let \(\mathsf {sAuth}_\gamma ^{\text {new}}(\mathbf {A})\) denote the probability that \(\mathbf {A}\) forges and uses a new nonce, and \(\mathsf {sAuth}_\gamma ^{\text {old}}(\mathbf {A})\) the probability that \(\mathbf {A}\) forges and uses an old nonce. By basic probability theory,

$$\begin{aligned} \mathsf {sAuth}_\gamma (\mathbf {A}) = \mathsf {sAuth}_\gamma ^{\text {new}}(\mathbf {A}) + \mathsf {sAuth}_\gamma ^{\text {old}}(\mathbf {A}). \end{aligned}$$
(25)

Letting \(\mathsf {KR}\) denote polynomial key recovery advantage (see Definition 4.2), we establish the following result.

Corollary 4.1

Let \(\gamma : (n,m)\mapsto \rho (m)+\pi (n)\) be a polynomial-based WCS authenticator with \(\rho :\mathsf {M}\rightarrow \mathsf {T}\) a random function, and \(\pi :\mathsf {N}\rightarrow \mathsf {T}\) an independent URP. Let \(\mathbf {A}\) be an authenticity adversary against \(\gamma \) making at most q queries of length at most \(\ell \). Then

$$\begin{aligned} \mathsf {Auth}_\gamma (\mathbf {A}) \le v\cdot \left( \frac{1}{\left|\mathsf {T}\right|-q} + \ell \cdot \mathsf {KR}_\gamma (q)\right) . \end{aligned}$$
(26)

The proof, which relies on results developed in the next sections, can be found in Appendix A.

4.3 Unpredictability

We show that any attempted forgery using a new nonce against a WCS authenticator has low success probability. This means that if authenticity adversaries want to achieve significant advantage, then they must re-use nonces during forgeries. We state the result more generally than for only polynomial-based WCS authenticators.

Definition 4.1

(Unpredictability). Let \(\mathbf {A}\) be an adversary interacting with random function \(\varphi :\mathsf {X}\rightarrow \mathsf {Y}\). Say that \(\mathbf {A}\) produces the sequence \(\varvec{x}\in \mathsf {X}^q\) and \(\varphi \) responds with outputs \(\varvec{y}\in \mathsf {Y}^q\). Let \((x^*,y^*)\) be \(\mathbf {A}\)’s output, then \(\mathbf {A}\)’s unpredictability advantage against \(\varphi \) is

$$\begin{aligned} \mathsf {Unpred}_\varphi (\mathbf {A}) {\mathop {=}\limits ^{\mathrm {def}}}\mathbb {P}_{}\left[ \varphi (x^*) = y^*\,\,\text {and}\,\,x^*\not \in \left\{ x_1,x_2,\ldots ,x_q\right\} \right] , \end{aligned}$$
(27)

where the probability is taken over the randomness of \(\mathbf {A}\) and \(\varphi \).

Let \(\gamma : (n,m)\mapsto \rho (m)+\pi (n)\) be any Wegman-Carter-style MAC using random functions \(\rho :\mathsf {M}\rightarrow \mathsf {T}\) and \(\varphi :\mathsf {N}\rightarrow \mathsf {T}\) which are independent of each other. Let \(\mathbf {A}\) be an authenticity adversary against \(\gamma \). We construct an unpredictability adversary \(\mathbf {B}\left\langle \mathbf {A}\right\rangle \) against \(\varphi \) as follows.

  1.

    \(\mathbf {B}\) runs \(\mathbf {A}\).

  2.

    \(\mathbf {B}\) simulates \(\rho \) using its own randomness; call it \(\rho '\).

  3.

    Every \(\gamma \)-query made by \(\mathbf {A}\) is reconstructed by \(\mathbf {B}\) using \(\rho '\) and the \(\varphi \)-oracle \(\mathbf {B}\) interacts with. Concretely, every query \(\gamma (n,m)\) made by \(\mathbf {A}\) gets forwarded as \(\varphi (n)\), and \(\mathbf {B}\) returns \(\varphi (n) + \rho '(m)\).

  4.

    \(\mathbf {B}\) receives \(\mathbf {A}\)’s final output, \((n^*,m^*,t^*)\), and finally outputs \((n^*, t^*-\rho '(m^*))\).

Proposition 4.1

$$\begin{aligned} \mathsf {sAuth}_\gamma ^{\text {new}}(\mathbf {A}) \le \mathsf {Unpred}_\varphi (\mathbf {B}\left\langle \mathbf {A}\right\rangle ). \end{aligned}$$
(28)

Proof

First note that \(\mathbf {B}\) perfectly reconstructs \(\mathbf {A}\)’s authenticity game since \(\rho '\) is independent of \(\varphi \). Then, if \(\mathbf {A}\) wins its authenticity game, \(\gamma (n^*,m^*) = t^*\), or in other words, \(\varphi (n^*) + \rho (m^*) = t^*\). In particular, \(\varphi (n^*) = t^*-\rho (m^*)\). If \(n^*\) has never been queried to \(\varphi \) before, \(t^*-\rho (m^*)\) would correctly predict \(\varphi \)’s output on an unknown input, hence \(\mathbf {B}\langle \mathbf {A}\rangle \) would win its unpredictability game.    \(\square \)

Lemma 4.1

Let \(\pi :\mathsf {N}\rightarrow \mathsf {T}\) be a URP and \(\mathbf {B}\) an adversary making at most q queries, then

$$\begin{aligned} \mathsf {Unpred}_{\pi }(\mathbf {B}) \le \frac{1}{\left|\mathsf {T}\right|-q}. \end{aligned}$$
(29)
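The lemma can be confirmed exactly for a tiny URP by enumerating all permutations; the toy sizes \(N = 6\), \(q = 2\) are arbitrary choices of ours.

```python
from itertools import permutations

def predict_prob(N=6, q=2):
    """Best probability of predicting pi(x*) at a fresh x*, conditioned on one
    concrete transcript pi(i) = i for i < q; Lemma 4.1 bounds it by 1/(N - q)."""
    consistent = [perm for perm in permutations(range(N))
                  if all(perm[i] == i for i in range(q))]
    x_star = q  # an input never queried before
    best = max(sum(perm[x_star] == y for perm in consistent) for y in range(N))
    return best / len(consistent)
```

After q queries, \(\pi (x^*)\) is uniform over the \(N-q\) unseen outputs, so the best guess succeeds with probability exactly \(1/(N-q)\).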

4.4 Bounding Forgeries with Key Recovery

Having set aside adversaries which use new nonces for forgeries, we can focus on those that re-use nonces. This section applies only to polynomial-based WCS authenticators.

Definition 4.2

(Polynomial Key Recovery). Let \(\mathbf {A}\) be a nonce-respecting adversary interacting with polynomial-based WCS authenticator \(\gamma \) using URP \(\pi \) and polynomial-based universal hash \(\rho \), with \(\kappa \) denoting the random variable representing the key underlying \(\rho \). Say that \(\mathbf {A}\) outputs an element \(k^*\in \mathsf {K}\), then \(\mathbf {A}\)’s polynomial key recovery advantage against \(\gamma \) is

$$\begin{aligned} \mathsf {KR}_\gamma (\mathbf {A}) {\mathop {=}\limits ^{\mathrm {def}}}\mathbb {P}_{}\left[ k^* = \kappa \right] , \end{aligned}$$
(30)

where the randomness is taken over \(\mathbf {A}\) and \(\gamma \). We let \(\mathsf {KR}_\gamma (q)\) denote the maximum of \(\mathsf {KR}_\gamma (\mathbf {A})\) over all adversaries \(\mathbf {A}\) making at most q queries.

Forgeries can be used to recover authentication keys. We construct a polynomial key recovery adversary \(\mathbf {C}\left\langle \mathbf {A}\right\rangle \) against \(\gamma \).

  1.

    \(\mathbf {C}\) runs \(\mathbf {A}\).

  2.

    Every \((n,m)\) query by \(\mathbf {A}\) gets forwarded to \(\mathbf {C}\)’s oracle, and \(\mathbf {C}\) returns the output \(\gamma (n,m)\) to \(\mathbf {A}\).

  3.

    When \(\mathbf {A}\) outputs \((n^*,m^*,t^*)\), then \(\mathbf {C}\) checks to see if \(n^* = n_i\) for some previous query \(\gamma (n_i,m_i) = t_i\). If this is not the case, then \(\mathbf {C}\) aborts. Otherwise \(\mathbf {C}\) computes the roots of the polynomial \(p_{m^*}(\alpha )-p_{m_i}(\alpha )-t^*+t_i\), and chooses a key uniformly at random from the set of roots.
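Step 3 can be sketched over a toy prime field by brute-force root search (field size and helper names are our own; a real adversary over GF(2^128) would use proper polynomial root-finding):

```python
p = 257  # toy prime field standing in for GF(2^128)

def poly_eval(key, blocks):
    return sum(x * pow(key, i, p) for i, x in enumerate(blocks, 1)) % p

def candidate_keys(m_star, t_star, m_i, t_i):
    """Roots of p_{m*}(a) - p_{m_i}(a) - t* + t_i; the real polynomial key
    is among them whenever a same-nonce forgery succeeds."""
    return [k for k in range(p)
            if (poly_eval(k, m_star) - poly_eval(k, m_i) - t_star + t_i) % p == 0]
```

The root set has size at most the larger of the two block lengths, which is where the factor \(\ell \) in Proposition 4.2 comes from.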

Proposition 4.2

Let \(\mathbf {A}\) be an adversary making queries of length at most \(\ell \). The probability that \(\mathbf {A}\) wins its authenticity game and outputs \((n^*,m^*,t^*)\) where \(n^* = n_i\) for some previous query \((n_i,m_i)\) to \(\gamma \), is bounded above by

$$\begin{aligned} \ell \cdot \mathsf {KR}_\gamma (\mathbf {C}\left\langle \mathbf {A}\right\rangle ). \end{aligned}$$
(31)

Proof

If \(\mathbf {A}\) wins with \(n^* = n_i\), then

$$\begin{aligned} \gamma (n^*,m^*) = \gamma (n_i,m^*) = \pi (n_i) + \rho (m^*) = t^*, \end{aligned}$$
(32)

and

$$\begin{aligned} \gamma (n_i,m_i) = \pi (n_i) + \rho (m_i) = t_i, \end{aligned}$$
(33)

therefore \(\rho (m^*) - \rho (m_i) - t^* + t_i = 0\). We know that the key used by \(\rho \) is in the set of roots of the polynomial \(p_{m^*}(\alpha ) - p_{m_i}(\alpha ) - t^* + t_i\), which has size at most \(\max \left\{ \left|m^*\right|,\left|m_i\right|\right\} \). Picking an element uniformly at random from this set, we have that \(\mathbf {C}\) wins with probability at least \(1/\max \left\{ \left|m^*\right|,\left|m_i\right|\right\} \).    \(\square \)

5 Using Key Recovery to Mount Forgeries

The previous section discussed how to convert authenticity attacks into key recovery attacks to reshape the upper bounds on forgery attacks. Here we discuss the opposite, namely how to use key recovery adversaries to mount forgeries. This will allow us to not only show that the analysis of Sect. 4 is tight, but also that the attacks of Sect. 3 are optimal, using Bernstein’s analysis.

5.1 Key-Set Recovery

The obvious way to convert a key recovery attack into an authenticity attack is to run the key recovery adversary and use its output to mount a forgery. We explain this formally in Appendix B. However, this method yields authenticity adversaries which are only about as successful as the key recovery adversaries they run.

In contrast, as seen in Proposition 4.2 of Sect. 4.4, authenticity adversaries might improve over key recovery adversaries by up to a factor of \(\ell \). Intuitively, given a key recovery adversary, one could try to achieve this by taking the candidate key \(k^*\) output by the key recovery adversary, finding a polynomial of degree \(\ell \) which has \(k^*\) as a root, and then constructing a forgery using this polynomial. The problem with this approach is that most of the roots of the chosen polynomial could be useless: they could, for example, lie in some false-key set determined by the key recovery adversary. Without further information about the key recovery adversary it does not seem possible to improve the authenticity adversary.

However, if we instead look at key-set recovery adversaries, we can improve our chances of constructing forgeries. We will show that key-set recovery and key-recovery adversaries are in fact very similar, allowing us to prove tight bounds on the connection between key-recovery and forgeries.

Definition 5.1

(Polynomial Key-Set Recovery). Let \(\mathbf {A}\) be a nonce-respecting adversary interacting with polynomial-based WCS authenticator \(\gamma \) using URP \(\pi \) and polynomial-based universal hash \(\rho \), with \(\kappa \) denoting the random variable representing the key underlying \(\rho \). Say that \(\mathbf {A}\) outputs a set \(K^*\subset \mathsf {K}\), and let \(1_{K^*}\) denote the random variable which equals one if \(\kappa \in K^*\) and zero otherwise. Then \(\mathbf {A}\)’s polynomial key-set recovery advantage against \(\gamma \) is

$$\begin{aligned} \mathsf {KS}_\gamma (\mathbf {A}){\mathop {=}\limits ^{\mathrm {def}}}\mathbb {E}\left( \frac{1_{K^*}}{\left|K^*\right|}\right) , \end{aligned}$$
(34)

where the randomness is taken over \(\mathbf {A}\) and \(\gamma \). We let \(\mathsf {KS}_\gamma (q)\) denote the maximum of \(\mathsf {KS}_\gamma (\mathbf {A})\) taken over all adversaries making at most q queries.

Let \(\mathbf {C}\) be a key-set recovery adversary. Once \(\mathbf {C}\) has made all its queries, it is possible to compute \(\mathcal {F}_\mathbf {C}\), the random set of false keys given by Eq. (11), and its complement \(\mathcal {T}_\mathbf {C}\). It is then straightforward to construct a key-set recovery adversary \(\mathbf {D}\left\langle \mathbf {C}\right\rangle \) which runs \(\mathbf {C}\) and returns \(\mathcal {T}_\mathbf {C}\). We argue that \(\mathbf {C}\)’s advantage is not greater than \(\mathbf {D}\)’s.

Lemma 5.1

Let \(\mathbf {C}\) and \(\mathbf {D}\left\langle \mathbf {C}\right\rangle \) be defined as above, then

$$\begin{aligned} \mathsf {KS}_\gamma (\mathbf {C})\le \mathsf {KS}_\gamma (\mathbf {D}\left\langle \mathbf {C}\right\rangle ). \end{aligned}$$
(35)

Proof

First note that \(\kappa \), the key underlying the polynomial-based universal hash, must be in \(\mathcal {T}_\mathbf {C}\), since by definition it cannot satisfy any of the equations given in (11). Therefore, if \(\mathbf {C}\)’s output, denoted \(K^*\), contains elements not in \(\mathcal {T}_\mathbf {C}\), then it is possible to improve \(\mathbf {C}\)’s advantage by having \(\mathbf {C}\) output \(K^*\cap \mathcal {T}_\mathbf {C}\), since that would reduce \(\mathbf {C}\)’s output set size without affecting the probability that \(\kappa \) is in the set. Therefore without loss of generality we assume that \(K^*\subset \mathcal {T}_\mathbf {C}\).

Then, given any sequence of q queries that \(\mathbf {C}\) makes, \(\mathcal {T}_\mathbf {C}\) describes exactly those keys which satisfy the transcript, and in particular \(\kappa \) is uniformly distributed over \(\mathcal {T}_\mathbf {C}\). Therefore, if \(K^*\subset \mathcal {T}_\mathbf {C}\), then \(\mathbf {C}\)’s advantage is the same as \(\mathbf {D}\)’s:

$$\begin{aligned} \mathbb {E}\left( \frac{1_{K^*}}{\left|K^*\right|}\right)&= \sum _n\frac{1}{n}\sum _m\mathbb {P}\left[ \kappa \in K^*, \left|K^*\right| = n, \left|\mathcal {T}_\mathbf {C}\right| = m\right] \end{aligned}$$
(36)
$$\begin{aligned}&= \sum _n\frac{1}{n}\sum _m\mathbb {P}\left[ \kappa \in K^*\mid \left|K^*\right|=n, \left|\mathcal {T}_\mathbf {C}\right| = m\right] \mathbb {P}\left[ \left|K^*\right|=n,\left|\mathcal {T}_\mathbf {C}\right| = m\right] \end{aligned}$$
(37)
$$\begin{aligned}&= \sum _n\frac{1}{n}\sum _m \frac{n}{m}\cdot \mathbb {P}\left[ \left|K^*\right|=n,\left|\mathcal {T}_\mathbf {C}\right| = m\right] \end{aligned}$$
(38)
$$\begin{aligned}&= \sum _m\frac{1}{m}\mathbb {P}\left[ \left|\mathcal {T}_\mathbf {C}\right| = m\right] \end{aligned}$$
(39)
$$\begin{aligned}&= \mathbb {E}\left( \frac{1_{\mathcal {T}_\mathbf {C}}}{\left|\mathcal {T}_\mathbf {C}\right|}\right) . \end{aligned}$$
(40)

   \(\square \)
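The equality chain (36)–(40) can also be checked numerically: when \(\kappa \) is uniform on a set \(\mathcal {T}\) and \(K^*\subset \mathcal {T}\) is chosen without reference to \(\kappa \), the two expectations coincide. A quick Monte Carlo sketch, with all set sizes and ranges chosen arbitrarily for illustration:

```python
import random

# Monte Carlo illustration of (36)-(40): with kappa uniform on T and
# K a subset of T drawn independently of kappa's value,
# E(1_K / |K|) equals E(1 / |T|). All parameters are arbitrary.
random.seed(1)
trials = 100_000
acc_K = acc_T = 0.0
for _ in range(trials):
    T = random.sample(range(100), random.randint(5, 20))   # consistent keys
    kappa = random.choice(T)                               # uniform on T
    K = random.sample(T, random.randint(1, len(T)))        # candidate set
    acc_K += (kappa in K) / len(K)
    acc_T += 1 / len(T)
assert abs(acc_K - acc_T) / trials < 5e-3
```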

Since our focus is on optimal, information-theoretic adversaries, without loss of generality we assume that all key-set recovery adversaries return \(\mathcal {T}\).

Given such a key-set recovery adversary \(\mathbf {D}\), we construct single-forgery adversary \(\mathbf {A}\left\langle \mathbf {D}\right\rangle \) as follows:

  1. \(\mathbf {A}\) runs \(\mathbf {D}\), and responds to any \(\mathbf {D}\)-query \((n,m)\) with \(\gamma (n,m)\).

  2. When \(\mathbf {D}\) outputs the candidate set \(\mathcal {T}_\mathbf {D}\), \(\mathbf {A}\) picks \(\ell \) distinct elements uniformly at random from \(\mathcal {T}_\mathbf {D}\) and constructs a polynomial \(p_{m^*}\) with those elements as roots.

  3. \(\mathbf {A}\) picks any previous query \(\gamma (n,m) = t\) made by \(\mathbf {D}\), adds \(m^*\) to \(m\) component-wise to get \(m' = (m_1+m_1^*,m_2+m_2^*,\ldots )\), and submits the forgery attempt \((n,m',t)\).
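A toy version of the last two steps above, under the same illustrative assumptions as before (prime \(P = 251\), convention \(p_m(x) = \sum _{i\ge 1} m_ix^i\); under this convention \(p_{m^*}\) also vanishes at 0, so the exact number of candidate keys absorbed per message block depends on the convention used):

```python
# Toy sketch: build a message m* whose hash polynomial vanishes on a set
# of candidate keys, then forge by component-wise addition. P, the hash
# convention, and all values are illustrative assumptions.
P = 251

def poly_hash(key, blocks):
    """p_m(k) = m_1*k + m_2*k^2 + ... mod P."""
    return sum(b * pow(key, i, P) for i, b in enumerate(blocks, start=1)) % P

def mul_linear(coeffs, r):
    """Multiply a polynomial (lowest-degree coefficient first) by (x - r)."""
    out = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i + 1] = (out[i + 1] + c) % P
        out[i] = (out[i] - r * c) % P
    return out

candidates = [5, 42, 101]              # hypothetical draw from T_D
q_poly = [1]
for r in candidates:
    q_poly = mul_linear(q_poly, r)
m_star = q_poly                        # blocks of m*: p_{m*}(x) = x * prod(x - r)

key, pad = 42, 180                     # suppose the true key is a candidate
m = [9, 9]                             # some earlier query (n, m) with tag t
t = (pad + poly_hash(key, m)) % P
padded = m + [0] * (len(m_star) - len(m))
m_forged = [(a + b) % P for a, b in zip(padded, m_star)]
t_fork = (pad + poly_hash(key, m_forged)) % P
assert t_fork == t                     # the forgery (n, m', t) verifies
```

The forgery succeeds because the hash is linear in the message blocks: \(p_{m+m^*}(\kappa ) = p_m(\kappa ) + p_{m^*}(\kappa )\), and \(p_{m^*}(\kappa ) = 0\) whenever \(\kappa \) lies in the chosen candidate set.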

Naturally this reduction becomes void if the size of \(\mathcal {T}_\mathbf {D}\) is less than \(\ell \); however, as we will see in Sect. 5.2, this can only happen if \(q\) is nearly as large as the number of nonces the adversary can query. We capture this limit on \(q\) with \(M_\gamma \), which is defined to be

$$\begin{aligned} M_\gamma {\mathop {=}\limits ^{\mathrm {def}}}\max \left\{ q \;\middle|\; \min _{\begin{array}{c} m_1,\ldots ,m_q,\\ t_1,\ldots ,t_q \end{array}}\left|\mathcal {T}\right| \ge \ell \right\} . \end{aligned}$$
(41)

The following proposition shows that one can construct better forgeries using key-set recovery adversaries.

Proposition 5.1

Let \(q\le M_\gamma \), then

$$\begin{aligned} \ell \cdot \mathsf {KS}_\gamma (\mathbf {D})\le \mathsf {sAuth}^{\text {old}}_\gamma (\mathbf {A}\left\langle \mathbf {D}\right\rangle ). \end{aligned}$$
(42)

Proof

Let L denote the \(\ell \) elements that \(\mathbf {A}\) picks from \(\mathcal {T}_\mathbf {D}\). Adversary \(\mathbf {A}\) wins if \(\kappa \in L\), since then \(p_{m^*}(\kappa ) = 0\) and so \(p_{m+m^*}(\kappa ) + \pi (n) = t\).

$$\begin{aligned} \mathbb {P}\left[ \kappa \in L\right]&= \sum _n\mathbb {P}\left[ \kappa \in L\mid \left|\mathcal {T}_\mathbf {D}\right| = n, \kappa \in \mathcal {T}_\mathbf {D}\right] \mathbb {P}\left[ \kappa \in \mathcal {T}_\mathbf {D}, \left|\mathcal {T}_\mathbf {D}\right| = n\right] \end{aligned}$$
(43)
$$\begin{aligned}&= \sum _n\frac{\ell }{n}\cdot \mathbb {P}\left[ \kappa \in \mathcal {T}_\mathbf {D}, \left|\mathcal {T}_\mathbf {D}\right| = n\right] \end{aligned}$$
(44)
$$\begin{aligned}&= \ell \cdot \mathbb {E}\left( \frac{1_{\mathcal {T}_\mathbf {D}}}{\left|\mathcal {T}_\mathbf {D}\right|}\right) = \ell \cdot \mathsf {KS}_\gamma (\mathbf {D}). \end{aligned}$$
(45)
(45)

   \(\square \)

Furthermore, there is little real difference between key-recovery and key-set recovery advantage.

Proposition 5.2

$$\begin{aligned} \mathsf {KS}_\gamma (q) = \mathsf {KR}_\gamma (q). \end{aligned}$$
(46)

Proof

If the output set size of a key-set recovery adversary is always one, then key-set recovery advantage is identical to key-recovery advantage. Since any key-recovery adversary can be converted into a key-set recovery adversary with output set size one, we have that \(\mathsf {KR}_\gamma (q)\le \mathsf {KS}_\gamma (q)\).

Given any key-set recovery adversary \(\mathbf {C}\), we convert it into a key-recovery adversary \(\mathbf {C}'\) by picking a candidate key \(k^*\) uniformly at random from the output set \(K^*\). Then

$$\begin{aligned} \mathsf {KR}_\gamma (\mathbf {C}')&= \mathbb {P}\left[ \kappa = k^*\right] \end{aligned}$$
(47)
$$\begin{aligned}&= \sum _n\mathbb {P}\left[ \kappa = k^* \mid \kappa \in K^*,\left|K^*\right| = n\right] \mathbb {P}\left[ \kappa \in K^*,\left|K^*\right| = n\right] \end{aligned}$$
(48)
$$\begin{aligned}&= \sum _n\frac{1}{n}\mathbb {P}\left[ \kappa \in K^*,\left|K^*\right| = n\right] = \mathsf {KS}_\gamma (\mathbf {C}). \end{aligned}$$
(49)
(49)

   \(\square \)

Propositions 4.2, 5.1 and 5.2 establish the following result, confirming that the analysis of Sect. 4.4 is tight.

Corollary 5.1

Let \(q\le M_\gamma \), then

$$\begin{aligned} \ell \cdot \mathsf {KR}_\gamma (q) = \mathsf {sAuth}^{\text {old}}_\gamma (q). \end{aligned}$$
(50)

5.2 Attack Success Probability and Optimality

Our chosen-plaintext attack only uses messages of length one block, which is reflected in the fact that \(\left|\mathcal {F}\right|\) only grows as a function of q. Intuitively, one would expect to be able to enlarge \(\mathcal {F}\) further by taking advantage of longer messages and the fact that polynomials of higher degree have more roots. However, here we show that this is impossible.

The success probability of the key recovery attacks from Sect. 3 is bounded as follows; the bound follows from the observation that the real key cannot be in \(\mathcal {F}\) by definition.

Proposition 5.3

Let \(\mathbf {A}\) denote the chosen-plaintext attack from Sect. 3, then

$$\begin{aligned} \mathsf {KR}_\gamma (\mathbf {A}) \ge \frac{1}{\left|\mathsf {T}\right| - \mathbb {E}\left( \left|\mathcal {F}\right|\right) }. \end{aligned}$$
(51)

Combining this result with Bernstein’s bound, we obtain the following.

Theorem 5.1

Let \(\mathcal {F}\) be defined as in Sect. 3, then

$$\begin{aligned} \mathbb {E}\left( \left|\mathcal {F}\right|\right) \le \frac{q(q+1)}{2}. \end{aligned}$$
(52)

Proof

Using Theorem 4.1, Corollary 5.1, and Proposition 5.3, we have

$$\begin{aligned} \ell \cdot \frac{1}{\left|\mathsf {T}\right| - \mathbb {E}\left( \left|\mathcal {F}\right|\right) } \le \frac{\ell }{\left|\mathsf {T}\right|}\cdot \left( 1 - \frac{q}{\left|\mathsf {T}\right|}\right) ^{-\frac{q+1}{2}}. \end{aligned}$$
(53)

Letting x denote \(\mathbb {E}(\left|\mathcal {F}\right|)\) and \(N = \left|\mathsf {T}\right|\), we have

$$\begin{aligned} \frac{1}{N-x}&\le \frac{1}{N}\left( 1 - \frac{q}{N}\right) ^{-\frac{q+1}{2}}\end{aligned}$$
(54)
$$\begin{aligned} x&\le N\left[ 1 - \left( 1 - \frac{q}{N}\right) ^{\frac{q+1}{2}}\right] . \end{aligned}$$
(55)

We apply Bernoulli’s inequality, namely that \((1+x)^r \ge 1 + rx\) if \(r\ge 1\) and \(x\ge -1\), which holds in our case when \(1\le q\le N\), to get

$$\begin{aligned} \left( 1 - \frac{q}{N}\right) ^{\frac{q+1}{2}} \ge 1 - \frac{q+1}{2}\cdot \frac{q}{N}, \end{aligned}$$
(56)

hence

$$\begin{aligned} x \le \frac{q(q+1)}{2}. \end{aligned}$$
(57)

   \(\square \)
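As a numerical sanity check of the Bernoulli step, the right-hand side of (55) indeed never exceeds \(q(q+1)/2\). A short sketch, with \(N\) and the sampled values of \(q\) chosen arbitrarily:

```python
# Check that N*(1 - (1 - q/N)^((q+1)/2)) <= q*(q+1)/2 for 1 <= q <= N,
# as guaranteed by Bernoulli's inequality. N and the q values are arbitrary.
N = 2**20
for q in [1, 10, 1000, 2**10, 2**19]:
    bound = N * (1 - (1 - q / N) ** ((q + 1) / 2))
    assert bound <= q * (q + 1) / 2
```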

6 Conclusions, Limitations, and Open Problems

Using new analysis and attacks we have shown that, without further restrictions on the adversaries, Bernstein’s analysis is in fact optimal. We can therefore conclude that the data limits imposed by Bernstein’s bounds are necessary.

Our attacks illustrate for the first time how to maximally take advantage of tagging queries without needing verification queries in order to attack WCS authenticators. However, there are limitations on the applicability of the attacks.

As implied by the introduction, our attacks only work against polynomial-based WCS authenticators when they re-use the polynomial key, and they are therefore not applicable to, for example, SNOW 3G [Ber09] or Poly1305 as used in NaCl [Ber09].

The attacks work best when tags are not truncated, since the underlying PRP behaves more like a PRF with increased truncation [GGM18, HWKS98]. However, as pointed out by Ferguson [Fer05] and Mattsson and Westerlund [MW16], one must take care when truncating tags in WCS authenticators. In some cases standards mandate that tags not be truncated [VM06, SMC08, IS09].

The attacks are not directly applicable to constructions which do not follow the WCS authenticator structure of mapping (nm) to \(\pi (n) + \rho (m)\). A few different constructions are discussed by Bernstein [Ber05c] and Handschuh and Preneel [HP08]. In particular, if a PRF instead of a PRP is used to hide the polynomial output, or if multiple PRP calls are XORed together as with CWC [KVW04] and GCM/\(2^+\) [AY12], then the attacks are not applicable; it remains an open problem whether the analyses of the latter constructions are tight.

WCS authenticators can also be instantiated using non-polynomial-based universal hash functions [BHK+99, HK97, EPR99, Joh97, KYS05, Kro06]. We expect that similar attacks are applicable to these functions.

As shown by Luykx et al. [LMP17], the attacks’ success probability will not improve in the multi-key setting.

Finally, although our attacks show that one should abide by Bernstein’s bounds, implementing the attacks seems to require a large amount of storage to achieve significant success probability. It is unclear whether there is a compact way of representing the set of false keys. Alternatively, if one were able to prove lower bounds on the storage requirements for any attacker, one could possibly afford to use keys beyond the data limits recommended by Bernstein’s analysis, assuming adversaries have bounded storage capabilities.