Abstract
In this work we study the security of Chaskey, a recent lightweight MAC designed by Mouha et al., currently being considered for standardization by ISO/IEC and ITU-T. Chaskey uses an ARX structure very similar to that of SipHash. We present the first cryptanalysis of Chaskey in the single-user setting, with a differential-linear attack against 6 and 7 rounds, hinting that the full version of Chaskey with 8 rounds has a rather small security margin. In response to these attacks, a 12-round version has been proposed by the designers.
To improve the complexity of the differential-linear cryptanalysis, we refine a partitioning technique recently proposed by Biham and Carmeli to improve the linear cryptanalysis of addition operations. We also propose an analogous improvement of the differential cryptanalysis of addition operations. Roughly speaking, these techniques reduce the data complexity of linear and differential attacks, at the cost of more processing time per sample. They can be seen as the analogue for ARX ciphers of partial key guessing and partial decryption for SBox-based ciphers.
When applied to the differential-linear attack against Chaskey, this partitioning technique greatly reduces the data complexity, which also results in a reduced time complexity. While a basic differential-linear attack on 7 rounds takes \(2^{78}\) data and time (respectively \(2^{35}\) for 6 rounds), the improved attack requires only \(2^{48}\) data and \(2^{67}\) time (respectively \(2^{25}\) data and \(2^{29}\) time for 6 rounds). We also show an application of the partitioning technique to FEAL-8X, and we hope that this technique will lead to a better understanding of the security of ARX designs.
1 Introduction
Linear cryptanalysis and differential cryptanalysis are the two major cryptanalysis techniques in symmetric cryptography. Differential cryptanalysis was introduced by Biham and Shamir in 1990 [6], by studying the propagation of differences in a cipher. Linear cryptanalysis was discovered in 1992 by Matsui [25, 26], using a linear approximation of the non-linear round function.
In order to apply differential cryptanalysis (respectively, linear cryptanalysis), the cryptanalyst has to build differentials (resp. linear approximations) for each round of a cipher, such that the output difference of a round matches the input difference of the next round (resp. the linear masks match). The probability of the full differential or the imbalance of the full linear approximation is computed by multiplying the probabilities (respectively imbalances) of each round. This yields a statistical distinguisher for several rounds:
- A differential distinguisher is given by a plaintext difference \(\delta _P\) and a ciphertext difference \(\delta _C\), so that the corresponding probability p is non-negligible:
$$\begin{aligned} p = \Pr \big [E(P \oplus \delta _P) = E(P) \oplus \delta _C\big ] \gg 2^{-n}. \end{aligned}$$
The attacker collects \(D = \mathcal {O}(1/p)\) pairs of plaintexts \((P_i,P_i')\) with \(P_i' = P_i \oplus \delta _P\), and checks whether a pair of corresponding ciphertexts satisfies \(C_i' = C_i \oplus \delta _C\). This happens with high probability for the cipher, but with low probability for a random permutation.
- A linear distinguisher is given by a plaintext mask \(\chi _P\) and a ciphertext mask \(\chi _C\), so that the corresponding imbalance \(\varepsilon \) is non-negligible:
$$\begin{aligned} \varepsilon = \left| 2 \cdot \Pr \big [P[\chi _P] = C[\chi _C]\big ] - 1\right| \gg 2^{-n/2}. \end{aligned}$$
The attacker collects \(D = \mathcal {O}(1/\varepsilon ^2)\) known plaintexts \(P_i\) and the corresponding ciphertexts \(C_i\), and computes the observed imbalance \(\hat{\varepsilon }\):
$$\begin{aligned} \hat{\varepsilon }= \left| 2 \cdot \# \left\{ i : P_i[\chi _P] = C_i[\chi _C]\right\} /D - 1\right| . \end{aligned}$$
The observed imbalance is close to \(\varepsilon \) for the attacked cipher, and smaller than \(1/\sqrt{D}\) (with high probability) for a random function.
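As a concrete illustration, the observed imbalance of such a distinguisher can be computed in a few lines. This is a minimal Python sketch with our own naming, not code from the paper:

```python
# Minimal sketch of a linear distinguisher (our own illustration).
# parity(x, mask) computes the XOR of the bits of x selected by mask,
# i.e. the value written x[chi] above.
def parity(x, mask):
    return bin(x & mask).count("1") & 1

def observed_imbalance(pairs, mask_p, mask_c):
    """pairs: (plaintext, ciphertext) samples; returns the observed
    imbalance |2 * #{i : P_i[chi_P] = C_i[chi_C]} / D - 1|."""
    d = len(pairs)
    matches = sum(1 for p, c in pairs if parity(p, mask_p) == parity(c, mask_c))
    return abs(2 * matches / d - 1)
```

For the identity map the imbalance with equal masks is 1, while unrelated masks give an imbalance close to 0, matching the behaviour described above.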
Last Round Attacks. The distinguishers are usually extended to a key-recovery attack on a few more rounds using partial decryption. The main idea is to guess the subkeys of the last rounds, and to compute an intermediate state value from the ciphertext and the subkeys. This makes it possible to apply the distinguisher on the intermediate value: if the subkey guess was correct the distinguisher should succeed, but it is expected to fail for wrong key guesses. In a Feistel cipher, the subkey for one round is usually much shorter than the master key, so that this attack recovers a partial key without considering the remaining bits. This allows a divide-and-conquer strategy where the remaining key bits are recovered by exhaustive search. For an SBox-based cipher, this technique can be applied if the difference \(\delta _C\) or the linear mask \(\chi _C\) only affects a small number of SBoxes, because guessing the key bits affecting those SBoxes is sufficient to invert the last round.
ARX Ciphers. In this paper we study the application of differential and linear cryptanalysis to ARX ciphers. ARX ciphers are a popular category of ciphers built using only additions (\(x \boxplus y\)), bit rotations (\(x \lll n\)), and bitwise xors (\(x \oplus y\)). These simple operations are very efficient in software and in hardware, but they interact in complex ways that make analysis difficult and are expected to provide security. ARX constructions have been used for block ciphers (e.g. TEA, XTEA, FEAL, Speck), stream ciphers (e.g. Salsa20, ChaCha), hash functions (e.g. Skein, BLAKE), and for MAC algorithms (e.g. SipHash, Chaskey).
The only non-linear operation in ARX ciphers is the modular addition. Its linear and differential properties are well understood [14, 24, 29, 32, 33, 37, 39], and differential and linear cryptanalysis have been used to analyze many ARX designs (see for instance the following papers: [4, 8, 16, 21, 22, 26, 40, 41]).
However, there is no simple way to extend differential or linear distinguishers to last-round attacks for ARX ciphers. The problem is that they typically have 32-bit or 64-bit words, but differential and linear characteristics have only a few active bits in each word. Therefore a large portion of the key has to be guessed in order to perform partial decryption, and this does not give efficient attacks.
Besides, differential and linear cryptanalysis usually reach a limited number of rounds in ARX designs because the trails diverge quickly and we don't have good techniques to keep a low number of active bits. This should be contrasted with SBox-based designs, where it is sometimes possible to build iterative trails, or trails with only a few active SBoxes per round. For instance, this is the case for differential characteristics in DES [7] and linear trails in PRESENT [13].
Because of this, cryptanalysis methods that divide a cipher E into two sub-ciphers \(E = E_\bot \circ E_\top \) are particularly interesting for the analysis of ARX designs. In particular this is the case with boomerang attacks [38] and differential-linear cryptanalysis [5, 20]. A boomerang attack uses differentials with probabilities \(p_{\top }\) and \(p_{\bot }\) in \(E_\top \) and \(E_\bot \), to build a distinguisher with complexity \(\mathcal {O}(1/p_{\top }^2p_{\bot }^2)\). A differential-linear attack uses a differential with probability p for \(E_\top \) and a linear approximation with imbalance \(\varepsilon \) for \(E_\bot \) to build a distinguisher with complexity about \(\mathcal {O}(1/p^2\varepsilon ^4)\) (using a heuristic analysis).
Our Results. In this paper, we consider improved techniques to attack ARX ciphers, with application to Chaskey. Since Chaskey has a strong diffusion, we start with differential-linear cryptanalysis, and we study in detail how to build a good differential-linear distinguisher, and how to improve the attack with partial key guesses.
Our main technique follows a recent paper by Biham and Carmeli [3], by partitioning the available data according to some plaintext and ciphertext bits. In each subset, some data bits have a fixed value and we can combine this information with key bit guesses to deduce bits after the key addition. These known bits result in improved probabilities for differential and linear cryptanalysis. While Biham and Carmeli considered partitioning with a single control bit (i.e. two partitions), and only for linear cryptanalysis, we extend this analysis to multiple control bits, and also apply it to differential cryptanalysis.
When applied to differential and linear cryptanalysis, this results in a significant reduction of the data complexity. Alternatively, we can extend the attack to a larger number of rounds with the same data complexity. Those results are very similar to the effect of partial key guess and partial decryption in a last-round attack: we turn a distinguisher into a key recovery attack, and we can add some rounds to the distinguisher. While this can increase the time complexity in some cases, we show that the reduced data complexity usually leads to a reduced time complexity. In particular, we adapt a convolution technique used for linear cryptanalysis with partial key guesses [15] in the context of partitioning.
These techniques result in significant improvements over the basic differential-linear technique: for 7 rounds of Chaskey (respectively 6 rounds), the differential-linear distinguisher requires \(2^{78}\) data and time (respectively \(2^{35}\)), but this can be reduced to \(2^{48}\) data and \(2^{67}\) time (respectively \(2^{25}\) data and \(2^{29}\) time) (see Table 1). The full version of Chaskey has 8 rounds, and is claimed to be secure against attacks with \(2^{48}\) data and \(2^{80}\) time.
The paper is organized as follows: we first explain the partitioning technique for linear cryptanalysis in Sect. 2 and for differential cryptanalysis in Sect. 3. We discuss the time complexity of the attacks in Sect. 4. Then we demonstrate the application of this technique to the differential-linear cryptanalysis of Chaskey in Sect. 5. Finally, we show how to apply the partitioning technique to reduce the data complexity of linear cryptanalysis against FEAL-8X in Appendix A.
2 Linear Analysis of Addition
We first discuss linear cryptanalysis applied to addition operations, and the improvement using partitioning. We describe the linear approximations using linear masks; for instance an approximation for E is written as \(\chi \mathop {\longrightarrow }\limits ^{E} \chi '\) with imbalance \(\varepsilon \), where \(\chi \) and \(\chi '\) are the input and output linear masks (\(x[\chi ]\) denotes \(x[\chi _1] \oplus x[\chi _2] \oplus \cdots \oplus x[\chi _{\ell }]\), where \(\chi = (\chi _1, \ldots , \chi _{\ell })\) and \(x[\chi _i]\) is bit \(\chi _i\) of x), and \(\varepsilon \ge 0\) is the imbalance. We also denote the imbalance of a random variable x as \(\mathcal {I}(x) = 2 \cdot \Pr [x = 0] - 1\), and \(\varepsilon (x) = |\mathcal {I}(x)|\). We will sometimes identify a mask with the integer with the same binary representation, and use a hexadecimal notation.
We first study linear properties of the addition operation, using an ARX cipher E as an example. We denote the word size as w. We assume that the cipher starts with an xor key addition, and a modular addition of two state variables. We denote the remaining operations as \(E'\), and we assume that we know a linear approximation \((\alpha , \beta , \gamma ) \mathop {\longrightarrow }\limits ^{E'} (\alpha ', \beta ', \gamma ')\) with imbalance \(\varepsilon \) for \(E'\). We further assume that the masks are sparse, and don't have adjacent active bits. Following previous works, the easiest way to extend the linear approximation is to use the following masks for the addition:
As shown in Fig. 1, this gives the following linear approximation for E:
In order to explain our technique, we initially assume that \(\alpha \) has a single active bit, i.e. \(\alpha = 2^i\). We explain how to deal with several active bits in Sect. 2.3. If \(i=0\), the linear approximation of the addition has imbalance 1, but for other values of i, it is only 1/2 [39]. In the following we study the case \(i > 0\), where the linear approximation (2) for E has imbalance \(\varepsilon /2\).
2.1 Improved Analysis with Partitioning
We now explain the improved analysis of Biham and Carmeli [3]. A simple way to understand their idea is to look at the carry bits in the addition. More precisely, we study an addition operation \(s = a \boxplus b\), and we are interested in the value of \(s[\alpha ] = s_i\). We assume that \(\alpha = 2^i, i>0\), and that we have a number of input/output pairs. We denote individual bits of a as \(a_0, a_1, \ldots , a_{w-1}\), where \(a_0\) is the LSB (respectively, \(b_i\) for b and \(s_i\) for s). In addition, we consider the carry bits \(c_i\), defined as \(c_0 = 0\), \(c_{i+1} = {{\mathrm{MAJ}}}(a_i, b_i, c_i)\) (where \({{\mathrm{MAJ}}}(a,b,c) = (a \wedge b) \vee (b \wedge c) \vee (c \wedge a)\)). Therefore, we have \(s_i = a_i \oplus b_i \oplus c_i\).
Note that the classical approximation \(s_i = a_i \oplus a_{i-1} \oplus b_i\) holds with probability 3/4 because \(c_i = a_{i-1}\) with probability 3/4. In order to improve this approximation, Biham and Carmeli partition the data according to the value of bits \(a_{i-1}\) and \(b_{i-1}\). This gives four subsets:
- 00 If \((a_{i-1}, b_{i-1}) = (0,0)\), then \(c_i = 0\) and \(s_i = a_i \oplus b_i\).
- 01 If \((a_{i-1}, b_{i-1}) = (0,1)\), then \(\varepsilon (c_i) = 0\) and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 10 If \((a_{i-1}, b_{i-1}) = (1,0)\), then \(\varepsilon (c_i) = 0\) and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 11 If \((a_{i-1}, b_{i-1}) = (1,1)\), then \(c_i = 1\) and \(s_i = a_i \oplus b_i \oplus 1\).
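The four cases can be checked exhaustively for small word sizes. The following Python sketch (word size w = 8 and bit position i = 4 are arbitrary test parameters of ours) measures, in each subset, how often the classical approximation \(s_i = a_i \oplus a_{i-1} \oplus b_i\) holds:

```python
def bit(x, j):
    return (x >> j) & 1

def check_partition(w=8, i=4):
    """For each value of (a_{i-1}, b_{i-1}), return the fraction of pairs
    (a, b) for which s_i = a_i XOR a_{i-1} XOR b_i, with s = a + b mod 2^w."""
    counts = {}
    for a in range(1 << w):
        for b in range(1 << w):
            s = (a + b) & ((1 << w) - 1)
            key = (bit(a, i - 1), bit(b, i - 1))
            ok = bit(s, i) == (bit(a, i) ^ bit(a, i - 1) ^ bit(b, i))
            m, t = counts.get(key, (0, 0))
            counts[key] = (m + ok, t + 1)
    return {k: m / t for k, (m, t) in counts.items()}
```

Subsets 00 and 11 give probability 1 (imbalance 1), and the four subsets average to the classical probability 3/4.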
If bits of a and b are known, filtering the data in subsets 00 and 11 gives a trail for the addition with imbalance 1 over one half of the data, rather than imbalance 1/2 over the full data set. This can be further simplified to the following:
In order to apply this analysis to the setting of Fig. 1, we guess the key bits \(k^x_{i-1}\) and \(k^y_{i-1}\), so that we can compute the values of \(x^1_{i-1}\) and \(y^1_{i-1}\) from \(x^0\) and \(y^0\). More precisely, an attack on E can be performed with a single (logical) key bit guess, using Eq. (3):
If we guess the key bit \(k^x_{i-1} \oplus k^y_{i-1}\), we can filter the data satisfying \(x_{i-1}^0 \oplus y_{i-1}^0 = k^x_{i-1} \oplus k^y_{i-1}\), and we have \(\varepsilon (x_i^2 \oplus x_i^0 \oplus y_i^0 \oplus x_{i-1}^0) = 1\). Therefore the linear approximation (2) has imbalance \(\varepsilon \). We need \(1/\varepsilon ^2\) data after the filtering for the attack to succeed, i.e. \(2/\varepsilon ^2\) in total. The time complexity is also \(2/\varepsilon ^2\) because we run the analysis with \(1/\varepsilon ^2\) data for each key guess. This is an improvement over a simple linear attack using (2) with imbalance \(\varepsilon /2\), with \(4/\varepsilon ^2\) data.
Complexity. In general this partitioning technique multiplies the data and time complexity by the following ratios:
where \(\mu \) is the fraction of data used in the attack, \(\kappa \) is the number of guessed key bits, \(\varepsilon \) is the initial imbalance, and \(\widetilde{\varepsilon }\) is the improved imbalance for the selected subset. For Biham and Carmeli’s attack, we have \(\mu = 1/2\), \(\kappa = 1\) and \(\widetilde{\varepsilon }= 2 \varepsilon \), hence \(R^D_{\text {lin}} = 1/2\) and \(R^T_{\text {lin}} = 1/2\).
2.2 Generalized Partitioning
We now refine the technique of Biham and Carmeli using several control bits. In particular, we analyze cases 01 and 10 with extra control bits \(a_{i-2}\) and \(b_{i-2}\) (some of the cases are shown in Fig. 2):
- 01.00 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (0,1,0,0)\), then \(c_{i-1} = 0\), \(c_i = 0\) and \(s_i = a_i \oplus b_i\).
- 01.01 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (0,1,0,1)\), then \(\varepsilon (c_{i-1}) = 0\), \(\varepsilon (c_i) = 0\), and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 01.10 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (0,1,1,0)\), then \(\varepsilon (c_{i-1}) = 0\), \(\varepsilon (c_i) = 0\), and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 01.11 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (0,1,1,1)\), then \(c_{i-1} = 1\), \(c_i = 1\) and \(s_i = a_i \oplus b_i \oplus 1\).
- 10.00 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (1,0,0,0)\), then \(c_{i-1} = 0\), \(c_i = 0\) and \(s_i = a_i \oplus b_i\).
- 10.01 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (1,0,0,1)\), then \(\varepsilon (c_{i-1}) = 0\), \(\varepsilon (c_i) = 0\), and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 10.10 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (1,0,1,0)\), then \(\varepsilon (c_{i-1}) = 0\), \(\varepsilon (c_i) = 0\), and \(\varepsilon (s_i \oplus a_i \oplus a_{i-1}) = 0\).
- 10.11 If \((a_{i-1}, b_{i-1}, a_{i-2}, b_{i-2}) = (1,0,1,1)\), then \(c_{i-1} = 1\), \(c_i = 1\) and \(s_i = a_i \oplus b_i \oplus 1\).
This yields an improved partitioning because we now have a trail for the addition with imbalance 1 in 12 out of 16 subsets: 00.00, 00.01, 00.10, 00.11, 01.00, 01.11, 10.00, 10.11, 11.00, 11.01, 11.10, 11.11. We can also simplify this case analysis:
This gives an improved analysis of E by guessing more key bits. More precisely, we need the key bits \(k^x_{i-1} \oplus k^y_{i-1}\) and \(k^x_{i-2} \oplus k^y_{i-2}\), as shown below:
Since this analysis yields different input masks for different subsets of the data, we use an analysis following multiple linear cryptanalysis [9]. We first divide the data into four subsets, depending on the value of \(x^0_{i-1} \oplus y^0_{i-1}\) and \(x^0_{i-2} \oplus y^0_{i-2}\), and we compute the measured (signed) imbalance \(\hat{\mathcal {I}}[s]\) of each subset. Then, for each guess of the key bits \(k^x_{i-1} \oplus k^y_{i-1}\), and \(k^x_{i-2} \oplus k^y_{i-2}\), we deduce the expected imbalance \(\mathcal {I}_k[s]\) of each subset, and we compute the distance to the observed imbalance as \(\sum _s (\hat{\mathcal {I}}[s] - \mathcal {I}_k[s])^2\). According to the analysis of Biryukov, De Cannière and Quisquater, the correct key is ranked first (with minimal distance) with high probability when using \(\mathcal {O}(1/ c^2)\) samples, where \(c^2 = \sum _i \mathcal {I}_i^2 = \sum _i \varepsilon _i^2\) is the capacity of the system of linear approximations. Since we use three approximations with imbalance \(\varepsilon \), the capacity of the full system is \(3\varepsilon ^2\), and we need \(1/3 \cdot 1/\varepsilon ^2\) data in each subset after partitioning, i.e. \(4/3 \cdot 1/\varepsilon ^2\) in total.
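The ranking step described above can be sketched generically. In this illustrative Python fragment (function names are ours), each key guess is scored by the squared distance between observed and expected per-subset imbalances, and the guesses are ranked by increasing distance:

```python
def rank_keys(observed, expected_for_key, keys):
    """observed: dict subset -> measured signed imbalance.
    expected_for_key(k): dict subset -> imbalance expected under guess k.
    Returns the key guesses sorted by the distance sum_s (obs - exp)^2."""
    def dist(k):
        exp = expected_for_key(k)
        return sum((observed[s] - exp[s]) ** 2 for s in observed)
    return sorted(keys, key=dist)
```

With \(\mathcal {O}(1/c^2)\) samples, the correct key is expected to come first with high probability.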
Again, the complexity ratios of this analysis can be computed as \(R^D_{\text {lin}} = \varepsilon ^2/\mu \widetilde{\varepsilon }^2\) and \(R^T_{\text {lin}} = 2^{\kappa } \varepsilon ^2/ \widetilde{\varepsilon }^2\). With \(\mu = 3/4\) and \(\widetilde{\varepsilon }= 2 \varepsilon \), we find:
The same technique can be used to refine the partitioning further, and give a complexity ratio of \(R^D_{\text {lin}} = 1/4 \times 2^\kappa /(2^\kappa -1)\) when guessing \(\kappa \) bits.
Time complexity. In general, the time complexity of this improved partitioning technique is the same as that of the basic attack (\(R^T_{\text {lin}} = 1\)), because we have to repeat the analysis 4 times (once for each guess of the key bits) with one fourth of the amount of data. We describe some techniques to reduce the time complexity in Sect. 4.
2.3 Combining Partitions
Finally, we can combine several partitions to analyze an addition with several active bits. If we use \(k_1\) partitions for the first bit, and \(k_2\) for the second bit, this yields a combined partition with \(k_1 \cdot k_2\) cases. If the bits are not close to each other, the gains of each bit are multiplied. This can lead to significant improvements even though \(R_{\text {lin}}\) is small for a single active bit.
For more complex scenarios, we select the filtering bits assuming that the active bits don't interact, and we evaluate experimentally the probability in each subset. We can further study the matrix of probabilities to detect (logical) bits with little or no effect on the total capacity, in order to improve the complexity of the attack. This will be used for our applications in Sect. 5 and Appendix A.
3 Differential Analysis of Addition
We now study differential properties of the addition. We perform our analysis in the same way as the analysis of Sect. 2, following Fig. 3. We consider the first addition operation separately, and we assume that we know a differential \((\alpha , \beta , \gamma ) \rightarrow (\alpha ', \beta ', \gamma ')\) with probability p for the remainder of the cipher. Following previous works, a simple way to extend the differential is to linearize the first addition, yielding the following differences for the addition:
Similarly to our analysis of linear cryptanalysis, we consider a single addition \(s = a \boxplus b\), and we first assume that a single bit is active through the addition. However, we have to consider several cases, depending on how many input/output bits are active. The cases are mostly symmetric, but there are important differences in the partitioning.
3.1 Analysis of \((\alpha = 0, \beta = 2^i)\)
With \(i<w-1\), the probability for the addition is \(\Pr [(2^i, 2^i) \mathop {\longrightarrow }\limits ^{\boxplus } 0] = 1/2\).
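This probability can be confirmed exhaustively for small words; a quick Python check (w = 8 and i = 3 are test parameters of ours):

```python
def diff_prob(w=8, i=3):
    """Exhaustively measure Pr[(2^i, 2^i) -> 0] for addition mod 2^w:
    both inputs have bit i flipped, and we count unchanged sums."""
    mask, d = (1 << w) - 1, 1 << i
    hits = sum(((a + b) & mask) == (((a ^ d) + (b ^ d)) & mask)
               for a in range(1 << w) for b in range(1 << w))
    return hits / (1 << (2 * w))
```

The sum is unchanged exactly when \(a_i \ne b_i\) (the two induced carries cancel), i.e. with probability 1/2.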
Improved Analysis with Structures. We first discuss a technique using multiple differentials and structures. More precisely, we use the following differentials for the addition:
We can improve the probability of \(\mathcal {D}_2\) using a partitioning according to \((a_i,a_{i+1})\):
- 00 If \((a_{i}, a_{i+1}) = (0,0)\), then \(a' = a \boxplus 2^{i} \boxplus 2^{i+1}\) and \(s \ne s'\).
- 01 If \((a_{i}, a_{i+1}) = (0,1)\), then \(a' = a \boxminus 2^i\) and \(\Pr [s = s'] = 1/2\).
- 10 If \((a_{i}, a_{i+1}) = (1,0)\), then \(a' = a \boxplus 2^i\) and \(\Pr [s = s'] = 1/2\).
- 11 If \((a_{i}, a_{i+1}) = (1,1)\), then \(a' = a \boxminus 2^i \boxminus 2^{i+1}\) and \(s \ne s'\).
This can be written as:
The use of structures makes it possible to build pairs of data for both differentials from the same data set. More precisely, we consider the following inputs:
We see that (p, q) and (r, s) follow the input difference of \(\mathcal {D}_1\), while (p, s) and (r, q) follow the input difference of \(\mathcal {D}_2\). Moreover, we have from the partitioning:
For each key guess, we select three candidate pairs out of a structure of four plaintexts, and every pair follows a differential for E with probability p/2. Therefore we need 2/p pairs, with a data complexity of \(8/3 \cdot 1/p\) rather than \(4 \cdot 1/p\).
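The partitioning of \(\mathcal {D}_2\) listed above can also be verified exhaustively. In the sketch below we use the xor input difference \((2^i \oplus 2^{i+1},\, 2^i)\) for \(\mathcal {D}_2\) and output difference 0, which is consistent with the four listed cases (the exact differentials are given in equations not reproduced here); w = 8 and i = 3 are test parameters of ours:

```python
def d2_partition(w=8, i=3):
    """Fraction of pairs with s = s' in each subset (a_i, a_{i+1}),
    for xor input differences 2^i ^ 2^{i+1} on a and 2^i on b."""
    mask, da, db = (1 << w) - 1, (1 << i) | (1 << (i + 1)), 1 << i
    res = {}
    for a in range(1 << w):
        for b in range(1 << w):
            key = ((a >> i) & 1, (a >> (i + 1)) & 1)
            hit = ((a + b) & mask) == (((a ^ da) + (b ^ db)) & mask)
            m, t = res.get(key, (0, 0))
            res[key] = (m + hit, t + 1)
    return {k: m / t for k, (m, t) in res.items()}
```

As predicted by the case analysis, subsets 00 and 11 never give \(s = s'\), while subsets 01 and 10 do so with probability exactly 1/2.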
In general this partitioning technique multiplies the data and time complexity by the following ratios:
where \(\mu \) is the fraction of data used in the attack, \(\kappa \) is the number of guessed key bits, T is the number of plaintexts in a structure (we consider \(T^2/4\) pairs, rather than T/2 without structures), p is the initial probability, and \(\widetilde{p}\) is the improved probability for the selected subset. Here we have \(\mu = 3/4\), \(\kappa = 1\), \(T = 4\), and \(\widetilde{p} = p\), hence
Moreover, if the differential trail is used in a boomerang attack, or in a differential-linear attack, it impacts the complexity twice, but the involved key bits are the same, and we only need to use the structure once. Therefore, the complexity ratio should be evaluated as:
In this scenario, we have the same ratios:
Generalized Partitioning. We can refine the analysis of the addition by partitioning according to \((b_i)\). This gives the following:
This gives an attack with \(T=4\), \(\mu = 3/8\), \(\kappa =2\) and \(\widetilde{p} = 2 p\), which yields the same ratio in a simple differential setting, but a better ratio for a boomerang or differential-linear attack:
In addition, this analysis makes it possible to recover an extra key bit, which can be useful for further steps of an attack.
Larger Structure. Alternatively, we can use a larger structure to reduce the complexity: with a structure of size \(2^t\), we have an attack with a ratio \(R_{\text {diff}}^D = 1/2 \times 2^{\kappa }/(2^{\kappa }-1)\), by guessing \(\kappa -1\) key bits.
3.2 Analysis of \((\alpha = 2^i, \beta = 0)\)
With \(i<w-1\), the probability for the addition is \(\Pr [(2^i, 0) \mathop {\longrightarrow }\limits ^{\boxplus } 2^i] = 1/2\).
Improved Analysis with Structures. As in the previous section, we consider multiple differentials, and use partitioning to improve the probability:
We also use structures in order to build pairs of data for both differentials from the same data set. More precisely, we consider the following inputs:
We see that (p, q) and (r, s) follow the input difference of \(\mathcal {D}_1\), while (p, s) and (r, q) follow the input difference of \(\mathcal {D}_2\). Moreover, we have from the partitioning:
In this case, we also have \(\mu = 3/4\), \(T=4\), and \(\widetilde{p} = p\), hence
Generalized Partitioning. Again, we can refine the analysis of the addition by partitioning according to \((s_i)\). This gives the following:
Since we cannot readily filter according to bits of s, we use the results of Sect. 2:
This gives:
Unfortunately, we can only use a small fraction of the pairs: \(\mu = 3/16\). With \(T = 4\) and \(\widetilde{p} = 2 p\), this yields an increase of the data complexity for a simple differential attack:
3.3 Analysis of \((\alpha = 2^i, \beta = 2^i)\)
With \(i<w-1\), the probability for the addition is \(\Pr [(0, 2^i) \mathop {\longrightarrow }\limits ^{\boxplus } 2^i] = 1/2\).
The results in this section will be the same as in the previous section, but we have to use a different structure. Indeed when this analysis is applied to E, we can freely modify the difference in \(x^0\) but not in \(y^0\), because it would affect the differential in \(E'\).
More precisely, we use the following differentials:
and the following structure:
This yields:
4 Improving the Time Complexity
The analysis of the previous sections assumes that we repeat the distinguisher for each key guess, so that the data complexity is reduced in a very generic way. When this is applied to differential or linear cryptanalysis, it usually results in an increased time complexity (\(R^T > 1\)). However, when the distinguisher is a simple linear or differential distinguisher, we can perform the analysis in a more efficient way, using the same techniques that are used in attacks with partial key guesses against SBox-based ciphers. For linear cryptanalysis, we use a variant of Matsui's Algorithm 2 [25], and the improvement using a convolution algorithm [15]; for differential cryptanalysis we filter out pairs that cannot be a right pair for any key. In the best cases, the time complexity of the attacks can be reduced to essentially the data complexity.
4.1 Linear Analysis
We follow the analysis of Matsui’s Algorithm 2, with a distillation phase using counters to keep track of the important features of the data, and an analysis phase for every key that requires only the counters rather than the full dataset.
More precisely, let us explain this idea within the setting of Sect. 2.2 and Fig. 1. For each key guess, the attacker computes the observed imbalance over a subset \(\mathcal {S}_k\) corresponding to the data with \(x^0_{i-1} \oplus y^0_{i-1} = k^x_{i-1} \oplus k^y_{i-1}\), or \(\left( x^0_{i-1} \oplus y^0_{i-1} \ne k^x_{i-1} \oplus k^y_{i-1} ~\text {and}~ x^0_{i-2} \oplus y^0_{i-2} = k^x_{i-2} \oplus k^y_{i-2}\right) \):
Therefore, the imbalance can be efficiently reconstructed from a series of \(2^4\) counters keeping track of the amount of data satisfying every possible value of the following bits:
This results in an attack where the time complexity is equal to the data complexity, plus a small cost to compute the imbalance. The analysis phase requires only about \(2^6\) operations in this case (adding \(2^4\) counters for \(2^2\) key guesses). When the amount of data required is larger than \(2^6\), the analysis step is negligible.
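The distillation phase can be sketched generically; the helper below (our own illustration, with hypothetical naming) buckets the data once by its control bits, after which every key guess is evaluated from the counters alone:

```python
def distill(data, control_bits):
    """data: iterable of state words; control_bits: the b bit positions used
    for partitioning and for the linear approximation. Returns 2^b counters,
    one per possible value of the extracted bits."""
    counters = [0] * (1 << len(control_bits))
    for x in data:
        idx = 0
        for j, pos in enumerate(control_bits):
            idx |= ((x >> pos) & 1) << j
        counters[idx] += 1
    return counters
```

A key guess then only needs to add up the counters of the subsets it selects, instead of reprocessing the full dataset.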
When several partitions are combined (with several active bits in the first additions), the number of counters increases to \(2^b\), where b is the number of control bits. To reduce the complexity of the analysis phase, we can use a convolution algorithm (following [15]), so that the cost of the analysis is only \(\mathcal {O}(b \cdot 2^b)\) rather than \(\mathcal {O}(2^\kappa \cdot 2^b)\). This will be explained in more detail with the application to Chaskey in Sect. 5.
In general, there is a trade-off between the number of partitioning bits, and the complexity. A more precise partitioning allows to reduce the data complexity, but this implies a larger set of counters, hence a larger memory complexity. When the number of partitioning bits reaches the data complexity, the analysis phase becomes the dominant phase, and the time complexity is larger than the data complexity.
4.2 Differential Analysis
For a differential attack with partitioning, we can also reduce the time complexity, by filtering pairs before the analysis phase. In the following, we assume that we use a simple differential distinguisher with output difference \(\delta '\), following Sect. 3 (where \(\delta ' = (\alpha ', \beta ', \gamma ')\)).
We first define a linear function L with rank \(n-1\) (where n is the block size), so that \(L(\delta ') = 0\). In particular, any pair \(x, x' = x \oplus \delta '\) satisfies \(L(x) = L(x')\). This allows to detect collisions by looking at all values in a structure, rather than all pairs in a structure. We just compute L(E(x)) for all x’s in a structure, and we look for collisions.
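One simple way to build such an L (a sketch of ours; the paper does not fix a particular construction here) is to pick a set bit of \(\delta '\) as a pivot, clear it, and fold its value into the other active bits of \(\delta '\):

```python
def make_filter(delta, n):
    """Return an F_2-linear map L of rank n-1 with L(delta) = 0, for delta != 0.
    Any pair x, x' = x XOR delta then satisfies L(x) = L(x')."""
    pivot = delta.bit_length() - 1           # highest set bit of delta
    rest = delta & ~(1 << pivot)             # the other active bits of delta
    def L(x):
        p = (x >> pivot) & 1
        y = x & ~(1 << pivot)                # drop the pivot bit
        return y ^ rest if p else y          # fold it into the remaining bits
    return L
```

Collisions under L then reveal candidate right pairs: sorting the values L(E(x)) over a structure is much cheaper than testing all pairs.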
5 Application to Chaskey
Chaskey is a recent MAC proposal designed jointly by researchers from COSIC and Hitachi [31]. The mode of operation of Chaskey is based on CBC-MAC with an Even-Mansour cipher; but it can also be described as a permutation-based design, as seen in Fig. 4. Chaskey is designed to be extremely fast on 32-bit micro-controllers, and the internal permutation follows an ARX construction with four 32-bit words based on SipHash; it is depicted in Fig. 5. Since the security of Chaskey is based on an Even-Mansour cipher, the security bound has a birthday term \(\mathcal {O}(TD \cdot 2^{-128})\). More precisely, the designers claim that it should be secure up to \(2^{48}\) queries, and \(2^{80}\) computations.
So far, the only external cryptanalysis results on Chaskey are generic attacks in the multi-user setting [27]. The only analysis of the permutation is in the submission document; the best result is a 4-round bias, which can probably be extended into a 5-round attack following the method of attacks against the Salsa family [1]. It is important to try more advanced techniques in order to understand the security of Chaskey, in particular because it is being considered for standardization.
5.1 Differential-Linear Cryptanalysis
The best differential characteristics found by the designers of Chaskey quickly become unusable when the number of rounds increases (see Table 2). The designers also report that those characteristics have an "hourglass structure": there is a position in the middle where a single bit is active, and this small difference is expanded by the avalanche effect when propagating in both directions. This is typical of ARX designs: short characteristics have a high probability, but after a few rounds the differences cannot be controlled and the probability decreases very fast. The same observation typically also holds for linear trails.
Because of these properties, attacks that divide the cipher E into two parts \(E=E_\bot \circ E_\top \) and build characteristics or trails for both halves independently – such as the boomerang attack or differential-linear cryptanalysis – are particularly interesting. In particular, many attacks on ARX designs are based on the boomerang attack [10, 19, 23, 28, 35, 42] or differential-linear cryptanalysis [18]. Since Chaskey never uses the inverse permutation, we cannot apply a boomerang attack, and we focus on differential-linear cryptanalysis.
Differential-linear cryptanalysis uses a differential \(\delta _i \mathop {\longrightarrow }\limits ^{E_\top } \delta _o\) with probability p for \(E_\top \), and a linear approximation \(\chi _i \mathop {\longrightarrow }\limits ^{E_\bot } \chi _o\) with imbalance \(\varepsilon \) for \(E_\bot \) (see Fig. 6). The attacker uses pairs of plaintexts \((P_i, P_i')\) with \(P_i' = P_i \oplus \delta _i\), and computes the observed imbalance of \(\chi _o \cdot E(P_i) \oplus \chi _o \cdot E(P_i')\). Following the heuristic analysis of [5], the expected imbalance is about \(p\varepsilon ^2\), which gives an attack complexity of \(\mathcal {O}(2/p^2\varepsilon ^4)\):
- A pair of plaintexts satisfies \(E_\top (P) \oplus E_\top (P') = \delta _o\) with probability p. In this case, we have \(\chi _i \cdot E_\top (P) \oplus \chi _i \cdot E_\top (P') = \chi _i \cdot \delta _o\). Without loss of generality, we assume that \(\chi _i \cdot \delta _o = 0\).
- Otherwise, we expect that \(\chi _i \cdot E_\top (P) \oplus \chi _i \cdot E_\top (P')\) is not biased. This gives the following:
$$\Pr \left[ \chi _i \cdot E_\top (P) \oplus \chi _i \cdot E_\top (P') = 0\right] \approx p + (1-p)/2 \qquad (8)$$
$$\mathcal {I}\left( \chi _i \cdot E_\top (P) \oplus \chi _i \cdot E_\top (P')\right) \approx p \qquad (9)$$
- We also have \(\mathcal {I}(\chi _i \cdot E_\top (P) \oplus \chi _o \cdot E(P)) = \varepsilon \) and \(\mathcal {I}(\chi _i \cdot E_\top (P') \oplus \chi _o \cdot E(P')) = \varepsilon \) from the linear approximations. Combining with (9), we get \(\mathcal {I}(\chi _o \cdot E(P) \oplus \chi _o \cdot E(P')) \approx p \varepsilon ^2\).
A more rigorous analysis has been recently provided by Blondeau et al. [12], but since we use experimental values to evaluate the complexity of our attacks, this heuristic explanation will be sufficient.
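The heuristic above can be illustrated with a toy Monte-Carlo simulation, where independent biased bits stand in for \(\chi _i \cdot X \oplus \chi _i \cdot X'\) and the two linear approximations (all parameter values here are illustrative, not taken from the attack; the model also assumes independence, which is exactly the heuristic being tested):

```python
import random

def biased_bit(imb):
    """Return a bit that is 0 with probability (1 + imb) / 2."""
    return 0 if random.random() < (1 + imb) / 2 else 1

def simulate(p, eps, n=1 << 16):
    """Observed imbalance of chi_o.C ^ chi_o.C' in the heuristic model."""
    zeros = 0
    for _ in range(n):
        z = biased_bit(p)    # chi_i.X  ^ chi_i.X'  -- imbalance p, as in (9)
        a = biased_bit(eps)  # chi_i.X  ^ chi_o.C   -- linear approximation
        b = biased_bit(eps)  # chi_i.X' ^ chi_o.C'  -- linear approximation
        zeros += (z ^ a ^ b) == 0
    return 2 * zeros / n - 1

random.seed(5)
obs = simulate(p=0.25, eps=0.5)
# piling-up predicts an imbalance close to p * eps^2 = 0.0625
```

The XOR of the three (modeled as independent) bits has imbalance equal to the product of the individual imbalances, which is the piling-up argument behind \(p\varepsilon ^2\).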
5.2 Using Partitioning
A differential-linear distinguisher can easily be improved using the results of Sects. 2 and 3. We can improve the differential and linear part separately, and combine the improvements on the differential-linear attack. More precisely, we have to consider structures of plaintexts, and to guess some key bits in the differential and linear parts. We partition all the potential pairs in the structures according to the input difference, and to the filtering bits in the differential and linear part; then we evaluate the observed imbalance \(\hat{\mathcal {I}}[s]\) in every subset s. Finally, for each key guess k, we compute the expected imbalance \(\mathcal {I}_k[s]\) for each subset s, and then we evaluate the distance between the observed and expected imbalances as \(L(k) = \sum _s (\hat{\mathcal {I}}[s] - \mathcal {I}_k[s])^2\) (following the analysis of multiple linear cryptanalysis [9]).
While we follow the analysis of multiple linear cryptanalysis to evaluate the complexity of our attack, we use each linear approximation on a different subset of the data, partitioned according to the filtering bits. In particular, we don’t have to worry about the independence of the linear approximations.
If we use structures of size T, and select a fraction \(\mu _{\text {diff}}\) of the input pairs with an improved differential probability \(\widetilde{p}\), and a fraction \(\mu _{\text {lin}}\) of the output pairs with an improved linear imbalance \(\widetilde{\varepsilon }\), the data complexity of the attack is \( \mathcal {O}(\mu _{\text {lin}}\mu _{\text {diff}}^2 T/2 \times 2/\widetilde{p}^2 \widetilde{\varepsilon }^4) \). This corresponds to a complexity ratio of \(R_{\text {diff-2}}^D {R^D_{\text {lin}}}^2\).
More precisely, using differential filtering bits \(p_{\text {diff}}\) and linear filtering bits \(c_{\text {lin}}\), the subsets are defined by the input difference \(\varDelta \), the plaintext bits \(P[p_{\text {diff}}]\) and the ciphertext bits \(C[c_{\text {lin}}]\) and \(C'[c_{\text {lin}}]\), with \(C = E(P)\) and \(C'=E(P \oplus \varDelta )\). In practice, for every \(P, P'\) in a structure, we update the value of \(\hat{\mathcal {I}}[P \oplus P', P[p_{\text {diff}}], C[c_{\text {lin}}], C'[c_{\text {lin}}]]\).
We also take advantage of the Even-Mansour construction of Chaskey, without keys inside the permutation. Indeed, the filtering bits used to define the subsets s correspond to the key bits used in the attack. Therefore, we only need to compute the expected imbalance for the zero key, and we can deduce the expected imbalance for an arbitrary key as \(\mathcal {I}_{k_{\text {diff}},k_{\text {lin}}}[\varDelta , p, c, c'] = \mathcal {I}_0[\varDelta , p \oplus k_{\text {diff}}, c \oplus k_{\text {lin}}, c' \oplus k_{\text {lin}}]\).
Time Complexity. This description leads to an attack with low time complexity using an FFT algorithm, as described previously for linear cryptanalysis [15] and multiple linear cryptanalysis [17]. Indeed, the distance between the observed and expected imbalances can be written as:
$$L(k) = \sum _s (\hat{\mathcal {I}}[s] - \mathcal {I}_k[s])^2 = \sum _s \hat{\mathcal {I}}[s]^2 + \sum _s \mathcal {I}_k[s]^2 - 2 \sum _s \hat{\mathcal {I}}[s] \, \mathcal {I}_k[s],$$
where only the last term depends on the key (the first two terms are key-independent, because \(\mathcal {I}_k\) is just a permutation of \(\mathcal {I}_0\)). Moreover, this term can be seen as the \(\phi (k)\)-th component of the convolution \(\mathcal {I}_0 *\hat{\mathcal {I}}\). Using the convolution theorem, we can compute the convolution efficiently with an FFT algorithm.
This gives the following fast analysis:
1. Compute the expected imbalance \(\mathcal {I}_0[s]\) of the differential-linear distinguisher for the zero key, for every subset s.
2. Collect D plaintext-ciphertext pairs, and compute the observed imbalance \(\hat{\mathcal {I}}[s]\) of each subset.
3. Compute the convolution \(\mathcal {I}_0 *\hat{\mathcal {I}}\), and find the k that maximizes the \(\phi (k)\)-th coefficient.
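The steps above can be sketched in code. Since the subset index is shifted by a XOR with the key, the convolution here is over \((\mathbb {Z}/2)^n\), and the “FFT” is a Walsh-Hadamard transform. A minimal sketch with a toy table size; the imbalance tables \(\mathcal {I}_0\) and \(\hat{\mathcal {I}}\) below are made-up placeholders, not values from the attack:

```python
def wht(v):
    """Walsh-Hadamard transform: the FFT over (Z/2)^n."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def xor_convolution(a, b):
    """c[t] = sum_s a[s] * b[s ^ t], via the convolution theorem."""
    n = len(a)
    A, B = wht(a), wht(b)
    return [x / n for x in wht([x * y for x, y in zip(A, B)])]

# Step 3 of the attack: score every key guess at once; the best guess
# maximizes the phi(k)-th coefficient of the convolution.
I0   = [0.5, -0.1, 0.2, 0.0, 0.1, -0.3, 0.0, 0.4]   # expected imbalances (zero key)
Ihat = [0.4, 0.0, 0.3, -0.1, 0.1, -0.2, 0.1, 0.3]   # observed imbalances
scores = xor_convolution(I0, Ihat)
best = max(range(len(scores)), key=lambda t: scores[t])
```

The naive evaluation of all key guesses costs \(N^2\) table look-ups, while the transform-based version costs \(\mathcal {O}(N \log N)\), matching the \(s \times 2^s\) operation counts quoted below.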
5.3 Differential-Linear Cryptanalysis of Chaskey
In order to find good differential-linear distinguishers for Chaskey, we use a heuristic approach. We know that most good differential characteristics and good linear trails have an “hourglass structure”, with a single active bit in the middle. If a good differential-linear characteristic follows this “hourglass structure”, we can divide E in three parts \(E = E_\bot \circ E_m \circ E_\top \), so that the single active bit in the differential characteristic falls between \(E_\top \) and \(E_m\), and the single active bit in the linear trail falls between \(E_m\) and \(E_\bot \). We use this decomposition to look for good differential-linear characteristics: we first divide E in three parts, and we look for a differential characteristic \(\delta _i \mathop {\longrightarrow }\limits ^{E_\top } \delta _o\) in \(E_\top \) (with probability p), a differential-linear characteristic \(\delta _o \mathop {\longrightarrow }\limits ^{E_m} \chi _i\) in \(E_m\) (with imbalance b), and a linear characteristic \(\chi _i \mathop {\longrightarrow }\limits ^{E_\bot } \chi _o\) in \(E_{\bot }\) (with imbalance \(\varepsilon \)), where \(\delta _o\) and \(\chi _i\) have a single active bit. This gives a differential-linear distinguisher with imbalance close to \(b p \varepsilon ^2\):
- We consider a pair of plaintexts \((P,P')\) with \(P' = P \oplus \delta _i\), and we denote \(X = E_\top (P)\), \(Y = E_m(X)\), \(C = E_\bot (Y)\) (and similarly \(X'\), \(Y'\), \(C'\) for \(P'\)).
- We have \(X \oplus X' = \delta _o\) with probability p. In this case, \(\mathcal {I}(\chi _i \cdot Y \oplus \chi _i \cdot Y') = b\).
- Otherwise, we expect that \(\chi _i \cdot Y \oplus \chi _i \cdot Y'\) is not biased. This gives the following:
$$\Pr \left[ \chi _i \cdot Y \oplus \chi _i \cdot Y' = 0\right] \approx p \cdot (1+b)/2 + (1-p)/2 \qquad (10)$$
$$\mathcal {I}\left( \chi _i \cdot Y \oplus \chi _i \cdot Y'\right) \approx p b \qquad (11)$$
- We also have \(\mathcal {I}(\chi _i \cdot Y \oplus \chi _o \cdot C) = \varepsilon \) and \(\mathcal {I}(\chi _i \cdot Y' \oplus \chi _o \cdot C') = \varepsilon \) from the linear approximations. Combining with (11), we get \(\mathcal {I}(\chi _o \cdot C \oplus \chi _o \cdot C') \approx p b \varepsilon ^2\).
In this decomposition, we can see the middle characteristic \(\delta _o \mathop {\longrightarrow }\limits ^{E_m} \chi _i\) either as a small differential-linear characteristic with a single active input bit and a single active output bit, or as a truncated differential where the input difference has a single active bit and the output value is truncated to a single bit. In other words, we use pairs of values with a single bit difference, and we look for a biased output bit difference.
We ran an exhaustive search over all possible decompositions (varying the number of rounds), and all possible positions for the active bit i at the input of \(E_m\) and the biased bitFootnote 5 j at the output of \(E_m\). For each candidate, we evaluate the imbalance \(\mathcal {I}(Y[j] \oplus Y'[j])\) experimentally, and we study the best differential and linear trails to build the full differential-linear distinguisher. This method is similar to the analysis of the Salsa family by Aumasson et al. [1]: they decompose the cipher in two parts \(E = E_\bot \circ E_\top \), in order to combine a biased bit in \(E_\top \) with an approximation of \(E_{\bot }\).
This approach allows us to identify good differential-linear distinguishers more easily than by building full differential and linear trails. In particular, we avoid most of the heuristic problems in the analysis of differential-linear distinguishers (such as the presence of multiple good trails in the middle) by evaluating the imbalance of \(E_m\) experimentally, without looking for explicit trails in the middle. Indeed, the transition between \(E_\top \) and \(E_m\) is a transition between two differential characteristics, while the transition between \(E_m\) and \(E_\bot \) is a transition between two linear characteristics.
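This kind of experimental evaluation can be sketched as follows. The round function below is our transcription of the Chaskey permutation from [31] (the rotation constants should be double-checked against the specification), and applying it to a full random state is a simplification of the exact decomposition boundaries used in the attack:

```python
import random

M = 0xFFFFFFFF
def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & M

def chaskey_round(v):
    # One round of the Chaskey permutation, as we transcribe it from [31].
    v0, v1, v2, v3 = v
    v0 = (v0 + v1) & M; v1 = rotl(v1, 5) ^ v0; v0 = rotl(v0, 16)
    v2 = (v2 + v3) & M; v3 = rotl(v3, 8) ^ v2
    v0 = (v0 + v3) & M; v3 = rotl(v3, 13) ^ v0
    v2 = (v2 + v1) & M; v1 = rotl(v1, 7) ^ v2; v2 = rotl(v2, 16)
    return [v0, v1, v2, v3]

def middle_imbalance(rounds, in_word, in_bit, out_word, out_bit, trials=1 << 12):
    """Estimate I(Y[j] ^ Y'[j]) over pairs with a single-bit input difference."""
    count = 0
    for _ in range(trials):
        x = [random.getrandbits(32) for _ in range(4)]
        xp = list(x)
        xp[in_word] ^= 1 << in_bit
        y, yp = x, xp
        for _ in range(rounds):
            y, yp = chaskey_round(y), chaskey_round(yp)
        count += ((y[out_word] ^ yp[out_word]) >> out_bit) & 1
    return abs(2 * count / trials - 1)

# e.g. a 4-round middle part with input difference v2[22] and output bit v2[16]
est = middle_imbalance(4, 2, 22, 2, 16)
```

The search then simply loops over all candidate round splits and bit positions, keeping the candidates with the largest estimated imbalance.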
5.4 Attack Against 6-Round Chaskey
The best distinguisher we identified for an attack against 6-round Chaskey uses 1 round in \(E_\top \), 4 rounds in \(E_m\), and 1 round in \(E_\bot \). The optimal differences and masks are:
- Differential for \(E_\top \) with probability \(p_{\top } \approx 2^{-5}\):
$$\begin{aligned} v_0[26], v_1[26], v_2[6,23,30], v_3[23,30]&\mathop {\longrightarrow }\limits ^{E_\top } v_2[22] \\ \end{aligned}$$
- Biased bit for \(E_m\) with imbalance \(b = 2^{-6.05}\): \(v_2[22] \mathop {\longrightarrow }\limits ^{E_m} v_2[16]\)
- Linear approximations for \(E_\bot \) with imbalance \(\varepsilon _{\bot } = 2^{-2.6}\):
$$\begin{aligned} v_2[16]&\mathop {\longrightarrow }\limits ^{E_\bot } v_0[5], v_1[23,31], v_2[0,8,15], v_3[5] \end{aligned}$$
The differential and linear trails are shown in Fig. 7. The expected imbalance is \(p_\top \cdot b \cdot \varepsilon _\bot ^2 = 2^{-5} \cdot 2^{-6.05} \cdot 2^{-5.2} = 2^{-16.25}\). This gives a differential-linear distinguisher with expected complexity in the order of \(2/(p_\top \, b \, \varepsilon _\bot ^2)^2 = 2^{33.5}\) pairs of samples.
We can estimate the data complexity more accurately using [11, Eq. (11)]: we need about \(2^{34.1}\) pairs of samples in order to reach a false positive rate of \(2^{-4}\). Experimentally, with \(2^{34}\) pairs of samples (i.e. \(2^{35}\) data), the measured imbalance is larger than \(2^{-16.25}\) with probability 0.5; with random data, it is larger than \(2^{-16.25}\) with probability 0.1. This matches the predictions of [11], and confirms the validity of our differential-linear analysis.
This simple differential-linear attack is more efficient than generic attacks against the Even-Mansour construction of Chaskey. It follows the usage limit of Chaskey, and reaches more rounds than the analysis of the designers. Moreover, it can be improved significantly using the results of Sects. 2 and 3.
Analysis of Linear Approximations with Partitioning. To make the description easier, we remove the linear operations at the end, so that the linear trail becomes:
We select control bits to improve the probability of the addition between \(v_1\) and \(v_2\) on active bits 16 and 24. Following the analysis of Sect. 2.2, we need \(v_1[14] \oplus v_2[14]\) and \(v_1[15] \oplus v_2[15]\) as control bits for active bit 16. To identify more complex control bits, we consider \(v_1[14,15,22,23]\), \(v_2[14,15,22,23]\) as potential control bits, as well as \(v_3[23]\) because it can affect the addition on the previous half-round. Then, we evaluate the bias experimentally (using the round function as a black box) in order to remove redundant bits. This leads to the following 8 control bits:
This defines \(2^{8}\) partitions of the ciphertexts, after guessing 8 key bits. We evaluated the bias in each partition, and we found that the combined capacity is \(c^2 = 2^{6.84}\). This means that we have the following complexity ratio
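The effect of such control bits can be checked empirically on a single modular addition (a minimal sketch, not the full round function): conditioning on \(a[15] \oplus b[15] = 0\) makes bit 16 of \(a \boxplus b\) exactly linear, because the carry into bit 16 then equals \(a[15]\).

```python
import random

MASK = 0xFFFFFFFF
def bit(x, i):
    return (x >> i) & 1

def imbalance(bits):
    """|2 * Pr[bit = 0] - 1| over a list of 0/1 samples."""
    return abs(2 * bits.count(0) / len(bits) - 1)

random.seed(2016)
data = [(random.getrandbits(32), random.getrandbits(32)) for _ in range(1 << 12)]

# Without partitioning: approximate (a+b)[16] by a[16] ^ b[16] ^ a[15];
# the carry into bit 16 only sometimes equals a[15].
all_samples = [bit((a + b) & MASK, 16) ^ bit(a, 16) ^ bit(b, 16) ^ bit(a, 15)
               for a, b in data]

# Control bit a[15] ^ b[15] = 0: the carry into bit 16 is maj(a15, b15, c15),
# which equals a15 when a15 = b15, so the approximation becomes exact.
part = [bit((a + b) & MASK, 16) ^ bit(a, 16) ^ bit(b, 16) ^ bit(a, 15)
        for a, b in data if bit(a, 15) == bit(b, 15)]
```

In the remaining half of the data (where \(a[15] \ne b[15]\)), the carry into bit 16 equals the carry into bit 15, so the same argument recurses with the control bit at position 14, exactly as in Sect. 2.2.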
Analysis of Differential with Partitioning. There are four active bits in the first additions:
- Bit 23 in \(v_2 \boxplus v_3\): \((2^{23}, 2^{23}) \mathop {\longrightarrow }\limits ^{\boxplus } 0\)
- Bit 30 in \(v_2 \boxplus v_3\): \((2^{30}, 2^{30}) \mathop {\longrightarrow }\limits ^{\boxplus } 2^{31}\)
- Bit 6 in \(v_2 \boxplus v_3\): \((2^{6}, 0) \mathop {\longrightarrow }\limits ^{\boxplus } 2^{6}\)
- Bit 26 in \(v_0 \boxplus v_1\): \((2^{26}, 2^{26}) \mathop {\longrightarrow }\limits ^{\boxplus } 0\)
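These transitions are easy to check by sampling. A minimal sketch for the first one: flipping bit 23 in both addends leaves the sum unchanged with probability 1/2, and conditioning on the control bit \(v_2[23] \oplus v_3[23] = 1\) makes the transition deterministic, since one addend gains \(2^{23}\) while the other loses it.

```python
import random

MASK = 0xFFFFFFFF
random.seed(23)
data = [(random.getrandbits(32), random.getrandbits(32)) for _ in range(1 << 12)]

def out_diff(a, b, da, db):
    """XOR difference of a + b after injecting input differences da, db."""
    return ((a + b) & MASK) ^ (((a ^ da) + (b ^ db)) & MASK)

# (2^23, 2^23) -> 0: flipping bit 23 adds +2^23 to one addend and -2^23 to
# the other exactly when a[23] != b[23], so the sum is then unchanged.
hits = [out_diff(a, b, 1 << 23, 1 << 23) == 0 for a, b in data]
good = [out_diff(a, b, 1 << 23, 1 << 23) == 0 for a, b in data
        if (a >> 23) & 1 != (b >> 23) & 1]
```

Over all the data the transition holds about half the time, while in the subset selected by the control bit it holds with probability 1; the other three transitions can be checked the same way.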
Following the analysis of Sect. 3, we can use additional input differences for each of them. However, we reach a better trade-off by selecting only three of them. More precisely, we consider \(2^3\) input differences, defined by \(\delta _i\) and the following extra active bits:
As explained in Sect. 2, we build structures of \(2^{4}\) plaintexts, where each structure provides \(2^{3}\) pairs for every input difference, i.e. \(2^{6}\) pairs in total.
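The pair-counting argument can be sketched on toy 8-bit words; the difference values below are placeholders (the actual bit positions are those listed above), but the counting is the same: a structure built from \(\delta _i\) and 3 extra active bits has \(2^4\) plaintexts and yields \(2^3\) pairs for each of the \(2^3\) useful differences.

```python
from itertools import combinations
from collections import defaultdict

# Placeholder differences on 8-bit toy words: delta_i plus 3 extra active bits.
DELTA_I = 0x01
EXTRA   = [0x02, 0x04, 0x08]

def structure(base):
    """Base plaintext XORed with every subset of {DELTA_I} | EXTRA: 2^4 values."""
    pts = [base]
    for d in [DELTA_I] + EXTRA:
        pts += [p ^ d for p in pts]
    return pts

pts = structure(0x50)

# Group the candidate pairs by their difference: each of the 2^3 differences
# DELTA_I ^ (subset of EXTRA) is hit by exactly 2^3 pairs.
pairs = defaultdict(list)
for p, q in combinations(pts, 2):
    pairs[p ^ q].append((p, q))

useful = {d: v for d, v in pairs.items() if d & DELTA_I}
# 8 differences x 8 pairs = 2^6 usable pairs per structure, as claimed.
```

The same construction with 9 extra active bits gives the \(2^{10}\)-plaintext structures with \(2^{18}\) pairs used in the 7-round attack.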
Following the analysis of Sect. 3, we use the following control bits to improve the probability of the differential:
This divides each set of pairs into \(2^{5}\) subsets, after guessing 5 key bits. In total we have \(2^{8}\) subsets to analyze, according to the control bits and the multiple differentials. We found that, for 18 of those subsets, there is a probability \(2^{-2}\) to reach \(\delta _o\) (the probability is 0 for the remaining subsets). This leads to a complexity ratio:
This corresponds to the analysis of Sect. 3: we have a ratio of 2/3 for bits \(v_2[23]\) and \(v_0[27]\) (Sect. 3.1), and a ratio of 1/2 for \(v_2[31]\) in the simple linear case. In the differential-linear case, we have respectively ratios of 1/3 and 1/4.
Finally, the improved attack requires a data complexity in the order of:
We can estimate the data complexity more accurately using the analysis of Biryukov et al. [9]. First, we give an alternate description of the attack, similar to the multiple linear attack framework. Starting from D chosen plaintexts, we build \(2^2 D\) pairs using structures, and we keep \(N = 18 \cdot 2^{-8} \cdot 2^{-14} \cdot 2^2 D\) samples per approximation after partitioning the differential and linear parts. The imbalance of the distinguisher is \(2^{-2} \cdot 2^{-6.05} \cdot 2^{6.84} = 2^{-1.21}\). Following [9, Corollary 1], the gain of the attack with \(D=2^{24}\) is estimated as 6.6 bits, i.e. the average key rank should be about 42 (for the 13-bit subkey).
Using the FFT method of Sect. 5.2, we perform the attack with \(2^{24}\) counters \(\hat{\mathcal {I}}[s]\). Each structure of \(2^{4}\) plaintexts provides \(2^6\) pairs, so that we need \(2^2 D\) operations to update the counters. Finally, the FFT computation requires \(24 \times 2^{24} \approx 2^{28.6}\) operations.
We have implemented this analysis, and it runs in about 10 s on a single core of a desktop PCFootnote 6. Experimentally, we have a gain of about 6 bits (average key rank of 64 over 128 experiments); this validates our theoretical analysis. We also notice that some key bits don’t affect the distinguisher and cannot be recovered. On the other hand, the gain of the attack can be improved using more data, and further trade-offs are possible using larger or smaller partitions.
5.5 Attack Against 7-Round Chaskey
The best distinguisher we identified for an attack against 7-round Chaskey uses 1.5 rounds in \(E_\top \), 4 rounds in \(E_m\), and 1.5 rounds in \(E_\bot \). The optimal differences and masks are:
- Differential for \(E_\top \) with probability \(p_{\top } = 2^{-17}\):
\(v_0[8,18,21,30], v_1[8,13,21,26,30], v_2[3,21,26], v_3[21,26,27] \mathop {\longrightarrow }\limits ^{E_\top } v_0[31]\)
- Biased bit for \(E_m\) with imbalance \(b = 2^{-6.1}\): \(v_0[31] \mathop {\longrightarrow }\limits ^{E_m} v_2[20]\)
- Linear approximations for \(E_\bot \) with imbalance \(\varepsilon _{\bot } = 2^{-7.6}\):
\(v_2[20] \mathop {\longrightarrow }\limits ^{E_\bot } v_0[0,15,16,25,29], v_1[7,11,19,26], v_2[2,10,19,20,23,28], v_3[0,25,29]\)
This gives a differential-linear distinguisher with expected complexity in the order of \(2/(p_\top \, b \, \varepsilon _\bot ^2)^2 = 2/(2^{-17} \cdot 2^{-6.1} \cdot 2^{-15.2})^2 \approx 2^{77.6}\). This attack is more expensive than generic attacks against the Even-Mansour cipher, but we now improve it using the results of Sects. 2 and 3.
Analysis of Linear Approximations with Partitioning. We use an automatic search to identify good control bits, starting from the bits suggested by the result of Sect. 2. We identified the following control bits:
Note that the control bits identified in Sect. 2 appear as linear combinations of those control bits.
This defines \(2^{19}\) partitions of the ciphertexts, after guessing 19 key bits. We evaluated the bias in each partition, and we found that the combined capacity is \(c^2 = 2^{14.38}\). This means that we gain the following factor:
This example clearly shows the power of the partitioning technique: using a few key guesses, we essentially avoid the cost of the last layer of additions.
Analysis of Differential with Partitioning. We consider \(2^{9}\) input differences, defined by \(\delta _i\) and the following extra active bits:
As explained in Sect. 2, we build structures of \(2^{10}\) plaintexts, where each structure provides \(2^{9}\) pairs for every input difference, i.e. \(2^{18}\) pairs in total.
Again, we use an automatic search to identify good control bits, starting from the bits suggested in Sect. 3. We use the following control bits to improve the probability of the differential:
This divides each set of pairs into \(2^{14}\) subsets, after guessing 14 key bits. In total we have \(2^{23}\) subsets to analyze, according to the control bits and the multiple differentials. We found that, for 17496 of those subsets, there is a probability \(2^{-11}\) to reach \(\delta _o\) (the probability is 0 for the remaining subsets). This leads to a ratio:
Finally, the improved attack requires a data complexity of:
Again, we can estimate the data complexity more accurately using [9]. In this attack, starting from \(N_0\) chosen plaintexts, we build \(2^8 N_0\) pairs using structures, and we keep \(N = 17496 \cdot 2^{-23} \cdot 2^{-38} \cdot 2^8 N_0\) samples per approximation after partitioning the differential and linear parts. The imbalance of the distinguisher is \(2^{-11} \cdot 2^{-6.1} \cdot 2^{14.38} = 2^{-2.72}\). Following [9, Corollary 1], the gain of the attack with \(N_0=2^{48}\) is estimated as 6.3 bits, i.e. the average rank of the 33-bit subkey should be about \(2^{25.7}\). Following the experimental results of Sect. 5.4, we expect this estimation to be close to the real gain (the gain can also be increased if more than \(2^{48}\) data is available).
Using the FFT method of Sect. 5.2, we perform the attack with \(2^{61}\) counters \(\hat{\mathcal {I}}[s]\). Each structure of \(2^{10}\) plaintexts provides \(2^{18}\) pairs, so that we need \(2^8 N_0\) operations to update the counters. Finally, the FFT computation requires \(61 \times 2^{61} \approx 2^{67}\) operations.
This attack recovers only a few bits of a 33-bit subkey, but an attacker can run the attack again with a different differential-linear distinguisher to recover other key bits. For instance, a rotated version of the distinguisher will have a complexity close to the optimal one, and the already known key bits can help reduce the complexity.
Conclusion
In this paper, we have described a partitioning technique inspired by Biham and Carmeli’s work. While Biham and Carmeli consider only two partitions and a linear approximation for a single subset, we use a large number of partitions, with linear approximations for every subset, in order to take advantage of all the data. We also introduce a technique combining multiple differentials, structures, and partitioning for differential cryptanalysis. This allows a significant reduction of the data complexity of attacks against ARX ciphers, and is particularly efficient with boomerang and differential-linear attacks.
Our main application is a differential-linear attack against Chaskey, which reaches 7 rounds out of 8. In this application, the partitioning technique allows us to go through the first and last additions almost for free. This is very similar to the use of partial key guess and partial decryption for SBox-based ciphers. This is an important result because standard bodies (ISO/IEC JTC1 SC27 and ITU-T SG17) are currently considering Chaskey for standardization, but little external cryptanalysis has been published so far. After the first publication of these results, the designers of Chaskey proposed to standardize a new version with 12 rounds [30].
Notes
1. The imbalance is also called correlation.
2. A notable counterexample is FEAL, which uses only 8-bit additions.
3. This setting is quite general, because any operation before a key addition can be removed, as well as any linear operation after the key addition. Ciphers where the key addition is made with a modular addition do not fit this model, but the technique can easily be adapted.
4. Note that in the application to E, we can modify the difference in \(x^1\) but not in \(y^1\).
5. We also consider pairs of adjacent bits, following the analysis of [14].
6. Haswell microarchitecture running at 3.4 GHz.
7. We use Biham and Carmeli’s notation \(f_{i,j}\) for bit j of input word i.
References
Aumasson, J.-P., Fischer, S., Khazaei, S., Meier, W., Rechberger, C.: New features of latin dances: analysis of Salsa, ChaCha, and Rumba. In: Nyberg, K. (ed.) FSE 2008. LNCS, vol. 5086, pp. 470–488. Springer, Heidelberg (2008)
Biham, E.: On Matsui’s linear cryptanalysis. In: De Santis, A. (ed.) EUROCRYPT 1994. LNCS, vol. 950, pp. 341–355. Springer, Heidelberg (1995)
Biham, E., Carmeli, Y.: An improvement of linear cryptanalysis with addition operations with applications to FEAL-8X. In: Joux, A., Youssef, A. (eds.) SAC 2014. LNCS, vol. 8781, pp. 59–76. Springer, Heidelberg (2014)
Biham, E., Chen, R., Joux, A.: Cryptanalysis of SHA-0 and reduced SHA-1. J. Cryptology 28(1), 110–160 (2015)
Biham, E., Dunkelman, O., Keller, N.: Enhancing differential-linear cryptanalysis. In: Zheng, Y. (ed.) ASIACRYPT 2002. LNCS, vol. 2501, pp. 254–266. Springer, Heidelberg (2002)
Biham, E., Shamir, A.: Differential cryptanalysis of DES-like cryptosystems. In: Menezes, A., Vanstone, S.A. (eds.) CRYPTO 1990. LNCS, vol. 537, pp. 2–21. Springer, Heidelberg (1991)
Biham, E., Shamir, A.: Differential cryptanalysis of DES-like cryptosystems. J. Cryptology 4(1), 3–72 (1991)
Biham, E., Shamir, A.: Differential cryptanalysis of feal and N-Hash. In: Davies, D.W. (ed.) EUROCRYPT 1991. LNCS, vol. 547, pp. 1–16. Springer, Heidelberg (1991)
Biryukov, A., De Cannière, C., Quisquater, M.: On multiple linear approximations. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 1–22. Springer, Heidelberg (2004)
Biryukov, A., Nikolić, I., Roy, A.: Boomerang attacks on BLAKE-32. In: Joux, A. (ed.) FSE 2011. LNCS, vol. 6733, pp. 218–237. Springer, Heidelberg (2011)
Blondeau, C., Gérard, B., Tillich, J.P.: Accurate estimates of the data complexity and success probability for various cryptanalyses. Des. Codes Crypt. 59(1–3), 3–34 (2011)
Blondeau, C., Leander, G., Nyberg, K.: Differential-linear cryptanalysis revisited. In: Cid, C., Rechberger, C. (eds.) FSE 2014. LNCS, vol. 8540, pp. 411–430. Springer, Heidelberg (2015)
Bogdanov, A.A., Knudsen, L.R., Leander, G., Paar, C., Poschmann, A., Robshaw, M., Seurin, Y., Vikkelsoe, C.: PRESENT: an ultra-lightweight block cipher. In: Paillier, P., Verbauwhede, I. (eds.) CHES 2007. LNCS, vol. 4727, pp. 450–466. Springer, Heidelberg (2007)
Cho, J.Y., Pieprzyk, J.: Crossword puzzle attack on NLS. In: Biham, E., Youssef, A.M. (eds.) SAC 2006. LNCS, vol. 4356, pp. 249–265. Springer, Heidelberg (2007)
Collard, B., Standaert, F.-X., Quisquater, J.-J.: Improving the time complexity of Matsui’s linear cryptanalysis. In: Nam, K.-H., Rhee, G. (eds.) ICISC 2007. LNCS, vol. 4817, pp. 77–88. Springer, Heidelberg (2007)
Gilbert, H., Chassé, G.: A statistical attack of the FEAL-8 cryptosystem. In: Menezes, A., Vanstone, S.A. (eds.) CRYPTO 1990. LNCS, vol. 537, pp. 22–33. Springer, Heidelberg (1991)
Hermelin, M., Nyberg, K.: Dependent linear approximations: the algorithm of Biryukov and others revisited. In: Pieprzyk, J. (ed.) CT-RSA 2010. LNCS, vol. 5985, pp. 318–333. Springer, Heidelberg (2010)
Huang, T., Tjuawinata, I., Wu, H.: Differential-linear cryptanalysis of ICEPOLE. In: Leander, G. (ed.) FSE 2015. LNCS, vol. 9054, pp. 243–263. Springer, Heidelberg (2015)
Lamberger, M., Mendel, F.: Higher-order differential attack on reduced SHA-256. In: IACR Cryptology ePrint Archive, report 2011/37 (2011)
Langford, S.K., Hellman, M.E.: Differential-linear cryptanalysis. In: Desmedt, Y.G. (ed.) CRYPTO 1994. LNCS, vol. 839, pp. 17–25. Springer, Heidelberg (1994)
Leurent, G.: Analysis of differential attacks in ARX constructions. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 226–243. Springer, Heidelberg (2012)
Leurent, G.: Construction of differential characteristics in ARX designs application to Skein. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 241–258. Springer, Heidelberg (2013)
Leurent, G., Roy, A.: Boomerang attacks on hash function using auxiliary differentials. In: Dunkelman, O. (ed.) CT-RSA 2012. LNCS, vol. 7178, pp. 215–230. Springer, Heidelberg (2012)
Lipmaa, H., Moriai, S.: Efficient algorithms for computing differential properties of addition. In: Matsui, M. (ed.) FSE 2001. LNCS, vol. 2355, pp. 336–350. Springer, Heidelberg (2002)
Matsui, M.: Linear cryptanalysis method for DES cipher. In: Helleseth, T. (ed.) EUROCRYPT 1993. LNCS, vol. 765, pp. 386–397. Springer, Heidelberg (1994)
Matsui, M., Yamagishi, A.: A new method for known plaintext attack of FEAL cipher. In: Rueppel, R.A. (ed.) EUROCRYPT 1992. LNCS, vol. 658, pp. 81–91. Springer, Heidelberg (1993)
Mavromati, C.: Key-recovery attacks against the MAC algorithm Chaskey. In: SAC 2015 (2015)
Mendel, F., Nad, T.: Boomerang distinguisher for the SIMD-512 compression function. In: Bernstein, D.J., Chatterjee, S. (eds.) INDOCRYPT 2011. LNCS, vol. 7107, pp. 255–269. Springer, Heidelberg (2011)
Miyano, H.: Addend dependency of differential/linear probability of addition. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 81(1), 106–109 (1998)
Mouha, N.: Chaskey: a MAC algorithm for microcontrollers - status update and proposal of Chaskey-12 -. In: IACR Cryptology ePrint Archive, report 2015/1182 (2015)
Mouha, N., Mennink, B., Van Herrewege, A., Watanabe, D., Preneel, B., Verbauwhede, I.: Chaskey: an efficient MAC algorithm for 32-bit microcontrollers. In: Joux, A., Youssef, A. (eds.) SAC 2014. LNCS, vol. 8781, pp. 306–323. Springer, Heidelberg (2014)
Mouha, N., Velichkov, V., De Cannière, C., Preneel, B.: The differential analysis of S-Functions. In: Biryukov, A., Gong, G., Stinson, D.R. (eds.) SAC 2010. LNCS, vol. 6544, pp. 36–56. Springer, Heidelberg (2011)
Nyberg, K., Wallén, J.: Improved linear distinguishers for SNOW 2.0. In: Robshaw, M. (ed.) FSE 2006. LNCS, vol. 4047, pp. 144–162. Springer, Heidelberg (2006)
Sakikoyama, S., Todo, Y., Aoki, K., Morii, M.: How much can complexity of linear cryptanalysis be reduced? In: Lee, J., Kim, J. (eds.) ICISC 2014. LNCS, vol. 8949, pp. 117–131. Springer, Heidelberg (2014)
Sasaki, Y.: Boomerang distinguishers on MD4-Family: first practical results on full 5-Pass HAVAL. In: Miri, A., Vaudenay, S. (eds.) SAC 2011. LNCS, vol. 7118, pp. 1–18. Springer, Heidelberg (2012)
Shimizu, A., Miyaguchi, S.: Fast data encipherment algorithm FEAL. In: Price, W.L., Chaum, D. (eds.) EUROCRYPT 1987. LNCS, vol. 304, pp. 267–278. Springer, Heidelberg (1988)
Tardy-Corfdir, A., Gilbert, H.: A known plaintext attack of FEAL-4 and FEAL-6. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 172–182. Springer, Heidelberg (1992)
Wagner, D.: The boomerang attack. In: Knudsen, L.R. (ed.) FSE 1999. LNCS, vol. 1636, pp. 156–170. Springer, Heidelberg (1999)
Wallén, J.: Linear approximations of addition modulo 2\(^{n}\). In: Johansson, T. (ed.) FSE 2003. LNCS, vol. 2887, pp. 261–273. Springer, Heidelberg (2003)
Wang, X., Yin, Y.L., Yu, H.: Finding collisions in the full SHA-1. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 17–36. Springer, Heidelberg (2005)
Wang, X., Yu, H.: How to break MD5 and other hash functions. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 19–35. Springer, Heidelberg (2005)
Yu, H., Chen, J., Wang, X.: The boomerang attacks on the round-reduced Skein-512. In: Knudsen, L.R., Wu, H. (eds.) SAC 2012. LNCS, vol. 7707, pp. 287–303. Springer, Heidelberg (2013)
Acknowledgement
We would like to thank Nicky Mouha for enriching discussions about those results, and the anonymous reviewers for their suggestions to improve the presentation of the paper.
The author is partially supported by the French Agence Nationale de la Recherche through the BRUTUS project under Contract ANR-14-CE28-0015.
A Appendix: Application to FEAL-8X
We now present an application of our techniques to reduce the data complexity of differential and linear attacks.
FEAL is an early block cipher proposed by Shimizu and Miyaguchi in 1987 [36]. FEAL uses only addition, rotation and xor operations, which makes it much more efficient than DES in software. FEAL has inspired the development of many cryptanalytic techniques, in particular linear cryptanalysis.
At the rump session of CRYPTO 2012, Matsui announced a challenge for low data complexity attacks on FEAL-8X using only known plaintexts. At the time, the best practical attack required \(2^{24}\) known plaintexts [2] (Matsui and Yamagishi had impractical attacks with as few as \(2^{14}\) known plaintexts [26]), but Biham and Carmeli won the challenge with a new linear attack using \(2^{15}\) known plaintexts, and introduced the partitioning technique to reduce the data complexity to \(2^{14}\) [3]. Later, Sakikoyama et al. improved this result using multiple linear cryptanalysis, with a data complexity of only \(2^{12}\) [34].
We now explain how to apply the generalized partitioning technique to attack FEAL-8X. Our attack follows the attack of Biham and Carmeli [3], and uses the generalized partitioning technique to reduce the data complexity further. The attack by Biham and Carmeli requires \(2^{14}\) data and about \(2^{45}\) time, while our attack needs only \(2^{12}\) data and \(2^{43}\) time. While the attack of Sakikoyama et al. is more efficient with the same data complexity, this gives a simple example of the application of the generalized partitioning technique.
The attacks are based on a 6-round linear approximation with imbalance \(2^{-5}\), using partial encryption for the first round (with a 15-bit key guess) and partial decryption for the last round (with a 22-bit key guess). This makes it possible to compute enough bits of the state after the first round and before the last round, respectively, to evaluate the linear approximation. For more details of the attack, we refer the reader to the description of Biham and Carmeli [3].
In order to improve the attack, we focus on the round function of the second-to-last round. The corresponding linear approximation is \(x[\mathtt {10115554}] \rightarrow y[\mathtt {04031004}]\) with imbalance of approximately \(2^{-3}\).
We partition the data according to the following 4 bitsFootnote 7 (note that all those bits can be computed in the input of round 6 with the 22-bit key guess of DK7):
The probability of the linear approximation in each subset is as follows (indexed by the value of \(b_3, b_2, b_1, b_0\)):
This gives a total capacity \(c^2 = \sum _i (2 \cdot p_i - 1)^2 = 2.49\), using subsets of 1/16 of the data. For reference, a linear attack without partitioning has a capacity of \((2^{-3})^2\); the complexity ratio can therefore be computed as:
This can be compared to Biham and Carmeli’s partitioning: they use a single linear approximation with capacity 0.1 for 1/2 of the data, which gives a ratio of only:
With a naive implementation of this attack, we have to repeat the analysis 16 times, once for each guess of the 4 key bits. Since the data is reduced by a factor 4, the total time complexity increases by a factor 4 compared to the attack of Biham and Carmeli. This results in an attack with \(2^{12}\) data and \(2^{47}\) time.
However, the time complexity can also be reduced using counters, because the 4 extra key bits only affect the choice of the partitions. This leads to an attack with \(2^{12}\) data and \(2^{43}\) time.
© 2016 International Association for Cryptologic Research
Leurent, G. (2016). Improved Differential-Linear Cryptanalysis of 7-Round Chaskey with Partitioning. In: Fischlin, M., Coron, JS. (eds) Advances in Cryptology – EUROCRYPT 2016. EUROCRYPT 2016. Lecture Notes in Computer Science(), vol 9665. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49890-3_14