Abstract
When discussing how to improve the side-channel resilience of a cipher, an obvious direction is to use various masking or hiding countermeasures. However, such schemes come with a cost, e.g., an increase in area and/or a reduction of speed. When considering lightweight cryptography and various constrained environments, the situation becomes even more difficult due to numerous implementation restrictions. Still, some options remain, such as using S-boxes that are easier to mask or, on a more fundamental level, using S-boxes that possess higher inherent side-channel resilience. In this paper we investigate which properties an S-box should possess in order to be more resilient against side-channel attacks. Moreover, we find certain connections between those properties and cryptographic properties like nonlinearity and differential uniformity. Finally, to strengthen our theoretical findings, we give an extensive experimental validation of our results.
1 Introduction
When designing a block cipher, one needs to consider many possible cryptanalytic attacks and often strike the best trade-off among security, speed, ease of implementation, etc. Besides the two main directions in the form of linear [1] and differential [2] cryptanalysis, today the most prominent attacks come from the group of implementation attacks, where side-channel attacks (SCAs) play an important role. To protect against SCA, one common option is to use countermeasures such as hiding or masking schemes [3], where one well-known example is the threshold implementation [4]. However, such countermeasures come with a cost when implementing ciphers. In more resource-constrained environments, one often does not have enough resources to implement standard ciphers like AES, and therefore one needs to use lightweight cryptography. However, even lightweight ciphers can be too resource-demanding, especially when the cost of countermeasures is added. Therefore, although countermeasures represent the way to go when considering SCA protection, there is no countermeasure (at least at the current state of research) that offers sufficient protection against any attack while being cheap enough to be implemented in any environment.
In this paper, we consider how to improve the SCA resilience of ciphers without imposing any extra cost. This is possible by considering the inherent resilience of ciphers. We particularly concentrate on block ciphers that utilize S-boxes and therefore study the resilience of S-boxes against side-channel attacks.
In the case of SCA concentrating on only 1 bit of the S-box output, a theoretical connection between the side-channel resistance and the differential uniformity of S-boxes has been found in [5]. In particular, the authors showed that the higher the side-channel resistance, the smaller the differential resistance. However, as we show, this result does not straightforwardly extend to more complex leakage models such as the Hamming weight of the S-box output, which is the most prominent leakage model in side-channel analysis when considering Correlation Power Analysis (CPA) [6]. We therefore investigate S-box parameters which may influence the side-channel resistance while still allowing good or optimal cryptographic properties. The (almost) preservation of the Hamming weight and a small Hamming distance between x and F(x) are two properties each of which could, from an intuitive perspective, strengthen the resistance to SCA. Our theoretical and empirical findings show that, notably in the case of exactly preserving the Hamming weight, the SCA resilience is improved. Moreover, we relax this assumption and investigate S-boxes that almost preserve the Hamming weight. For our study, we employ the confusion coefficient [7] as a metric for side-channel resistance. Besides the signal-to-noise ratio and the number of observed measurements, the confusion coefficient is the factor influencing the success rate of CPA; moreover, it is the only factor that depends on the underlying algorithm and thus on the S-box. More precisely, our main contributions are:

1.
We calculate (resp. bound from above) the confusion coefficient value of a function F in the two scenarios where:

(a)
x and F(x) have the same Hamming weight.

(b)
on average, F(x) has a Hamming weight near that of x.


2.
We observe that S-boxes with no difference between the Hamming weights of their input and output have nonlinearity equal to 0; more generally, the same happens when the Hamming weight of x and the Hamming weight of F(x) always have the same parity. Such functions are of course to be avoided from a cryptanalytic perspective. Furthermore, we show, more generally as well, that for every S-box F, denoting by \(d_{w_H}\) the number of inputs x for which the Hamming weights of x and F(x) have different parities, F has nonlinearity at most \(d_{w_H}\). This implies that if the number of inputs x such that \(w_H(x)\ne w_H(F(x))\) is at most \(d_{w_H}\), the nonlinearity is at most \(d_{w_H}\). We show in Example 2 that this does not, however, necessarily make the S-box weak. We emphasize that although these observations could be regarded as trivial, they have practical consequences.

3.
We show the connection between the number of fixed points in a function F and its nonlinearity.

4.
We show that S-boxes such that F(x) lies at a small Hamming distance from x (or, more generally, from an affine function of x) cannot have high nonlinearity, although the obtainable values are not too bad for \(n = 4, 8\).

5.
In the practical part, we confirm our theoretical findings about the connection between (almost) preserving the Hamming weight and the confusion coefficient by investigating several S-boxes.

6.
We investigate the relationship between the confusion coefficient of different key guesses and evaluate a number of S-boxes used in today’s ciphers to show that their SCA resilience can significantly differ.
2 Preliminaries
2.1 Generalities on S-Boxes
Let n, m be positive integers, i.e., \(n, m \in \mathbb {N}^+\). We denote by \(\mathbb {F}_{2}\) the Galois field with two elements, by \(\mathbb {F}_{2}^{n}\) the n-dimensional vector space over \(\mathbb {F}_{2}\) (i.e., the set of all n-tuples of elements of \(\mathbb {F}_{2}\)), and by \(\mathbb {F}_{2^n}\) the finite field with \(2^n\) elements. Further, for any set S, we denote \(S \backslash \{0\}\) by \(S^{*}\). The usual inner product of a and b in \(\mathbb {F}_{2}^n\) equals \(a\cdot b = \bigoplus _{i=1}^{n} a_{i}b_{i}\).
The Hamming weight \(w_H(a)\) of a vector a, where \(a \in \mathbb {F}_{2}^{n}\), is the number of nonzero positions in the vector. An (n, m)-function is any mapping F from \(\mathbb {F}_{2}^{n}\) to \(\mathbb {F}_{2}^{m}\). An (n, m)-function F can be defined as a vector \(F = (f_1,\cdots ,f_m)\), where the Boolean functions \(f_i: \mathbb {F}_2^n \rightarrow \mathbb {F}_2\) for \(i \in \{1, \cdots , m\}\) are called the coordinate functions of F.
The component functions of an (n, m)-function F are all the linear combinations of the coordinate functions with non-all-zero coefficients. Since for every n there exists a field \(\mathbb {F}_{2^n}\) of order \(2^n\), we can endow the vector space \(\mathbb {F}_2^n\) with the structure of that field when convenient. If the vector space \(\mathbb {F}_2^n\) is identified with the field \(\mathbb {F}_{2^n}\), then we can take \(a\cdot b = tr (ab)\), where \(tr(x) = x + x^2 + \ldots +x^{2^{n-1}}\) is the trace function from \(\mathbb {F}_{2^n}\) to \(\mathbb {F}_{2}\). The addition of elements of the finite field \(\mathbb {F}_{2^n}\) is denoted by “+”, as usual in mathematics. Since we often identify \(\mathbb {F}_{2}^n\) with \(\mathbb {F}_{2^n}\), when there is no ambiguity we denote the addition of vectors of \(\mathbb {F}_{2}^n\) for \(n>1\) by “+” as well.
An (n, m)-function F is balanced if it takes every value of \(\mathbb {F}_{2}^{m}\) the same number \(2^{n-m}\) of times.
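As a quick illustration of these definitions, the sketch below (Python; the helper names are ours, not from the paper) computes Hamming weights and checks balancedness by counting how often each output value occurs:

```python
def hamming_weight(x: int) -> int:
    """w_H(x): number of nonzero coordinates of the vector x."""
    return bin(x).count("1")

def is_balanced(sbox: list, n: int, m: int) -> bool:
    """An (n, m)-function is balanced iff it takes every value of
    F_2^m exactly 2^(n - m) times."""
    counts = [0] * (2 ** m)
    for x in range(2 ** n):
        counts[sbox[x]] += 1
    return all(c == 2 ** (n - m) for c in counts)

# Any permutation of F_2^n, e.g. the identity, is trivially balanced:
# each of the 2^n values occurs exactly once.
identity = list(range(16))
print(is_balanced(identity, 4, 4))   # a permutation is balanced
print(is_balanced([0] * 16, 4, 4))   # a constant map is not
```

The same brute-force style carries over to all spectral quantities below, since for \(n=4\) there are only 16 inputs to enumerate.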
The Walsh–Hadamard transform of an (n, m)-function F is (see, e.g., [8]):
\(W_F(a,v)=\sum _{x\in \mathbb {F}_{2}^{n}}(-1)^{v\cdot F(x)\, \oplus \, a\cdot x},\ \ a\in \mathbb {F}_2^n,\ v\in \mathbb {F}_2^m.\quad (1)\)
The nonlinearity nl of an (n, m)-function F equals the minimum nonlinearity of all its component functions \(v\cdot F\), where \(v \in \mathbb {F}_{2}^{m*}\) [8, 9]:
\(nl=2^{n-1}-\frac{1}{2}\max _{a\in \mathbb {F}_2^n,\, v\in \mathbb {F}_2^{m*}}|W_F(a,v)|.\quad (2)\)
The nonlinearity of any (n, m)-function F is bounded above by the so-called covering radius bound:
\(nl\le 2^{n-1}-2^{\frac{n}{2}-1}.\quad (3)\)
In the case \(m=n\), a better bound exists. The nonlinearity of any (n, n)-function F is bounded above by the so-called Sidelnikov–Chabaud–Vaudenay bound [10]:
\(nl\le 2^{n-1}-2^{\frac{n-1}{2}}.\quad (4)\)
Bound (4) is an equality if and only if F is an Almost Bent (AB) function, by definition of AB functions [8].
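For small n, the Walsh–Hadamard spectrum and the nonlinearity can be computed by brute force. The following Python sketch (our own helper names; the PRESENT cipher S-box serves only as a well-known (4, 4) example) also illustrates that a (4, 4) permutation can reach nl = 4:

```python
def dot(u: int, w: int) -> int:
    """Inner product on F_2^n: parity of the bitwise AND."""
    return bin(u & w).count("1") & 1

def walsh(sbox, n, m, a, v):
    """W_F(a, v) = sum_x (-1)^(v.F(x) XOR a.x)."""
    return sum((-1) ** (dot(v, sbox[x]) ^ dot(a, x)) for x in range(2 ** n))

def nonlinearity(sbox, n, m):
    """nl = 2^(n-1) - (1/2) * max |W_F(a, v)| over all a and all v != 0."""
    w_max = max(abs(walsh(sbox, n, m, a, v))
                for v in range(1, 2 ** m) for a in range(2 ** n))
    return 2 ** (n - 1) - w_max // 2

PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
print(nonlinearity(PRESENT, 4, 4))          # 4, the best value for a (4,4) permutation
print(nonlinearity(list(range(16)), 4, 4))  # 0: the identity has a linear component
```

For n = 4 the covering radius bound gives \(nl \le 6\), but it is known that (4, 4) permutations reach at most 4, which the PRESENT S-box attains.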
Let F be a function from \(\mathbb {F}_2^n\) into \(\mathbb {F}_2^m\) with \(a \in \mathbb {F}_2^n\) and \(b \in \mathbb {F}_2^m\). We denote:
\(D_F(a,b)=\{x\in \mathbb {F}_2^n : F(x)+F(x+a)=b\}.\quad (5)\)
The entry at position (a, b) of the difference distribution table corresponds to the cardinality of \(D_F (a, b)\) and is denoted by \(\delta (a, b)\). The differential uniformity \(\delta _F\) is then defined as [11]:
\(\delta _F = \max _{a\ne 0,\, b}\ \delta (a,b).\quad (6)\)
Functions that have differential uniformity equal to 2 are called Almost Perfect Nonlinear (APN) functions. Every AB function is also APN, but the converse does not hold in general. AB functions exist only in an odd number of variables, while APN functions also exist for an even number of variables. When discussing the differential uniformity parameter for permutations, the best possible (and known) value is 2 for any odd n and also for \(n = 6\). For n even and larger than 6, this is an open question. The differential uniformity value for the inverse function \(F(x)=x^{2^n-2}\) equals 4 when n is even and 2 when n is odd.
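A matching sketch for the differential side (Python, same conventions and example S-box as above; the difference F(x) + F(x + a) over \(\mathbb{F}_2\) is the bitwise XOR):

```python
def differential_uniformity(sbox, n, m):
    """delta_F = max over a != 0 and b of #{x : F(x) XOR F(x XOR a) = b}."""
    delta = 0
    for a in range(1, 2 ** n):
        row = [0] * (2 ** m)          # one row of the difference table
        for x in range(2 ** n):
            row[sbox[x] ^ sbox[x ^ a]] += 1
        delta = max(delta, max(row))
    return delta

PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
print(differential_uniformity(PRESENT, 4, 4))  # 4: optimal for a (4,4) permutation
```

For the identity mapping, F(x) XOR F(x XOR a) = a for every x, so a single entry of each row equals \(2^n\): the worst possible differential behavior.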
2.2 Side-Channel Resistance
Side-channel attacks analyze physical leakage that is unintentionally emitted during cryptographic operations in a device (e.g., through the power consumption [12] or electromagnetic emanation [13]). This side-channel leakage is statistically dependent on the intermediate processed values involving the secret key, which makes it possible to retrieve the secret from the measured data. In particular, as the attacker wants to retrieve the secret key, he makes predictions (hypotheses) on a small enumerable chunk (e.g., a byte) of an intermediate state using all possible key values.
The side-channel resistance of implementations against Correlation Power Analysis (CPA) [6] depends on three factors: the number of measurement traces, the signal-to-noise ratio (SNR) [14], and the confusion coefficient [7]. The relationship between the three factors is linear in the case of low SNR [15]. The confusion coefficient measures the discrepancy between the hypothesis of an intermediate state using the correct (secret) key and any hypothesis made with a (wrong) key assumption. Therefore, as one compares possible intermediate processed values, the confusion coefficient depends on the underlying cryptographic algorithm and thus, if the attacker targets an S-box operation, on the side-channel resistance of that S-box. More precisely, let us assume the attacker exploits an intermediate processed value \(F(k_c + t)\) during the first round that depends on the secret key \(k_c \in \mathbb F_2^n\), an n-bit chunk of the plaintext \(t \in \mathbb {F}_2^n\), and an S-box function F. Moreover, let us make the commonly accepted assumption that the device leaks side-channel information as the Hamming weight (see, e.g., [14]) of intermediate values with additive noise N:
\(w_H\left( F(k_c + t)\right) + N.\quad (7)\)
As the secret key \(k_c\) is unknown to the attacker, he computes for each key guess \(k_g \in \mathbb {F}_2^n\) a hypothesis about the intermediate state:
\(y(k_g,t) = w_H\left( F(k_g + t)\right) \quad (8)\)
of the deterministic part of the leakage in Eq. (7). Interestingly, these hypotheses are not independent and their discrepancy is characterized by the confusion coefficient. Originally, in [7], the confusion coefficient was introduced for (n, 1) Boolean functions:
\(\kappa (k_c,k_g)=\Pr \left( F(T+k_c) \ne F(T+k_g)\right) ,\quad (9)\)
with T being the random variable whose realization is t. In [5], the authors related \(\kappa (k_c,k_g)\) in Eq. (9) to \(\delta _F\) and showed that the higher the side-channel resistance, the smaller the differential resistance (that is, the higher \(\delta _F\)). In fact, \(\kappa (k_c,k_g)\) is represented as
which can then be straightforwardly connected to \(\delta _F\) for 1-bit models.
In [16] the authors extend \(\kappa (k_c,k_g)\) to the general multi-bit case for CPA and thus to (n, m)-functions F. In this paper, we use the definition given in [15], which is a standardized version of the confusion coefficient given in [16] and thus a natural extension of Eq. (9):
\(\kappa (k_c,k_g)=\mathbb {E}\left( \left( y(k_c,T)-y(k_g,T)\right) ^{2}\right) ,\quad (11)\)
where y is assumed to be standardized (i.e., \(\mathbb E(y(\cdot ,T))=0, Var(y(\cdot ,T))=1\)). More specifically, Eq. (11) enables us to compare confusion coefficients for different functions F. By substituting \(y(\cdot )\) with Eq. (8) and denoting \(x = t \oplus k_c\) and \(a=k_c+k_g\), we can write \(\kappa (k_c,k_g)\) as
\(\kappa (k_c,k_g)=\mathbb {E}\left( \left( \frac{w_H(F(x))-w_H(F(x+a))}{\sqrt{n}}\right) ^{2}\right) .\quad (12)\)
Now, it is easy to see from Eq. (12) that we cannot straightforwardly derive a connection to \(\delta _F\) for (n, m)-functions. More precisely, for \(m = 1\) the square is just 4 times the value of \(F(t)+ F(t+a)\), and then the confusion coefficient equals \(\delta (a, 1)\). For \(m>1\) we have the square of the difference between the weights of F(t) and \(F(t+ a)\), which is not 4 times the weight of \(b=F(t)+ F(t + a)\), because the bits changing from 1 to 0 and those changing from 0 to 1 count with their signs in the sum. So there is no direct connection with \(\delta _F\) anymore.
As a decisive criterion for comparing confusion coefficients, the minimum value of \(\kappa (k_c,k_g)\) was specified in [15], as it relates to the success rate when the SNR is low. Note that the higher the minimum of the confusion coefficient, the lower the side-channel resilience. This comes from the fact that the lower the confusion coefficient, the smaller the (Euclidean) distance between the correct key \(k_c\) and a key guess \(k_g\), and thus the harder it is for an attacker to distinguish whether the leakage arises from a computation with \(k_c\) or with \(k_g\). A detailed discussion on this is given in Subsect. 5.2. On the other hand, in [17] the authors use \(var(\kappa (k_c,k_g))\) as a criterion, where smaller values indicate lower side-channel resilience. Our experiments in Sect. 5 show that both metrics coincide with the empirical resilience obtained using simulations.
In the case \(\kappa (k_c,k_g)=0\) or \(\kappa (k_c,k_g)=1\) for some \(k_g \ne k_c\), CPA is not able to distinguish between \(k_c\) and this key guess \(k_g\) and will thus fail to uniquely reveal the secret key even if the number of measurements goes to infinity. More precisely, \(\kappa (k_c,k_g)=0\) means that for a key guess \(k_g\) one observes exactly the same intermediate values (see Eq. (8)) as for the correct key \(k_c\). Conversely, for \(\kappa (k_c,k_g)=1\) one observes the complementary value (as can be seen from Eqs. (9) and (11)); however, as CPA takes the absolute value of the correlation (due to hardware-related properties [14]), an attacker again cannot distinguish between \(k_c\) and \(k_g\) in this case. In general, normalized confusion coefficient values close to 0.5 indicate that \(k_c\) and \(k_g\) can be easily distinguished (see Eq. (9)). We will show in Sect. 3 and empirically confirm in Sect. 5 that in the case of preserving \(w_H\) there exists a key guess \(k_g\) such that \(\kappa (k_c,k_g)=1\).
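The standardized confusion coefficient can be evaluated exhaustively for small n. The Python sketch below (our own helper names) follows the Hamming weight leakage model of Eq. (12) and averages over all texts t; for the identity mapping it reproduces the value \(w_H(a)/n\) derived in Sect. 3:

```python
def w_h(x: int) -> int:
    """Hamming weight of an integer."""
    return bin(x).count("1")

def kappa(sbox, n, k_c, k_g):
    """Standardized confusion coefficient, Eq. (12):
    E[((w_H(F(t XOR k_c)) - w_H(F(t XOR k_g))) / sqrt(n))^2] over uniform t."""
    s = sum((w_h(sbox[t ^ k_c]) - w_h(sbox[t ^ k_g])) ** 2
            for t in range(2 ** n))
    return s / (n * 2 ** n)

# Sanity checks: kappa vanishes for k_g = k_c, and for the identity S-box
# (which preserves w_H) one obtains kappa = w_H(a)/n with a = k_c XOR k_g.
identity = list(range(16))
print(kappa(identity, 4, 0, 0))    # 0.0
print(kappa(identity, 4, 0, 15))   # 1.0: this k_g is indistinguishable from k_c
```

Note how \(\kappa =1\) occurs exactly for the key guess at maximal distance \(w_H(a)=n\), the case flagged above as defeating CPA.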
3 S-Boxes (Almost) Preserving the Hamming Weight
3.1 Relation to the Confusion Coefficient
To obtain, for an (n, m)-function F, a connection between the confusion coefficient parameter and the Hamming weight preservation (i.e., the fact that, for every x, F(x) has the same Hamming weight as x) or, more generally, a limited average Hamming weight modification, we start with Eq. (12). For any function F, we have:
\(\kappa (k_c,k_g)=\frac{1}{n}\sum _{i=1}^{n}\mathbb {E}\left( \left( F_i(x)-F_i(x+a)\right) ^{2}\right) +\frac{1}{n}\sum _{i\ne j}\mathbb {E}\left( \left( F_i(x)-F_i(x+a)\right) \left( F_j(x)-F_j(x+a)\right) \right) .\quad (13)\)
Lemma 1 addresses the case where F preserves the Hamming weight, whereas the scenario in which F modifies the Hamming weight in a limited way is described in Lemma 2. Note that the first scenario is a particular case of the second.
Lemma 1
For an (n, n)-function such that, for every x, F(x) has the same Hamming weight as x, the confusion coefficient equals \(\frac{w_H(a)}{n}\).
Proof
If F preserves the Hamming weight, that is, if \(w_H(F(x))=w_H(x)\) for every x (or, more generally, if F is the composition of a function preserving the weight by an affine isomorphism on the right), then the confusion coefficient \(\kappa (k_c,k_g)=\mathbb {E}\left( \left( \frac{w_H(F(x))-w_H(F(x+a))}{\sqrt{n}}\right) ^2\right) ,\) where \(a=k_c+k_g\), becomes \(\mathbb {E}\left( \left( \frac{w_H(x)-w_H(x+a)}{\sqrt{n}}\right) ^2\right) \), and by applying Eq. (13) (which is valid for every F) to \(F=Id\), we obtain:
\(\kappa (k_c,k_g)=\frac{1}{n}\sum _{i=1}^{n}\mathbb {E}\left( \left( x_i-(x_i\oplus a_i)\right) ^{2}\right) +\frac{1}{n}\sum _{i\ne j}\mathbb {E}\left( \left( x_i-(x_i\oplus a_i)\right) \left( x_j-(x_j\oplus a_j)\right) \right) .\)
The expectations of all these sums for \(i\ne j\) are null (since the character sums of nonzero linear functions are null), and we obtain:
\(\kappa (k_c,k_g)=\frac{1}{n}\sum _{i=1}^{n}a_i=\frac{w_H(a)}{n}.\)
Example 1
For \(n=4\), Lemma 1 gives \(\min _{k_c \ne k_g} \kappa (k_c,k_g) = 0.25\) and for \(w_H(a)=n\) we have \(\kappa (k_c,k_g) = 1\), which means that the CPA distinguisher is not able to distinguish between these two hypotheses \(k_g\) and \(k_c\) (see Subsect. 2.2). Note that we give a more detailed discussion about the results and their ramifications in Sect. 5.
Lemma 2
For an (n, n)-function such that, on average, F(x) has a Hamming weight near that of x, more precisely such that \(\sum _x |w_H(F(x))-w_H(x)|\le d_{w_H}\), where \(d_{w_H}\) is some number, the standardized confusion coefficient is bounded above by \(\frac{w_H(a)}{n}+ \frac{4d_{w_H}}{2^n} \).
Proof
If \(\mathbb {E}\left( |w_H(F(x))-w_H(x)|\right) \le \frac{d_{w_H}}{2^n}\), then, according to Lemma 1 and its proof, the confusion coefficient \(\kappa (k_c,k_g)=\) \(\mathbb {E}\left( \left( \frac{w_H(F(x))-w_H(F(x+a))}{\sqrt{n}}\right) ^2\right) \) is such that
\(\kappa (k_c,k_g)\le \frac{w_H(a)}{n}+\frac{4d_{w_H}}{2^n}.\)
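The bound of Lemma 2 can be checked numerically. As a hypothetical toy example of our own (not one of the S-boxes of Table 1), take the identity map with the images of 3 (weight 2) and 13 (weight 3) exchanged, so that \(d_{w_H}=2\):

```python
def w_h(x):
    return bin(x).count("1")

def kappa(sbox, n, k_c, k_g):
    """E[((w_H(F(t XOR k_c)) - w_H(F(t XOR k_g))) / sqrt(n))^2] over uniform t."""
    s = sum((w_h(sbox[t ^ k_c]) - w_h(sbox[t ^ k_g])) ** 2
            for t in range(2 ** n))
    return s / (n * 2 ** n)

n = 4
sbox = list(range(16))
sbox[3], sbox[13] = 13, 3            # w_H(3) = 2, w_H(13) = 3
d_wh = sum(abs(w_h(sbox[x]) - w_h(x)) for x in range(16))
assert d_wh == 2

# Lemma 2: kappa(k_c, k_g) <= w_H(a)/n + 4*d_wh/2^n with a = k_c XOR k_g.
for k_g in range(16):
    bound = w_h(0 ^ k_g) / n + 4 * d_wh / 2 ** n
    assert kappa(sbox, n, 0, k_g) <= bound + 1e-12
```

In this toy case the bound holds with room to spare; it is loose because only two inputs modify the weight.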
3.2 Relation to Cryptographic Properties
We study the cryptographic consequences of the preservation of the Hamming weight. Again, we first cover the specific case where the input and output of an S-box always have the same Hamming weight, and then the second case where the output has on average a Hamming weight close to that of the corresponding input (see Lemma 3).
If for every x we have \(w_H(F(x))=w_H(x)\), then the sum (mod 2) of all coordinate functions of F equals the sum (mod 2) of all coordinates of x. This means that F has nonlinearity equal to zero, since one of its component functions is linear. Of course, the same happens under the much weaker hypothesis that \(w_H(F(x))\) and \(w_H(x)\) always have the same parity. Therefore, an S-box function preserving the Hamming weight is cryptographically insecure.
However, if \(\sum _x |w_H(F(x))-w_H(x)|\le d_{w_H}\), then we have \(nl \le d_{w_H}\). Indeed, this is a direct consequence of the following straightforward result, which has, however, much importance in our context:
Lemma 3
If the Hamming weight of the Boolean function:
\(x\mapsto \sum _{i=1}^{n} F_i(x) + \sum _{i=1}^{n} x_i \ [mod\, 2],\)
that is, \(\sum _x \left( (w_H(F(x))-w_H(x)) \ [mod\, 2]\right) \), is at most \(d_{w_H}\), then we have \(nl \le d_{w_H}\).
Indeed, the Hamming distance between the component function \(\sum _i F_i\ [mod\, 2]\) and the linear function \(\sum _i x_i\ [mod\, 2]\) is then at most \(d_{w_H}\).
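Lemma 3 is easy to verify exhaustively. The Python sketch below (our own helper names; the PRESENT S-box is used only as a convenient (4, 4) example) computes the parity-based distance \(d_{w_H}\) of Lemma 3 and checks \(nl \le d_{w_H}\):

```python
def w_h(x):
    return bin(x).count("1")

def dot(u, w):
    return bin(u & w).count("1") & 1

def nonlinearity(sbox, n, m):
    """nl = 2^(n-1) - (1/2) * max |W_F(a, v)| over all a and v != 0."""
    w_max = max(abs(sum((-1) ** (dot(v, sbox[x]) ^ dot(a, x))
                        for x in range(2 ** n)))
                for v in range(1, 2 ** m) for a in range(2 ** n))
    return 2 ** (n - 1) - w_max // 2

PRESENT = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
# Number of inputs x for which w_H(F(x)) and w_H(x) have different parities:
# this is the Hamming weight of the component-vs-linear-function difference.
d_parity = sum((w_h(PRESENT[x]) - w_h(x)) % 2 for x in range(16))
print(d_parity, nonlinearity(PRESENT, 4, 4))
assert nonlinearity(PRESENT, 4, 4) <= d_parity  # Lemma 3
```

For PRESENT the parities disagree on 8 of the 16 inputs, so the lemma's bound (8) is far above the actual nonlinearity (4); the bound only bites when the parity distance is small.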
Example 2
For a (4, 4)-function F to have nonlinearity equal to 4 (optimal nonlinearity), \(d_{w_H}\) must be at least 4. In order to construct functions with such properties, we ran a genetic algorithm as given by Picek et al. [17]. We use the same settings as there: 30 independent runs, population size equal to 50, 3-tournament selection, and mutation probability 0.3 per individual. The objective is the maximization of the following fitness function:
Here, \(\varDelta _{nl, 4}\) represents the Kronecker delta function that equals 1 when the nonlinearity is 4 and 0 otherwise. Notice that we subtract the difference of the Hamming weights of the inputs and outputs of an S-box from the summed Hamming weight value for a (4, 4)-function, since we work with a maximization problem while that value should be minimized. Interestingly, we observed that finding S-boxes with those properties is a relatively easy task and that the obtained S-boxes never have more than 8 fixed points. We give examples of such S-boxes in Table 1; for instance, \(S_5\), where the nonlinearity equals 4 and \(d_{w_H}\) is 4.
Next, inspired by our empirical results, we investigate whether it is theoretically possible to construct an S-box with even more fixed points while still having the maximal nonlinearity.
Lemma 4
If an (n, n)-function has k fixed points, then the maximal value of \(W_F(a,v)\) when \(v \ne 0\) is bounded below by \((k-1)/(1-2^{-n})\). If nl is the nonlinearity of an (n, n)-function, then its number k of fixed points is not larger than \(2^n-\lceil (2-2^{1-n})\, nl\rceil \).
Proof
The number of fixed points k of an (n, n)-function F equals:
\(k=2^{-n}\sum _{v\in \mathbb {F}_{2}^{n}}W_F(v,v),\quad (17)\)
which follows from Eq. (1) when \(a = v\) and the property that \(\sum _{v\in \mathbb {F}_ 2^n}(-1)^{v\cdot a}\) equals \(2^n\) if \(a=0\) and is null otherwise. The value of \(W_F(0,0)\) involved in Eq. (17) equals \(2^n\). We take it off and obtain:
\(k-1=2^{-n}\sum _{v\in \mathbb {F}_{2}^{n*}}W_F(v,v).\quad (18)\)
Then the arithmetic mean of \(W_F(v,v)\) when \(v \ne 0\) equals \((k-1)/(1-2^{-n})\). This implies that \(\max _v W_F(v,v)\) is at least \((k-1)/(1-2^{-n})\) and the nonlinearity cannot be larger than \(2^{n-1}-(k-1)/(2-2^{1-n})\). The inequality \(nl\le 2^{n-1}-(k-1)/(2-2^{1-n})\) is equivalent to \(k\le 2^n-(2-2^{1-n})\, nl\), and since k is an integer, to \(k\le 2^n-\lceil (2-2^{1-n})\, nl\rceil \).
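For \(n=4\) and optimal nonlinearity \(nl=4\), Lemma 4 gives \(k\le 2^4-\lceil (2-2^{-3})\cdot 4\rceil =16-8=8\), matching the empirical observation of Example 2 that the obtained S-boxes never have more than 8 fixed points. A minimal check (Python sketch; the helper name is ours):

```python
import math

def max_fixed_points(n: int, nl: int) -> int:
    """Lemma 4: k <= 2^n - ceil((2 - 2^(1-n)) * nl)."""
    return 2 ** n - math.ceil((2 - 2 ** (1 - n)) * nl)

print(max_fixed_points(4, 4))    # 8: at most 8 fixed points at optimal nl for n = 4
print(max_fixed_points(8, 112))  # the inverse function for n = 8 has nl = 112
```

The second call illustrates the bound at the nonlinearity 112 reached by the field inverse for \(n=8\); the bound is a necessary condition only, not a guarantee that such functions exist.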
4 S-Boxes Minimizing the Hamming Distance
4.1 Relation to the Confusion Coefficient
In real-world applications, the device may leak not only in the Hamming weight but also in the Hamming distance; therefore, we now extend our study to the case where the leakage arises from the Hamming distance between x and F(x). Again, we first study the relation to the confusion coefficient and then give the connection to cryptographic properties.
By the triangular inequality, we have \(|w_H(F(x))-w_H(x)|\le d_H(x,F(x))\). This implies that \(\sum _x |w_H(F(x))-w_H(x)|\le \sum _x d_H(x,F(x))\).
Hence, if \(\sum _x d_H(x,F(x))\le d_{d_H}\), we can use Lemma 2 and deduce that also in this scenario the confusion coefficient is bounded by \(\frac{w_H(a)}{n}+\frac{4d_{d_H}}{2^n}\).
4.2 Relation to Cryptographic Properties
From \(\sum _x d_H(x,F(x))\le d_{d_H}\), up to adding a linear function (which changes neither the nonlinearity nor the differential uniformity), considering S-boxes such that, for every x, F(x) lies at a small distance from x corresponds to considering functions which take too small a number of values. We show that such functions have bad nonlinearity and bad differential uniformity.
Lemma 5
Let F be any (n, m)-function such that \(|F(\mathbb {F}_2^n)| \le D\); then \(\delta _F\ge \frac{2^n}{2^m-1}\left( \frac{2^n}{D}-1\right) \) and \(nl\le 2^{n-1}-\frac{\frac{2^{n+m-1}}{D}-2^{n-1}}{2^m-1}\).
Proof
By using the Cauchy–Schwarz inequality, we obtain \(\sum _{a\in \mathbb {F}_2^{n*}}|D_aF^{-1}(0)|=\sum _{b\in \mathbb {F}_2^m}|F^{-1}(b)|^2-2^n\ge \frac{(\sum _{b\in \mathbb {F}_2^m}|F^{-1}(b)|)^2}{D}-2^n=\frac{2^{2n}}{D}-2^n\), and there exists then \(a\in \mathbb {F}_2^{n*}\) such that \(|D_aF^{-1}(0)|\ge \frac{\frac{2^{2n}}{D}-2^n}{2^m-1}\). This proves the first assertion.
We have a partition of \(\mathbb {F}_2^n\) into at most D parts by the preimages \(F^{-1}(b)\), \(b\in \mathbb {F}_2^m\), and there exists then \(b\in \mathbb {F}_2^m\) such that \(|F^{-1}(b)|\ge \frac{2^n}{D}\); for such b, we have \(\sum _{x\in \mathbb {F}_2^n,v\in \mathbb {F}_2^m} (-1)^{v\cdot (F(x)+b)}\ge \frac{2^{n+m}}{D}\), which is equivalent to \(\sum _{v\in \mathbb {F}_2^m, v\ne 0} (-1)^{v\cdot b} W_F(0,v) \ge \frac{2^{n+m}}{D}-2^n\), and then there exists \(v\ne 0\) such that \(W_F(0,v)\ge \frac{\frac{2^{n+m}}{D}-2^n}{2^m-1}\), which implies that \(nl\le 2^{n-1}-\frac{\frac{2^{n+m-1}}{D}-2^{n-1}}{2^m-1}\). This proves the second assertion.
If D is small with respect to \(2^m\) (so that \(2^{n-1}\) is small with respect to \(\frac{2^{n+m-1}}{D}\)) and D is small with respect to \(2^{n/2}\) (so that \(\frac{2^n}{D}\) is large with respect to \(2^{n/2}\)), the nonlinearity is bad with respect to the covering radius bound \(nl\le 2^{n-1}-2^{n/2-1}\). More precisely, if \(D\le \frac{2^m}{\lambda }\) with \(\lambda >1\), then \(nl\le 2^{n-1}-\frac{(\lambda -1)2^{n-1}}{2^m-1}< 2^{n-1}-(\lambda -1)2^{n-m-1}\), and if \((\lambda -1)2^{n-m}\) is significantly larger than \(2^{n/2}\), the nonlinearity is bad with respect to the covering radius bound. We have also that if D is small with respect to \(2^m\), then \(\delta _F\) is large with respect to \(2^{n-m}\) if \(m<n\) and with respect to 2 if \(m=n\) (which are the smallest possible values of \(\delta _F\)).
If F is an (n, n)-function and \(x+F(x)\) has low weight for every x, say at most \(t_{d_H}\), which is equivalent to saying that \(d_H(x,F(x))\le t_{d_H}\) for every x, then its number of values is at most \(D=\sum _{i=0}^{t_{d_H}} \binom{n}{i}\), and we can apply the result above to \(x+F(x)\), which has the same nonlinearity and the same \(\delta _F\) as F. As far as we know, these observations are new. Note that we also have the possibility of applying Lemma 3, and then the nonlinearity is bounded by \(t_{d_H}\).
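Numerically (a Python sketch using the lemma's notation; the helper names are ours), one can tabulate D and the resulting nonlinearity bound; for \(n=8\) the values stay moderate, consistent with the remark that the obtainable nonlinearities are not too bad for \(n = 4, 8\):

```python
import math

def num_values(n: int, t: int) -> int:
    """If d_H(x, F(x)) <= t for every x, then x + F(x) takes at most
    D = sum_{i=0}^{t} binom(n, i) distinct values."""
    return sum(math.comb(n, i) for i in range(t + 1))

def nl_bound(n: int, m: int, D: int) -> float:
    """Lemma 5: nl <= 2^(n-1) - (2^(n+m-1)/D - 2^(n-1)) / (2^m - 1)."""
    return 2 ** (n - 1) - (2 ** (n + m - 1) / D - 2 ** (n - 1)) / (2 ** m - 1)

# Tabulate the bound for n = m = 8 and growing allowed distance t_dH.
for t in range(5):
    D = num_values(8, t)
    print(t, D, nl_bound(8, 8, D))
```

For instance, \(t_{d_H}=1\) gives \(D=9\) and a bound slightly above 114, below the covering radius bound of 120 for \(n=8\) but far from devastating.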
Remark 3
Lemma 5 applies to the case when \(d_H(x,F(x))\le t_{d_H}\) for every x, where x equals \(t\, \oplus \, k_g\). This represents a setting one would encounter when working, for instance, with software implementations. Now, if we consider a hardware setting (e.g., FPGA), then we are interested in the case \(d_H(t,F(t \oplus k_g))\le t_{d_H}\) for every key. However, this case leads to the same observation as before, but now up to adding an affine function instead of a linear function as in Lemma 5.
5 Side-Channel Evaluation
5.1 Evaluation of S-Boxes with (Almost) \(w_H\) Preservation
As cryptographically non-optimal examples of S-boxes (almost) preserving \(w_H\), we consider the following functions F: the identity mapping (\(S_1\)); F not equal to Id but preserving \(w_H\) (\(S_2\)); the identity mapping with an exchange of the images at positions \(x=3\) and \(x=12\), i.e., \(F(3)=12\) and \(F(12)=3\), and as \(w_H(3) = 2\) and \(w_H(12) = 3\) we have \(d_{w_H}=2\) (see Lemma 2) (\(S_3\)); and \(F(x) = 2^n-1-x\), which gives the complementary Hamming weight (\(S_4\)). Finally, we investigate four S-box functions \(S_5\) to \(S_8\) with the smallest possible distance \(d_{w_H}\), which equals 4, and the maximal possible nonlinearity, equal to 4 (see Subsect. 3.2). The S-box functions \(S_7\) and \(S_8\) furthermore have optimal differential uniformity (=4). The mappings are given in Table 1.
The confusion coefficients are illustrated in Fig. 1. Note that the distribution of \(\kappa (k_c,k_g)\) is independent of the particular choice of \(k_c\) (in the case there are no weak keys); the values of \(\kappa (k_c,k_g)\) are only permuted when choosing a different value \(k_c\in \mathbb F_2^n\). For our experiments we choose \(k_c=0\) and, furthermore, we order \(\kappa (k_c,k_g)\) in increasing order of magnitude for illustrative purposes. The minimum value of \(\kappa (k_c,k_g)\) for \(k_g\ne k_c\) is highlighted with a red cross, as it is one indicator of the side-channel resistance. Moreover, we mark \(\kappa (k_c,k_g)=0\) or \(\kappa (k_c,k_g)=1\) with a red circle, which points out that CPA is not able to distinguish between \(k_c\) and the marked \(k_g\).
Figure 1a shows that, indeed, \(k_c\) is indistinguishable from one key hypothesis \(k_g\) if \(w_H\) is preserved. In other words, even when knowing t and observing \(w_H(F(t+k_c))+N\) with F equal to \(S_1\), the attacker cannot uniquely gain information about \(k_c\) even if the number of measurements \(m \rightarrow \infty \). Moreover, it confirms Lemma 1. Note that in our example \(a=k_g\), thus \( \kappa (k_c,k_g) = \frac{w_H(k_g)}{4}\). Interestingly, when comparing our results to the study in [5], where the authors investigated (n, 1)-functions, we observe that the confusion coefficient takes different values, which indeed confirms that the Hamming weight model is not a straightforward extension of 1-bit models. More precisely, in the case of a linear (n, 1)-function the authors observed that the confusion coefficient only takes values in {0, 1}, whereas our examples illustrate (as do our theoretical findings in Sect. 3) that the confusion coefficient is not restricted to {0, 1} and is equal to 1 for only one particular \(k_g\). Interestingly, for \(d_{w_H}=2\) (see Fig. 1b) we also have that \(k_c\) is indistinguishable for one \(k_g\). Moreover, apart from \( \kappa (k_c,k_g)=1\), only two different values are taken, each 7 times. This means that CPA is not able to distinguish between each of these 7 key guesses and in total only produces three different correlation values. When considering a complementary \(w_H\) preservation (e.g., \(4-w_H\)), we obtain the same results as for \(w_H\) preservation (see also Fig. 1).
Note that, while being illustrative, these first four examples of F are not cryptographically optimal and thus are not suitable in practice. We therefore constructed four S-boxes (\(S_5\) to \(S_8\)) with the smallest \(d_{w_H} (=4)\) while having optimal nonlinearity. Note that \(S_5,S_6\) have suboptimal differential uniformity, while \(S_7,S_8\) are cryptographically optimal (i.e., optimal nonlinearity and differential uniformity). Figures 1c to f show the confusion coefficients of \(S_5\) to \(S_8\). We can observe that all these S-boxes have a very low minimum confusion coefficient, even lower than for \(S_1\) to \(S_4\). Moreover, like the previously investigated S-boxes, \(S_5\) has \(\kappa (k_c,k_g)=1\). We therefore find an S-box that almost preserves the Hamming weight for which, even with an infinite number of traces, the secret key cannot be uniquely recovered. As the minimum value of the confusion coefficient of \(S_5\) is low (=0.125), there additionally exist other key hypotheses that are hard to distinguish from the secret key. In conclusion, exact \(w_H\) preservation indeed results in good side-channel resistance, since we have \( \kappa (k_c,k_g)=1\). Moreover, when \(w_H\) is almost preserved, we present here S-boxes that have a very low minimum confusion coefficient.
5.2 A Closer Look at the Confusion Coefficient
To understand the exact reason why some (one or more) key guesses result in a smaller confusion coefficient than others, and how this is related to F, we concentrate on the connection between \(k_c\), \(k_g\), F, and \(\kappa (k_c,k_g)\). Loosely speaking, we iterate over key guesses influencing the input of F, while calculating the confusion coefficient on the measured output of F and being interested in the properties of F. To better address these connections, we split the problem into two parts.
First, we take a deeper look at the input of F, i.e., \(t \oplus k_g\), where \(\forall t,k_g \in \mathbb F_2^n\) (see Eq. (8)). Clearly, due to the \(\oplus \) operation, a particular permutation is given for different key guesses \(k_g\). A 2D representation of \(t \oplus k_g\), where \(k_g\) is on the horizontal and t on the vertical axis, is given in Fig. 2, where again, for simplicity, \(t,k_g \in \mathbb F_2^4\). In this figure we furthermore group the values of \(t \oplus k_g\) into \(4\times 4\) boxes, each containing values from one of four ranges: blue (\(B_0\)): \(t \oplus k_g \in [0,3]\); yellow (\(B_1\)): \(t \oplus k_g \in [4,7]\); green (\(B_2\)): \(t \oplus k_g \in [8,11]\); and red (\(B_3\)): \(t \oplus k_g \in [12,15]\). Using this color representation we can easily see 4 different permutations \( \pi _0, \pi _1, \pi _2, \pi _3\) applied on (\(B_0\) \(B_1\) \(B_2\) \(B_3\)). More precisely, when considering a column representation among the key guesses \(k_g\), we have:

for \(k_g \in [0,3]\): no permutation (\( \pi _0 = \bigl ({\begin{matrix} 0 &{} 1 &{} 2 &{} 3 \\ 0 &{} 1 &{} 2 &{} 3 \end{matrix}}\bigr )\)),

for \(k_g \in [4,7]\): pairwise swap of elements in each half of matrix (\( \pi _1 = \bigl ({\begin{matrix} 0 &{} 1 &{} 2 &{} 3 \\ 1 &{} 0 &{} 3 &{} 2 \end{matrix}}\bigr )\)),

for \(k_g \in [8,11]\): additionally a swap of the two halves of the matrix (\( \pi _2 = \bigl ({\begin{matrix} 0 &{} 1 &{} 2 &{} 3 \\ 2 &{} 3 &{} 0 &{} 1 \end{matrix}}\bigr )\)),

for \(k_g \in [12,15]\): additionally a pairwise swap of elements in each half of the matrix (\( \pi _3 = \bigl ({\begin{matrix} 0 &{} 1 &{} 2 &{} 3 \\ 3 &{} 2 &{} 1 &{} 0 \end{matrix}}\bigr )\)).
Moreover, as highlighted by the zoom-in on each box, within each box (i.e., \(B_i\), \(0\le i \le 3\)) we have the same permutations \( \pi _0,\ldots , \pi _3\) on the 4 column entries. Note that the order of the permutations is the same for each box; in other words, regardless of the color and position of the box, the same permutation is applied. More formally, let \(b_{ij} \in [4i,4i+3]^4\) (for \(0 \le i,j \le 3\)) denote the columns within \(B_i\); then \(b_{ij}\) equals \(\pi _j\) applied to the column vector \((4i \ 4i+1 \ 4i+2 \ 4i+3)\).
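For \(n=4\) this block structure can be checked exhaustively. The following sketch (our own illustration, not part of the paper) verifies that the column for \(k_g = 4i+j\) is obtained from the identity column by applying \(\pi _i\) to the four boxes and \(\pi _j\) within each box; note that each \(\pi _i\) is simply XOR by i acting on positions.

```python
# Sketch: verify the permutation structure of the 16x16 table of t XOR k_g.
# pi_i maps position p to p XOR i, which reproduces the four permutations above.
PI = [[p ^ i for p in range(4)] for i in range(4)]
assert PI == [[0, 1, 2, 3],   # pi_0: identity
              [1, 0, 3, 2],   # pi_1: pairwise swap in each half
              [2, 3, 0, 1],   # pi_2: swap of the two halves
              [3, 2, 1, 0]]   # pi_3: full reversal

for kg in range(16):
    i, j = kg // 4, kg % 4                 # k_g = 4i + j
    col = [t ^ kg for t in range(16)]      # one column of the table
    for b in range(4):                     # box index (B_0..B_3)
        for r in range(4):                 # position within the box
            # entry 4b+r holds the value from box pi_i(b), position pi_j(r)
            assert col[4 * b + r] == 4 * PI[i][b] + PI[j][r]
print("structure verified for all 16 key guesses")
```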
Second, we examine the expression of the confusion coefficient in Eq. (11) itself. Recall from Eq. (8) that \(y_{k_g,t} = y(k_g,t) = w_H(F(k_g + t))\). Let
\( y_{k_g} = (y_{k_g,0}, y_{k_g,1}, \ldots , y_{k_g,2^n-1})\)
denote the vector of hypotheses for one key guess \(k_g\) over all texts t. Referring to Fig. 2, \( y_{k_g}\) relates to one column before the application of F and \(w_H\). The confusion coefficient can be rewritten as
\(\kappa (k_c,k_g) = \frac{\Vert y_{k_c} - y_{k_g}\Vert ^2_2}{n \cdot 2^n}, \qquad (19)\)
with \(\Vert \cdot \Vert _2\) being the Euclidean norm. Let us recall that we are especially interested in \(\min _{k_g\ne k_c} \kappa (k_c,k_g)\). Moreover, the elements of \( y_{k_c} - y_{k_g}\) are in \([-4,4]\). Now, as Eq. (19) considers not only the differences but their squared values, we may conjecture that the minimum value is most likely reached when the elements of \( y_{k_c} - y_{k_g}\) are in \([-1,1]\), which is discussed in more detail and confirmed using several lightweight S-boxes in Appendix A. Roughly speaking, one difference of \(\pm 2\) contributes as much as 4 changes of \(\pm 1\), and so on.
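To make this concrete, the sketch below computes the confusion coefficient under the Hamming weight model. The normalization \(1/(n\cdot 2^n)\) is our reading of Eq. (19), chosen because it reproduces the values quoted in this section (e.g., a squared distance of 8 gives \(8/64 = 0.125\) for KLEIN); the S-box table is that of KLEIN [20].

```python
# Sketch: confusion coefficient kappa(k_c, k_g) under the Hamming weight model,
# computed as ||y_kc - y_kg||_2^2 / (n * 2^n) (our reading of Eq. (19)).
KLEIN = [0x7, 0x4, 0xA, 0x9, 0x1, 0xF, 0xB, 0x0,
         0xC, 0x3, 0x2, 0x6, 0x8, 0xE, 0xD, 0x5]  # S-box of KLEIN [20]
N = 4  # S-box size in bits

def hw(x):
    return bin(x).count("1")

def kappa(sbox, kc, kg):
    d2 = sum((hw(sbox[t ^ kc]) - hw(sbox[t ^ kg])) ** 2 for t in range(2 ** N))
    return d2 / (N * 2 ** N)

kc = 0
k_min, kg_min = min((kappa(KLEIN, kc, kg), kg) for kg in range(1, 2 ** N))
print(kg_min, k_min)  # minimum reached at k_g = 11 with kappa = 0.125
```

By the XOR key addition, \(\kappa (k_c,k_g)\) depends only on \(k_c \oplus k_g\), so fixing \(k_c = 0\) loses no generality.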
Now let us put the observations of both parts together. Our previous findings about the permutations can be straightforwardly applied to the Hamming weights of the output of F. Let us assume w.l.o.g. \(k_c = 0\); then for \(k_g = 4i+j\) (with \(0\le i,j \le 3\)) we have
\( y_{k_g} = \pi _i\bigl (\pi _j\bigl ( y_{0}^T\bigr )\bigr ), \qquad (20)\)
with \( y_{0} = (y_{0,0}, y_{0,1}, \ldots , y_{0,15})\) and \((\cdot )^T\) denoting the transpose, where \(\pi _i\) permutes the four blocks of \( y_{0}\) and \(\pi _j\) permutes the entries within each block. Thus, we are looking for a function F such that the distance
\(\Vert y_{k_c} - \pi _i(\pi _j( y_{k_c}^T))\Vert ^2_2 \qquad (21)\)
is as small as possible for any \(\pi _i,\pi _j \in \{\pi _0,\pi _1,\pi _2,\pi _3\}\) (excluding the identity case \(\pi _i = \pi _j = \pi _0\), i.e., \(k_g = k_c\)).
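This decomposition can likewise be checked programmatically. The sketch below (our own illustration) confirms, for the KLEIN S-box, that \(y_{k_g}\) for \(k_g = 4i+j\) equals \(y_0\) with \(\pi _i\) applied to its four blocks and \(\pi _j\) applied within each block; as before, \(\pi _i\) acts as XOR by i on positions.

```python
# Sketch: check the Eq. (20)-style structure of the hypothesis vectors y_kg.
KLEIN = [0x7, 0x4, 0xA, 0x9, 0x1, 0xF, 0xB, 0x0,
         0xC, 0x3, 0x2, 0x6, 0x8, 0xE, 0xD, 0x5]

def hw(x):
    return bin(x).count("1")

y0 = [hw(KLEIN[t]) for t in range(16)]          # y_0: hypotheses for k_g = 0
for kg in range(16):
    i, j = kg // 4, kg % 4                      # k_g = 4i + j
    ykg = [hw(KLEIN[t ^ kg]) for t in range(16)]
    # pi_i permutes the four blocks (b -> b XOR i), pi_j the entries (r -> r XOR j)
    permuted = [y0[4 * (b ^ i) + (r ^ j)] for b in range(4) for r in range(4)]
    assert ykg == permuted
print("block/inner permutation structure holds for all key guesses")
```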
This finding indicates that the order of the Hamming weights of the output of F plays a significant role. More precisely, the minimum confusion coefficient may depend not only on the distribution of values among the 4 boxes (Example 4), but also on the order within each box (Example 5).
Example 4
Note that the elements of \(y_{k_g}\) follow a binomial distribution due to the application of \(w_H\). Therefore, 0 and 4 occur once, 1 and 3 occur four times, and 2 occurs six times. In order to reach a minimum squared Euclidean distance in Eq. (21), a natural strategy seems to be to distribute the values broadly among the 4 sets \([4i,4i+3]\) and to have a small difference between the values within one set. Let us consider the S-boxes of Midori [18] and Mysterion [19]. From Table 2 one can observe that for Midori we have the following sets: 2,2,3,2 – 3,3,4,3 – 1,2,1,2 – 0,1,1,2. So, the maximal distance between values is 2. Moreover, the first three sets contain only 2 different values and the last one 3. On the contrary, when looking at Mysterion (0,1,2,3 – 2,4,3,2 – 1,3,1,2 – 2,1,3,2), the structure looks less balanced. In particular, the maximal distance is 3 and there are always 3 different values within a set. When comparing the confusion coefficients in Fig. 3, we can observe that Midori has a much smaller minimum confusion coefficient and is thus more SCA resilient.
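This comparison needs only the Hamming weight sequences quoted above, since by the XOR key addition the squared distance depends only on \(\varDelta = k_c \oplus k_g\). A sketch (our illustration; the normalization \(1/(n\cdot 2^n)\) is our reading of Eq. (19)):

```python
# Sketch: minimum confusion coefficient of Midori vs. Mysterion, computed from
# the Hamming weight sequences w_H(F(x)) quoted from Table 2.
N = 4
MIDORI    = [2, 2, 3, 2,  3, 3, 4, 3,  1, 2, 1, 2,  0, 1, 1, 2]
MYSTERION = [0, 1, 2, 3,  2, 4, 3, 2,  1, 3, 1, 2,  2, 1, 3, 2]

def min_kappa(h):
    # the distance depends only on delta = k_c XOR k_g, so scan delta = 1..15
    dists = [sum((h[t] - h[t ^ d]) ** 2 for t in range(2 ** N))
             for d in range(1, 2 ** N)]
    return min(dists) / (N * 2 ** N)

print(min_kappa(MIDORI), min_kappa(MYSTERION))  # 0.125 vs. 0.3125
```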
Example 5
Let us consider the S-box of KLEIN [20] and a small modification (\(S_9\)) in which we swap F(1) with F(3) (see Table 2). Note that both functions consist of the same values among the sets: 3,1,2,2 – 1,4,3,0 – 2,2,1,2 – 1,3,3,2. For both, \(\min _{k_g\ne k_c} \kappa (k_c,k_g)\) is reached for \(k_g=11\), thus \(\pi _i=\pi _2\) and \(\pi _j=\pi _3\). However, as Fig. 3d shows, for KLEIN we have \(\min _{k_g\ne k_c} \kappa (k_c,k_g) = 0.125\), whereas \(\min _{k_g\ne k_c} \kappa (k_c,k_g) = 0.1875\) for \(S_9\), which relates to a squared Euclidean distance (see Eq. (21)) of 8 and 12, respectively.
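The effect of reordering alone can be reproduced directly from the Hamming weight sequence: swapping only the weights at positions 1 and 3 raises the minimum squared Euclidean distance from 8 to 12 (sketch, our illustration):

```python
# Sketch: KLEIN vs. S_9 (KLEIN with F(1) and F(3) swapped) - same multiset of
# Hamming weights per set, different order, different minimum distance.
KLEIN_HW = [3, 1, 2, 2,  1, 4, 3, 0,  2, 2, 1, 2,  1, 3, 3, 2]
S9_HW = list(KLEIN_HW)
S9_HW[1], S9_HW[3] = S9_HW[3], S9_HW[1]   # swap w_H(F(1)) and w_H(F(3))

def min_dist(h):
    # minimum squared Euclidean distance over all delta = k_c XOR k_g != 0
    return min(sum((h[t] - h[t ^ d]) ** 2 for t in range(16))
               for d in range(1, 16))

print(min_dist(KLEIN_HW), min_dist(S9_HW))            # 8 and 12
print(min_dist(KLEIN_HW) / 64, min_dist(S9_HW) / 64)  # 0.125 and 0.1875
```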
Furthermore, in Appendix A we investigate several lightweight S-boxes in terms of the minimum confusion coefficient and provide empirical evaluations. Note that a preliminary study showing the differences between some lightweight S-boxes has been conducted in [21]^{Footnote 2}. Our extended results in Appendix A theoretically and empirically confirm [21]. Moreover, the appendix provides details about the minimum Euclidean distance and the permutations \(\pi _i,\pi _j\). Additionally, we take a deeper look at the expression \( y_{k_c} - y_{k_g}\) for the key hypothesis \(k_g\) that results in the smallest confusion coefficient (i.e., \(\arg \min _{k_g\ne k_c} \kappa (k_c,k_g)\)). We discover that for \(S_5\) and the S-box proposed in [17], which has optimal properties of the confusion coefficient while retaining optimal differential properties, the difference \( \Vert y_{k_c} - y_{k_g}\Vert ^2_2\) has a particular structure that is not observed for any other investigated 4-bit S-box.
In conclusion, we derived specific criteria influencing the side-channel resistance (in particular Eq. (21) and our findings in Appendix A) that could be exploited in future work to find and optimize S-boxes in terms of side-channel resistance, especially when adapted to \(n>4\).
6 Conclusions
In this paper, we prove a number of bounds between various cryptographic properties that can also be related to the side-channel resilience of a cipher. Our results confirm the well-known intuition that making an S-box more resilient against SCA will make it potentially more vulnerable to classical cryptanalysis. However, they also show that for the usual sizes of S-boxes this weakening is moderate, and trade-offs are therefore possible.
Since our practical investigations in this work concentrated on the Hamming weight model, in the future we plan to explore possible trade-offs for the Hamming distance model and to extend our (empirical) analysis to larger S-boxes using the theoretical findings in this paper.
Notes
 1.
Note that we also have the same permutations on the row entries; however, we are particularly interested in the column representation, as columns reflect the key hypotheses.
 2.
Note that in [22] the authors compared S-boxes regarding another (not normalized) version of the confusion coefficient and found that their version is not aligned with their empirical results.
References
Matsui, M., Yamagishi, A.: A new method for known plaintext attack of FEAL cipher. In: Rueppel, R.A. (ed.) EUROCRYPT 1992. LNCS, vol. 658, pp. 81–91. Springer, Heidelberg (1993). doi:10.1007/3-540-47555-9_7
Biham, E., Shamir, A.: Differential cryptanalysis of DES-like cryptosystems. In: Menezes, A.J., Vanstone, S.A. (eds.) CRYPTO 1990. LNCS, vol. 537, pp. 2–21. Springer, Heidelberg (1991). doi:10.1007/3-540-38424-3_1
Mangard, S., Oswald, E., Popp, T.: Power Analysis Attacks: Revealing the Secrets of Smart Cards (Advances in Information Security). Springer-Verlag New York Inc., Secaucus (2007)
Nikova, S., Rechberger, C., Rijmen, V.: Threshold implementations against side-channel attacks and glitches. In: Ning, P., Qing, S., Li, N. (eds.) ICICS 2006. LNCS, vol. 4307, pp. 529–545. Springer, Heidelberg (2006). doi:10.1007/11935308_38
Heuser, A., Rioul, O., Guilley, S.: A theoretical study of Kolmogorov-Smirnov distinguishers. In: Prouff, E. (ed.) COSADE 2014. LNCS, vol. 8622, pp. 9–28. Springer, Cham (2014). doi:10.1007/978-3-319-10175-0_2
Brier, E., Clavier, C., Olivier, F.: Correlation power analysis with a leakage model. In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 16–29. Springer, Heidelberg (2004). doi:10.1007/978-3-540-28632-5_2
Fei, Y., Luo, Q., Ding, A.A.: A statistical model for DPA with novel algorithmic confusion analysis. In: Prouff, E., Schaumont, P. (eds.) CHES 2012. LNCS, vol. 7428, pp. 233–250. Springer, Heidelberg (2012). doi:10.1007/978-3-642-33027-8_14
Carlet, C.: Vectorial Boolean functions for cryptography. In: Crama, Y., Hammer, P.L. (eds.) Boolean Models and Methods in Mathematics, Computer Science, and Engineering, 1st edn., pp. 398–469. Cambridge University Press, New York (2010)
Nyberg, K.: On the construction of highly nonlinear permutations. In: Rueppel, R.A. (ed.) EUROCRYPT 1992. LNCS, vol. 658, pp. 92–98. Springer, Heidelberg (1993). doi:10.1007/3-540-47555-9_8
Chabaud, F., Vaudenay, S.: Links between differential and linear cryptanalysis. In: De Santis, A. (ed.) EUROCRYPT 1994. LNCS, vol. 950, pp. 356–365. Springer, Heidelberg (1995). doi:10.1007/BFb0053450
Nyberg, K.: Perfect nonlinear S-boxes. In: Davies, D.W. (ed.) EUROCRYPT 1991. LNCS, vol. 547, pp. 378–386. Springer, Heidelberg (1991). doi:10.1007/3-540-46416-6_32
Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 388–397. Springer, Heidelberg (1999). doi:10.1007/3-540-48405-1_25
Gandolfi, K., Mourtel, C., Olivier, F.: Electromagnetic analysis: concrete results. In: Koç, Ç.K., Naccache, D., Paar, C. (eds.) CHES 2001. LNCS, vol. 2162, pp. 251–261. Springer, Heidelberg (2001). doi:10.1007/3-540-44709-1_21
Mangard, S., Oswald, E., Popp, T.: Power Analysis Attacks: Revealing the Secrets of Smart Cards. Springer, Heidelberg (2006). ISBN 0-387-30857-1. http://www.dpabook.org/
Guilley, S., Heuser, A., Rioul, O.: A key to success. In: Biryukov, A., Goyal, V. (eds.) INDOCRYPT 2015. LNCS, vol. 9462, pp. 270–290. Springer, Cham (2015). doi:10.1007/978-3-319-26617-6_15
Thillard, A., Prouff, E., Roche, T.: Success through confidence: evaluating the effectiveness of a side-channel attack. In: Bertoni, G., Coron, J.-S. (eds.) CHES 2013. LNCS, vol. 8086, pp. 21–36. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40349-1_2
Picek, S., Papagiannopoulos, K., Ege, B., Batina, L., Jakobovic, D.: Confused by confusion: systematic evaluation of DPA resistance of various S-boxes. In: Meier, W., Mukhopadhyay, D. (eds.) INDOCRYPT 2014. LNCS, vol. 8885, pp. 374–390. Springer, Cham (2014). doi:10.1007/978-3-319-13039-2_22
Banik, S., Bogdanov, A., Isobe, T., Shibutani, K., Hiwatari, H., Akishita, T., Regazzoni, F.: Midori: a block cipher for low energy (extended version). Cryptology ePrint Archive, Report 2015/1142 (2015). http://eprint.iacr.org/
Journault, A., Standaert, F.-X., Varici, K.: Improving the security and efficiency of block ciphers based on LS-designs. Des. Codes Crypt. 82(1–2), 495–509 (2016)
Gong, Z., Nikova, S., Law, Y.W.: KLEIN: a new family of lightweight block ciphers. In: Juels, A., Paar, C. (eds.) RFIDSec 2011. LNCS, vol. 7055, pp. 1–18. Springer, Heidelberg (2012). doi:10.1007/978-3-642-25286-0_1
Heuser, A., Picek, S., Guilley, S., Mentens, N.: Side-channel analysis of lightweight ciphers: does lightweight equal easy? Cryptology ePrint Archive, Report 2017/261 (2017). http://eprint.iacr.org/2017/261
Lerman, L., Markowitch, O., Veshchikov, N.: Comparing S-boxes of ciphers from the perspective of side-channel attacks. IACR Cryptology ePrint Archive, Report 2016/993 (2016)
Daemen, J., Peeters, M., Van Assche, G., Rijmen, V.: Nessie proposal: the block cipher Noekeon. Nessie submission (2000). http://gro.noekeon.org/
Shibutani, K., Isobe, T., Hiwatari, H., Mitsuda, A., Akishita, T., Shirai, T.: Piccolo: an ultra-lightweight blockcipher. In: Preneel, B., Takagi, T. (eds.) CHES 2011. LNCS, vol. 6917, pp. 342–357. Springer, Heidelberg (2011). doi:10.1007/978-3-642-23951-9_23
Bogdanov, A., Knudsen, L.R., Leander, G., Paar, C., Poschmann, A., Robshaw, M.J.B., Seurin, Y., Vikkelsoe, C.: PRESENT: an ultra-lightweight block cipher. In: Paillier, P., Verbauwhede, I. (eds.) CHES 2007. LNCS, vol. 4727, pp. 450–466. Springer, Heidelberg (2007). doi:10.1007/978-3-540-74735-2_31
Borghoff, J., et al.: PRINCE – a low-latency block cipher for pervasive computing applications. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 208–225. Springer, Heidelberg (2012). doi:10.1007/978-3-642-34961-4_14
Zhang, W., Bao, Z., Lin, D., Rijmen, V., Yang, B., Verbauwhede, I.: RECTANGLE: a bit-slice lightweight block cipher suitable for multiple platforms. Sci. China Inf. Sci. 58(12), 1–15 (2015)
Beierle, C., Jean, J., Kölbl, S., Leander, G., Moradi, A., Peyrin, T., Sasaki, Y., Sasdrich, P., Sim, S.M.: The SKINNY family of block ciphers and its low-latency variant MANTIS. Cryptology ePrint Archive, Report 2016/660 (2016). http://eprint.iacr.org/2016/660
Standaert, F.-X., Malkin, T., Yung, M.: A unified framework for the analysis of side-channel key recovery attacks (extended version). IACR Cryptology ePrint Archive, Report 2006/139 (2006)
Acknowledgments
This work has been supported in part by the Croatian Science Foundation under the project IP-2014-09-4882. Parts of this work were done while the third author was affiliated with KU Leuven, Belgium.
A Investigation of Known S-Boxes
We already described properties of the S-boxes of KLEIN, Midori, and Mysterion, showing that Midori and KLEIN both have \(\min _{k_g \ne k_c} \kappa (k_c,k_g) = 0.125\), whereas for Mysterion it equals 0.3125. Thus, the side-channel resistance of Mysterion is much lower than that of KLEIN and Midori. Table 3 shows properties of several well-known S-boxes, where \(\pi _i\) and \(\pi _j\) indicate the permutations (see Eq. (21)) for the smallest squared Euclidean distance (\(\min \Vert \cdot \Vert ^2_2\)) and thus the smallest confusion coefficient (\(\min \kappa (k_c,k_g)\)).
Note that the squared Euclidean distance should not serve as a new metric, as it is in direct relation with the confusion coefficient; its stated values should rather indicate how far \(y_{k_g}\) is from \(y_{k_c}\) in terms of the squared Hamming weight differences. Further, as used in [17], we give \(var(\kappa (k_c,k_g))\), where the higher the variance, the higher the side-channel resistance. Finally, we specify the Hamming weights preserved (P\(w_H\)). One can observe that Piccolo has the highest minimum value of the confusion coefficient (and the highest minimum squared Euclidean norm), and thus its side-channel resistance is the lowest among the evaluated ones. Next comes Mysterion, followed by SKINNY, RECTANGLE, PRESENT, and Midori 2, which all have the same minimum value of the confusion coefficient but different variances thereof. Then, we have NOEKEON and PRINCE. The lowest minimum confusion coefficient is reached by KLEIN, Midori, and the S-box proposed in [17], which has been found under the constraint of optimal differential properties and the lowest confusion coefficient by using genetic algorithms. Interestingly, for the latter, Fig. 4 illustrates that for one key guess \(\kappa (k_c,k_g)=1\), which we do not observe for any other known S-box with optimal differential properties. Moreover, it corresponds to the confusion coefficient of \(S_5\).
Additionally, we take a deeper look at the expression \( y_{k_c} - y_{k_g}\) for the key hypothesis \(k_g\) that results in the smallest confusion coefficient, and we are interested in whether the elements of \( y_{k_c} - y_{k_g}\) are in \([-1,1]\) (see the remark in Subsect. 5.2). Our investigations show that this does not hold for S-boxes with \(\kappa (k_c,k_g)\ge 0.25\), but it does for the most side-channel resistant ones. In particular, Midori 2, Mysterion, PRESENT, RECTANGLE, and SKINNY contain two absolute differences of 2 (each contributing 4 to the squared Euclidean distance), whereas Piccolo even has 4 absolute differences of 2. However, we did not observe any absolute difference greater than 2. On the contrary, KLEIN, Midori, NOEKEON, PRINCE, and the S-box in [17] only contain absolute differences of 1, for which the sum of absolute differences equals the squared Euclidean distance.
When considering the sum of differences among the 4 sets \([4s,4s+3]\) for \(0\le s \le 3\), we observed interesting distinctions. In particular, let us denote
\(\delta _s = \sum _{t=4s}^{4s+3} \bigl \vert y_{k_c,t} - y_{k_g,t}\bigr \vert , \quad 0 \le s \le 3,\)
where \(k_g\) is determined by the permutations \(\pi _i\) and \(\pi _j\) resulting in the minimum confusion coefficient.
Table 4 highlights that only for the S-box in [17] do we have the same difference among all four sets. Note that in future work this property may additionally help to detect and find S-boxes with better side-channel resistance for \(n>4\).
Finally, an empirical evaluation of the studied S-boxes is given in Fig. 5. For reliability, we conducted 5,000 independent simulation experiments (SNR \(=2\)) with random secret keys \(k_c\) and texts t. Figure 5a shows the first-order success rate (SR), i.e., the empirical probability that the correct secret key is exclusively found. As predicted by the properties of the confusion coefficient, the S-box of Piccolo is the weakest: the correct key is found with an SR of 0.9 using 20 measurement traces, whereas KLEIN and Midori require around 35 and 40 traces, respectively, to reach SR = 0.9. Since for the S-box in [17] the correct key is never exclusively found, and thus SR = 0, we additionally plot the guessing entropy [29] in Fig. 5b, which confirms our finding that at least 2 key guesses have to be made.
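As an illustration of the kind of simulation behind Fig. 5, the sketch below runs a single correlation-based key recovery (Hamming weight leakage plus Gaussian noise); the distinguisher and the parameters are our own simplified choices, not the paper's exact setup.

```python
import random

# Sketch: one simulated correlation attack on a 4-bit S-box with Hamming weight
# leakage and Gaussian noise. Since Var(w_H) = 1 for 4 bits, SNR = 1 / sigma^2.
KLEIN = [0x7, 0x4, 0xA, 0x9, 0x1, 0xF, 0xB, 0x0,
         0xC, 0x3, 0x2, 0x6, 0x8, 0xE, 0xD, 0x5]

def hw(x):
    return bin(x).count("1")

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def cpa(sbox, kc, n_traces, snr, rng):
    sigma = (1.0 / snr) ** 0.5
    texts = [rng.randrange(16) for _ in range(n_traces)]
    traces = [hw(sbox[t ^ kc]) + rng.gauss(0.0, sigma) for t in texts]
    # rank key guesses by absolute correlation with the hypotheses
    return max(range(16),
               key=lambda kg: abs(corr([hw(sbox[t ^ kg]) for t in texts],
                                       traces)))

rng = random.Random(42)
print(cpa(KLEIN, kc=5, n_traces=2000, snr=2.0, rng=rng))
```

With 2,000 traces at SNR = 2 the gap between the correct-key correlation and the strongest rival is large relative to the estimation noise, so the correct key is recovered reliably.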
© 2017 Springer International Publishing AG
Carlet, C., Heuser, A., Picek, S. (2017). Trade-Offs for S-Boxes: Cryptographic Properties and Side-Channel Resilience. In: Gollmann, D., Miyaji, A., Kikuchi, H. (eds.) Applied Cryptography and Network Security. ACNS 2017. Lecture Notes in Computer Science, vol. 10355. Springer, Cham. https://doi.org/10.1007/978-3-319-61204-1_20