An error-tolerant approach for efficient AES key retrieval in the presence of cache prefetching – experiments, results, analysis


Abstract

The challenge in cache-based attacks on cryptographic algorithms is not merely to capture the cache footprints during their execution but to process the obtained information to deduce the secret key. Our principal contribution is a theoretical framework based upon which our AES key retrieval algorithms are not only more efficient in terms of execution time but also require up to 75% fewer blocks of ciphertext compared with previous work. Aggressive hardware prefetching greatly complicates access-driven attacks since they are unable to distinguish between a cache line fetched on demand and one prefetched but not subsequently used during a run of a victim executing AES. We implement multi-threaded spy code that reports accesses to the AES tables at the granularity of a cache block. Since prefetching greatly increases side-channel noise, we develop sophisticated heuristics to “clean up” the input received from the spy threads. Our key retrieval algorithms process the sanitized input to recover the AES key using only about 25 blocks of ciphertext in the presence of prefetching and, stunningly, a mere 2–3 blocks with prefetching disabled. We also derive analytical models that capture the effect of varying false positive and false negative rates on the number of blocks of ciphertext required for key retrieval.


Notes

  1. The false positive rate per element is equal to \( f_{q}\mid G \mid _{q}\,+\,(1-f_q)(\mid G \mid _{q}-1) = \,\mid G \mid _{q} + f_q - 1\).

  2. The higher variance for the Round 2 distribution is a secondary effect, which favours a larger number of decryptions for the Second Round Attack.


Author information


Corresponding author

Correspondence to C Ashokkumar.

Appendices

Appendix I: Proof of Theorem 1

Say \(f_q\) and \(\mid G \mid _{q}\) are, respectively, the average false negative rate and the average cardinality of the set of guesstimates after q refinements of the initial set of guesstimates. The average false negative rate can also be thought of as the probability of occurrence of a false negative:

$$\begin{aligned} p_c = 1 - f_q. \end{aligned}$$
(A1)

This equation follows from the observation that the correct nibble in the histogram receiving a boost and the occurrence of a false negative are mutually exclusive and exhaustive events. To derive \(p_{in}\), consider the following reasoning. When there is a false negative, all the guesstimates lead to boosting some incorrect nibble in the histogram; hence, in this case, the probability of a particular incorrect nibble receiving a boost is \(\frac{\mid G \mid _{q}}{15}\), since there are 15 incorrect nibbles. When there is no false negative, all the guesstimates except one lead to boosting some incorrect nibble in the histogram; this one guesstimate boosts the correct nibble. Hence, in this case, the probability of a particular incorrect nibble receiving a boost is \(\frac{\mid G \mid _{q}-1}{15}\). Combining the two cases, we obtain the following equation:

$$\begin{aligned} p_{in} = f_q\left( \frac{\mid G \mid _{q}}{15}\right) +(1-f_q)\left( \frac{\mid G \mid _{q}-1}{15}\right) . \end{aligned}$$
(A2)
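For concreteness, with illustrative values (hypothetical, not taken from the paper's experiments) \(f_q = 0.2\) and \(\mid G \mid _{q} = 4\), Eq. (A2) gives

$$\begin{aligned} p_{in} = 0.2\left( \frac{4}{15}\right) + 0.8\left( \frac{3}{15}\right) = \frac{3.2}{15} \approx 0.21. \end{aligned}$$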

After obtaining a key nibble, we refine the set of guesstimates. The cardinality of the set of guesstimates decreases by 1 with probability \(1-f_q\), the probability that no false negative occurred, i.e., that some guesstimate led to a boost of the correct nibble and was removed from the set of guesstimates during refinement. Also, the average false negative rate increases with refinement, since some line numbers may be accessed more than once in a round and these are removed during refinement:

$$\begin{aligned} \mid G \mid _{q+1} = \mid G \mid _{q} - (1-f_{q}), \, \, 0 \le q \le 2. \end{aligned}$$
(A3)

Let f be the probability of the occurrence of a false negative due to the spy input and preprocessing strategy. Let \(p_q\) be the probability of false negative occurrence due to refining of guesstimate sets (some line numbers may be accessed more than once in a round) after q refinements. Assuming that these two sources of false negatives are independent of each other, we have the following equation:

$$\begin{aligned} f_{q} = f + p_{q} - (f \bullet p_{q}), \,\, 0 \leq q \leq 3 \end{aligned}$$
(A4)

where \(\begin{aligned} p_0 = 0,\,\, p_1 = \frac{1}{16},\,\, p_2 = \frac{31}{256}, \,\, p_{3} = \frac{721}{4096}. \end{aligned}\)

As \(p_0\) corresponds to zero refinements, there cannot be any false negatives due to refinement, so \(p_0 = 0\). After refining once, there is a chance of a false negative since the removed line number might have been accessed more than once. For the second nibble to be recovered, a false negative occurs if the removed line number is also the line number accessed due to this nibble. As there are 16 possible line numbers, the probability of this event is \(\frac{1}{16}\). Hence, \(p_1 = \frac{1}{16}\).

After two refinements, the probability of a false negative due to refinement equals the probability that the line number accessed matches the line number of either of the first two nibbles recovered, which is \(\frac{16+16-1}{16\times 16} = \frac{31}{256}\). Hence \(p_2 = \frac{31}{256}\).

After three refinements, the probability of a false negative due to refinement equals the probability that the line number accessed matches the line number of any of the first three nibbles recovered. Let \(A_i\) be the event that the line number corresponding to the \(i^{\mathrm{th}}\) nibble recovered matches the line number corresponding to the last nibble recovered. Then, \(p_3\) is \(P(A_1 \cup A_2 \cup A_3)\), which is

$$\begin{aligned} \sum _{i=1}^{3} P(A_i) - \sum _{i=1}^{3}\sum _{j>i}^{3}P(A_i \cap A_j)+ P(A_1 \cap A_2 \cap A_3). \end{aligned}$$

When matching with any one of the three, the other two are free to take any of the 16 possible values; hence, out of \(16\times 16\times 16\) possibilities, \(16\times 16\) are favourable, so \(P(A_i) = \frac{256}{4096}\). When matching with any two of the three simultaneously, the third one is free to take any of the 16 possible values; out of \(16\times 16\times 16\) possible cases, 16 are favourable, so \(P(A_i \cap A_j) = \frac{16}{4096}\). As there is only one way in which all three match, \(P(A_1 \cap A_2 \cap A_3) = \frac{1}{4096}\). Using these values, we can calculate

$$\begin{aligned} p_3 = 3 \times \frac{256}{4096} - 3 \times \frac{16}{4096} + \frac{1}{4096} = \frac{721}{4096}. \end{aligned}$$
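One can check that a false negative due to refinement is avoided only when each of the \(q\) previously removed line numbers differs from the one accessed for the current nibble, so \(p_q = 1 - (15/16)^q\), consistent with the values above. The following brute-force check (ours, in Python; not part of the original derivation) confirms this:

    from fractions import Fraction
    from itertools import product

    def p_refine_enum(q, lines=16):
        """Probability that the current line number (fixed to 0, w.l.o.g.)
        collides with at least one of the q previously removed line numbers,
        each uniform over the 16 possible values."""
        hits = sum(1 for prev in product(range(lines), repeat=q) if 0 in prev)
        return Fraction(hits, lines ** q)

    assert p_refine_enum(1) == Fraction(1, 16)
    assert p_refine_enum(2) == Fraction(31, 256)
    assert p_refine_enum(3) == Fraction(721, 4096)   # matches the value above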

As explained in section 4.3, the probability of recovering the first nibble among \(k^{'}_{4m}, 0\le m\le 3\), is

$$\begin{aligned} 1-[1-P_{h}(p_c,p_{in}, 2^4, \delta )]^{4}. \end{aligned}$$

Hence, the probability of retrieving a nibble after q refinements is

$$\begin{aligned} 1-[1-{\mathcal {P}}_{h}(f_q, \mid G \mid _{q}, 2^4, \delta )]^{4-q}. \end{aligned}$$

where \(\begin{aligned} {\mathcal {P}}_{h}(f_q, \mid G \mid _{q}, 2^4, \delta ) = P_{h}(p_c, p_{in}, 2^4, \delta ). \end{aligned}\)

Hence, the probability of retrieving all four nibbles \(k^{'}_{4m}, 0 \le m\le 3\), is

$$\begin{aligned} \prod _{q=0}^{3} \left[ 1-[1-{\mathcal {P}}_{h}(f_q, \mid G \mid _{q}, 2^4, \delta )]^{4-q} \right] . \end{aligned}$$

This is also the probability of correctly retrieving all four nibbles \(k^{'}_{t+4m}, 0 \le m\le 3\), for a given \(t, 0 \le t\le 3\). Hence, the overall probability of retrieving all 16 high-order nibbles is

$$\begin{aligned} \left\{ \ \prod _{q=0}^{3} \left[ 1-[1-{\mathcal {P}}_{h}(f_q, \mid G \mid _{q}, 2^4, \delta )]^{4-q} \right] \right\} ^4. \end{aligned}$$
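These recurrences translate directly into a short computation. The following Python sketch (ours, not the authors' code) evaluates the overall retrieval probability; \(P_{h}\), the histogram success probability used in the formulas above, is defined elsewhere in the paper and is assumed here to be supplied by the caller:

    def p_refine(q):
        # False negative probability due to q refinements: p_q = 1 - (15/16)^q,
        # which reproduces p_0 = 0, p_1 = 1/16, p_2 = 31/256, p_3 = 721/4096.
        return 1.0 - (15.0 / 16.0) ** q

    def overall_retrieval_probability(f, G0, P_h, delta):
        """Probability of recovering all 16 high-order nibbles (Theorem 1).

        f     -- false negative rate due to spy input and preprocessing
        G0    -- initial average cardinality |G|_0 of the guesstimate set
        P_h   -- assumed callable P_h(f_q, G_q, n, delta), defined in the paper
        delta -- number of blocks of ciphertext (decryptions) used
        """
        prob_four_nibbles = 1.0
        G_q = float(G0)
        for q in range(4):
            f_q = f + p_refine(q) - f * p_refine(q)                    # eq. (A4)
            prob_four_nibbles *= 1.0 - (1.0 - P_h(f_q, G_q, 16, delta)) ** (4 - q)
            G_q -= 1.0 - f_q                                           # eq. (A3)
        return prob_four_nibbles ** 4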

Appendix II: AES equations and T-table usage

Deriving equations

Input to Round 1 of decryption is

$$\begin{aligned} \left( \begin{array}{llll} c_0 \oplus k_0 &{} c_4 \oplus k_4 &{} c_8 \oplus k_8 &{} c_{12} \oplus k_{12}\\ c_1 \oplus k_1 &{} c_5 \oplus k_5 &{} c_9 \oplus k_9 &{} c_{13} \oplus k_{13}\\ c_2 \oplus k_2 &{} c_6 \oplus k_6 &{} c_{10} \oplus k_{10} &{} c_{14} \oplus k_{14}\\ c_3 \oplus k_3 &{} c_7 \oplus k_7 &{} c_{11} \oplus k_{11} &{} c_{15} \oplus k_{15} \end{array} \right) \end{aligned}$$

where \(C = (c_{0}, c_{1}, \ldots , c_{15})\) and \(K = (k_{0}, k_{1}, \ldots , k_{15})\), respectively, denote the ciphertext and the tenth round key (in terms of the key scheduling algorithm used for encryption; in implementations for storage-constrained environments, this key is stored and the other round keys are generated on the fly). After the inverse byte substitution and inverse row shift operations, the input transforms to

$$\begin{aligned} \left( {\begin{array}{llll} s^{-1}(c_0 \oplus k_0)&{}s^{-1}(c_4 \oplus k_4)&{}s^{-1}(c_8 \oplus k_8)&{}s^{-1}(c_{12} \oplus k_{12})\\ s^{-1}(c_{13} \oplus k_{13})&{}s^{-1}(c_1 \oplus k_1)&{}s^{-1}(c_5 \oplus k_5)&{}s^{-1}(c_9 \oplus k_9)\\ s^{-1}(c_{10} \oplus k_{10})&{}s^{-1}(c_{14} \oplus k_{14})&{}s^{-1}(c_2 \oplus k_2)&{}s^{-1}(c_6 \oplus k_6)\\ s^{-1}(c_7 \oplus k_7)&{}s^{-1}(c_{11} \oplus k_{11})&{}s^{-1}(c_{15} \oplus k_{15})&{}s^{-1}(c_3 \oplus k_3) \end{array}} \right) . \end{aligned}$$

For keys generated using the key scheduling algorithm for encryption, round key addition should be performed first, followed by inverse column mixing. To give decryption a structure similar to that of encryption, the round key addition and inverse column mixing steps are interchanged, but this requires that the round key be suitably transformed. Here, we first perform round key addition and then inverse column mixing.

Let \(W_{36}, W_{37}, W_{38}\) and \(W_{39}\) denote the 4 words (1 word = 4 bytes) of the \(9^{\mathrm{th}}\) round key in the encryption procedure, and let \(W_{40}, W_{41}, W_{42}\) and \(W_{43}\) denote the 4 words of the \(10^{\mathrm{th}}\) round key. According to the key scheduling algorithm for encryption, these words are related as follows:

$$\begin{aligned} W_{40}= & {} W_{36} \oplus f(W_{39}), \\ W_{41}= & {} W_{37} \oplus W_{40}, \\ W_{42}= & {} W_{38} \oplus W_{41}, \\ W_{43}= & {} W_{39} \oplus W_{42}. \end{aligned}$$

These equations are used to obtain the \(10^{\mathrm{th}}\) round key from the \(9^{\mathrm{th}}\) round key during encryption. In these equations, \(f(W)\) is obtained by first performing one left cyclic rotation of the bytes of word \(W\), then applying the S-box to each byte, and finally XORing with a round-dependent constant. We can manipulate these equations to obtain the \(9^{\mathrm{th}}\) round key, given the \(10^{\mathrm{th}}\) round key:

$$\begin{aligned} W_{36}= & {} W_{40} \oplus f(W_{39}), \\ W_{37}= & {} W_{40} \oplus W_{41}, \\ W_{38}= & {} W_{41} \oplus W_{42}, \\ W_{39}= & {} W_{42} \oplus W_{43}. \end{aligned}$$

The equation to obtain \(W_{36}\) can be re-written as

$$\begin{aligned} W_{36} = W_{40} \oplus f(W_{42} \oplus W_{43}), \end{aligned}$$

using the equation for \(W_{39}\). Writing the \(10^{\mathrm{th}}\) round key in both notations, we have

$$\begin{aligned} \begin{pmatrix}W_{40}&W_{41}&W_{42}&W_{43}\end{pmatrix}=\begin{pmatrix} k_0 &{} k_4 &{} k_8 &{} k_{12}\\ k_1 &{} k_5 &{} k_9 &{} k_{13}\\ k_2 &{} k_6 &{} k_{10} &{} k_{14}\\ k_3 &{} k_7 &{} k_{11} &{} k_{15} \end{pmatrix}. \end{aligned}$$

According to the definition of f(W)

$$\begin{aligned} f(W_{42} \oplus W_{43}) = \begin{pmatrix} s(k_9 \oplus k_{13}) \oplus 36\\ s(k_{10} \oplus k_{14})\\ s(k_{11} \oplus k_{15})\\ s(k_8 \oplus k_{12}) \end{pmatrix} \end{aligned}$$

where 36 (in hexadecimal) is the round-dependent constant. In any round-dependent constant, the last three bytes are all zeros. Hence, the \(9^{\mathrm{th}}\) round key of encryption is

$$\begin{aligned} \begin{pmatrix} k_0 \oplus s(k_9 \oplus k_{13}) \oplus 36 &{} k_0 \oplus k_4 &{} k_4 \oplus k_8 &{} k_8 \oplus k_{12}\\ k_1 \oplus s(k_{10} \oplus k_{14}) &{} k_1 \oplus k_5 &{} k_5 \oplus k_9 &{} k_9 \oplus k_{13}\\ k_2 \oplus s(k_{11} \oplus k_{15}) &{} k_2 \oplus k_6 &{} k_6 \oplus k_{10} &{} k_{10} \oplus k_{14}\\ k_3 \oplus s(k_8 \oplus k_{12}) &{} k_3 \oplus k_7 &{} k_7 \oplus k_{11} &{} k_{11} \oplus k_{15} \end{pmatrix}. \end{aligned}$$

We XOR this matrix with the result obtained after the inverse byte substitution and inverse row shift operations. Next, we perform inverse column mixing to obtain the output of the first round of decryption:

$$\begin{aligned} B^{-1} = \begin{pmatrix} 0e&{}0b&{}0d&{}09\\ 09&{}0e&{}0b&{}0d\\ 0d&{}09&{}0e&{}0b\\ 0b&{}0d&{}09&{}0e \end{pmatrix}. \end{aligned}$$

Hence, the last step before obtaining the output of the first round of decryption is pre-multiplication by this matrix \(B^{-1}\).
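As an illustration (a minimal sketch of ours in Python, not the authors' implementation), the transformed \(9^{\mathrm{th}}\) round key can be computed from the \(10^{\mathrm{th}}\) round key bytes \(k_0, \ldots, k_{15}\), in the column order used above, exactly as derived; the S-box is rebuilt from its definition so that the sketch is self-contained:

    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            hi = a & 0x80
            a = (a << 1) & 0xFF
            if hi:
                a ^= 0x1B
            b >>= 1
        return r

    def sbox(a):
        """AES forward S-box: inversion in GF(2^8) followed by the affine map."""
        inv = 0 if a == 0 else next(x for x in range(256) if gf_mul(a, x) == 1)
        res = 0x63
        for i in range(5):      # inv XOR its left rotations by 1..4 bits, plus 0x63
            res ^= ((inv << i) | (inv >> (8 - i))) & 0xFF
        return res

    def ninth_round_key(k):
        """Columns W36..W39 of the 9th round key from the 10th round key k[0..15]."""
        col0 = [k[0] ^ sbox(k[9] ^ k[13]) ^ 0x36,   # 0x36 = round constant for round 10
                k[1] ^ sbox(k[10] ^ k[14]),
                k[2] ^ sbox(k[11] ^ k[15]),
                k[3] ^ sbox(k[8] ^ k[12])]
        col1 = [k[i] ^ k[i + 4] for i in range(4)]
        col2 = [k[i + 4] ^ k[i + 8] for i in range(4)]
        col3 = [k[i + 8] ^ k[i + 12] for i in range(4)]
        return [col0, col1, col2, col3]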

Appendix II.1: Explaining T-tables

Each \(T_t, 0 \le t \le 3\), takes one byte as input and returns 4 output bytes. The output of table \(T_t\) is the \(t^{\mathrm{th}}\) column of \(B^{-1}\) with each entry multiplied by the inverse S-box applied to the table input. Hence

$$\begin{aligned} T_0[\alpha ]&= (0e \bullet s^{-1}(\alpha ), 09 \bullet s^{-1}(\alpha ), 0d \bullet s^{-1}(\alpha ), 0b \bullet s^{-1}(\alpha )), \\ T_1[\alpha ]&= (0b \bullet s^{-1}(\alpha ), 0e \bullet s^{-1}(\alpha ), 09 \bullet s^{-1}(\alpha ), 0d \bullet s^{-1}(\alpha )), \\ T_2[\alpha ]&= (0d \bullet s^{-1}(\alpha ), 0b \bullet s^{-1}(\alpha ), 0e \bullet s^{-1}(\alpha ), 09 \bullet s^{-1}(\alpha )), \\ T_3[\alpha ]&= (09 \bullet s^{-1}(\alpha ), 0d \bullet s^{-1}(\alpha ), 0b \bullet s^{-1}(\alpha ), 0e \bullet s^{-1}(\alpha )). \end{aligned}$$

The total number of possible inputs is \(2^8\). A table stores 4 bytes at each index position, so the size of each table is \(4 \times 2^8 = 2^{10}\) bytes = 1 KB.
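For illustration, the four tables can be generated directly from this description (a Python sketch of ours; gf_mul and sbox are repeated from the previous sketch so that this block is self-contained):

    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) modulo the AES polynomial."""
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            hi = a & 0x80
            a = (a << 1) & 0xFF
            if hi:
                a ^= 0x1B
            b >>= 1
        return r

    def sbox(a):
        """AES forward S-box (GF(2^8) inversion followed by the affine map)."""
        inv = 0 if a == 0 else next(x for x in range(256) if gf_mul(a, x) == 1)
        res = 0x63
        for i in range(5):
            res ^= ((inv << i) | (inv >> (8 - i))) & 0xFF
        return res

    INV_SBOX = {sbox(x): x for x in range(256)}          # inverse S-box s^{-1}

    # Columns 0..3 of B^{-1}; T_t multiplies s^{-1}(alpha) by each entry of column t.
    B_INV_COLUMNS = [(0x0E, 0x09, 0x0D, 0x0B),
                     (0x0B, 0x0E, 0x09, 0x0D),
                     (0x0D, 0x0B, 0x0E, 0x09),
                     (0x09, 0x0D, 0x0B, 0x0E)]

    T_TABLES = [[tuple(gf_mul(c, INV_SBOX[alpha]) for c in col)
                 for alpha in range(256)]                # 256 entries x 4 bytes = 1 KB
                for col in B_INV_COLUMNS]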


Cite this article

Ashokkumar, C., Venkatesh, M.B.S., Giri, R.P. et al. An error-tolerant approach for efficient AES key retrieval in the presence of cache prefetching – experiments, results, analysis. Sādhanā 44, 88 (2019). https://doi.org/10.1007/s12046-019-1070-8
