Physical Attack Countermeasures for Reconfigurable Cryptographic Processors

Chapter

Abstract

Physical attack countermeasures for reconfigurable cryptographic processors can be developed in two main ways. One is to apply existing universal countermeasures to the reconfigurable architecture; the other is to develop new countermeasures that exploit the characteristics of reconfigurable computing. Traditional universal countermeasures do not take full advantage of these characteristics and incur significant performance, area, and power overhead. In addition, new and threatening attack methods continue to emerge, such as local electromagnetic attacks with gate-level precision, multiple-fault attacks that inject more than one fault in a single execution, and attacks based on ultra-low-frequency (e.g., kHz-level) acoustic or electromagnetic signals. Existing traditional countermeasures cannot effectively resist these attacks. Compared with the direct application of traditional countermeasures, countermeasures designed around the characteristics of the reconfigurable cryptographic architecture can, through resource reuse, substantially reduce the performance, area, and power overhead incurred by the security improvement. Moreover, such new countermeasures are expected to resist novel attack methods that have not yet been effectively addressed. On the one hand, the dynamic and partial reconfiguration feature can be fully exploited to build countermeasures based on temporal and spatial randomization. When each execution of the cryptographic algorithm takes place at a different time and in a different circuit region of the array, attacks of various precision levels lose their effect. It is as if, whenever an attacker tries to reach the backdoor of the cryptographic implementation, the randomization keeps the position of that backdoor changing rapidly, so that the attack remains difficult even for an attacker who holds the key to the backdoor.
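The temporal and spatial randomization idea can be illustrated with a minimal sketch. All names here (array dimensions, region size, the `run_cipher` wrapper) are hypothetical and stand in for the hardware configuration calls of an actual reconfigurable array; the point is only that each run draws a fresh, cryptographically random placement and start delay, so a localized or time-aligned attack cannot predict where or when the sensitive computation occurs.

```python
import secrets

ARRAY_ROWS, ARRAY_COLS = 8, 8   # hypothetical processing-element array size
REGION_SIZE = 4                 # hypothetical footprint of one cipher mapping
MAX_DELAY = 16                  # hypothetical bound on random dummy cycles

def randomized_placement():
    """Draw a random sub-region of the array and a random start delay."""
    row = secrets.randbelow(ARRAY_ROWS - REGION_SIZE + 1)
    col = secrets.randbelow(ARRAY_COLS - REGION_SIZE + 1)
    delay = secrets.randbelow(MAX_DELAY)
    return (row, col), delay

def run_cipher(block):
    """Hypothetical wrapper: re-randomize placement and timing per execution.

    On real hardware this is where the partial-reconfiguration controller
    would map the cipher onto the chosen region and insert the dummy cycles;
    here we just return the chosen randomization parameters.
    """
    region, delay = randomized_placement()
    return region, delay
```

Because `secrets` draws from the OS entropy source rather than a seeded PRNG, an attacker observing many executions gains no help in predicting the next placement, which is the property the countermeasure relies on.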
On the other hand, we can make full use of the structural advantages of reconfigurable processors and tightly integrate the countermeasure design with the reconfigurable architecture, thereby maximizing the advantages of reconfigurable computing. The abundant array computing units and interconnection resources of reconfigurable cryptographic processors can themselves be used to resist physical attacks, and resource reuse significantly reduces the overhead introduced by the countermeasures. For example, a physically unclonable function (PUF) can be constructed from the array computing units, generating lightweight authentication responses or security keys alongside the basic encryption/decryption operations. The rich interconnection resources of the array can likewise be exploited: by slightly varying topology attributes of the interconnection network and introducing randomness, physical attack countermeasures can be implemented in addition to normal data transmission.
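The PUF idea above can be sketched with a toy software model. A PUF derives a device-unique response from uncontrollable manufacturing variation; the sketch below models that variation as fixed random delay offsets assigned once per "device" (the seed stands in for process variation, and the pairwise comparison mimics a ring-oscillator-style PUF). All of this is an illustrative assumption, not the construction used on any particular processor.

```python
import random

def make_puf(n_bits=16, seed=None):
    """Model one device's PUF: per-bit delay pairs fixed at 'manufacture'.

    Each bit compares two nominally identical delay elements; which one is
    faster is decided by the (simulated) process variation captured in the
    seed, so the same device always answers the same way.
    """
    rng = random.Random(seed)
    pairs = [(rng.gauss(100.0, 1.0), rng.gauss(100.0, 1.0))
             for _ in range(n_bits)]

    def respond():
        # response bit = 1 if the first element of the pair is faster
        return [int(a > b) for a, b in pairs]

    return respond

puf_a = make_puf(seed=1)   # two 'chips' with different process variation
puf_b = make_puf(seed=2)
```

A given device yields a stable response across queries, while two devices built from different variation (different seeds) disagree on roughly half the bits with high probability, which is what makes the response usable for lightweight authentication or key generation.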


Copyright information

© Springer Nature Singapore Pte Ltd. and Science Press, Beijing 2018

Authors and Affiliations

  1. Institute of Microelectronics, Tsinghua University, Beijing, China
