Artificial Neural Networks and Fault Injection Attacks

  • Chapter in Security and Artificial Intelligence
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13049)

Abstract

This chapter addresses the security assessment of artificial intelligence (AI) and neural network (NN) accelerators in the face of fault injection attacks. More specifically, it discusses the assets on these platforms and compares them with those known and well-studied in the field of cryptography, a crucial step in defining the threat models precisely. On that basis, fault attacks mounted on NNs and AI accelerators are explored.

Notes

  1. Note that fault tolerance is not an intrinsic feature of NNs; it must be deliberately designed into the models [39].

  2. In an independent study, conducted in parallel with the one introducing RAM-Jam [2], Hong et al. proposed their attack [15] (see the sketch below).
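
Why a single flipped bit can be so damaging follows directly from the IEEE-754 encoding of network weights: flipping a high-order exponent bit of a 32-bit floating-point weight changes its magnitude by tens of orders of magnitude, which is the effect exploited by bit-flip attacks such as [15, 31]. The following minimal Python sketch (illustrative only, not taken from the chapter) shows this on a single float32 value:

    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit in the IEEE-754 float32 encoding of `value`."""
        (bits,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
        return flipped

    weight = 0.75
    # Bit 30 is the most significant exponent bit of a float32.
    print(flip_bit(weight, 30))  # 0.75 becomes roughly 2.6e+38

A single such fault injected into a stored weight can therefore dominate the affected neuron's output, which is why even one well-placed bit flip can degrade a model's accuracy dramatically [15].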

References

  1. Alam, M., et al.: Enhancing fault tolerance of neural networks for security-critical applications. arXiv preprint arXiv:1902.04560 (2019)

  2. Alam, M.M., Tajik, S., Ganji, F., Tehranipoor, M., Forte, D.: RAM-jam: remote temperature and voltage fault attack on FPGAs using memory collisions. In: 2019 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), pp. 48–55. IEEE (2019)

  3. Bar-El, H., Choukri, H., Naccache, D., Tunstall, M., Whelan, C.: The sorcerer’s apprentice guide to fault attacks. Proc. IEEE 94(2), 370–382 (2006)

  4. Batina, L., Bhasin, S., Jap, D., Picek, S.: CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In: 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532 (2019)

  5. Bolt, G.: Fault models for artificial neural networks. In: [Proceedings] 1991 IEEE International Joint Conference on Neural Networks, pp. 1373–1378. IEEE (1991)

  6. Breier, J., Hou, X., Jap, D., Ma, L., Bhasin, S., Liu, Y.: Practical fault attack on deep neural networks. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 2204–2206. ACM (2018)

  7. Breier, J., Jap, D., Hou, X., Bhasin, S., Liu, Y.: SNIFF: reverse engineering of neural networks with fault attacks. arXiv preprint arXiv:2002.11021 (2020)

  8. Chen, T., et al.: DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning. ACM SIGARCH Comput. Archit. News 42(1), 269–284 (2014)

  9. Chen, Y.H., Emer, J., Sze, V.: Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks. ACM SIGARCH Comput. Archit. News 44(3), 367–379 (2016)

  10. Chi, P., et al.: PRIME: a novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. ACM SIGARCH Comput. Archit. News 44(3), 27–39 (2016)

  11. Dubey, A., Cammarota, R., Aysu, A.: MaskedNet: the first hardware inference engine aiming power side-channel protection. In: 2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pp. 197–208. IEEE (2020)

  12. Gnad, D.R., Oboril, F., Tahoori, M.B.: Voltage drop-based fault attacks on FPGAs using valid bitstreams. In: 27th International Conference on Field Programmable Logic and Applications, pp. 1–7. IEEE (2017)

  13. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  14. Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M.A., Dally, W.J.: EIE: efficient inference engine on compressed deep neural network. ACM SIGARCH Comput. Archit. News 44(3), 243–254 (2016)

  15. Hong, S., Frigo, P., Kaya, Y., Giuffrida, C., Dumitraş, T.: Terminal brain damage: exposing the graceless degradation in deep neural networks under hardware fault attacks. In: 28th USENIX Security Symposium (USENIX Security 2019), pp. 497–514 (2019)

  16. Hou, X., Breier, J., Jap, D., Ma, L., Bhasin, S., Liu, Y.: Security evaluation of deep neural network resistance against laser fault injection. In: 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), pp. 1–6. IEEE (2020)

  17. Hua, W., Zhang, Z., Suh, G.E.: Reverse engineering convolutional neural networks through side-channel information leaks. In: 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), pp. 1–6. IEEE (2018)

  18. Jouppi, N.: Google supercharges machine learning tasks with TPU custom chip. Google Blog, 18 May 2016

  19. Wiggers, K.: Apple's A12 Bionic chip runs Core ML apps up to 9 times faster. VentureBeat (2018). https://venturebeat.com/2018/09/12/apples-a12-bionic-chip-run-core-ml-apps-up-to-9-times-faster/

  20. Kenjar, Z., Frassetto, T., Gens, D., Franz, M., Sadeghi, A.R.: V0LTpwn: attacking x86 processor integrity from software. In: 29th USENIX Security Symposium (USENIX Security 2020) (2020)

  21. Kerckhoffs, A.: La cryptographie militaire, ou, Des chiffres usités en temps de guerre: avec un nouveau procédé de déchiffrement applicable aux systèmes à double clef. Librairie militaire de L. Baudoin (1883)

  22. Kwon, H., Samajdar, A., Krishna, T.: MAERI: enabling flexible dataflow mapping over DNN accelerators via reconfigurable interconnects. ACM SIGPLAN Not. 53(2), 461–475 (2018)

  23. Liu, W., Chang, C.H., Zhang, F.: Stealthy and robust glitch injection attack on deep learning accelerator for target with variational viewpoint. IEEE Trans. Inf. Forensics Secur. 16, 1928–1942 (2020)

  24. Liu, W., Chang, C.H., Zhang, F., Lou, X.: Imperceptible misclassification attack on deep learning accelerator by glitch injection. In: 2020 57th ACM/IEEE Design Automation Conference (DAC), pp. 1–6. IEEE (2020)

  25. Liu, Y., Wei, L., Luo, B., Xu, Q.: Fault injection attack on deep neural network. In: IEEE/ACM International Conference on Computer-Aided Design, pp. 131–138 (2017)

  26. Mahdiani, H.R., Fakhraie, S.M., Lucas, C.: Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors. IEEE Trans. Neural Netw. Learn. Syst. 23(8), 1215–1228 (2012)

  27. Mehrotra, K., Mohan, C.K., Ranka, S., Chiu, C.T.: Fault tolerance of neural networks. Technical report, Syracuse Univ NY School of Computer and Information Science (1994)

  28. Apple Newsroom: The future is here: iPhone X (2017). https://www.apple.com/newsroom/2017/09/the-future-is-here-iphone-x/. Accessed 8 Mar 2022

  29. Provelengios, G., Holcomb, D., Tessier, R.: Characterizing power distribution attacks in multi-user FPGA environments. In: 2019 29th International Conference on Field Programmable Logic and Applications (FPL), pp. 194–201. IEEE (2019)

  30. Provelengios, G., Holcomb, D., Tessier, R.: Power wasting circuits for cloud FPGA attacks. In: 2020 30th International Conference on Field Programmable Logic and Applications (FPL) (2020)

  31. Rakin, A.S., He, Z., Fan, D.: Bit-flip attack: crushing neural network with progressive bit search. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1211–1220 (2019)

  32. Rakin, A.S., He, Z., Fan, D.: TBT: targeted neural network attack with bit trojan. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13198–13207 (2020)

  33. Autopilot Review: Tesla hardware 3 (full self-driving computer) detailed (2019). https://www.autopilotreview.com/tesla-custom-ai-chips-hardware-3/. Accessed 8 Mar 2022

  34. Salami, B., et al.: An experimental study of reduced-voltage operation in modern FPGAs for neural network acceleration. In: 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 138–149. IEEE (2020)

  35. Salami, B., Unsal, O.S., Kestelman, A.C.: On the resilience of RTL NN accelerators: fault characterization and mitigation. In: 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), pp. 322–329. IEEE (2018)

  36. Sequin, C.H., Clay, R.: Fault tolerance in artificial neural networks. In: 1990 IJCNN International Joint Conference on Neural Networks, pp. 703–708. IEEE (1990)

  37. Shannon, C.E.: Communication theory of secrecy systems. Bell Syst. Tech. J. 28(4), 656–715 (1949)

  38. Tang, A., Sethumadhavan, S., Stolfo, S.: CLKSCREW: exposing the perils of security-oblivious energy management. In: 26th USENIX Security Symposium (USENIX Security 2017), pp. 1057–1074 (2017)

  39. Torres-Huitzil, C., Girau, B.: Fault and error tolerance in neural networks: a review. IEEE Access 5, 17322–17341 (2017)

  40. Wei, L., Luo, B., Li, Y., Liu, Y., Xu, Q.: I know what you see: power side-channel attack on convolutional neural network accelerators. In: Proceedings of the 34th Annual Computer Security Applications Conference, pp. 393–406 (2018)

  41. Yan, M., Fletcher, C.W., Torrellas, J.: Cache telepathy: leveraging shared resource attacks to learn DNN architectures. In: 29th USENIX Security Symposium (USENIX Security 2020), pp. 2003–2020 (2020)

  42. Zhang, J., Rangineni, K., Ghodsi, Z., Garg, S.: Thundervolt: enabling aggressive voltage underscaling and timing error resilience for energy efficient deep learning accelerators. In: Proceedings of the 55th Annual Design Automation Conference, pp. 1–6 (2018)

  43. Zhao, P., Wang, S., Gongye, C., Wang, Y., Fei, Y., Lin, X.: Fault sneaking attack: a stealthy framework for misleading deep neural networks. In: 2019 56th ACM/IEEE Design Automation Conference (DAC), pp. 1–6. IEEE (2019)

Author information

Corresponding author: Shahin Tajik.

Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Tajik, S., Ganji, F. (2022). Artificial Neural Networks and Fault Injection Attacks. In: Batina, L., Bäck, T., Buhan, I., Picek, S. (eds) Security and Artificial Intelligence. Lecture Notes in Computer Science, vol 13049. Springer, Cham. https://doi.org/10.1007/978-3-030-98795-4_4

  • DOI: https://doi.org/10.1007/978-3-030-98795-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98794-7

  • Online ISBN: 978-3-030-98795-4

  • eBook Packages: Computer Science, Computer Science (R0)
