
Towards Isolated AI Accelerators with OP-TEE on SoC-FPGAs

  • Conference paper
Applied Cryptography and Network Security Workshops (ACNS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13285)


Abstract

An artificial intelligence (AI) accelerator is specialized hardware designed to speed up machine learning applications. Such applications may require isolated execution to protect the confidentiality of model information and processed data and the integrity of application tasks. For example, when security-critical applications such as biometrics use machine learning, they must run in a trusted environment, isolated so that they cannot be compromised by other applications. Isolated execution of a machine learning application on an AI accelerator is often achieved with a proprietary hardware architecture consisting of dedicated security circuits for the accelerator. On the other hand, several previous works have proposed using open-source or general-purpose security functions for isolated execution, both to reduce design costs and to apply across various accelerators. This paper proposes an isolated execution method for AI accelerators using OP-TEE, an open-source Trusted Execution Environment (TEE) built on the Arm TrustZone technology. The contributions are to analyze the security threats to AI accelerators, propose a countermeasure based on OP-TEE, and evaluate an implementation of the isolated execution.



Acknowledgments

This work was supported by JST-Mirai Program Grant Number JPMJMI19B6, Japan.

Author information

Correspondence to Tsunato Nakai.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nakai, T., Suzuki, D., Fujino, T. (2022). Towards Isolated AI Accelerators with OP-TEE on SoC-FPGAs. In: Zhou, J., et al. Applied Cryptography and Network Security Workshops. ACNS 2022. Lecture Notes in Computer Science, vol 13285. Springer, Cham. https://doi.org/10.1007/978-3-031-16815-4_12


  • DOI: https://doi.org/10.1007/978-3-031-16815-4_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16814-7

  • Online ISBN: 978-3-031-16815-4

