Mitigating Targeted Bit-Flip Attacks via Data Augmentation: An Empirical Study

Conference paper
Knowledge Science, Engineering and Management (KSEM 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13370)

Abstract

As deep neural networks (DNNs) are increasingly deployed in safety-critical applications, securing them has become an urgent and important task. Recently, a critical security issue has been identified: DNN models are vulnerable to targeted bit-flip attacks. These sophisticated attacks inject backdoors into a model by flipping only a few bits of carefully chosen model parameters. In this paper, we present an empirical study of a gradient obfuscation-based data augmentation method for mitigating such targeted bit-flip attacks. Specifically, we preprocess only the input samples, breaking the link between the trigger features carried by the inputs and the modified model parameters. Moreover, our method maintains acceptable accuracy on benign samples. Experiments on two widely used architectures (ResNet-20 and VGG-16) with the CIFAR-10 dataset show that our method is effective against two targeted bit-flip attacks.
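To make the preprocessing idea concrete, below is a minimal sketch of an inference-time input-transformation defense in this spirit. The specific transformations (a small random shift plus JPEG re-encoding) and the function name `preprocess` are illustrative assumptions, not the authors' exact pipeline; the point is that randomized, lossy preprocessing can misalign a pixel-level trigger with the bit-flipped parameters while only mildly perturbing benign inputs.

```python
import io
import random

import numpy as np
from PIL import Image


def preprocess(image: np.ndarray, quality: int = 75, max_shift: int = 3) -> np.ndarray:
    """Randomize and re-encode a CIFAR-10 style HxWx3 uint8 image.

    Hypothetical defense sketch: these transforms stand in for the paper's
    augmentation pipeline and are assumptions made for illustration.
    """
    # Random translation: disturbs the spatial alignment of a localized trigger.
    dx = random.randint(-max_shift, max_shift)
    dy = random.randint(-max_shift, max_shift)
    shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))

    # JPEG re-encoding: a lossy, non-differentiable step that obfuscates
    # gradients and suppresses high-frequency trigger artifacts.
    buf = io.BytesIO()
    Image.fromarray(shifted).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf).convert("RGB"))


# Usage: every sample, benign or potentially triggered, is passed through
# preprocess() before being fed to the possibly bit-flipped model.
```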

Author information

Correspondence to Han Qiu.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, Z., Wang, M., Chen, W., Qiu, H., Qiu, M. (2022). Mitigating Targeted Bit-Flip Attacks via Data Augmentation: An Empirical Study. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science, vol. 13370. Springer, Cham. https://doi.org/10.1007/978-3-031-10989-8_48

  • DOI: https://doi.org/10.1007/978-3-031-10989-8_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10988-1

  • Online ISBN: 978-3-031-10989-8

  • eBook Packages: Computer Science, Computer Science (R0)
