Kryptonite: An Adversarial Attack Using Regional Focus

  • Conference paper
  • In: Applied Cryptography and Network Security Workshops (ACNS 2021)

Abstract

With the rise of adversarial machine learning and increasingly robust adversarial attacks, the security of applications that rely on machine learning has been called into question. Over the past few years, deep learning applications built on deep neural networks (DNNs) have become commonplace in fields such as medical diagnosis, security systems, and virtual assistants, and have consequently become more exposed and susceptible to attack. In this paper, we present a novel study analyzing weaknesses in the security of deep learning systems. We propose ‘Kryptonite’, an adversarial attack on images. We explicitly extract the Region of Interest (RoI) of each image and use it to add imperceptible adversarial perturbations that fool the DNN. We test our attack on several DNNs and compare our results with state-of-the-art adversarial attacks such as the Fast Gradient Sign Method (FGSM), DeepFool (DF), the Momentum Iterative Fast Gradient Sign Method (MIFGSM), and Projected Gradient Descent (PGD). Our attack causes the largest drop in network accuracy while yielding the smallest perturbation and taking considerably less time per sample. We thoroughly evaluate our attack against three adversarial defence techniques, and the promising results showcase its efficacy.

Y. Kulkarni and K. Bhambani contributed equally.
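To make the idea in the abstract concrete: the attack first isolates the Region of Interest and then confines the adversarial perturbation to that region. The sketch below is an illustration of that general recipe, not the authors' Kryptonite implementation. It builds a binary RoI mask with Otsu thresholding and contour extraction (one plausible choice; the paper's exact RoI pipeline may differ) and then applies a single FGSM-style step masked to the RoI. All function names, the epsilon value, and the choice of OpenCV/PyTorch are assumptions made for illustration.

```python
# Hypothetical sketch of an RoI-masked adversarial perturbation.
# Not the authors' Kryptonite implementation; illustrative only.
import cv2
import numpy as np
import torch
import torch.nn.functional as F


def roi_mask(image_gray: np.ndarray) -> np.ndarray:
    """Binary RoI mask via Otsu thresholding plus the largest contour.

    `image_gray` is a single-channel uint8 image of shape (H, W).
    Returns a float32 mask of shape (H, W) with values in {0, 1}.
    """
    _, binary = cv2.threshold(
        image_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    contours, _ = cv2.findContours(
        binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    mask = np.zeros_like(image_gray)
    if contours:
        # Keep only the dominant region, e.g. the lesion or tumour area.
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(mask, [largest], -1, color=255, thickness=cv2.FILLED)
    return (mask > 0).astype(np.float32)


def masked_fgsm(model, x, y, mask, eps=0.03):
    """One FGSM-style step restricted to the RoI.

    x: (1, C, H, W) input in [0, 1]; y: (1,) label tensor;
    mask: (H, W) float mask, broadcast over the channel dimension.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    m = torch.from_numpy(mask)[None, None, :, :]   # (1, 1, H, W)
    x_adv = x + eps * x.grad.sign() * m            # perturb RoI pixels only
    return x_adv.clamp(0.0, 1.0).detach()
```

Restricting the sign-gradient step to the mask leaves background pixels untouched, so the perturbation budget is concentrated on the region the classifier actually attends to; this is the intuition behind achieving a large accuracy drop with a small overall perturbation.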



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kulkarni, Y., Bhambani, K. (2021). Kryptonite: An Adversarial Attack Using Regional Focus. In: Zhou, J., et al. (eds.) Applied Cryptography and Network Security Workshops. ACNS 2021. Lecture Notes in Computer Science, vol. 12809. Springer, Cham. https://doi.org/10.1007/978-3-030-81645-2_26

  • DOI: https://doi.org/10.1007/978-3-030-81645-2_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-81644-5

  • Online ISBN: 978-3-030-81645-2

  • eBook Packages: Computer Science (R0)
