Abstract
With the rise of adversarial machine learning and increasingly robust adversarial attacks, the security of applications that rely on machine learning has been called into question. Over the past few years, deep learning applications built on Deep Neural Networks (DNNs) have become commonplace in fields such as medical diagnosis, security systems, and virtual assistants, and have consequently become more exposed and susceptible to attack. In this paper, we present a novel study analyzing weaknesses in the security of deep learning systems. We propose ‘Kryptonite’, an adversarial attack on images. We explicitly extract the Region of Interest (RoI) of each image and use it to add imperceptible adversarial perturbations that fool the DNN. We test our attack on several DNNs and compare our results with state-of-the-art adversarial attacks such as the Fast Gradient Sign Method (FGSM), DeepFool (DF), the Momentum Iterative Fast Gradient Sign Method (MIFGSM), and Projected Gradient Descent (PGD). Our attack causes the largest drop in network accuracy while yielding the smallest perturbation and requiring considerably less time per sample. We thoroughly evaluate our attack against three adversarial defence techniques, and the promising results showcase its efficacy.
Y. Kulkarni and K. Bhambani—Equal Contribution.
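The core idea described above, confining a gradient-sign perturbation to an extracted Region of Interest, can be illustrated with a minimal NumPy sketch. The function name, mask convention, and toy inputs below are assumptions for illustration only; the paper's actual RoI extraction and attack loop are not reproduced here.

```python
import numpy as np

def roi_masked_fgsm(image, grad, roi_mask, epsilon=0.03):
    """FGSM-style perturbation applied only inside a binary RoI mask.

    image    : float array in [0, 1], shape (H, W) or (H, W, C)
    grad     : gradient of the loss w.r.t. the image (same shape)
    roi_mask : binary array, 1 inside the region of interest
    epsilon  : maximum per-pixel perturbation
    """
    # Step in the direction that increases the loss, but only where the
    # mask is set; pixels outside the RoI are left untouched.
    perturbation = epsilon * np.sign(grad) * roi_mask
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy example: a 4x4 "image" whose RoI covers the top-left 2x2 block.
image = np.full((4, 4), 0.5)
grad = np.ones((4, 4))        # stand-in for a real loss gradient
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0

adv = roi_masked_fgsm(image, grad, mask, epsilon=0.1)
```

In this sketch the perturbation budget is spent entirely inside the RoI, which is what keeps the visible change small relative to a whole-image attack with the same epsilon.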
References
Carrasco, M.: Visual attention: the past 25 years. Vis. Res. 51(13), 1484–1525 (2011)
Ma, X., Niu, Y., Gu, L., Wang, Y., Zhao, Y., et al.: Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recogn. 110, 107332 (2021)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)
Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. In: International Conference on Learning Representations (2017)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., et al.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations (2018)
Dong, Y., Liao, F., Pang, T., Su, H., et al.: Boosting adversarial attacks with momentum. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)
Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2574–2582 (2016)
Paul, R., Schabath, M., Gillies, R., Hall, L., Goldgof, D.: Mitigating adversarial attacks on medical image understanding systems. In: 2020 IEEE 17th International Symposium on Biomedical Imaging, pp. 1517–1521 (2020)
Li, X., Zhu, D.: Robust detection of adversarial attacks on medical images. In: 2020 IEEE 17th International Symposium on Biomedical Imaging, pp. 1154–1158 (2020)
Carlini, N., Wagner, D.: Adversarial examples are not easily detected: bypassing ten detection methods. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14 (2017)
Tian, B., Guo, Q., Juefei-Xu, F., Le Chan, W., et al.: Bias field poses a threat to DNN-based X-Ray recognition. arXiv preprint arXiv:2009.09247 (2020)
Yao, Z., Gholami, A., Xu, P., Keutzer, K., Mahoney, M.W.: Trust region based adversarial attack on neural networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11342–11351 (2019)
Erichson, N.B., Yao, Z., Mahoney, M.W.: JumpReLU: a retrofit defense strategy for adversarial attacks. In: Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods, pp. 103–114 (2020)
Göpfert, J.P., Artelt, A., Wersing, H., Hammer, B.: Adversarial attacks hidden in plain sight. In: Advances in Intelligent Data Analysis (2020)
Rotemberg, V., Kurtansky, N., Betz-Stablein, B., Caffery, L., et al.: A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 8, 1–34 (2021)
Tan, M., Le, Q.V.: EfficientNet: rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning (2019)
Xie, S., Girshick, R., Dollár, P., Tu, Z., et al.: Aggregated residual transformations for deep neural networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5987–5995 (2017)
https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy, pp. 39–57 (2017)
Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Suzuki, S., Abe, K.: Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985)
Geis, T.: Using computer vision to play super hexagon (2016)
Papernot, N., Faghri, F., Carlini, N., Goodfellow, I., Feinman, R., et al.: Technical report on the CleverHans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768 (2016)
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., et al.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE International Conference on Computer Vision, pp. 618–626 (2017)
Prakash, A., Moran, N., Garber, S., DiLillo, A., Storer, J.: Deflecting adversarial attacks with pixel deflection. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8571–8580 (2018)
Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: IEEE Symposium on Security and Privacy, pp. 582–597 (2016)
Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In: International Conference on Machine Learning, pp. 274–283, PMLR (2018)
Carlini, N., Wagner, D.: Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311 (2016)
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Kulkarni, Y., Bhambani, K. (2021). Kryptonite: An Adversarial Attack Using Regional Focus. In: Zhou, J., et al. (eds.) Applied Cryptography and Network Security Workshops. ACNS 2021. Lecture Notes in Computer Science, vol. 12809. Springer, Cham. https://doi.org/10.1007/978-3-030-81645-2_26
Print ISBN: 978-3-030-81644-5
Online ISBN: 978-3-030-81645-2
eBook Packages: Computer Science (R0)