Abstract
Deep learning models are vulnerable to adversarial examples. Existing black-box attack methods typically perturb the whole example, so the added perturbations can be large and easily noticed by human eyes. This study proposes a black-box adversarial attack based on Local Interpretable Model-agnostic Explanations (LIME), called LIME-Attack, which reduces perturbation size by perturbing only the discriminative regions of an example. First, LIME is used to interpret the black-box model and identify the discriminative regions of an example. Then, the gradient of the example is estimated with a derivative-free optimization method, Natural Evolution Strategies. Using the estimated gradient, two white-box attack methods are adapted to generate perturbations, which are added only to the discriminative regions to form the adversarial example. LIME-Attack is applied to several typical neural network models; experiments show that it achieves a high attack success rate with perturbations 10%–30% smaller than those of the compared methods.
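To make the three-step pipeline in the abstract concrete, here is a minimal Python sketch, not the authors' implementation: query_probs is a hypothetical callable standing in for the black-box model (it takes a batch of images and returns class probabilities), the LIME calls follow the public lime package API, the NES estimator uses antithetic Gaussian sampling, and the masked sign update is one plausible adaptation of a white-box attack to the black-box setting; all hyperparameters are placeholders.

# Illustrative sketch of the LIME-Attack pipeline described in the abstract.
import numpy as np
from lime import lime_image

def lime_mask(image, query_probs, num_regions=5, num_samples=1000):
    # Step 1: interpret the black-box model with LIME to find the
    # discriminative superpixel regions of this example.
    explainer = lime_image.LimeImageExplainer()
    exp = explainer.explain_instance(image, query_probs,
                                     top_labels=1, num_samples=num_samples)
    label = exp.top_labels[0]
    _, mask = exp.get_image_and_mask(label, positive_only=True,
                                     num_features=num_regions, hide_rest=False)
    return label, mask.astype(np.float32)[..., None]  # (H, W, 1), broadcasts over channels

def nes_gradient(image, label, query_probs, sigma=1e-3, n_samples=50):
    # Step 2: derivative-free gradient estimate via Natural Evolution
    # Strategies with antithetic Gaussian sampling (2 queries per sample).
    grad = np.zeros_like(image, dtype=np.float32)
    for _ in range(n_samples):
        u = np.random.randn(*image.shape).astype(np.float32)
        p_plus = query_probs(np.clip(image + sigma * u, 0, 1)[None])[0, label]
        p_minus = query_probs(np.clip(image - sigma * u, 0, 1)[None])[0, label]
        grad += (p_plus - p_minus) * u  # estimates d p(label | x) / dx
    return grad / (2 * n_samples * sigma)

def lime_attack(image, query_probs, eps=0.05, alpha=0.01, steps=10):
    # Step 3: iterative sign updates confined to the LIME mask, so pixels
    # outside the discriminative regions are never modified (untargeted).
    label, mask = lime_mask(image, query_probs)
    x_adv = image.copy()
    for _ in range(steps):
        g = nes_gradient(x_adv, label, query_probs)
        x_adv = x_adv - alpha * np.sign(g) * mask  # descend true-class probability
        x_adv = np.clip(np.clip(x_adv, image - eps, image + eps), 0, 1)
        if query_probs(x_adv[None])[0].argmax() != label:
            break  # misclassification achieved
    return x_adv

Restricting the update to the mask is what keeps the overall perturbation small: everything outside the LIME-selected regions stays untouched, which is consistent with the 10%–30% reduction the abstract reports.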
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Duan, Y., Zuo, X., Huang, H., Wu, B., Zhao, X. (2024). A Local Interpretability Model-Based Approach for Black-Box Adversarial Attack. In: Tan, Y., Shi, Y. (eds) Data Mining and Big Data. DMBD 2023. Communications in Computer and Information Science, vol 2018. Springer, Singapore. https://doi.org/10.1007/978-981-97-0844-4_1
Print ISBN: 978-981-97-0843-7
Online ISBN: 978-981-97-0844-4