Abstract
Adversarial patch attacks have become a primary concern in recent years, as they pose a significant threat to the security and reliability of deep neural networks. Modifying a benign image by introducing an adversarial patch, a localized region of adversarial pixels, alters the salient features of the image and results in misclassification. The novelty of our approach lies in using image inpainting as an adversarial defence to rectify the patch region. The adversarial patch is automatically localized using the Fast Score Class Activation Map (Fast Score-CAM) and replaced by inpainting with the Fast Marching Method, which efficiently propagates pixel information from the surrounding area into the patch region. This preserves the structural integrity of the original image while the adversarial pixels are inpainted. Moreover, a defender cannot be expected to have prior knowledge of the patch at attack time. We therefore propose our defence in a black-box setting, assuming no knowledge of the patch location, shape, or size. Furthermore, we do not rely on re-training the victim model on adversarial examples, indicating the method's potential usefulness for real-world applications. Our experimental results show that the proposed approach achieves accuracy of up to 76.37% on ImageNet100 despite the adversarial patch attack, a considerable improvement of 76.28 percentage points. Moreover, on benign images our approach gives an accuracy of 81.11%, suggesting that our defence pipeline is applicable irrespective of whether the input image is adversarial or clean.
S. Sharma and R. Joshi—Co-first authors.
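The sketch below illustrates the defence pipeline described in the abstract: threshold a class-activation heatmap to obtain a mask over the suspected patch, then inpaint the masked region with the Fast Marching Method. It assumes OpenCV's Telea (FMM) inpainting and a pre-computed Score-CAM heatmap; the function name `defend_with_inpainting`, the 0.6 threshold, the mask dilation, and the inpaint radius are illustrative assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def defend_with_inpainting(image_bgr, cam_heatmap, threshold=0.6):
    """Localize the suspected adversarial patch from a CAM heatmap and inpaint it.

    image_bgr   : HxWx3 uint8 image (OpenCV BGR order).
    cam_heatmap : HxW float map normalized to [0, 1], e.g. from a Score-CAM
                  variant; the 0.6 threshold is an illustrative choice.
    """
    # Binary mask over the pixels flagged as the adversarial patch.
    mask = (cam_heatmap >= threshold).astype(np.uint8) * 255

    # Slightly dilate the mask so it also covers the patch border (assumption).
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)

    # Fast Marching Method (Telea) inpainting propagates pixel information
    # from the surrounding area into the masked patch region.
    restored = cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)
    return restored
```

The restored image is then passed to the unchanged victim classifier, so the defence requires no re-training and works whether or not the input actually contains a patch, consistent with the abstract.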