
Robust Adversarial Defence: Use of Auto-inpainting

  • Conference paper
  • First Online:
Computer Analysis of Images and Patterns (CAIP 2023)

Abstract

Adversarial patch attacks have become a primary concern in recent years, as they pose a significant threat to the security and reliability of deep neural networks. Modifying benign images by introducing adversarial patches comprising localized adversarial pixels alters the salient features of the image, resulting in misclassification. The novelty of our approach lies in the use of an image inpainting technique as an adversarial defence for rectifying the patch region. The adversarial patch is automatically localized using the Fast Score Class Activation Map and then replaced by inpainting using the Fast Marching Method, which efficiently propagates pixel information from the surrounding areas into the patch region. This approach preserves the original image's structural integrity while inpainting the adversarial pixels. Moreover, no prior knowledge about the patch can be assumed at the time of the attack. We therefore propose our adversarial defence technique in a black-box setting, assuming no knowledge about the patch location, shape, or size. Furthermore, we do not rely on re-training the victim model on adversarial examples, indicating its potential usefulness for real-world applications. Our experimental results show that the proposed approach achieves accuracy of up to 76.37% on ImageNet100 despite the adversarial patch attack, a considerable improvement of 76.28 percentage points. Moreover, on benign images our approach achieves an accuracy of 81.11%, suggesting that our defence pipeline is applicable irrespective of whether the input image is adversarial or clean.

S. Sharma and R. Joshi—Co-first authors.
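As a rough, non-authoritative illustration of the pipeline described in the abstract, the sketch below assumes that a saliency heatmap highlighting the suspected patch has already been computed by a Score-CAM-style method; the function name inpaint_patch, the threshold of 0.6, and the inpainting radius of 3 are illustrative assumptions rather than the authors' settings. The inpainting step uses OpenCV's implementation of Telea's Fast Marching Method.

import cv2
import numpy as np

def inpaint_patch(image_bgr: np.ndarray, saliency: np.ndarray,
                  threshold: float = 0.6, radius: int = 3) -> np.ndarray:
    # image_bgr: H x W x 3 uint8 image; saliency: H x W map scaled to [0, 1].
    # Binarize the saliency map into the 8-bit, single-channel mask expected
    # by cv2.inpaint, marking the suspected patch pixels (threshold is an
    # illustrative assumption).
    mask = (saliency >= threshold).astype(np.uint8) * 255
    # Telea's Fast Marching Method (cv2.INPAINT_TELEA) propagates pixel
    # information from the surrounding area into the masked region while
    # leaving the rest of the image untouched.
    return cv2.inpaint(image_bgr, mask, inpaintRadius=radius,
                       flags=cv2.INPAINT_TELEA)

The restored image would then be passed, unchanged, to the victim classifier, consistent with the claim that no re-training on adversarial examples is required.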



Author information


Corresponding author

Correspondence to Rohan Joshi.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sharma, S., Joshi, R., Bhilare, S., Joshi, M.V. (2023). Robust Adversarial Defence: Use of Auto-inpainting. In: Tsapatsoulis, N., et al. Computer Analysis of Images and Patterns. CAIP 2023. Lecture Notes in Computer Science, vol 14184. Springer, Cham. https://doi.org/10.1007/978-3-031-44237-7_11


  • DOI: https://doi.org/10.1007/978-3-031-44237-7_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44236-0

  • Online ISBN: 978-3-031-44237-7

  • eBook Packages: Computer Science, Computer Science (R0)
