
Self-adaptive Adversarial Training for Robust Medical Segmentation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14222)

Abstract

Adversarial training has been demonstrated to be one of the most effective approaches to training deep neural networks that are robust to malicious perturbations, yet how to apply it effectively to produce robust 3D medical image segmentation models remains an open question. Few empirical studies exist in this area, and developing effective adversarial training methods for complex segmentation models and high-volume 3D examples is challenging and requires theoretical support. In this paper, we study the robustness of 3D segmentation tasks from a PAC-Bayes generalisation perspective and show that reducing a trained model's Lipschitz constant improves its robustness. Through empirical investigation, we further show that adjusting the number of adversarial iterations helps reduce the model's Lipschitz constant, which enables a self-adaptive adversarial training strategy. Empirical studies on the Medical Segmentation Decathlon dataset demonstrate the efficiency of the proposed adversarial training method. Our implementation is available at https://github.com/TrustAI/SEAT.
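The abstract's core idea is to scale the adversarial (PGD-style) iteration budget with an estimate of the model's Lipschitz constant. The following is a minimal, hypothetical sketch of that idea, not the authors' SEAT implementation: the linear "model", the sampling-based local Lipschitz estimate, and all names and constants (`local_lipschitz`, `adaptive_pgd_steps`, the thresholds) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a segmentation network: a fixed linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))

def model(x):
    return W @ x

def local_lipschitz(x, radius=0.1, samples=16):
    """Empirical lower bound on the local Lipschitz constant around x,
    obtained by probing random directions of a fixed small radius."""
    best = 0.0
    for _ in range(samples):
        d = rng.standard_normal(x.shape)
        d = radius * d / np.linalg.norm(d)
        ratio = np.linalg.norm(model(x + d) - model(x)) / np.linalg.norm(d)
        best = max(best, ratio)
    return best

def adaptive_pgd_steps(x, base_steps=3, max_steps=10, threshold=1.0):
    """Self-adaptive iteration budget: spend more attack steps where the
    model is locally less smooth (larger Lipschitz estimate)."""
    L = local_lipschitz(x)
    steps = int(base_steps * max(1.0, L / threshold))
    return min(steps, max_steps)

x = rng.standard_normal(5)
steps = adaptive_pgd_steps(x)
print(steps)
```

In an actual adversarial training loop, `steps` would set the number of inner PGD iterations used to craft the perturbation for each batch, so smoother regions of the input space receive a cheaper attack.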



Acknowledgements

FW is funded by the Faculty of Environment, Science and Economy at the University of Exeter. WR is the corresponding author of this work that was funded by the Partnership Resource Fund of ORCA Hub via the EPSRC under project [EP/R026173/1]. We would like to thank Abhra Chaudhuri for helping with proofreading and the anonymous reviewers for providing valuable feedback.

Author information


Corresponding author

Correspondence to Wenjie Ruan.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 700 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, F., Fu, Z., Zhang, Y., Ruan, W. (2023). Self-adaptive Adversarial Training for Robust Medical Segmentation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14222. Springer, Cham. https://doi.org/10.1007/978-3-031-43898-1_69


  • DOI: https://doi.org/10.1007/978-3-031-43898-1_69


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43897-4

  • Online ISBN: 978-3-031-43898-1

  • eBook Packages: Computer Science, Computer Science (R0)
