
How Effective is Adversarial Training of CNNs in Medical Image Analysis?

  • Conference paper

Medical Image Understanding and Analysis (MIUA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13413)

Abstract

Adversarial attacks are carefully crafted inputs that can deceive machine learning models into giving wrong results with seemingly high confidence. One defence commonly used in the image analysis literature is the introduction of adversarial images at training time, i.e. adversarial training. However, the effectiveness of adversarial training remains unclear in the healthcare domain, where complex medical scans underpin a wide range of clinical workflows. In this paper, we carried out an empirical investigation into the effectiveness of adversarial training as a defence technique in the context of medical images. We demonstrated that adversarial training is, in principle, a transferable defence on medical imaging data, and that it can potentially protect against attacks previously unseen by the model. We also showed empirically that the strength of the attack, determined by the parameter ε, and the percentage of adversarial images included during training have a key influence on the level of success of the defence. Our analysis used 58,954 images from the publicly available MedNIST benchmarking dataset.
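The defence described in the abstract, injecting adversarial images at training time, can be sketched with a minimal FGSM-style example. The toy logistic-regression model, synthetic data, and hyperparameters below are illustrative assumptions, not the paper's CNN/MedNIST pipeline; `eps` controls the attack strength and `adv_fraction` the percentage of adversarial images per epoch, mirroring the two factors the abstract identifies.

```python
import numpy as np

# Illustrative sketch of FGSM-based adversarial training on a toy
# logistic-regression classifier -- NOT the authors' CNN/MedNIST setup.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, Y, eps):
    """Fast Gradient Sign Method: perturb each input by eps in the
    direction of the sign of the input gradient of the loss."""
    p = sigmoid(X @ w + b)
    grad_x = (p - Y)[:, None] * w          # d(cross-entropy)/dx for logistic regression
    return np.clip(X + eps * np.sign(grad_x), 0.0, 1.0)

def adversarial_train(X, Y, eps=0.1, adv_fraction=0.5, lr=0.5, epochs=200):
    """Gradient descent where a fraction of each epoch's inputs is
    replaced with freshly crafted FGSM adversarial copies."""
    w, b = np.zeros(X.shape[1]), 0.0
    n_adv = int(adv_fraction * len(X))
    for _ in range(epochs):
        X_aug = X.copy()
        idx = rng.choice(len(X), size=n_adv, replace=False)
        X_aug[idx] = fgsm(w, b, X[idx], Y[idx], eps)
        p = sigmoid(X_aug @ w + b)
        w -= lr * X_aug.T @ (p - Y) / len(X)
        b -= lr * float(np.mean(p - Y))
    return w, b

# Two well-separated clusters of "pixel intensities" as stand-in data.
X = np.vstack([rng.normal(0.2, 0.05, (50, 4)),
               rng.normal(0.8, 0.05, (50, 4))]).clip(0.0, 1.0)
Y = np.array([0] * 50 + [1] * 50, dtype=float)

w, b = adversarial_train(X, Y, eps=0.1, adv_fraction=0.5)
clean_acc = float(((sigmoid(X @ w + b) > 0.5) == (Y > 0.5)).mean())
adv_acc = float(((sigmoid(fgsm(w, b, X, Y, 0.1) @ w + b) > 0.5) == (Y > 0.5)).mean())
print(f"clean accuracy: {clean_acc:.2f}, accuracy under eps=0.1 FGSM: {adv_acc:.2f}")
```

Crafting the adversarial copies freshly in each epoch (rather than once up front) keeps them matched to the current model parameters, which is the usual formulation of adversarial training.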



Acknowledgment

The work of Dr Ahmed E. Fetit was supported by the UK Research and Innovation Centre for Doctoral Training in Artificial Intelligence for Healthcare under Grant EP/S023283/1.

Author information

Corresponding author

Correspondence to Ahmed E. Fetit.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, Y., Fetit, A.E. (2022). How Effective is Adversarial Training of CNNs in Medical Image Analysis? In: Yang, G., Aviles-Rivero, A., Roberts, M., Schönlieb, C.B. (eds) Medical Image Understanding and Analysis. MIUA 2022. Lecture Notes in Computer Science, vol 13413. Springer, Cham. https://doi.org/10.1007/978-3-031-12053-4_33

  • DOI: https://doi.org/10.1007/978-3-031-12053-4_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-12052-7

  • Online ISBN: 978-3-031-12053-4

  • eBook Packages: Computer Science, Computer Science (R0)
