Abstract
Adversarial attacks are carefully crafted inputs that can deceive machine learning models into producing incorrect outputs, often with seemingly high confidence. A common defence in the image analysis literature is to introduce adversarial images at training time, i.e. adversarial training. However, the effectiveness of adversarial training remains unclear in the healthcare domain, where complex medical scans underpin a wide range of clinical workflows. In this paper, we carried out an empirical investigation into the effectiveness of adversarial training as a defence technique in the context of medical images. We demonstrated that adversarial training is, in principle, a transferable defence on medical imaging data, and that it can potentially defend against attacks previously unseen by the model. We also showed empirically that the strength of the attack, determined by the parameter \(\epsilon \), and the percentage of adversarial images included during training have a key influence on the success of the defence. Our analysis was carried out using 58,954 images from the publicly available MedNIST benchmarking dataset.
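To make the role of \(\epsilon \) concrete, the following is a minimal sketch of a one-step FGSM-style perturbation, the kind of attack whose strength \(\epsilon \) controls. It uses a toy logistic-regression "model" in NumPy rather than the CNNs studied in the paper; all variable names and values here are hypothetical and purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """One-step FGSM attack on a logistic-regression model.

    For binary cross-entropy loss with prediction p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w. FGSM moves every pixel
    by exactly epsilon in the direction that increases the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    x_adv = x + epsilon * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

# Toy flattened 4x4 "image" and model weights (hypothetical values).
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)
w = rng.normal(size=16)
b = 0.0
y = 1.0  # true label

x_adv = fgsm(x, y, w, b, epsilon=0.1)
```

In adversarial training, a chosen percentage of each training batch would be replaced by such perturbed copies; larger \(\epsilon \) yields stronger, more visible perturbations, which is why the paper treats both \(\epsilon \) and the adversarial fraction as key parameters of the defence.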
Acknowledgment
The work of Dr Ahmed E. Fetit was supported by the UK Research and Innovation Centre for Doctoral Training in Artificial Intelligence for Healthcare under Grant EP/S023283/1.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Xie, Y., Fetit, A.E. (2022). How Effective is Adversarial Training of CNNs in Medical Image Analysis?. In: Yang, G., Aviles-Rivero, A., Roberts, M., Schönlieb, CB. (eds) Medical Image Understanding and Analysis. MIUA 2022. Lecture Notes in Computer Science, vol 13413. Springer, Cham. https://doi.org/10.1007/978-3-031-12053-4_33
Print ISBN: 978-3-031-12052-7
Online ISBN: 978-3-031-12053-4