Adversarial examples: attacks and defences on medical deep learning systems

Published in: Multimedia Tools and Applications

Abstract

In recent years, deep neural networks (DNNs) have made significant progress, achieving human-level performance on various long-standing tasks. With the increased use of DNNs in diverse applications, public concern over their trustworthiness has grown. Studies conducted over the last several years have shown that deep learning models are vulnerable to small adversarial perturbations: adversarial examples are generated from clean images by adding perturbations that are imperceptible to humans yet change a model's predictions. Adversarial examples matter for practical reasons, as they can be physically constructed, implying that DNNs in their current state are unsuitable for certain image classification applications. This paper provides an in-depth overview of adversarial attack strategies and defence methods. The theoretical principles, methods, and applications of adversarial attack strategies are discussed first, followed by an outline of research efforts on defence techniques spanning the breadth of the field. The paper then reviews recently proposed adversarial attacks against medical deep learning systems and the defence techniques developed to counter them. The vulnerability of deep learning models is evaluated for different medical image modalities under representative adversarial attack and defence methods. Finally, unresolved issues and obstacles are highlighted to stimulate further research in this crucial area.
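
As a concrete illustration of how such imperceptible perturbations are crafted, the sketch below shows the fast gradient sign method (FGSM), one of the attack strategies this survey covers. It is a minimal example, assuming a differentiable PyTorch classifier with inputs scaled to [0, 1]; the `fgsm_attack` helper and its interface are illustrative, not the paper's implementation.

```python
import torch

def fgsm_attack(model, loss_fn, image, label, epsilon):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    image:   clean input of shape (1, C, H, W), pixel values in [0, 1]
    epsilon: L-infinity perturbation budget (e.g. 8/255), kept small so the
             change stays imperceptible to a human observer
    """
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)  # classification loss on the true label
    loss.backward()                      # gradient of the loss w.r.t. the pixels
    # Step every pixel by epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```

Iterative attacks reviewed in the survey repeat this single gradient step with a smaller step size, projecting back into the epsilon-ball around the clean image after each iteration.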

Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Funding

No funds, grants, or other support were received.

Author information

Corresponding authors

Correspondence to Murali Krishna Puttagunta or S. Ravi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Financial interests

The authors have no competing interests to declare that are relevant to the content of this article.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Puttagunta, M.K., Ravi, S. & Nelson Kennedy Babu, C. Adversarial examples: attacks and defences on medical deep learning systems. Multimed Tools Appl 82, 33773–33809 (2023). https://doi.org/10.1007/s11042-023-14702-9
