Adversarial Machine Learning

Security and Artificial Intelligence

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13049)

Abstract

Recent innovations in machine learning enjoy a remarkable rate of adoption across a broad spectrum of applications, including cyber-security. While previous chapters study the application of machine learning solutions to cyber-security, in this chapter we present adversarial machine learning: a field of study concerned with the security of machine learning algorithms when faced with attackers. Adversarial machine learning itself enjoys remarkable interest from the community, with a large body of work that proposes either attacks against machine learning algorithms or defenses against such attacks. Indeed, adversarial attacks have been mounted in almost all applications of machine learning. Here, we aim to systematize adversarial machine learning, with a pragmatic focus on common computer security applications. Without assuming a strong background in machine learning, we also introduce its basic building blocks and fundamental properties. This study is therefore accessible both to a security audience without in-depth knowledge of machine learning and to a machine learning audience.
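To make the notion of an adversarial attack concrete, the sketch below crafts an evasion example with the fast gradient sign method (FGSM). It is only an illustrative fragment, not the chapter's own code: it assumes PyTorch, a pretrained differentiable classifier model, an input batch x with true labels y, and a perturbation budget epsilon, all of which are hypothetical names introduced here for illustration.

    # Minimal FGSM sketch (assumes PyTorch; model/x/y/epsilon are illustrative).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Work on a detached copy so gradients flow only into this input.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
        loss.backward()                        # gradient of the loss w.r.t. the input
        x_adv = x + epsilon * x.grad.sign()    # one signed-gradient step that increases the loss
        return torch.clamp(x_adv, 0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

A successful attack yields an x_adv that is visually close to x yet is assigned a different label by the classifier, which is the core phenomenon the chapter studies.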

C. J. Hernández-Castro, Z. Liu, A. Serban and I. Tsingenopoulos—Equal contributions, authors ordered alphabetically.

Notes

  1. https://cloud.google.com/vision.

  2. https://www.virustotal.com/gui/home/upload.

  3. These signatures typically consisted of code fragments, file properties, hashes of the file or of fragments, and combinations of these.

Acknowledgements

This research is partially funded by the Research Fund KU Leuven, and by the Flemish Research Programme Cybersecurity.

Author information

Corresponding author

Correspondence to Ilias Tsingenopoulos.

Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hernández-Castro, C.J., Liu, Z., Serban, A., Tsingenopoulos, I., Joosen, W. (2022). Adversarial Machine Learning. In: Batina, L., Bäck, T., Buhan, I., Picek, S. (eds) Security and Artificial Intelligence. Lecture Notes in Computer Science, vol 13049. Springer, Cham. https://doi.org/10.1007/978-3-030-98795-4_12

  • DOI: https://doi.org/10.1007/978-3-030-98795-4_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98794-7

  • Online ISBN: 978-3-030-98795-4

  • eBook Packages: Computer Science, Computer Science (R0)
