Abstract
Recent innovations in machine learning have seen remarkably rapid adoption across a broad spectrum of applications, including cyber-security. While previous chapters study the application of machine learning to cyber-security, this chapter presents adversarial machine learning: the field of study concerned with the security of machine learning algorithms in the presence of attackers. Adversarial machine learning has itself attracted considerable interest from the community, with a large body of work that either proposes attacks against machine learning algorithms or defends against such attacks. Indeed, adversarial attacks have been mounted in almost all applications of machine learning. Here, we aim to systematize adversarial machine learning, with a pragmatic focus on common computer security applications. We also introduce the basic building blocks and fundamental properties of adversarial machine learning, without assuming a strong background in machine learning. The chapter is therefore accessible both to a security audience without in-depth knowledge of machine learning and to a machine learning audience.
C. J. Hernández-Castro, Z. Liu, A. Serban and I. Tsingenopoulos—Equal contributions, authors ordered alphabetically.
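As a concrete illustration of the evasion attacks the chapter surveys, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al., the canonical one-step test-time attack; the PyTorch model, inputs, and epsilon budget are illustrative placeholders, not code from the chapter.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Single-step L-infinity evasion attack (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each input dimension by epsilon in the direction that increases
    # the loss, then clip back to the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```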
Notes
These signatures typically consisted of code fragments, file properties, hashes of the file or of fragments, and combinations of these.
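As a minimal sketch of one such signature type, whole-file hash matching, consider the following; `SIGNATURES` and `matches_signature` are hypothetical names introduced here for illustration, and production engines combine hashes with code-fragment and file-property rules.

```python
import hashlib

# Hypothetical signature database of known-malicious SHA-256 digests.
# The placeholder entry is the digest of an empty file, standing in for
# the digest of a real malware sample.
SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_signature(path: str, chunk_size: int = 4096) -> bool:
    """Return True if the file's SHA-256 digest is a known signature."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() in SIGNATURES
```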
Acknowledgements
This research is partially funded by the Research Fund KU Leuven, and by the Flemish Research Programme Cybersecurity.
Copyright information
© 2022 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Hernández-Castro, C.J., Liu, Z., Serban, A., Tsingenopoulos, I., Joosen, W. (2022). Adversarial Machine Learning. In: Batina, L., Bäck, T., Buhan, I., Picek, S. (eds) Security and Artificial Intelligence. Lecture Notes in Computer Science, vol 13049. Springer, Cham. https://doi.org/10.1007/978-3-030-98795-4_12
DOI: https://doi.org/10.1007/978-3-030-98795-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-98794-7
Online ISBN: 978-3-030-98795-4
eBook Packages: Computer Science; Computer Science (R0)