Adversarial Attacks and Defenses in Images, Graphs and Text: A Review

Deep neural networks (DNN) have achieved unprecedented success in numerous machine learning tasks in various domains. However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications. As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs and text. Thus, it is necessary to provide a systematic and comprehensive overview of the main threats of attacks and the success of corresponding countermeasures. In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against adversarial examples, for the three popular data types, i.e., images, graphs and text.


Introduction
Deep neural networks have become increasingly popular and successful in many machine learning tasks. They have been deployed in different recognition problems in the domains of images, graphs, text and speech, with remarkable success. In the image recognition domain, they are able to recognize objects with near-human accuracy (Krizhevsky et al., 2012; He et al., 2016). They are also used in speech recognition, natural language processing (Hochreiter & Schmidhuber, 1997) and for playing games (Silver et al., 2016a).
Because of these accomplishments, deep learning techniques are also applied in safety-critical tasks. For example, in autonomous vehicles, convolutional neural networks (CNNs) are used to recognize road signs (CireşAn et al., 2012). The machine learning technique used here is required to be highly accurate, stable and reliable. But what if the CNN model fails to recognize the "STOP" sign by the roadside and the vehicle keeps going? It would be a dangerous situation. Similarly, in financial fraud detection systems, companies frequently use graph convolutional networks (GCNs) (Kipf & Welling, 2016) to decide whether their customers are trustworthy or not. If fraudsters disguise their personal identity information to evade the company's detection, it will cause a huge loss to the company. Therefore, the safety issues of deep neural networks have become a major concern.
In recent years, many works (Szegedy et al., 2013; Goodfellow et al., 2014b; He et al., 2016) have shown that DNN models are vulnerable to adversarial examples, which can be formally defined as: "Adversarial examples are inputs to machine learning models that an attacker intentionally designed to cause the model to make mistakes." In the image classification domain, these adversarial examples are intentionally synthesized images which look almost exactly the same as the original images (see Figure 2), but can mislead the classifier to provide wrong prediction outputs. For a well-trained DNN image classifier on the MNIST dataset, almost all the digit samples can be attacked by an imperceptible perturbation added to the original image. Meanwhile, in other application domains involving graphs, text or audio, similar adversarial attacking schemes also exist to confuse deep learning models. For example, perturbing only a couple of edges can mislead graph neural networks (Zügner et al., 2018), and inserting typos into a sentence can fool text classification or dialogue systems (Ebrahimi et al., 2017). As a result, the existence of adversarial examples in all application fields has cautioned researchers against directly adopting DNNs in safety-critical machine learning tasks.
To deal with the threat of adversarial examples, studies have been published with the aim of finding countermeasures to protect deep neural networks. These approaches can be roughly categorized into three main types: (a) Gradient Masking (Papernot et al., 2016b; Athalye et al., 2018): since most attacking algorithms are based on the gradient information of the classifiers, masking or obfuscating the gradients will confuse the attack mechanisms. (b) Robust Optimization (Madry et al., 2017; Kurakin et al., 2016b): these studies show how to train a robust classifier that can correctly classify adversarial examples. (c) Adversary Detection (Carlini & Wagner, 2017a; Xu et al., 2017): these approaches attempt to check whether a sample is benign or adversarial before feeding it to the deep learning models, and can be seen as a method of guarding against adversarial examples. These methods improve DNNs' resistance to adversarial examples.
In addition to building safe and reliable DNN models, studying adversarial examples and their countermeasures is also beneficial for understanding the nature of DNNs and consequently improving them. For example, adversarial perturbations are perceptually indistinguishable to human eyes but can mislead DNNs, which suggests that the DNN's predictive approach does not align with human reasoning. There are works (Goodfellow et al., 2014b; Ilyas et al., 2019) that explain and interpret the existence of adversarial examples for DNNs, which can help us gain more insight into DNN models.
In this review, we aim to summarize and discuss the main studies dealing with adversarial examples and their countermeasures. We provide a systematic and comprehensive review of the state-of-the-art algorithms in the image, graph and text domains, which gives an overview of the main techniques and contributions to adversarial attacks and defenses. The main structure of this survey is as follows: In Section 2, we introduce important definitions and concepts which are frequently used in adversarial attacks and their defenses, together with a basic taxonomy of the types of attacks and defenses. In Section 3 and Section 4, we discuss the main attack and defense techniques in the image classification scenario. We use Section 5 to briefly introduce studies which try to explain the phenomenon of adversarial examples. Section 6 and Section 7 review the studies on graph and text data, respectively.

Definitions and Notations
In this section, we give a brief introduction to the key components of model attacks and defenses. We hope that our explanations can help our audience to understand the main components of the related works on adversarial attacks and their countermeasures. By answering the following questions, we define the main terminology:

• Adversary's Goal (2.1.1) What is the goal or purpose of the attacker? Does he want to misguide the classifier's decision on one sample, or influence the overall performance of the classifier?
• Adversary's Knowledge (2.1.2) What information is available to the attacker? Does he know the classifier's structure, its parameters or the training set used for classifier training?
• Victim Models (2.1.3) What kind of deep learning models do adversaries usually attack? Why are adversaries interested in attacking these models?
• Security Evaluation (2.2) How can we evaluate the safety of a victim model when faced with adversarial examples? What is the relationship and difference between these security metrics and other model goodness metrics, such as accuracy or risks?

ADVERSARY'S GOAL

• Poisoning Attack vs Evasion Attack
Poisoning attacks refer to attacking algorithms that allow an attacker to insert/modify several fake samples into the training database of a DNN algorithm. These fake samples can cause failures of the trained classifier: they can result in poor accuracy (Biggio et al., 2012), or wrong predictions on some given test samples (Zügner et al., 2018). This type of attack frequently appears in situations where the adversary has access to the training database. For example, web-based repositories and "honeypots" often collect malware examples for training, which provides an opportunity for adversaries to poison the data.
In evasion attacks, the classifiers are fixed and usually have good performance on benign test samples. The adversaries do not have authority to change the classifier or its parameters, but they craft fake samples that the classifier cannot recognize. In other words, the adversaries generate fraudulent examples to evade detection by the classifier. For example, in autonomous driving vehicles, sticking a few pieces of tape on a stop sign can confuse the vehicle's road sign recognizer (Eykholt et al., 2017).
• Targeted Attack vs Non-Targeted Attack In a targeted attack, when the victim sample (x, y) is given, where x is the feature vector and y ∈ Y is the ground truth label of x, the adversary aims to induce the classifier to give a specific label t ∈ Y to the perturbed sample x'. For example, a fraudster is likely to attack a financial company's credit evaluation model to disguise himself as a highly credible client of this company.
If there is no specified target label t for the victim sample x, the attack is called a non-targeted attack. The adversary only wants the classifier to predict incorrectly.

ADVERSARY'S KNOWLEDGE
• White-Box Attack In a white-box setting, the adversary has access to all the information of the target neural network, including its architecture, parameters, gradients, etc. The adversary can make full use of the network information to carefully craft adversarial examples. White-box attacks have been extensively studied because the disclosure of model architecture and parameters helps people understand the weakness of DNN models clearly and it can be analyzed mathematically. As stated by (Tramèr et al., 2017), security against white-box attacks is the property that we desire ML models to have.
• Black-Box Attack In a black-box attack setting, the inner configuration of DNN models is unavailable to adversaries. Adversaries can only feed the input data and query the outputs of the models. They usually attack the models by repeatedly feeding samples to the box and observing the outputs, in order to exploit the model's input-output relationship and identify its weaknesses.

VICTIM MODELS

A Conventional Machine Learning Models

For conventional machine learning tools, there is a long history of studying safety issues. Biggio et al. (2013) attack SVM classifiers and fully-connected shallow neural networks on the MNIST dataset. Barreno et al. (2010) examine the security of SpamBayes, a Bayesian-method-based spam detection software. In (Dalvi et al., 2004), the security of Naive Bayes classifiers is checked. Many of these ideas and strategies have been adopted in the study of adversarial attacks on deep neural networks.

B Deep Neural Networks
Different from traditional machine learning techniques, which require domain knowledge and manual feature engineering, DNNs are end-to-end learning algorithms. They use raw data directly as input and learn objects' underlying structures and attributes.
The end-to-end architecture of DNNs makes it easy for adversaries to exploit their weakness, and generate high-quality deceptive inputs (adversarial examples). Moreover, because of the implicit nature of DNNs, some of their properties are still not well understood or interpretable. Therefore, studying the security issues of DNN models is necessary. Next, we'll briefly introduce some popular victim deep learning models which are used as "benchmark" models in attack/defense studies.
(a) Fully-Connected Neural Networks Fully-connected neural networks (FC) are composed of layers of artificial neurons. In each layer, the neurons take the output of the previous layer as input, process it with the activation function and send it to the next layer; the input of the first layer is the sample x, and the (softmax) output of the last layer is the score F(x). An m-layer fully connected neural network can be formed as:

z^(0) = x;   z^(l+1) = σ(W^(l) z^(l) + b^(l)), l = 0, 1, ..., m − 2;   F(x) = softmax(W^(m−1) z^(m−1) + b^(m−1))

One thing to note is that the back-propagation algorithm helps calculate ∂F(x; θ)/∂θ, which makes gradient descent effective in learning parameters. In adversarial learning, back-propagation also facilitates the calculation of ∂F(x; θ)/∂x, representing the output's response to a change in the input. This term is widely used in studies to craft adversarial examples.
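To make the role of these two gradients concrete, here is a minimal sketch (PyTorch), assuming an illustrative two-layer architecture and layer sizes that are not from the original text; it computes both the parameter gradient (used for training) and the input gradient (used for crafting adversarial examples) with one back-propagation pass:

```python
# A minimal sketch: one backward pass yields dL/dtheta and dL/dx.
import torch
import torch.nn as nn

model = nn.Sequential(                 # a small fully-connected network F(x; theta)
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784, requires_grad=True)   # victim sample, with input gradient enabled
y = torch.tensor([3])                        # ground-truth label

loss = loss_fn(model(x), y)
loss.backward()                              # one back-propagation pass

grad_wrt_params = [p.grad for p in model.parameters()]  # dL/dtheta: used by gradient descent
grad_wrt_input = x.grad                                  # dL/dx: used to craft adversarial examples
```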

(b) Convolutional Neural Networks
In computer vision tasks, Convolutional Neural Networks (Krizhevsky et al., 2012) are among the most widely used models. CNN models aggregate local features from the image to learn representations of image objects. CNN models can be viewed as a sparse version of fully connected neural networks: most of the weights between layers are zero. The training algorithm and gradient calculation can also be inherited from fully connected neural networks.

(c) Graph Convolutional Networks

The work of (Kipf & Welling, 2016) introduces graph convolutional networks, which later became a popular node classification model for graph data. The idea of graph convolutional networks is similar to CNNs: they aggregate information from neighboring nodes to learn a representation for each node v, and output the score F(v, X) for prediction:

H^(0) = X;   H^(l+1) = σ(Â H^(l) W^(l))

where X denotes the input graph's feature matrix, and Â depends on the graph's degree matrix and adjacency matrix.
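As an illustration of this aggregation scheme, the following sketch (NumPy) implements a two-layer graph convolution, assuming the commonly used normalization Â = D̃^(-1/2)(A + I)D̃^(-1/2); the weight matrices W0 and W1 are placeholders, not part of the original text:

```python
# A minimal sketch of a two-layer GCN forward pass.
import numpy as np

def normalize_adjacency(A):
    """Compute A_hat = D^{-1/2} (A + I) D^{-1/2} from the adjacency matrix A."""
    A_tilde = A + np.eye(A.shape[0])               # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN: softmax(A_hat * relu(A_hat * X * W0) * W1)."""
    A_hat = normalize_adjacency(A)
    H = np.maximum(A_hat @ X @ W0, 0)              # first layer + ReLU
    logits = A_hat @ H @ W1                        # second layer
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # per-node class scores F(v, X)
```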
(d) Recurrent Neural Networks

Recurrent Neural Networks are very useful for tackling sequential data. As a result, they are widely used in natural language processing. RNN models, especially LSTMs (Hochreiter & Schmidhuber, 1997), are able to store previous time-step information in memory, and exploit useful information from the previous sequence for next-step prediction.

Security Evaluation
We also need to evaluate a model's resistance to adversarial examples. "Robustness" and "Adversarial Risk" are two terms used to describe this resistance of DNN models, on a single sample and over the whole population, respectively.

ROBUSTNESS
Definition 1. (minimal perturbation): Given the classifier F and data (x, y), the adversarial perturbation with the least norm (the most unnoticeable perturbation) is:

δ_min = argmin_δ ||δ||   s.t.   F(x + δ) ≠ y

Here, || · || usually refers to an l_p norm.

Definition 2. (robustness): The norm of the minimal perturbation:

r(x, F) = ||δ_min||

Definition 3. (global robustness): The expectation of robustness over the whole population D:

ρ(F) = E_{x∼D} r(x, F)

The minimal perturbation corresponds to the adversarial example that is most similar to x under the model F. Therefore, the larger r(x, F) or ρ(F) is, the more similarity the adversary has to sacrifice to generate adversarial samples, implying that the classifier F is more robust or safe.

ADVERSARIAL RISK (LOSS)
Definition 4. (most-adversarial example): Given the classifier F and data (x, y), the sample x_adv with the largest loss value in x's ε-neighbor ball:

x_adv = argmax_{x' ∈ B_ε(x)} L(θ, x', y)

Definition 5. (adversarial loss): The loss value of the most-adversarial example:

L_adv(x) = L(θ, x_adv, y) = max_{||x' − x|| ≤ ε} L(θ, x', y)

Definition 6. (global adversarial loss): The expectation of the adversarial loss over the data distribution D:

R_adv(F) = E_{x∼D} max_{||x' − x|| ≤ ε} L(θ, x', y)     (1)

The most-adversarial example is the point where the model is most likely to be fooled in the neighborhood of x. A lower loss value L_adv indicates a more robust model F.

ADVERSARIAL RISK VS RISK
The definition of adversarial risk is drawn from the definition of classifier risk (empirical risk):

R(F) = E_{x∼D} L(θ, x, y)

Risk studies a classifier's performance on samples from the natural distribution D, whereas the adversarial risk in Equation (1) studies a classifier's performance on adversarial examples x'. It is important to note that x' may not necessarily follow the distribution D. Thus, studies on adversarial examples differ from those on model generalization. Moreover, a number of studies have reported the relation between these two properties (Su et al., 2018; Stutz et al., 2019; Zhang et al., 2019b). From this clarification, we hope that our audience grasps the difference and relation between risk and adversarial risk, and the importance of studying adversarial countermeasures.

Notations
With the aforementioned definitions, Table 1 lists the notations which will be used in the subsequent sections.

Generating Adversarial Examples
In this section, we introduce main methods for generating adversarial examples in the image classification domain.
Studying adversarial examples in the image domain is considered essential because: (a) the perceptual similarity between fake and benign images is intuitive to observers, and (b) image data and image classifiers have simpler structure than other domains, like graph or audio. Thus, many studies concentrate on attacking image classifiers as a standard case. In this section, we assume that the image classifiers refer to fully connected neural networks and Convolutional Neural Networks (Krizhevsky et al., 2012). The most common datasets used in these studies include (1) the handwritten digit dataset MNIST, (2) the CIFAR10 object dataset and (3) ImageNet (Deng et al., 2009).

Table 1. Notations.
x : Victim data sample
x' : Perturbed data sample
δ : Perturbation
B_ε(x) : l_p-distance neighbor ball around x with radius ε
D : Natural data distribution
|| · ||_p : l_p norm
y : Sample x's ground truth label
t : Target label, t ∈ Y
Y : Set of possible labels; usually we assume there are m labels
C : Classifier whose output is a label: C(x) = y
F : DNN model which outputs a score vector: F(x) ∈ [0, 1]^m
Z : Logits: last-layer outputs before the softmax: F(x) = softmax(Z(x))
σ : Activation function used in neural networks
θ : Parameters of the model F
L : Loss function for training; we simplify L(F(x), y) to the form L(θ, x, y)

Next, we go through the main methods used to generate adversarial image examples in evasion attack (white-box, black-box, grey-box, physical-world attack) and poisoning attack settings. Note that we also summarize all the attack methods in Table A in Appendix A.

White-box Attacks
Generally, in a white-box attack setting, when the classifier C (model F) and the victim sample (x, y) are given to the attacker, his goal is to synthesize a fake image x' that is perceptually similar to the original image x but can mislead the classifier C into giving a wrong prediction. This can be formulated as:

x' = argmin_{x'} ||x' − x||   s.t.   C(x') = t ≠ y (targeted)  or  C(x') ≠ y (non-targeted)

where || · || measures the dissimilarity between x' and x, which is usually an l_p norm. Next, we will go through the main methods to realize this formulation.

BIGGIO'S ATTACK
In (Biggio et al., 2013), adversarial examples are generated on the MNIST dataset targeting conventional machine learning classifiers like SVMs and 3-layer fully-connected neural networks.
It optimizes the discriminant function to mislead the classifier. For example, on the MNIST dataset, for a linear SVM classifier, the discriminant function g(x) = ⟨w, x⟩ + b will mark a sample x with g(x) > 0 as class "3", and a sample with g(x) ≤ 0 as class "not 3". An example of this attack is shown in figure 1.
Suppose we have a sample x which is correctly classified as "3". For this model, Biggio's attack crafts a new example x' to minimize the discriminant value g(x') while keeping ||x' − x||_1 small. If g(x') is negative, the sample is classified as "not 3", but x' is still close to x, so the classifier is fooled. The studies on adversarial examples for conventional machine learning models (Dalvi et al., 2004; Biggio et al., 2012) have inspired investigations into the safety issues of deep learning models.

SZEGEDY'S L-BFGS ATTACK
The work of (Szegedy et al., 2013) is the first to attack deep neural network image classifiers. They formulate their optimization problem as a search for the minimally distorted adversarial example x', with the objective:

min ||x − x'||_2^2   s.t.   C(x') = t  and  x' ∈ [0, 1]^m     (2)

The problem is approximately solved by introducing a loss function, which results in the following objective:

min  c ||x − x'||_2^2 + L(θ, x', t)   s.t.   x' ∈ [0, 1]^m

In this optimization objective, the first term imposes the similarity between x' and x. The second term encourages the algorithm to find an x' which has a small loss value with respect to label t, so the classifier C is very likely to predict x' as t. By continuously changing the value of the constant c, they can find an x' which has minimum distance to x and at the same time fools the classifier C. To solve this problem, they implement the L-BFGS (Liu & Nocedal, 1989) algorithm.

FAST GRADIENT SIGN METHOD
In (Goodfellow et al., 2014b), a one-step method is introduced to quickly generate adversarial examples. The formulation is:

x' = x + ε sign(∇_x L(θ, x, y))      (non-targeted)
x' = x − ε sign(∇_x L(θ, x, t))      (targeted)

For a targeted attack setting, this formulation can be seen as one step of gradient descent to solve the problem:

min_{x'} L(θ, x', t)   s.t.   ||x' − x||_∞ ≤ ε  and  x' ∈ [0, 1]^m     (3)

The objective function in (3) searches for the point which has the minimum loss value with respect to label t in x's ε-neighbor ball, which is the location where model F is most likely to predict the target class t. In this way, the one-step generated sample x' is also likely to fool the model. An example of FGSM-generated samples on ImageNet is shown in Figure 2.

Figure 2. By adding an unnoticeable perturbation, "panda" is classified as "gibbon". (Image Credit: (Goodfellow et al., 2014b))
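A minimal sketch of the FGSM formulation above (PyTorch); `model` and `loss_fn` stand for any differentiable classifier and its training loss, and the [0, 1] pixel range is an assumption:

```python
import torch

def fgsm(model, loss_fn, x, y, epsilon, target=None):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), target if target is not None else y)
    loss.backward()
    if target is None:
        # non-targeted: one step of gradient ascent on the loss for the true label y
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    else:
        # targeted: one step of gradient descent on the loss for the target label t
        x_adv = x_adv - epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```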
Compared to iterative attacks (Section 3.1.6), FGSM only requires one back-propagation step, and is thus suitable when a large number of adversarial examples are needed; for example, adversarial training (Kurakin et al., 2016a) uses FGSM to produce adversarial samples for all instances in the training set.

DEEP FOOL
In DeepFool, the authors study a classifier F's decision boundary around the data point x. They try to find a path such that x can go beyond the decision boundary, as shown in figure 3, so that the classifier will give a different prediction for x. For example, to attack x_0 (true label is digit 4) into digit class 3, the decision boundary between class 4 and class 3 can be described as F_3 = {z : F(z)_4 − F(z)_3 = 0}. In each attacking step, the algorithm linearizes the decision boundary hyperplane using a Taylor expansion, and calculates the orthogonal vector ω from x_0 to the hyperplane F_3. This vector ω can serve as the perturbation that makes x_0 go beyond the decision boundary F_3. By moving along the vector ω, the algorithm is able to find an adversarial example x'_0 that is classified as class 3.

JACOBIAN-BASED SALIENCY MAP ATTACK

The work (Papernot et al., 2016a) introduced a method based on calculating the Jacobian matrix of the score function F. It can be viewed as a greedy attack algorithm that iteratively manipulates the pixel which is the most influential to the model output.
The authors used the Jacobian matrix J_F(x) = { ∂F_j(x) / ∂x_i }_{i×j} to model F(x)'s change in response to a change in its input x. For a targeted attack setting, where the adversary aims to craft an x' that is classified to the target class t, they repeatedly search for and manipulate the pixel x_i whose increase (decrease) will cause F_t(x) to increase while Σ_{j≠t} F_j(x) decreases. As a result, the model will give the crafted x' the largest score for label t.

BASIC ITERATIVE METHOD (BIM) / PROJECTED GRADIENT DESCENT (PGD) ATTACK
The Basic Iterative Method was first introduced by (Kurakin et al., 2016a) and (Kurakin et al., 2016b). It is an iterative version of the one-step FGSM attack in Section 3.1.3. In a non-targeted setting, it gives an iterative formulation to craft x':

x^0 = x;    x^{t+1} = Clip_{x,ε}( x^t + α sign(∇_x L(θ, x^t, y)) )

Here, Clip denotes the function that projects its argument onto x's ε-neighbor ball B_ε(x) : {x' : ||x' − x||_∞ ≤ ε}. The step size α is usually set to be relatively small (e.g. 1 unit of pixel change per pixel), and the number of steps guarantees that the perturbation can reach the border (e.g. steps = ε/α + 10). This iterative attacking method is also known as the Projected Gradient Descent (PGD) attack when the algorithm is initialized with a random point in B_ε(x), as used in (Madry et al., 2017).
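The iterative formulation can be sketched as follows (PyTorch); the random initialization flag gives the PGD variant, and the [0, 1] pixel range and hyperparameters are assumptions:

```python
import torch

def pgd(model, loss_fn, x, y, epsilon, alpha, steps, random_start=True):
    x0 = x.clone().detach()
    x_adv = x0.clone()
    if random_start:
        x_adv = x_adv + torch.empty_like(x_adv).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # gradient ascent step
            x_adv = x0 + (x_adv - x0).clamp(-epsilon, epsilon)  # project back into B_eps(x)
            x_adv = x_adv.clamp(0.0, 1.0)                       # keep a valid image
    return x_adv.detach()
```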
The BIM (or PGD) attack heuristically searches for the sample x' which has the largest loss value in the l_∞ ball around the original sample x. Such adversarial examples are called "most-adversarial" examples: they are the sample points which are most aggressive and most likely to fool the classifiers when the perturbation intensity (its l_p norm) is limited. Finding these adversarial examples helps to expose the weaknesses of deep learning models.

CARLINI & WAGNER'S ATTACK
Carlini and Wagner's attack (Carlini & Wagner, 2017b) counterattacks the defense strategy of defensive distillation (Papernot et al., 2016b), which was shown to be successful against FGSM and L-BFGS attacks. C&W's attack aims to solve the same problem as defined in the L-BFGS attack (Section 3.1.2), namely finding the minimally-distorted perturbation (Equation 2).
The authors address the problem (2) by instead solving:

min ||x − x'||_2^2 + c · f(x', t),   where   f(x', t) = max( max_{i≠t} Z(x')_i − Z(x')_t , −κ )

The term f(x', t) encourages the algorithm to find an x' that has a larger score for class t than any other label, so that the classifier will predict x' as class t. Next, by applying a line search over the constant c, we can find the x' that has the least distance to x.

The function f(x, y) can also be viewed as a loss function for data (x, y): it penalizes the situation where some labels i have scores Z(x)_i larger than Z(x)_y. It can also be called the margin loss function.

The only difference between this formulation and the one in the L-BFGS attack (Section 3.1.2) is that C&W's attack uses the margin loss f(x, t) instead of the cross-entropy loss L(x, t). The benefit of using the margin loss is that when C(x') = t, the margin loss value f(x', t) = 0, and the algorithm directly minimizes the distance from x' to x. This procedure is more efficient for finding the minimally distorted adversarial example.
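A minimal sketch of the margin loss and the overall objective (PyTorch); `logits_fn` returning Z(x), the constant c and the confidence margin κ are assumed hyperparameters. In practice the objective is minimized over x' with an optimizer such as Adam, combined with a line search over c:

```python
import torch

def cw_margin_loss(logits, target, kappa=0.0):
    # f(x', t) = max( max_{i != t} Z(x')_i - Z(x')_t , -kappa )
    target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, target.view(-1, 1), float("-inf"))   # mask out the target class
    other_max = other.max(dim=1).values
    return torch.clamp(other_max - target_logit, min=-kappa)

def cw_objective(logits_fn, x, x_adv, target, c):
    # ||x' - x||_2^2 + c * f(x', t)
    dist = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
    return dist + c * cw_margin_loss(logits_fn(x_adv), target)
```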
The authors claim their attack is one of the strongest attacks, breaking many defense strategies which were shown to be successful. Thus, their attacking method can be used as a benchmark to examine the safety of DNN classifiers or the quality of other adversarial examples.

GROUND TRUTH ATTACK
Attacks and defenses keep improving to defeat each other. In order to end this stalemate, the work of (Carlini et al., 2017) tries to find the "provable strongest attack". It can be seen as a method to find the theoretical minimally-distorted adversarial examples.
This attack is based on Reluplex, an algorithm for verifying the properties of neural networks. It encodes the model parameters F and data (x, y) as the subjects of a linear-like programming system, and then solves the system to check whether there exists an eligible sample x' in x's neighborhood B_ε(x) that can fool the model. If we keep reducing the radius ε of the search region B_ε(x) until the system determines that no such x' exists, the last found adversarial example is called the ground truth adversarial example, because it has been proved to have the least dissimilarity with x.
The ground-truth attack is the first work to seriously calculate the exact robustness (minimal perturbation) of classifiers. However, this method involves an SMT solver (a complex algorithm to check the satisfiability of a series of theories), which makes it slow and not scalable to large networks. More recent works (Tjeng et al., 2017; Xiao et al., 2018c) have improved the efficiency of the ground-truth attack.
OTHER l_p ATTACKS

Previous studies are mostly focused on l_2 or l_∞ norm-constrained perturbations. However, there are other papers which consider other types of l_p attacks.
(a) One-Pixel Attack. The work in (Su et al., 2019) studies a similar problem as in Section 3.1.2, but constrains the perturbation's l_0 norm. Constraining the l_0 norm of the perturbation x' − x limits the number of pixels that are allowed to be changed. It shows that, on the CIFAR10 dataset, for a well-trained CNN classifier (e.g. VGG16, which has 85.5% accuracy on test data), most of the testing samples (63.5%) can be attacked by changing the value of only one pixel in a non-targeted setting. This also demonstrates the poor robustness of deep learning models.
(b) EAD: Elastic-Net Attack. The Elastic-Net attack also studies a similar problem as in Section 3.1.2, but constrains the perturbation's l_1 and l_2 norms together. As shown in their experiments, some strong defense models that aim to reject l_∞ and l_2 norm attacks (Madry et al., 2017) are still vulnerable to the l_1-based Elastic-Net attack.

UNIVERSAL ATTACK
Previous methods only consider one specific targeted victim sample x. However, the work (Moosavi-Dezfooli et al., 2017a) devises an algorithm that successfully misleads a classifier's decision on almost all testing images. They try to find a perturbation δ satisfying:

||δ||_p ≤ ε,   and   P_{x∼D} ( C(x + δ) ≠ C(x) ) ≥ 1 − σ

This formulation aims to find a single perturbation δ such that the classifier gives wrong decisions on most of the samples.
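A rough sketch of how such a universal perturbation can be accumulated over a dataset (PyTorch). Note that this substitutes a simple signed-gradient step for the DeepFool-based inner step of the original algorithm, and the input shape and hyperparameters are assumptions, so it is only illustrative:

```python
import torch

def universal_perturbation(model, loss_fn, data_loader, epsilon, alpha, epochs=5):
    delta = torch.zeros(1, 3, 224, 224)                    # assumed input shape
    for _ in range(epochs):
        for x, y in data_loader:
            x_pert = (x + delta).clamp(0.0, 1.0).requires_grad_(True)
            logits = model(x_pert)
            if (logits.argmax(dim=1) == y).any():          # some samples are still correct
                loss = loss_fn(logits, y)
                grad = torch.autograd.grad(loss, x_pert)[0]
                # push the shared perturbation in the loss-increasing direction
                delta = delta + alpha * grad.mean(dim=0, keepdim=True).sign()
                delta = delta.clamp(-epsilon, epsilon)     # project onto the norm ball
    return delta.detach()
```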
In their experiments, for example, they successfully find a perturbation that can attack around 85% of the test samples.

SPATIALLY TRANSFORMED ATTACK

Traditional adversarial attack algorithms directly modify the pixel values of an image, which changes the image's color intensity. The work (Xiao et al., 2018b) devises another method, called the Spatially Transformed Attack. They perturb the image by applying a slight spatial transformation: they translate, rotate and distort the local image features slightly.
The perturbation is small enough to evade human inspection but can fool the classifiers. One example is shown in figure 4.
Figure 4. The top part of the digit "5" is perturbed to be "thicker". The image, which was correctly classified as "5", is classified as "3" after the distortion.

UNRESTRICTED ADVERSARIAL EXAMPLES
Previous attack methods only consider adding unnoticeable perturbations to images. However, the work of Song et al. introduces a method to generate unrestricted adversarial examples. These samples do not necessarily look exactly the same as the victim samples, but are still legitimate samples to human eyes and can fool the classifier. Previous successful defense strategies that target perturbation-based attacks fail to recognize them.
In order to attack a given classifier C, Song et al. pretrain an Auxiliary Classifier Generative Adversarial Network (AC-GAN) (Odena et al., 2017), so they can generate a legitimate sample x of class y from a noise vector z_0. Then, to craft an adversarial example, they search for a noise vector z near z_0, but require that the output of the AC-GAN generator, G(z), be wrongly classified by the victim model C.
Because z is near z_0 in the latent space of the AC-GAN, its output should belong to the same class y. In this way, the generated sample G(z) is different from x, misleads the classifier C, but is still a legitimate sample.

Physical World Attack
All the previously introduced attack methods are applied digitally, where the adversary supplies input images directly to the machine learning model. However, this is not always the case for some scenarios, like those that use cameras, microphones or other sensors to receive the signals as input.
In this case, can we still attack these systems by generating physical-world adversarial objects? Recent works show that such attacks do exist. For example, the work in (Eykholt et al., 2017) attaches stickers to road signs that can severely threaten autonomous cars' sign recognizers. These kinds of adversarial objects are more destructive for deep learning models because they can directly challenge many practical applications of DNNs, such as face recognition, autonomous vehicles, etc.

EXPLORING ADVERSARIAL EXAMPLES IN PHYSICAL WORLD
In (Kurakin et al., 2016b), the authors explore the feasibility of crafting physical adversarial objects, by checking whether generated adversarial images (FGSM, BIM) remain "robust" under natural transformations (such as changes of viewpoint, lighting, etc.). Here, "robust" means the crafted images remain adversarial after the transformation. To apply the transformation, they print out the crafted images and let test subjects use cellphones to take photos of these printouts. In this process, the shooting angle and lighting environment are not constrained, so the acquired photos are transformed samples of the previously generated adversarial examples. The experimental results demonstrate that after the transformation, a large portion of these adversarial examples, especially those generated by FGSM, remain adversarial to the classifier. These results suggest the possibility of physical adversarial objects that can fool the sensor under different environments.

EYKHOLT'S ATTACK ON ROAD SIGNS
The work (Eykholt et al., 2017), shown in figure 5, crafts physical adversarial objects by "contaminating" road signs to mislead road sign recognizers. They achieve the attack by putting stickers on the stop sign in the desired positions.
The approach consists of: (1) Implement an l_1-norm based attack (an attack that constrains ||x' − x||_1) on digital images of road signs to roughly find the regions to perturb (l_1 attacks produce sparse perturbations, which helps to find the attack locations). These regions will later be the locations of the stickers.
(2) Concentrate on the regions found in step 1, and use an l_2-norm based attack to generate the colors for the stickers.
(3) Print out the perturbations found in steps 1 and 2, and stick them on the road sign. The perturbed stop sign can confuse an autonomous vehicle from any distance and viewpoint.

ATHALYE'S 3D ADVERSARIAL OBJECT
The work (Athalye et al., 2017) is the first to successfully craft physical 3D adversarial objects. As shown in figure 6, the authors use 3D printing to manufacture an "adversarial" turtle. To achieve this goal, they implement a 3D rendering technique. Given a textured 3D object, they first optimize the object's texture such that the rendered images are adversarial from any viewpoint. In this process, they also ensure that the perturbation remains adversarial under different environments: camera distance, lighting conditions, rotation and background. After finding the perturbation on the 3D renderings, they print an instance of the 3D object.

Figure 6. The image classifier fails to correctly recognize the adversarial object, but the original object can be correctly predicted with 100% accuracy. (Image Credit: (Athalye et al., 2017))

SUBSTITUTE MODEL
The substitute model attack was the first effective algorithm to attack DNN classifiers under the condition that the adversary has no access to the classifier's parameters or training set (black-box). The adversary can only feed input x to obtain the output label y from the classifier. Additionally, the adversary may have only partial knowledge about: (a) the classifier's data domain (e.g. handwritten digits, photographs, human faces) and (b) the architecture of the classifier (e.g., CNNs, RNNs). The main idea is to train a substitute model that imitates the victim classifier's input-output behavior, craft adversarial examples on the substitute in a white-box manner, and then transfer them to the victim model.

ZOO: ZEROTH ORDER OPTIMIZATION BASED ATTACK

Different from the substitute model approach, which can only obtain label information from the classifier, the ZOO attack assumes that the attacker has access to the prediction confidence (score) from the victim classifier's output. In this case, there is no need to build a substitute training set or a substitute model. Chen et al. give an algorithm to "scrape" the gradient information around the victim sample x by observing the changes in the prediction confidence F(x) as the pixel values of x are tuned.
Equation (4) shows that for each index i of the sample x, we add (or subtract) h to x_i. If h is small enough, we can scrape the gradient information from the output of F(·) by:

∂F(x) / ∂x_i ≈ ( F(x + h·e_i) − F(x − h·e_i) ) / (2h)     (4)

where e_i is the unit vector for coordinate i. Utilizing the approximate gradient, we can apply the attack formulations introduced in Section 3.1.3 and Section 3.1.7. The success rate of ZOO is higher than that of the substitute model attack (Section 3.3.1) because it can utilize the prediction confidence, instead of solely the predicted labels.
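A minimal sketch of this coordinate-wise gradient scraping (NumPy); `score_fn` is an assumed callable that queries the victim model and returns the confidence for a chosen class, and restricting the estimate to a random subset of coordinates is a simplification:

```python
import numpy as np

def estimate_gradient(score_fn, x, h=1e-4, num_coords=128):
    """Symmetric-difference estimate of dF/dx on a random subset of coordinates."""
    flat = x.reshape(-1)
    grad = np.zeros(flat.size)
    coords = np.random.choice(flat.size, size=num_coords, replace=False)
    for i in coords:
        e = np.zeros_like(flat)
        e[i] = h
        plus = score_fn((flat + e).reshape(x.shape))    # F(x + h * e_i)
        minus = score_fn((flat - e).reshape(x.shape))   # F(x - h * e_i)
        grad[i] = (plus - minus) / (2.0 * h)
    return grad.reshape(x.shape)
```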

QUERY-EFFICIENT BLACK-BOX ATTACK
The previously introduced black-box attacks require many input queries to the classifier, which may be prohibitive in practical applications. There are some studies on improving the efficiency of generating black-box adversarial examples via a limited number of queries. For example, a more efficient way to estimate the gradient information from model outputs is introduced in (Ilyas et al., 2018). It uses Natural Evolution Strategies (Wierstra et al., 2014), which sample the model's output based on queries around x, to estimate the expectation of the gradient of F at x. This procedure requires fewer queries to the model. Moreover, a genetic algorithm is applied to search the neighbors of a benign image for adversarial examples in (Alzantot et al., 2018).
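A rough sketch of an NES-style gradient estimate in this spirit (NumPy); `score_fn`, the smoothing parameter sigma, and the number of samples are assumptions:

```python
import numpy as np

def nes_gradient(score_fn, x, sigma=0.01, n_samples=50):
    """Estimate the gradient of score_fn at x from queries on sampled directions."""
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        grad += score_fn(x + sigma * u) * u     # antithetic pair of queries
        grad -= score_fn(x - sigma * u) * u
    return grad / (2.0 * n_samples * sigma)
```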

Semi-white (Grey) box Attack
In (Xiao et al., 2018a), a semi-white box attack framework is introduced. It first trains a Generative Adversarial Network (GAN) (Goodfellow et al., 2014a) targeting the model of interest. The attacker can then craft adversarial examples directly from the generative network. The advantage of the GAN-based attack is that it accelerates the process of producing adversarial examples, and makes the samples more natural and harder to detect. Later, in (Deb et al., 2019), a GAN is used to generate adversarial faces to evade face recognition software. Their crafted face images appear to be more natural and are barely distinguishable from the target face images.

Poisoning attacks
The attacks we have discussed so far are evasion attacks, which are launched after the classification model is trained. Some works instead craft adversarial examples before training. These adversarial examples are inserted into the training set in order to undermine the overall accuracy of the learned classifier, or influence its prediction on certain test examples. This process is called a poisoning attack.
Usually, the adversary in a poisoning attack setting has knowledge about the architecture of the model which is later trained on the poisoned dataset. Poisoning attacks are frequently used to attack graph neural networks, because of the GNN's specific transductive learning procedure. Here, we introduce studies that craft image poisoning attacks.

BIGGIO'S POISONING ATTACK ON SVM
The work of (Biggio et al., 2012) introduced a method to poison the training set in order to reduce an SVM model's accuracy. In their setting, they try to figure out a poison sample x_c which, when inserted into the training set, will result in the learned SVM model F_{x_c} having a large total loss on the whole validation set. They achieve this by using incremental learning techniques for SVMs (Cauwenberghs & Poggio, 2001), which can model the influence of training samples on the learned SVM model.
The poisoning attack based on the procedure above is quite successful for SVM models. However, for deep learning models, it is not easy to explicitly figure out the influence of training samples on the trained model. Below we introduce some approaches for applying poisoning attacks to DNN models.

KOH'S MODEL EXPLANATION
In (Koh & Liang, 2017), a method is introduced to interpret deep neural networks: how would the model's predictions change if a training sample were modified? Their model can explicitly quantify the change in the final loss without retraining the model when only one training sample is modified. This work can be naturally adopted to poisoning attacks by finding those training samples that have large influence on model's prediction.

POISON FROGS
In (Shafahi et al., 2018a), a method is introduced to insert an adversarial image with its true label into the training set, in order to cause the trained model to wrongly classify a target test sample. In this work, given a target test sample x_t with true label y_t, the attacker first chooses a base sample x_b from class y_b. Then, it solves the following objective to find the poison sample x':

x' = argmin_x  || Z(x) − Z(x_t) ||_2^2 + β || x − x_b ||_2^2

After inserting the poison sample x' into the training set, the new model trained on X_train + {x'} will classify x' as class y_b, because of the small distance between x' and x_b.
When the newly trained model is used to predict x_t, the objective for x' forces the score vectors of x_t and x' to be close. Thus, x' and x_t will have the same prediction outcome. In this way, the newly trained model will predict the target sample x_t as class y_b.
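A minimal sketch of this feature-collision objective (PyTorch); `feature_fn` is an assumed callable returning the model's score (or penultimate-layer) representation, and the hyperparameters are placeholders:

```python
import torch

def craft_poison(feature_fn, x_target, x_base, beta=0.1, lr=0.01, steps=500):
    x_poison = x_base.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_poison], lr=lr)
    target_feat = feature_fn(x_target).detach()
    for _ in range(steps):
        optimizer.zero_grad()
        # stay close to x_target in feature space, and close to x_base in input space
        loss = ((feature_fn(x_poison) - target_feat) ** 2).sum() \
             + beta * ((x_poison - x_base) ** 2).sum()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x_poison.clamp_(0.0, 1.0)      # keep a valid image
    return x_poison.detach()
```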

Countermeasures Against Adversarial Examples
In order to protect the security of deep learning models, different strategies have been considered as countermeasures against adversarial examples. These countermeasures fall into three main categories:

1. Gradient Masking/Obfuscation: since most attack algorithms are based on the gradient information of the classifier, masking or hiding the gradients will confound the adversaries.

2. Robust Optimization: re-learning the DNN classifier's parameters can increase its robustness, so that the trained classifier correctly classifies subsequently generated adversarial examples.

3. Adversarial Example Detection: these methods study the distribution of natural/benign examples, detect adversarial inputs and refuse to feed them to the classifier.

Gradient Masking/Obfuscation

DEFENSIVE DISTILLATION

In (Papernot et al., 2016b), the procedure of distillation is reformulated to train a DNN model that can resist adversarial examples, such as FGSM, Szegedy's L-BFGS attack or DeepFool. The training process is designed as:
(1) Train a network F on the given training set (X, Y) by setting the softmax temperature to T.
(2) Compute the scores (after softmax) given by F(X), again evaluated at temperature T.
(3) Train another network F_T using softmax at temperature T on the dataset with soft labels (X, F(X)). We refer to the model F_T as the distilled model.
(4) Use the distilled network F_T with softmax at temperature 1, denoted F_1, during prediction on test data X_test (or adversarial examples). Carlini & Wagner (2017b) explain why this algorithm works: when we train a distilled network F_T at temperature T and test it at temperature 1, we effectively cause the inputs to the softmax to become larger by a factor of T. Say T = 100; the logits Z(·) for a sample x and its neighboring points x' will be 100 times larger, which leads the softmax function F_1(·) = softmax(Z(·), 1) to output a score vector like (ε, ε, ..., 1 − (m − 1)ε, ε, ..., ε), where the target output class has a score extremely close to 1 and all other classes have scores close to 0. In practice, the value of ε is so small that its 32-bit floating-point representation is rounded to 0. As a result, the computer cannot find the gradient of the score function F_1, which inhibits gradient-based attacks.
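A minimal sketch of steps (2)-(4) above (PyTorch); the teacher/student networks, the training loop details and hyperparameters are placeholders, not from the original text:

```python
import torch
import torch.nn.functional as F_torch

def soft_cross_entropy(student_logits, teacher_probs, T):
    log_probs = F_torch.log_softmax(student_logits / T, dim=1)
    return -(teacher_probs * log_probs).sum(dim=1).mean()

def distill(teacher, student, loader, T=100, epochs=10, lr=1e-3):
    # Step (1) is assumed done: `teacher` was trained with softmax at temperature T.
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                soft_labels = F_torch.softmax(teacher(x) / T, dim=1)   # step (2)
            loss = soft_cross_entropy(student(x), soft_labels, T)       # step (3)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student   # step (4): at test time, predict with softmax at temperature 1
```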

SHATTERED GRADIENTS
Some studies (Buckman et al., 2018; Guo et al., 2017) try to protect the model by preprocessing the input data. They add a non-smooth or non-differentiable preprocessor g(·) and then train a DNN model f on g(X). The trained classifier f(g(·)) is not differentiable in terms of x, causing the failure of adversarial attacks.
For example, Thermometer Encoding (Buckman et al., 2018) uses a preprocessor to discretize an image's pixel value x_i into an l-dimensional vector τ(x_i) (e.g. when l = 10, τ(0.66) = 1111110000). The vector τ(x_i) acts as a "thermometer" to record the pixel x_i's value. A DNN model is later trained on these vectors. The work in (Guo et al., 2017) studies a number of image processing tools, such as image cropping, compression and total-variance minimization, to determine whether these techniques help to protect the model against adversarial examples. All these approaches break the smooth connection between the model's output and the original input samples, so the attacker cannot easily find the gradient ∂F(x)/∂x for attacking.
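A minimal sketch of the thermometer preprocessor for a single pixel value in [0, 1] (NumPy), matching the τ(0.66) = 1111110000 example above:

```python
import numpy as np

def thermometer_encode(pixel, levels=10):
    """Non-differentiable preprocessor: e.g. 0.66 -> [1,1,1,1,1,1,0,0,0,0] for levels=10."""
    thresholds = (np.arange(levels) + 1) / levels     # 0.1, 0.2, ..., 1.0
    return (pixel >= thresholds).astype(np.float32)
```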
1 Note that the softmax function at temperature T means: softmax(x, T)_i = e^{x_i / T} / Σ_j e^{x_j / T}, where i = 0, 1, ..., K − 1.

STOCHASTIC/RANDOMIZED GRADIENTS

Some defense strategies try to randomize the DNN model in order to confound the adversary. For instance, we train a set of classifiers s = {F_t : t = 1, 2, ..., k}. During evaluation on data x, we randomly select one classifier from the set s and predict the label y. Because the adversary has no idea which classifier is used by the prediction model, the attack success rate will be reduced.
Some examples of this strategy include a work that randomly drops some neurons of each layer of the DNN model, and the work in (Xie et al., 2017a) that resizes the input images to a random size and pads zeros around the input image.

EXPLODING & VANISHING GRADIENTS

Other defense strategies add a generative network in front of the classifier DNN, which causes the final classification model to be an extremely deep neural network. The underlying reason these defenses succeed is that the cumulative product of partial derivatives from each layer causes the gradient ∂L(x)/∂x to be extremely small or irregularly large, which prevents the attacker from accurately estimating the location of adversarial examples.
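A minimal sketch of the randomization strategies above, i.e. randomly selecting one classifier from an ensemble and randomly resizing-and-padding the input (PyTorch); the output size, the classifier list and the assumption that the output size is at least the input size are illustrative:

```python
import random
import torch
import torch.nn.functional as F_torch

def randomized_predict(classifiers, x, out_size=36):
    model = random.choice(classifiers)                     # adversary cannot know which classifier is used
    new_size = random.randint(x.shape[-1], out_size)       # random resize (assumes out_size >= input size)
    x = F_torch.interpolate(x, size=(new_size, new_size), mode="bilinear", align_corners=False)
    pad = out_size - new_size
    left, top = random.randint(0, pad), random.randint(0, pad)
    x = F_torch.pad(x, (left, pad - left, top, pad - top))  # random zero padding
    return model(x).argmax(dim=1)
```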

GRADIENT MASKING/OBFUSCATION METHODS ARE NOT SAFE
The work in (Carlini & Wagner, 2017b) shows that the method of "Defensive Distillation" (Section 4.1.1) is still vulnerable to their attack, and (Athalye et al., 2018) demonstrates that gradient masking/obfuscation defenses in general can be circumvented. The main weakness of the gradient masking strategy is that it can only "confound" the adversaries; it cannot eliminate the existence of adversarial examples.

Robust Optimization
Robust optimization methods aim to improve the classifier's robustness (Section 2.2) by changing the DNN model's manner of learning. They study how to learn model parameters that can give promising predictions on potential adversarial examples. In this field, the works mainly focus on: (1) learning model parameters θ* to minimize the average adversarial loss (Section 2.2.2):

θ* = argmin_θ E_{x∼D} max_{||x' − x|| ≤ ε} L(θ, x', y)     (5)

or (2) learning model parameters θ* to maximize the average minimal perturbation distance (Section 2.2.1):

θ* = argmax_θ E_{x∼D} min_{C(x') ≠ y} ||x' − x||     (6)

Typically, a robust optimization algorithm should have prior knowledge of its potential threat or potential attack (adversarial space D). Then, the defenders build classifiers which are safe against this specific attack. Most of the related works (Goodfellow et al., 2014b; Kurakin et al., 2016b; Madry et al., 2017) aim to defend against adversarial examples generated from small l_p (specifically l_∞ and l_2) norm perturbations. Even though there is a chance that these defenses are still vulnerable to attacks from other mechanisms (e.g. (Xiao et al., 2018b)), studying the security against l_p attacks is fundamental and can be generalized to other attacks.
In this section, we concentrate on defense approaches using robustness optimization against l p attacks. We categorize the related works into three groups: (a) regularization methods, (b) adversarial (re)training and (c) certified defenses.

REGULARIZATION METHODS
Some early studies on defending against adversarial examples focus on exploiting certain properties that a robust DNN should have in order to resist adversarial examples. For example, the work in (Szegedy et al., 2013) suggests that a robust model should be stable when its inputs are distorted, so it turns to constraining the Lipschitz constant to impose this "stability" on the model output. Training with these regularizations can sometimes heuristically help the model be more robust.

Penalize Layers' Lipschitz Constant
When the vulnerability of DNN models to adversarial examples was first claimed, the authors in (Szegedy et al., 2013) suggested that adding regularization terms on the parameters during training can force the trained model to be stable. They suggested constraining the Lipschitz constant L_k between any two layers:

∀ x, δ:   || h_k(x; W_k) − h_k(x + δ; W_k) || ≤ L_k ||δ||

so that the outcome of each layer will not be easily influenced by a small distortion of its input. A later work formalized this idea, by claiming that the model's adversarial risk (5) depends on this instability L_k:

L_adv(x) ≤ L(x) + λ_p ||δ||_p ∏_{k=1}^{K} L_k

where λ_p is the Lipschitz constant of the loss function. This formula states that, during the training process, penalizing the large instability of each hidden layer can help to decrease the adversarial risk of the model, and consequently increase its robustness. The idea of constraining instability also appears in the work (Miyato et al., 2015) for semi-supervised and unsupervised defenses.

Penalize Layers' Partial Derivative

The work in (Gu & Rigazio, 2014) introduced a Deep Contractive Network algorithm to regularize the training. It was inspired by the Contractive Autoencoder (Rifai et al., 2011), which was introduced to learn robust, denoised encoded representations. The Deep Contractive Network adds a penalty on the partial derivatives at each layer into the standard back-propagation framework, so that a change of the input data will not cause a large change in the output of each layer. Thus, it becomes difficult for the classifier to give different predictions on perturbed data samples.

ADVERSARIAL (RE)TRAINING

(i) Adversarial Training with FGSM
The work in (Goodfellow et al., 2014b) is the first to suggest feeding generated adversarial examples into the training process. By adding the adversarial examples with their true labels (x', y) into the training set, the classifier learns that x' belongs to class y, so that the trained model will correctly predict the labels of future adversarial examples.

By training on benign samples augmented with adversarial examples, they increase the robustness against adversarial examples generated by FGSM.
The training strategy of this method is changed in (Kurakin et al., 2016b) so that the model can be scaled to larger datasets such as ImageNet. They suggest that using batch normalization (Ioffe & Szegedy, 2015) should improve the efficiency of adversarial training. We give a short sketch of their algorithm in Algorithm 1.
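A minimal sketch of one adversarial training step in this spirit (PyTorch), mixing clean and FGSM examples in each batch; this is not the exact Algorithm 1, and the equal loss weighting and the value of ε are assumptions:

```python
import torch

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=8 / 255):
    # craft FGSM examples with the current model parameters
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss_adv, x_adv)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # train on the union of clean and adversarial samples with their true labels
    optimizer.zero_grad()
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```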
The trained classifier has good robustness against FGSM attacks, but it is still vulnerable to iterative attacks. Later, the work in (Tramèr et al., 2017) argues that this defense is also vulnerable to single-step attacks: adversarial training with FGSM tends to cause gradient obfuscation (Section 4.1), where the trained classifier F is extremely non-smooth near the test samples x.

(ii) Adversarial Training with PGD
The work in (Madry et al., 2017) suggests using the Projected Gradient Descent attack (Section 3.1.6) for adversarial training, instead of using single-step attacks like FGSM. The PGD attack (Section 3.1.6) can be seen as a heuristic method to find the "most adversarial" example in the l_∞ ball B_ε(x) around x:

x_adv = argmax_{x' ∈ B_ε(x)} L(θ, x', y)     (7)

Here, the most-adversarial example x_adv is the location where the classifier F is most likely to be misled. The trained model under this method demonstrates good robustness against both single-step and iterative attacks on the MNIST and CIFAR10 datasets. However, this method involves an iterative attack for all the training samples. Thus, the time complexity of this adversarial training will be k (using k-step PGD) times as large as that of natural training, and as a consequence, it is hard to scale to large datasets such as ImageNet.

(iii) Ensemble Adversarial Training
The work in (Tramèr et al., 2017) introduced an adversarial training method which can protect CNN models against single-step attacks and can be also applied to large datasets such as ImageNet.
The main approach is to augment the classifier's training set with adversarial examples crafted from other pre-trained classifiers. For example, if we aim to train a robust classifier F, we can first pre-train classifiers F_1, F_2, and F_3 as references. These models have different hyperparameters from model F. Then, for each sample x, we use the single-step FGSM attack to craft adversarial examples on F_1, F_2 and F_3 to obtain x_adv^1, x_adv^2 and x_adv^3. Because of the transferability property (Section 5.3) of single-step attacks across different models, x_adv^1, x_adv^2 and x_adv^3 are also likely to mislead the classifier F. This means that these samples are a good approximation of the "most adversarial" example (7) for model F on x. Training on these samples together will approximately minimize the adversarial loss in (5).
This ensemble adversarial training algorithm is more efficient than those in Section (i) and Section (ii), since it decouples the process of model training from the process of generating adversarial examples.

(iv) Accelerate Adversarial Training

While it is one of the most promising and reliable defense strategies, adversarial training with the PGD attack (Madry et al., 2017) is generally slow and computationally costly.
The work in (Shafahi et al., 2019) proposes a free adversarial training algorithm which improves efficiency by reusing the backward-pass calculations. In this algorithm, the gradient of the loss with respect to the input, ∂L(x + δ, θ)/∂x, and the gradient of the loss with respect to the model parameters, ∂L(x + δ, θ)/∂θ, can be computed together in one back-propagation iteration, by sharing the same components of the chain rule. Thus, the adversarial training process is highly accelerated. The free adversarial training algorithm is shown in Algorithm (3).
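A rough sketch of this idea (PyTorch): each minibatch is replayed m times, and the single backward pass per replay provides both the parameter gradient for the optimizer step and the input gradient for updating the perturbation δ; the hyperparameters are assumptions:

```python
import torch

def free_adversarial_epoch(model, loss_fn, optimizer, loader, epsilon=8 / 255, m=4):
    for x, y in loader:
        delta = torch.zeros_like(x)
        for _ in range(m):                          # replay the same minibatch m times
            x_adv = (x + delta).clamp(0.0, 1.0).requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            optimizer.zero_grad()
            loss.backward()                         # one backward pass yields both gradients
            optimizer.step()                        # dL/dtheta: update the model parameters
            with torch.no_grad():                   # dL/dx: update the shared perturbation
                delta = (delta + epsilon * x_adv.grad.sign()).clamp(-epsilon, epsilon)
```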
The work in (Zhang et al., 2019a) argues that when the model parameters are fixed, the PGD-generated adversarial example is only coupled with the weights of the first layer of the DNN. This argument is based on Pontryagin's Maximum Principle (Pontryagin, 2018). Therefore, the work in (Zhang et al., 2019a) develops an algorithm, You Only Propagate Once (YOPO), which reuses the gradient of the loss with respect to the model's first-layer output, ∂L(x + δ, θ)/∂Z_1(x), while generating PGD attacks. In this way, YOPO avoids accessing the gradient many times and therefore reduces the computational cost.

PROVABLE DEFENSES
Adversarial training has been shown to be effective in protecting models against adversarial examples. However, there is still no formal guarantee about the safety of the trained classifiers. We will never know whether there are more aggressive attacks that can break those defenses, so directly applying these adversarial training algorithms in safety-critical tasks would be irresponsible.
As we mentioned in Section 3.1.8, the work (Carlini et al., 2017) was the first to use the Reluplex algorithm to seriously verify the robustness of DNN models: when the model F is given, the algorithm figures out the exact value of the minimal perturbation distance r(x; F). That is to say, the classifier is safe against any perturbation with norm less than r(x; F). If we apply Reluplex to the whole test set, we can tell what percentage of samples are absolutely safe against perturbations with norm less than r_0. In this way, we gain confidence and reduce the expected risk when building DNN models.
The method of Reluplex seeks to find the exact value of r(x; F) that can verify the model F's robustness on x. Alternatively, works such as (Raghunathan et al., 2018a; Wong & Kolter, 2017; Hein & Andriushchenko, 2017) try to find trainable "certificates" C(x; F) to verify the model's robustness. For example, in (Hein & Andriushchenko, 2017), a certificate C(x, F) is calculated for model F on x, which is a lower bound of the minimal perturbation distance: C(x, F) ≤ r(x, F). As shown in Figure 8, the model must be safe against any perturbation with norm limited by C(x, F). Moreover, these certificates are trainable: training to optimize them grants good robustness to the classifier. In this section, we'll briefly introduce some methods to design these certificates.

(i) Lower Bound of Minimal Perturbation
The work in (Hein & Andriushchenko, 2017) derives a lower bound C(x, F) on the minimal perturbation distance of F at x based on the Cross-Lipschitz Theorem:

C(x, F) = max_{ε > 0} min { min_{i ≠ y} ( F_y(x) − F_i(x) ) / ( max_{x' ∈ B_ε(x)} || ∇F_y(x') − ∇F_i(x') ||_q ), ε }

The detailed derivation can be found in (Hein & Andriushchenko, 2017). Note that the formulation of C(x, F) only depends on F and x, and it is easy to calculate for a neural network with one hidden layer. The model F can thus be proved to be safe in the region within distance C(x, F). Training to maximize this lower bound will make the classifier more robust.

(ii) Upper Bound of Adversarial Loss
The works in (Raghunathan et al., 2018a; Wong & Kolter, 2017) aim to solve the same problem. They try to find an upper bound U(x, F) on the adversarial loss L_adv(x, F):

U(x, F) ≥ L_adv(x, F) = max_{||x' − x|| ≤ ε} ( max_{i ≠ y} Z_i(x') − Z_y(x') )     (8)

Recall that in Section 2.2.2, we introduced the function max_{i≠y} Z_i(x') − Z_y(x') as a type of loss function called the margin loss.
The certificate U(x, F) acts in this way: if U(x, F) < 0, then the adversarial loss L_adv(x, F) < 0. Thus, the classifier always gives the largest score to the true label y in the region B_ε(x), and the model is safe in this region. To increase the model's robustness, we should learn parameters with the smallest U values, so that more and more data samples will have negative U values.
The work (Raghunathan et al., 2018a) uses integration inequalities to derive the certificate and uses semidefinite programming (SDP) (Vandenberghe & Boyd, 1996) to solve it. In contrast, the work (Wong & Kolter, 2017) transforms the problem (8) into a linear programming problem and solves it via training an alternative neural network. Both methods only consider neural networks with one hidden layer. There are also studies (Raghunathan et al., 2018b; Wong et al., 2018) that improve the efficiency and scalability of these algorithms.
Furthermore, the work in (Sinha et al., 2017) combines adversarial training and provable defense together. They train the classifier by feeding adversarial examples which are sampled from the distribution of worst-case perturbation, and derive the certificates by studying the Lagrangian duality of adversarial loss.

Adversarial Example Detection
Adversarial example detection is another main approach to protecting DNN classifiers. Instead of directly predicting labels for the input samples, these methods first distinguish whether the input is benign or adversarial. If the input is detected as adversarial, the DNN classifier refuses to predict its label. The work in (Carlini & Wagner, 2017a) sorts the threat models that detection techniques should deal with into 3 categories: 1. A Zero-Knowledge Adversary only has access to the classifier F's parameters, and has no knowledge of the detection model D.

2. A Perfect-Knowledge Adversary is aware of the model F, the detection scheme D, and D's parameters.

3. A Limited-Knowledge Adversary is aware of the model F and the detection scheme D, but does not have access to D's parameters.

USING STATISTICS TO DISTINGUISH ADVERSARIAL EXAMPLES

Some works use the statistical properties of inputs to distinguish adversarial examples from benign ones. In (Hendrycks & Gimpel, 2016), adversarial examples are found to place a higher weight on the larger (later) principal components, whereas natural images place a larger weight on the early principal components. Thus, they can be separated by PCA.

In (Grosse et al., 2017), a statistical test is used: the Maximum Mean Discrepancy (MMD) test (Gretton et al., 2012), which tests whether two datasets are drawn from the same distribution. They use this testing tool to test whether a group of data points is benign or adversarial.

CHECKING THE PREDICTION CONSISTENCY
Other studies focus on checking the consistency of a sample x's prediction outcome. They usually manipulate the model parameters or the input examples themselves, and check whether the classifier's outputs change significantly. These methods are based on the belief that the classifier will give stable predictions on natural examples under such manipulations.
The work in (Feinman et al., 2017) randomizes the classifier using Dropout (Srivastava et al., 2014). If the randomized classifier gives very different prediction outcomes on x across runs, the sample x is very likely to be an adversarial one.
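A minimal PyTorch-style sketch of this randomization check is given below, assuming a classifier `model` that contains dropout layers; Feinman et al. additionally use kernel density estimates in feature space, which are omitted here. Calling model.train() is a shortcut to keep dropout active at test time (in a real model it would also affect batch-norm layers).

```python
import torch

def dropout_disagreement(model, x, n_runs=32):
    """Run the classifier n_runs times with dropout active and measure how much
    its predicted class probabilities vary; a large spread suggests x may be adversarial."""
    model.train()                        # keep dropout layers stochastic at test time (sketch only)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_runs)])
    mean_probs = probs.mean(dim=0)                 # average prediction over dropout samples
    uncertainty = probs.var(dim=0).sum(dim=-1)     # total variance across classes, per sample
    return mean_probs.argmax(dim=-1), uncertainty

# usage sketch (the threshold tau would be calibrated on held-out benign data):
# pred, score = dropout_disagreement(model, x.unsqueeze(0))
# is_adversarial = score.item() > tau
```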
The work in (Xu et al., 2017) manipulates the input sample itself to check the consistency. For each input sample x, it reduces the color depth of the image (e.g. an 8-bit grayscale image with 256 possible values per pixel becomes a 7-bit one with 128 possible values), as shown in Figure 9. It hypothesizes that for natural images, reducing the color depth will not change the prediction result, while the prediction on adversarial examples will change. In this way, they can detect adversarial examples. Similar to reducing the color depth, the work (Xu et al., 2017) also introduces other feature squeezing methods, such as spatial smoothing.

Figure 9. Images from MNIST and CIFAR10. From left to right, the color depth is reduced from 8-bit, 7-bit, ..., 2-bit, 1-bit. (Image Credit: (Xu et al., 2017))
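A hedged sketch of the bit-depth-reduction check follows, assuming inputs scaled to [0, 1] and a classifier `model`; the detection threshold tau would be calibrated on benign data, as in the original feature squeezing pipeline.

```python
import torch

def reduce_bit_depth(x, bits):
    """Squeeze pixel values in [0, 1] down to the given bit depth."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def feature_squeezing_score(model, x, bits=4):
    """L1 distance between the model's predictions on the original and squeezed input;
    a large distance suggests the input may be adversarial."""
    with torch.no_grad():
        p_orig = torch.softmax(model(x), dim=-1)
        p_squeezed = torch.softmax(model(reduce_bit_depth(x, bits)), dim=-1)
    return (p_orig - p_squeezed).abs().sum(dim=-1)

# usage sketch: flag x as adversarial if feature_squeezing_score(model, x) > tau,
# with tau chosen so that the false positive rate on benign data stays small.
```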

SOME ATTACKS WHICH EVADE ADVERSARIAL DETECTION
The work in (Carlini & Wagner, 2017a) bypassed 10 of the detection methods which fall into the three categories above. The feature squeezing methods were also bypassed by follow-up work that introduced a "stronger" adversarial attack.
The authors in (Carlini & Wagner, 2017a) claim that the properties which are intrinsic to adversarial examples are not very easy to find. They also give several suggestions for future detection works:

1. Randomization can increase the required attacking distortion.

2. Defenses that directly manipulate raw pixel values are ineffective.

3. Evaluation should be done on multiple datasets besides MNIST.

4. Report false positive and true positive rates for detection.

5. Evaluate against strong attacks; simply focusing on white-box attacks is risky.

Explanations for the Existence of Adversarial Examples
In addition to crafting adversarial examples and defending against them, explaining the reason behind these phenomena is also important. In this section, we briefly introduce the recent works and hypotheses on the key questions of adversarial learning. We hope our introduction gives the audience a basic view of the existing ideas and solutions for these questions.

Why Do Adversarial Examples Exist?
Some original works, such as (Szegedy et al., 2013), state that the existence of adversarial examples is due to the fact that DNN models do not generalize well in low-probability regions of the data space. The generalization issue may be caused by the high complexity of DNN model structures.
However, even linear models are vulnerable to adversarial attacks (Goodfellow et al., 2014b). Furthermore, the work (Madry et al., 2017) conducts experiments showing that an increase in model capacity improves model robustness, which suggests that high model complexity alone does not explain the vulnerability.
Some insight can be gained about the existence of adversarial examples by studying the model's decision boundary.
Adversarial examples are almost always close to the decision boundary of a naturally trained model, which may be because the decision boundary is too flat, too curved (Moosavi-Dezfooli et al., 2017b), or inflexible (Fawzi et al., 2018).
Studying the reason behind the existence of adversarial examples is important because it can guide us in designing more robust models, and help us to understand existing deep learning models. However, there is still no consensus on this problem.

Can We Build an Optimal Classifier?
Many recent works hypothesize that it might be impossible to build an optimally robust classifier. For example, the work in (Shafahi et al., 2018b) claims that adversarial examples are inevitable because the distribution of data in each class is not well-concentrated, which leaves room for adversarial examples. In this vein, other work claims that to improve the robustness of a trained model, it is necessary to collect more data. Moreover, further work suggests that even if we can build models with high robustness, it must come at the cost of some accuracy.

What is Transferability?
Transferability is one of the key properties of adversarial examples. It means that the adversarial examples generated to target one victim model also have a high probability of misleading other models.
Some works compare the transferability of different attacking algorithms. In (Kurakin et al., 2016a), they claim that on ImageNet, single-step attacks (FGSM) are more likely to transfer between models than iterative attacks (BIM) under the same perturbation intensity.
The property of transferability is frequently utilized in attacking techniques in the black-box setting. If the model parameters are hidden from attackers, they can instead attack substitute models and exploit the transferability of the generated samples. The property of transferability is also utilized by defense methods, as in (Hendrycks & Gimpel, 2016): since the adversarial examples for model A are also likely to be adversarial for model B, adversarial training using adversarial examples from B will help defend A.

Graph Adversarial Examples
Adversarial examples also exist in graph-structured data (Zügner et al., 2018;Dai et al., 2018). Attackers usually slightly modify the graph structure and node features, in an effort to cause graph neural networks (GNNs) to give wrong predictions for node classification or graph classification tasks. These adversarial attacks therefore raise concerns about the security of applying GNN models. For example, a bank needs to build a reliable credit evaluation system whose model should not be easily attacked by malicious manipulations.
There are some distinct differences between attacking graph models and attacking traditional image classifiers: • Non-Independence Samples of graph-structured data are not independent: changing one node's features or connections will influence the predictions on other nodes.
• Poisoning Attacks Graph neural networks are usually trained in a transductive learning setting for node classification: the test data are also used to train the classifier. This means that when we modify the test data, the trained classifier also changes.
• Discreteness When modifying the graph structure, the search space for adversarial examples is discrete. Previous gradient-based methods for finding adversarial examples may be invalid in this case.
Below are the methods used by some successful works to attack and defend graph neural networks.

Definitions for Graphs and Graph Models
In this section, the notations and definitions of graph-structured data and graph neural network models are given. A graph can be represented as G = {V, E}, where V is a set of N nodes and E is a set of M edges. The edges describe the connections between the nodes, which can also be expressed by an adjacency matrix A ∈ {0, 1}^{N×N}. Furthermore, a graph G is called an attributed graph if each node in V is associated with a d-dimensional attribute vector. The attributes of all the nodes in the graph can be summarized as a matrix X ∈ R^{N×d}, the i-th row of which represents the attribute vector of node v_i.
The goal of node classification is to learn a function g : V → Y that maps each node to one class in Y, based on a group of labeled nodes in G. One of the most successful node classification models is the Graph Convolutional Network (GCN) (Kipf & Welling, 2016). The GCN model keeps aggregating information from neighboring nodes to learn a representation for each node v:

H^(0) = X,  H^(l+1) = σ(Â H^(l) W^(l)),

where σ is a non-linear activation function, the matrix Â is defined as Â = D̃^{-1/2} Ã D̃^{-1/2}, Ã = A + I_N, and D̃_ii = Σ_j Ã_ij. The last layer outputs the score vectors of each node for prediction: H^(m)_v = F(v, X).
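The following numpy sketch implements the forward pass defined above for a two-layer GCN; the weight matrices W0 and W1 are assumed to be already trained.

```python
import numpy as np

def normalize_adjacency(A):
    """Compute A_hat = D~^{-1/2} (A + I) D~^{-1/2} used by the GCN layer."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN: H1 = ReLU(A_hat X W0), Z = softmax(A_hat H1 W1).
    Row v of Z contains the class scores F(v, X) for node v."""
    A_hat = normalize_adjacency(A)
    H1 = np.maximum(A_hat @ X @ W0, 0)
    logits = A_hat @ H1 @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```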

Zugner's Greedy Method
In the work of (Zügner et al., 2018), they consider attacking node classification models, namely Graph Convolutional Networks (Kipf & Welling, 2016), by modifying the nodes' connections or (binary) node features. In this setting, an adversary is allowed to add/remove edges between nodes, or flip node features, with a limited number of operations. The goal is to mislead the GCN model, which is trained on the perturbed graph (transductive learning), into giving wrong predictions. In their work, they also specify three levels of adversary capabilities: the adversary can manipulate (1) all nodes, (2) a set of nodes A including the target victim x, or (3) a set of nodes A which does not include the target node x. A sketch is shown in Figure 10.
Similar to the objective function in (Carlini & Wagner, 2017b) for image data, they formulate the graph attacking problem as a search for a perturbed graph G' such that the learned GCN classifier Z* has the largest score margin between some wrong class and the true class of the victim node (Equation (9)). The authors solve this objective by finding perturbations on a fixed, linearized substitute GCN classifier G_sub which is trained on the clean graph. They use a heuristic algorithm to find the most influential operations against this substitute model (e.g. removing/adding the edge or flipping the feature which causes the largest increase in (9)). The experimental results demonstrate that the adversarial operations are also effective on the later trained classifier Z*.
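To make the greedy search concrete, here is an illustrative sketch that flips one edge at a time to shrink the victim node's classification margin on a surrogate model. The helper surrogate_margin is hypothetical (it would evaluate the fixed, linearized substitute classifier), and the original work's incremental scoring, feature flips and unnoticeability constraints are omitted.

```python
import numpy as np

def greedy_edge_attack(A, X, v0, y_true, surrogate_margin, budget):
    """Greedily flip the edge that most decreases the surrogate's margin on node v0.
    surrogate_margin(A, X, v0, y_true) is a hypothetical helper returning
    Z_y(v0) - max_{i!=y} Z_i(v0), computed on a surrogate GCN trained on the clean graph."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_margin, best_edge = surrogate_margin(A, X, v0, y_true), None
        for u in range(n):
            for w in range(u + 1, n):
                A[u, w] = A[w, u] = 1 - A[u, w]          # tentatively flip edge (u, w)
                m = surrogate_margin(A, X, v0, y_true)
                if m < best_margin:
                    best_margin, best_edge = m, (u, w)
                A[u, w] = A[w, u] = 1 - A[u, w]          # undo the tentative flip
        if best_edge is None:
            break                                         # no single flip helps anymore
        u, w = best_edge
        A[u, w] = A[w, u] = 1 - A[u, w]                   # commit the best flip
    return A
```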
During the attacking process, the authors also impose two key constraints to ensure the similarity of the perturbed graph to the original one: (1) the degree distribution should be maintained, and (2) two positive features which never co-occur in G should also not co-occur in G'. Later, some other graph attacking works (e.g. (Ma et al., 2019)) suggest that the eigenvalues/eigenvectors of the graph Laplacian matrix should also be maintained during attacking, otherwise the attacks are easily detected. However, there is still no firm consensus on how to formally define the similarity between graphs and generate unnoticeable perturbations.

Dai's RL Method : RL-S2V
Different from Zugner's greedy method, the work (Dai et al., 2018) introduces a reinforcement learning method to attack graph neural networks. This work only considers adding or removing edges to modify the graph structure.
In (Dai et al., 2018)'s setting, a node classifier F trained on the clean graph G^(0) = G is given; the internal parameters of F are unknown to the attacker, who is allowed to modify m edges in total to alter F's prediction on the victim node v_0. The authors formulate this attacking task as a Q-Learning game (Mnih et al., 2013), with the Markov Decision Process defined as below: • State The state s_t is represented by the tuple (G^(t), v_0), where G^(t) is the modified graph after t iterative steps.
• Action To represent the action of adding/removing edges, a single action at time step t is a_t ∈ V × V, which specifies the edge to be added or removed.
• Reward In order to encourage actions that fool the classifier, a positive reward should be given if v_0's label is altered. Thus, the authors define the reward function as r(s_t, a_t) = 0 for all intermediate steps t = 1, 2, ..., m − 1; for the last step, the reward is positive if F's prediction on v_0 has been altered on G^(m) and negative otherwise. • Termination The process stops once the agent finishes modifying m edges.
The Q-Learning algorithm helps the adversary learn which actions to take (which edge to add or remove) in a given state (the current graph structure) in order to obtain the largest reward (changing F's output).
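A schematic sketch of the reward and one attack episode under the MDP above is shown below; `agent` (a Q-network policy) and `classifier_predict` (black-box query access to F's prediction) are hypothetical placeholders, and G is assumed to be a numpy adjacency matrix.

```python
def reward(classifier_predict, G_t, v0, y_true, t, m):
    """Reward of the edge-modification MDP: zero for intermediate steps,
    positive at the final step if the prediction on v0 has been altered, negative otherwise."""
    if t < m:
        return 0.0
    return 1.0 if classifier_predict(G_t, v0) != y_true else -1.0

def run_episode(G, v0, y_true, agent, classifier_predict, m):
    """Roll out one attack episode: the agent picks m edges to add or remove."""
    G_t = G.copy()
    for t in range(1, m + 1):
        u, w = agent.select_action(G_t, v0)        # hypothetical Q-network policy: a_t in V x V
        G_t[u, w] = G_t[w, u] = 1 - G_t[u, w]      # flip the chosen edge
        r = reward(classifier_predict, G_t, v0, y_true, t, m)
        agent.observe(G_t, v0, (u, w), r)          # store the transition for Q-learning updates
    return G_t
```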

Graph Structure Poisoning via Meta-Learning
Previous graph attack works only focus on attacking a single victim node. The work in (Zügner & Günnemann, 2019) attempts to poison the graph so that the global node classification performance of GCN is undermined and made almost useless. The approach is based on meta-learning (Finn et al., 2017), which is traditionally used for hyperparameter optimization, few-shot image recognition, and fast reinforcement learning. In (Zügner & Günnemann, 2019), the meta-learning technique treats the graph structure as a hyperparameter of the GCN model to optimize. Using their algorithm to perturb 5% of the edges of the CITESEER graph dataset, they increase the misclassification rate to over 30%.

Attack on Node Embedding
The work in (Bojcheski & Günnemann, 2018) studies how to perturb the graph structure in order to corrupt the quality of node embeddings, and consequently hinder subsequent learning tasks such as node classification or link prediction. Specifically, they study DeepWalk (Perozzi et al., 2014), a random-walk based node embedding approach, and approximately find the graph that maximizes the loss of the learned node embeddings.

ReWatt: Attacking Graph Classifier via Rewiring
The ReWatt method (Ma et al., 2019) attempts to attack graph classification models, where each input of the model is a whole graph. The proposed algorithm misleads the model by making unnoticeable perturbations on the graph.
In their attacking scheme, they utilize reinforcement learning to find a rewiring operation a = (v_1, v_2, v_3) at each step, which involves 3 nodes. The first two nodes are connected in the original graph, and the edge between them is removed in the first step of the rewiring process. The second step adds an edge between v_1 and v_3, where v_3 is constrained to be within 2 hops of v_1. Analysis in (Ma et al., 2019) shows that the rewiring operation tends to preserve the eigenvalues of the graph's Laplacian matrix, which makes the attack difficult to detect.

Defending Graph Neural Networks
Many works have shown that graph neural networks are vulnerable to adversarial examples, even though there is still no consensus on how to define an unnoticeable perturbation. Some defense works have already appeared. Many of them are inspired by the popular defense methodologies in image classification and use adversarial training to protect GNN models (Feng et al., 2019;Xu et al., 2019), which provides moderate robustness.

Adversarial Examples in Audio and Text Data
Adversarial examples also exist in DNNs' applications in the audio and text domains. An adversary can craft fake speech or fake sentences that mislead machine language processors. Meanwhile, deep learning models for audio/text data have already been widely deployed, for example in Apple Siri and Amazon Echo. Therefore, studies on adversarial examples in the audio/text domain also deserve attention.
For text data, the discrete nature of the inputs makes gradient-based attacks designed for images inapplicable, and forces people to craft discrete perturbations at different granularities of text (character level, word level, sentence level, etc.). In this section, we introduce the related works on attacking NLP architectures for different tasks.

Speech Recognition Attacks
There is work studying how to attack state-of-the-art speech-to-text transcription networks, such as DeepSpeech (Hannun et al., 2014). In this setting, given any speech waveform x, an inaudible sound perturbation δ is added so that the synthesized speech x + δ is recognized as any targeted phrase.
In this attack, the maximum decibel (dB) level of the added perturbation noise is limited at every point in time, so that the audio distortion is unnoticeable. Moreover, the attack adapts the C&W attack method (Carlini & Wagner, 2017b) to the audio setting.
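To illustrate the loudness constraint, the small sketch below measures the perturbation's peak level relative to the original waveform and projects it below an assumed threshold (a value around −30 dB means the perturbation is much quieter than the speech); the actual attack folds this constraint into the optimization rather than applying a post-hoc projection.

```python
import numpy as np

def db(x):
    """Peak loudness of a waveform on a relative decibel scale."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

def clip_perturbation(x, delta, max_rel_db=-30.0):
    """Scale the perturbation so that its peak loudness stays at least |max_rel_db| dB
    below the peak loudness of the original audio x (illustrative projection step)."""
    rel_db = db(delta) - db(x)
    if rel_db > max_rel_db:
        delta = delta * (10.0 ** ((max_rel_db - rel_db) / 20.0))
    return delta
```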

Text Classification Attacks
Text classification is one of the main tasks in natural language processing. In text classification, the model is devised to understand a sentence and label it correctly. For example, text classification models can be applied to the IMDB dataset to characterize users' opinions (positive or negative) about movies, based on the provided reviews. Recent adversarial attack works have demonstrated that text classifiers are easily misguided by slightly modifying a text's spelling, words or structure.

ATTACKING WORD EMBEDDING
The work (Miyato et al., 2016) considers adding perturbations to the word embeddings (Mikolov et al., 2013), so as to fool an LSTM (Hochreiter & Schmidhuber, 1997) classifier. However, this attack only perturbs the word embeddings, instead of the original input sentence.
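A minimal PyTorch-style sketch of such an embedding-level perturbation follows, assuming the victim classifier is split into a hypothetical `embed` module (token ids to vectors) and `classify_from_embeds` (embedded sequence to logits); the perturbation is applied to the continuous embeddings, not the discrete tokens, mirroring the limitation noted above.

```python
import torch
import torch.nn.functional as F

def perturb_word_embeddings(embed, classify_from_embeds, token_ids, label, eps=0.1):
    """Gradient-based perturbation in embedding space (not on the discrete tokens).
    `embed` and `classify_from_embeds` are assumed pieces of the victim LSTM classifier."""
    emb = embed(token_ids).detach().requires_grad_(True)
    loss = F.cross_entropy(classify_from_embeds(emb), label)
    loss.backward()
    # move along the normalized gradient direction, scaled by eps
    delta = eps * emb.grad / (emb.grad.norm() + 1e-12)
    return (emb + delta).detach()
```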

ATTACKING WORDS AND LETTERS
The work HotFlip (Ebrahimi et al., 2017) considers replacing a letter in a sentence in order to mislead a character-level text classifier (each letter is encoded as a vector). For example, as shown in Figure 11, changing a single letter in a sentence alters the model's prediction of its topic. The attack algorithm achieves this by finding the most influential letter replacement via gradient information. These adversarial perturbations can be noticed by human readers, but they do not change the content of the text as a whole, nor do they affect human judgments. Follow-up work considers manipulating the victim sentence at the word and phrase levels, by adding, removing or modifying words and phrases in the sentences. In this approach, the first step is similar to HotFlip (Ebrahimi et al., 2017): for each training sample, the most influential letters, called "hot characters", are found. Then, words that contain more than 3 "hot characters" are labeled as "hot words". "Hot words" compose "hot phrases", which are the most influential phrases in the sentences. Manipulating these phrases is likely to influence the model's prediction, so these phrases compose a "vocabulary" to guide the attack. Given a sentence, an adversary can use this vocabulary to find the weakness of the sentence, add one hot phrase, remove a hot phrase from the given sentence, or insert a meaningful fact composed of hot phrases.
DeepWordBug and TextBugger (Li et al., 2018a) are black-box attack methods for text classification. The basic idea of the former is to define a scoring strategy to identify the key tokens which, if modified, will lead the classifier to a wrong prediction. Then they try four types of "imperceivable" modifications on such tokens: swap, substitution, deletion and insertion, to mislead the classifier. The latter follows the same idea, and improves it by introducing new scoring functions.
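To make this scoring-and-perturbing recipe concrete, the sketch below scores tokens by how much their deletion lowers the true-class probability and then applies a character swap to the highest-scoring tokens until the prediction flips. `predict_proba` is a hypothetical black-box returning a numpy probability vector; the actual methods use more refined scoring functions and several modification types.

```python
import random

def token_scores(predict_proba, tokens, y_true):
    """Score each token by how much deleting it lowers the true-class probability."""
    base = predict_proba(" ".join(tokens))[y_true]
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        scores.append(base - predict_proba(" ".join(reduced))[y_true])
    return scores

def swap_attack(predict_proba, sentence, y_true, max_edits=3):
    """Apply small character swaps to the highest-scoring tokens until the
    black-box classifier's prediction changes (or the edit budget runs out)."""
    tokens = sentence.split()
    scores = token_scores(predict_proba, tokens, y_true)
    order = sorted(range(len(tokens)), key=lambda i: -scores[i])
    for i in order[:max_edits]:
        w = tokens[i]
        if len(w) > 3:
            j = random.randrange(1, len(w) - 2)
            tokens[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]   # swap two adjacent inner characters
        adv = " ".join(tokens)
        if predict_proba(adv).argmax() != y_true:
            return adv
    return " ".join(tokens)
```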
The works (Samanta & Mehta, 2017;Iyyer et al., 2018) start to craft adversarial sentences that are grammatically correct and maintain the syntactic structure of the original sentence. The work in (Samanta & Mehta, 2017) achieves this by using synonyms to replace original words, or by adding some words which have different meanings in different contexts. On the other hand, the work (Iyyer et al., 2018) manages to fool the text classifier by paraphrasing the structure of sentences.
The work in (Lei et al., 2018) conducts sentence- and word-level paraphrasing on input texts to craft adversarial examples. In this work, they first build a paraphrasing corpus that contains a large number of word and sentence paraphrases. To find an optimal paraphrase of an input text, a greedy method is adopted to search valid paraphrases for each word or sentence from the corpus. Moreover, they propose a gradient-guided method to improve the efficiency of the greedy search. This work also makes significant theoretical contributions: they formally define the task of discrete adversarial attack as an optimization problem over set functions, and they prove that the greedy algorithm ensures a 1 − 1/e approximation factor for CNN and RNN text classifiers.

ATTACK ON READING COMPREHENSION SYSTEMS
The work (Jia & Liang, 2017) studies whether reading comprehension models are vulnerable to adversarial attacks. In reading comprehension tasks, the machine learning model is asked to answer a given question based on its "understanding" of a paragraph of an article. For example, the work (Jia & Liang, 2017) concentrates on the Stanford Question Answering Dataset (SQuAD), where systems answer questions about paragraphs from Wikipedia.
The authors successfully degrade the performance of state-of-the-art reading comprehension models on SQuAD by inserting adversarial sentences. As shown in Figure 12, the inserted sentence (blue) looks similar to the question, but it does not contradict the correct answer. This inserted sentence is understandable to human readers but greatly confuses the machine. As a result, the proposed attacking algorithm reduces the performance of 16 state-of-the-art reading comprehension models from an average F1 score of 75% to 36%. Their proposed algorithm, AddSent, uses a four-step procedure to find adversarial sentences.

ATTACKS ON NEURAL MACHINE TRANSLATION

The work in (Belinkov & Bisk, 2017) studies the stability of machine translation tools when their input sentences are perturbed by natural errors (typos, misspellings, etc.) and manually crafted distortions (letter replacement, letter reordering). The experimental results show that state-of-the-art translation models are vulnerable to both types of errors, and suggest adversarial training to improve the models' robustness.
Seq2Sick (Cheng et al., 2018) tries to attack seq2seq models used in neural machine translation and text summarization. In their setting, two attack goals are set: to mislead the model into generating an output which has no overlap with the ground truth, and to lead the model to produce an output containing targeted keywords. The model is treated as a white box, and the attacking problem is formulated as an optimization problem in which a discrete perturbation is sought by minimizing a hinge-like loss function.

Dialogue Generation
Unlike the tasks above, where success and failure are clearly defined, in the dialogue task there is no unique appropriate response for a given context. Thus, instead of misleading a well-trained model into producing incorrect outputs, works on attacking dialogue models seek to explore how easily neural dialogue models can be interfered with by perturbations on the inputs, or how they can be led to output targeted responses.
The work in (Niu & Bansal, 2018) explores the oversensitivity and over-stability of neural dialogue models by using some heuristic techniques to modify original inputs and observe the corresponding outputs. They evaluate the robustness of dialogue models by checking whether the outputs change significantly with the modifications on the inputs. They also investigate the effects that take place when retraining the dialogue model using these adversarial examples to improve the robustness and performance of the underlying model.
In the work (He & Glass, 2018), the authors try to find trigger inputs which can lead a neural dialogue model to generate targeted egregious responses. They design a search-based method to determine the words in the input that maximize the generative probability of the targeted response. Then, they treat the dialogue model as a white box and take advantage of the gradient information to narrow the search space. Finally, they show that this method works for "normal" targeted responses, which are decoding results of some input sentences, but it hardly succeeds for manually written malicious responses.
Another work treats the neural dialogue model as a black box and adopts a reinforcement learning framework to effectively find trigger inputs for targeted responses. The black-box setting is stricter but more realistic, while the requirements on the generated responses are properly relaxed: the generated responses are expected to be semantically identical to the targeted ones but not necessarily to match them exactly.

Adversarial Examples in Miscellaneous Tasks
In this section, we summarize some adversarial attacks in other domains. Some of these domains are safety-critical, so the studies on adversarial examples in these domains are also important.

Face Recognition
The work (Sharif et al., 2016) seeks to attack face recognition models at both the digital and physical levels. The main victim model is based on the architecture in (Parkhi et al.), which is a 39-layer DNN model for face recognition tasks. The attack at the digital level is based on traditional attacks, like Szegedy's L-BFGS method (Section 3.1.2).
Beyond digital-level adversarial faces, they also succeed in misleading face recognition models at the physical level. They achieve this by asking subjects to wear 3D-printed sunglasses frames. The authors optimize the color of these glasses by attacking the model at the digital level: various candidate adversarial glasses are considered, and the most effective ones are used for the physical attack. As shown in Figure 13, an adversary wearing the adversarial glasses successfully fools the victim face recognition system.

Object Detection and Semantic Segmentation
There are also studies on attacking semantic segmentation and object detection models in computer vision (Xie et al., 2017b;Metzen et al., 2017b). In both semantic segmentation and object detection tasks, the goal is to learn a model that associates an input image x with a series of labels Y = {y_1, y_2, ..., y_N}. Semantic segmentation models give each pixel of x a label y_i, so that the image is divided into different segments. Similarly, object detection models label all proposals (regions where objects lie).
The work (Xie et al., 2017b) generates an adversarial perturbation on x which causes the classifier to give wrong predictions on all the output labels of the model, in order to fool either semantic segmentation or object detection models. The work (Metzen et al., 2017b) finds that there exist universal perturbations for any input image for semantic segmentation models.

Video Adversarial Examples
Most works concentrate on attacking static image classification models. However, success in attacking images does not directly carry over to videos and video classification systems. One work uses a GAN (Goodfellow et al., 2014a) to generate a dynamic perturbation on video clips that can mislead video classifiers.

Generative Models
The work (Kos et al., 2018) attacks the variational autoencoder (VAE) (Kingma & Welling, 2013) and VAE-GAN (Larsen et al., 2015). Both VAE and VAE-GAN use an encoder to project the input image x into a lower-dimensional latent representation z, and a decoder to reconstruct a new image x̂ from z. The reconstructed image should maintain the same semantics as the original image. In the attack setting of (Kos et al., 2018), the aim is to slightly perturb the input image x fed to the encoder, which causes the decoder to generate an image f_dec(f_enc(x)) with a different meaning from the input x. For example, on the MNIST dataset, the input image is a "1", and the reconstructed image is a "0".
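A rough sketch of one way such an attack can be set up is given below: perturb x so that its latent code moves toward that of a target image, making the decoder reconstruct the target's semantics. `encoder` is a hypothetical module returning the latent mean; Kos et al. study several attack losses (including attacks through the VAE-GAN discriminator), of which this latent-matching loss is only one variant.

```python
import torch

def latent_attack(encoder, x, x_target, eps=0.1, steps=100, lr=0.01):
    """Find a small perturbation of x whose latent code matches that of x_target,
    so the decoder reconstructs an image with the target's semantics (sketch only)."""
    z_target = encoder(x_target).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = ((encoder(x + delta) - z_target) ** 2).sum()   # pull the latent code toward the target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                           # keep the perturbation small
    return (x + delta).detach()
```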

Malware Detection
The existence of adversarial examples in safety-critical tasks, such as malware detection, deserves particular attention. The work (Grosse et al., 2016) builds a DNN model on the DREBIN dataset (Arp et al., 2014), which contains 120,000 Android application samples, of which over 5,000 are malware. The trained model has 97% accuracy, but malware samples can evade the classifier if attackers add fake features to them. Some other works (Hu & Tan, 2017;Anderson et al., 2016) consider using GANs (Goodfellow et al., 2014a) to generate adversarial malware.

Fingerprint Recognizer Attacks
Fingerprint recognition systems are among the most safety-critical fields where machine learning models are adopted, yet there are adversarial attacks undermining the reliability of these models. For example, fingerprint spoof attacks copy an authorized person's fingerprint and replicate it on special materials such as liquid latex or gelatin. Traditional fingerprint recognition techniques, especially minutiae-based models, fail to distinguish fingerprint images generated from different materials. Some works design a modified CNN to effectively detect such fingerprint spoof attacks.

Reinforcement Learning
Different from classification tasks, deep reinforcement learning (RL) aims to learn how to perform some human tasks, such as playing Atari 2600 games (Mnih et al., 2013) or playing Go (Silver et al., 2016b). For example, to play the Atari game Pong (Figure 14-left), the trained model takes as input the latest images of the game video (state x) and outputs a decision to move up or down (action y). The learned model can be viewed as a rule (policy π_θ) for winning the game (reward L(θ, x, y)). A simple sketch is x → y via π_θ, in parallel to classification tasks: x → y via f. The RL algorithms are trained to learn the parameters of π_θ.
The work (Huang et al., 2017) shows that deep reinforcement learning models are also vulnerable to adversarial examples. Their approach is inherited from FGSM (Goodfellow et al., 2014b): it takes a one-step gradient on the state x (the latest images of the game video) to craft a fake state x'. The policy's decision on x' can be totally useless for achieving the reward. Their results show that a slight perturbation of an RL model's state can cause a large difference in the model's decisions and performance. They show that Deep Q-Learning (Mnih et al., 2013), TRPO (Schulman et al., 2015) and A3C (Mnih et al., 2016) are all vulnerable to their attacks.
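A minimal PyTorch-style sketch of this state perturbation follows, assuming a policy network that maps a (batched) game frame to action logits; the perturbation increases the loss of the action the clean policy would have chosen.

```python
import torch
import torch.nn.functional as F

def fgsm_on_state(policy_net, state, eps=0.01):
    """Craft a perturbed game frame x' = x + eps * sign(grad) that lowers the
    probability of the action the policy would normally take."""
    state = state.detach().requires_grad_(True)
    logits = policy_net(state)
    action = logits.argmax(dim=-1)
    # increasing the loss of the originally chosen action degrades the policy
    loss = F.cross_entropy(logits, action)
    loss.backward()
    adv_state = state + eps * state.grad.sign()
    return adv_state.clamp(0, 1).detach()     # keep pixel values in a valid range
```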

Conclusion
In this survey, we give a systematic, categorical and comprehensive overview of recent works regarding adversarial examples and their countermeasures in multiple data domains. We summarize the studies from each section in chronological order in Appendix B, because these works are released with relatively high frequency in response to one another. The current state-of-the-art attacks will likely be neutralized by new defenses, and these defenses will subsequently be circumvented. We hope that our work can shed some light on the main ideas of adversarial learning and related applications in order to encourage progress in this field.