Adversarial Coreset Selection for Efficient Robust Training

It has been shown that neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training robust models against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks since it needs to construct adversarial examples for the entire training data at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for adversarial coreset selection. In particular, we show that the convergence bound is directly related to how well our coresets can approximate the gradient computed over the entire training data. Motivated by our theoretical analysis, we propose using this gradient approximation error as our adversarial coreset selection objective to reduce the training set size effectively. Once built, we run adversarial training over this subset of the training data. Unlike existing methods, our approach can be adapted to a wide variety of training objectives, including TRADES, ℓp-PGD, and Perceptual Adversarial Training. We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2-3 times while experiencing a slight degradation in the clean and robust accuracy.


Introduction
Neural networks have achieved great success in the past decade. Today, they are one of the primary candidates for solving a wide variety of machine learning tasks, from object detection and classification (He et al., 2016, Wu et al., 2019) to photo-realistic image generation (Karras et al., 2020, Vahdat and Kautz, 2020) and beyond. Despite their impressive performance, neural networks are vulnerable to adversarial attacks (Biggio et al., 2013, Szegedy et al., 2014): adding well-crafted, imperceptible perturbations to their input can change their output. This unexpected behavior of neural networks prevents their widespread deployment in safety-critical applications, including autonomous driving (Eykholt et al., 2018) and medical diagnosis (Ma et al., 2021). As such, training robust neural networks against adversarial attacks is of paramount importance and has gained ample attention.
Adversarial training is one of the most successful approaches in defending neural networks against adversarial attacks. This approach first constructs a perturbed version of the training data. Then, the neural network is optimized over these perturbed inputs instead of the clean samples. This procedure must be done iteratively, as the perturbations depend on the neural network weights. Since the weights are optimized during training, the perturbations also need to be adjusted for each data sample in every iteration.¹ Various adversarial training methods primarily differ in how they define and find the perturbed version of the input (Laidlaw et al., 2021, Madry et al., 2018, Zhang et al., 2019). However, they all require the repetitive construction of these perturbations during training, which is often cast as another non-linear optimization problem. Therefore, the time/computational complexity of adversarial training is much higher than that of vanilla training. In practice, neural networks require massive amounts of training data (Adadi, 2021) and need to be trained multiple times with various hyper-parameters to reach their best performance (Killamsetty et al., 2021a). Thus, reducing the time/computational complexity of adversarial training is critical to enabling the environmentally efficient application of robust neural networks in real-world scenarios (Schwartz et al., 2020, Strubell et al., 2019).
Fast Adversarial Training (FAT) (Wong et al., 2020) is a successful approach proposed for the efficient training of robust neural networks. Contrary to the common belief that building the perturbed versions of the inputs using the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) does not help in training arbitrary robust models (Madry et al., 2018, Tramèr et al., 2018), Wong et al. (2020) show that by carefully applying uniformly random initialization before the FGSM step, one can make this training approach work. Using FGSM to generate the perturbed input in a single step, combined with implementation tricks such as mixed precision and a cyclic learning rate (Smith, 2017), FAT can significantly reduce the training time of robust neural networks.
Despite its success, FAT may exhibit unexpected behaviors in different settings. For instance, it was shown that FAT suffers from catastrophic overfitting, where the robust accuracy during training suddenly drops to 0% (Andriushchenko and Flammarion, 2020, Wong et al., 2020). Another fundamental issue with FAT and its variations, such as GradAlign (Andriushchenko and Flammarion, 2020) and N-FGSM (de Jorge Aranda et al., 2022), is that they are specifically designed and implemented for ℓ∞ adversarial training. This is because FGSM, which is inherently an ℓ∞ perturbation generator, is at the heart of these methods. As a result, the quest for a unified approach that can reduce the time complexity of all types of adversarial training is not over.

¹ Note that adversarial training in the literature generally refers to a particular approach proposed by Madry et al. (2018). In this paper, we refer to any method that builds adversarial attacks around the training data and incorporates them into the training of the neural network as adversarial training. Using this taxonomy, methods such as TRADES (Zhang et al., 2019), ℓp-PGD (Madry et al., 2018), or Perceptual Adversarial Training (PAT) (Laidlaw et al., 2021) are all considered different versions of adversarial training.
Motivated by the limited scope of FAT, in this paper we take an important step toward finding a general yet principled approach for reducing the time complexity of adversarial training. We notice that the repetitive construction of adversarial examples for each data point is the main bottleneck of robust training. While this needs to be done iteratively, we speculate that perhaps we can find a subset of the training data that is more important to robust network optimization than the rest. Specifically, we ask the following research question: Can we train an adversarially robust neural network using a subset of the entire training data without sacrificing clean or robust accuracy?
This paper shows that the answer to this question is affirmative: we select a weighted subset of the data based on the neural network state and run weighted adversarial training only on this selected subset. To achieve this goal, we first theoretically analyze the convergence of adversarial subset selection under gradient descent for a few idealistic settings. Our study demonstrates that the convergence bound is directly related to the capability of the weighted subset in approximating the loss gradient over the entire training set. Motivated by this analysis, we propose using the gradient approximation error as our adversarial coreset selection objective for training robust neural networks. We then draw an elegant connection between adversarial training and vanilla coreset selection algorithms. In particular, we use Danskin's theorem to compute the required loss gradients over the entire training data and solve a respective subset selection objective. Afterward, training can be performed on this selected subset of the training data. In our approach, shown in Figure 1, adversarial coreset selection is only required every few epochs, effectively reducing the time complexity of robust training algorithms. We demonstrate how our proposed approach can be used as a general framework in conjunction with different adversarial training objectives, opening the door to a more principled approach for the efficient training of robust neural networks in a general setting. Our experimental results show that one can reduce the time complexity of various robust training objectives by 2-3 times without sacrificing too much clean and robust accuracy.
In summary, we make the following contributions:
• We propose a practical yet principled algorithm for the efficient training of robust neural networks based on adaptive coreset selection. To the best of our knowledge, we are the first to use coreset selection in adversarial training.
• We provide theoretical guarantees for the convergence of our adversarial coreset selection algorithm under different settings.
• Based on our theoretical study, we develop adversarial coreset selection for neural networks and show that our approach can be applied to a variety of robust learning objectives, including TRADES (Zhang et al., 2019), ℓp-PGD (Madry et al., 2018), and Perceptual (Laidlaw et al., 2021) Adversarial Training. Our approach encompasses a broader range of robust training objectives compared to the limited scope of the existing methods.
• Our experiments demonstrate that the proposed approach can result in a 2-3 fold reduction of the training time in adversarial training, with only a slight reduction in the clean and robust accuracy.

The rest of this paper is organized as follows.
In Section 2, we go over the preliminaries of our work and review the related work. We then propose our approach in Section 3. Next, we present and discuss our experimental results in Section 4. Finally, we conclude the paper in Section 5.

Preliminaries
In this section, we review the background related to our work.

Adversarial Training
Let D = {(x_i, y_i)}_{i=1}^{n} ⊂ X × Y denote a training dataset consisting of n i.i.d. samples. Each data point contains an input x_i from domain X and an associated label y_i taking one of k possible values Y = [k] = {1, 2, ..., k}. Without loss of generality, in this paper we focus on the image domain X. Furthermore, assume that f_θ : X → R^k denotes a neural network classifier with parameters θ that takes x ∈ X as input and maps it to a logit value f_θ(x) ∈ R^k. Then, training a neural network in its most general format can be written as the following minimization problem:

min_θ (1/n) Σ_{i=1}^{n} Φ(f_θ; x_i, y_i).  (1)

Here, Φ(f_θ; x, y) is a function that takes a data point (x, y) and a classifier f_θ as its inputs, and its output is a measure of discrepancy between the input x and its ground-truth label y.

Vanilla Training
In the case of vanilla training, the function Φ is a simple evaluation of an appropriate loss function over the neural network output f_θ(x) and the ground-truth label y. In other words, for vanilla training we have

Φ(f_θ; x, y) = L_CE(f_θ(x), y),  (2)

where L_CE(•, •) is the cross-entropy loss.

FGSM, ℓp-PGD, and Perceptual Adversarial Training
In adversarial training, the objective is itself an optimization problem:

Φ(f_θ; x, y) = max_{x̃: d(x̃, x) ≤ ε} L_CE(f_θ(x̃), y),  (3)

where d(•, •) is an appropriate distance measure over the image domain X, and ε denotes a scalar. The constraint over d(x̃, x) is used to ensure visual similarity between x̃ and x. It can be shown that solving Eq. (3) amounts to finding an adversarial example x̃ for the clean sample x (Madry et al., 2018). Different choices of the visual similarity measure d(•, •) and solvers for Eq. (3) result in different adversarial training objectives:
• FGSM (Goodfellow et al., 2015) assumes that d(x̃, x) = ∥x̃ − x∥∞. Using this ℓ∞ assumption, the solution to Eq. (3) is computed using one iteration of gradient ascent.
• ℓp-PGD (Madry et al., 2018) utilizes ℓp norms as a proxy for the visual similarity d(•, •). Then, several steps of projected gradient ascent are taken to solve Eq. (3).
• Finally, Perceptual Adversarial Training (PAT) (Laidlaw et al., 2021) uses the Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) as its distance measure. Laidlaw et al. (2021) propose solving the inner maximization of Eq. (3) using either projected gradient ascent or a Lagrangian relaxation.
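To make the distinction between these solvers concrete, the following is a minimal sketch of an ℓ∞ projected gradient ascent attack on a toy linear softmax classifier. The model, step size, and iteration count are illustrative assumptions, not the paper's settings; FGSM corresponds to a single signed step without the projection loop.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ce_loss(W, x, y):
    # cross-entropy of a toy linear softmax classifier standing in for f_theta
    return -np.log(softmax(W @ x)[y])

def ce_input_grad(W, x, y):
    # analytic gradient of the cross-entropy w.r.t. the input x
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def linf_pgd(W, x, y, eps=0.1, alpha=0.02, steps=10, rng=None):
    """Multi-step projected gradient ascent inside the l_inf ball of radius eps.
    A single iteration with alpha = eps and no random start recovers FGSM."""
    rng = rng or np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random initialization
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(ce_input_grad(W, x_adv, y))
        x_adv = x + np.clip(x_adv - x, -eps, eps)     # project back onto the ball
    return x_adv
```

The returned point stays within the ε-ball around the clean input while (approximately) maximizing the loss, which is exactly the role of the inner maximization in Eq. (3).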

TRADES Adversarial Training
This approach uses a combination of Eqs. (2) and (3). The intuition behind TRADES (Zhang et al., 2019) is to create a trade-off between clean and robust accuracy. In particular, the objective is written as

Φ(f_θ; x, y) = L_CE(f_θ(x), y) + λ · max_{x̃} KL(f_θ(x) ∥ f_θ(x̃)),  (4)

such that d(x̃, x) ≤ ε. Here, λ is a regularization parameter that controls the trade-off.
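As a concrete reading of this objective, the sketch below evaluates the TRADES loss for a single sample given precomputed clean and perturbed logits. The placement of λ follows the trade-off form stated above, and the function names and numbers are purely illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl_div(p, q):
    # KL divergence between two discrete probability distributions
    return float(np.sum(p * (np.log(p) - np.log(q))))

def trades_loss(logits_clean, logits_adv, y, lam=6.0):
    """Clean cross-entropy plus a lam-weighted KL term between the clean
    and perturbed predictions (the robustness regularizer of TRADES)."""
    p_clean = softmax(logits_clean)
    p_adv = softmax(logits_adv)
    ce = -np.log(p_clean[y])
    return float(ce + lam * kl_div(p_clean, p_adv))
```

When the perturbed logits coincide with the clean ones, the KL term vanishes and the loss reduces to plain cross-entropy; increasing λ penalizes any divergence between clean and perturbed predictions more strongly, which is the mechanism behind the clean/robust trade-off.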

Coreset Selection
Coreset selection (also referred to as adaptive data subset selection) attempts to find a weighted subset of the data that can approximate specific attributes of the entire population (Feldman, 2020). Coreset selection algorithms start by defining a criterion based on which the subset of interest is found:

S*, γ* = argmin_{S ⊆ V, γ} C(S, γ).  (5)

In this definition, S is a subset of the entire data V, and γ denotes the weights associated with each sample in the subset S. Moreover, C(•, •) denotes a selection criterion based on which the coreset S* and its weights γ* are found. Once the coreset is found, one can work with these samples to represent the entire dataset. Figure 2 depicts this definition of coreset selection.
Traditionally, coreset selection has been used for different machine learning tasks such as k-means and k-medians (Har-Peled and Mazumdar, 2004), Naïve Bayes and nearest neighbor classifiers (Wei et al., 2015), and Bayesian inference (Campbell and Broderick, 2018). Recently, coreset selection algorithms have been developed for neural network training (Killamsetty et al., 2021a,b,c, Mirzasoleiman et al., 2020a,b). The main idea behind such methods is often to approximate the full gradient using a subset of the training data.
Existing coreset selection algorithms can only be used for the vanilla training of neural networks.As such, they still suffer from adversarial vulnerability.This paper extends coreset selection algorithms to robust neural network training and shows how they can be adapted to various robust training objectives.

Proposed Method
The main bottleneck in the time/computational complexity of adversarial training stems from constructing adversarial examples for the entire training set at each epoch. FAT (Wong et al., 2020) tries to eliminate this issue by using FGSM as its adversarial example generator. However, this simplification (1) may lead to catastrophic overfitting (Andriushchenko and Flammarion, 2020, Wong et al., 2020), and (2) is not easily applicable to different types of adversarial training, as FGSM is designed explicitly for ℓ∞ attacks.
Instead of using a faster adversarial example generator, here we take a different, orthogonal path and try to reduce the training set size effectively. This way, the original adversarial training algorithm can still be used on this smaller subset of the training data. This approach can reduce the time/computational complexity while optimizing a similar objective to the initial training. In this sense, it leads to a unified method that can be used alongside various types of adversarial training objectives, including the ones that already exist and the ones that will be proposed in the future.
The main hurdle in materializing this idea is the following question: How should we select this subset of the training data while minimizing the impact on the clean or robust accuracy?
To answer this question, we next provide convergence guarantees for adversarial training using a subset of the training data. This analysis lays the foundation for our adversarial coreset selection objective in the subsequent sections.

Convergence Guarantees
This section provides theoretical insights into our proposed adversarial coreset selection. Specifically, we aim to find a convergence bound for adversarial training over a subset of the data and see how it relates to the optimal solution.
Let L(θ) denote the adversarial training objective over the entire training dataset such that²

L(θ) = Σ_{i∈V} L(θ; x_i),  (6)

where L(θ; x_i) is the evaluation of the loss over input x_i with network parameters θ.³ The goal is to find the optimal set of parameters θ such that this objective is minimized. To optimize the parameters θ of the underlying learning algorithm, we use gradient descent. Let t = 0, 1, ..., T − 1 denote the current epoch. Then, the gradient descent update can be written as

θ_{t+1} = θ_t − α_t ∇_θ L(θ_t),  (7)

where α_t is the learning rate.
As demonstrated in Eq. (5), the ultimate goal of coreset selection is to find a subset S ⊆ V of the training data with weights γ to approximate certain behaviors of the entire population V. In our case, the aim is to successfully train a robust neural network over this weighted dataset using

L_γ(θ) = Σ_{j∈S} γ_j L(θ; x_j),  (8)

which is the weighted loss over the coreset S.⁴ Once a coreset S is found, we can replace the gradient descent update rule in Eq. (7) with

θ_{t+1} = θ_t − α_t ∇_θ L_{γ^t}(θ_t),  (9)

where L_{γ^t} is the weighted empirical loss over the coreset S^t at iteration t.
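The weighted update rule above can be sketched directly. Here, per-sample gradients are given as rows of a matrix (in adversarial training they would be evaluated at the perturbed inputs), and all names are illustrative.

```python
import numpy as np

def coreset_gd_step(theta, per_sample_grads, coreset_idx, gamma, alpha):
    """One gradient descent step using only the weighted coreset gradient:
    theta_{t+1} = theta_t - alpha * sum_j gamma_j * grad_j, j in the coreset."""
    g = np.zeros_like(theta)
    for w, j in zip(gamma, coreset_idx):
        g += w * per_sample_grads[j]
    return theta - alpha * g
```

If the weighted coreset gradient matches the full gradient exactly, this step coincides with full-data gradient descent; the gradient approximation error analyzed next measures exactly this gap.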
The following theorem extends the convergence guarantees of Killamsetty et al. (2021a) to adversarial training.
Theorem 1 Let γ^t and S^t denote the weights and subset derived by any adversarial coreset selection algorithm at iteration t of full gradient descent. Also, let θ* be the optimal model parameters, let L be a convex loss function with respect to θ, and assume that the parameters are bounded such that ∥θ − θ*∥ ≤ ∆. Moreover, let us define the gradient approximation error at iteration t as

Err(γ^t, S^t; θ_t) = ∥ Σ_{j∈S^t} γ_j^t ∇_θ L(θ_t; x_j) − ∇_θ L(θ_t) ∥.  (10)

Then, for t = 0, 1, ..., T − 1, the following guarantees hold:
(1) For a Lipschitz continuous loss function L with parameter σ and constant learning rate α = ∆/(σ√T), we have

min_t L(θ_t) − L(θ*) ≤ (∆σ)/√T + (∆/T) Σ_{t=0}^{T−1} Err(γ^t, S^t; θ_t).

(2) Moreover, for a Lipschitz continuous loss L with parameter σ that is also strongly convex with parameter µ, by setting a learning rate α_t = 2/(nµ(1+t)), we have

min_t L(θ_t) − L(θ*) ≤ (2σ²)/(nµ(T+1)) + Σ_{t=0}^{T−1} (2∆t)/(T(T+1)) Err(γ^t, S^t; θ_t),

where n is the total number of training data.
Proof Sketch We first draw a connection between the Lipschitz continuity and strong convexity of the loss function L and those of its max counterpart max L. Then, we exploit these lemmas as well as Danskin's theorem (Theorem 2) to provide the convergence guarantees. For more details, please see Appendix B. □

Coreset Selection for Efficient Adversarial Training
As our analysis in Theorem 1 indicates, the convergence bound consists of two terms: an irreducible noise term and an additional term consisting of gradient approximation errors. Motivated by our analysis for this idealistic setting, we set our adversarial coreset selection objective to minimize the gradient approximation error.
In particular, let us assume that we have a neural network that we aim to robustly train using

min_θ Σ_{i∈V} Φ(f_θ; x_i, y_i),  (11)

where V denotes the entire training data, and Φ(•) takes the form of either Eq. (3) or Eq. (4). We saw that to have a tight convergence bound, we need a subset of data that can minimize the gradient approximation error. This choice also makes intuitive sense: since the gradient contains the relevant information for training a neural network using gradient descent, we must attempt to find a subset of the data that can approximate the full gradient.
As such, we set the adversarial coreset selection criterion to

S*, γ* = argmin_{S⊆V, γ} ∥ Σ_{j∈S} γ_j ∇_θ Φ(f_θ; x_j, y_j) − Σ_{i∈V} ∇_θ Φ(f_θ; x_i, y_i) ∥,  (12)

where S* ⊆ V is the coreset, and the γ*_j's are the weights of each sample in the coreset. Once the coreset is found, instead of training the neural network using Eq. (11), we can optimize it just over the coreset using the weighted training objective

min_θ Σ_{j∈S*} γ*_j Φ(f_θ; x_j, y_j).  (13)

It can be shown that solving Eq. (12) is NP-hard (Mirzasoleiman et al., 2020a,b). Roughly speaking, various coreset selection methods differ in how they approximate the solution of the aforementioned objective. For instance, Craig (Mirzasoleiman et al., 2020a) casts this objective as a submodular set cover problem and uses existing greedy solvers to get an approximate solution.
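To illustrate the flavor of such greedy solvers, here is a simplified matching-pursuit-style selection over per-sample gradient rows. It is a toy stand-in for Craig and GradMatch, not their actual implementations (those rely on submodular cover and orthogonal matching pursuit with additional safeguards), and the function names are illustrative.

```python
import numpy as np

def greedy_gradient_match(G, k):
    """Select k rows of G (per-sample gradients) and weights so that the
    weighted sum approximates the full-gradient sum, in the spirit of Eq. (12).
    Returns (indices, weights)."""
    target = G.sum(axis=0)                 # full gradient to be approximated
    selected, weights = [], None
    residual = target.copy()
    for _ in range(k):
        scores = G @ residual              # correlation with current residual
        scores[selected] = -np.inf         # forbid re-selection
        selected.append(int(np.argmax(scores)))
        A = G[selected].T                  # d x |S| matrix of chosen gradients
        weights, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ weights
    return selected, weights
```

For a toy set of three gradients where two of them span the full-gradient sum, two selected samples already represent the full gradient exactly.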
As another example, GradMatch (Killamsetty et al., 2021a) analyzes the convergence of stochastic gradient descent using adaptive data subset selection. Based on this study, Killamsetty et al. (2021a) propose using Orthogonal Matching Pursuit (OMP) (Elenberg et al., 2016, Pati et al., 1993) as a greedy solver of the data selection objective. More information about these methods is provided in Appendix A.
The issue with the aforementioned coreset selection methods is that they are designed explicitly for the vanilla training of neural networks (see Figure 1b), and they do not reflect the requirements of adversarial training. As such, we should modify these methods to make them suitable for our purpose of robust neural network training. Meanwhile, we should also consider the fact that the field of coreset selection is still evolving. Thus, we aim to find a general modification that can later be used alongside newer versions of greedy coreset selection algorithms.
We notice that the various coreset selection methods proposed for vanilla neural network training only differ in their choice of greedy solver. Therefore, we narrow down the changes we want to make to the first step of coreset selection: gradient computation. Then, existing greedy solvers can be used to find the subset of training data that we are looking for. To this end, we draw a connection between coreset selection methods and adversarial training using Danskin's theorem, as outlined next. Our analysis shows that for adversarial coreset selection, one needs to add a pre-processing step where adversarial attacks for the raw training data are computed (see Figure 1c).

From Vanilla to Adversarial Coreset Selection
To construct the coreset selection objective given in Eq. (12), we need to compute the loss gradient with respect to the neural network weights. Once done, we can use existing greedy solvers to find the solution. The gradient computation needs to be performed for the entire training set. In particular, using our notation from Section 2.1, this step can be written as

∇_θ Φ(f_θ; x_i, y_i)  ∀ (x_i, y_i) ∈ V,  (14)

where V denotes the training set.
For vanilla neural network training (see Section 2.1), the above gradient is simply equal to ∇_θ L_CE(f_θ(x_i), y_i), which can be computed using standard backpropagation. In contrast, for the adversarial training objectives in Eqs. (3) and (4), this gradient requires taking the partial derivative of a maximization objective. To this end, we use the famous Danskin's theorem (Danskin, 1967), as stated below.
Theorem 2 (Theorem A.1 in Madry et al. (2018)) Let K be a nonempty compact topological space, and let L : R^m × K → R be such that L(•, δ) is differentiable for every δ ∈ K and ∇_θ L(θ, δ) is continuous on R^m × K. Also, define φ(θ) = max_{δ∈K} L(θ, δ). Then, φ(θ) is locally Lipschitz continuous and directionally differentiable, and its directional derivatives along vector h satisfy

φ′(θ, h) = sup_{δ ∈ K*(θ)} h^⊤ ∇_θ L(θ, δ),   K*(θ) = {δ ∈ argmax_{δ∈K} L(θ, δ)}.

In particular, if for some θ the set K*(θ) = {δ*_θ} is a singleton, then φ is differentiable at θ and ∇φ(θ) = ∇_θ L(θ, δ*_θ).

In summary, Theorem 2 indicates how to take the gradient of a max-function. To this end, it suffices to (1) find the maximizer, and (2) evaluate the normal gradient at this point. Now that we have stated Danskin's theorem, we are ready to show how it can provide the connection between vanilla coreset selection and the adversarial training objectives of Eqs. (3) and (4). We show this for the two cases of ℓp-PGD/PAT and TRADES, but the same reasoning can also be used for any other robust training objective.

Case 1 (ℓp-PGD and Perceptual Adversarial Training)

Going back to Eq. (14), we need to compute this gradient term for our coreset selection objective when Φ is given by Eq. (3). In particular, we need to compute

∇_θ Φ(f_θ; x, y) = ∇_θ max_{x̃} L_CE(f_θ(x̃), y),  (15)

under the constraint d(x̃, x) ≤ ε for every training sample. Based on Danskin's theorem, we deduce

∇_θ max_{x̃} L_CE(f_θ(x̃), y) = ∇_θ L_CE(f_θ(x*), y),  (16)

where x* is the solution to

x* = argmax_{x̃: d(x̃, x) ≤ ε} L_CE(f_θ(x̃), y).  (17)

The conditions under which Danskin's theorem holds might not be satisfied for neural networks in general. This is due to the presence of functions with discontinuous gradients, such as the ReLU activation, in neural networks. More importantly, finding the exact solution of Eq. (17) is not straightforward as neural networks are highly non-convex. Usually, the exact solution x* is replaced with its approximation: an adversarial example generated under the Eq. (17) objective (Kolter and Madry, 2018). Based on this approximation, we can re-write Eq. (16) as

∇_θ Φ(f_θ; x, y) ≈ ∇_θ L_CE(f_θ(x_adv), y).  (18)

In other words, to perform coreset selection for ℓp-PGD (Madry et al., 2018) and Perceptual (Laidlaw et al., 2021) Adversarial Training, one needs to add a pre-processing step to the gradient computation. At this step, adversarial examples for the entire training set must be constructed. Then, the coresets can be built as in vanilla neural network training.
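The Case 1 recipe (perturb first, then differentiate at the perturbed point) can be sketched on a toy linear softmax model. A single signed-gradient step stands in for the full inner maximization of Eq. (17), and all names and hyper-parameters are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def param_grad(W, x, y):
    # dCE/dW for a linear softmax model: (softmax(Wx) - onehot(y)) outer x
    p = softmax(W @ x)
    p[y] -= 1.0
    return np.outer(p, x)

def input_grad(W, x, y):
    # dCE/dx, used to construct the perturbation
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def adversarial_coreset_grads(W, X, Y, eps=0.1):
    """Pre-processing step of adversarial coreset selection: perturb every
    sample (one signed step here, approximating Eq. (17)), then evaluate the
    parameter gradient at the perturbed point, as in Eq. (18)."""
    rows = []
    for x, y in zip(X, Y):
        x_adv = x + eps * np.sign(input_grad(W, x, y))
        rows.append(param_grad(W, x_adv, y).ravel())
    return np.stack(rows)  # one flattened gradient per training sample
```

The resulting matrix of per-sample gradients is exactly what a greedy solver consumes to build the coreset; the only difference from the vanilla pipeline is that the gradients are evaluated at perturbed inputs.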

Case 2 (TRADES Adversarial Training)
For TRADES (Zhang et al., 2019), the gradient computation is slightly different as the objective in Eq. (4) consists of two terms. In this case, the gradient can be written as

∇_θ Φ(f_θ; x, y) = ∇_θ L_CE(f_θ(x), y) + λ ∇_θ max_{x̃: d(x̃, x) ≤ ε} KL(f_θ(x) ∥ f_θ(x̃)).  (19)

The first term is the normal gradient of the neural network. For the second term, we apply Danskin's theorem to obtain

∇_θ max_{x̃} KL(f_θ(x) ∥ f_θ(x̃)) ≈ ∇_θ KL(f_θ(x) ∥ f_θ(x_adv)),  (20)

where x_adv is an approximate solution to

x_adv ≈ argmax_{x̃: d(x̃, x) ≤ ε} KL(f_θ(x) ∥ f_θ(x̃)).  (21)

Then, we compute the second gradient term in Eq. (20) using the multi-variable chain rule (see Section B.1). We can write the final TRADES gradient as

∇_θ Φ(f_θ; x, y) ≈ ∇_θ L_CE(f_θ(x), y) + λ [∇_θ KL(freeze(f_θ(x)) ∥ f_θ(x_adv)) + ∇_θ KL(f_θ(x) ∥ freeze(f_θ(x_adv)))],  (22)

where freeze(•) stops the gradients from backpropagating through its argument function. Having found the loss gradients ∇_θ Φ(f_θ; x_i, y_i) for ℓp-PGD, PAT (Case 1), and TRADES (Case 2), we can construct Eq. (12) and use existing greedy solvers like Craig (Mirzasoleiman et al., 2020a) or GradMatch (Killamsetty et al., 2021a) to find the coreset. Conceptually, adversarial coreset selection amounts to adding a pre-processing step where we need to build perturbed versions of the training data using their respective objectives in Eqs. (17) and (21). Afterward, greedy subset selection algorithms are used to construct the coresets based on the value of the gradients. Finally, having selected the coreset data, one can run a weighted adversarial training only on the data that remains in the coreset:

min_θ Σ_{j∈S} γ_j Φ(f_θ; x_j, y_j).  (23)

As can be seen, we are not changing the essence of the training objective in this process. We are just reducing the training set size to enhance the computational efficiency of our proposed solution, and as such, we can use it along any adversarial training objective.
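A small numerical check helps unpack the Danskin step for TRADES: once x_adv is held fixed (the role of the approximation above), the θ-gradient of the full objective can be computed directly, and in the special case x_adv = x it collapses to the plain cross-entropy gradient, since the KL term is then identically zero for every θ. The linear softmax model and the finite-difference gradient below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def trades_obj(W, x, x_adv, y, lam):
    # per-sample TRADES objective with x_adv treated as a fixed constant
    p, q = softmax(W @ x), softmax(W @ x_adv)
    return -np.log(p[y]) + lam * float(np.sum(p * (np.log(p) - np.log(q))))

def trades_grad_fd(W, x, x_adv, y, lam, h=1e-6):
    """Central finite-difference gradient w.r.t. W; x_adv does not vary with W,
    mirroring the Danskin approximation used for the inner maximization."""
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            E = np.zeros_like(W)
            E[i, j] = h
            g[i, j] = (trades_obj(W + E, x, x_adv, y, lam)
                       - trades_obj(W - E, x, x_adv, y, lam)) / (2 * h)
    return g
```

With a genuinely perturbed x_adv, both KL chain-rule contributions (through the clean and the perturbed branch) appear in this gradient, which is what the freeze(•) decomposition makes explicit in an autodiff framework.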

Practical Considerations
Since coreset selection depends on the current values of the neural network weights, it is crucial to update the coresets as the training progresses.
Prior work (Killamsetty et al., 2021a,b) has shown that this selection needs to be done every T epochs, where T is usually greater than 15. Also, we employ small yet critical practical changes while using coreset selection to increase efficiency. We summarize these practical tweaks below. Further details can be found in Killamsetty et al. (2021a) and Mirzasoleiman et al. (2020a).
Gradient Approximation. As we saw, both Eqs. (18) and (22) require computation of the loss gradient with respect to the neural network weights. This is equal to backpropagation through the entire neural network, which is inefficient. Instead, it is common to replace the exact gradients in Eqs. (18) and (22) with their last-layer approximation (Katharopoulos and Fleuret, 2018, Killamsetty et al., 2021a, Mirzasoleiman et al., 2020a).

[Algorithm 1: Adversarial coreset selection. Parameters: learning rate α, total epochs E, warm-start coefficient κ, coreset update period T, batch size b, coreset size k, perturbation bound ε.]

Warm-Start. During the initial epochs, training is warm-started using the entire dataset. Afterward, the coreset selection is activated, and adversarial training is only performed using the coreset data.
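The last-layer approximation mentioned above can be sketched concretely: for a final linear layer followed by softmax cross-entropy, the gradient with respect to that layer's weights has the closed form (softmax(logits) − onehot(y)) ⊗ features, so no backpropagation through earlier layers is needed. The helper names below are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def last_layer_grad(features, logits, y):
    """Gradient of the cross-entropy w.r.t. the final linear layer's weights,
    computed from one forward pass, without any further backpropagation."""
    p = softmax(logits)
    p[y] -= 1.0                     # softmax(logits) - onehot(y)
    return np.outer(p, features)    # shape: (num_classes, feature_dim)
```

In adversarial coreset selection, the features and logits would come from a forward pass on the perturbed inputs, and the resulting per-sample rows are what the greedy solver consumes.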

Final Algorithm
Figure 1 and Algorithm 1 summarize our coreset selection approach for adversarial training. As can be seen, our proposed method is a generic and principled approach, in contrast to existing methods such as FAT (Wong et al., 2020). In particular, our approach provides the following advantages compared to existing methods:
1. The proposed approach does not involve algorithmic-level manipulations or dependency on specific training attributes such as an ℓ∞ bound or a cyclic learning rate. Also, it controls the training speed through the coreset size, which can be specified solely based on the available computational resources.
2. The simplicity of our method makes it compatible with any existing/future adversarial training objectives. Furthermore, as we will see in Section 4, our approach can be combined with any greedy coreset selection algorithm to deliver robust neural networks.
These characteristics are important as they increase the likelihood of our proposed method being applied to robust neural network training, regardless of the training objective. This starkly contrasts with existing methods that focus solely on a particular training objective.
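Putting the pieces together, the following is a compact, toy end-to-end sketch of the training loop of Algorithm 1 on a linear softmax model: every T epochs the coreset is refreshed from per-sample gradients at perturbed inputs, and in between, weighted adversarial updates run on the coreset only. The single-step perturbation, the simplified greedy solver (with weights clipped nonnegative for stability), and all hyper-parameters are simplifying assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def adv_param_grad(W, x, y, eps):
    """One-step perturbation of x, then dCE/dW at the perturbed point."""
    p = softmax(W @ x)
    p[y] -= 1.0
    x_adv = x + eps * np.sign(W.T @ p)   # signed-gradient inner step
    q = softmax(W @ x_adv)
    q[y] -= 1.0
    return np.outer(q, x_adv)

def select_coreset(G, k):
    """Greedy gradient matching: a weighted subset whose gradient sum
    approximates the full-gradient sum."""
    target, sel = G.sum(axis=0), []
    w = np.array([])
    for _ in range(k):
        residual = target - (G[sel].T @ w if sel else 0.0)
        scores = G @ residual
        scores[sel] = -np.inf            # forbid re-selection
        sel.append(int(np.argmax(scores)))
        w, *_ = np.linalg.lstsq(G[sel].T, target, rcond=None)
    return sel, np.clip(w, 0.0, None)

def adversarial_coreset_training(X, Y, num_classes, epochs=30, T=10,
                                 k=8, eps=0.05, lr=0.05):
    W = np.zeros((num_classes, X.shape[1]))
    for t in range(epochs):
        if t % T == 0:                   # refresh the coreset every T epochs
            G = np.stack([adv_param_grad(W, x, y, eps).ravel()
                          for x, y in zip(X, Y)])
            sel, gamma = select_coreset(G, k)
        for j, g_j in zip(sel, gamma):   # weighted adversarial updates
            W -= lr * g_j * adv_param_grad(W, X[j], Y[j], eps)
    return W
```

On a linearly separable toy problem, this loop trains an accurate classifier while touching only k of the samples per epoch between coreset refreshes, which is the source of the speed-up reported in the experiments.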

Experimental Results
In this section, we present our experimental results. We show how our proposed approach can efficiently reduce the training time of various robust objectives in different settings. To this end, we train our models using TRADES (Zhang et al., 2019), ℓp-PGD (Madry et al., 2018), and PAT (Laidlaw et al., 2021) on the CIFAR-10 (Krizhevsky and Hinton, 2009), SVHN (Netzer et al., 2011), and ImageNet-100 datasets.

TRADES and ℓp-PGD Robust Training
In our first set of experiments, we train well-known neural network classifiers on the CIFAR-10, SVHN, and ImageNet-100 datasets using the TRADES, ℓ∞-PGD, and ℓ2-PGD adversarial training objectives. In each case, we set the training hyper-parameters, such as the learning rate, the number of epochs, and the attack parameters. Then, we train the network using the entire training data and using our adversarial coreset selection approach.
For our approach, we use batch-wise versions of Craig (Mirzasoleiman et al., 2020a) and GradMatch (Killamsetty et al., 2021a) with warm-start. We set the coreset size (the percentage of the training data to be selected) to 50% for CIFAR-10 and ImageNet-100, and to 30% for SVHN, to get a reasonable balance between accuracy and training time. We report the clean and robust accuracy (in %) as well as the total training time (in minutes) in Table 1. For our approach, we also report the difference from full training in parentheses. In each case, we evaluate the robust accuracy using an attack with similar attributes to the training objective.
As can be seen in Table 1, in most cases we reduce the training time by more than a factor of two while keeping the clean and robust accuracy almost intact. Note that in these experiments, all the training attributes, such as the hyper-parameters and the learning rate scheduler, are the same among the different training schemes. This is important since we want to clearly show the relative boost in performance that one can achieve just by using coreset selection. Nonetheless, it is likely that by tweaking the hyper-parameters for our approach, one can obtain even better results in terms of clean and robust accuracy.

Perceptual Adversarial Training

Laidlaw et al. (2021) propose two versions of the perceptual attack for solving the inner maximization of PAT. The first version uses PGD, and the second is a relaxation of the original problem using the Lagrangian form. We refer to these two versions as PPGD (Perceptual PGD) and LPA (Lagrangian Perceptual Attack), respectively. Laidlaw et al. (2021) then propose utilizing a fast version of LPA to enable its efficient use in adversarial training. More information on this approach can be found in Laidlaw et al. (2021).
For our next set of experiments, we show how our approach can be adapted to this less common training objective. This is done to showcase the compatibility of our proposed method with different training objectives, as opposed to existing methods that are carefully tuned for a particular training objective. To this end, we train ResNet-50 classifiers using Fast-LPA on the CIFAR-10 and ImageNet-12 datasets. As in our previous experiments, we fix the hyper-parameters of the training and then train the models using the entire training data and using our adversarial coreset selection method. For our method, we use batch-wise versions of Craig (Mirzasoleiman et al., 2020a) and GradMatch (Killamsetty et al., 2021a) with warm-start. The coreset size for CIFAR-10 and ImageNet-12 was set to 40% and 50%, respectively. As in Laidlaw et al. (2021), we measure the performance of the trained models against attacks unseen during training, as well as against the two variants of the perceptual attack. The unseen attacks for each dataset were selected in a similar manner to Laidlaw et al. (2021), and the attack parameters can be found in Appendix C. We also record the total training time taken by each method.
Table 2 summarizes our results on PAT using Fast-LPA (full results can be found in Appendix D). As seen, our adversarial coreset selection approach delivers a competitive performance in terms of clean and average unseen-attack accuracy while reducing the training time by at least a factor of two. These results indicate the flexibility of our adversarial coreset selection approach, which can be combined with various objectives. This is due to the orthogonality of the proposed approach to the existing efficient adversarial training methods. In this case, we can make Fast-LPA even faster by using our approach.

Compatibility with Existing Methods
To showcase that our adversarial coreset selection approach is complementary to existing methods, we integrate it with two baselines that aim to improve the efficiency of adversarial training.
Early Termination. Going through our results in Tables 1 and 2, one might wonder what would happen if we decrease the number of training epochs by half. To perform this experiment, we select the WideResNet-28-10 architecture and train robust neural networks over the CIFAR-10 and SVHN datasets. We set all our hyper-parameters in a similar manner to the ones used for the experiments in Table 1, and only halve the number of training epochs. To make sure that the learning rate is also comparable, we halve the learning rate scheduler epochs as well. Then, we train the neural networks using ℓ∞- and ℓ2-PGD adversarial training.
Table 3 shows our results compared to the ones reported in Table 1. As can be seen, adversarial coreset selection obtains a performance similar to using the entire data while requiring 2–3 times less training time.
Fast Adversarial Training. Additionally, we integrate adversarial coreset selection with a stable version of Fast Adversarial Training (FAT) (Wong et al., 2020) that does not use a cyclic learning rate. Specifically, we train a neural network using FAT (Wong et al., 2020), and then add adversarial coreset selection to this approach and record the training time as well as the clean and robust accuracy. We run the experiments on the CIFAR-10 dataset and train a ResNet-18 for each case. We set our method's coreset size to 50%. The results are shown in Table 4. As seen, our approach can easily be combined with existing methods to deliver faster training. This is due to the orthogonality of our approach to existing methods, which we discussed previously.
Moreover, we show that adversarial coreset selection gives a better approximation to ℓ∞-PGD adversarial training than using FGSM (Goodfellow et al., 2015) as done in FAT (Wong et al., 2020). To this end, we use our adversarial GradMatch to train neural networks with the original ℓ∞-PGD objective, and we also train these networks using FAT (Wong et al., 2020), which uses FGSM. We train neural networks with a perturbation norm of ∥ε∥∞ ≤ 8. Then, we evaluate the trained networks against PGD-50 adversarial attacks with different attack strengths to see how each network generalizes to unseen perturbations. As seen in Figure 3, adversarial coreset selection is a closer approximation to ℓ∞-PGD than FAT (Wong et al., 2020). This indicates the success of the proposed approach in retaining the characteristics of the original objective, as opposed to existing methods like (Andriushchenko and Flammarion, 2020; Wong et al., 2020).

Fig. 3: Robust accuracy as a function of the ℓ∞ attack norm. We train neural networks with a perturbation norm of ∥ε∥∞ ≤ 8 on CIFAR-10. At inference, we evaluate the robust accuracy against PGD-50 with various attack strengths.
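To make the shape of this comparison concrete, the sketch below computes robust accuracy as a function of the ℓ∞ budget for a toy linear classifier, where the worst-case perturbation is available in closed form (the margin y·(w·x) shrinks by exactly ε·∥w∥₁). The model, data, and `robust_accuracy` helper are hypothetical stand-ins for illustration, not the paper's networks or attack code:

```python
# Sketch: robust accuracy of a toy linear classifier sign(w.x) as the
# l_inf attack budget eps grows. For a linear model, the worst-case
# l_inf perturbation of radius eps reduces the margin y*(w.x) by
# exactly eps*||w||_1, so robust accuracy has a closed form. This only
# illustrates the shape of curves like Fig. 3.

def robust_accuracy(w, data, eps):
    """Fraction of (x, y) pairs still classified correctly under the
    worst-case l_inf perturbation of radius eps."""
    l1 = sum(abs(wi) for wi in w)
    correct = 0
    for x, y in data:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin > eps * l1:  # worst case: margin shrinks by eps*||w||_1
            correct += 1
    return correct / len(data)

w = [1.0, -2.0, 0.5]
data = [([1.0, -1.0, 0.0], +1), ([0.5, -0.5, 1.0], +1),
        ([-1.0, 1.0, 0.0], -1), ([0.2, 0.3, -0.1], -1)]

accs = [robust_accuracy(w, data, eps) for eps in (0.0, 0.2, 0.5, 1.0)]
# Robust accuracy can only decrease as the attack budget grows.
assert all(a >= b for a, b in zip(accs, accs[1:]))
```

Sweeping ε like this reproduces, in miniature, the kind of monotonically decaying robustness curve that Figure 3 plots for the trained networks.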

Ablation Studies
In this section, we perform a few ablation studies to examine the effectiveness of our adversarial coreset selection method. In our first set of experiments, we compare random data selection with adversarial GradMatch. Figure 4 shows that for any given coreset size, our adversarial coreset selection method results in a lower robust error. Furthermore, we vary the warm-start epochs for a fixed coreset size of 50%. The proposed method is not very sensitive to the number of warm-start epochs, although a longer warm-start is generally beneficial.

Table 5: Performance of ℓ∞-PGD. In "Half-Half", we mix half adversarial coreset samples with another half of clean samples and train a neural network similar to (Tsipras et al., 2019). In "ONLY-Core", we use only the adversarial coreset samples. Settings are given in Table C2. The results are averaged over 5 runs.
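The intuition behind the random-vs-GradMatch comparison can be sketched numerically: the coreset objective prefers subsets whose average gradient tracks the full-data gradient, which a greedy matcher approximates far better than a random draw of the same size. The `greedy_coreset` routine below is a deliberately simplified, uniform-weight stand-in for GradMatch (which uses orthogonal matching pursuit with learned weights), run on synthetic 2-D per-example gradients:

```python
# Sketch: greedy gradient matching vs. random selection on synthetic
# per-example gradients. The greedy subset's average gradient should
# approximate the full-data average gradient much better than a random
# subset of the same size. Uniform weights are used for simplicity.
import random

def avg(vectors):
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[j] for v in vectors) / n for j in range(dim)]

def approx_error(subset, full_grad):
    a = avg(subset)
    return sum((ai - fi) ** 2 for ai, fi in zip(a, full_grad)) ** 0.5

def greedy_coreset(grads, k):
    """Greedily pick k gradients whose uniform average best matches the
    full-data average gradient (a simplified stand-in for GradMatch)."""
    full_grad = avg(grads)
    chosen, remaining = [], list(range(len(grads)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: approx_error(
                       [grads[j] for j in chosen + [i]], full_grad))
        chosen.append(best)
        remaining.remove(best)
    return chosen

random.seed(0)
grads = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)]
full_grad = avg(grads)
k = 10

greedy_err = approx_error([grads[i] for i in greedy_coreset(grads, k)], full_grad)
rand_errs = [approx_error([grads[i] for i in random.sample(range(100), k)],
                          full_grad) for _ in range(20)]
# The greedy subset tracks the full gradient better than random draws do.
assert greedy_err < sum(rand_errs) / len(rand_errs)
```

The lower gradient approximation error of the greedy subset is exactly the quantity that, per the convergence analysis, controls the bound on the optimality gap.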
In another comparison, we run an experiment similar to that of Tsipras et al. (2019). Specifically, we minimize the average of the adversarial and vanilla objectives in each epoch. The non-coreset data is treated as clean samples to minimize the vanilla objective, while for the coreset samples, we perform adversarial training. Table 5 shows the results of this experiment. As seen, adding the non-coreset data as clean inputs to the training improves the clean accuracy while decreasing the robust accuracy. These results align with the observations of Tsipras et al. (2019) around the existence of a trade-off between clean and robust accuracy.
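A minimal sketch of this "Half-Half" objective, using a 1-D least-squares model whose worst-case ℓ∞ perturbation is analytic; the model, data, and helper names are illustrative stand-ins, not the paper's setup:

```python
# Sketch of the "Half-Half" objective from Table 5: coreset samples are
# adversarially perturbed, while the remaining (non-coreset) samples
# enter the loss clean. For the 1-D quadratic loss (w*x - y)^2, the
# worst case over |delta| <= eps is attained at an endpoint.

def clean_loss(w, x, y):
    return (w * x - y) ** 2

def adv_loss(w, x, y, eps):
    # Convex in delta, so the max over [-eps, eps] sits at an endpoint.
    return max(clean_loss(w, x + eps, y), clean_loss(w, x - eps, y))

def half_half_loss(w, data, coreset_idx, eps):
    total = 0.0
    for i, (x, y) in enumerate(data):
        if i in coreset_idx:
            total += adv_loss(w, x, y, eps)   # adversarial on the coreset
        else:
            total += clean_loss(w, x, y)      # clean on the rest
    return total / len(data)

data = [(1.0, 1.0), (2.0, 1.5), (-1.0, -0.5), (0.5, 0.7)]
w, eps = 0.8, 0.3

mixed = half_half_loss(w, data, {0, 2}, eps)          # Half-Half
full_adv = half_half_loss(w, data, {0, 1, 2, 3}, eps) # all adversarial
full_clean = half_half_loss(w, data, set(), eps)      # all clean
assert full_clean < mixed < full_adv
```

The ordering of the three losses mirrors the trade-off reported in Table 5: mixing in clean samples pulls the objective toward the vanilla one, which is why clean accuracy improves while robust accuracy drops.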
Next, we investigate the effect of the adversarial coreset selection frequency. Recall from Section 3.4 that performing adversarial coreset selection every T epochs helps with the speed-up. However, one must note that setting T to a large number might come at the cost of sacrificing clean and robust accuracy. To show this, we repeat our early stopping experiments from Table 3 with different coreset renewal frequencies. Our results are given in Table 6. As can be seen, decreasing the coreset renewal frequency (i.e., increasing T) helps gain more speed-up, but it can hurt the overall model performance.

Table 6: Clean (ACC) and robust (RACC) accuracy, and total training time (T) of ℓ∞-PGD adversarial training over CIFAR-10 for the WideResNet-28-10 architecture. For each method, all the hyper-parameters were kept the same as in Table 3. The frequency column indicates the number of epochs that we wait before updating the coreset using Algorithm 1. The information on the computation of RACC in each case is given in Appendix C.

Finally, we study the accuracy vs. speed-up trade-off for different versions of adversarial coreset selection. For this study, we train our adversarial coreset selection method using different versions of Craig (Mirzasoleiman et al., 2020a) and GradMatch (Killamsetty et al., 2021a) on CIFAR-10 using TRADES. In particular, for each method, we start with the base algorithm and add the batch-wise selection and warm-start one by one. Also, to capture the effect of the coreset size, we vary this number from 50% to 10%. Figure 5 shows the clean and robust error vs. speed-up compared to full adversarial training. In each case, the combination of the warm-start and batch-wise versions of adversarial coreset selection gives the best performance. Moreover, as we gradually decrease the coreset size, the training speed goes up. However, this gain in training speed comes at the cost of increased clean and robust error. Both of these observations align with those of Killamsetty et al. (2021a) for vanilla coreset selection.
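Returning to the renewal-frequency ablation, the trade-off can be sketched as a scheduling question: with a warm-start of W epochs and renewal period T, the number of expensive selection passes shrinks as T grows while the number of training epochs stays fixed. The loop below uses placeholder values where the real method would call its selection and per-epoch training routines (`select_coreset` and `train_one_epoch` are hypothetical names):

```python
# Sketch of the selection-frequency ablation (Table 6): the coreset is
# rebuilt only every T epochs, and the intervening epochs reuse the
# cached subset. Counters stand in for the real (expensive) selection
# and training calls.

def run(num_epochs, T, warm_start):
    selections, epochs_on_coreset = 0, 0
    coreset = None
    for epoch in range(num_epochs):
        if epoch < warm_start:
            # warm-start: train on the full data first
            continue
        if coreset is None or (epoch - warm_start) % T == 0:
            coreset = "subset"     # placeholder for select_coreset(model, data)
            selections += 1
        epochs_on_coreset += 1     # placeholder for train_one_epoch(model, coreset)
    return selections, epochs_on_coreset

s_small_T, _ = run(num_epochs=30, T=5, warm_start=10)
s_large_T, _ = run(num_epochs=30, T=20, warm_start=10)
assert s_small_T > s_large_T  # larger T means fewer selection passes
```

Fewer selection passes is where the extra speed-up comes from; the cost, as Table 6 shows, is that the cached subset grows stale between renewals.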

Conclusion
In this paper, we proposed a general yet principled approach for efficient adversarial training based on the theory of coreset selection. We discussed how the repetitive computation of adversarial attacks for the entire training data impedes the training speed. Unlike previous works that try to solve this issue by simplifying the adversarial attack, here we took an orthogonal path to reduce the training set size without modifying the attacker. We first provided convergence bounds for adversarial training using a subset of the training data. Our analysis showed that the convergence bound is related to how well this selected subset can approximate the loss gradient computed with the entire data. Based on this study, we proposed to use the gradient approximation error as our coreset selection objective and made a connection with vanilla coreset selection. To this end, we discussed how coreset selection can be viewed as a two-step process: first, the gradients for the entire training data are computed; then, greedy solvers choose a weighted subset of the data that can approximate the full gradient. Using Danskin's theorem, we drew a connection between greedy coreset selection algorithms and adversarial training. We then showed the flexibility of our adversarial coreset selection across a variety of training objectives, including TRADES, ℓp-PGD, and Perceptual Adversarial Training.

Acknowledgments. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. Sarah Erfani is in part supported by the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680. Moreover, this research was partially supported by the ARC Centre of Excellence for Automated Decision-Making and Society (CE200100005), and funded partially by the Australian Government through the Australian Research Council.
Here, step (1) is derived using the multi-variable chain rule. Step (2) re-writes step (1) using the freeze(•) kernel, which stops the gradients from backpropagating through its argument function.

B.2 Convergence Proofs
First, we establish the relationship between the Lipschitzness and strong convexity of the (loss) function L and those of its max-function max L. To this end, we prove the following two lemmas. These results are an essential part of the proof of the main convergence theorem. Note that our results in this section follow the notation established in Section 3.1.
Lemma 1 Let K be a nonempty compact topological space, and let L : ℝ^m × K → ℝ be such that L(•, δ) is Lipschitz continuous for every δ ∈ K. Then, the corresponding max-function is also Lipschitz continuous.
Theorem 1 (restated) Let γt and St denote the weights and subset derived by any adversarial coreset selection algorithm at iteration t of full gradient descent. Also, let θ* be the optimal model parameters, let L be a convex loss function with respect to θ, and assume that the parameters are bounded such that ∥θ − θ*∥ ≤ ∆. Moreover, define the gradient approximation error at iteration t as the norm of the difference between the coreset-weighted gradient and the full gradient. Then, for t = 0, 1, . . ., T − 1, the following guarantees hold: (1) for a Lipschitz continuous loss function L with parameter σ and constant learning rate α = ∆/(σ√T), the optimality gap is bounded in terms of the accumulated gradient approximation error; (2) moreover, for a loss L that is Lipschitz continuous with parameter σ and strongly convex with parameter µ, the analogous bound holds with the learning rate αt = 2/(nµ(1 + t)), where n is the total number of training data.
Proof We take an approach similar to that of Killamsetty et al. (2021a) to prove these convergence guarantees. In particular, we first derive a general relationship that relates the gradients to the optimal model parameters, and then use the conditions of each part to simplify and reach the final result.
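The first manipulation in the proof is the standard expansion of the gradient-descent update; written out under the paper's notation (we believe this corresponds to the step around Eqs. (B14)–(B15), and include it here for readability):

```latex
% Expanding \theta_{t+1} = \theta_t - \alpha_t \nabla_\theta L_{\gamma^t}(\theta_t)
% and applying the identity 2a^\top b = \|a\|^2 + \|b\|^2 - \|a-b\|^2:
\|\theta_{t+1} - \theta^*\|^2
  = \|\theta_t - \theta^*\|^2
    - 2\alpha_t \, \nabla_\theta L_{\gamma^t}(\theta_t)^\top (\theta_t - \theta^*)
    + \alpha_t^2 \, \|\nabla_\theta L_{\gamma^t}(\theta_t)\|^2 ,
```
which rearranges to
```latex
\nabla_\theta L_{\gamma^t}(\theta_t)^\top (\theta_t - \theta^*)
  = \frac{\|\theta_t - \theta^*\|^2 - \|\theta_{t+1} - \theta^*\|^2}{2\alpha_t}
    + \frac{\alpha_t}{2} \, \|\nabla_\theta L_{\gamma^t}(\theta_t)\|^2 .
```

Summing this inner-product bound over iterations is what later produces the telescoping terms in the proof.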
Using the gradient descent update rule in Eq. (12), we can re-write it as: Using the identity 2a⊺b = ∥a∥² + ∥b∥² − ∥a − b∥², we can simplify Eq. (B14) with: where we replaced θt − θt+1 with αt∇θLγt(θt) using the gradient descent update rule from Eq. (9). Now, we can re-write the LHS of Eq. (B15) as: by adding and subtracting the full gradient ∇θL(θt). Keeping ∇θL(θt)⊺(θt − θ*) on the LHS, we move the remaining two terms to the RHS of Eq. (B15). Thus, we get: Using this, we can conclude from Eq. (8) that: where (1) is the result of the triangle inequality, and (2) is derived using Eq. (B22). We further assume that the gradients, as well as the weights, are normalized such that ∑_{j∈St} γtj = 1. Hence, we can plug Eq. (B23) into Eq. (B20) to get: where (1) is deduced using Eq. (B23), and (2) is derived using the Cauchy–Schwarz inequality. From our assumptions, we have ∥θ − θ*∥ ≤ ∆. As such, we re-write Eq. (B24) using ∥θ − θ*∥ ≤ ∆ and divide both sides by T to get: Finally, we can show that α² = ∆²/(σ²T) minimizes the sum of the first two terms. Hence, by plugging it in, we get our first result.

Lipschitz Continuous and Strongly Convex Loss Function L. Since the loss function L(θ; x) is µ-strongly convex in θ for every x, from Lemma 2 we know that max_x L(θ; x) is also µ-strongly convex. Thus, using the additive property of strongly convex functions, we conclude that L(θ) = ∑_{i∈V} max_{xi} L(θ; xi) is strongly convex. It is straightforward to show that if each max_{xi} L(θ; xi) is µ-strongly convex, then their sum L(θ) is strongly convex with parameter µn = nµ, where n is the number of terms in L(θ). Using the conclusions above, from the definition of strongly convex functions, we can write: Putting Eqs. (B16) and (B28) together, we can deduce: Now, let us set the learning rate αt = 2/(µn(t + 1)). We have: Rearranging Eq. (B30), we can multiply both sides by a factor t ≥ 0 to get: Since we assume that L(θ; x) is Lipschitz continuous with constant σ, from Lemma 1 we know that max_x L(θ; x) is also Lipschitz continuous with parameter σ. Thus, similar to Eq. (B23), we can conclude that: where once again we assume that the gradients are normalized so that ∑_{j∈St} γtj = 1. Replacing Eq. (B32) into Eq. (B31), we have: Using the Cauchy–Schwarz inequality, we also write: Adding all terms for t = 0, 1, . . ., T − 1, we get: For the first sum, we write: The second and third terms on the RHS form a telescoping sum. That is, defining at = (µn t(t − 1)/4)∥θt − θ*∥², we have: Combining Eqs. (B35) to (B37), we deduce: Using our assumption 0 ≤ ∥θ − θ*∥ ≤ ∆, we then have: To simplify the LHS of Eq. (B39), note that the minimum over t = 0, . . ., T − 1 is bounded above by any convex combination of the iterates. Thus, we have: After putting Eq. (B40) into Eq. (B39) and rearranging the terms, we finally arrive at the second result.

Appendix C Implementation Details
This section provides the details of our experiments in Section 4. We used a single NVIDIA Tesla V100-SXM2-16GB GPU for CIFAR-10 (Krizhevsky and Hinton, 2009) and SVHN (Netzer et al., 2011), a single NVIDIA Tesla V100-SXM2-32GB GPU for ImageNet-12 (Liu et al., 2020), and a single NVIDIA A100 for ImageNet-100 (Russakovsky et al., 2015). Our implementation is released at this repository.

Table C1: Hyper-parameters of the attacks used for the evaluation of PAT models in Section 4.

C.1 Training Settings
Tables C2 and C3 show the entire set of hyper-parameters and settings used for training the models of Section 4. The settings of the evaluation attacks are given in Table C1. Note that we chose the same set of unseen/seen attacks to evaluate each case. However, since we trained our models with slightly different ε bounds, we changed the attacks' settings accordingly.

Appendix D Extended Experimental Results
Table D4 shows the full details of our experiments on PAT (Laidlaw et al., 2021). In each case, we train ResNet-50 (He et al., 2016) classifiers using the LPIPS (Zhang et al., 2018) objective of PAT (Laidlaw et al., 2021). In each dataset, all the training hyper-parameters are fixed; the only difference is that we enable adversarial coreset selection for our method. During inference, we evaluate each trained model against a few unseen attacks, as well as the two variants of Perceptual Adversarial Attacks (Laidlaw et al., 2021) that the models were originally trained on. As can be seen, adversarial coreset selection can significantly reduce the training time while experiencing only a small reduction in the average robust accuracy.

Coreset selection module for adversarial training. Selection is done every T epochs; during the following epochs, the network is trained only on this subset.

Fig. 1: Overview of neural network training using coreset selection. Contrary to vanilla coreset selection, in our adversarial version we first need to construct adversarial examples and then perform coreset selection.

Fig. 2: Coreset selection aims at finding a weighted subset of the data that can approximate certain behaviors of the entire data samples. In this figure, we denote the behavior of interest as a function B(•, •) that receives a set and its associated weights. The goal of coreset selection is to move from the original data V with uniform weights 1 to a weighted subset S* with weights γ*.

4.2 Perceptual Adversarial Training vs. Unseen Attacks

Perceptual Adversarial Training replaces the distance d(•, •) in Eq. (3) with the LPIPS (Zhang et al., 2018) distance. The logic behind this choice is that ℓp norms can only capture a small portion of images similar to the clean one, limiting the search space of adversarial attacks. Motivated by this reason, Laidlaw et al. (2021) propose two different ways of finding the solution to Eq. (3) when d(•, •) is the LPIPS distance.

Fig. 4: Relative robust error vs. speed-up for TRADES. For a given subset size, we compare our adversarial coreset selection (GradMatch) against random data selection. Furthermore, we show our results for a selection of different warm-start settings.

Fig. 5: Relative error vs. speed-up curves for different versions of adversarial coreset selection in training CIFAR-10 models using the TRADES objective. In each curve, the coreset size is changed from 50% to 10% (left to right). (a, b) Clean and robust error vs. speed-up compared to full TRADES for different versions of Adversarial Craig. (c, d) Clean and robust error vs. speed-up compared to full TRADES for different versions of Adversarial GradMatch.

For the evaluation of TRADES and ℓp-PGD adversarial training, we use PGD attacks. In particular, for TRADES and ℓ∞-PGD adversarial training, we use ℓ∞-PGD attacks with ε = 8/255, step-size α = 1/255, 50 iterations, and 10 random restarts. The only exception is ImageNet-100, where we evaluate the models using ℓ∞-PGD with ε = 4/255 and step-size α = 2/255. Also, for ℓ2-PGD adversarial training, we use ℓ2-PGD attacks with ε = 80/255, step-size α = 8/255, 50 iterations, and 10 random restarts. For Perceptual Adversarial Training (PAT), we report the attacks' settings in Table C1.
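As an illustration of this evaluation protocol, the sketch below runs an ℓ∞-PGD attack with the stated budget (ε = 8/255, step-size 1/255, 50 iterations, 10 random restarts) against a toy scalar loss rather than a trained network; `loss` and `grad` are hypothetical stand-ins for the model's loss and its input gradient:

```python
# Sketch of the l_inf PGD evaluation protocol described above, on a toy
# differentiable loss. Each restart starts at a random point inside the
# l_inf ball, takes sign-gradient ascent steps, projects back onto the
# ball, and the restart achieving the highest loss is kept.
import random

EPS, ALPHA, STEPS, RESTARTS = 8 / 255, 1 / 255, 50, 10

def loss(x):          # toy scalar loss standing in for the network
    return (x - 0.5) ** 2

def grad(x):          # its analytic gradient w.r.t. the input
    return 2 * (x - 0.5)

def pgd(x0):
    random.seed(0)
    best_x, best_loss = x0, loss(x0)
    for _ in range(RESTARTS):
        x = x0 + random.uniform(-EPS, EPS)        # random start in the ball
        for _ in range(STEPS):
            x = x + (ALPHA if grad(x) >= 0 else -ALPHA)  # sign ascent step
            x = min(max(x, x0 - EPS), x0 + EPS)          # project to the ball
        if loss(x) > best_loss:
            best_x, best_loss = x, loss(x)
    return best_x

x0 = 0.4
x_adv = pgd(x0)
assert abs(x_adv - x0) <= EPS + 1e-12   # perturbation respects the budget
assert loss(x_adv) > loss(x0)           # the attack increased the loss
```

The multi-restart maximum is what makes PGD-50 with 10 restarts a stronger (and more expensive) evaluation than a single PGD run.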

Table 1: Clean (ACC) and robust (RACC) accuracy, and total training time (T) of different adversarial training methods. For each method, all the hyper-parameters were kept the same as full training. For our proposed approach, the difference with full training is shown in parentheses. The information on the computation of RACC in each case is given in Appendix C.

Table 2: Clean (ACC) and robust (RACC) accuracy, and total training time (T) of Perceptual Adversarial Training for CIFAR-10 and ImageNet-12 datasets. At inference, the networks are evaluated against five attacks that were not seen during training (Unseen RACC), as well as different versions of Perceptual Adversarial Attack (Seen RACC). In each case, the average is reported. For more information and details about the experiment, please see Appendices C and D.

Table 3: Clean (ACC) and robust (RACC) accuracy, and total training time (T) of different adversarial training methods over WideResNet-28-10. For each method, all the hyper-parameters were kept the same as in Table 1. The only exception is that all the epoch-related parameters were halved. The difference with full training is shown in parentheses for our proposed approach. The information on the computation of RACC in each case is given in Appendix C.

Table 4: Clean (ACC) and robust (RACC) accuracy, and average training speed (S_avg) of Fast Adversarial Training (Wong et al., 2020) without and with our adversarial coreset selection on CIFAR-10. For our proposed approach, the difference with full training is shown in parentheses.

Table C2: Training details for the experimental results of Section 4.

Table D4: Clean and robust accuracy (%), and total training time (mins) of Perceptual Adversarial Training for CIFAR-10 and ImageNet-12 datasets. The training objective uses the Fast Lagrangian Perceptual Attack (LPA) (Laidlaw et al., 2021) to train the network. At test time, the networks are evaluated against attacks not seen during training and different versions of Perceptual Adversarial Attack (PPGD and LPA). The unseen attacks were selected similarly to Laidlaw et al. (2021) in each case. Please see Section C for more information about the settings.