A hybrid adversarial training for deep learning model and denoising network resistant to adversarial examples

Deep neural networks (DNNs) are vulnerable to adversarial attacks that generate adversarial examples by adding small perturbations to clean images. To combat adversarial attacks, the two main defense methods used are denoising and adversarial training. However, both methods leave the DNN with lower classification accuracy for clean images than a conventionally trained DNN model. To overcome this problem, we propose a hybrid adversarial training (HAT) method that trains the denoising network and the DNN model simultaneously. The proposed HAT method trains the DNN model on clean images and adversarial examples denoised by the denoising network, as well as on the non-denoised clean images and adversarial examples. The results of experiments conducted on the MNIST, CIFAR-10, CIFAR-100, and GTSRB datasets show that the HAT method achieves higher classification accuracy than both conventional training with a denoising network and previous adversarial training methods. They also indicate that training with the HAT method yields average improvements in robustness of 0.84%, 27.33%, 28.99%, and 17.61% against adversarial attacks compared with several state-of-the-art adversarial training methods on the MNIST, CIFAR-10, CIFAR-100, and GTSRB datasets, respectively. Thus, the proposed HAT method improves the robustness of DNNs against a wide range of adversarial attacks.


Introduction
Deep neural networks (DNNs) have achieved high performance in various applications, such as image classification [1], object detection [2], and natural language processing [3]. However, despite their success, DNNs are vulnerable to adversarial attacks that generate malicious inputs to induce misclassification. These malicious inputs, which are created by adding imperceptible perturbations to the original sample, are called adversarial examples, and various attack methods have been proposed to generate them [4][5][6]. The purpose of these attacks is to maximize the classification error while minimizing the perturbations so that a human cannot distinguish between a normal sample and an adversarial example. These adversarial attacks can deceive models trained on different network architectures or different subsets of training data [4]. Moreover, adversarial attacks are effective on DNNs deployed in real-world settings, such as self-driving cars [7,8], facial recognition [9,10], and object detection [11].
To combat adversarial attacks, two main types of defense methods have been proposed. The first is denoising [12,13], which mitigates the perturbations on adversarial examples before they are input to the target model. Denoising methods enable DNN models to correctly classify adversarial examples by removing the perturbations and then passing the cleaned inputs to the model. The second is adversarial training [14,15], which generates adversarial examples and then trains the model to classify them as the correct class. However, denoising methods and adversarial training share a common limitation: after training, the classification accuracy of the DNN for clean images decreases. For denoising, this is because the denoising network is trained separately from the DNN model, and the DNN model is never trained on the denoised clean images, so its accuracy on those images drops. For adversarial training, clean-image accuracy decreases because the model is trained using only adversarial examples [14] or by adjusting the balance between the importance of natural and robust errors [15].
To overcome this limitation, we propose a hybrid adversarial training method that trains the denoising network and the DNN model simultaneously. The proposed method repeats three steps for every epoch: 1) generating adversarial examples to deceive the current DNN model, 2) training the denoising network to reconstruct the adversarial examples as clean images, and 3) training the DNN model on the clean images, adversarial examples, and their denoised counterparts. We demonstrate that the proposed training method results in enhanced robustness against adversarial attacks and higher classification accuracy for clean images than either a denoising network trained separately from the DNN model or previous adversarial training methods. The main contributions of this study are summarized as follows:
• We propose a hybrid adversarial training method that results in higher classification accuracy for clean images and makes DNN models more robust against various adversarial attacks.
• We show that the clean-image classification performance of a model trained using our proposed method is higher than that of models trained with state-of-the-art methods.
• We show that our proposed method outperforms several adversarial training methods and denoising methods that are trained separately from the model.
The remainder of this paper is organized as follows. Section 2 reviews the related work on adversarial attacks, adversarial training, and denoising-based adversarial defense methods. Section 3 describes our proposed training method. Section 4 compares the performance of the proposed method with that of several state-of-the-art methods. Section 5 discusses the experimental results and limitations of the proposed method. Section 6 presents concluding remarks.

Background and related work
In this section, we introduce various adversarial attack methods, denoising methods to mitigate adversarial perturbation on adversarial examples, and relevant background on adversarial training. We also summarize the advantages and disadvantages of denoising methods and adversarial training methods.

Adversarial attacks
Adversarial attacks can be categorized into two types of threat models: white-box and black-box. A white-box adversary has full access to the DNN parameters, network architecture, and weights, whereas a black-box adversary has no knowledge of the target DNN or cannot access it.
Szegedy et al. [4] were the first to show that, for white-box attacks, the existence of small perturbations in images might lead to DNN misclassification. They generated adversarial examples using the box-constrained limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Additionally, they were the first to demonstrate the property of transferability, whereby adversarial examples that deceive one model can deceive other models. Further, Goodfellow et al. [16] proposed the fast gradient sign method (FGSM), which generates adversarial examples using only a one-step update along the direction of the gradient sign at each pixel. Kurakin et al. [17] and Madry et al. [14] proposed multi-step attack methods, called the iterative fast gradient sign method (I-FGSM) and projected gradient descent (PGD), both of which achieve higher attack success rates than FGSM. Moosavi-Dezfooli et al. [6] proposed DeepFool, a simple yet accurate method for computing and comparing the robustness of different models to adversarial perturbations. Carlini et al. [18] proposed an optimization-based attack method that determines the smallest perturbation that can deceive the target model.
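As a concrete sketch of the one-step FGSM update described above, the attack can be written in a few lines of PyTorch (the framework choice, function name, and [0, 1] pixel range are our assumptions; the paper does not specify an implementation):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: perturb x by eps along the sign of the loss gradient."""
    x_leaf = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_leaf), y)
    loss.backward()
    x_adv = x_leaf + eps * x_leaf.grad.sign()   # single gradient-sign step
    return x_adv.clamp(0.0, 1.0).detach()       # keep pixels in a valid range
```

Because only one gradient evaluation is needed, FGSM is fast but, as noted above, less effective than multi-step attacks such as I-FGSM and PGD.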
Alternatively, Papernot et al. [19] introduced the first black-box attack method using a substitute model. Park et al. [20] proposed a substitute model attack method that emulates a partial target model by adjusting the partial classification boundary to decrease the number of queries. Additionally, Dong et al. [21] introduced a class of attack methods, called the momentum-based iterative method (MIM), in which the gradients of the loss function are accumulated at each iteration to stabilize the optimization and avoid a poor local maximum. This method results in more transferable adversarial examples.

Denoising-based adversarial defenses
To combat such adversarial attacks, denoising-based defense methods perform image denoising to remove the adversarial perturbations. Liao et al. [22] proposed a high-level representation guided denoiser (HGD), which uses a loss function defined as the difference between the target model's outputs activated by the natural image and the denoised image. Samangouei et al. [23] proposed Defense-GAN, which projects adversarial examples into a trained generative adversarial network (GAN) to approximate the input sample using a generated clean sample. Song et al. [24] proposed PixelDefend, which reconstructs adversarial examples so that they follow the training distribution using PixelCNN [25]. Prakash et al. [12] proposed pixel deflection, a defense method in which some pixels are randomly replaced with nearby pixels and wavelets are then used to denoise the image. Naseer et al. [13] proposed NRP, a method that recovers a legitimate sample from a given adversarial example using an adversarially trained purifier network. Kang et al. [26] proposed CAP-GAN, which integrates the ideas of pixel-level and feature-level consistency to achieve reasonable purification under cycle-consistent learning. These denoising methods can mitigate adversarial perturbations on adversarial examples, but they can also remove important information from clean images. Consequently, the accuracy of the DNN model decreases for clean images because the model may not correctly classify the denoised clean images.

Adversarial training
Adversarial training is a defense method for effectively enhancing the robustness of models against adversarial attacks. It is based on the principles of model loss maximization and minimization. In the maximization step, adversarial examples are generated by an adversarial attack using each mini-batch of the training set. In the minimization step, the model parameters are updated using the generated adversarial examples to minimize the loss. Recently, many studies [14-16, 27, 28] have focused on analyzing and improving adversarial machine learning. For example, Goodfellow et al. [16] were the first to suggest feeding generated adversarial examples into the model during training, while Madry et al. [14] formulated adversarial training as a min-max optimization problem as follows:

$$\min_{\theta} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \max_{x' \in S} L_{ce}\!\left(f_{\theta}(x'), y\right) \right], \tag{1}$$

where θ is a parameter of the model, x is an original image, x′ is an adversarial example, f_θ(·) is the output vector of the model, 𝒟 is the distribution of the training dataset, L_ce is the cross-entropy (CE) loss function, and S is the allowed perturbation space, typically selected as an L_p norm ball around x. The inner maximization step generates adversarial examples using the FGSM [16] or PGD [14] attack methods. In the outer minimization step, the model is trained to minimize the adversarial loss induced by the inner maximization step. Zhang et al. [15] proposed the trade-off-inspired adversarial defense via surrogate-loss minimization (TRADES) to optimize a regularized surrogate loss, defined as follows:

$$\min_{\theta} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ L_{ce}\!\left(f_{\theta}(x), y\right) + \lambda \max_{x' \in S} L_{kl}\!\left(f_{\theta}(x), f_{\theta}(x')\right) \right], \tag{2}$$

where L_kl is the Kullback-Leibler divergence loss function and λ is a parameter that adjusts the balance between the importance of natural and robust errors. In the inner maximization step, adversarial examples are generated by maximizing the difference between the prediction for the natural image and that for the adversarial example; minimizing this difference in the outer step pushes the decision boundary of the model away from the sample instances.
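The inner maximization in the formulations above is typically solved with PGD. A minimal PGD sketch in PyTorch (the framework, function signature, and [0, 1] pixel range are our assumptions):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step PGD: repeat signed gradient steps of size alpha, projecting
    back into the L_inf ball of radius eps around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                  # stay in valid pixel range
    return x_adv.detach()
```

In adversarial training, the examples returned by this inner loop are then used in the outer minimization step to update the model parameters.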
In the outer minimization step, the model is also trained to optimize the natural error by minimizing the difference between the prediction and the ground truth of the natural image. Ryu et al. [34] proposed an adversarial training method that trains DNNs to be robust against transferable adversarial examples and to maximize their classification performance for clean images in black-box settings.
Shafahi et al. [27] and Zheng et al. [28] proposed adversarial training methods that reduce computational cost, called "free" adversarial training and adversarial training with transferable adversarial examples (ATTA), respectively. The "free" method [27] eliminates the overhead of generating adversarial examples by recycling the gradient information computed when updating the model parameters. ATTA [28] generates adversarial examples with fewer attack iterations by accumulating adversarial perturbations across epochs. As the purpose of these methods is to reduce the computational cost of adversarial training, both use the maximization and minimization steps of Madry's adversarial training (MAT) [14] or TRADES [15]. These adversarial training methods make DNN models robust against various adversarial attacks, but the resulting models have lower classification accuracy for clean images than conventionally trained DNN models.

Hybrid adversarial training
In this section, we describe our proposed hybrid adversarial training (HAT) method, which trains a DNN model to be robust against adversarial attacks while retaining its classification accuracy for clean images. Figure 1 shows the overall process of the HAT method.

Loss function of denoising network
To mitigate adversarial perturbations on adversarial examples, we train an autoencoder-based denoising network with a U-Net [29] structure. The denoising network has four convolution layers and four de-convolution layers with batch normalization. We define a loss term to minimize the difference between the reconstructed adversarial example and the clean image as follows:

$$L_{recon} = \left\| P_{\psi}(x') - x \right\|_{2}, \tag{3}$$

where ψ is a parameter of the denoising network, P_ψ(·) is the output of the denoising network, x is a clean image, x′ is an adversarial example generated by adding adversarial perturbations to the clean image, and ‖·‖₂ is the L₂ distance. The reconstructed adversarial example mitigates the adversarial perturbations because it is reconstructed to resemble the clean image. In addition, we add a loss term L_recon-adv so that the DNN model correctly classifies the reconstructed adversarial example. L_recon-adv is defined depending on the adversarial training method [14,15]. If we use MAT [14], L_recon-adv is defined as follows:

$$L_{recon\text{-}adv} = L_{ce}\!\left(f_{\theta}(P_{\psi}(x')), y\right), \tag{4}$$

where y is the ground truth corresponding to x, θ is a parameter of the model, f_θ(·) is the output of the model for a particular input, and L_ce is the CE loss function. If we use TRADES [15], L_recon-adv is defined as follows:

$$L_{recon\text{-}adv} = L_{kl}\!\left(f_{\theta}(x), f_{\theta}(P_{\psi}(x'))\right), \tag{5}$$

where L_kl is the Kullback-Leibler divergence loss function.
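A simplified PyTorch sketch of such a denoising network (reduced here to two convolution and two de-convolution layers for brevity; the paper specifies four of each with batch normalization, and the layer widths and kernel sizes below are our assumptions):

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Illustrative autoencoder-style denoiser: strided convolutions encode,
    transposed convolutions decode back to the input resolution."""
    def __init__(self, ch=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(ch, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, ch, 4, stride=2, padding=1), nn.Sigmoid(),  # pixels in (0, 1)
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def recon_loss(denoiser, x_adv, x_clean):
    """L_recon: mean L_2 distance between P(x') and the clean image x."""
    return (denoiser(x_adv) - x_clean).flatten(1).norm(p=2, dim=1).mean()
```

Note that a full U-Net would add skip connections between encoder and decoder stages; they are omitted here to keep the sketch short.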
To train the denoising network, we minimize the following loss function:

$$L_{P} = L_{recon} + \alpha L_{recon\text{-}adv}, \tag{6}$$

where α controls the weighting of L_recon-adv. Note that we train the denoising network using only adversarial examples. If both adversarial examples and clean images were used, the denoising network would reconstruct the input image to intermediate values between the clean image and the adversarial example; in other words, it could not effectively mitigate adversarial perturbations on adversarial examples.
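One denoiser update on a mini-batch can then be sketched as follows (an illustrative PyTorch fragment using the MAT-style classification term; function names and signatures are our assumptions):

```python
import torch
import torch.nn.functional as F

def denoiser_update(denoiser, model, opt_d, x, x_adv, y, alpha=1.0):
    """One update of the denoising network, trained on adversarial inputs only:
    L_P = ||P(x') - x||_2 + alpha * L_ce(f(P(x')), y)."""
    opt_d.zero_grad()
    x_rec = denoiser(x_adv)                                    # P_psi(x')
    l_recon = (x_rec - x).flatten(1).norm(p=2, dim=1).mean()   # L_2 distance to clean x
    l_p = l_recon + alpha * F.cross_entropy(model(x_rec), y)   # add classification term
    l_p.backward()
    opt_d.step()   # only the denoiser's parameters are updated here
    return l_p.item()
```

The DNN model participates in the loss but its parameters are left to a separate update step, matching the simultaneous-but-distinct training of the two networks.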

Loss function of the DNN model
To train a DNN model that correctly classifies the reconstructed input images, we use the clean images and adversarial examples as well as the reconstructed clean images and reconstructed adversarial examples. We first define a loss term to correctly classify the clean image as follows:

$$L_{clean} = L_{ce}\!\left(f_{\theta}(x), y\right). \tag{7}$$

To correctly classify the clean image reconstructed by the denoising network, we define another loss term as follows:

$$L_{recon\text{-}clean} = L_{ce}\!\left(f_{\theta}(P_{\psi}(x)), y\right). \tag{8}$$

To correctly classify the adversarial example, we define a loss term L_adv depending on the adversarial training method used, as in (4) or (5) with the non-denoised adversarial example x′ in place of P_ψ(x′). Combining these terms with L_recon-adv for the denoised adversarial examples, we define the loss function for training the DNN model as follows:

$$L_{F} = L_{clean} + \beta_{1} L_{recon\text{-}clean} + \beta_{2} L_{adv} + \beta_{3} L_{recon\text{-}adv}, \tag{9}$$

where β₁, β₂, and β₃ control the weighting of L_recon-clean, L_adv, and L_recon-adv, respectively. The HAT method trains the denoising network and the DNN model simultaneously: the denoising network is trained to mitigate adversarial perturbations, while the DNN model is trained to correctly classify both the original and the denoised images.
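The DNN update over all four image types can be sketched in PyTorch as follows (MAT-style cross-entropy for every term; detaching the denoiser output during the model update, and all names below, are our assumptions):

```python
import torch
import torch.nn.functional as F

def model_update(model, denoiser, opt_m, x, x_adv, y, betas=(1.0, 1.0, 1.0)):
    """One DNN update on clean, denoised-clean, adversarial, and
    denoised-adversarial images:
    L_F = L_clean + b1*L_recon_clean + b2*L_adv + b3*L_recon_adv."""
    b1, b2, b3 = betas
    with torch.no_grad():            # the denoiser has its own update step
        x_den = denoiser(x)          # denoised clean image
        xa_den = denoiser(x_adv)     # denoised adversarial example
    opt_m.zero_grad()
    l_f = (F.cross_entropy(model(x), y)               # L_clean
           + b1 * F.cross_entropy(model(x_den), y)    # L_recon_clean
           + b2 * F.cross_entropy(model(x_adv), y)    # L_adv (MAT variant)
           + b3 * F.cross_entropy(model(xa_den), y))  # L_recon_adv (MAT variant)
    l_f.backward()
    opt_m.step()
    return l_f.item()
```

With the TRADES variant, the β₂ and β₃ terms would instead use the KL-divergence losses of (2) and (5).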

Training setting
The MNIST dataset contains 60,000 training and 10,000 test images with input sizes of 1×28×28 and 10 classes. To train the model on the MNIST dataset, we used a network with four convolutional layers followed by three fully connected layers, the same architecture as that used in [15]. We set the perturbation budget ε = 0.3, step size α = 0.01, number of iterations K = 40, learning rate η = 0.1, and batch size m = 128, then ran 100 epochs on the training dataset.
The CIFAR-10 and CIFAR-100 datasets contain 50,000 training and 10,000 test images with input sizes of 3×32×32. The GTSRB dataset contains 39,209 training and 12,630 test images of various sizes and 43 classes; we resized the GTSRB images to 3×32×32. To train models on the CIFAR-10, CIFAR-100, and GTSRB datasets, we used the wide residual network (WRN)-34-10 [32], the same as that used in [15]. The CIFAR-100 dataset is more challenging than CIFAR-10 because it includes more classes (100 vs. 10), leaving only 600 images per class. We set the perturbation budget ε = 0.031, step size α = 0.007, number of iterations K = 10, learning rate η = 0.1, and batch size m = 64, then ran 100 epochs on the training dataset. For all datasets, when training the DNN model using MAT [14] in the proposed method, we set α and all betas to 0.5 or 1.0; when training using TRADES [15], we set α, β₂, and β₃ to 1.0 or 6.0, and β₁ to 0.5 or 1.0.

Attack setting
To verify the robustness of our method, we used FGSM [16], PGD [14], DeepFool [6], CW [18], and MIM [21] attacks to generate adversarial examples. For the MNIST dataset, we set the perturbation budget to 0.3 for all attack methods. Additionally, we set the step size α to 0.01 and the number of iterations K to 40 for the PGD and MIM attacks. For the CIFAR-10, CIFAR-100, and GTSRB datasets, we set the perturbation budget to 0.031 for all attack methods. We set the step size α to 0.003 and the number of iterations K to 20 for the PGD and MIM attacks. Moreover, we performed a CW attack by applying the CW's objective function [18] within the PGD framework. The attack parameters were the same as in [14].
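Robustness under these attack settings is measured as classification accuracy on adversarial examples. An illustrative evaluation loop (function names and the presence of a denoising stage in front of the model are our assumptions):

```python
import torch

def robust_accuracy(model, denoiser, loader, attack):
    """Fraction of adversarial examples classified correctly after the
    denoising network mitigates their perturbations."""
    correct = total = 0
    model.eval()
    for x, y in loader:
        x_adv = attack(model, x, y)                 # e.g. FGSM/PGD with the budgets above
        with torch.no_grad():
            pred = model(denoiser(x_adv)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

Passing `attack = lambda m, x, y: x` (no perturbation) measures clean-image accuracy with the same pipeline.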

Selecting optimal alpha and betas
To study the robustness of the proposed method against adversarial attacks, we must select the optimal values for alpha and the betas. To do so, we trained the denoising network and DNN model while varying them. For training with MAT, α and all betas were set to either 0.5 or 1.0. For training with TRADES, α, β₂, and β₃ were set to either 1.0 or 6.0, and β₁ to either 0.5 or 1.0.
For the MNIST dataset, when α and all betas were set to 1.0, the DNN model trained using MAT had the lowest classification accuracy of 99.49% for clean images, but it was the most robust against FGSM, PGD, DeepFool, CW, and MIM attacks with accuracies of 98.12%, 97.67%, 98.89%, 99.08%, and 97.45%, respectively. Therefore, we selected 1.0 as the optimal value for α and all betas. When we set α and all betas to 1.0, the DNN model trained using TRADES had the highest classification accuracy of 99.51% for the clean images and was the most robust against FGSM, DeepFool, and MIM attacks with accuracies of 98.04%, 98.88%, and 97.38%, respectively. In addition, it was comparably robust to models trained with other parameter settings against PGD and CW attacks. Therefore, we again selected 1.0 as the optimal value for α and all betas. Table 1 shows the robustness of the DNN models trained using the proposed HAT method against adversarial attacks on the MNIST dataset, according to the values of alpha and the betas.
For the CIFAR-10 dataset, when we set α and all betas to 1.0, the DNN model trained using MAT had the third-highest classification accuracy of 89.20% for the clean images, but it was the most robust against FGSM, PGD, CW, and MIM attacks with accuracies of 93.99%, 94.78%, 75.41%, and 95.02%, respectively, and showed the second-most robust performance against DeepFool attacks with an accuracy of 89.13%. Therefore, we selected 1.0 as the optimal value of α and all betas. When we set α = 1.0, β₁ = 0.5, β₂ = 1.0, and β₃ = 1.0, the DNN model trained using TRADES had the highest classification accuracy of 90.58% for the clean images and showed the most robust performance against FGSM, PGD, CW, and MIM attacks with accuracies of 80.07%, 71.69%, 71.98%, and 76.44%, respectively. Therefore, we set the optimal values as α = 1.0, β₁ = 0.5, β₂ = 1.0, and β₃ = 1.0. Table 2 shows the robustness of the DNN models trained using the proposed HAT method against adversarial attacks on the CIFAR-10 dataset, according to the values of alpha and the betas.
For the CIFAR-100 dataset, when we set α and all betas to 1.0, the model trained using MAT had the lowest classification accuracy of 62.58% for the clean images, but it showed the most robust performance against all adversarial attacks, with accuracies of 71.54%, 72.99%, 62.45%, 46.33%, and 73.58% against FGSM, PGD, DeepFool, CW, and MIM, respectively. Therefore, we set α and all betas to 1.0. When we set α = 6.0, β₁ = 0.5, β₂ = 6.0, and β₃ = 6.0, the model trained using TRADES had the second-highest classification accuracy of 66.55% for the clean images, but it showed the most robust performance against FGSM, PGD, DeepFool, and CW attacks with accuracies of 43.89%, 45.97%, 66.97%, and 50.21%, respectively, and the second-most robust performance against MIM attacks with an accuracy of 45.60%. Therefore, we selected the optimal values as α = 6.0, β₁ = 0.5, β₂ = 6.0, and β₃ = 6.0. Table 3 shows the robustness of the DNN models trained using the proposed HAT method against adversarial attacks on the CIFAR-100 dataset, according to the values of alpha and the betas.
For the GTSRB dataset, when we set α = 1.0, β 1 = 0.5, β 2 = 1.0, and β 3 = 1.0, the model trained using MAT had the highest classification accuracy of 95.52% for the clean images. In addition, it had the most robust performance against all adversarial attacks with accuracies of 91.94%, 81.84%, 91.20%, 86.29%, and 81.62%, respectively. Therefore, we selected the optimal values as α = 1.0, β 1 = 0.5, β 2 = 1.0, and β 3 = 1.0. When we set α and all betas to 1.0, the model trained using TRADES had the highest classification accuracy of 96.56% for the clean images. In addition, it had the most robust performance against FGSM, PGD, and MIM attacks with accuracies of 82.53%, 81.10%, and 81.50%, respectively, and it showed similar robust performance against DeepFool and CW attacks, with accuracies of 92.97% and 80.01%. Therefore, we selected the optimal value for α and all betas to be 1.0. Table 4 shows the robustness of the DNN models trained using the proposed method according to the values of alpha and betas against adversarial attacks on the GTSRB dataset.

Robustness comparison with previous methods
To compare the robustness of the proposed method with that of the conventionally trained DNN models with and without the denoising network, we first conventionally trained the DNN model and then trained the denoising network using adversarial examples to deceive the conventionally trained DNN model. In addition, we compared the robustness of the proposed method with that of the previous methods, including MAT [14], TRADES [15], and ATTA [28].
For the MNIST dataset, the conventionally trained DNN model without the denoising network had accuracy of 99.46% for the clean images, but it was vulnerable to PGD, DeepFool, CW, and MIM attacks with accuracies of 9.38%, 1.06%, 2.70%, and 17.07%, respectively. Additionally, it showed accuracy of 63.81% against FGSM attacks. The conventionally trained DNN model with the denoising network for the clean images had lower accuracy than the conventionally trained DNN model without the denoising network. However, it was more robust than the conventionally trained DNN model without the denoising network against all adversarial attacks. The DNN models trained using MAT, TRADES, and ATTA were more robust than the conventionally trained DNN model without the denoising network against all attack methods, but their classification accuracies were less than that of the conventionally trained DNN model without the denoising network for the clean images. In addition, the DNN models trained using MAT, TRADES, and ATTA were less robust than the conventionally trained DNN model with the denoising network against most adversarial attacks. The DNN models trained using the proposed method had higher accuracies than the conventionally trained DNN model with the denoising network and the DNN models trained using MAT, TRADES, and ATTA for clean images. In addition, the DNN models trained using the proposed method were more robust than the DNN models trained using MAT, TRADES, and ATTA against most adversarial attacks. Table 5 shows the robustness comparison between the proposed method and the previous methods against various adversarial attacks on the MNIST dataset.
For the CIFAR-10 dataset, the conventionally trained DNN model without the denoising network had an accuracy of 96.01% for the clean images, but it was vulnerable to all adversarial attacks. The conventionally trained DNN model with the denoising network had lower clean-image accuracy but was more robust against all adversarial attacks. The DNN models trained using MAT, TRADES, and ATTA were more robust than the conventionally trained DNN model without the denoising network against all attack methods, but their classification accuracies for the clean images were lower. In addition, the DNN models trained using MAT, TRADES, and ATTA were more robust than the conventionally trained DNN model with the denoising network against some adversarial attacks. The DNN models trained using the proposed method had higher clean-image accuracies than the conventionally trained DNN model with the denoising network and the DNN models trained by MAT, TRADES, and ATTA. In addition, the DNN models trained using the proposed method were more robust than the DNN models trained using MAT, TRADES, and ATTA against all adversarial attacks. Table 6 shows the robustness comparison between the proposed method and the previous methods against various adversarial attacks on the CIFAR-10 dataset.
For the CIFAR-100 dataset, the conventionally trained DNN model without denoising network had accuracy of 79.45% for the clean images, but it was very vulnerable to all adversarial attacks. The DNN model with denoising network had lower accuracy than the DNN model without denoising network for the clean images, but it was more robust than the DNN model without denoising network against all adversarial attacks. The DNN models trained using MAT, TRADES, and ATTA had lower accuracies than the conventionally trained DNN model with denoising network for the clean images, and they were less robust against all adversarial attacks. The DNN model trained using the proposed method with MAT had lower accuracy than the conventionally trained DNN model with denoising network for the clean images, but it had higher accuracy than the DNN models trained using TRADES and ATTA for the clean images. In addition, the DNN model trained using the proposed method with MAT was more robust than the conventionally trained DNN model with denoising network against FGSM, PGD, and MIM attacks. Additionally, it was more robust than the DNN models trained using MAT and TRADES against all adversarial attacks. Table 7 shows the robustness comparison between the proposed method and the previous methods against adversarial attacks on the CIFAR-100 dataset.
For the GTSRB dataset, the conventionally trained DNN model without the denoising network had an accuracy of 98.69% for the clean images, but it was very vulnerable to all adversarial attacks. The DNN model with the denoising network had lower clean-image accuracy, but it was more robust against all adversarial attacks. In addition, the DNN model with the denoising network was more robust than the DNN models trained using MAT, TRADES, and ATTA against all adversarial attacks. The DNN model trained using the proposed method with MAT had higher clean-image accuracy than the conventionally trained DNN model with the denoising network. In addition, the DNN models trained using the proposed method were as robust as the conventionally trained DNN model with the denoising network against all adversarial attacks. Table 8 shows the robustness comparison between the proposed method and the previous methods against adversarial attacks on the GTSRB dataset.

Discussion
In this section, we discuss the classification performance for clean images, robustness against adversarial attacks, and the denoising network.

Classification performance
For the clean images, the conventionally trained DNN model with a denoising network has lower accuracy than the model without a denoising network because the conventionally trained DNN model is not trained on the denoised clean images and the important features of the clean images may be removed by the denoising network. The adversarially trained DNN model also has lower accuracy than the conventionally trained DNN model because MAT [14] trains the DNN model using only adversarial examples, whereas TRADES [15] trains the DNN model by adjusting the balance for the importance of natural and robust errors. Furthermore, ATTA [28] generates adversarial examples using MAT or TRADES by accumulating adversarial perturbations in every epoch to reduce the computational cost of adversarial training. Therefore, the DNN model trained using ATTA has lower accuracy than the conventionally trained DNN model. Our proposed method uses not only clean images and adversarial examples but also denoised clean images and denoised adversarial examples to train the DNN model. Therefore, the proposed method trains the DNN model using four times more images than conventional training, consequently enabling the DNN model trained using the proposed method to correctly classify the denoised clean images.

Robustness against adversarial attacks
Our proposed method results in greater robustness than previous adversarial training methods, including MAT, TRADES, and ATTA. The proposed method repeatedly trains the denoising network to remove the adversarial perturbations on adversarial examples by minimizing the difference between the adversarial examples reconstructed by the denoising network and the original images.

Denoising network
In this study, we used only an autoencoder-based denoising network to mitigate adversarial perturbations on adversarial examples, and we were able to defend DNN models against various adversarial attacks with this simple network. However, there are many other denoising network structures, including NRP [13] and CAP-GAN [26], which are more effective than autoencoder-based denoising networks and could be used instead in the proposed method, and we would expect them to yield higher classification accuracy for clean images and more robust performance against adversarial attacks. However, because such denoising networks have complex architectures, using them in the proposed method would require a longer time to train them together with the DNN model.

Limitations
Our study has the following limitations. First, the computational cost of adversarial training is higher than that of conventional training because the DNN model is trained on adversarial examples generated to deceive it, which lengthens training time. Our proposed method requires even more time than standard adversarial training because it trains the DNN model and the denoising network simultaneously.
Second, denoising networks are vulnerable to adaptive attacks, such as backward pass differentiable approximation (BPDA) attacks [33]. The proposed method generates adversarial examples to deceive the DNN model by assuming that adversaries not only know the DNN model but also know exactly the denoising network that the defender uses. In other words, adversaries generate adversarial examples that can deceive the DNN model even if adversarial perturbations on adversarial examples are mitigated by the denoising network. If adversaries perform a BPDA attack against the proposed method, we anticipate that the proposed method would not correctly classify the adversarial examples generated.

Conclusion and future work
We proposed a hybrid adversarial training method that trains a DNN model and a denoising network simultaneously. The proposed method trains the DNN model such that it correctly classifies non-denoised clean images and adversarial examples as well as denoised clean images and adversarial examples. We showed that the DNN model trained using the proposed method has higher classification accuracy than the conventionally trained DNN model with a denoising network on the MNIST, CIFAR-10, CIFAR-100, and GTSRB datasets. In addition, the DNN model trained using the proposed method was more robust against adversarial attacks than the DNN model trained using previous adversarial training methods on all datasets. However, the denoising network is vulnerable to adaptive attacks such as BPDA. In future studies, we will focus on adversarial defense methods that are robust to adaptive attacks. In addition, we will extend our hybrid adversarial training method to 3D engineering applications [36][37][38].
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.