1 Introduction

Deep learning is used in diverse areas, including medicine, engineering, and fraud detection. Because deep learning models learn from large amounts of data, collecting sufficient data is essential to guarantee their performance. However, collecting training data carries inherent risks: the data may contain sensitive information, and improper use or a successful attack can cause serious privacy infringement. To address these privacy issues, the Google AI team introduced a training concept called Federated Learning, which enables participants to learn collaboratively without exposing data that might include sensitive information [1]. In the Federated Learning process, a baseline model is distributed to participants, and the participants train it with their own data. When training is completed, the trained model parameters or gradients are collected to update the global model, and the process repeats until the model returns satisfactory results. Federated Learning is considered relatively safe with respect to privacy because it collects only trained model parameters instead of the data itself.

However, a few recent studies have shown that stealing hidden data from a trained model is far from impossible. For example, Shokri et al. introduced a membership inference attack that determines whether a given record is included in the training data [2], and Fredrikson et al. introduced a model inversion attack that extracts target information from model parameters [3]. Attackers have a significant advantage in stealing data when the trained model is publicly accessible, for example when developers offer machine learning as a service (MLaaS). Differential privacy is a widely used defense that protects sensitive data from such attackers. Its main idea is to perturb query responses with randomly distributed noise so that an adversary cannot tell whether the target information is included in the dataset. However, deploying differential privacy raises a few issues. Because it relies on noise to defend against intrusive queries, it inevitably involves a trade-off between utility and privacy. In addition, the complicated theoretical background of differential privacy makes optimal implementation difficult even for professionals.

To overcome these inherent issues of differential privacy, we introduce a new defense strategy against such privacy attacks based on data augmentation. The original purpose of data augmentation is to facilitate feature extraction by increasing the amount of training data through alteration of existing data or synthesis of new data, so that the model achieves better performance. Our idea is based on the observation that image data is often significantly modified when certain types of augmentation are applied, so augmentation can play a role similar to the randomized noise deployed in differential privacy-based defenses that protect sensitive data.

This paper provides meaningful results from comprehensive experiments, which demonstrate the effectiveness of augmentation strategies as a potential way to defend against such attacks while preserving sufficient utility. Our experiments use the CIFAR-10 and CIFAR-100 datasets with 14 different augmentations over a magnitude range of 1 to 9, and as a result we find optimized augmentation strategies for each label of the dataset that outperform the differential privacy-based defense strategy. For CIFAR-10, for example, the solarize augmentation, which inverts pixels above a threshold, shows 16\(\%\) better model accuracy and 11.6\(\%\) better attack accuracy at magnitude 7, which inverts approximately 78\(\%\) of the pixels in an image. The posterize augmentation achieves approximately 4.47\(\%\) better model accuracy and 12.42\(\%\) better attack accuracy when it removes 7 bits from each RGB channel. For CIFAR-100, we separately sampled 10 labels that show notable performance and 10 labels that show limited performance compared to differential privacy-based defense strategies. The difference between the accuracy of conventional differential privacy and our augmentation-based defense is reported as an advantage score. Finally, we discuss future research on finding optimized augmentation strategies for a given image type through deep reinforcement learning. In addition to the results, we discuss how augmentations modify image data and how they successfully prevent the leakage of training data.

2 Literature review

2.1 Model inversion attack

Fredrikson et al. proposed the first model inversion attack in 2014 [3]. Their research showed that an adversary with black-box access and some non-sensitive attributes can infer the genotype of a victim from a linear regression model. Fredrikson et al. then published an extended model inversion attack that recovers image data from a facial recognition system, although it could not reconstruct recognizable objects in that setting [4]. Hidano et al. introduced an enhanced version of Fredrikson's attack [3] that does not require non-sensitive attributes, injecting malicious data owned by the adversary to modify the target model [5]. In 2019, Zhu et al. showed that private data can be leaked from shared gradients by minimizing the difference between the original gradient and a dummy gradient [6]. Zhao et al. continued this line of work, recovering the ground-truth image with improved optimization performance and fewer iterations [7]. However, the attacks proposed by Zhu et al. and Zhao et al. were only tested on shallow neural networks and could not reconstruct large images. Follow-up work with notable reconstruction quality was introduced by Geiping et al. [8], who showed that neural networks can be attacked regardless of their depth or the image size. The authors also showed that multiple images can be reconstructed from model parameters at the same time.

2.2 Differential privacy

In previous model inversion research, many authors mentioned that differential privacy could be a solution for diverse privacy attacks, including model inversion [3, 9, 10]. The core concept of differential privacy is the use of Laplacian and Gaussian noise based on the \(l_1\) and \(l_2\) norms, introduced by Dwork et al. and McSherry et al. [11, 12]. Recent studies have adopted differential privacy for secure deep learning [13,14,15,16]. Shokri and Shmatikov introduced distributed selective stochastic gradient descent (DSSGD), which injects Laplacian noise into the optimization process for collaborative deep learning [13]. Abadi et al. proposed an improved strategy called differentially private stochastic gradient descent (DP-SGD) that uses Gaussian noise and a moments accountant to control the amount of injected noise and track the privacy budget spent [14]. Phan et al. increased training efficiency by adaptively controlling the amount of noise in the optimization process based on feature importance [15]. Mironov proposed an enhanced differential privacy concept called Rényi Differential Privacy (RDP), based on the Rényi divergence of order \(\alpha \), which measures the divergence between two adjacent datasets [16]. Truex et al. showed that differential privacy can be used in federated learning by adding a local differential privacy module that guarantees the privacy of sensitive data before parameters are sent to the centralized server [17]. Girgis et al. proposed a stochastic gradient descent algorithm that passes sampled clients and the data points of the chosen clients through a shuffler, thereby preserving privacy by hiding which clients were chosen in a federated environment [18].

2.3 Data augmentation

Studies of data augmentation have advanced alongside computer vision, exploring various methods including, but not limited to, traditional augmentations, Generative Adversarial Networks (GANs), reinforcement learning, and automated machine learning. Several innovative augmentation strategies have been proposed, such as Cutout [19] and Random Erasing [20], which mask out an arbitrarily chosen area of an image. In addition, Wu et al. introduced multiple color augmentation strategies that adjust color pixels to diversify input features [21]. Unlike traditional augmentation strategies that modify the given input, GANs create entirely new data similar to the original images through synthesis. GAN-based augmentations have been used for classification [22, 23], privacy de-identification [24], synthesis of high-resolution images [25, 26], and so on. Several recent studies have adopted automated machine learning to find optimized augmentations based on reinforcement learning [27], Bayesian optimization [28], differentiable policy search [29], and grid search [30].

3 Preliminaries

3.1 Federated learning

The purpose of Federated Learning is to facilitate the use of data stored in distributed data centers. The idea of Federated Learning introduced by McMahan et al. does not require data sharing between the centralized server and participants, but enables collaborative learning as a federation [1]. A formal description is as follows: the centralized server distributes the model \(W^k\) to n local clients, and each client i trains the received model with its own dataset to obtain \(W_i^k\). The local updates \(U_i := W_i^{k} - W^{k}\) made by the clients are then aggregated and averaged as \(U^{'}=\frac{1}{n}\sum _i U_i\), and the server produces the globally updated model \(W^{k+1} = W^{k}- \eta U'\), where \(\eta \) denotes the learning rate. The updated model \(W^{k+1}\) is then redistributed to the clients, and the process repeats. Figure 1 visualizes the overall process of federated learning.
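To make one communication round concrete, the following is a minimal FedAvg-style sketch written for this paper's description rather than taken from [1]; the `clients` list and its `train_local` method are hypothetical placeholders for the participants' local training routines.

```python
import copy
import torch

def federated_round(global_model, clients):
    """One FedAvg-style round: distribute W^k, collect the clients' weights W_i^k,
    and average them. Averaging the client weights is equivalent to applying the
    mean update U' = (1/n) * sum_i (W_i^k - W^k) to the global model W^k."""
    client_states = []
    for client in clients:
        local_model = copy.deepcopy(global_model)       # distribute W^k
        client.train_local(local_model)                 # local training on private data (hypothetical API)
        client_states.append(local_model.state_dict())  # collect W_i^k
    # W^{k+1}: element-wise mean of the client weights
    averaged = {name: torch.stack([s[name].float() for s in client_states]).mean(dim=0)
                for name in global_model.state_dict()}
    global_model.load_state_dict(averaged)              # redistributed in the next round
    return global_model
```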

Fig. 1 Federated learning

3.2 Differentially private stochastic gradient descent

The concept of differential privacy proposed by Dwork et al. [31] is as follows: given two datasets \(D_1, D_2\) that differ in at most one element, denoted by \(\Delta (D_1, D_2)\le 1\), a randomized computation F gives \((\varepsilon ,\delta )\)-privacy if

$$\begin{aligned} Pr[F(D_1) \in S] \le e^\varepsilon \cdot Pr[F(D_2) \in S] + \delta , \end{aligned}$$
(1)

where F denotes the query together with the noise added to it, S denotes any set of possible outputs of F, \(\varepsilon \) denotes the maximum distance between the outputs of the same query on \(D_1\) and \(D_2\) (the privacy loss), and \(\delta \) denotes the probability of accidental information leakage. The additive noise for differential privacy is generated by the Laplace mechanism or the Gaussian mechanism. In deep learning, the Gaussian mechanism has a few advantages over the Laplace mechanism: it allows the use of either \(l_1\) or \(l_2\) sensitivity depending on the purpose, whereas the Laplace mechanism only supports \(l_1\) sensitivity, so it offers more flexibility. Moreover, when the Gaussian mechanism uses \(l_2\) sensitivity, which is significantly lower than the \(l_1\) sensitivity required by the Laplace mechanism, it requires less noise and a smaller privacy budget. Since the amount of noise significantly impacts the performance of a neural network, which consists of a series of weights, [14] adopted the Gaussian mechanism for differentially private deep learning, described as follows:

Given stochastic gradient descent as the optimizer, a gradient \(g_{t}(x_i)=\nabla _\theta \mathcal {L}(\theta , x_i)\) is computed for each example in a randomly selected small batch \(\{x_1, x_2, \ldots , x_n\}\), and each gradient is clipped with the clipping threshold C, i.e., \(g_t(x_i)/ \max (1, \frac{\left\| g_t(x_i) \right\| _2}{C} )\). Then Gaussian noise \(\mathcal {N}(0, \sigma ^2)\) is injected, where \(\mathcal {N}(0, \sigma ^2)\) denotes a Gaussian distribution with variance \(\sigma ^2\).
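As an illustration of this step, a minimal per-example clipping and noising routine might look like the sketch below; this is a simplified reading of the description above, not the reference DP-SGD implementation of [14], and the `dp_sgd_step` helper and its arguments are our own.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, targets, optimizer, clip_C=1.0, sigma=0.5):
    """One DP-SGD style step: clip each per-example gradient to norm C,
    sum, add Gaussian noise, then average and apply the optimizer."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x_i, y_i in zip(batch, targets):                      # per-example gradients g_t(x_i)
        loss = loss_fn(model(x_i.unsqueeze(0)), y_i.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # clip: g_t(x_i) / max(1, ||g_t(x_i)||_2 / C)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = 1.0 / max(1.0, (total_norm / clip_C).item())
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    for p, s in zip(params, summed):
        noise = sigma * clip_C * torch.randn_like(p)          # Gaussian noise
        p.grad = (s + noise) / len(batch)
    optimizer.step()
    optimizer.zero_grad()
```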

3.3 Exploitation of model for data reconstruction

The purpose of the model inversion attack is to extract training data that is not open to the public. The attack model for data reconstruction introduced in [8] steals significant information by exploiting the model parameters shared in federated learning. The setting is as follows: the participants receive the initial model parameters \(\theta ^k\) from a centralized server, train on their data \(x_i\) with labels \(y_i\), and send back the gradient \(\nabla \mathcal {L}(x_i,y_i) \) so that the server can update the model parameters to \(\theta ^{k+1}\). The adversary can extract data from \(\nabla \mathcal {L}(x_i,y_i) \) because the angle between two data points along the gradient descent steps carries information about how the prediction changes. Previous studies [6, 7] minimize the difference between the gradients of dummy data \((x'_i,y_i)\) and real data \((x_i,y_i)\) measured by the Euclidean distance:

$$\begin{aligned} ||\nabla \mathcal {L}(x'_i,y_i) - \nabla \mathcal {L}(x_i,y_i)||^2. \end{aligned}$$
(2)

However, because this objective is computationally inefficient and suffers from initialization issues on the practical architectures used in recent studies, Geiping et al. instead minimize the gradient difference through a cosine similarity that measures the similarity between gradient vectors [8]:

$$\begin{aligned} \frac{\nabla \mathcal {L}(x_i,y_i)\cdot \nabla \mathcal {L}(x'_i,y_i)}{max(||\nabla \mathcal {L}(x_i,y_i)||\cdot ||\nabla \mathcal {L}(x'_i,y_i)||, \varepsilon )}, \end{aligned}$$
(3)

where \(\varepsilon \) is a small value that prevents division by zero. Because the gradient of an image always carries significant information that can be extracted through cosine similarity, the adversary can extract training data even from pre-trained models. Figure 2 visualizes how the gap between the gradient vectors is minimized.
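For concreteness, the cosine-similarity objective in (3) can be sketched as a reconstruction loop like the following; it is a simplified illustration rather than the attack code of [8]. `true_grads` stands for the gradients the adversary observed, `label` is the known or guessed target label as a tensor of shape (1,), and the default optimizer, learning rate, and iteration budget mirror the settings described in Section 4.1.

```python
import torch

def cosine_reconstruction(model, loss_fn, true_grads, label, img_shape,
                          iterations=4000, lr=0.1, eps=1e-8):
    """Recover an input by maximizing the cosine similarity (Eq. 3) between the
    gradients produced by a dummy image x'_i and the observed gradients."""
    dummy = torch.randn(1, *img_shape, requires_grad=True)        # x'_i, random init
    optimizer = torch.optim.Adam([dummy], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(iterations):
        optimizer.zero_grad()
        loss = loss_fn(model(dummy), label)
        dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
        num = sum((dg * tg).sum() for dg, tg in zip(dummy_grads, true_grads))
        den = (torch.sqrt(sum(dg.pow(2).sum() for dg in dummy_grads)) *
               torch.sqrt(sum(tg.pow(2).sum() for tg in true_grads)))
        rec_loss = 1.0 - num / den.clamp_min(eps)                 # minimize 1 - cosine similarity
        rec_loss.backward()
        optimizer.step()
    return dummy.detach()
```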

Fig. 2 Model inversion attack

3.4 Data augmentation

The purpose of data augmentation is to facilitate the extraction of input features by increasing the amount of training data \(D=(x_1,x_2,\ldots ,x_t)\) through modification of existing data or synthesis of new data. As the amount of data increases from D to \(D''= D + D'\), where \(D'=(x'_1,x'_2,\ldots ,x'_t)\), the training model regularizes better, so overfitting is mitigated. Augmentation applied to training data generally consists of geometric transformations and color space augmentations [32]. Geometric transformations include various affine transformations that change the geometric location of an image while preserving collinearity, and color space augmentations alter the RGB values of image pixels in various ways to remove biases in the training data. Depending on the level of modification, data augmentation can significantly expand the size of the training data with different features.
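A brief sketch of how a dataset can be expanded from D to \(D''\) with one geometric and one color space transform is shown below; the specific transforms and parameters are illustrative choices, not the augmentations evaluated later in this paper.

```python
from torch.utils.data import ConcatDataset
from torchvision import datasets, transforms

# Original data D
base = datasets.CIFAR10(root="./data", train=True, download=True,
                        transform=transforms.ToTensor())

# Augmented copy D': one geometric and one color space transform (illustrative)
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),  # geometric transformation
    transforms.ColorJitter(brightness=0.4, contrast=0.4),       # color space augmentation
    transforms.ToTensor(),
])
augmented = datasets.CIFAR10(root="./data", train=True, transform=augment)

# D'' = D + D'
expanded = ConcatDataset([base, augmented])
print(len(base), len(expanded))   # 50000 100000
```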

4 Materials and methods

4.1 System settings

In this section, we define the attack model introduced in [8] and use it to compare the performance of differential privacy-based and augmentation-based defense strategies. The attack model exploits the parameters of a neural network to reconstruct the hidden training data. Unlike the attack models with limited performance introduced in previous research [6, 7], this attack works well in a realistic environment that uses a deep neural network and a pretrained model. We conducted reconstruction experiments with the Adam optimizer, a learning rate of 0.1, and a maximum of 4,000 iterations. Because the number of model parameters affects reconstruction time and quality, the model chosen for this experiment is VGG-11 without the fully connected layers, which balances reconstruction time and quality. The customized VGG-11 model has approximately 9.23M parameters. In the reconstruction process, we reconstruct one image per run to guarantee the highest quality of the reconstructed images. Two datasets are used in this experiment: CIFAR-10 and CIFAR-100, where the number indicates the number of labels in the dataset. Both datasets contain 60,000 32\(\times \)32 images equally distributed across the labels. We targeted reconstructing 30% of randomly sampled test data from each label to obtain reliable classification results. Two types of accuracy, model accuracy and attack accuracy, are reported throughout this paper to compare the conventional differential privacy-based defense and our augmentation-based defense: model accuracy is the classification accuracy on the data before reconstruction, and attack accuracy is the classification accuracy on the reconstructed data. A total of 8 RTX-2080 GPUs were used to reconstruct 780,000 images, 390,000 each for CIFAR-10 and CIFAR-100.
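For reference, the size of such a truncated VGG-11 can be checked with torchvision as in the sketch below; this assumes the truncation keeps only the convolutional feature extractor, and the exact custom architecture used in the experiments may differ slightly.

```python
from torchvision.models import vgg11

# Keep only the convolutional feature extractor of VGG-11 (drop the FC classifier).
backbone = vgg11().features
n_params = sum(p.numel() for p in backbone.parameters())
print(f"{n_params / 1e6:.2f}M parameters")   # about 9.2M, in line with the figure above
```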

4.2 Differential privacy settings

Based on the idea of differential privacy, which uses noise to perturb adversarial queries, we implemented a DP-SGD optimizer that stochastically scatters noise during the optimization process and serves as the conventional defense baseline. The default hyperparameter settings are as follows: the clipping threshold that bounds the maximum gradient norm is set to 1, denoted \(C=1\). We chose Gaussian over Laplacian noise, and the amount of Gaussian noise is controlled by its standard deviation \(\sigma \). Three noise levels are used in this experiment: for CIFAR-10, \(\sigma =\) 0.1, 0.5, and 1.0; for CIFAR-100, we reduced the noise to \(\sigma =\) 0.1, 0.2, and 0.3 because of its sensitivity to noise. Two well-known open-source differential privacy libraries, PyVacy [33] and Opacus [34], were referenced for a correct implementation. Since we aim to defend against a reconstruction attack that steals training data, we prepared six pre-trained models trained with the DP-SGD optimizer, each injecting a different amount of noise. The VGG-11 model introduced in the system settings was trained for 200 epochs to create each pre-trained model. The learning rate and \(\delta \) were set to \(1e^{-3}\) and \(1e^{-5}\), respectively.
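A hedged sketch of how such a DP-SGD training setup can be wired together with Opacus [34] is shown below. It is illustrative only: a toy random-tensor loader stands in for the CIFAR loaders, the hyperparameters echo the settings above, and the call pattern follows the Opacus 1.x `PrivacyEngine.make_private` API.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import vgg11
from opacus import PrivacyEngine

# Toy stand-ins for the real CIFAR data, just to show the wiring.
model = vgg11(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))),
    batch_size=16,
)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=0.5,   # sigma (0.1 / 0.5 / 1.0 were used for CIFAR-10)
    max_grad_norm=1.0,      # clipping threshold C = 1
)

for images, labels in train_loader:    # one pass shown; the paper trains 200 epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```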

4.3 Augmentation settings

The augmentations tested in this paper are the 14 augmentations introduced in [27,28,29,30], which include diverse color space augmentations and affine transformations. Each augmentation introduced below has nine magnitude levels, from 1 to 9, which determine the level of modification from low to high. Figure 3 shows how the introduced augmentations modify the input data.

Fig. 3 Visualized augmentation strategies

  • Autocontrast - maximizes the contrast of an image.

  • Brightness - adjusts the brightness of an image based on magnitude.

  • Color - adjusts the color balance of an image.

  • Contrast - adjusts the contrast of an image based on magnitude.

  • Equalize - equalizes the image histogram. The histogram of an image refers to the distribution of pixel intensities in a digital image.

  • Invert - inverts all pixels of an image.

  • Posterize - reduces the number of bits in each RGB color channel.

  • Rotate - rotates an image.

  • Sharpness - adjusts the blurriness of an image. Low magnitude returns a sharper image.

  • ShearX - shears an image along the X-axis based on magnitude.

  • ShearY - shears an image along the Y-axis based on magnitude.

  • Solarize - inverts the pixels of an image above a threshold determined by the magnitude.

  • TranslateX - moves an image along the X-axis.

  • TranslateY - moves an image along the Y-axis.

Due to their characteristics, three augmentations (autocontrast, equalize, and invert) return the same images regardless of magnitude. Also, the bit-reduction range of the posterize augmentation was defined as 0 to 4 bits in [27,28,29,30]; we extended the range to 0 to 7 bits to maximize the effect of the augmentation. Note that solarize inverts input pixels above a threshold, so it returns images equivalent to the invert augmentation when the magnitude is set to 9, where the threshold is 0.
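For reference, the solarize, posterize, and invert operations discussed above are available directly in PIL. The sketch below is illustrative: the sample file name is hypothetical, and the threshold shown for a high magnitude reflects our reading of the magnitude scale rather than an exact mapping.

```python
from PIL import Image, ImageOps

img = Image.open("cifar_sample.png").convert("RGB")   # hypothetical sample image

# Solarize: invert every pixel above a threshold. Magnitude 9 corresponds to
# threshold 0 (invert everything); the threshold of 56 here is an illustrative
# value for a high magnitude such as 7.
solarized = ImageOps.solarize(img, threshold=56)

# Posterize: keep only `bits` bits per RGB channel. Keeping 1 bit is the same
# as removing 7 bits, the extended maximum used in this paper.
posterized = ImageOps.posterize(img, bits=1)

# Invert: flip every pixel, equivalent to solarize with threshold 0.
inverted = ImageOps.invert(img)
```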

5 Experimental results

Approximately 780,000 images (390,000 each from CIFAR-10 and CIFAR-100) were reconstructed under various augmentation and differential privacy settings throughout the experiment. First, we performed reconstruction against the pre-trained models trained with the DP-SGD optimizer at different noise levels. Tables 1 and 2 below show the differential privacy-based accuracy for CIFAR-10 and CIFAR-100. Note that model accuracy is the accuracy on the original data with noise, and attack accuracy is the accuracy on data reconstructed under the DP-SGD optimizer. ResNet-50 and the VGG-11 without fully connected layers described previously were used for measuring accuracy and for building the pretrained models, respectively. As mentioned previously, the DP-based defense inevitably involves utility loss because the noise degrades optimization during classification. In addition, the DP-based defense still leaks some training data during reconstruction even when a sufficient amount of noise is injected, because the noise is scattered randomly in the optimization process. Reconstructing each image takes approximately 4 minutes under the differential privacy settings and 2 minutes for augmented images, on average.

Table 1 CIFAR-10 accuracy with the DP-SGD optimizer
Table 2 CIFAR-100 accuracy with the DP-SGD optimizer
Fig. 4 Reconstructed CIFAR-10 images with Gaussian noise

Fig. 5 Reconstructed CIFAR-100 images with Gaussian noise (10 samples)

Figures 4 and 5 illustrate the inherent issues of the differential privacy-based defense strategy. The frog image in Fig. 4 and the butterfly image in Fig. 5 remain clearly recognizable regardless of the injected noise, while adding more noise significantly hurts utility. We believe an augmentation-based defense strategy can be an improved solution to this problem, depending on the type of augmentation and the magnitude. Figures 6, 7, 8, and 9 show line graphs of how the accuracy of CIFAR-10 and CIFAR-100 changes with magnitude. The figures show that augmentations applied to a dataset directly distort the image data, so accuracy decreases as the magnitude increases.

Fig. 6 Model accuracy range for CIFAR-10 augmentations

Fig. 7 Attack accuracy range for CIFAR-10 augmentations

Fig. 8 Model accuracy range for CIFAR-100 augmentations

Fig. 9 Attack accuracy range for CIFAR-100 augmentations

In this paper, an advantage score, defined as follows, is used to quantify the efficiency of an augmentation strategy:

$$\begin{aligned} \text{ Advantage } \text{ score } = (MA_{Aug} - MA_{DP}) + (AA_{DP} - AA_{Aug}), \end{aligned}$$
(4)

where MA denotes model accuracy, AA denotes attack accuracy, DP refers to differential privacy with \(\sigma =0.5\) for CIFAR-10 and \(\sigma =0.2\) for CIFAR-100, and Aug denotes an augmentation scheme. That is, augmentations that return higher model accuracy and lower attack accuracy than the DP baseline receive a high advantage score. Tables 3 and 4 list the augmentation strategies that we found based on the advantage score. Each chosen augmentation and magnitude has lower attack accuracy and higher model accuracy than the DP-based defense strategy, meaning it successfully defends against the reconstruction attack while preserving a certain level of utility even though the augmentation significantly distorts the original data. From these results, we can see that color space augmentations are more efficient than geometric transformations in the majority of cases. In Table 3, geometric transformations appear only in a few places: TranslateX and TranslateY with magnitude 5 are ranked second and third for the deer label, and rotate with magnitude 9 is ranked second for the ship label. In Table 4, the columns for inefficient augmentations return notable advantage scores, but they leak a lot of information in the reconstructed data. The problem with geometric transformations is that they still leak information from the areas that are not augmented: because geometric augmentations only affect a certain region of the image, they cannot be considered privacy-preserving even when they show notable accuracy.
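Computing the advantage score is straightforward; a small helper with illustrative inputs (not values taken from the tables) is shown below.

```python
def advantage_score(ma_dp, aa_dp, ma_aug, aa_aug):
    """Advantage score (Eq. 4): utility gained plus attack accuracy removed,
    relative to the DP-SGD baseline. All inputs are accuracies in percent."""
    model_advantage = ma_aug - ma_dp     # positive when the augmentation keeps more utility
    attack_advantage = aa_dp - aa_aug    # positive when the augmentation leaks less data
    return model_advantage + attack_advantage

# Illustrative numbers only (not taken from the tables):
print(advantage_score(ma_dp=60.0, aa_dp=55.0, ma_aug=65.0, aa_aug=40.0))   # 20.0
```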

Table 3 Optimized augmentations for CIFAR-10
Table 4 Sampled augmentations for CIFAR-100
Fig. 10 CIFAR-10 advantage score matrix

Table 5 Advantage scores of best CIFAR-10 augmentations per label

In Fig. 10, we provide the advantage scores of CIFAR-10 and the best augmentations. A few labels achieve their best model and attack accuracy when a particular augmentation is applied. For example, the airplane label in CIFAR-10 performs best with solarize at magnitude 7, showing 16.07\(\%\) higher model accuracy and approximately 11.67\(\%\) lower attack accuracy. The truck label, located at the end of the graphs, has slightly lower model accuracy but significantly lower attack accuracy than DP-SGD when solarize with magnitude 3 is applied. Figures 11 and 12 provide the model advantage and attack advantage for CIFAR-10. For each label, the sum of the values in Figs. 11 and 12 gives the total advantage score reported in Table 5. Two outstanding results in Table 5 are the automobile and truck labels, which return very high advantage scores of 57.19 and 40.79, respectively. Both labels perform best when solarize with magnitude 7 is applied. The truck label shows lower model accuracy than the DP-SGD results, but its attack accuracy is 59.33% lower, which is outstanding defense performance.

Fig. 11 CIFAR-10 model accuracy advantages

Fig. 12 CIFAR-10 attack accuracy advantages

Fig. 13 CIFAR-100 advantage score matrix

Table 6 Sampled advantage scores of CIFAR-100 augmentations

As with CIFAR-10, Figs. 13, 14, and 15 provide the advantage scores of CIFAR-100 reported in Table 6. We sampled the 10 best labels of CIFAR-100 because of the large number of classes. As mentioned before, color space augmentations return high advantage scores in most cases. The orchid label returns a remarkable advantage score of 51.34, obtained when posterize with magnitude 7 is applied. Note that the CIFAR-100 differential privacy results show relatively low accuracy compared to CIFAR-10 because of the dataset's sensitivity to noise.

Fig. 14 CIFAR-100 model accuracy advantages

Fig. 15 CIFAR-100 attack accuracy advantages

In this experiment, we found that the posterize augmentation is notably effective for the majority of the labels introduced. Every image in the datasets consists of red, green, and blue channels; each channel has 8 bits and therefore supports up to \(2^8=256\) intensity levels. Because the magnitude determines the number of bits removed from each color channel, only a few color levels remain at high magnitude, so only the silhouette remains when the image is reconstructed. The advantage scores of the posterize augmentation are visualized in Figs. 16 and 17, and the overall advantage scores are provided in Table 7.
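The bit arithmetic behind this is simple: removing 7 of the 8 bits leaves 1 bit per channel, i.e., two intensity levels per channel and \(2^3=8\) representable colors in total, which is why only silhouettes survive. A small sketch (illustrative, not the augmentation code used in the experiments):

```python
import numpy as np

def posterize_bits(pixels: np.ndarray, bits_removed: int) -> np.ndarray:
    """Zero out the lowest `bits_removed` bits of each 8-bit RGB channel."""
    mask = 0xFF & ~((1 << bits_removed) - 1)   # bits_removed=7 -> mask 0b10000000
    return pixels & mask

rgb = np.array([[123, 200, 47]], dtype=np.uint8)   # one example pixel
print(posterize_bits(rgb, 7))   # [[  0 128   0]]: only two levels per channel survive
```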

Table 7 CIFAR-10 advantage scores of posterize (M9)
Fig. 16 CIFAR-10 posterize (M9) model accuracy advantages

Fig. 17 CIFAR-10 posterize (M9) attack accuracy advantages

Fig. 18 Reconstructed images with Gaussian noise/Posterize (M9)

Finally, Fig. 18 shows reconstructed images that visualize how the posterize augmentation prevents data leakage from the model inversion attack in the given environment. The overall CIFAR-10 results are in the appendix, and the CIFAR-100 results are available on our website (Footnote 1).

6 Future works

We presented the results of our augmentation-based defense strategy against privacy attacks that reconstruct training data from model parameters. A few outstanding augmentations with optimized magnitudes were found in this experiment; however, the entire search was performed manually, so there may be other augmentations that give even better results than differential privacy-based defense strategies. These techniques will be analyzed and implemented in our future work. Having shown that distorting images through augmentation can prevent reconstruction from model parameters, our future research will develop adaptive augmentation that provides noticeable classification accuracy while preventing data reconstruction. Our plan includes an automated solution that selects and applies the best augmentations to the given training data. For example, when a 256 \(\times \) 256 image is given as input, it has 65,536 pixels, each with \(256^3\) possible RGB values. Based on such observations, selecting the best action from this enormous space, that is, selecting and applying augmentations to the pixels that most affect the result of the reconstruction attack, will be the key to future research.

7 Conclusions

In this paper, we discussed federated learning and the inherent privacy risk of reconstructing hidden training data from the model parameters used in the training process. A traditional way to protect sensitive data from attacks that exploit model parameters is to deploy differential privacy with a sufficient amount of noise. However, deploying differential privacy raises several issues, such as controlling the amount of noise and optimizing the hyperparameters. To present a new privacy-preserving solution that outperforms differential privacy with a simple implementation process, we conducted multiple reconstruction experiments applying 14 augmentations with 9 magnitudes. Approximately 780,000 images were reconstructed during the experiment to secure a meaningful amount of data, and the results show that a few augmentations successfully preserve privacy against attacks exploiting model parameters while achieving noticeable classification accuracy compared to the differential privacy-based defense strategy. We found several good matches between augmentations and data classes in both datasets that return the best performance. The color space augmentations proposed in this paper show superior performance to geometric transformations, and the posterize augmentation at the highest magnitude works well for various image classes in both CIFAR datasets. Our augmentation-based defense strategy is easy to implement and can be applied uniformly to the whole dataset, contributing to a secure environment against model inversion attacks. Although the optimized augmentations and magnitudes for each label were chosen manually this time, adaptive augmentation algorithms and optimized hyperparameters that outperform the current results will be explored in future research based on deep reinforcement learning.