Abstract
Effective generation of adversarial examples can help improve the training of neural models to resist adversarial attacks. Watermark-based adversarial example generation methods treat the watermark as meaningful noise with which to perturb the model's input, so the resulting adversarial examples remain similar to the original images yet are harder to defend against. However, existing watermark-based methods adopt visible watermarking, which may reduce the attack success rate because adversarial examples carrying visible watermarks are easily perceived by humans. To address this issue, we propose a novel approach that generates adversarial examples by combining frequency-domain and color-space perturbations. In particular, we use the wavelet transform to hide the watermark, making it invisible while introducing noise into the frequency components of the image, and we adopt Lab color-space similarity as the optimization criterion for controlling the perturbation. Experimental results show that, on the same dataset, the maximum attack success rate of the adversarial examples generated by our algorithm reaches 98.56%. Moreover, the generated adversarial examples transfer well: attacks against VGG, ResNet-101, and Inception-v3 succeed at rates above 95%, and the color-space perturbation optimization achieves an average RGB channel similarity of 97.22%.
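Since the abstract describes the method only at a high level, a brief sketch may help make its two building blocks concrete: hiding a watermark via a 2-D wavelet transform and scoring perceptual closeness in the Lab color space. This is a minimal illustration only, assuming a Haar wavelet, an additive embedding rule with strength alpha, and a Delta-E-based similarity score; embed_watermark_dwt, lab_similarity, and every constant below are illustrative choices, not the paper's actual implementation.

# Sketch of the abstract's two components (assumptions: Haar wavelet,
# additive embedding, Delta-E 76 similarity; none confirmed by the paper).
import numpy as np
import pywt
from skimage import color

def embed_watermark_dwt(image_gray: np.ndarray,
                        watermark: np.ndarray,
                        alpha: float = 0.05) -> np.ndarray:
    """Hide a watermark in the wavelet domain of a grayscale image.

    Adding the watermark to the diagonal detail coefficients (cD) keeps it
    invisible in the spatial domain while injecting the frequency-domain
    noise the abstract describes.
    """
    cA, (cH, cV, cD) = pywt.dwt2(image_gray, 'haar')
    wm = np.resize(watermark, cD.shape)           # fit watermark to the subband
    cD_marked = cD + alpha * wm                   # additive embedding (assumed rule)
    return pywt.idwt2((cA, (cH, cV, cD_marked)), 'haar')

def lab_similarity(img_a_rgb: np.ndarray, img_b_rgb: np.ndarray) -> float:
    """Perceptual closeness in Lab space, used as the perturbation-control
    term: values near 1 mean the marked image looks close to the original."""
    lab_a = color.rgb2lab(img_a_rgb)
    lab_b = color.rgb2lab(img_b_rgb)
    delta_e = np.linalg.norm(lab_a - lab_b, axis=-1)  # per-pixel Delta-E 76
    return float(1.0 / (1.0 + delta_e.mean()))        # map distance to (0, 1]

In an attack loop of this shape, one would embed the watermark, measure lab_similarity against the clean image, and tune alpha so the perturbation stays above a chosen similarity threshold while still flipping the classifier's prediction.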
Acknowledgment
This work was supported in part by the Key Research and Development Program of Hainan Province under grant No. ZDYF2020008, by the Natural Science Foundation of Hainan Province under grant Nos. 2019RCO88 and 2019CXTD400, and by grants from the State Key Laboratory of Marine Resource Utilization in South China Sea and the Key Laboratory of Big Data and Smart Services of Hainan Province.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Xu, Z., Ye, C., Dong, S. (2023). BWA: Research on Adversarial Disturbance Space Based on Blind Watermarking and Color Space. In: Hung, J.C., Chang, J.W., Pei, Y. (eds.) Innovative Computing Vol 2 - Emerging Topics in Future Internet. IC 2023. Lecture Notes in Electrical Engineering, vol. 1045. Springer, Singapore. https://doi.org/10.1007/978-981-99-2287-1_95