Protecting image privacy through adversarial perturbation

  • 1174: Futuristic Trends and Innovations in Multimedia Systems Using Big Data, IoT and Cloud Technologies (FTIMS)
  • Published in: Multimedia Tools and Applications

Abstract

In the current digital era, users of social media platforms upload photos daily, and these photos often contain a tremendous amount of private information. Although this information can help enterprises provide users with better services, it is also at risk of being disclosed; in particular, deep learning techniques developed for object detection make it easy to extract users' private information from photos. We therefore propose an approach that prevents DNN detectors from detecting private objects, especially the human body. We develop an algorithm that exploits an inherent vulnerability of deep learning models, known as the adversarial example problem, and integrate it into a general framework that we also propose in this work. We evaluate our method on the task of degrading the performance of DNN detectors on the PASCAL VOC dataset. Our algorithm reduces the recall of human detection from 81.1% to 18.0% while altering pixel values only slightly. The results show that our method performs remarkably well at preventing privacy from being exposed by DNN detectors, while causing very limited degradation to the visual quality of images.
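The paper's exact algorithm and framework are given in the full text; as a rough illustration of the core idea only, the sketch below mounts a generic PGD-style attack that iteratively perturbs an image, within an L∞ budget, to push down a detector's confidence for the "person" class. The detector choice (torchvision's COCO-pretrained Faster R-CNN), the person label id, and the budget and step sizes are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the paper's algorithm): iteratively perturb an image
# within an L-infinity budget so that a detector's confidence for the
# "person" class drops below its detection threshold.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

PERSON_LABEL = 1   # "person" class id in torchvision's COCO-trained detector
EPS = 8 / 255      # L-infinity perturbation budget (assumed)
STEP = 1 / 255     # per-iteration step size (assumed)
ITERS = 40         # number of attack iterations (assumed)

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()
for p in model.parameters():      # only the input image should receive grads
    p.requires_grad_(False)

def hide_people(image: torch.Tensor) -> torch.Tensor:
    """image: float tensor in [0, 1], shape (3, H, W)."""
    x = image.clone()
    for _ in range(ITERS):
        x = x.detach().requires_grad_(True)
        det = model([x])[0]                      # dict: boxes, labels, scores
        scores = det["scores"][det["labels"] == PERSON_LABEL]
        if scores.numel() == 0:                  # no person detections left
            break
        scores.sum().backward()                  # gradient of person confidence
        with torch.no_grad():
            x = x - STEP * x.grad.sign()               # step down the confidence
            x = image + (x - image).clamp(-EPS, EPS)   # project back into budget
            x = x.clamp(0.0, 1.0)                      # keep valid pixel range
    return x.detach()
```

Only the input pixels are perturbed; the detector itself is untouched, which is what makes the protection deployable on the user's side before upload.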




Acknowledgements

This study is partially supported by the National Key R&D Program of China (Nos. 2018YFB2101100 and 2019YFB2101600), the National Natural Science Foundation of China (62176016), the Guizhou Province Science and Technology Project "Research and Demonstration of Sci. & Tech Big Data Mining Technology Based on Knowledge Graph" (supported by Qiankehe [2021] General 382), the Training Program of the Major Research Plan of the National Natural Science Foundation of China (Grant No. 92046015), and the Beijing Natural Science Foundation Program and Scientific Research Key Program of Beijing Municipal Commission of Education (Grant No. KZ202010025047).

Author information

Corresponding author

Correspondence to Chao Tong.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: More comparable examples

We list more examples demonstrating the performance of our method in Table 4. Specifically, the table shows that, compared with other obfuscation methods, our method preserves the visual quality of images while still preventing private objects from being detected, and can therefore be used in practice.

Table 4 Different obfuscation methods applied to the same original images (first column) to obtain the corresponding processed images (last four columns), all evaluated with the confidence threshold set to 0.5. The adversarial images generated by our method (\(\gamma = \infty\)) maintain much better visual quality
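As a hint of how this comparison can be scored, the following is a minimal sketch assuming torchvision-style detector outputs: it computes person-detection recall at the 0.5 confidence threshold used in Table 4. The greedy IoU matching rule, its 0.5 threshold, and the person label id are assumptions, not the paper's exact evaluation protocol.

```python
# Hypothetical recall check: count how many ground-truth person boxes are
# still matched by a confident detection, before vs. after obfuscation.
import torch
from torchvision.ops import box_iou

CONF_THRESH = 0.5   # confidence threshold used throughout Table 4
IOU_THRESH = 0.5    # IoU needed to count a ground truth as detected (assumed)

def person_recall(det: dict, gt_boxes: torch.Tensor,
                  person_label: int = 1) -> float:
    """det: detector output with 'boxes', 'labels', 'scores';
    gt_boxes: (N, 4) ground-truth person boxes in xyxy format."""
    keep = (det["labels"] == person_label) & (det["scores"] > CONF_THRESH)
    pred = det["boxes"][keep]
    if gt_boxes.numel() == 0:
        return 1.0                                # nothing to detect
    if pred.numel() == 0:
        return 0.0                                # every person went undetected
    iou = box_iou(gt_boxes, pred)                 # (num_gt, num_pred) IoU matrix
    hits = (iou.max(dim=1).values >= IOU_THRESH).sum().item()
    return hits / gt_boxes.shape[0]
```

Averaging this quantity over a dataset before and after perturbation reproduces, in spirit, the recall drop reported in the abstract (81.1% to 18.0%).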

About this article

Cite this article

Liang, B., Tong, C., Lang, C. et al. Protecting image privacy through adversarial perturbation. Multimed Tools Appl 81, 34759–34774 (2022). https://doi.org/10.1007/s11042-021-11394-x
