Cycle GAN-Based Attack on Recaptured Images to Fool both Human and Machine

  • Wei Zhao
  • Pengpeng Yang
  • Rongrong Ni (corresponding author)
  • Yao Zhao
  • Wenjie Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11378)

Abstract

Recapturing can be used to hide the traces left by operations such as JPEG compression, copy-move forgery, etc. However, various detectors have been proposed to detect recaptured images. To counter these techniques, in this paper we propose a method that translates recaptured images into fake “original” images that fool both humans and machines. Our method is built on Cycle-GAN, a classic framework for image translation. To obtain better results, two improvements are introduced: (1) since the difference between original and recaptured images is concentrated in the high-frequency components, high-pass filters are used in the generator and the discriminator to improve performance; (2) to guarantee that the image content is not changed too much, a penalty term, the L1 norm of the difference between the images before and after translation, is added to the loss function. Experimental results show that the proposed method not only eliminates the visual traces left by recapturing but also effectively changes the statistical characteristics. A minimal sketch of these two modifications is given below.
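
The following is a minimal PyTorch-style sketch of the two modifications described in the abstract: a fixed high-pass filter feeding the generator and discriminator, and an L1 content-preservation penalty added to the adversarial loss. The Laplacian kernel, the way the filtered residual is concatenated to the network inputs, the LSGAN-style adversarial term, and the weight lambda_l1 are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the modified translation loss (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HighPassFilter(nn.Module):
    """Fixed (non-learnable) high-pass filtering with a Laplacian kernel (assumed)."""

    def __init__(self, channels=3):
        super().__init__()
        kernel = torch.tensor([[0., -1., 0.],
                               [-1., 4., -1.],
                               [0., -1., 0.]])
        # One kernel per channel, applied depthwise.
        self.register_buffer("kernel", kernel.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=1, groups=self.channels)


def translation_loss(G_R2O, D_O, hpf, recaptured, lambda_l1=10.0):
    """Adversarial loss plus L1 penalty between the image before and after translation.

    G_R2O : generator mapping recaptured images to 'original-looking' images
    D_O   : discriminator for the original-image domain
    hpf   : HighPassFilter whose residual is concatenated to G and D inputs (assumed placement)
    """
    fake_original = G_R2O(torch.cat([recaptured, hpf(recaptured)], dim=1))
    # LSGAN-style adversarial term: push translated images toward the 'original' domain.
    adv = torch.mean((D_O(torch.cat([fake_original, hpf(fake_original)], dim=1)) - 1.0) ** 2)
    # Penalty term from the abstract: L1 norm of the difference before/after translation.
    content = F.l1_loss(fake_original, recaptured)
    return adv + lambda_l1 * content
```

In this sketch the generator and discriminator are expected to accept six input channels (image plus high-pass residual); in a full Cycle-GAN setup the symmetric loss for the original-to-recaptured direction and the cycle-consistency terms would be added in the same way.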

Keywords

Recaptured images · Cycle-GAN · Fool human and machine

References

  1. Lyu, S., Farid, H.: How realistic is photorealistic? IEEE Trans. Signal Process. 53, 845–850 (2005)
  2. Cao, H., Alex, K.C.: Identification of recaptured photographs on LCD screens. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1790–1793 (2010)
  3. Li, R., Ni, R., Zhao, Y.: An effective detection method based on physical traits of recaptured images on LCD screens. In: Shi, Y.-Q., Kim, H.J., Pérez-González, F., Echizen, I. (eds.) IWDW 2015. LNCS, vol. 9569, pp. 107–116. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-31960-5_10
  4. Yang, P., Ni, R., Zhao, Y.: Recapture image forensics based on Laplacian convolutional neural networks. In: Shi, Y.Q., Kim, H.J., Perez-Gonzalez, F., Liu, F. (eds.) IWDW 2016. LNCS, vol. 10082, pp. 119–128. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-53465-7_9
  5. Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
  6. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004 (2017)
  7. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593 (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Wei Zhao (1, 2)
  • Pengpeng Yang (1, 2)
  • Rongrong Ni (1, 2) (corresponding author)
  • Yao Zhao (1, 2)
  • Wenjie Li (1, 2)
  1. Institute of Information Science, Beijing Jiaotong University, Beijing, China
  2. Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing, China