Thinking in Frequency: Face Forgery Detection by Mining Frequency-Aware Clues

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

As realistic facial manipulation technologies achieve remarkable progress, social concerns about the potential malicious abuse of these technologies have given rise to an emerging research topic: face forgery detection. It is extremely challenging, however, since recent advances can forge faces beyond the perception ability of human eyes, especially in compressed images and videos. We find that mining forgery patterns with awareness of frequency can be a cure, as the frequency domain provides a complementary viewpoint in which both subtle forgery artifacts and compression errors are well described. To introduce frequency into face forgery detection, we propose a novel Frequency in Face Forgery Network (F\(^3\)-Net) that takes advantage of two different but complementary frequency-aware clues, 1) frequency-aware decomposed image components and 2) local frequency statistics, to deeply mine forgery patterns via a two-stream collaborative learning framework. We adopt the Discrete Cosine Transform (DCT) as the frequency-domain transformation. Comprehensive studies show that the proposed F\(^3\)-Net significantly outperforms competing state-of-the-art methods at all compression qualities on the challenging FaceForensics++ dataset, with an especially large lead on low-quality media.
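To make the two frequency-aware clues concrete, the following is a minimal NumPy sketch of (1) decomposing an image into band-limited components via a 2D DCT and (2) gathering block-wise local frequency statistics. The band boundaries, block size, and log-magnitude statistic are illustrative assumptions, not the paper's exact design; the function names (`frequency_decompose`, `local_frequency_stats`) are hypothetical.

```python
import numpy as np

def _dct_matrix(n):
    """Orthonormal DCT-II matrix: rows index frequency, columns index position."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2d(x):
    """2D DCT-II of a square array (transform rows, then columns)."""
    C = _dct_matrix(x.shape[0])
    return C @ x @ C.T

def idct2d(X):
    """Inverse 2D DCT (the DCT matrix is orthonormal, so its inverse is its transpose)."""
    C = _dct_matrix(X.shape[0])
    return C.T @ X @ C

def frequency_decompose(img, n_bands=3):
    """Split an image into low/mid/high frequency components.

    Bands are defined by normalized distance from the DC corner of the
    DCT spectrum; since the band masks partition the spectrum, the
    components sum back to the original image.
    """
    n = img.shape[0]
    spec = dct2d(img)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    dist = (i + j) / (2 * n - 2)               # 0 at DC, 1 at highest frequency
    edges = np.linspace(0.0, 1.0, n_bands + 1)
    comps = []
    for b in range(n_bands):
        hi = edges[b + 1] + (1.0 if b == n_bands - 1 else 0.0)  # include dist == 1
        mask = (dist >= edges[b]) & (dist < hi)
        comps.append(idct2d(spec * mask))       # back to image domain per band
    return comps

def local_frequency_stats(img, block=8):
    """Block-wise log-magnitude DCT coefficients, a simple stand-in for
    local frequency statistics (block size 8 mirrors JPEG compression)."""
    n = img.shape[0]
    stats = np.zeros((n // block, n // block, block * block))
    for bi in range(n // block):
        for bj in range(n // block):
            patch = img[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            stats[bi, bj] = np.log1p(np.abs(dct2d(patch))).ravel()
    return stats

img = np.random.rand(64, 64)
comps = frequency_decompose(img)
assert np.allclose(sum(comps), img)  # the band components reconstruct the input
```

In a two-stream setup of this kind, the decomposed components would feed one CNN branch and the local statistics maps the other, with the two feature streams fused before classification.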

Keywords

Face forgery detection · Frequency · Collaborative learning

Notes

Acknowledgements

This work is supported by SenseTime Group Limited, and in part by the Key Research and Development Program of Guangdong Province, China, under grant 2019B010154003. Yuyang Qian and Guojun Yin contributed equally.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. SenseTime Research, Hong Kong, China
  2. University of Electronic Science and Technology of China, Chengdu, China
  3. College of Software, Beihang University, Beijing, China
  4. Northwestern Polytechnical University, Xi'an, China
