
Fake-checker: A fusion of texture features and deep learning for deepfakes detection

Published in: Multimedia Tools and Applications

Abstract

The evolution of sophisticated deep learning algorithms such as Generative Adversarial Networks has made it possible to create deepfake videos with convincing realism. Deepfake identification is important for countering online disinformation campaigns and mitigating the negative effects of manipulated content on social media. Existing studies rely on either handcrafted features or deep learning-based models for deepfake detection. To combine the strengths of both approaches, this paper presents a fusion of deep features with handcrafted texture features to create a powerful fused feature vector for accurate deepfake detection. We propose a Directional Magnitude Local Hexadecimal Pattern (DMLHP) to extract a 320-D texture feature vector, and extract a 2048-D deep feature vector using Inception-V3. Next, we apply Principal Component Analysis (PCA) to reduce the deep features to 320 dimensions so that both modalities are balanced after fusion. The deep and handcrafted features are then concatenated to form a 640-D fused feature vector. Finally, we use the fused features to train an XGBoost model that classifies frames as genuine or forged. We evaluated our proposed model on the FaceForensics++ and Deepfake Detection Challenge Preview (DFDC-P) datasets. Our method achieved an accuracy of 97.7% and an area under the curve (AUC) of 99.3% on FaceForensics++, and 90.8% and 93.1%, respectively, on DFDC-P. Moreover, we performed cross-set and cross-dataset evaluations to demonstrate the generalization capability of our model.
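The abstract outlines a two-stream pipeline. Below is a minimal sketch of how such a fusion could be wired together, assuming standard off-the-shelf components (Keras' InceptionV3, scikit-learn's PCA, and the xgboost package) and a hypothetical dmlhp_features() placeholder for the proposed DMLHP texture descriptor; it is not the authors' implementation.

```python
# Minimal sketch (not the paper's released code) of the fused-feature pipeline:
# 2048-D Inception-V3 deep features reduced to 320-D with PCA, concatenated with
# a 320-D texture descriptor, then classified with XGBoost.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

# Global-average-pooled InceptionV3 yields a 2048-D vector per face crop.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def deep_features(frames):
    """frames: (N, 299, 299, 3) float array of aligned face crops."""
    return backbone.predict(preprocess_input(frames.copy()), verbose=0)  # (N, 2048)

def dmlhp_features(frames):
    """Hypothetical placeholder for the proposed 320-D DMLHP texture descriptor."""
    raise NotImplementedError("Implement the DMLHP extraction described in the paper.")

def fuse(frames, pca=None):
    deep = deep_features(frames)                      # (N, 2048)
    if pca is None:
        pca = PCA(n_components=320).fit(deep)         # fit on training frames only
    deep_320 = pca.transform(deep)                    # (N, 320)
    texture_320 = dmlhp_features(frames)              # (N, 320)
    return np.hstack([deep_320, texture_320]), pca    # (N, 640) fused vectors

# Usage sketch: X_train are face frames, y_train has 0 = genuine, 1 = forged.
# fused_train, pca = fuse(X_train)
# clf = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
# clf.fit(fused_train, y_train)
# fused_test, _ = fuse(X_test, pca=pca)
# predictions = clf.predict(fused_test)
```

Fitting PCA on the deep features alone keeps both modalities at 320-D, so neither stream dominates the 640-D fused vector handed to XGBoost.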

Data availability

The datasets used in the current study are publicly available. Data sharing is not applicable to this article, as no datasets were generated during the current study.

Acknowledgements

This work was supported by a grant from the Punjab Higher Education Commission (PHEC) of Pakistan under Award No. PHEC/ARA/PIRCA/20527/21.

Author information

Corresponding author

Correspondence to Rehan Ashraf.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Huda, N.u., Javed, A., Maswadi, K. et al. Fake-checker: A fusion of texture features and deep learning for deepfakes detection. Multimed Tools Appl 83, 49013–49037 (2024). https://doi.org/10.1007/s11042-023-17586-x
