Video Forensics for Object Removal Based on Darknet3D

  • Conference paper
  • First Online:
Information and Communications Security (ICICS 2022)


2D convolutional neural networks analyze the time-domain information of tampered video insufficiently, and their pooling layers lose detail when processing frame images. To address these problems, a 3D model for detecting and localizing object removal tampering in video, built on an optimized Darknet53 network, is proposed. The Darknet53 network fully retains the detail information within each frame, but it can only process spatial information. We therefore extend its two-dimensional convolutional layers into three-dimensional ones, giving the network the ability to process time-domain information, and adjust the network structure to reduce feature redundancy, improving detection efficiency and making the model better suited to the binary classification task of video tampering detection. The resulting D3D (Darknet3D) network improves the adequacy of feature representation. Experimental results show that the temporal-domain classification accuracy of the Darknet3D-based tamper detection model is 98.9%, and the average Intersection over Union between the spatially localized regions and the labeled tampered areas is 49.7%, so the model can effectively detect and locate object removal tampering.
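To illustrate the core idea of the abstract, that a 2D convolution inflated to 3D mixes information across consecutive frames rather than processing each frame in isolation, here is a minimal NumPy sketch. It is not the paper's Darknet3D layer (the actual network, channel counts, and structure are not reproduced here); it only shows how a kernel with a temporal extent produces features that depend on several frames at once.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) clip.

    Unlike a 2D convolution applied frame by frame, the kernel spans
    several consecutive frames, so each output value mixes spatial and
    temporal information -- the property a 3D extension of a 2D network
    relies on for analyzing inter-frame inconsistencies.
    """
    T, H, W = clip.shape
    t, kh, kw = kernel.shape
    out = np.zeros((T - t + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):          # temporal position
        for y in range(out.shape[1]):      # vertical position
            for x in range(out.shape[2]):  # horizontal position
                out[z, y, x] = np.sum(clip[z:z + t, y:y + kh, x:x + kw] * kernel)
    return out

clip = np.random.rand(8, 16, 16)        # 8 frames of 16x16 pixels
kernel = np.ones((3, 3, 3)) / 27.0      # kernel spanning 3 frames
features = conv3d_valid(clip, kernel)
print(features.shape)                   # (6, 14, 14)
```

With a temporal kernel extent of 3, the 8-frame clip yields 6 temporal output positions, each aggregating three adjacent frames; a purely 2D kernel (temporal extent 1) would leave every output independent of neighboring frames.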

Supported by the National Key Research and Development Program on Cyberspace Security (2018YFB0803601) and the Advanced Discipline Construction Project of Beijing Universities (20210086Z0401).


Author information

Corresponding author

Correspondence to Yuhao Wang.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, K., Wang, Y., Yu, X. (2022). Video Forensics for Object Removal Based on Darknet3D. In: Alcaraz, C., Chen, L., Li, S., Samarati, P. (eds) Information and Communications Security. ICICS 2022. Lecture Notes in Computer Science, vol 13407. Springer, Cham.

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15776-9

  • Online ISBN: 978-3-031-15777-6

  • eBook Packages: Computer Science, Computer Science (R0)
