
Detection and localization of inter-frame video forgeries based on inconsistency in correlation distribution between Haralick coded frames


Abstract

With the rapidly growing rate of cyber forgery today, the integrity and authenticity of digital multimedia data are highly at stake. In this work, we deal with the forensic investigation of cyber forgery in digital videos. The most common types of inter-frame forgery in digital videos are frame insertion, deletion and duplication attacks. A significant amount of research has been carried out in this direction in the past few years. In this paper, we propose a two-step forensic technique to detect frame insertion, deletion and duplication forgeries. In the first step, we detect outlier frames based on the correlation between Haralick coded frames; in the second step, we perform a finer degree of detection to eliminate false positives and thereby optimize the forgery detection accuracy. Our experimental results show that the proposed method outperforms the state of the art, with an average F1 score of 0.97 for inter-frame video forgery detection.
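
For intuition only, the two steps summarized above can be sketched in a few lines of NumPy. This is a highly simplified illustration, not the authors' algorithm: the helper haralick_descriptor, the use of Pearson correlation between consecutive per-frame descriptors, and the simple mean/standard-deviation threshold are all assumptions made here for the sketch.

```python
import numpy as np

def interframe_correlations(frames, haralick_descriptor):
    """Pearson correlation between Haralick descriptors of consecutive frames.

    `haralick_descriptor(frame)` is assumed (hypothetically) to return a 1-D
    feature vector describing one Haralick coded frame.
    """
    feats = np.array([haralick_descriptor(f) for f in frames])
    return np.array([np.corrcoef(feats[t], feats[t + 1])[0, 1]
                     for t in range(len(feats) - 1)])

def candidate_outliers(corr, k=3.0):
    """Step 1 (sketch): flag transitions whose correlation drops far below the mean.

    The returned indices are only candidates; the paper's second, finer step is
    what removes false positives before a forgery is declared.
    """
    mu, sigma = corr.mean(), corr.std()
    return np.where(corr < mu - k * sigma)[0]
```

An abnormally low correlation between two adjacent frames marks that transition as a possible insertion, deletion or duplication point.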


Notes

  1. http://www.ffmpeg.org

References

  1. Aghamaleki JA, Behrad A (2016) Inter-frame video forgery detection and localization using intrinsic effects of double compression on quantization errors of video coding. Signal Process Image Commun 47:289–302

  2. Aghamaleki JA, Behrad A (2017) Malicious inter-frame video tampering detection in MPEG videos using time and spatial domain analysis of quantization effects. Multimed Tools Appl 76(20):20691–20717

  3. Amidan BG, Ferryman TA, Cooley SK (2005) Data outlier detection using the Chebyshev theorem. In: IEEE aerospace conference, pp 3814–3819

  4. Binh VP, Yang SH (2013) A better bit-allocation algorithm for H.264/SVC. In: Proceedings of the 4th international symposium on information and communication technology, pp 18–26

  5. Chao J, Jiang X, Sun T (2013) A novel video inter-frame forgery model detection scheme based on optical flow consistency. In: Proceedings of the 11th international conference on digital forensics and watermarking, IWDW'12. Springer, Berlin, pp 267–281

  6. Chen W, Shi YQ (2008) Detection of double MPEG compression based on first digit statistics. In: International workshop on digital watermarking. Springer, Berlin, pp 16–30

  7. de Almeida CW, de Souza RM, Candeias ALB (2010) Texture classification based on co-occurrence matrix and self-organizing map. In: IEEE international conference on systems, man and cybernetics (SMC), pp 2487–2491

  8. Fu X, Wei W (2008) Centralized binary patterns embedded with image Euclidean distance for facial expression recognition. In: Fourth international conference on natural computation, vol 4, pp 115–119

  9. Hall G (2015) Pearson's correlation coefficient. http://www.hep.ph.ic.ac.uk/~hallg/UG_2015/Pearsons.pdf, pp 1–4

  10. Hall-Beyer M (2017) GLCM texture: a tutorial v 3.0, March 2017. https://prism.ucalgary.ca/bitstream/handle/1880/51900/texture%20tutorial%20v%203_0%20180206.pdf?sequence=11&isAllowed=y

  11. Haralick RM, Shanmugam K, et al. (1973) Textural features for image classification. IEEE Trans Syst Man Cybern SMC-3(6):610–621

  12. Kekre H, Thepade SD, Sarode TK, et al. (2010) Image retrieval using texture features extracted from GLCM, LBG and KPE. Int J Comput Theory Eng 2(5):695

  13. Kobayashi M, Okabe T, Sato Y (2010) Detecting forgery from static-scene video based on inconsistency in noise level functions. IEEE Trans Inf Forensics Secur 5(4):883–892

  14. Li Z, Zhang Z, Guo S, et al. (2016) Video inter-frame forgery identification based on the consistency of quotient of MSSIM. Secur Commun Netw 9(17):4548–4556

  15. Liao SX, Pawlak M (1998) A study of Zernike moment computing. In: Asian conference on computer vision. Springer, Berlin, pp 394–401

  16. Lin P-Y (2009) Basic image compression algorithm and introduction to JPEG standard. National Taiwan University, Taipei

  17. Liu H, Li S, Bian S (2014) Detecting frame deletion in H.264 video. Springer International Publishing, Cham, pp 262–270

  18. Liu Y, Huang T (2017) Exposing video inter-frame forgery by Zernike opponent chromaticity moments and coarseness analysis. Multimed Syst 23(2):223–238

  19. Luo W, Wu M, Huang J (2008) MPEG recompression detection based on block artifacts. In: Security, forensics, steganography, and watermarking of multimedia contents X. International Society for Optics and Photonics, vol 6819, p 68190X

  20. Ojala T, Pietikäinen M, Harwood D (1996) A comparative study of texture measures with classification based on featured distributions. Pattern Recogn 29(1):51–59

  21. Pulipaka A, Seeling P, Reisslein M, et al. (2013) Traffic and statistical multiplexing characterization of 3-D video representation formats. IEEE Trans Broadcast 59(2):382–389

  22. Qadir G, Yahaya S, Ho A (2012) A Surrey university library for forensic analysis (SULFA). In: Proceedings of the IET IPR

  23. Richardson IE (2004) H.264 and MPEG-4 video compression: video coding for next-generation multimedia. Wiley, New York

  24. Sahoo M (2011) Biomedical image fusion and segmentation using GLCM. In: International journal of computer application, special issue on 2nd national conference on computing, communication and sensor network (CCSN), pp 34–39

  25. Shanableh T (2013) Detection of frame deletion for digital video forensics. Digit Investig 10(4):350–360

  26. Singh C, Upneja R (2012) Fast and accurate method for high order Zernike moments computation. Appl Math Comput 218(15):7759–7773

  27. Sitara K, Mehtre B (2016) Digital video tampering detection: an overview of passive techniques. Digit Investig 18(Supplement C):8–22

  28. Sonka M, Hlavac V, Boyle R (2014) Image processing, analysis, and machine vision. Cengage Learning, Boston

  29. Su Y, Zhang J, Liu J (2009) Exposing digital video forgery by detecting motion-compensated edge artifact. In: International conference on computational intelligence and software engineering, pp 1–4

  30. Su Y, Nie W, Zhang C (2011) A frame tampering detection algorithm for MPEG videos. In: 6th IEEE joint international information technology and artificial intelligence conference, vol 2, pp 461–464

  31. Tuceryan M (1994) Moment-based texture segmentation. Pattern Recogn Lett 15(7):659–668

  32. Wang Q, Li Z, Zhang Z, et al. (2014) Video inter-frame forgery identification based on consistency of correlation coefficients of gray values. J Comput Commun 2(4):51

  33. Wu Y, Jiang X, Sun T, et al. (2014) Exposing video inter-frame forgery based on velocity field consistency. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2674–2678

  34. Yu L, Wang H, Han Q, et al. (2016) Exposing frame deletion by detecting abrupt changes in video streams. Neurocomputing 205:84–91

  35. Zhang Y (1999) Optimisation of building detection in satellite images by combining multispectral classification and texture filtering. ISPRS J Photogramm Remote Sens 54(1):50–60

  36. Zhang Z, Hou J, Ma Q, et al. (2015) Efficient video frame insertion and deletion detection based on inconsistency of correlations between local binary pattern coded frames. Secur Commun Netw 8(2):311–320


Acknowledgments

This work is funded by the Board of Research in Nuclear Sciences (BRNS), Department of Atomic Energy (DAE), Govt. of India, under Grant No. 34/20/22/2016-BRNS/34363, dated 16/11/2016.

Author information


Corresponding author

Correspondence to Jamimamul Bakas.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this section, we present the fourteen Haralick features proposed in [11], which have been adopted in our work. They are defined based on the following notations:

P(i,j) = (i,j)th entry of the normalized Gray Level Co-occurrence Matrix (GLCM).

Px(i) = ith entry of the marginal probability matrix obtained by summing the rows of P(i,j), i.e., \(P_{x}(i)={\sum}_{j = 1}^{N_{g}} P(i,j)\), where Ng is the number of distinct gray levels in the quantized image.

Py(j) = jth entry of the marginal probability matrix obtained by summing the columns of P(i,j), i.e., \(P_{y}(j)={\sum}_{i = 1}^{N_{g}} P(i,j)\).

\(P_{x+y}(k) = {\sum}_{\substack{i,j = 1 \\ i+j = k}}^{N_{g}} P(i,j)\), where k ∈ [2,⋯ , 2Ng].

\(P_{x-y}(k) = {\sum}_{\substack{i,j = 1 \\ |i-j| = k}}^{N_{g}} P(i,j)\), where k ∈ [0,⋯ ,Ng − 1].
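
For concreteness, the following is a minimal NumPy sketch of these notations. The 8-bit input frame, the quantization to a fixed number of gray levels, the single horizontal offset (dx, dy) = (1, 0) and the asymmetric counting are illustrative assumptions, not choices prescribed in this appendix.

```python
import numpy as np

def normalized_glcm(frame, levels=16, dx=1, dy=0):
    """Normalized GLCM P(i, j) of a 2-D uint8 frame for one offset (dx, dy)."""
    q = (frame.astype(np.uint16) * levels // 256).astype(np.intp)  # quantize to Ng = levels gray tones
    rows, cols = q.shape
    # Pixel pairs separated by the chosen offset (reference pixel -> neighbour).
    src = q[max(0, -dy):rows - max(0, dy), max(0, -dx):cols - max(0, dx)]
    dst = q[max(0, dy):rows - max(0, -dy), max(0, dx):cols - max(0, -dx)]
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1.0)  # accumulate co-occurrence counts
    return glcm / glcm.sum()                          # normalize so that the entries sum to 1

def marginals(P):
    """Marginal, sum and difference distributions used by the Haralick features."""
    Ng = P.shape[0]
    Px = P.sum(axis=1)   # P_x(i): row sums of P(i, j)
    Py = P.sum(axis=0)   # P_y(j): column sums of P(i, j)
    i, j = np.indices((Ng, Ng))
    # Zero-based indices here, so the sum index runs over 0..2Ng-2 (the paper uses 2..2Ng).
    P_sum = np.bincount((i + j).ravel(), weights=P.ravel(), minlength=2 * Ng - 1)
    P_diff = np.bincount(np.abs(i - j).ravel(), weights=P.ravel(), minlength=Ng)
    return Px, Py, P_sum, P_diff
```

A symmetric GLCM (counting each pixel pair in both directions) is also common; the asymmetric form is used here only to keep the sketch short.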

Mathematical equations to compute the fourteen Haralick features, based on the above notations, are as follows:

  1. Angular Second Moment (Energy): This feature measures the uniformity of an image, and is computed as:

    $$\text{Angular Second Moment} = \sum\limits_{i = 1}^{N_{g}} \sum\limits_{j = 1}^{N_{g}} P^{2}(i,j) $$
  2. Contrast: This feature measures the frequency of local changes in an image, by computing the intensity contrast between a pixel and its neighborhood. It represents the neighborhood-based gray tone linear dependencies in an image, and is computed as:

    $$\text{Contrast} = \sum\limits_{n = 0}^{N_{g}-1} n^{2}\left( \sum\limits_{\substack{i,j = 1 \\ |i-j| = n}}^{N_{g}} P(i,j)\right) $$

    For a constant image, contrast is 0.

  3. Sum of Squares (Variance): This is a measure of heterogeneity in an image, and of how strongly it relates to first-order statistics such as the standard deviation. Variance is proportional to the difference between gray level values and their mean, and is computed as:

    $$\text{Variance} = \sum\limits_{i = 1}^{N_{g}}\sum\limits_{j = 1}^{N_{g}}(i-\mu)^{2}P(i,j) $$

    where μ is the mean of Px(i).

  4. Correlation: This feature measures how correlated a pixel is to its neighborhood. It is a measure of the gray tone linear dependencies in the image.

    $$\text{Correlation} = \frac{{\sum}_{i = 1}^{N_{g}}{\sum}_{j = 1}^{N_{g}} (ij)\,P(i,j) - \mu_{x}\mu_{y}}{\sigma_{x}\sigma_{y}} $$

    where μx, μy, σx and σy are the means and standard deviations of Px and Py, respectively.

  5. Sum Average:

    $$\sum\limits_{i = 2}^{2N_{g}}i ~P_{x+y}(i) $$

    where x and y denote the row and column indices of a GLCM entry, and \(P_{x+y}(i)\) is the probability of the GLCM coordinates summing to i.

  6. Sum Entropy:

    $$-\sum\limits_{i = 2}^{2N_{g}}P_{x+y}(i)\log{P_{x+y}(i)} $$
  7. Sum Variance:

    $$\sum\limits_{i = 2}^{2N_{g}}(i-SE)^{2}\, P_{x+y}(i) $$

    where SE = Sum Entropy.

  8. Inverse Difference Moment (Homogeneity): This feature measures the similarity between pixels and their neighborhoods. Local textures with minimal changes lead to high homogeneity. It is computed as:

    $$\text{Inverse Difference Moment} = \sum\limits_{i = 1}^{N_{g}} \sum\limits_{j = 1}^{N_{g}}\frac{P(i,j)}{1+(i-j)^{2}} $$
  9. Entropy: This feature measures the textural randomness within an image, and is computed as:

    $$\text{Entropy} = -\sum\limits_{i = 1}^{N_{g}}\sum\limits_{j = 1}^{N_{g}}P(i,j)\log_{2}(P(i,j)) $$
  10. Difference Variance:

    $$\sum\limits_{i = 0}^{N_{g}-1} i^{2} P_{x-y}(i) $$
  11. Difference Entropy:

    $$-\sum\limits_{i = 0}^{N_{g}-1} P_{x-y}(i)\log({P_{x-y}(i)}) $$
  12. Information Measure of Correlation 1:

    $$\frac{HXY-HXY1}{\max\{HX,HY\}} $$

    where,

    $$\begin{array}{rcl} HXY&=&-\sum\limits_{i = 1}^{N_{g}}\sum\limits_{j = 1}^{N_{g}} P(i,j)\log(P(i,j))\\ HXY1&=&-\sum\limits_{i = 1}^{N_{g}}\sum\limits_{j = 1}^{N_{g}} P(i,j)\log(P_{x}(i)P_{y}(j))\\ HX&=&\text{Entropy of}~P_{x}(i), \quad HY = \text{Entropy of}~P_{y}(j) \end{array} $$
  13. Information Measure of Correlation 2:

    $$(1-\exp {[-2(HXY2-HXY)]})^{\frac{1}{2}}$$

    where

    $$HXY2=-\sum\limits_{i = 1}^{N_{g}}\sum\limits_{j = 1}^{N_{g}} P_{x}(i) P_{y}(j) \log ({P_{x}(i) P_{y}(j)}) $$
  14. Max Correlation Coefficient:

    $$\sqrt{\text{second largest eigenvalue of } Q} $$

    where,

    $$Q(i,j)=\sum\limits_{k = 1}^{N_{g}} \frac{P(i,k) P(j,k)} {P_{x}(i) P_{y}(k)} $$
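
As an illustration of how a few of these measures follow directly from the formulas above, here is a short NumPy sketch of the Angular Second Moment, Contrast, Correlation, Inverse Difference Moment and Entropy; it assumes the hypothetical normalized_glcm helper from the earlier snippet and uses zero-based gray level indices, which leaves these particular features unchanged.

```python
import numpy as np

def haralick_subset(P, eps=1e-12):
    """A few of the fourteen features, computed from a normalized GLCM P."""
    Ng = P.shape[0]
    i, j = np.indices((Ng, Ng))
    Px, Py = P.sum(axis=1), P.sum(axis=0)
    levels = np.arange(Ng)

    asm = np.sum(P ** 2)                                  # Angular Second Moment (Energy)
    contrast = np.sum((i - j) ** 2 * P)                   # Contrast, equivalent to the n^2 form above
    mu_x, mu_y = np.sum(levels * Px), np.sum(levels * Py)
    sd_x = np.sqrt(np.sum((levels - mu_x) ** 2 * Px))
    sd_y = np.sqrt(np.sum((levels - mu_y) ** 2 * Py))
    correlation = (np.sum(i * j * P) - mu_x * mu_y) / (sd_x * sd_y + eps)
    idm = np.sum(P / (1.0 + (i - j) ** 2))                # Inverse Difference Moment (Homogeneity)
    entropy = -np.sum(P * np.log2(P + eps))               # Entropy
    return np.array([asm, contrast, correlation, idm, entropy])

# Hypothetical usage for one video frame (frame: 2-D uint8 array):
# descriptor = haralick_subset(normalized_glcm(frame))
```

A full per-frame descriptor would stack all fourteen measures, typically averaged over several GLCM offsets.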


Cite this article

Bakas, J., Naskar, R. & Dixit, R. Detection and localization of inter-frame video forgeries based on inconsistency in correlation distribution between Haralick coded frames. Multimed Tools Appl 78, 4905–4935 (2019). https://doi.org/10.1007/s11042-018-6570-8
