
Robust camera model identification using demosaicing residual features

Published in: Multimedia Tools and Applications

Abstract

In this paper, we propose a new framework for performing accurate and robust camera model identification by fully exploiting demosaicing information in a camera’s output images. Instead of fitting a camera’s demosaicing process into parametric models, our framework works by exposing and extracting a diverse set of intra-channel and inter-channel color value correlations originating from the demosaicing process. To expose these correlations, we first apply a number of diversified baseline demosaicing algorithms to re-demosaic the image under investigation, and gather a set of both linear and nonlinear demosaicing residuals. To further extract demosaicing correlations with respect to the color filter array (CFA) structure, co-occurrence matrices are calculated using a new set of geometric patterns. These patterns are specifically designed to extract different types of color value dependencies within the repeated lattice of the CFA pattern. We design a multi-class ensemble classifier that utilizes all extracted color value correlations to perform camera model identification. A series of experiments shows that our proposed framework can achieve an accuracy of 98.14% on a database with 68 camera models, and is highly robust to post-JPEG compression and contrast enhancement.
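As a concrete illustration of the re-demosaic-and-subtract idea described above, the sketch below computes one simple demosaicing residual. This is not the paper's exact pipeline: the RGGB layout, the 2-D bilinear kernels, and all function names are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_resample(img):
    """Re-sample an RGB image onto an RGGB Bayer pattern,
    zeroing the color values the CFA would not have captured."""
    mask = np.zeros_like(img, dtype=float)
    mask[0::2, 0::2, 0] = 1  # red samples
    mask[0::2, 1::2, 1] = 1  # green samples (red rows)
    mask[1::2, 0::2, 1] = 1  # green samples (blue rows)
    mask[1::2, 1::2, 2] = 1  # blue samples
    return img * mask

def demosaic_bilinear(mosaic):
    """Ordinary 2-D bilinear interpolation of each zero-filled layer."""
    kernels = {
        0: np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0,  # red
        1: np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0,  # green
        2: np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0,  # blue
    }
    out = np.empty_like(mosaic, dtype=float)
    for c, k in kernels.items():
        out[..., c] = convolve(mosaic[..., c].astype(float), k, mode='mirror')
    return out

def demosaicing_residual(img):
    """Residual between an image and its re-demosaiced version."""
    return img.astype(float) - demosaic_bilinear(bayer_resample(img))
```

A camera whose in-camera demosaicing matches the baseline algorithm leaves a small residual; mismatched cameras leave characteristic traces, which is what the co-occurrence features then summarize.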


Notes

  1. https://gitlab.com/users/MISLgit/projects


Author information


Corresponding author

Correspondence to Matthew C. Stamm.

Additional information


This material is based upon work supported by the National Science Foundation under Grant No. 1553610. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Appendix

In this appendix, we present the convolution filters designed to realize the horizontal and vertical bilinear demosaicing algorithms in our baseline algorithm set.

In Fig. 12, we show the red, green and blue layers after re-sampling according to the Bayer pattern. The blank blocks correspond to missing color components that must be interpolated using the sampled colors. Due to the symmetry of the green layer, all the missing green components can be interpolated using the same filter. For the horizontal bilinear demosaicing algorithm, the filter used to interpolate the green components is [0.5, 0, 0.5]. For the vertical bilinear demosaicing algorithm, the filter is the transpose of the horizontal one, i.e., [0.5, 0, 0.5]^T. For convenience, we use "horizontal filter" to refer to the interpolation filter of the horizontal bilinear demosaicing algorithm, and similarly "vertical filter" for that of the vertical bilinear demosaicing algorithm.
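The green-channel step above can be sketched as follows (a minimal illustration assuming an RGGB Bayer layout and a zero-filled green layer; the function name and mask construction are ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import convolve

# Horizontal and vertical filters for the missing green components.
h_green = np.array([[0.5, 0.0, 0.5]])   # average of left/right neighbors
v_green = h_green.T                     # average of top/bottom neighbors

def interp_green(g_layer, g_mask, kernel):
    """Interpolate the missing greens of a zero-filled green layer;
    sampled positions (g_mask == True) are kept untouched."""
    est = convolve(g_layer.astype(float), kernel, mode='mirror')
    return np.where(g_mask, g_layer, est)
```

Because the green samples form a quincunx lattice, every missing position has two green neighbors both horizontally and vertically, so the same 1-D filter serves everywhere.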

Fig. 12: Red, green and blue layers after re-sampling according to the Bayer pattern

For the red and blue layers, the three missing color components in each repetition of the Bayer pattern require different filters. For color components in position P1 in both the red and blue layers, the horizontal filter is [0.5, 0, 0.5] and the vertical filter covers a 5 × 3 window:

$$ \frac{1}{6}\left[ \begin{array}{lll} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{array} \right] $$
(10)

For missing red and blue components in position P2, the horizontal filter is:

$$ \frac{1}{2}\left[ \begin{array}{lll} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right] $$
(11)

and the vertical filter is the transpose of the horizontal filter. For color components in position P3, the vertical filter is [0.5, 0, 0.5]^T and the horizontal filter is:

$$ \frac{1}{6}\left[ \begin{array}{lllll} 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 \end{array} \right] $$
(12)

which is the transpose of the vertical filter for color components in position P1.

We can see that every time a color component is interpolated, the horizontal demosaicing algorithm uses more re-sampled colors in the horizontal direction, while the vertical demosaicing algorithm favors colors in the vertical direction. By designing the interpolation filters in this way, we expect the vertical and horizontal bilinear demosaicing algorithms to expose directional information about a camera’s demosaicing process.
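Putting the three position-dependent filters together, the sketch below applies the horizontal-filter set of Eqs. (10)–(12) to a zero-filled red layer. It assumes red is sampled at even rows and even columns; the function name and the P1–P3 position masks are our illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_red_horizontal(r_layer):
    """Horizontal bilinear interpolation of a zero-filled red layer
    (red assumed sampled at even rows and even columns)."""
    f_p1 = np.array([[0.5, 0.0, 0.5]])              # P1: same row as red samples
    f_p2 = np.array([[1.0, 0, 0],
                     [0,   0, 0],
                     [0,   0, 1.0]]) / 2.0           # P2: diagonal position, Eq. (11)
    f_p3 = np.array([[1.0, 0, 1, 0, 1],
                     [0,   0, 0, 0, 0],
                     [1,   0, 1, 0, 1]]) / 6.0       # P3: same column as red, Eq. (12)

    h, w = r_layer.shape
    rows, cols = np.ogrid[:h, :w]
    x = r_layer.astype(float)
    out = x.copy()  # sampled reds are kept untouched
    for f, mask in [
        (f_p1, (rows % 2 == 0) & (cols % 2 == 1)),   # P1 positions
        (f_p2, (rows % 2 == 1) & (cols % 2 == 1)),   # P2 positions
        (f_p3, (rows % 2 == 1) & (cols % 2 == 0)),   # P3 positions
    ]:
        est = convolve(x, f, mode='mirror')
        out[mask] = est[mask]
    return out
```

Note that every tap of each filter lands on a sampled red position, so each estimate is a weighted average of genuine samples; the vertical algorithm would use the transposed filters with P1 and P3 swapped.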


Cite this article

Chen, C., Stamm, M.C. Robust camera model identification using demosaicing residual features. Multimed Tools Appl 80, 11365–11393 (2021). https://doi.org/10.1007/s11042-020-09011-4
