Content Recapture Detection Based on Convolutional Neural Networks

  • Hak-Yeol Choi
  • Han-Ul Jang
  • Jeongho Son
  • Dongkyu Kim
  • Heung-Kyu Lee
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 424)


Detecting recaptured images has long been considered an important forensic issue. Previous techniques relied on hand-crafted features designed to represent the statistical characteristics of recaptured images. Unlike these existing methods, the proposed method solves the recapture detection problem with a deep learning technique, which has recently shown high performance across a variety of image processing applications. Specifically, we propose a recaptured image classification scheme based on a convolutional neural network (CNN). To the best of our knowledge, this is the first work to apply CNNs to recaptured image detection. For reliable performance evaluation, we used a high-quality database for training and testing. The experimental results show high performance compared to the state-of-the-art methods.
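The abstract does not specify the network architecture, so as an illustration of the CNN building blocks it refers to, the following is a minimal sketch of a single convolution, ReLU, and max-pooling stage in NumPy. The patch size, kernel values, and function names here are hypothetical choices for demonstration, not details taken from the paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges not divisible by `size`."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 8x8 "image patch" and a 3x3 Laplacian-style high-pass kernel
# (hypothetical values, chosen only to exercise the pipeline).
patch = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)

feat = max_pool(relu(conv2d(patch, kernel)))
print(feat.shape)  # (3, 3): 8x8 -> 6x6 after valid conv -> 3x3 after 2x2 pool
```

In a full classifier, several such stages would be stacked and followed by fully connected layers and a two-way softmax separating original from recaptured patches; training would be done with a framework such as Caffe, which the authors cite.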


Keywords: Convolutional neural networks · Multimedia forensics · Image recapture detection · Deep learning



This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. R0126-16-1024, Managerial Technology Development and Digital Contents Security of 3D Printing based on Micro Licensing Technology) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1A2B2009595).



Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  • Hak-Yeol Choi¹
  • Han-Ul Jang¹
  • Jeongho Son¹
  • Dongkyu Kim¹
  • Heung-Kyu Lee¹ (corresponding author)
  1. School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
