A New Database and Protocol for Image Reuse Detection

  • Furkan Isikdogan
  • İlhan Adıyaman
  • Alkım Almila Akdağ Salah
  • Albert Ali Salah
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9913)


The use of visual elements of an existing image in the creation of new ones is a commonly observed phenomenon in digital artworks. This practice, referred to as image reuse, is difficult to detect even for the human eye, and more so with computational methods. In this paper, we study automatic image reuse detection in digital artworks as an image retrieval problem. First, we introduce a new digital art database (BODAIR) that consists of a set of digital artworks that reuse stock images. Then, we evaluate a set of existing image descriptors for image reuse detection, providing a baseline for the detection of image reuse in digital artworks. Finally, we propose an image retrieval method tailored for reuse detection, combining saliency maps with the image descriptors.
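The abstract mentions combining saliency maps with image descriptors for retrieval, but does not spell out the combination. A minimal sketch of one plausible scheme is saliency-weighted descriptor pooling: each local descriptor contributes to a global image vector in proportion to the saliency at its keypoint, and images are ranked by cosine similarity. This is an illustrative assumption, not the paper's exact method; the descriptors, saliency values, and weighting here are toy placeholders.

```python
import numpy as np

def saliency_weighted_descriptor(descriptors, saliency):
    """Pool local descriptors into one global vector, weighting each
    descriptor by the saliency at its keypoint (hypothetical scheme;
    the paper's actual combination may differ)."""
    w = saliency / (saliency.sum() + 1e-8)          # normalize weights
    pooled = (descriptors * w[:, None]).sum(axis=0) # weighted sum over keypoints
    norm = np.linalg.norm(pooled)
    return pooled / norm if norm > 0 else pooled    # L2-normalize for cosine

def rank_by_similarity(query, database):
    """Rank database images by cosine similarity to the query vector
    (vectors are assumed L2-normalized, so dot product = cosine)."""
    sims = database @ query
    return np.argsort(-sims)

# Toy example: 3 database images, each with 5 local 8-D descriptors.
rng = np.random.default_rng(0)
db = np.stack([
    saliency_weighted_descriptor(rng.normal(size=(5, 8)), rng.random(5))
    for _ in range(3)
])
q = db[1] + 0.01 * rng.normal(size=8)   # query: a near-duplicate of image 1
q /= np.linalg.norm(q)
ranking = rank_by_similarity(q, db)
```

Under this sketch, a reused (near-duplicate) image should rank first, since its saliency-weighted descriptor stays close to the source image's vector even when non-salient regions are altered.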


Keywords: Image database · Digital art · Image retrieval · Feature extraction · DeviantArt · Image reuse · BODAIR



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Furkan Isikdogan (1, 2)
  • İlhan Adıyaman (2)
  • Alkım Almila Akdağ Salah (3)
  • Albert Ali Salah (2)

  1. Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, USA
  2. Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey
  3. College of Communication, İstanbul Şehir University, Istanbul, Turkey
