Deep Transfer Learning for Art Classification Problems

  • Matthia Sabatelli
  • Mike Kestemont
  • Walter Daelemans
  • Pierre Geurts
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11130)

Abstract

In this paper we investigate whether Deep Convolutional Neural Networks (DCNNs), which have obtained state-of-the-art results on the ImageNet challenge, are able to perform equally well on three different art classification problems. In particular, we assess whether it is beneficial to fine-tune the networks instead of using them merely as off-the-shelf feature extractors for a separately trained softmax classifier. Our experiments show that the first approach yields significantly better results and allows the DCNNs to develop new selective attention mechanisms over the images, which provide powerful insights into which pixel regions allow the networks to successfully tackle the proposed classification challenges. Furthermore, we show that DCNNs fine-tuned on a large artistic collection outperform the same architectures pre-trained on the ImageNet dataset only, when it comes to the classification of heritage objects from a different dataset.
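The two transfer-learning strategies compared in the abstract can be sketched in a few lines of Keras. This is a minimal illustration, not the authors' actual pipeline: the class count, input size, and choice of ResNet50 are assumptions, and `weights=None` is used only so the sketch runs offline (the paper's setting corresponds to `weights="imagenet"`).

```python
# Sketch of the two strategies: (a) off-the-shelf feature extraction, where the
# pre-trained convolutional base is frozen and only a softmax classifier is
# trained on top, vs. (b) fine-tuning, where the base's weights are updated too.
from tensorflow import keras

NUM_CLASSES = 10  # hypothetical number of art categories


def build_model(fine_tune: bool) -> keras.Model:
    # weights=None keeps the sketch offline; the paper's setup would
    # initialize from ImageNet with weights="imagenet".
    base = keras.applications.ResNet50(
        weights=None,
        include_top=False,
        pooling="avg",
        input_shape=(224, 224, 3),
    )
    # Frozen base -> off-the-shelf feature extractor; trainable base -> fine-tuning.
    base.trainable = fine_tune
    outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return keras.Model(base.input, outputs)


extractor = build_model(fine_tune=False)   # only the softmax head is trained
fine_tuned = build_model(fine_tune=True)   # the whole network is trained end-to-end
```

In the frozen variant only the final `Dense` layer's kernel and bias are trainable, which is what makes the comparison in the paper meaningful: any performance gap between the two models is attributable to updating the convolutional features themselves.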

Keywords

Deep Convolutional Neural Networks · Art classification · Transfer learning · Visual attention

Acknowledgements

The authors wish to acknowledge Jeroen De Meester (Museums and Heritage Antwerp) for sharing his expertise on the Antwerp dataset. The research for this project was financially supported by BELSPO, Federal Public Planning Service Science Policy, Belgium, in the context of the BRAIN-be project: “INSIGHT. Intelligent Neural Systems as InteGrated Heritage Tools”.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Matthia Sabatelli (1)
  • Mike Kestemont (2)
  • Walter Daelemans (3)
  • Pierre Geurts (1)
  1. Montefiore Institute, Department of Electrical Engineering and Computer Science, Université de Liège, Liège, Belgium
  2. Antwerp Center for Digital Humanities and Literary Criticism (ACDC), Universiteit Antwerpen, Antwerp, Belgium
  3. CLiPS, Computational Linguistics Group, Universiteit Antwerpen, Antwerp, Belgium