Classification of basic artistic media based on a deep convolutional approach

Abstract

Artistic media play an important role in recognizing and classifying artworks in many artwork classification studies and public artwork databases. We employ a deep convolutional neural network (CNN) to recognize artistic media in artworks and to classify them into predetermined categories. For this purpose, we define the basic artistic media as oilpaint brush, pastel, pencil and watercolor, and we build an artwork image dataset by collecting artwork images from various websites. To build our classifier, we implement several recent deep CNN architectures and compare their performance. Among them, we select DenseNet, which shows the best performance in recognizing artistic media. Through a human baseline experiment, we show that the performance of our classifier is comparable with that of trained humans. Furthermore, our classifier exhibits recognition and classification patterns similar to those of humans in terms of well-classified media, ill-classified media, confusing pairs and confusing cases. We also collect synthesized oilpaint artwork images from fourteen important oilpaint rendering papers and apply them to our classifier. Our classifier achieves meaningful performance on these images, which suggests an evaluation scheme for the artistic media simulation techniques of the non-photorealistic rendering (NPR) community.
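The abstract does not disclose implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: fine-tuning an ImageNet-pretrained DenseNet backbone to classify the four media categories (oilpaint, pastel, pencil, watercolor). The DenseNet variant (densenet121), preprocessing, hyperparameters and the artwork_dataset folder layout are assumptions for illustration, not the authors' reported configuration.

```python
# Illustrative sketch only: fine-tuning a DenseNet backbone for the four
# artistic-media classes named in the abstract. All settings below are
# assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

classes = ["oilpaint", "pastel", "pencil", "watercolor"]

# Standard ImageNet preprocessing; the paper's actual augmentation is not stated.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: artwork_dataset/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("artwork_dataset/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained DenseNet-121 and replace the classifier head
# with a 4-way linear layer.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, len(classes))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The confusion analysis mentioned in the abstract (well-classified media, confusing pairs and cases) can then be read off a standard confusion matrix computed from the fine-tuned model's predictions on a held-out test split.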



Funding

This research was supported by the National Research Foundation of Korea (NRF) through grants NRF-2018R1D1A1A02050292 and NRF-2017R1D1A12B03034137.

Author information

Correspondence to Kyungha Min.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest. The research grants listed above were administered through the authors' university.

Human and animal rights

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yang, H., Min, K. Classification of basic artistic media based on a deep convolutional approach. Vis Comput 36, 559–578 (2020). https://doi.org/10.1007/s00371-019-01641-6


Keywords

  • Classification
  • Convolutional neural network
  • Artistic media
  • NPR