Abstract
Indexing artwork is not only tedious; given the volume of art available online, it is impossible to complete manually. Automatic classification of art styles is also challenging, due to the relative scarcity of labeled data and the complexity of the subject matter. That complexity means common data augmentation techniques may fail to generate useful data and may in fact degrade performance in practice. In this paper, we use Generative Adversarial Networks (GANs) for data augmentation so as to improve the accuracy of an art style classifier, showing that we can improve the performance of EfficientNet B0, a state-of-the-art classifier. To achieve this result, we introduce Class-by-Class Performance Analysis; we also present a modified version of the SAGAN training configuration that allows better control against mode collapse and vanishing gradients in the context of artwork.
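As a rough illustration of the kind of class-by-class analysis the abstract alludes to (the exact procedure is defined in the paper body, not here; the function names, decision rule, and toy labels below are assumptions for illustration only), one can compare per-class accuracy of a classifier trained with and without GAN-generated samples and flag the classes that actually benefit from augmentation:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Accuracy computed separately for each class label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return {c: correct[c] / total[c] for c in total}

def classes_helped_by_augmentation(y_true, pred_baseline, pred_augmented):
    """Classes whose per-class accuracy improved after adding synthetic samples."""
    base = per_class_accuracy(y_true, pred_baseline)
    aug = per_class_accuracy(y_true, pred_augmented)
    return sorted(c for c in base if aug[c] > base[c])

# Toy example with three art styles (hypothetical predictions)
y_true = ["cubism", "cubism", "baroque", "baroque", "ukiyo-e", "ukiyo-e"]
pred_base = ["cubism", "baroque", "baroque", "cubism", "ukiyo-e", "baroque"]
pred_aug = ["cubism", "cubism", "baroque", "cubism", "ukiyo-e", "ukiyo-e"]

print(classes_helped_by_augmentation(y_true, pred_base, pred_aug))
# → ['cubism', 'ukiyo-e']
```

In this sketch, only the classes whose accuracy improves under augmentation would keep their synthetic samples; whether the paper applies exactly this rule is not stated in the abstract.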
Notes
1. This feeling can be found in articles on the study of art: https://www.artsy.net/article/alina-cohen-art-movements-matter.
References
Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN. arXiv (2017)
Arora, R.S., Elgammal, A.: Towards automated classification of fine-art painting style: a comparative study. In: Proceedings - International Conference on Pattern Recognition, pp. 3541–3544 (2012)
Bar, Y., Levy, N., Wolf, L.: Classification of artistic styles using binarized features derived from a deep neural network. In: Agapito, L., Bronstein, M.M., Rother, C. (eds.) ECCV 2014. LNCS, vol. 8925, pp. 71–84. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16178-5_5
Bianco, S., Mazzini, D., Napoletano, P., Schettini, R.: Multitask painting categorization by deep multibranch neural network. Expert Syst. Appl. 135, 90–101 (2019). https://doi.org/10.1016/j.eswa.2019.05.036
Brock, A., Donahue, J., Simonyan, K.: Large scale GAN training for high fidelity natural image synthesis, pp. 1–35 (2018). arXiv
Cetinic, E., Lipic, T., Grgic, S.: Fine-tuning convolutional neural networks for fine art classification. Expert Syst. Appl. 114, 107–118 (2018). https://doi.org/10.1016/j.eswa.2018.07.026
Chen, L., Yang, J.: Recognizing the style of visual arts via adaptive cross-layer correlation. In: MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia, pp. 2459–2467 (2019). https://doi.org/10.1145/3343031.3350977
Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In: Neural Information Processing Systems, pp. 2180–2188 (2016)
Chu, W.T., Wu, Y.L.: Image style classification based on learnt deep correlation features. IEEE Trans. Multimedia 20, 2491–2502 (2018). https://doi.org/10.1109/TMM.2018.2801718
Condorovici, R.G., Florea, C., Vertan, C.: Automatically classifying paintings with perceptual inspired descriptors. J. Visual Commun. Image Represent. 26, 222–230 (2015). https://doi.org/10.1016/j.jvcir.2014.11.016
Daras, G., Odena, A., Zhang, H., Dimakis, A.G.: Your local GAN: designing two dimensional local attention mechanisms for generative models. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 14519–14527 (2020). https://doi.org/10.1109/CVPR42600.2020.01454
Eldeen, N., Khalifa, M.: Detection of coronavirus (COVID-19) associated pneumonia based on generative adversarial networks and a fine-tuned deep transfer learning model using chest X-ray dataset (2020). http://www.egyptscience.net
Elgammal, A., Liu, B., Elhoseiny, M., Mazzone, M.: CAN: creative adversarial networks, generating “art” by learning about styles and deviating from style norms, pp. 1–22 (2017). arXiv
Elgammal, A., Liu, B., Kim, D., Elhoseiny, M., Mazzone, M.: The shape of art history in the eyes of the machine. In: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pp. 2183–2191 (2018)
Farthing, S.: Tudo sobre Arte, 2nd edn. (2018)
Florea, C., Toca, C., Gieseke, F.: Artistic movement recognition by boosted fusion of color structure and topographic description. In: Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, pp. 569–577 (2017). https://doi.org/10.1109/WACV.2017.69
Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., Greenspan, H.: GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 321, 321–331 (2018). https://doi.org/10.1016/j.neucom.2018.09.013
Gao, X., Tian, Y., Qi, Z.: RPD-GAN: learning to draw realistic paintings with generative adversarial network. IEEE Trans. Image Process. 29, 8706–8720 (2020). https://doi.org/10.1109/TIP.2020.3018856
Goodfellow, I., et al.: Generative adversarial networks. Commun. ACM 63, 139–144 (2014). https://doi.org/10.1145/3422622
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems, pp. 5768–5778 (2017)
Han, C., et al.: Synthesizing diverse lung nodules wherever massively: 3D multi-conditional GAN-based CT image augmentation for object detection. In: Proceedings - 2019 International Conference on 3D Vision, 3DV 2019, pp. 729–737 (2019). https://doi.org/10.1109/3DV.2019.00085
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745
Karayev, S., et al.: Recognizing image style. In: BMVC 2014 - Proceedings of the British Machine Vision Conference 2014, pp. 1–20 (2014). https://doi.org/10.5244/c.28.122
Kastan, D.S., Farthing, S.: On Color. Yale University Press, New Haven (2018)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2012). https://doi.org/10.1145/3065386
Lecoutre, A., Negrevergne, B., Yger, F.: Recognizing art style automatically in painting with deep learning. J. Mach. Learn. Res. 77, 327–342 (2017)
Mirza, M., Osindero, S.: Conditional generative adversarial nets, pp. 1–7 (2014). arXiv
Qin, Z., Liu, Z., Zhu, P., Xue, Y.: A GAN-based image synthesis method for skin lesion classification. Comput. Methods Programs Biomed. 195 (2020). https://doi.org/10.1016/j.cmpb.2020.105568
Rodriguez, C.S., Lech, M., Pirogova, E.: Classification of style in fine-art paintings using transfer learning and weighted image patches. In: 12th International Conference on Signal Processing and Communication Systems, ICSPCS 2018 - Proceedings, pp. 1–7 (2019). https://doi.org/10.1109/ICSPCS.2018.8631731
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018)
Sandoval, C., Pirogova, E., Lech, M.: Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access 7, 41770–41781 (2019)
Shamir, L., Macura, T., Orlov, N., Eckley, D.M., Goldberg, I.G.: Impressionism, expressionism, surrealism: automated recognition of painters and schools of art. ACM Trans. Appl. Percept. 7 (2010). https://doi.org/10.1145/1670671.1670672
Shorten, C., Khoshgoftaar, T.M.: A survey on image data augmentation for deep learning. J. Big Data 6, 1–48 (2019)
Suh, S., Lee, H., Lukowicz, P., Lee, Y.O.: CEGAN: classification enhancement generative adversarial networks for unraveling data imbalance problems. Neural Netw. 133, 69–86 (2021)
Tan, M., Le, Q.V.: EfficientNet: rethinking model scaling for convolutional neural networks. In: 36th International Conference on Machine Learning, ICML 2019, pp. 10691–10700 (2019)
Tan, W.R., Chan, C.S., Aguirre, H.E., Tanaka, K.: Ceci n’est pas une pipe: a deep convolutional network for fine-art paintings classification. In: Proceedings - International Conference on Image Processing, ICIP 2016, pp. 3703–3707 (2016). https://doi.org/10.1109/ICIP.2016.7533051
Tomei, M., Cornia, M., Baraldi, L., Cucchiara, R.: Art2Real: unfolding the reality of artworks via semantically-aware image-to-image translation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5842–5852 (2019). https://doi.org/10.1109/CVPR.2019.00600
Waheed, A., Goyal, M., Gupta, D., Khanna, A., Al-Turjman, F., Pinheiro, P.R.: CovidGAN: data augmentation using auxiliary classifier GAN for improved COVID-19 detection. IEEE Access 8, 91916–91923 (2020). https://doi.org/10.1109/ACCESS.2020.2994762
Wang, Z., She, Q., Ward, T.E.: Generative adversarial networks: a survey and taxonomy, pp. 1–41 (2019). arXiv
Wu, J., Huang, Z., Thoma, J., Acharya, D., Van Gool, L.: Wasserstein divergence for GANs. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11209, pp. 673–688. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01228-1_40
Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks, pp. 3863–3871 (2020). arXiv
Yıldırım, Ö., Pławiak, P., Tan, R.S., Acharya, U.R.: Arrhythmia detection using deep convolutional neural network with long duration ECG signals (2018)
Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-attention generative adversarial networks. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 97, pp. 7354–7363. PMLR (2019)
Zhong, S., Huang, X., Xiao, Z.: Fine-art painting classification via two-channel dual path networks. Int. J. Mach. Learn. Cybern. 11, 137–152 (2020)
Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2242–2251 (2017)
Zhu, Y., Ji, Y., Zhang, Y., Xu, L., Zhou, A.L., Chan, E.: Machine: the new art connoisseur (2019). http://arxiv.org/abs/1911.10091
Acknowledgements
This work is part of the Center for Data Science with funding by Itaú Unibanco. The first author thanks Itaú Unibanco for its generosity in authorizing research activities that led to this work. The second author was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grant 312180/2018-7, and by São Paulo Research Foundation (FAPESP), grant 2019/07665-4.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Pérez, S.P., Cozman, F.G. (2021). How to Generate Synthetic Paintings to Improve Art Style Classification. In: Britto, A., Valdivia Delgado, K. (eds) Intelligent Systems. BRACIS 2021. Lecture Notes in Computer Science(), vol 13074. Springer, Cham. https://doi.org/10.1007/978-3-030-91699-2_17
DOI: https://doi.org/10.1007/978-3-030-91699-2_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91698-5
Online ISBN: 978-3-030-91699-2
eBook Packages: Computer Science (R0)