Abstract
Given recent advances in multimodal image pretraining, where visual models trained with semantically dense textual supervision tend to generalize better than those trained with categorical attributes or through unsupervised techniques, in this work we investigate how the recent CLIP model can be applied to several tasks in the artwork domain. We perform exhaustive experiments on the NoisyArt dataset, a collection of artwork images gathered from public resources on the web. On this dataset CLIP achieves impressive results in (zero-shot) classification and promising results in both artwork-to-artwork and description-to-artwork retrieval.
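To make the zero-shot classification setting concrete, here is a minimal sketch of how CLIP matches an artwork image against textual class prompts without any task-specific training. It assumes the openai/CLIP Python package; the class names, the prompt template, and the image path are hypothetical placeholders, not the exact prompts used in the paper.

```python
# Minimal sketch of CLIP zero-shot artwork classification.
# Assumes: pip install git+https://github.com/openai/CLIP.git
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical class names; in NoisyArt the classes are artwork identities.
class_names = ["Mona Lisa", "The Starry Night", "Girl with a Pearl Earring"]
text = clip.tokenize(
    [f"a photo of the artwork {c}" for c in class_names]
).to(device)

# Hypothetical query image path.
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize, then score classes by cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("predicted class:", class_names[probs.argmax().item()])
```

Description-to-artwork retrieval follows the same pattern with the roles reversed: embed each textual description once with encode_text, then rank gallery images by cosine similarity to the description embedding; artwork-to-artwork retrieval compares encode_image embeddings directly.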
Bibliography
Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: NetVLAD: CNN architecture for weakly supervised place recognition. In: Proceedings of the CVPR (2016)
Del Chiaro, R., Bagdanov, A.D., Del Bimbo, A.: NoisyArt: a dataset for webly-supervised artwork recognition. In: VISIGRAPP (4: VISAPP), pp. 467–475 (2019)
Del Chiaro, R., Bagdanov, A.D., Del Bimbo, A.: Webly-supervised zero-shot learning for artwork instance recognition. Pattern Recogn. Lett. 128, 420–426 (2019). ISSN 0167-8655. https://doi.org/10.1016/j.patrec.2019.09.027. https://www.sciencedirect.com/science/article/pii/S0167865519302739
Delhumeau, J., Gosselin, P.-H., Jégou, H., Pérez, P.: Revisiting the VLAD image representation. In: Proceedings of the ACM MM (2013)
Frome, A., et al.: DeViSE: a deep visual-semantic embedding model. In: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 26. Curran Associates, Inc. (2013). https://proceedings.neurips.cc/paper/2013/file/7cce53cf90577442771720a370c3c723-Paper.pdf
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the CVPR, pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90
Jégou, H., Douze, M., Schmid, C.: Improving bag-of-features for large scale image search. Int. J. Comput. Vis. 87(3), 316–336 (2010). https://doi.org/10.1007/s11263-009-0285-2
Jégou, H., Perronnin, F., Douze, M., Sánchez, J., Pérez, P., Schmid, C.: Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell. 34(9), 1704–1716 (2012). ISSN 1939-3539. https://doi.org/10.1109/TPAMI.2011.235
Kalantidis, Y., Mellina, C., Osindero, S.: Cross-dimensional weighting for aggregated deep convolutional features (2016). https://doi.org/10.1007/978-3-319-46604-0_48
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the NIPS (2012)
Mikulik, A., Perdoch, M., Chum, O., Matas, J.: Learning vocabularies over a fine quantization. Int. J. Comput. Vis. 103(1), 163–175 (2013)
Perronnin, F., Sánchez, J., Mensink, T.: Improving the fisher kernel for large-scale image classification. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6314, pp. 143–156. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15561-1_11
Radenovic, F., Tolias, G., Chum, O.: Fine-tuning CNN image retrieval with no human annotation. IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1655–1668 (2019). https://doi.org/10.1109/TPAMI.2018.2846566
Radford, A., et al.: Learning transferable visual models from natural language supervision. In: Proceedings of the ICML (2021). arXiv:2103.00020
Romera-Paredes, B., Torr, P.: An embarrassingly simple approach to zero-shot learning. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 2152–2161. PMLR, Lille (2015). https://proceedings.mlr.press/v37/romera-paredes15.html
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128(2), 336–359 (2019). ISSN 1573-1405. https://doi.org/10.1007/s11263-019-01228-7
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)
Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: Proceedings of the ICCV (2003). https://doi.org/10.1109/ICCV.2003.1238663
Tolias, G., Sicre, R., Jégou, H.: Particular object retrieval with integral max-pooling of CNN activations. In: Proceedings of the ICLR (2016)
Vaccaro, F., Bertini, M., Uricchio, T., Del Bimbo, A.: Image retrieval using multi-scale CNN features pooling. In: Proceedings of the ACM ICMR (2020)
Zheng, L., Yang, Y., Tian, Q.: SIFT meets CNN: a decade survey of instance retrieval (2017)
Acknowledgments
This work was partially supported by the European Commission under European Horizon 2020 Programme, grant number 101004545 - ReInHerit.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Baldrati, A., Bertini, M., Uricchio, T., Del Bimbo, A. (2022). Exploiting CLIP-Based Multi-modal Approach for Artwork Classification and Retrieval. In: Furferi, R., Governi, L., Volpe, Y., Seymour, K., Pelagotti, A., Gherardini, F. (eds) The Future of Heritage Science and Technologies: ICT and Digital Heritage. Florence Heri-Tech 2022. Communications in Computer and Information Science, vol 1645. Springer, Cham. https://doi.org/10.1007/978-3-031-20302-2_11
DOI: https://doi.org/10.1007/978-3-031-20302-2_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20301-5
Online ISBN: 978-3-031-20302-2
eBook Packages: Computer Science, Computer Science (R0)