MTFFNet: a Multi-task Feature Fusion Framework for Chinese Painting Classification

Abstract 

Different artists have unique painting styles that can hardly be recognized by ordinary people without professional knowledge. How to intelligently analyze such artistic styles via underlying features remains a challenging research problem. In this paper, we propose a novel multi-task feature fusion architecture (MTFFNet) for the cognitive classification of traditional Chinese paintings. Specifically, taking full advantage of a pre-trained DenseNet as the backbone, MTFFNet benefits from the fusion of two different types of feature information: semantic features and brush stroke features. These features are learned end-to-end from the RGB images and an auxiliary gray-level co-occurrence matrix (GLCM), which, for the first time, enhances the discriminative power of the learned features. Extensive experiments demonstrate that MTFFNet achieves significantly better classification performance than many state-of-the-art approaches.
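The brush stroke cues above are derived from a gray-level co-occurrence matrix. As a minimal sketch of that ingredient, assuming scikit-image and illustrative settings (the quantization depth, pixel distance, and orientations below are our assumptions, not the paper's exact configuration), a GLCM and a few Haralick-style texture statistics can be computed as follows:

```python
import numpy as np
from skimage import io, color
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(path, levels=16):
    """Quantize a painting to `levels` gray bins and summarize its GLCM."""
    gray = color.rgb2gray(io.imread(path))         # float image in [0, 1]
    gray = (gray * (levels - 1)).astype(np.uint8)  # coarse quantization
    # Co-occurrences at distance 1 pixel over four orientations.
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Classic scalar texture descriptors; MTFFNet itself feeds GLCM-derived
    # feature maps into a CNN branch, so these scalars are only illustrative.
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```

In MTFFNet, the GLCM information enters as a feature map consumed by the brush stroke branch rather than as hand-crafted scalars; the sketch only shows what a co-occurrence matrix encodes.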

In this paper, an end-to-end multi-task feature fusion method for Chinese painting classification is proposed. The proposed model, MTFFNet, is composed of two branches: one for top-level RGB feature learning and one for low-level brush stroke feature learning. The semantic branch takes the original image of a traditional Chinese painting as input and extracts its color and semantic information, while the brush stroke branch takes the GLCM feature map as input and extracts texture and edge information. A multi-kernel learning SVM (support vector machine) serves as the final classifier; both components are sketched below. Experimental evaluation shows that this method improves the accuracy of Chinese painting classification and enhances generalization. By adopting the end-to-end multi-task feature fusion strategy, MTFFNet can extract richer semantic features and texture information from the image. Compared with state-of-the-art Chinese painting classification methods, the proposed method achieves much higher accuracy on the proposed datasets without sacrificing speed or efficiency, providing an effective solution for the cognitive classification of Chinese ink painting whose accuracy and efficiency have been fully validated.
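To make the two-branch design concrete, here is a minimal PyTorch sketch, assuming torchvision's DenseNet-121 for both branches and plain concatenation as the fusion step; the layer sizes and fusion choice are our assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBranchFusion(nn.Module):
    """Hypothetical MTFFNet-style fusion of semantic and brush stroke features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # One pretrained DenseNet backbone per branch, classifier heads removed.
        self.rgb_branch = models.densenet121(weights="IMAGENET1K_V1").features
        self.glcm_branch = models.densenet121(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fuse = nn.Linear(2 * 1024, feat_dim)  # DenseNet-121 emits 1024 channels

    def forward(self, rgb, glcm):
        # `glcm` is assumed to be the GLCM feature map replicated to 3 channels,
        # since the DenseNet stem expects 3-channel input.
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)     # semantic features
        f_glcm = self.pool(self.glcm_branch(glcm)).flatten(1)  # brush stroke features
        return self.fuse(torch.cat([f_rgb, f_glcm], dim=1))    # fused descriptor
```

The fused descriptors would then be classified by the multi-kernel SVM. scikit-learn has no built-in multiple kernel learning, so the snippet below stands in with a fixed-weight combination of two base kernels passed as a precomputed Gram matrix; the weight, gamma, and toy data are placeholders:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

def combined_kernel(A, B, w=0.5, gamma=0.1):
    # Fixed-weight stand-in for learned multi-kernel weights.
    return w * linear_kernel(A, B) + (1 - w) * rbf_kernel(A, B, gamma=gamma)

rng = np.random.default_rng(0)  # toy stand-in for fused painting descriptors
X_tr, y_tr = rng.normal(size=(40, 256)), rng.integers(0, 3, size=40)
X_te = rng.normal(size=(10, 256))

clf = SVC(kernel="precomputed").fit(combined_kernel(X_tr, X_tr), y_tr)
pred = clf.predict(combined_kernel(X_te, X_tr))  # kernel against the training set
```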

Funding

This research work was financially supported by the National Natural Science Foundation of China under grant nos. 61772360 and 61876125.

Author information

Corresponding author

Correspondence to Zheng Wang.

Ethics declarations

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jiang, W., Wang, X., Ren, J. et al. MTFFNet: a Multi-task Feature Fusion Framework for Chinese Painting Classification. Cogn Comput 13, 1287–1296 (2021). https://doi.org/10.1007/s12559-021-09896-9
