
Deep Learning Does Not Generalize Well to Recognizing Cats and Dogs in Chinese Paintings

  • Qianqian Gu
  • Ross King
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11828)

Abstract

Although Deep Learning (DL) image analysis has made rapid advances, it still has limitations indicating that its approach differs significantly from human vision, e.g. the requirement for large training sets and susceptibility to adversarial attacks. Here we show that DL also differs in failing to generalize well to Traditional Chinese Paintings (TCPs). We developed a new DL object detection method, A-RPN (Assembled Region Proposal Network), which concatenates low-level visual information and high-level semantic knowledge to reduce coarseness in region-based object detection. A-RPN significantly outperforms YOLO2 and Faster R-CNN on natural images (P < 0.02). When applied to TCPs, YOLO2, Faster R-CNN and A-RPN suffered drops in mAP of 12.9%, 13.2% and 13.4% respectively, compared to natural images. There was little or no difference in recognizing humans, but a large drop in mAP for cats and dogs (27% and 31%) and a very large drop for horses (35.9%). The abstract nature of TCPs may be responsible for DL's poor performance.
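
The feature-fusion idea described above can be made concrete with a short sketch. The code below is an illustrative assumption, not the paper's A-RPN: it simply concatenates a low-level (edge/texture) feature map with an upsampled high-level (semantic) feature map from a toy backbone before an RPN-style objectness and box-regression head; the backbone, channel counts and anchor count are all placeholders.

    # Hypothetical sketch (PyTorch): concatenating a low-level feature map with an
    # upsampled high-level feature map before an RPN-style head. This is NOT the
    # authors' A-RPN; the toy backbone, channel counts and anchor count are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(c_in, c_out, stride):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
                             nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    class FusedRPNHead(nn.Module):
        def __init__(self, num_anchors=9):
            super().__init__()
            # Low-level branch: stride 4, keeps edges and texture.
            self.low = nn.Sequential(conv_block(3, 64, 2), conv_block(64, 128, 2))
            # High-level branch: stride 32, carries more semantic context.
            self.high = nn.Sequential(conv_block(128, 256, 2), conv_block(256, 512, 2),
                                      conv_block(512, 512, 2))
            self.fuse = conv_block(128 + 512, 256, 1)          # after concatenation
            self.cls = nn.Conv2d(256, num_anchors, 1)          # objectness per anchor
            self.reg = nn.Conv2d(256, num_anchors * 4, 1)      # box deltas per anchor

        def forward(self, x):
            low = self.low(x)                                   # (N, 128, H/4, W/4)
            high = self.high(low)                               # (N, 512, H/32, W/32)
            high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
            fused = self.fuse(torch.cat([low, high_up], dim=1)) # channel concatenation
            return self.cls(fused), self.reg(fused)

    if __name__ == "__main__":
        scores, deltas = FusedRPNHead()(torch.randn(1, 3, 224, 224))
        print(scores.shape, deltas.shape)  # [1, 9, 56, 56], [1, 36, 56, 56]

The fused map drives both the objectness scores and the box deltas, so region proposals are conditioned on texture detail and semantic context at the same time, which is the general motivation behind combining low- and high-level features in region-based detectors.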

Keywords

Traditional Chinese Paintings · Computational aesthetics · Deep Learning · Object recognition · Machine learning

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. The University of Manchester, Manchester, UK
  2. The Alan Turing Institute, London, UK