CNN-Based Classification of Illustrator Style in Graphic Novels: Which Features Contribute Most?

  • Jochen Laubrock
  • David Dubray
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11296)


Can classification of graphic novel illustrators be achieved with convolutional neural network (CNN) features evolved for classifying concepts in photographs? Assuming that basic features at lower network levels generically represent invariants of our environment, they should be reusable. But at what level of abstraction are features characteristic of illustrator style? We tested transfer learning by classifying roughly 50,000 digitized pages from about 200 comic books of the Graphic Narrative Corpus (GNC, [6]) by illustrator. For comparison, we also classified Manga109 [18] by book. We tested the predictability of visual features by experimentally varying which of the mixed layers of Inception V3 [29] was used to train classifiers. Overall, top-1 test-set accuracy in the artist attribution analysis increased from 92% for mixed layer 0 to over 97% when mixed layers higher in the hierarchy were added. Above mixed layer 5 there were signs of overfitting, suggesting that texture-like mid-level vision features are sufficient. Experiments varying the input material show that page layout and coloring scheme are important contributors. Thus, stylistic classification of comics artists is possible by reusing pretrained CNN features, given only a limited amount of additional training material. We propose that CNN features are general enough to provide the foundation of a visual stylometry, potentially useful for comparative art history.


Keywords: Convolutional neural network · Classification · Graphic novels · Stylometry
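The transfer-learning setup described in the abstract (reusing pretrained Inception V3 features from a chosen mixed layer to train an illustrator classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the number of classes `N_ARTISTS`, the dummy input, and the choice of `mixed5` are placeholders, and in practice `weights="imagenet"` would load the pretrained features.

```python
# Sketch: freeze an Inception V3 backbone, tap one of its "mixed" layers,
# pool the activations, and train only a lightweight classifier head.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

N_ARTISTS = 20  # hypothetical number of illustrator classes

# weights=None keeps this sketch download-free; the paper's setting
# corresponds to weights="imagenet" (features pretrained on photographs).
base = InceptionV3(weights=None, include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # reuse features as-is, no fine-tuning

# Tap a mid-level mixed layer (Keras names them mixed0 ... mixed10);
# varying this layer corresponds to the paper's layer-wise experiments.
feat = base.get_layer("mixed5").output
pooled = layers.GlobalAveragePooling2D()(feat)
out = layers.Dense(N_ARTISTS, activation="softmax")(pooled)

clf = models.Model(base.input, out)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# One forward pass on a dummy page image to check the wiring.
page = np.random.rand(1, 299, 299, 3).astype("float32")
probs = clf.predict(page, verbose=0)
print(probs.shape)  # one probability per illustrator class
```

Because only the dense head is trainable, the classifier can be fit with a comparatively small amount of labeled comic-book pages, which is the point of the transfer-learning argument.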


References

  1. Chu, W., Wu, Y.: Image style classification based on learnt deep correlation features. IEEE Trans. Multimedia 20(9), 2491–2502 (2018)
  2. Chu, W.T., Li, W.W.: Manga FaceNet: face detection in manga based on deep neural network. In: ICMR 2017, pp. 412–415. ACM, New York (2017)
  3. Cichy, R.M., Khosla, A., Pantazis, D., Torralba, A., Oliva, A.: Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Sci. Rep. 6, 27755 (2016)
  4. Crowley, E.J., Zisserman, A.: In search of art. In: Agapito, L., Bronstein, M.M., Rother, C. (eds.) ECCV 2014. LNCS, vol. 8925, pp. 54–70. Springer, Cham (2015)
  5. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR 2009 (2009)
  6. Dunst, A., Hartel, R., Laubrock, J.: The graphic narrative corpus (GNC): design, annotation, and analysis for the digital humanities. In: 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), vol. 03, pp. 15–20, November 2017
  7. Dunst, A., Hartel, R.: The quantitative analysis of comics: towards a visual stylometry of graphic narrative. In: Dunst, A., Laubrock, J., Wildfeuer, J. (eds.) Empirical Comics Research: Digital, Multimodal, and Cognitive Methods, chap. 12, pp. 239–263. Routledge, New York (2018)
  8. Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks, pp. 2414–2423, June 2016
  9. Gatys, L.A., Ecker, A.S., Bethge, M.: Texture and art with deep neural networks. Curr. Opin. Neurobiol. 46, 178–186 (2017)
  10. Greenberg, C.: American-type painting. Partisan Rev. 22(2), 179–196 (1955)
  11. Hubel, D.H., Wiesel, T.N.: Receptive fields of single neurones in the cat's striate cortex. J. Physiol. 148, 574–591 (1959)
  12. Juola, P.: Authorship attribution. Found. Trends Inf. Retrieval 1(3), 233–334 (2008)
  13. Karayev, S., Hertzmann, A., Winnemoeller, H., Agarwala, A., Darrell, T.: Recognizing image style. CoRR abs/1311.3715 (2013)
  14. Kümmerer, M., Wallis, T.S.A., Gatys, L.A., Bethge, M.: Understanding low- and high-level contributions to fixation prediction. In: The IEEE International Conference on Computer Vision (ICCV), October 2017
  15. Laubrock, J., Hohenstein, S., Kümmerer, M.: Attention to comics: cognitive processing during the reading of graphic literature. In: Dunst, A., Laubrock, J., Wildfeuer, J. (eds.) Empirical Comics Research: Digital, Multimodal, and Cognitive Methods, chap. 12, pp. 239–263. Routledge, New York (2018)
  16. LeCun, Y., et al.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
  17. Manovich, L.: How to compare one million images? In: Berry, D.M. (ed.) Understanding Digital Humanities. Palgrave Macmillan, New York (2012)
  18. Matsui, Y., et al.: Sketch-based manga retrieval using Manga109 dataset. Multimedia Tools Appl. 76(20), 21811–21838 (2017)
  19. Moretti, F.: Distant Reading. Verso, London/New York (2013)
  20. Nguyen, N., Rigaud, C., Burie, J.: Comic characters detection using deep learning. In: 2nd International Workshop on coMics Analysis, Processing, and Understanding, 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017, pp. 41–46 (2017)
  21. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979)
  22. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
  23. Saito, M., Matsui, Y.: Illustration2vec: a semantic vector representation of illustrations. In: SIGGRAPH Asia 2015 Technical Briefs, SA 2015, pp. 5:1–5:4. ACM, New York (2015)
  24. Saleh, B., Elgammal, A.M.: Large-scale classification of fine-art paintings: learning the right metric on the right feature. CoRR abs/1505.00855 (2015)
  25. Sanakoyeu, A., Kotovenko, D., Lang, S., Ommer, B.: A style-aware content loss for real-time HD style transfer (2018)
  26. Seguin, B., Striolo, C., diLenardo, I., Kaplan, F.: Visual link retrieval in a database of paintings. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9913, pp. 753–767. Springer, Cham (2016)
  27. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
  28. Sirirattanapol, C., Matsui, Y., Satoh, S., Matsuda, K., Yamamoto, K.: Deep image retrieval applied on kotenseki ancient Japanese literature. In: 2017 IEEE International Symposium on Multimedia (ISM), pp. 495–499, December 2017
  29. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. CoRR abs/1512.00567 (2015)
  30. van der Walt, S., et al.: scikit-image: image processing in Python. PeerJ 2, e453 (2014)
  31. Wölfflin, H.: Kunstgeschichtliche Grundbegriffe: Das Problem der Stilentwickelung in der neueren Kunst. Bruckmann, München (1915)
  32. Yamins, D.L.K., DiCarlo, J.J.: Using goal-driven deep learning models to understand sensory cortex. Nature Neurosci. 19(3), 356–365 (2016)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Potsdam, Potsdam, Germany
