
Understanding Convolutional Neural Networks in Terms of Category-Level Attributes

  • Makoto Ozeki
  • Takayuki Okatani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9004)

Abstract

It has recently been reported that convolutional neural networks (CNNs) perform well in many image recognition tasks. They significantly outperform previous approaches that are not based on neural networks, particularly for object category recognition. This performance is arguably owing to their ability to discover better image features for recognition tasks through learning, resulting in the acquisition of better internal representations of the inputs. However, despite this good performance, it remains an open question why CNNs work so well and how they learn such good representations. In this study, we conjecture that the learned representations can be interpreted as category-level attributes with good properties. To examine this conjecture, we conducted several experiments using the AwA (Animals with Attributes) dataset and a CNN trained for ILSVRC-2012 in a fully supervised setting. We report that there exist units in the CNN that can predict some of the 85 semantic attributes fairly accurately, along with the detailed observation that this holds only for visual attributes and not for non-visual ones. It is natural to think that the CNN may discover not only semantic attributes but also non-semantic ones (or ones that are difficult to express in words). To explore this possibility, we perform zero-shot learning by regarding the activation patterns of the upper layers as attributes describing the categories. The result shows that this approach outperforms the state of the art by a significant margin.
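As a rough illustration of the first experiment, the sketch below scores individual units of an upper CNN layer by how well their activations alone separate images with and without a given binary attribute (e.g. one of the 85 AwA attributes such as "stripes"). The arrays `activations` and `attribute_labels` are placeholders, and the use of per-unit AUC as the score is an assumption for illustration; this is not the authors' implementation.

# Minimal, hypothetical sketch (not the paper's code): score each unit of an
# upper CNN layer by how well its activation alone predicts a binary attribute.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: activations of D upper-layer units for N images, plus a
# binary attribute label per image (1 if the image's category has the attribute).
N, D = 1000, 512
activations = rng.random((N, D))
attribute_labels = rng.integers(0, 2, size=N)

# AUC of each single unit's activation as a predictor of the attribute;
# AUC is threshold-free, so no per-unit classifier needs to be trained.
aucs = np.array([roc_auc_score(attribute_labels, activations[:, d]) for d in range(D)])

best_unit = int(np.argmax(aucs))
print(f"best unit: {best_unit}, AUC = {aucs[best_unit]:.3f}")

The zero-shot experiment in the abstract can be read analogously: instead of single units, the full activation pattern of an upper layer serves as a continuous attribute vector describing each category.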

Keywords

Input Image · Semantic Attribute · Convolutional Neural Network · Killer Whale · Deep Neural Network

Notes

Acknowledgement

This work was supported by JSPS KAKENHI Grant Numbers 25135701, 25280054.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Tohoku University, Sendai, Japan
