On the Spatial Extents of SIFT Descriptors for Visual Concept Detection

  • Markus Mühling
  • Ralph Ewerth
  • Bernd Freisleben
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6962)

Abstract

State-of-the-art systems for visual concept detection typically rely on the Bag-of-Visual-Words representation. While several aspects of this representation have been studied previously, such as the keypoint sampling strategy, vocabulary size, projection method, weighting scheme, or the integration of color, the impact of the spatial extents of local SIFT descriptors has not been investigated in previous work. In this paper, the effect of different spatial extents in a state-of-the-art system for visual concept detection is examined. Based on the observation that SIFT descriptors with different spatial extents yield large performance differences, we propose a concept detection system that combines feature representations for different spatial extents using multiple kernel learning. It is shown experimentally on a large set of 101 concepts from the Mediamill Challenge and on the PASCAL Visual Object Classes Challenge that these feature representations are complementary: superior performance can be achieved on both test sets using the proposed system.
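The combination described in the abstract can be illustrated with a short sketch. This is not the authors' implementation: it builds one bag-of-visual-words histogram per (hypothetical) SIFT spatial extent and averages the resulting linear kernels, a uniformly weighted stand-in for the multiple kernel learning step. The descriptors here are synthetic random vectors; in a real pipeline they would come from a dense SIFT extractor run with different magnification factors.

```python
# Sketch only: per-extent BoVW histograms combined by uniform kernel
# averaging (MKL would learn the per-extent weights instead).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
SPATIAL_EXTENTS = [3, 6, 9]   # hypothetical magnification factors
VOCAB_SIZE = 32
n_images = 10

def bovw_histogram(descriptors, codebook):
    """Hard-assign each descriptor to its nearest visual word."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

kernels = []
for extent in SPATIAL_EXTENTS:
    # One synthetic 128-d descriptor set per image for this extent.
    all_desc = [rng.normal(size=(200, 128)) for _ in range(n_images)]
    codebook = KMeans(n_clusters=VOCAB_SIZE, n_init=3, random_state=0)
    codebook.fit(np.vstack(all_desc))
    H = np.array([bovw_histogram(d, codebook) for d in all_desc])
    kernels.append(H @ H.T)  # linear kernel on the histograms

# Uniformly weighted combination of the per-extent kernels.
combined = sum(kernels) / len(kernels)
print(combined.shape)  # (10, 10)
```

The combined kernel matrix could then be passed to a kernelized SVM; the paper's point is that the per-extent representations are complementary, so the combined kernel outperforms any single extent.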

Keywords

Visual Concept Detection · Video Retrieval · SIFT · Bag-of-Words · Magnification Factor · Spatial Bin Size

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Markus Mühling (1)
  • Ralph Ewerth (1)
  • Bernd Freisleben (1)
  1. Department of Mathematics & Computer Science, University of Marburg, Marburg, Germany