
A Selective Weighted Late Fusion for Visual Concept Recognition

  • Ningning Liu
  • Emmanuel Dellandréa
  • Bruno Tellez
  • Liming Chen
Chapter
Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)

Abstract

We propose a novel multimodal approach to automatically predict the visual concepts of images through an effective fusion of visual and textual features. It relies on a Selective Weighted Late Fusion (SWLF) scheme that learns, by optimizing an overall Mean interpolated Average Precision (MiAP), to automatically select and weight the best features for each visual concept to be recognized. Experiments were conducted on the MIR Flickr image collection within the ImageCLEF Photo Annotation challenge. The results demonstrate the effectiveness of SWLF: it achieved a MiAP of 43.69% in 2011, ranking second out of the 79 submitted runs, and a MiAP of 43.67% in 2012, ranking first out of the 80 submitted runs.
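The full fusion procedure is described in the body of the chapter; as a rough illustration of the idea summarized above, the sketch below shows one way a per-concept selective weighted late fusion could be implemented. All names (select_and_weight, fuse, the toy feature names) are hypothetical and not the authors' code, and scikit-learn's non-interpolated average precision is used only as a stand-in for the MiAP criterion.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def select_and_weight(val_scores, val_labels, top_n=5):
    """For one visual concept, rank feature experts by validation average
    precision, keep the top_n, and use their (normalized) AP as fusion weights.
    val_scores: dict mapping feature name -> classifier scores on a validation set.
    """
    aps = {name: average_precision_score(val_labels, s)
           for name, s in val_scores.items()}
    selected = sorted(aps, key=aps.get, reverse=True)[:top_n]
    total = sum(aps[name] for name in selected)
    return {name: aps[name] / total for name in selected}

def fuse(test_scores, weights):
    """Weighted late fusion: weighted sum of the selected experts' scores."""
    return sum(w * test_scores[name] for name, w in weights.items())

# Toy usage with two hypothetical feature experts ("sift_bow", "tags_hoc").
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)                       # validation ground truth
val_scores = {"sift_bow": labels + 0.8 * rng.normal(size=200),
              "tags_hoc": labels + 1.5 * rng.normal(size=200)}
weights = select_and_weight(val_scores, labels, top_n=2)
fused = fuse(val_scores, weights)   # in practice, apply to held-out test scores
```

The point of the "selective" step is that only the experts that actually help a given concept contribute to the fused score, so weak or irrelevant modalities cannot drag down the combined prediction.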

Keywords

Textual Feature · Visual Feature · Average Precision · Fusion Rule · Late Fusion


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Ningning Liu (1)
  • Emmanuel Dellandréa (1)
  • Bruno Tellez (1)
  • Liming Chen (1)

  1. Université de Lyon, CNRS, Ecole Centrale de Lyon, LIRIS, UMR 5205, Lyon, France
