
Temporal-Spatial Refinements for Video Concept Fusion

  • Conference paper
Computer Vision – ACCV 2012 (ACCV 2012)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 7726)


Abstract

Context-based concept fusion (CBCF) is increasingly used in video semantic indexing: it exploits the relations among different concepts to refine the original detection results. In this paper, we present a CBCF method called the Temporal-Spatial Node Balance algorithm (TSNB). The method is based on a physical model in which concepts are regarded as nodes and relations as forces; all spatial and temporal relations are then balanced against the cost of moving the nodes. This model is intuitive and observable, making it easy to explain how a concept influences, or is influenced by, other concepts, and it uses both spatial and temporal information to describe the semantic structure of the video. We apply TSNB to the TRECVid 2005-2010 datasets. The results show that the method outperforms all existing works known to us, while also being faster.
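Below is a minimal sketch of how such a force-balance refinement could look. It is an illustrative reading of the abstract, not the authors' published formulation: the function tsnb_refine, the relation matrices w_spatial and w_temporal, and the move_cost, step, and tol parameters are all assumptions. Concept detection scores act as node positions; spatial and temporal relations pull related scores toward agreement, a moving cost resists drifting from the original detector output, and iteration stops when the forces approximately balance.

```python
import numpy as np

def tsnb_refine(scores, w_spatial, w_temporal, move_cost=1.0,
                step=0.1, iters=200, tol=1e-6):
    """Iteratively balance relational 'forces' acting on concept scores.

    scores     -- (T, C) array: initial detection scores, T shots x C concepts
    w_spatial  -- (C, C) symmetric intra-shot concept relation weights
    w_temporal -- (C, C) symmetric adjacent-shot concept relation weights

    All names and the update rule are hypothetical, inferred from the
    abstract's node/force description.
    """
    s0 = np.asarray(scores, dtype=float)
    s = s0.copy()
    d_sp = w_spatial.sum(axis=1)   # total spatial pull on each concept
    d_tm = w_temporal.sum(axis=1)  # total temporal pull on each concept
    for _ in range(iters):
        # Spatial force: each node is pulled toward the scores of
        # related concepts within the same shot.
        f_sp = s @ w_spatial.T - s * d_sp
        # Temporal force: each node is pulled toward related concept
        # scores in the previous and next shots.
        f_tm = np.zeros_like(s)
        f_tm[1:] += s[:-1] @ w_temporal.T - s[1:] * d_tm
        f_tm[:-1] += s[1:] @ w_temporal.T - s[:-1] * d_tm
        # Moving cost: a restoring force resisting drift away from
        # the original detector output.
        f_mc = move_cost * (s0 - s)
        delta = step * (f_sp + f_tm + f_mc)
        s += delta
        if np.abs(delta).max() < tol:  # forces approximately balanced
            break
    return s

# Toy usage: 5 shots, 3 concepts, with concepts 0 and 1 strongly related.
rng = np.random.default_rng(0)
raw = rng.random((5, 3))
w_sp = np.array([[0.0, 0.8, 0.1], [0.8, 0.0, 0.1], [0.1, 0.1, 0.0]])
refined = tsnb_refine(raw, w_sp, 0.5 * w_sp)
```

In practice the relation matrices would presumably be estimated from concept co-occurrence statistics on annotated training data; the paper itself should be consulted for the actual balance equations.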





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Geng, J., Miao, Z., Chi, H. (2013). Temporal-Spatial Refinements for Video Concept Fusion. In: Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z. (eds) Computer Vision – ACCV 2012. ACCV 2012. Lecture Notes in Computer Science, vol 7726. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37431-9_42


  • DOI: https://doi.org/10.1007/978-3-642-37431-9_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37430-2

  • Online ISBN: 978-3-642-37431-9

  • eBook Packages: Computer Science (R0)
