Scene Signatures for Unconstrained News Video Stories

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7131)

Abstract

We propose a novel video signature, called a scene signature, defined as a collection of SIFT descriptors. A scene signature represents the visual cues of a video scene in a compact and comprehensive manner. We detect Near-Duplicate Keyframe clusters within a news story and, for each cluster, generate an initial scene signature that includes the most informative mutual and distinctive visual cues. Unlike conventional keypoint-trajectory-based signatures, we take the co-occurrence of SIFT keypoints into account; moreover, we utilize keypoints that describe novel visual cues in the scene. We then apply three refinement steps to the initial scene signature, narrowing the semantic gap and yielding more compact, semantically meaningful scene signatures. Experimental results confirm the efficiency, robustness, and uniqueness of the proposed scene signature compared to other global and local video signatures.
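The core notion above, a scene signature as a set of descriptors whose similarity is judged by distinctive keypoint correspondences, can be sketched as follows. This is a hypothetical, simplified illustration (plain nearest-neighbour matching with a ratio test over toy descriptor vectors), not the authors' actual pipeline, which builds signatures from SIFT descriptors over Near-Duplicate Keyframe clusters.

```python
import math

def nn_match(sig_a, sig_b, ratio=0.8):
    """Match each descriptor in sig_a to its nearest neighbour in sig_b,
    keeping only distinctive correspondences via a ratio test: the best
    match must be clearly closer than the second-best."""
    matches = []
    for i, da in enumerate(sig_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(sig_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
        elif len(dists) == 1:
            matches.append((i, dists[0][1]))
    return matches

def signature_similarity(sig_a, sig_b):
    """Fraction of descriptors in the smaller signature that find a
    distinctive match in the other -- a simple set-level score."""
    if not sig_a or not sig_b:
        return 0.0
    return len(nn_match(sig_a, sig_b)) / min(len(sig_a), len(sig_b))
```

With identical descriptor sets the score is 1.0, while ambiguous descriptors (two equally close candidates) are rejected by the ratio test, which is the sense in which only "distinctive" cues contribute to the signature match.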

Keywords

  • Scene signature
  • Near-Duplicate Keyframe
  • News retrieval




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Younessian, E., Rajan, D. (2012). Scene Signatures for Unconstrained News Video Stories. In: Schoeffmann, K., Merialdo, B., Hauptmann, A.G., Ngo, CW., Andreopoulos, Y., Breiteneder, C. (eds) Advances in Multimedia Modeling. MMM 2012. Lecture Notes in Computer Science, vol 7131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27355-1_10

  • DOI: https://doi.org/10.1007/978-3-642-27355-1_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27354-4

  • Online ISBN: 978-3-642-27355-1

  • eBook Packages: Computer Science (R0)