Scene Signatures for Unconstrained News Video Stories

  • Ehsan Younessian
  • Deepu Rajan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7131)

Abstract

We propose a novel video signature, called a scene signature, defined as a collection of SIFT descriptors that represents the visual cues of a video scene in a compact and comprehensive manner. We first detect Near-Duplicate Keyframe (NDK) clusters within a news story and generate, for each cluster, an initial scene signature containing the most informative mutual and distinctive visual cues. Unlike conventional keypoint-trajectory-based signatures, our approach takes the co-occurrence of SIFT keypoints into account and also exploits keypoints that describe novel visual cues in the scene. We then apply three refinement steps to the initial scene signature, narrowing the semantic gap to obtain more compact and semantically meaningful scene signatures. Experimental results confirm the efficiency, robustness, and uniqueness of the proposed scene signature compared to other global and local video signatures.
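The abstract's construction of an initial scene signature from an NDK cluster can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it assumes keyframes are already represented as arrays of SIFT-like descriptors, uses Lowe's ratio test for matching, and pools "mutual" descriptors (matched across keyframes of the cluster) together with "distinctive" ones (unmatched, i.e. novel cues); the function names `match_descriptors` and `scene_signature` are hypothetical.

```python
import numpy as np

def match_descriptors(a, b, ratio=0.8):
    """Match each descriptor in `a` to its nearest neighbour in `b`,
    accepting the match only if it passes Lowe's ratio test."""
    matches = []
    for i, d in enumerate(a):
        dists = np.linalg.norm(b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def scene_signature(keyframes, ratio=0.8):
    """Build an initial scene signature for one NDK cluster: the union of
    mutual descriptors (co-occurring across keyframes) and distinctive
    descriptors (appearing in only one keyframe)."""
    ref = keyframes[0]
    matched_ref, distinctive = set(), []
    for kf in keyframes[1:]:
        ms = match_descriptors(ref, kf, ratio)
        matched_ref.update(i for i, _ in ms)
        matched_kf = {j for _, j in ms}
        # descriptors of this keyframe with no counterpart in the reference
        distinctive.extend(kf[j] for j in range(len(kf)) if j not in matched_kf)
    mutual = [ref[i] for i in sorted(matched_ref)]
    # reference descriptors never matched anywhere are also distinctive
    distinctive.extend(ref[i] for i in range(len(ref)) if i not in matched_ref)
    return np.array(mutual + distinctive)

# Synthetic demo: two keyframes sharing 5 near-identical descriptors,
# plus 3 and 2 unrelated ones respectively.
rng = np.random.default_rng(0)
shared = rng.normal(size=(5, 128))
kf1 = np.vstack([shared, rng.normal(size=(3, 128))])
kf2 = np.vstack([shared + 0.01, rng.normal(size=(2, 128))])
sig = scene_signature([kf1, kf2])
```

In this toy setup the signature pools the 5 mutual descriptors with the 5 distinctive ones (3 from the first keyframe, 2 from the second), so `sig` has 10 rows of dimension 128. The later refinement steps described in the abstract would then prune this set further.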

Keywords

Scene signature · Near-Duplicate Keyframe · News retrieval



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ehsan Younessian (1)
  • Deepu Rajan (1)
  1. School of Computer Engineering, Nanyang Technological University, Singapore
