Improving Retake Detection by Adding Motion Feature

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNIP, volume 6979)

Abstract

Retake detection is useful for many video summarization applications. It is a challenging task because different takes of the same scene usually differ in length or have been recorded under different environmental conditions. A general approach to this problem is to decompose the input video sequence into sub-sequences and then group these sub-sequences into clusters. Combined with temporal information, the clustering result is used to find take and scene boundaries. One of the most difficult steps in this approach is clustering the sub-sequences. Most previous approaches represent each sub-sequence by a single keyframe and extract features such as color and texture from that keyframe for clustering. We propose an approach that improves performance by combining the motion feature extracted from each sub-sequence with the features extracted from its representative keyframe. Experiments on the standard TRECVID BBC Rushes 2007 benchmark dataset show the effectiveness of the proposed method.
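
To make the feature-combination step concrete, the following is a minimal sketch (not the authors' implementation) of clustering sub-sequences on a descriptor that concatenates a keyframe color histogram with a per-sub-sequence motion histogram. It uses OpenCV's dense (Farneback) optical flow and scikit-learn's agglomerative clustering as stand-ins; the descriptors, the weighting parameter alpha, and all function names are illustrative assumptions rather than the paper's exact method.

```python
# Hedged sketch: combine a keyframe appearance descriptor with a
# sub-sequence motion descriptor, then cluster the sub-sequences.
# All choices below (histogram sizes, Farneback flow, weighting) are
# illustrative assumptions, not the paper's reported configuration.
import cv2
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def keyframe_descriptor(frame):
    """Color histogram (8x8x8 bins) of the representative keyframe."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()


def motion_descriptor(frames):
    """Flow-direction histogram, weighted by magnitude, averaged over
    consecutive frame pairs of one sub-sequence (needs >= 2 frames)."""
    hists = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        nxt = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
        hists.append(h / (h.sum() + 1e-8))
        prev = nxt
    return np.mean(hists, axis=0)


def cluster_subsequences(subsequences, keyframes, n_clusters, alpha=0.5):
    """Concatenate weighted appearance and motion descriptors, cluster."""
    feats = [np.hstack([(1 - alpha) * keyframe_descriptor(kf),
                        alpha * motion_descriptor(frames)])
             for kf, frames in zip(keyframes, subsequences)]
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(
        np.vstack(feats))
```

In this sketch, alpha controls the relative weight of motion versus keyframe appearance; the actual features, weighting, and clustering procedure used in the paper may differ.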




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Van Hoang, H., Le, DD., Satoh, S., Nguyen, Q.H. (2011). Improving Retake Detection by Adding Motion Feature. In: Maino, G., Foresti, G.L. (eds) Image Analysis and Processing – ICIAP 2011. ICIAP 2011. Lecture Notes in Computer Science, vol 6979. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24088-1_16

  • DOI: https://doi.org/10.1007/978-3-642-24088-1_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24087-4

  • Online ISBN: 978-3-642-24088-1

  • eBook Packages: Computer Science, Computer Science (R0)