
Visual Saliency Based Video Summarization: A Case Study For Preview Video Generation

  • G. Ramya
  • Subhash Kulkarni
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 79)

Abstract

A research direction for visual content-driven videos has been to facilitate a short preview of each video through summarization, where the summary is largely a combination of short-duration sequences drawn from each scene captured by a stationary camera. This work uses visual saliency features to trace scene-change positions in a video. In the present work, the visual saliency features are built from color and intensity information. Further, forward and backward accumulated difference measures computed on these saliency features are used to filter out false-positive scene-change detections. The results are quite satisfactory and closely match the exact scene-change positions in the video. Significant accuracy is observed for videos captured with stationary cameras; for moving (non-stationary) cameras, video summarization remains a challenging problem. The proposed method has been successfully tested on visual content-driven videos ranging from underwater scenes and fight sequences to surveillance videos for generating preview videos.
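The sketch below illustrates the general idea described in the abstract, not the authors' exact pipeline: a crude color-and-intensity saliency proxy is computed per frame, and forward/backward accumulated differences (FAD/BAD) of consecutive saliency maps are used to flag scene-change candidates while suppressing isolated false positives. The saliency approximation, window size, and threshold are illustrative assumptions.

```python
# Minimal sketch of saliency-based scene-change tracing with
# forward/backward accumulated differences (FAD/BAD).
# Assumptions: simple center-surround color+intensity saliency stands in
# for the paper's saliency model; window and thresh are illustrative.
import cv2
import numpy as np

def saliency_map(frame_bgr):
    """Crude saliency: center-surround contrast on intensity and
    color-opponency channels."""
    f = cv2.resize(frame_bgr, (160, 120)).astype(np.float32) / 255.0
    b, g, r = cv2.split(f)
    intensity = (b + g + r) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    sal = np.zeros_like(intensity)
    for ch in (intensity, rg, by):
        fine = cv2.GaussianBlur(ch, (0, 0), 2)
        coarse = cv2.GaussianBlur(ch, (0, 0), 8)
        sal += np.abs(fine - coarse)    # center-surround difference
    return sal / sal.max() if sal.max() > 0 else sal

def scene_changes(video_path, window=5, thresh=3.0):
    """Flag frames whose saliency difference stands out against both the
    forward (FAD) and backward (BAD) accumulated differences, which
    suppresses isolated false positives."""
    cap = cv2.VideoCapture(video_path)
    maps = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        maps.append(saliency_map(frame))
    cap.release()
    # mean absolute difference between consecutive saliency maps
    d = np.array([np.mean(np.abs(maps[i + 1] - maps[i]))
                  for i in range(len(maps) - 1)])
    changes = []
    for i in range(window, len(d) - window):
        fad = d[i:i + window].sum()        # forward accumulated difference
        bad = d[i - window:i].sum()        # backward accumulated difference
        if d[i] > thresh * (fad + bad) / (2 * window):
            changes.append(i + 1)          # candidate scene-change frame
    return changes
```

Requiring the peak to dominate both the forward and backward accumulated differences is what filters transient spikes (e.g., flicker or brief occlusions) that would otherwise register as false scene changes.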

Keywords

Video processing · Visual saliency · Forward and backward accumulated difference (FAD, BAD) · Video summarization · Preview video generation


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Electronics and Communication Engineering, PESIT-Bangalore South Campus, Bangalore, India
