How to Measure the Relevance of a Retargeting Approach?

  • Christel Chamaret
  • Olivier Le Meur
  • Philippe Guillotel
  • Jean-Claude Chevet
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6554)

Abstract

Most cell phones today can receive and display video content. Nonetheless, we are still significantly behind the point where premium made-for-mobile content is mainstream, widely available, and affordable. Significant issues must be overcome, and the small screen size is one of them. Indeed, directly transferring conventional content (i.e., content not specifically shot for mobile devices) yields a video in which the main characters or objects of interest may become indistinguishable from the rest of the scene. The content therefore needs to be retargeted. Different solutions exist, based either on distorting the image, removing redundant areas, or cropping. The most efficient ones rely on dynamic adaptation of a cropping window; they significantly improve the viewing experience by zooming in on the regions of interest. However, there is currently no common agreement on how to compare different solutions. A retargeting metric is therefore proposed to gauge their quality. Eye-tracking experiments, the zooming effect measured through a coverage ratio, and temporal consistency are introduced and discussed.
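The coverage ratio and temporal consistency mentioned above can be illustrated with a minimal sketch. The window representation (x, y, width, height), the per-sequence averaging, and the centre-displacement measure used below are assumptions made for illustration only; they are not the exact definitions from the paper.

```python
# Illustrative sketch of two indicators discussed in the abstract:
# the coverage ratio (how much of the frame the cropping window keeps)
# and a simple temporal-consistency measure (how smoothly the window moves).
# The window format (x, y, w, h) and the averaging are assumptions.

from typing import List, Tuple

Window = Tuple[int, int, int, int]  # (x, y, width, height) for one frame


def coverage_ratio(windows: List[Window], frame_w: int, frame_h: int) -> float:
    """Average fraction of the frame area retained by the cropping window.

    A low ratio means a strong zoom on the region of interest;
    a ratio of 1.0 means no retargeting at all.
    """
    frame_area = float(frame_w * frame_h)
    return sum(w * h for _, _, w, h in windows) / (len(windows) * frame_area)


def temporal_consistency(windows: List[Window]) -> float:
    """Mean frame-to-frame displacement of the window centre, in pixels.

    Smaller values indicate a more stable, less jittery cropping path.
    """
    centres = [(x + w / 2.0, y + h / 2.0) for x, y, w, h in windows]
    shifts = [
        ((cx2 - cx1) ** 2 + (cy2 - cy1) ** 2) ** 0.5
        for (cx1, cy1), (cx2, cy2) in zip(centres, centres[1:])
    ]
    return sum(shifts) / len(shifts) if shifts else 0.0


if __name__ == "__main__":
    # Example: a slow horizontal pan of a 480x270 window over a 1280x720 sequence.
    crops = [(100 + 2 * t, 150, 480, 270) for t in range(50)]
    print("coverage ratio:", round(coverage_ratio(crops, 1280, 720), 3))
    print("centre shift (px/frame):", round(temporal_consistency(crops), 2))
```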

Keywords

Video sequence · Visual attention · Video content · Coverage ratio · Temporal consistency


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Christel Chamaret (1)
  • Olivier Le Meur (2)
  • Philippe Guillotel (1)
  • Jean-Claude Chevet (1)
  1. Technicolor R&I, France
  2. University of Rennes 1, France
