
Saliency detection in MPEG and HEVC video using intra-frame and inter-frame distances

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

This paper proposes a video saliency detection model for MPEG- and HEVC-coded videos. The model extracts features from MPEG macroblocks and HEVC coding units; the feature variables are based on syntax elements and prediction-error statistics. The suitability of the selected features is verified using stepwise regression. Three saliency maps are generated from intra-frame distances, inter-frame distances and global distances. The proposed model is tested on the eye-1 dataset compiled by the Laurent Itti Lab at the University of Southern California. The accuracy of the model is quantified by comparing saliency values at human saccade locations against saliency values at random locations, in terms of Kullback–Leibler distances and receiver operating characteristic (ROC) curves. The proposed solution is also compared against existing work under a similar experimental setup. Experimental results show that the model achieves a Kullback–Leibler distance of 2.14 and an area under the ROC curve of 0.936.
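
As an illustration of the evaluation protocol described above, the Python sketch below scores a single saliency map at saccade locations versus random locations using a histogram-based Kullback–Leibler distance and a rank-based AUC estimate. This is a minimal sketch, not the paper's implementation: the saliency map, the sample-point lists and all helper names are hypothetical placeholders.

    import numpy as np

    def sample_saliency(saliency_map, points):
        # Read saliency values at (row, col) locations from a 2-D map.
        rows, cols = np.array(points).T
        return saliency_map[rows, cols]

    def kl_distance(p_samples, q_samples, bins=32, eps=1e-12):
        # Symmetric, histogram-based KL distance between two sets of saliency samples.
        lo = min(p_samples.min(), q_samples.min())
        hi = max(p_samples.max(), q_samples.max())
        p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
        q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
        p = p / p.sum() + eps
        q = q / q.sum() + eps
        return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def roc_auc(pos_samples, neg_samples):
        # Area under the ROC curve via the Mann-Whitney rank-sum statistic.
        scores = np.concatenate([pos_samples, neg_samples])
        ranks = scores.argsort().argsort() + 1.0   # ties are not averaged in this sketch
        n_pos, n_neg = len(pos_samples), len(neg_samples)
        u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2.0
        return u / (n_pos * n_neg)

    # Hypothetical usage with placeholder data (CIF-sized map, 50 points per class).
    rng = np.random.default_rng(0)
    saliency_map = rng.random((288, 352))
    saccade_pts = list(zip(rng.integers(0, 288, 50), rng.integers(0, 352, 50)))
    random_pts = list(zip(rng.integers(0, 288, 50), rng.integers(0, 352, 50)))

    pos = sample_saliency(saliency_map, saccade_pts)
    neg = sample_saliency(saliency_map, random_pts)
    print("KL distance:", kl_distance(pos, neg))
    print("AUC:", roc_auc(pos, neg))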



Author information

Corresponding author

Correspondence to Tamer Shanableh.


About this article

Cite this article

Shanableh, T. Saliency detection in MPEG and HEVC video using intra-frame and inter-frame distances. SIViP 10, 703–709 (2016). https://doi.org/10.1007/s11760-015-0798-9

