
Visual attention prediction for images with leading line structure

Abstract

Researchers have proposed a wide variety of visual attention models, ranging from models that use local, low-level image features to recent approaches that incorporate semantic information. However, most models do not account for the visual attention evident in images with certain global structures. We focus specifically on “leading line” structures, in which explicit or implicit lines converge at a point. In this study, we conducted experiments to investigate visual attention in images with leading line structures and propose new models that combine the low-level feature of center-surround differences of visual stimuli, the semantic feature of center bias, and the structural feature of leading lines. We also created a new dataset of 110 natural images containing leading lines, together with eye-tracking data from 16 subjects. Our evaluation experiments showed that our models outperform existing models on common saliency-map evaluation metrics, underscoring the importance of leading lines in modeling visual attention.
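The fusion described in the abstract can be sketched as a weighted combination of normalized feature maps. This is only an illustrative sketch, not the paper's actual model: the weights, the Gaussian center-bias formulation, and the leading-line map passed in by the caller are all hypothetical placeholders.

```python
import numpy as np

def center_bias(h, w, sigma=0.3):
    # Gaussian center-bias map, peaked at the image center
    # (a common way to model the central fixation bias).
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def combine_saliency(low_level, center, leading_line, weights=(0.4, 0.3, 0.3)):
    # Normalize each feature map to [0, 1], then take a weighted sum.
    # The weights here are arbitrary placeholders, not fitted values.
    normalized = []
    for m in (low_level, center, leading_line):
        m = m.astype(float)
        rng = m.max() - m.min()
        normalized.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    s = sum(w * m for w, m in zip(weights, normalized))
    return s / s.max() if s.max() > 0 else s
```

In practice the low-level map would come from a center-surround saliency model (e.g., Itti-Koch) and the leading-line map from a line-convergence or vanishing-point detector; here they are simply arrays supplied by the caller.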



Funding

This study was funded by JSPS Grants-in-Aid for Scientific Research (Grant Nos. 17H00738 and 16K12459).

Author information


Corresponding author

Correspondence to Xiaoyang Mao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Mochizuki, I., Toyoura, M. & Mao, X. Visual attention prediction for images with leading line structure. Vis Comput 34, 1031–1041 (2018). https://doi.org/10.1007/s00371-018-1518-6


Keywords

  • Visual attention model
  • Saliency map
  • Structure information
  • Leading lines