
Scanpath estimation based on foveated image saliency

  • Short Communication
  • Published in: Cognitive Processing

Abstract

The estimation of gaze shifts has been an important research area in saliency modeling. Gaze movement is a dynamic process, yet existing methods estimate scanpaths from a single static saliency map, which yields unsatisfactory accuracy. A bio-inspired method for gaze shift prediction is therefore proposed. The model takes the effect of foveation into account, which plays an important role in the search for dynamic salient regions. The saccadic bias of gaze shifts and the inhibition-of-return mechanism of short-term memory are also considered. From the probability map derived from these factors, candidates for the next fixation are generated at random, and the final scanpath is built point by point. Evaluation with objective measures shows that the method outperforms many existing models on several datasets.
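The abstract describes the generative procedure only at a high level. As one plausible reading, the sketch below combines a precomputed saliency map with three reweighting terms centered on the current fixation: a Gaussian acuity falloff standing in for foveation (the title suggests the paper instead recomputes saliency on a foveated image, so this falloff is a simplification), a Gaussian bias toward a preferred saccade amplitude, and a decaying inhibition-of-return memory. The next fixation is drawn at random from the normalized product, and the scanpath is accumulated point by point. All function names, the Gaussian forms, and the parameter values (fovea_sigma, preferred_amplitude, ior_decay, and so on) are illustrative assumptions, not the authors' implementation.

import numpy as np

def distance_map(shape, center):
    # Euclidean distance (in pixels) of every pixel from center = (row, col)
    rows, cols = np.indices(shape)
    return np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)

def simulate_scanpath(saliency, start, n_fixations=10,
                      fovea_sigma=60.0, preferred_amplitude=120.0,
                      amplitude_sigma=80.0, ior_sigma=40.0,
                      ior_decay=0.7, rng=None):
    # Sample a scanpath from a static bottom-up saliency map.
    # Assumed combination rule (illustrative, not the paper's exact equations):
    #   p(next) proportional to saliency * foveation falloff
    #                         * saccade amplitude bias * (1 - IoR memory)
    rng = rng or np.random.default_rng()
    ior = np.zeros_like(saliency, dtype=float)  # short-term memory of visited regions
    fix = tuple(start)
    path = [fix]
    for _ in range(n_fixations - 1):
        d = distance_map(saliency.shape, fix)
        fovea = np.exp(-d ** 2 / (2.0 * fovea_sigma ** 2))   # acuity falls off with eccentricity
        bias = np.exp(-(d - preferred_amplitude) ** 2 /
                      (2.0 * amplitude_sigma ** 2))           # preference for a typical saccade length
        prob = np.clip(saliency * fovea * bias * (1.0 - ior), 0.0, None) + 1e-12
        prob /= prob.sum()
        idx = rng.choice(prob.size, p=prob.ravel())           # stochastic selection of the next fixation
        fix = tuple(int(v) for v in np.unravel_index(idx, prob.shape))
        # inhibition of return: decay old traces, then stamp the new fixation
        d_new = distance_map(saliency.shape, fix)
        ior = np.clip(ior_decay * ior + np.exp(-d_new ** 2 / (2.0 * ior_sigma ** 2)), 0.0, 1.0)
        path.append(fix)
    return path

Given a bottom-up saliency map as the saliency array and the image center as start, repeated calls produce stochastic scanpaths that can then be scored against recorded human fixation sequences with the objective measures mentioned in the abstract.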

Acknowledgments

Funding was provided by the National Natural Science Foundation of China (Grant No. 61572133).

Author information

Corresponding author

Correspondence to Bin Wang.

Additional information

Handling editor: Anna Belardinelli (University of Tübingen); Reviewers: Christian Balkenius (Lund University), Matei Mancas (University of Mons), Olivier Le Meur (University of Rennes).

About this article

Cite this article

Wang, Y., Wang, B., Wu, X. et al. Scanpath estimation based on foveated image saliency. Cogn Process 18, 87–95 (2017). https://doi.org/10.1007/s10339-016-0781-6

