
Feed and fly control of visual scanpaths for foveation image processing

  • Giuseppe Boccignone
  • Mario Ferraro

Abstract

Foveation-based processing and communication systems can achieve a more efficient representation of images and videos by removing or reducing redundant visual information, provided that the sequence of foveation points, the visual scanpath, can be determined. However, the great majority of foveation models neglect the “noisy” variability of visual exploration exhibited by different observers viewing the same scene, or even by the same observer across different trials. Here, a model for the generation and control of scanpaths that accounts for this variability is presented. In the model, the sequence of fixations and gaze shifts is governed by a saliency-based information-foraging mechanism, implemented as a dynamical system that switches between two states, “feed” and “fly.” Simulation results are compared with experimental data derived from publicly available datasets.
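
As a rough illustration of the two-state mechanism described above, the sketch below simulates a scanpath on a precomputed saliency map: the “feed” state is modelled as a small local Gaussian walk, the “fly” state as a heavy-tailed (Cauchy, a special case of the α-stable family) relocation, and the switch between states is driven by the saliency at the current gaze position as a stand-in for the information-foraging rule. This is a minimal sketch under those assumptions, not the authors’ implementation; all function names, parameters, and the specific switching probability are illustrative.

```python
# Minimal sketch (not the paper's implementation) of a two-state
# "feed"/"fly" scanpath generator driven by a saliency map.
# The switching rule and all parameters below are illustrative assumptions.
import numpy as np

def simulate_scanpath(saliency, n_steps=50, feed_sigma=5.0,
                      fly_scale=40.0, rng=None):
    """Generate a sequence of gaze positions on a 2-D saliency map.

    saliency   : 2-D array with values in [0, 1]; higher values attract gaze.
    feed_sigma : std-dev (pixels) of the small local steps in the "feed" state.
    fly_scale  : scale (pixels) of heavy-tailed relocations in the "fly" state.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = saliency.shape
    # Start at the saliency maximum (an arbitrary choice for this sketch).
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    path = [(float(x), float(y))]
    for _ in range(n_steps):
        s = saliency[int(y), int(x)]
        # Low local saliency -> patch looks "depleted" -> more likely to fly.
        if rng.random() < 1.0 - s:
            step = fly_scale * rng.standard_cauchy(2)   # "fly": long, heavy-tailed jump
        else:
            step = rng.normal(0.0, feed_sigma, 2)       # "feed": local exploration
        x = float(np.clip(x + step[0], 0, w - 1))
        y = float(np.clip(y + step[1], 0, h - 1))
        path.append((x, y))
    return np.asarray(path)

# Example usage with a toy saliency map (a single Gaussian blob):
yy, xx = np.mgrid[0:240, 0:320]
toy_saliency = np.exp(-((xx - 160) ** 2 + (yy - 120) ** 2) / (2 * 40.0 ** 2))
print(simulate_scanpath(toy_saliency, n_steps=10)[:3])
```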

Keywords

Eye movements · Random walk · Visual attention · Image encoding

Copyright information

© Institut Mines-Télécom and Springer-Verlag 2012

Authors and Affiliations

  1. Dipartimento di Scienze dell’Informazione, Università di Milano, Milano, Italy
  2. Dipartimento di Fisica, Università di Torino, Torino, Italy
