Linear vs. Nonlinear Feature Combination for Saliency Computation: A Comparison with Human Vision
At the heart of a computational model of visual attention, an interest or saliency map is derived from the input image through a process that involves several data combination steps. Since several combination strategies are possible and the choice of method substantially influences the final saliency map, a performance comparison is needed to guide model improvement. This paper presents work in which model performance is measured by comparing computed saliency maps with human eye fixations. Four combination methods are compared in experiments in which 20 observers viewed 40 images. Similarity is evaluated qualitatively by visual inspection and quantitatively by means of a similarity score. With similarity scores up to 100% higher, non-linear combinations outperform linear methods. The comparison with human vision thus shows the superiority of non-linear over linear combination schemes and argues for their preferred use in computational models.
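The contrast between the two scheme families can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact method: the linear scheme averages normalized feature maps, while the non-linear scheme weights each map by a peak-to-mean contrast term in the spirit of Itti and Koch's map-promotion normalization. The function names, the `[0, 1]` normalization, and the use of the global mean as a stand-in for the mean of local maxima are all assumptions for illustration.

```python
import numpy as np

def normalize(fmap):
    # Rescale a feature map to [0, 1]; flat maps contribute nothing.
    fmin, fmax = fmap.min(), fmap.max()
    if fmax - fmin < 1e-12:
        return np.zeros_like(fmap, dtype=float)
    return (fmap - fmin) / (fmax - fmin)

def combine_linear(feature_maps):
    """Linear scheme: plain average of the normalized feature maps."""
    return np.mean([normalize(f) for f in feature_maps], axis=0)

def peak_weight(fmap):
    # Non-linear promotion weight: maps with one dominant peak get a
    # large weight, maps with many comparable peaks get a small one.
    # (The global mean here stands in for the mean of local maxima.)
    f = normalize(fmap)
    return (f.max() - f.mean()) ** 2

def combine_nonlinear(feature_maps):
    """Non-linear scheme: weight each map by its peak-to-mean contrast."""
    maps = [normalize(f) for f in feature_maps]
    return np.sum([peak_weight(f) * f for f in maps], axis=0)
```

With two toy maps, one containing a single isolated peak and one containing many equally bright locations, the linear average rates the unique peak no higher than the repetitive texture, whereas the non-linear scheme suppresses the busy map and lets the isolated peak dominate the saliency map.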
Keywords: Visual Attention, Saliency Computation, Human Visual Attention, Linear Combination Scheme, Local Orientation Feature