Optimal Cue Combination for Saliency Computation: A Comparison with Human Vision

  • Alexandre Bur
  • Heinz Hügli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4528)

Abstract

A computer model of visual attention derives an interest, or saliency, map from an input image in a process that encompasses several data combination steps. Several combination strategies are possible, but not all perform equally well. This paper compares the main cue combination strategies by measuring the performance of the considered models against human eye movements. Six combination methods are compared in experiments in which 20 observers viewed 40 images. Similarity is evaluated qualitatively by visual inspection and quantitatively by means of a similarity score. The study provides insight into the map combination mechanisms and proposes an overall optimal combination strategy for a computer saliency model.
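The pipeline the abstract describes, combining several normalized cue maps into one saliency map and then scoring that map against human fixations, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the peak normalization, the uniform linear weighting, and the fixation-based score (mean saliency at fixated locations relative to the map's overall mean) are assumptions chosen as one plausible instance of each step, and all function names are hypothetical.

```python
import numpy as np

def peak_normalize(cue_map):
    # Scale a cue map to [0, 1]; one of several normalization
    # schemes discussed in the saliency literature (assumption).
    shifted = cue_map - cue_map.min()
    peak = shifted.max()
    return shifted / peak if peak > 0 else shifted

def combine_cues(cue_maps, weights=None):
    # Linear combination of normalized cue maps into a single
    # saliency map; uniform weights by default (assumption).
    maps = [peak_normalize(c) for c in cue_maps]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    return sum(w * m for w, m in zip(weights, maps))

def similarity_score(saliency, fixations):
    # Mean saliency at human fixation points, relative to the
    # map's mean: positive values mean fixated locations are
    # more salient than average (hypothetical score definition).
    mean_s = saliency.mean()
    fix_s = np.mean([saliency[y, x] for (x, y) in fixations])
    return (fix_s - mean_s) / mean_s
```

Under this sketch, comparing combination strategies amounts to swapping `combine_cues` (or its normalization step) and ranking the resulting maps by `similarity_score` over the recorded fixations.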



Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Alexandre Bur (1)
  • Heinz Hügli (1)

  1. Institute of Microtechnology, University of Neuchâtel, Rue A.-L. Breguet 2, CH-2000 Neuchâtel, Switzerland
