Quaternion-Based Spectral Saliency Detection for Eye Fixation Prediction

  • Boris Schauerte
  • Rainer Stiefelhagen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7573)

Abstract

In recent years, several authors have reported that spectral saliency detection methods provide state-of-the-art performance in predicting human gaze in images (see, e.g., [1–3]). We systematically integrate and evaluate quaternion DCT- and FFT-based spectral saliency detection [3,4], weighted quaternion color space components [5], and the use of multiple resolutions [1]. Furthermore, we propose the use of the eigenaxes and eigenangles for spectral saliency models that are based on the quaternion Fourier transform. We demonstrate the outstanding performance on the Bruce-Tsotsos (Toronto), Judd (MIT), and Kootstra-Schomacker eye-tracking data sets.
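The core spectral mechanism the abstract builds on can be illustrated in a single grayscale channel: discard the amplitude spectrum, keep only the phase, invert the transform, square, and smooth (the phase-spectrum idea of [4,12]; the quaternion variants evaluated in the paper apply the same idea to all color channels jointly via a quaternion FFT or DCT). Below is a minimal single-channel NumPy sketch, not the authors' implementation; the function names and the smoothing parameter are illustrative.

```python
import numpy as np


def _gaussian_lowpass(x, sigma):
    """Blur x with a Gaussian of std. dev. sigma (pixels) via the FFT.

    The frequency response of a spatial Gaussian with std. dev. sigma
    is exp(-2 * (pi * sigma * f)^2), applied over both frequency axes.
    """
    fy = np.fft.fftfreq(x.shape[0])[:, None]
    fx = np.fft.fftfreq(x.shape[1])[None, :]
    g = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(x) * g))


def pft_saliency(image, sigma=3.0):
    """Phase-only Fourier transform saliency for a 2-D grayscale image."""
    spectrum = np.fft.fft2(image.astype(float))
    phase_only = np.exp(1j * np.angle(spectrum))   # unit amplitude, keep phase
    saliency = np.abs(np.fft.ifft2(phase_only)) ** 2
    saliency = _gaussian_lowpass(saliency, sigma)  # suppress pixel-level noise
    return saliency / (saliency.max() + 1e-12)     # normalize to [0, 1]
```

On a uniform image containing a single small bright patch, the resulting map peaks on the patch: the flat background carries most of the (discarded) amplitude, while the patch dominates the phase structure.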

Keywords

Color Space · Area Under Curve · Saliency Detection · Visual Saliency · Saliency Model

References

  1. Peters, R., Itti, L.: The role of Fourier phase information in predicting saliency. Journal of Vision 8, 879 (2008)
  2. Hou, X., Harel, J., Koch, C.: Image signature: Highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 34, 194–201 (2012)
  3. Schauerte, B., Stiefelhagen, R.: Predicting human gaze using quaternion DCT image signature saliency and face detection. In: Proc. Workshop on the Applications of Computer Vision (2012)
  4. Guo, C.L., Ma, Q., Zhang, L.M.: Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform. In: Proc. Int. Conf. Comp. Vis. Pat. Rec. (2008)
  5. Bian, P., Zhang, L.: Biological plausibility of spectral domain approach for spatiotemporal visual saliency. In: Advances in Neural Information Processing Systems (2009)
  6. Frintrop, S., Rome, E., Christensen, H.I.: Computational visual attention systems and their cognitive foundation: A survey. ACM Trans. Applied Perception 7(1), 6:1–6:39 (2010)
  7. Meger, D., Forssén, P.E., Lai, K., et al.: Curious George: An attentive semantic robot. Robotics and Autonomous Systems 56(6), 503–511 (2008)
  8. Schauerte, B., Kühn, B., Kroschel, K., Stiefelhagen, R.: Multimodal saliency-based attention for object-based scene analysis. In: Proc. Int. Conf. Intell. Robots Syst. (2011)
  9. Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. In: Proc. Int. Conf. Comp. Vis. Pat. Rec. (2010)
  10. Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. (2012)
  11. Ma, Z., Qing, L., Miao, J., Chen, X.: Advertisement evaluation using visual saliency based on foveated image. In: Proc. Int. Conf. Multimedia and Expo (2009)
  12. Hou, X., Zhang, L.: Saliency detection: A spectral residual approach. In: Proc. Int. Conf. Comp. Vis. Pat. Rec. (2007)
  13. Guo, C., Zhang, L.: A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE Trans. Image Process. 19, 185–198 (2010)
  14. Achanta, R., Süsstrunk, S.: Saliency detection using maximum symmetric surround. In: Proc. Int. Conf. Image Process. (2010)
  15. Li, J., Levine, M.D., An, X., He, H.: Saliency detection based on frequency and spatial domain analysis. In: Proc. British Mach. Vis. Conf. (2011)
  16. Oppenheim, A., Lim, J.: The importance of phase in signals. Proc. IEEE 69, 529–541 (1981)
  17. Huang, T., Burnett, J., Deczky, A.: The importance of phase in image processing filters. IEEE Trans. Acoust., Speech, Signal Process. 23, 529–542 (1975)
  18. Bruce, N., Tsotsos, J.: Saliency, attention, and visual search: An information theoretic approach. Journal of Vision 9, 1–24 (2009)
  19. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: Proc. Int. Conf. Comp. Vis. (2009)
  20. Kootstra, G., Nederveen, A., de Boer, B.: Paying attention to symmetry. In: Proc. British Mach. Vis. Conf. (2008)
  21. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cog. Psy. 12, 97–136 (1980)
  22. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)
  23. Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision 8 (2008)
  24. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems (2007)
  25. Ell, T.: Quaternion-Fourier transforms for analysis of two-dimensional linear time-invariant partial differential systems. In: Int. Conf. Decision and Control (1993)
  26. Sangwine, S.J.: Fourier transforms of colour images using quaternion or hypercomplex numbers. Electronics Letters 32, 1979–1980 (1996)
  27. Sangwine, S.J., Ell, T.A.: Colour image filters based on hypercomplex convolution. In: IEEE Proc. Vision, Image and Signal Processing, vol. 147, pp. 89–93 (2000)
  28. Walther, D., Koch, C.: Modeling attention to salient proto-objects. Neural Networks 19, 1395–1407 (2006)
  29. Gao, D., Mahadevan, V., Vasconcelos, N.: On the plausibility of the discriminant center-surround hypothesis for visual saliency. Journal of Vision 8, 1–18 (2008)
  30. Ell, T., Sangwine, S.: Hypercomplex Fourier transforms of color images. IEEE Trans. Image Process. 16, 22–35 (2007)
  31. Hamilton, W.R.: Elements of Quaternions. University of Dublin Press (1866)
  32. Zhao, Q., Koch, C.: Learning a saliency map using fixated locations in natural scenes. Journal of Vision 11, 1–15 (2011)
  33. Judd, T., Durand, F., Torralba, A.: Fixations on low-resolution images. Journal of Vision 11 (2011)
  34. Olmos, A., Kingdom, F.A.A.: A biologically inspired algorithm for the recovery of shading and reflectance images. Perception 33, 1463–1473 (2004)
  35. Tatler, B., Baddeley, R., Gilchrist, I.: Visual correlates of fixation selection: Effects of scale and time. Vision Research 45, 643–659 (2005)
  36. Geusebroek, J.M., Smeulders, A., van de Weijer, J.: Fast anisotropic Gauss filtering. IEEE Trans. Image Process. 12, 938–943 (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Boris Schauerte (1)
  • Rainer Stiefelhagen (1)
  1. Institute for Anthropomatics, Karlsruhe Institute of Technology, Karlsruhe, Germany