
Contributions of Different Spatial Modulations of Brightness Gradients to the Control of Visual Attention


Abstract

The targets of visual selective attention are the most informative parts of images. Data accumulated in recent years point to the importance of second-order features – spatial inhomogeneities in brightness gradients – in solving a variety of visual tasks. We addressed the question of how second-order features are prioritized in attracting attention. All information in the initial object images other than modulations of a single modality was removed, so that each object was represented in three versions: images consisting of modulations of contrast, of orientation, and of spatial frequency. In experiment 1, images formed from modulations of different modalities but with the same spatial frequency of modulation competed for attention; in experiment 2, modulations of the same modality but of different spatial frequencies were used. The direction of the subject’s attention was evaluated by recording eye movements and assessing the first saccade after stimulus onset. In the object orientation discrimination task with pairs of sequential stimuli, images formed from contrast and orientation modulations of lower spatial frequency had a significant advantage in competing for attention. This result is explained by the notion that image areas with greater amplitude of brightness gradient modulation win the competition for attention.
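
As a rough illustration of the first-saccade measure described above, the sketch below detects the first saccade after stimulus onset in a gaze trace and classifies which of two laterally presented stimuli it lands on. This is not the authors' analysis code: the sampling rate, velocity threshold, region radius, and the function name first_saccade_target are assumptions made for illustration only.

```python
# Illustrative sketch (assumed parameters, not the article's actual analysis):
# detect the first saccade after stimulus onset and report which of two
# competing stimulus regions it lands in.

import numpy as np

SAMPLE_RATE_HZ = 500        # assumed eye-tracker sampling rate
VELOCITY_THRESHOLD = 30.0   # deg/s, a commonly used saccade-detection criterion


def first_saccade_target(x_deg, y_deg, left_center, right_center, radius_deg=4.0):
    """Return 'left', 'right', or None for the first saccade's landing region.

    x_deg, y_deg: gaze coordinates in degrees, sampled from stimulus onset.
    left_center, right_center: (x, y) centers of the two competing images.
    """
    x_deg = np.asarray(x_deg, dtype=float)
    y_deg = np.asarray(y_deg, dtype=float)

    # Gaze velocity from finite differences of position over time.
    dt = 1.0 / SAMPLE_RATE_HZ
    speed = np.hypot(np.gradient(x_deg, dt), np.gradient(y_deg, dt))

    moving = speed > VELOCITY_THRESHOLD
    if not moving.any():
        return None                      # no saccade detected in this trial

    onset = np.argmax(moving)            # first sample above the velocity threshold
    # Saccade offset: first sample after onset where speed drops back below threshold.
    below = np.where(~moving[onset:])[0]
    offset = onset + (below[0] if below.size else len(speed) - 1 - onset)

    landing = np.array([x_deg[offset], y_deg[offset]])
    for label, center in (("left", left_center), ("right", right_center)):
        if np.linalg.norm(landing - np.asarray(center)) <= radius_deg:
            return label
    return None
```

Tallying, across trials, how often the first saccade lands on each type of modulation image would yield the kind of competition measure reported in the abstract.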



Author information

Corresponding author

Correspondence to V. V. Babenko.

Additional information

Translated from Zhurnal Vysshei Nervnoi Deyatel’nosti imeni I. P. Pavlova, Vol. 70, No. 2, pp. 182–192, March–April, 2020.


Cite this article

Babenko, V.V., Yavna, D.V. & Rodionov, E.G. Contributions of Different Spatial Modulations of Brightness Gradients to the Control of Visual Attention. Neurosci Behav Physi 50, 1035–1042 (2020). https://doi.org/10.1007/s11055-020-00994-z
