The role of location in visual feature binding

  • Oscar Kovacs
  • Irina M. Harris

Abstract

Location appears to play a vital role in binding discretely processed visual features into coherent objects. Consequently, it has been proposed that objects are represented for cognition by their spatiotemporal location, with other visual features attached to this location index. On this theory, the visual features of an object are connected only via their shared location; direct binding cannot occur. Despite supporting evidence, some argue that direct binding can take over, depending on task demands and when familiar objects are represented. The current study was developed to evaluate these claims, using a brief memory task to test for contingencies between features under different circumstances. Participants were shown a sequence of three items in different colours and locations, and were then asked for the colour and/or location of one of them. The stimuli were either abstract shapes or familiar objects. Results indicated that location is necessary for binding regardless of stimulus type and task demands, supporting the proposed structure. A follow-up experiment assessed an alternative explanation for the apparent importance of location in binding: eye movements may automatically capture location information, making it impossible to ignore and producing a contingency that does not reflect the underlying cognitive processes. Participants were required to maintain fixation on half of the trials, with compliance verified by an eye tracker. Results indicated that the importance of location in binding cannot be attributed to eye movements. Overall, the findings of this study support the claim that location is essential for visual feature binding, due to the structure of object representations.
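To make the task concrete, the following is a minimal PsychoPy sketch of one trial of the kind described in the abstract: three coloured items appear sequentially at different locations, and one item is then probed for its colour, its location, or both. All stimulus parameters (colours, positions, sizes, timings, probe wording) are hypothetical illustrations, not the authors' actual materials.

```python
# Hypothetical sketch of a single trial: three coloured squares appear
# one after another at different locations, then one item is probed.
# Colours, positions, sizes, and timings are illustrative only.
import random
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color='grey', units='pix')

colours = random.sample(['red', 'green', 'blue', 'yellow'], 3)
positions = random.sample([(-200, 0), (0, 0), (200, 0), (0, 200)], 3)

# Present the three items sequentially
for colour, pos in zip(colours, positions):
    item = visual.Rect(win, width=80, height=80,
                       fillColor=colour, lineColor=None, pos=pos)
    item.draw()
    win.flip()
    core.wait(0.5)      # item on screen
    win.flip()          # blank inter-stimulus interval
    core.wait(0.3)

# Probe one item for colour, location, or both
probed = random.randrange(3)
probe_type = random.choice(['colour', 'location', 'both'])
prompt = visual.TextStim(
    win, text=f'Item {probed + 1}: report its {probe_type}', color='white')
prompt.draw()
win.flip()
response = event.waitKeys()   # collect a keypress (response scoring omitted)

win.close()
core.quit()
```

In the fixation-controlled follow-up described above, a comparable trial would simply add a central fixation point and an eye-tracker check that gaze stays within a fixation window during the stimulus sequence.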

Keywords

Feature binding · Location · Object representation

Notes

Acknowledgements

This research was supported by a Future Fellowship (FT0992123) from the Australian Research Council awarded to I.M. Harris. The authors declare no conflicts of interest.


Copyright information

© The Psychonomic Society, Inc. 2019

Authors and Affiliations

  1. School of Psychology, University of Sydney, Sydney, Australia
