News from the field

VISUAL COGNITION

Past determines present in number space

Cicchini, G. M., Anobile, G., & Burr, D. C. (2014). Compressive mapping of number to space reflects dynamic encoding mechanisms, not static logarithmic transform. Proceedings of the National Academy of Sciences, 111, 7867–7872. doi:10.1073/pnas.1402785111

This year looks like it could be a good one for history effects in visual cognition. Past perception is increasingly found to have a large effect on perception in the present. Such history dependence, which is well studied in many paradigms, has been infiltrating fields such as orientation discrimination (Fischer & Whitney, 2014), perception of ambiguous stimuli (Brascamp et al., 2008), and visual crowding (Kristjánsson et al., 2013), to name a few examples. These findings indicate that our representations of the visual world appear to be dynamic, strongly affected by the previous history of stimulation. Might we see Harry Helson’s classic book on adaptation-level theory (Helson, 1964) back in print soon? The time may very well be ripe for a revival of this far-too-often overlooked book.

The latest development involves a recent study published in Proceedings of the National Academy of Sciences by Cicchini, Anobile, and Burr (2014), who have reported an intriguing history effect in how observers map numbers onto space. Previous research has shown that human observers tend to have a mental number line on which numbers are mapped onto space, from the lowest numbers at the left to higher numbers on the right. As Cicchini et al. explained, human subjects tend to have a compressed representation of number that is initially logarithmic, but that becomes more linear with experience or schooling.

In the study by Cicchini et al., observers viewed briefly presented dot clouds containing varied numbers of dots. They were asked to judge the number of presented dots by making a mark on a number line presented below the dot clouds. Cicchini et al. found that observers’ responses were highly dependent on previous trials. A weighted average of the current and recent stimuli could explain the nonlinearity that had been attributed to a static logarithmic transform. In other words, all of the variation attributable to a logarithmic transform was explained by previous history, in the form of the numbers of dots presented on preceding trials. A relatively straightforward model involving Bayesian updating, which generated a linear weighted sum of the current stimulus and the preceding ones, accounted for all of the variation in responding. This result provides further evidence for the important role that previous history plays in how observers perceive visual stimuli in the present. –Á.K.
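The weighted-average account is easy to sketch. In the minimal simulation below, the weight on the preceding stimulus (0.3) and the stimulus range are arbitrary illustrative values, not estimates from Cicchini et al.; the point is only that mixing the current stimulus with the recent past pulls small numbers up and large numbers down, which reads as a compressive mapping:

```python
import random

def simulate_responses(n_trials=20000, w_prev=0.3, lo=1.0, hi=100.0, seed=1):
    """Each response is a weighted average of the current stimulus and
    the stimulus shown on the immediately preceding trial."""
    rng = random.Random(seed)
    stims = [rng.uniform(lo, hi) for _ in range(n_trials)]
    resps = [stims[0]]  # first trial has no history
    for i in range(1, n_trials):
        resps.append((1 - w_prev) * stims[i] + w_prev * stims[i - 1])
    return stims, resps

stims, resps = simulate_responses()
# Small numbers are pulled up and large numbers pulled down toward the
# average of the recent past, mimicking a compressive number line.
small = [r for s, r in zip(stims, resps) if s < 20]
large = [r for s, r in zip(stims, resps) if s > 80]
print(sum(small) / len(small), sum(large) / len(large))
```

Fitting such a model trial by trial, rather than assuming a fixed logarithmic transform, is what allowed the authors to attribute the compression to history.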

Additional references

Brascamp, J. W., Knapen, T. H. J., Kanai, R., Noest, A. J., van Ee, R., & van den Berg, A. V. (2008). Multi-timescale perceptual history resolves visual ambiguity. PLoS ONE, 3, e1497. doi:10.1371/journal.pone.0001497

Fischer, J., & Whitney, D. (2014). Serial dependence in visual perception. Nature Neuroscience, 17, 738–743. doi:10.1038/nn.3689

Helson, H. (1964). Adaptation-level theory: An experimental and systematic approach to behavior. New York, NY: Harper & Row.

Kristjánsson, Á., Heimisson, P. R., Róbertsson, G. F., & Whitney, D. (2013). Attentional priming releases crowding. Attention, Perception, & Psychophysics, 75, 1323–1329. doi:10.3758/s13414-013-0558-2

SEQUENTIAL EFFECTS

Why of course you can!

Fründ, I., Wichmann, F. A., & Macke, J. H. (2014). Quantifying the effect of intertrial dependence on perceptual decisions. Journal of Vision, 14(7), 9. doi:10.1167/14.7.9

When faced with two alternative responses, all observers are biased. That bias may be very small, but there is no reason to assume that it is exactly zero. Nonetheless, most researchers do at least implicitly make this assumption when deriving estimates of sensitivity from two-alternative forced choice results. This assumption is particularly hazardous because biases can fluctuate, and conventional analyses confound bias fluctuation with insensitivity.

Fründ et al. offer some new statistics with which to calculate the impact of bias fluctuation on binary responses. These new statistics are based on a model that ascribes any one trial’s bias to a stationary, linear combination of influences from the previous seven stimuli and the previous seven responses. A good indication of how important past trials can be is a comparison of how well this model can fit sequentially collected data when the sequence is and is not scrambled. The full model’s fit also compares favorably to that of a simpler model in which the influence of past trials is arbitrarily set to zero, even when taking into account the advantage of extra parameters.
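The history model can be sketched generatively. In the sketch below, the sensitivity and all kernel weights are made-up illustrative values, not Fründ et al.'s estimates; stimuli and responses are coded as ±1, and the bias (a fixed linear combination of the previous seven stimuli and responses) enters a logistic decision rule:

```python
import math
import random

def simulate_observer(n_trials=5000, sens=1.5,
                      w_stim=(0.3, 0.15, 0.05, 0.0, 0.0, 0.0, 0.0),
                      w_resp=(-0.4, -0.2, -0.1, 0.0, 0.0, 0.0, 0.0),
                      seed=0):
    """Generative sketch: each trial's bias is a stationary linear
    combination of the previous seven stimuli and seven responses."""
    rng = random.Random(seed)
    stims, resps = [], []
    for _ in range(n_trials):
        s = rng.choice([-1, 1])  # signed stimulus
        bias = sum(w * stims[-1 - k] for k, w in enumerate(w_stim)
                   if k < len(stims))
        bias += sum(w * resps[-1 - k] for k, w in enumerate(w_resp)
                    if k < len(resps))
        p_right = 1 / (1 + math.exp(-(sens * s + bias)))  # logistic link
        resps.append(1 if rng.random() < p_right else -1)
        stims.append(s)
    return stims, resps

stims, resps = simulate_observer()
# Negative response weights produce a tendency to alternate responses.
rep = sum(r1 == r0 for r0, r1 in zip(resps, resps[1:])) / (len(resps) - 1)
print("repetition rate:", rep)
```

Scrambling the trial order of data generated this way destroys the sequential structure, which is exactly the comparison the authors use to gauge how much the history terms matter.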

Revisiting the data from four experiments, the authors found that some observers were more strongly influenced by the past than others. On average, the sensitivities inferred from these data were at least 3.0% higher when bias fluctuations were not allowed to masquerade as internal noise. A more complicated model of intertrial dependence (e.g., one including interactions between previous stimuli and responses) might push that number even higher.

Averaging across more than 180,000 trials, the authors found very little impact of previous stimuli. More influential were the responses selected on the previous two or three trials. On difficult trials, most observers displayed a tendency to select whatever they had not selected before, but some observers displayed the opposite predilection. For one observer, the authors’ analysis of intertrial dependencies suggested an at least implicit belief that you can repeat the past: Past responses were actually a better predictor of the current response than was the current stimulus. –J.A.S.

AUDITORY PROCESSING

Gamers’ neural plasticity

Whitton, J. P., Hancock, K. E., & Polley, D. B. (2014). Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise. Proceedings of the National Academy of Sciences, 111, E2606–E2615. doi:10.1073/pnas.1322184111

It is known that sensory discrimination abilities can be improved with practice, but the practice benefits are usually restricted to the trained stimulus dimensions. At the same time, there are empirical reasons to believe that sensory learning reflects the coordinated activation of sensory brain areas and neuromodulatory control nuclei. In principle, these learning systems are engaged by tasks that require the continuous interplay of sensory cues, dynamically updated motor action programs, and neuromodulatory feedback.

In their investigation, Whitton and collaborators showed that it is possible to transfer the discrimination of simple and controlled sounds to “real-world” complex sounds. Instead of using traditional perceptual-learning paradigms, they capitalized on recent research findings revealing that action videogame training can impart a broader spectrum of benefits.

Twenty participants were randomly assigned to train on an auditory foraging task for 1 month (30 min per day for 5 days per week) or to be passively exposed to the training stimuli over the same time period. The participants controlled the movements of an avatar in a 2-D virtual arena using a game pad in the context of a custom audio game, and used audio feedback to guide their avatar to a location associated with the lowest sound level. A broadband masker was played continuously as a distractor. Participants received no verbal instructions about the goals of the game and simply learned to forage for rewards (points) through trial and error.

Participants learned to modulate their angular search vectors and target approach velocities on the basis of real-time changes in the level of a weak tone embedded in broadband noise. This capacity to extract a weak tone from noise generalized to an improved ability to comprehend spoken sentences in speech babble noise. The transfer capacity was measured with tone-in-noise detection thresholds, using both tonal and speech stimuli.

Although, in other work, clinical populations have received little benefit from conventional sensory rehabilitation strategies, the present investigation offers new therapeutic options via immersive computerized games. For one thing, the neural and perceptual salience of degraded sensory stimuli can be improved. What is more, Whitton et al. demonstrated that mice, too, gain benefits from a foraging game: improved decoding of low-intensity sounds at the training frequency and enhanced resistance to interference from background masking noise. –S.G.

THE ATTENTIONAL BLINK

The attentional blink affects conscious perception discretely

Asplund, C. L., Fougnie, D., Zughni, S., Martin, J. W., & Marois, R. (2014). The attentional blink reveals the probabilistic nature of discrete conscious perception. Psychological Science, 25, 824–831. doi:10.1177/0956797613513810

An active debate surrounds whether conscious perception is discrete or graded. If conscious perception is fundamentally discrete, there is a dichotomy in the ways that objects can be consciously seen: We either see them or not. On the other hand, if conscious perception can be graded, then objects may be experienced in intermediate states of awareness, even when these objects are physically present in the stimulus. It has been extremely difficult to distinguish clearly between these two possibilities on an empirical basis.

Asplund, Fougnie, Zughni, Martin, and Marois (2014) have shed new light on this debate by using a mixture-modeling analysis that was designed to separate the “knowledge-based responses” and “guess-based responses” in a subject’s perceptual report of a continuous feature dimension of the target, such as its color. The knowledge-based responses would form a normally shaped distribution congregating around the true target-color value, and the SD of this distribution would characterize the “precision” of this knowledge. The guess-based responses, on the other hand, would be evenly distributed in the circular featural space typically used. By independently estimating these two distributions from the subject’s distribution of response errors, the authors could determine the “proportion of guess-based responses” (p) as well as the “precision of the knowledge responses” (SD).
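A bare-bones version of such a mixture fit is shown below. For simplicity the sketch ignores circular wrapping (a plain Gaussian stands in for the wrapped normal or von Mises distributions typically used), recovers the parameters by crude grid search rather than the authors' actual fitting procedure, and the simulated guess rate and precision are arbitrary:

```python
import math
import random

def neg_log_lik(errors, p_guess, sd):
    """Negative log-likelihood of report errors (degrees, -180..180)
    under a uniform-guess + Gaussian-knowledge mixture."""
    nll = 0.0
    for e in errors:
        gauss = math.exp(-0.5 * (e / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
        nll -= math.log(p_guess / 360.0 + (1 - p_guess) * gauss)
    return nll

# Simulate reports: 30% uniform guesses, knowledge responses with SD = 20 deg.
rng = random.Random(0)
errors = [rng.uniform(-180, 180) if rng.random() < 0.3 else rng.gauss(0, 20)
          for _ in range(4000)]

# Crude grid search over (p_guess, sd) for the maximum-likelihood estimate.
best = min(((neg_log_lik(errors, p, s), p, s)
            for p in [i / 20 for i in range(1, 20)]
            for s in range(5, 60, 5)), key=lambda t: t[0])
print("estimated p_guess, sd:", best[1], best[2])
```

The key property exploited by Asplund et al. is that p and SD are separately identifiable: a manipulation can raise the guess rate while leaving the precision of non-guess reports untouched.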

This creative combination of research question and method led them to important new insights. In Experiment 1, the observers tried to report the colors of two squares presented in an RSVP stream of circles. The perception of the second target was expected to be substantially impaired when it fell into the half-second period following the first target (i.e., the attentional blink). Here, the mixture-modeling analysis was applied to reveal exactly how perception was impaired. Asplund et al. found that, for these second targets, the probability of seeing them was substantially reduced, whereas the precision of the resulting “percepts” remained intact. Experiment 2 extended this finding to facial features.

These findings offer strong and clear support for the notion that conscious perception is discrete in the case of the attentional blink: You either see something as precisely as you would have without the blink, or you see nothing at all. In addition, this finding has important implications for the field of consciousness and perception, because it gives a simple and explicit operational definition to the “conscious state” and points to a convenient way to measure it (i.e., SD). Hopefully, this pioneering work will inspire future studies that systematically examine conscious perception in various other situations, through which a more global picture can be revealed. –L.H.

SPATIAL VISION

Updating the spatiotopic representation

Golomb, J. D., L’Heureux, Z. E., & Kanwisher, N. (2014). Feature-binding errors after eye movements and shifts of attention. Psychological Science, 25, 1067–1078. doi:10.1177/0956797614522068

When the eyes move, the positions of objects in the visual stimulus change on the retina but not in the world. Thus, the visual system needs to take the ever-changing retinotopic input and create a relatively stable spatiotopic view of the world. This remapping has been the subject of a substantial body of work over many years. Much of the interest has been in the dynamics of what happens just before, during, and immediately after a saccadic eye movement. Prior work has focused on where things appear to be at different times—as, for example, in Ross, Morrone, and Burr’s (1997) work showing that visual space seemed to be squeezed just before a saccade. In new work, Golomb, L’Heureux, and Kanwisher (2014) were more interested in what is seen than in where it appears to be located.

Consider the following situation. You are fixating at one point and are cued to make a saccade to another point. Either 50 or 500 ms after the saccade lands, four colored patches appear in a square array surrounding the new fixation point. Let’s call the patches A, B, C, and D. Your job is to report on the color of one of them using a color wheel so that the accuracy of your judgment can be measured in degrees around the hue circle. In the spatiotopic condition, you are asked to report on the color of the patch at a particular location in the world. The patches are arranged so that patch A is at that location now and is, thus, the correct color. Patch B is in the same retinotopic location where the cue had been before the saccade. That is, if the cue was below and to the right of fixation before the saccade, patch B is the patch that is now below and to the right of the new fixation. Thus, it would have been the right answer before the saccade but is not the right answer now. Patch C and patch B are the same distance from patch A, and patch D fills out the square array.

The interesting data in this study come from the errors, which come in two forms. You might swap the correct color for the color of another square, or you might get more or less the correct color but bias your setting toward the color of another square. At the brief delay of 50 ms, both types of error are seen. You were supposed to report color A, but sometimes you report color B, swapping in the color of the retinotopic patch. Patches C and D do not seem to have much influence on these errors. The blending errors are perhaps more interesting: Sometimes you report color A, but with a slight but real bias toward B. It is as if the old, fading, presaccadic map and the new, postsaccadic map are both active at the same time and are mixing together. By 500 ms, these effects are gone.
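The two error types can be made concrete with a toy generative model. In the sketch below, the swap rate, blend weight, report noise, and the two color values are illustrative choices, not estimates from Golomb et al.: on a fraction of trials the retinotopic color (B) is reported outright (a swap), and on the rest the report centers on the target color (A) but is pulled slightly toward B (a blend):

```python
import random

def simulate_reports(n=5000, p_swap=0.15, blend=0.1, sd=10.0,
                     color_a=0.0, color_b=90.0, seed=2):
    """Toy model of postsaccadic color reports on a hue circle (degrees):
    occasional wholesale swaps to B, otherwise A blended toward B."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n):
        if rng.random() < p_swap:
            mean = color_b                                  # swap error
        else:
            mean = (1 - blend) * color_a + blend * color_b  # blended report
        reports.append(rng.gauss(mean, sd))
    return reports

reports = simulate_reports()
# Non-swap trials cluster near A but are shifted toward B; the mean
# signed error relative to A reveals the blending bias.
non_swap = [r for r in reports if abs(r) < 45]
print("mean bias toward B:", sum(non_swap) / len(non_swap))
```

Separating the discrete swap component from the continuous blend component in this way is what lets the error distribution distinguish two maps coexisting from one map being misread.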

Suppose you are asked to report on the color of a retinotopically defined patch: What is the color of the patch up and to the left of fixation? In this case, there is no systematic interference from other patches. Only the developing spatiotopic representation shows traces of its presaccadic contents.

Interestingly, both swapping and blending errors can be produced without eye movements. If you are asked to shift attention from A to B just before making a color judgment about the attended item, you tend to make swapping errors but not blending errors. If, on the other hand, you are asked to divide attention between A and B and then to report on B after the patches have been removed, the color of A biases the assessment of B, as if the two colors were imperfectly separated during the division of attention.

These results seem to show that using the retinotopic input to update a spatiotopic representation of the world is not just a matter of changing spatial pointers. The featural contents from one representation can get incorrectly mixed with the contents of the other, at least for a few tens of milliseconds after a saccade. –J.M.W.

Additional references

Ross, J., Morrone, M. C., & Burr, D. C. (1997). Compression of visual space before saccades. Nature, 386, 598–601. doi:10.1038/386598a0

STATISTICAL LEARNING

Statistical learning in a natural environment

Jiang, Y. V., Won, B.-Y., Swallow, K. M., & Mussack, D. M. (2014). Spatial reference frame of attention in a large outdoor environment. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi:10.1037/a0036779

Many of the paradigms we use under highly controlled laboratory settings are potentially ecologically valid. If we take visual search as an example, we can easily recall real-life situations in which we have looked for a friend in a large crowd or searched for our keys on a cluttered table. However, only on rare occasions do we actually put this potential to a rigorous test. Jiang and her colleagues did exactly that.

They examined visual search for a real object—a coin—in a real, large, outdoor environment. The coin was laid on the ground at one of many possible locations, with all of the other objects and distractions that are naturally present on the ground (e.g., leaves, bugs, and dirt). The participants could move around freely until they found the coin, and then they had to indicate which of its sides was up. On different trials, the coin was laid down in different places. This allowed Jiang et al. to observe search behavior under real-life conditions while collecting data over many trials for each participant. Moreover, because they were particularly interested in statistical learning and the spatial reference frame, the coin was placed more often in one quadrant of the search area. In different experiments, this high-probability quadrant was fixed in environmental coordinates, egocentric coordinates, or both. Importantly, the participants were not given any information regarding the likelihood of the different quadrants. Jiang et al. found that search times were faster when the coin was in the more-frequent quadrant than when it was in the other quadrants, suggesting that—as in visual search with computerized displays—their participants could pick up these regularities and use them to expedite their search in a natural environment.

Interestingly, the participants did not report noticing the fact that the quadrants differed in their likelihoods, so they probably did not employ this as an explicit strategy. But when they were forced to choose the more-frequent quadrant, they were able to do so, suggesting that the learned probabilities were not strictly implicit. As for reference frames, the search was faster when the coin was in the more-frequent quadrant, whether this quadrant was fixed relative to the environment or relative to the participant’s facing direction. This finding suggests that in the natural environment people can learn spatial regularities in both environmental and egocentric reference frames. Additionally, this finding stands in contrast to previous findings showing that with computerized displays participants could not learn spatial probabilities when the more-likely region was fixed in environmental coordinates but varied in egocentric coordinates. Thus, Jiang et al.’s findings that statistical learning abilities are more flexible under real-life settings underscore the importance of testing the generality of conclusions based on highly controlled, often computerized, studies. –Y.Y.

ATTENTIONAL CAPTURE

Resisting distraction

Gaspar, J. M., & McDonald, J. J. (2014). Suppression of salient objects prevents distraction in visual search. Journal of Neuroscience, 34, 5658–5666. doi:10.1523/JNEUROSCI.4161-13.2014

The notion that salient objects in everyday life can capture attention in a purely bottom-up fashion has strong intuitive appeal. However, closer examination of this phenomenon in the laboratory has revealed interactions between top-down and bottom-up forms of attentional control that, in turn, have been difficult to disentangle. In some cases, findings that appear to reflect purely bottom-up control have been shown to instead reflect subtle forms of top-down control, whereas in other cases, findings that appear to reflect a lack of bottom-up control have been shown to instead reflect the occurrence and subsequent suppression of such effects by top-down control. However, although there is increasing evidence that observers can use top-down mechanisms to resist allocating attention to salient objects, the nature of these mechanisms is still poorly understood.

In an attempt to clarify the nature of these top-down mechanisms, Gaspar and McDonald used an additional-singleton paradigm in which both the to-be-attended target and the to-be-ignored distractor appeared as salient objects in a display of eight other, nonsalient objects. In one experiment, both the target and distractor were salient within the same feature dimension (color), and in another experiment, the target and distractor were salient in different feature dimensions (form and color, respectively). In both of these experiments, the distractor was more salient than the target. Gaspar and McDonald manipulated within- versus cross-dimension salience in order to determine whether the ability to resist the more salient distractor arises from a salience suppression mechanism or a dimension-weighting mechanism. In addition, Gaspar and McDonald measured two event-related potential components: the N2pc component and the PD component. The N2pc component was hypothesized to measure the selection of attended objects, whereas the PD component was hypothesized to measure suppression of unattended objects. According to the salience suppression account, observers may be able to resist the salient distractor by suppressing it in both the within- and cross-dimension conditions. Accordingly, the N2pc and PD components should be elicited in both conditions: The N2pc component should be elicited when observers select the target, and the PD component should be elicited when they suppress the distractor.

In contrast, according to the dimension-weighting account, observers may be able to resist the salient distractor by increasing the attention weights associated with the feature dimension of the target. In this case, the salience of the distractor is not represented, because it is effectively tuned out at the preattentive stage of processing. However, this strategy could only tune out the salience of the distractor in the cross-dimension condition, leaving the observer especially vulnerable to the distractor in the within-dimension condition. Consequently, only the N2pc component should be elicited in both conditions. The main results supported the salience suppression account, in that the PD component was observed in both conditions. In addition, the timing of this component suggested that the suppression did not operate directly on salience maps in the lateral intraparietal cortex, but rather on object representations in the inferior temporal cortex. These findings are therefore important because they help shed light on the nature of the top-down mechanisms that allow observers to resist salient distractors in the outside world. –B.S.G.

News from the field. Atten Percept Psychophys 76, 1505–1509 (2014). https://doi.org/10.3758/s13414-014-0740-1
