STATISTICAL REGULARITY

The power law is a 1/f assumption

Stumpf, M. P. H., & Porter, M. A. (2012). Critical truths about power laws. Science, 335, 665–666.

Levitin, D. J., Chordia, P., & Menon, V. (2012). Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proceedings of the National Academy of Sciences. doi:10.1073/pnas.1113828109

Rosenholtz, R., Huang, J., & Ehinger, K. A. (2012). Rethinking the role of top-down attention in vision: Effects attributable to a lossy representation in peripheral vision. Frontiers in Psychology, 3(13). doi:10.3389/fpsyg.2012.00013

Are the editors of Science content for authors to cast any aspersions they like? Stumpf and Porter write, “What genuinely new insights have been gained by having found a … power law? We believe that such insights are very rare.” It would be interesting to replace “power law” with “BOLD hotspot” and to see if the editors remained as phlegmatic.

Just by coincidence, this month sees the publication of two very interesting new power laws: one by Levitin et al. and one by Rosenholtz et al. Levitin et al. applied analytical methods developed for de-noising the frequency spectra of neural spike trains to note onsets in computer-generated renditions of classical (and other) compositions. (That is Scott Joplin, by the way; not Janis.) Between 0.01 and 1 Hz, the power spectra S(f) of these rhythms were well described by a power law of frequency f: S(f) ∝ f^−β, 0.4 < β < 1.2. Rosenholtz et al. applied Signal-Detection Theory to a small set of statistics summarising target and distractor images and found a power-law relationship between the discriminability (d′) of these statistics and the efficiency (items/ms) with which a target could be found amongst distractors in a conventional search task: ms/item ∝ d′^−1.
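For readers who want to see what estimating such an exponent involves, here is a minimal sketch in Python (not the authors' pipeline; Levitin et al.'s de-noising approach is considerably more sophisticated). The onset signal below is synthetic, and the sampling rate and tempo-drift parameters are arbitrary assumptions; only the 0.01–1 Hz fitting band is taken from the paper.

# Minimal sketch: fit S(f) ∝ f^−β to the power spectrum of a toy onset signal.
# The signal is synthetic; a real analysis would start from note onsets extracted
# from rendered scores and use more careful spectral estimation.
import numpy as np

rng = np.random.default_rng(0)

fs = 100.0                          # assumed sampling rate (samples/s)
t = np.arange(0, 600, 1 / fs)       # ten minutes of "rhythm"
drift = np.cumsum(rng.normal(0, 0.01, t.size))   # slow tempo drift (illustrative)
onsets = (np.sin(2 * np.pi * (1.5 * t + drift)) > 0.99).astype(float)

# Periodogram of the mean-removed onset train.
spectrum = np.abs(np.fft.rfft(onsets - onsets.mean())) ** 2
freqs = np.fft.rfftfreq(onsets.size, d=1 / fs)

# Straight-line fit on log-log axes over the 0.01–1 Hz band used in the paper.
band = (freqs >= 0.01) & (freqs <= 1.0)
slope, _ = np.polyfit(np.log10(freqs[band]),
                      np.log10(spectrum[band] + 1e-12),   # guard against zero bins
                      1)
beta = -slope
print(f"estimated spectral exponent beta ≈ {beta:.2f}")

Stumpf and Porter's point, of course, is that a tidy straight line on log-log axes is easy to produce and easy to over-interpret.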

Stumpf and Porter have two major problems with power laws: many lack statistical support, and many lack what they call a generative mechanism (i.e., a reason for the power law). Contributing to both of these problems is an underappreciated facet of the central limit theorem. Most readers should remember that the (suitably normalized) sum of many independent random variables will be normally distributed when those variables have finite variances. When they do not have finite variances, you can wind up with a power law: above some arbitrary value x₀, the probability p that the sum exceeds x will be p ∝ x^−β, x > x₀. Thus a power law without a generative mechanism likely represents the combined outcome of not just one, but many processes that we do not yet understand.
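A toy simulation (not from either paper) makes that point about sums concrete; the distributions, sample sizes, and tail index below are arbitrary choices, and the tail-slope estimator is deliberately crude.

# Sums of finite-variance variables wash out toward a normal distribution,
# whereas sums of infinite-variance variables keep a power-law-like tail.
import numpy as np

rng = np.random.default_rng(1)
n_sums, n_terms = 50_000, 100

# Finite variance: sums of uniform variables look normal.
finite = rng.uniform(0, 1, (n_sums, n_terms)).sum(axis=1)

# Infinite variance: sums of Pareto variables with tail index alpha < 2 do not.
alpha = 1.5
heavy = (rng.pareto(alpha, (n_sums, n_terms)) + 1).sum(axis=1)

def tail_slope(x, x0):
    """Crude estimate of the exponent of P(X > x) on log-log axes above x0."""
    x = np.sort(x)
    p = 1.0 - np.arange(1, x.size + 1) / x.size    # empirical survival function
    keep = (x > x0) & (p > 0)
    slope, _ = np.polyfit(np.log10(x[keep]), np.log10(p[keep]), 1)
    return -slope

z = (finite - finite.mean()) / finite.std()
print(f"finite-variance sums beyond 3 SD: {np.mean(np.abs(z) > 3):.4f} (normal: ~0.0027)")
print(f"heavy-tailed sums: tail exponent ≈ {tail_slope(heavy, np.quantile(heavy, 0.99)):.2f} "
      f"(Pareto tail index alpha = {alpha})")

The finite-variance sums behave like a textbook normal; the heavy-tailed sums still show a roughly straight tail on log-log axes, with a slope near the tail index of the ingredients and no single generative mechanism in sight.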

So, how do Levitin’s and Rosenholtz’s power laws fare against Stumpf and Porter’s criticism? Not too badly. Although Rosenholtz et al. do not provide much support for their power law, they cite a more complete treatment in the works. More importantly, they seem to have adopted Stumpf and Porter’s recommendation of reporting their results in “neutral fashion.” In other words, it isn’t so much the form of the relationship between statistical discriminability and search efficiency that is of interest; it’s the fact that there is a (monotonic) relationship in the first place. (This is of interest because search efficiency has been intensively studied for decades, and few researchers have considered the influence of a factor like statistical discriminability.)

Levitin’s power law fares even better. For starters, the empirical measurements adhere to Stumpf and Porter’s first rule of thumb: support was obtained over a 2 log-unit range. (Although I wonder whether 1 Hz, i.e. 60 bpm, might be too low an upper cut-off frequency for classical rhythms. Intuition suggests that a contemporary speedcore slow dance might have comparatively more power at, say, 300 bpm or 5 Hz.) More importantly, the statistical support Levitin et al. provide for their power law is a tour de force.

Not to cast aspersions, but perhaps the key difference between physicists like Stumpf and psychologists like Levitin is in their expectations. In previous work, Stumpf and Ingram (2005) tested power laws against distributions with even less unpredictability, such as the Poisson and the normal, whose variances are finite. Psychologists, on the other hand, seem to be surprised whenever they find any departure from pure unpredictability. That’s why their null hypotheses tend to be flat spectra and zero correlations. –J.A.S.

Stumpf, M. P. H., & Ingram, P. J. (2005). Probability models for degree distributions of protein interaction networks. Europhysics Letters, 71(1), 152–158.

VISUAL ADAPTATION

Where are negative afterimages?

Zaidi, Q., Ennis, R., Cao, D., & Lee, B. (2012). Neural locus of color afterimages. Current Biology, 22(3), 220–224.

The first time you saw a demonstration of a negative color afterimage, you probably got an explanation that talked about bleaching photoreceptors. In this grade-school version, white (or gray) was the result of roughly equal outputs from short-, medium-, and long-wavelength cones. When you stared at red, you bleached the long-wavelength cones more than the medium or short. When you subsequently looked at the achromatic background, the pattern of cone outputs matched what the unadapted eye would have produced had the color been a bluish-green, and so that is what you saw. (Actually, the cones were probably called “red”, “green”, and “blue”, but that is a separate problem in early childhood education.)

The second time you saw the demonstration, in that college Sensation and Perception course, you learned that the bleaching account couldn’t be right, at least not for routine negative afterimages, which are produced by staring at projected images of false-color American flags and the like. The amount of bleaching was too small and the rate of recovery from bleaching far too fast. Some post-receptoral processes were clearly at work. On the third pass, in graduate school, you learned that the topic was really very complicated and either gave up or adopted color vision as your life’s work.

Qasim Zaidi and colleagues (2012) are in the latter category and have produced a pleasingly straightforward contribution to this topic. They had observers view a bipartite field that started as equally gray on both sides. The colors then changed sinusoidally to opposite positions on a color axis (e.g., one side turned green; the other, purple). From that position, the colors continued their sinusoidal trajectory back to gray. Because of adaptation, the observer’s percept reaches gray before the stimulus does. The afterimage sums with the colors in the display and, at some moment in time, they neutralize each other. Zaidi et al. had their observers monitor a clock-like stimulus as a way to report the moment of neutrality. The adaptation time constants are on the order of seconds, much slower than adaptation in photoreceptors.

Zaidi et al. presented the same sorts of stimuli to macaque ganglion cells and found very similar time courses. They argue that this shows that these afterimages must arise after the photoreceptors and either at or before the retinal ganglion cells. Their model proposes that the photoreceptors feed a leaky integrator that, in effect, keeps track of the recent history of stimulation. The leaky integrator’s output is subtracted from the activity of the ganglion cells, producing the adaptation. The point of neutrality occurs when the integrator signal is equal in size to the signal produced by the current stimulus.
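The logic of that model is easy to sketch. The following is a paraphrase of the mechanism described above, not Zaidi et al.'s implementation: the time constant, stimulus profile, and time step are assumed values chosen purely for illustration.

# A sinusoidal color excursion drives a cone-like signal; a leaky integrator
# tracks its recent history and is subtracted from it. The adapted signal
# ("percept") returns to zero (neutrality) while the stimulus is still colored.
import numpy as np

dt = 0.001                       # time step (s)
t = np.arange(0, 10, dt)         # one 10 s half-cycle: gray -> color -> gray
tau = 2.0                        # assumed integrator time constant (s)

stimulus = np.sin(np.pi * t / 10.0)          # 0 at gray, 1 at the saturated color

# Leaky integrator of the stimulus history.
integrator = np.zeros_like(stimulus)
for i in range(1, t.size):
    integrator[i] = integrator[i - 1] + dt * (stimulus[i] - integrator[i - 1]) / tau

# Adapted, ganglion-cell-like signal: current drive minus integrated history.
percept = stimulus - integrator

# Neutrality: first zero crossing of the adapted signal after the color peak,
# i.e., the moment the integrator equals the current stimulus.
peak = int(np.argmax(stimulus))
cross = peak + int(np.argmax(percept[peak:] <= 0))
print(f"percept reaches neutrality at t = {t[cross]:.2f} s, "
      f"while the stimulus is still at {stimulus[cross]:.2f} of its peak value")

Run with these made-up numbers, the adapted signal crosses zero well before the stimulus returns to gray, which is the signature Zaidi et al. exploited with their clock-reading procedure.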

There have been claims that afterimages are cortical phenomena. Zaidi et al. argue that the retinal ganglion cells pass an afterimage signal to the cortex that is treated like any other signal and is subject to cortical processes that, for example, define surfaces. The site of adaptation, however, is localized to the retina beyond the photoreceptors. –J.W.

OBJECTS AND AFFECTS

What’s in a shape?

Leder, H., Tinio, P. P. L., & Bar, M. (2011). Emotional valence modulates the preference for curved objects. Perception, 40(6), 649–655. doi:10.1068/p6845

Watson, D. G., Blagrove, E., Evans, C., & Moore, L. (2012). Negative triangles: Simple geometric shapes convey emotional valence. Emotion, 12(1), 18–22. doi:10.1037/a0024495

Consumer products have gotten curvier over the last few decades. Cars, computers, kitchen utensils: they tend to have fewer sharp corners and more curved contours. Is this just fashion, or is there something deeper going on? A couple of recent papers argue that shape and contour interact with our emotional states. Leder, Tinio, and Bar (2011), writing in Perception, show that contour modulates our preferences for objects, unless the objects are negatively valenced. Meanwhile, Watson, Blagrove, Evans, and Moore (2012), writing in Emotion, show that downward-pointing triangles produce flanker effects for face targets.

The Leder et al. (2011) study builds on previous findings by Bar and Neta demonstrating that observers have a preference for curved over sharp objects (Bar & Neta, 2006), and that this difference shows up in the amygdala (Bar & Neta, 2007). Experiment 1 of the new study replicates the basic result. Observers were briefly shown pictures of neutral-valenced objects, such as combs, chairs, dice, or flatbed scanners, which came in both sharp and curved versions (there were also abstract patterns, likewise sharp or curved, and a set of control objects represented by a single exemplar, either sharp or curved). They were asked to give a “like” or “dislike” response to each picture. Observers liked the curved objects more than the sharp objects.

Experiment 2 was more interesting. Here the stimuli were specifically chosen to have either negative or positive valence. The negative-valence pictures were typically weapons or unpleasant animals (e.g., insects, bats, snakes), while the positively valenced pictures were typically foods or games. Instead of relying on finding curved knives or sharp donuts, the authors digitally manipulated the pictures into curvy and sharp versions. They also added valence ratings (pleasant vs. unpleasant) and an arousal scale. Overall, the positive pictures were rated as more pleasant and less arousing than the negative pictures, findings that did not depend on contour (curved/sharp). Positive pictures were liked more than negative pictures, unsurprisingly. The meat of the paper is in the valence × contour interaction: for the positive pictures, curvy items were again preferred to their sharp counterparts. However, for negative pictures, contour made no difference. Leder et al. (2011) suggest a sort of affective triage, in which the “affective evaluation system” treats negative objects differently from positive objects. Negative objects may necessitate avoidance responses, whereas one can engage with positive objects in a more leisurely fashion, allowing perceptual features to come into play.

This general avoidance response does not seem to apply to negative faces, which are found more quickly in visual search tasks than positive or neutral faces (Eastwood, Smilek, & Merikle, 2001; Ohman, Lundqvist, & Esteves, 2001). This is a somewhat surprising finding, since facial expressions are perceptually complex stimuli; perhaps the effects are mediated by simpler visual features (Wolfe & Horowitz, 2004)? It has been suggested that downward-pointing triangles could be carrying the negative affect (Larson, Aronoff, & Stearns, 2007; Tipples, Atkinson, & Young, 2002). However, this hypothesis has only been supported by association: downward triangles are detected more easily in search displays, much like negative faces. In a recent paper in Emotion, Watson et al. (2012) take on the question in a more direct fashion.

Watson et al. (2012) used a twist on the Eriksen flanker paradigm (Eriksen & Eriksen, 1974). The targets were schematic emotional faces, and observers had to indicate whether the target face had a positive or negative valence. The flankers could be either faces or triangles. With negative face flankers, responses to negative faces were speeded and responses to positive faces slowed (relative to a neutral condition). This effect is expected, since negative face flankers are identical to the negative face targets, and thus mapped to the opposite response from the positive face targets. The interesting finding is that the same pattern was observed when the flankers were downward-pointing triangles (relative to a neutral condition with sideways-pointing triangles). Note that the triangles were never targets in this experiment, so any association between the flankers and the target responses must rely on a relationship established outside of the experiment. This constitutes fairly strong evidence that downward-pointing triangles evoke negative emotional valence.

The fact that very simple visual properties can trigger emotional responses suggests that the visual system may be optimized to extract emotionally relevant information from a scene, without having to wait for complex, higher-level visual processing. It also suggests that the trend of curvaceous consumer products may be here to stay. –T.S.H.

Bar, M., & Neta, M. (2006). Humans prefer curved visual objects. Psychological Science, 17(8), 645–648. doi:10.1111/j.1467-9280.2006.01759.x

Bar, M., & Neta, M. (2007). Visual elements of subjective preference modulate amygdala activation. Neuropsychologia, 45(10), 2191–2200. doi:10.1016/j.neuropsychologia.2007.03.008

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception and Psychophysics, 63(6), 1004–1013.

Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception and Psychophysics, 16(1), 143–149. doi:10.3758/BF03203267

Larson, C. L., Aronoff, J., & Stearns, J. J. (2007). The shape of threat: Simple geometric forms evoke rapid and sustained capture of attention. Emotion, 7(3), 526–534. doi:10.1037/1528-3542.7.3.526

Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80(3), 381–396. doi:10.1037//0022-3514.80.3.381

Tipples, J., Atkinson, A. P., & Young, A. W. (2002). The eyebrow frown: A salient social signal. Emotion, 2(3), 288–296. doi:10.1037/1528-3542.2.3.288

Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5(6), 495–501. doi:10.1038/nrn1411

PERCEPTION AND ACTION

Remote tool use affects reports of distance

Davoli, C. C., Brockmole, J. R., & Witt, J. K. (2012). Compressing perceived distance with remote tool-use: real, imagined, and remembered. Journal of Experimental Psychology: Human Perception and Performance, 38(1), 80–89.

Perception and action are intimately linked, and many perception-action linkages extend to tool use. For example, after using a tool, attention appears to concentrate around the functional end of the tool, allowing observers to detect targets near the tool faster than targets away from the tool (Reed et al., 2010). In their recent paper, Davoli and colleagues demonstrate that using a tool on a distant object—such as directing a laser pointer at an object well outside of action space—produces results similar to acting on an object via direct contact with a tool.

Several studies have reported that tool use alters distance reports for objects that are just out of arm’s reach. When participants use a tool to reach for such an object, they report that the object appears closer than when no tool was used, a result consistent with the idea that tool use expands personal space. In their initial experiments, Davoli et al. demonstrated a similar result for objects well outside personal space, objects as far as 30 m away. Participants either directed a laser pointer at the object or pointed a baton at it. When participants verbally reported the estimated distance to the target object, reports were shorter when the object had been interacted with using the laser pointer than when it had been pointed at with the baton. Interestingly, participants also underestimated the distance when instructed to imagine pointing a laser pointer at the distant object. This latter result suggests that the intended action, not the visual stimulation provided by the laser pointer, affects reports of perceived distance. A later experiment demonstrated that distance reports were altered primarily by intended action; when holding a vacuum hose toward the target, participants reported the target object as appearing closer irrespective of whether the vacuum was in ‘vacuum’ mode or ‘blower’ mode.

Davoli and colleagues discuss their results in terms of embodied accounts of cognition, in which the environment, including the body, affects various aspects of cognition. Davoli et al.’s findings extend this literature to very distant interactions with objects, which raises many interesting questions. For example, distance perception relies on different cues (e.g., monocular and binocular), and different cues operate at different distances. Do near and far interactions affect these cues differently? Such questions might speak to methodological concerns that have been raised about some perception-action studies (Durgin et al., 2009). –S.P.V.

Reed, C. L., Betz, R., Garza, J. P., & Roberts, R. J. (2010). Grab it! Biased attention in functional hand and tool space. Attention, Perception, & Psychophysics, 72(1), 236–245.

Durgin, F. H., Baird, J. A., Greenburg, M., Russell, R., Shaughnessy, K., & Waymouth, S. (2009). Who is being deceived? The experimental demands of wearing a backpack. Psychonomic Bulletin & Review, 16(5), 964–969.

SPOKEN WORD RECOGNITION

Sleep on it

Dumay, N., & Gaskell, M. G. (2012). Overnight lexical consolidation revealed by speech segmentation. Cognition, 123, 110–132.

As listeners hear the stream of speech that makes up the fabric of our spoken interactions, individual words must be segmented and recognized in the context of all the words that we know and use. During recognition, evidence from the incoming speech signal activates potential candidates in lexical memory, which then compete until the intended word is ultimately recognized. As we learn new words, they are eventually added to the pool of potential competitors, and the extent to which a newly learned word participates in this lexical competition process indexes whether it has entered our vocabulary, or lexicon. Dumay and Gaskell investigated when and how novel words are added to lexical memory by assessing whether newly learned words serve as lexical competitors before or after sleep-induced memory consolidation.

Across experiments, listeners were presented with novel words that were similar to existing words in one of two ways: either the novel word was a close variant of an existing word (e.g., frenzylk) or it contained an embedded existing word (e.g., lirmucktoze). After exposure, tasks requiring lexical processing were administered to determine whether the newly encountered words now served as lexical competitors. Crucially, testing for lexical competition occurred either immediately after exposure or after a 24-hour or 7-day delay. Previous work has shown that it is not the delay per se that influences memory consolidation in these tasks, but rather the opportunity to sleep (Dumay & Gaskell, 2007). The results showed that lexical competition from the newly learned words was observed only when listeners were tested after 24 hours or 7 days and had had the opportunity to sleep. No evidence of lexical competition was found immediately after exposure to the novel words. This pattern was similar for both types of novel words and extended to two distinct tests of lexical competition, including a word-spotting, or segmentation, task. These findings indicate that newly acquired words may require a period of sleep in order to be incorporated into lexical memory: sleep-induced memory consolidation is a crucial component of vocabulary learning. –L.N.

Dumay, N., & Gaskell, M. G. (2007). Sleep-associated changes in the mental representation of spoken words. Psychological Science, 18(1), 35–39.