Psychonomic Bulletin & Review, Volume 23, Issue 6, pp 1832–1838

Animacy increases second target reporting in a rapid serial visual presentation task

  • Guadalupe Guerrero
  • Dustin P. Calvillo
Brief Report


Abstract

Attentional blink occurs when two target items, T1 and T2, are presented within brief moments of each other in a series of rapidly presented items and participants fail to report T2. The purpose of the present study was to examine the effect of characteristics of T2 on T2 reporting. Participants (N = 67) completed 4 blocks of 40 trials. Each trial consisted of 15 images, two of which were designated as T1 and T2. T2 was manipulated in three ways: animacy (animate or inanimate), threat (threatening or nonthreatening), and lag (200 ms or 400 ms after T1). The results indicated that more T2s were reported at the longer lag and that animate objects were reported more often than inanimate objects at both lags. Threat did not have a significant effect on T2 reporting, although it interacted with lag: threatening objects were reported more frequently than nonthreatening objects at lag 2, but this trend reversed at lag 4. The results were consistent with the animate monitoring hypothesis, which claims that animate objects, because of their importance in ancestral environments, attract attention more easily than inanimate objects. Animate objects appear to capture attention more easily than inanimate objects as second targets in a rapid serial visual presentation task. This result is similar to animacy advantages reported with other attention tasks and with memory tasks.


Keywords: Attentional blink; Animate monitoring; Threat superiority

Attentional blink (AB) occurs when two targets are presented in close temporal proximity in a rapidly presented series of objects and individuals cannot report the second target (Shapiro, Arnell, & Raymond, 1997). AB may reflect limited attentional resources that are allocated to the first target (T1), rendering the second target (T2) undetected (Ward, Duncan, & Shapiro, 1996), or a temporary loss of attentional control (Di Lollo, Kawahara, Ghorashi, & Enns, 2005). Several factors influence detection of T2, including the time between T1 and T2 (Raymond, Shapiro, & Arnell, 1992), the personal relevance of T2 (Shapiro, Caldwell, & Sorensen, 1997), and the emotionality of T2 (Keil & Ihssen, 2004). The goal of the present study was to examine the effects of two characteristics that may enhance T2 detection: animacy and threat.

The animate monitoring hypothesis claims that the human attention system evolved to monitor animals (including humans) because of their importance in ancestral hunter-gatherer societies (New, Cosmides, & Tooby, 2007). Support for the animate monitoring hypothesis has come from studies that demonstrated that, compared with inanimate objects, animate objects are detected more quickly in visual search tasks (Jackson & Calvillo, 2013), they are detected more frequently in change blindness (New et al., 2007) and inattentional blindness tasks (Calvillo & Jackson, 2014), they receive longer eye fixations (Yang et al., 2012), and they are better remembered (Nairne, VanArsdall, Pandeirada, Cogdill, & LeBreton, 2013).

Threatening objects show attentional advantages over nonthreatening objects in visual search tasks (Fox, Griggs, & Mouchlianitis, 2007). Several studies, however, have allowed threat to be confounded with animacy. For example, studies have found that snakes and spiders are detected more quickly than flowers and mushrooms (Öhman, Flykt, & Esteves, 2001). It is unclear in these studies whether the differences emerge as a result of animacy, of threat, or of a specific attentional advantage for spiders and snakes. When the detection of threatening animals (snakes and spiders) is compared with the detection of nonthreatening animals (cats and fish), there are no differences except among those with specific phobias, who are faster at detecting the animal they fear (Soares, Esteves, & Flykt, 2009). Similarly, Calvillo and Hawkins (2016) found that threatening and nonthreatening animate objects are detected more frequently than threatening and nonthreatening inanimate objects in inattentional blindness tasks.

A few studies have examined the effect of threat on AB. Some studies have compared the detection of angry faces to that of neutral faces in T2, and most of these have found that angry faces reduce AB (Maratos, Mogg, & Bradley, 2008). Trippe, Hewig, Heydel, Hecht, and Miltner (2007) examined T2 detection rates in spider phobics and controls. T2 images consisted of snakes, spiders, positive emotional images (young humans and other animals), negative emotional images (injured or mutilated people), and neutral images (e.g., household objects). Overall, participants correctly identified positive and negative emotional images, snakes, and spiders more frequently than neutral images, and spider phobics were more likely than controls to identify spiders. In a similar study, Reinecke, Rinck, and Becker (2008) examined T2 reporting rates when T2s consisted of mushrooms, flowers, and spiders. Spiders were reported more frequently than flowers, which were reported more frequently than mushrooms. In their second experiment, Reinecke et al. (2008) included spider phobics as participants and snakes as an additional T2 category. Spiders were reported most frequently, followed by flowers, then snakes, with mushrooms reported least frequently. Furthermore, spider phobics reported spiders more often than did control participants. Thus, threatening objects appear to increase the reporting of T2.

In the present study, we examined T2 reporting rates for animate and inanimate objects that were threatening or nonthreatening. We also manipulated the lag between T1 and T2, with some T2s appearing 200 ms after T1 and some appearing 400 ms after T1. Based on the animate monitoring hypothesis and supporting studies with inattentional blindness tasks (Calvillo & Hawkins, 2016), we predicted that animate objects would be reported more frequently than inanimate objects and that threat would not affect reporting rates. Studies have shown a relationship between susceptibility to inattentional blindness and AB (Beanland & Pammer, 2012), and working memory capacity has been shown to predict susceptibility to both inattentional blindness (Richards, Hannon, & Derakshan, 2010; but see Bredemeier & Simons, 2012) and AB (Colzato, Spapé, Pannebakker, & Hommel, 2007; but see Martens & Johnson, 2008). Thus, there is some reason to believe that the findings from inattentional blindness studies should generalize to an AB task. Finally, we predicted that more T2s would be reported at the longer lag than at the shorter lag, based on previous AB studies (Raymond et al., 1992).


Method

Participants and design

Undergraduate students (N = 67) from California State University San Marcos participated in exchange for credit toward the completion of a research requirement. There were 12 men and 55 women, and participants ranged in age from 18 to 37 (M = 20.80, SD = 3.84) years. Based on the small-medium effect of animacy in inattentional blindness studies (Calvillo & Hawkins, 2016; Calvillo & Jackson, 2014), we predicted an effect of d = 0.35 in the present study. An a priori power analysis based on this effect size revealed that a sample of 67 was needed to detect this predicted effect with a power of 0.80 (and α = 0.05, two-tailed).
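For reference, an a priori power analysis of this kind can be reproduced with standard tools. The sketch below uses statsmodels' TTestPower (the paper does not state which software was used); it models the predicted within-subjects effect as a paired/one-sample t-test, which is an assumption on our part:

```python
import math

from statsmodels.stats.power import TTestPower

# A priori power analysis for a within-subjects (paired) comparison:
# predicted effect size d = 0.35, alpha = .05 (two-tailed), power = .80.
# TTestPower models a one-sample t-test, equivalent to a paired test
# on difference scores.
n = TTestPower().solve_power(effect_size=0.35, alpha=0.05, power=0.80,
                             alternative='two-sided')
print(math.ceil(n))  # required sample size, rounded up
```

Rounding up the solved sample size recovers a value close to the N = 67 reported in the study.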

The study was a 2 (Animacy: animate or inanimate) × 2 (Threat: threatening or nonthreatening) × 2 (Lag: lag 2 or lag 4) within-subjects factorial design. The dependent variable was the accurate reporting of the target stimuli.

Materials and procedure

After providing informed consent, participants were given instructions on the task. They were informed that they would see a rapid sequence of images, two of which would have a red border around them, and that their task was to report the objects in the two red-bordered images. They then completed an example trial that had a slower presentation rate, were informed that the images in the actual trials would be presented more rapidly, and were allowed to ask questions.

Participants completed 4 blocks of 40 trials. Each trial contained a fixation cross presented for 500 ms, 15 images including 2 targets (indicated by a red border) presented for 100 ms each, and a perceptual mask presented for 500 ms. All images were taken from Moreno-Martinez and Montoro’s (2012) standardized set of photographed objects.1 The first target (T1) appeared in the 3rd, 4th, 5th, or 6th position in the trial, equally often at each position. The second target (T2) appeared either two (lag 2) or four (lag 4) positions after the first target. T1 consisted of 10 fruits, 10 vegetables, 10 furniture items, and 10 musical instruments. T2 consisted of threatening animate objects (crocodile, tiger, shark, scorpion, snake, spider, lion, bee, bat, and mosquito), nonthreatening animate objects (butterfly, manatee, ladybug, fly, hummingbird, penguin, giraffe, duck, zebra, and turtle), threatening inanimate objects (revolver, machine gun, grenade, cannon, sword, axe, handsaw, hammer, arrow, and slingshot), and nonthreatening inanimate objects [nut (hardware), level (hardware), pen, shield, paperclip, teapot, compass, saucepan, pot, and pencil]. All animate objects were animals, and all inanimate objects were artifacts. The 40 objects used as T2 were determined through a pilot study in which individuals provided threat ratings for 78 objects on a 5-point scale (1 = not at all threatening, 5 = extremely threatening). Based on these ratings, ten high-threat and ten low-threat animate and inanimate objects were selected for inclusion. The ten threatening animate objects (M = 3.74, 95 % confidence interval (CI) [3.48, 3.99]) and ten threatening inanimate objects (M = 3.69, 95 % CI [3.43, 3.94]) were rated as more threatening than the ten nonthreatening animate objects (M = 1.24, 95 % CI [1.14, 1.34]) and ten nonthreatening inanimate objects (M = 1.21, 95 % CI [1.12, 1.30]).
Filler items consisted of 130 images from eight categories: trees, flowers, nuts, buildings, sports games, jewelry, vehicles, and clothing. After the 15th image in a trial and the perceptual mask, participants were prompted to report the names of the two target objects and to enter a question mark when they could not identify a target. An example trial is illustrated in Fig. 1.
Fig. 1

Trial sequences consisted of a fixation screen shown for 500 ms, 15 images (including two targets indicated by red borders) shown for 100 ms each, and a mask shown for 500 ms
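The trial timeline described above can be sketched programmatically. This is an illustrative reconstruction, not the original E-Prime script; the function and event labels are ours:

```python
# Illustrative sketch of one trial's event timeline (durations in ms),
# following the sequence in Fig. 1; not the original E-Prime code.
def trial_timeline(t1_pos=4, lag=2, n_images=15):
    """Build (event, duration) pairs for a single RSVP trial.

    t1_pos: serial position of T1 (3-6 in the study).
    lag: T2 appears `lag` positions after T1 (2 or 4),
         i.e., 200 or 400 ms later at 100 ms per image.
    """
    events = [('fixation', 500)]
    for i in range(1, n_images + 1):
        # '*' marks the red-bordered target images (T1 and T2)
        target = '*' if i in (t1_pos, t1_pos + lag) else ''
        events.append((f'image_{i}{target}', 100))
    events.append(('mask', 500))
    return events

timeline = trial_timeline(t1_pos=4, lag=2)
total_ms = sum(d for _, d in timeline)  # 500 + 15 * 100 + 500 = 2500 ms
```

With T1 at position 4 and lag 2, the targets fall at positions 4 and 6, and T2 onset is 200 ms after T1 onset.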

Each of the four blocks contained one trial with each of the 40 T1 objects and each of the 40 T2 objects as well as four repetitions of each of the filler items. Within each block, half of the threatening animate, nonthreatening animate, threatening inanimate, and nonthreatening inanimate objects appeared at lag 2 and the other half appeared at lag 4. Across blocks, the pairing of each of the T2 objects with one of the four categories of T1 objects was counterbalanced. For example, the snake may have appeared as T2 with a fruit as T1 in block 1, with a vegetable as T1 in block 2, with a furniture item as T1 in block 3, and with a musical instrument as T1 in block 4. The lag for each T2 object was also counterbalanced across blocks so that each T2 object appeared in the lag 2 condition twice and in the lag 4 condition twice.
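The counterbalancing scheme above can be illustrated with a small sketch. This is a hypothetical reconstruction (the object lists are truncated stand-ins for the 40 T2 images, and the rotation scheme is our own choice that satisfies the constraints described):

```python
# Hypothetical reconstruction of the counterbalancing described above.
T1_CATEGORIES = ['fruit', 'vegetable', 'furniture', 'instrument']
T2_OBJECTS = ['snake', 'duck', 'grenade', 'teapot']  # stand-ins for the 40 T2 objects

def build_design(t2_objects, n_blocks=4):
    trials = []
    for block in range(n_blocks):
        for i, t2 in enumerate(t2_objects):
            # Latin-square rotation: each T2 meets each T1 category exactly once
            t1_cat = T1_CATEGORIES[(i + block) % len(T1_CATEGORIES)]
            # Alternation puts each T2 at lag 2 in two blocks and lag 4 in the
            # other two, with half of the objects at each lag within a block
            lag = 2 if (i + block) % 2 == 0 else 4
            trials.append({'block': block + 1, 'T1_category': t1_cat,
                           'T2': t2, 'lag': lag})
    return trials

design = build_design(T2_OBJECTS)
```

Over the four blocks, each T2 object is paired once with each T1 category and appears twice at each lag, matching the constraints in the text.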

The trials within the four blocks were presented in a random order that was the same for every participant. Data were collected individually in sessions lasting approximately 30 minutes. Stimuli were presented and data were collected using E-Prime.


Results

We analyzed the accurate reporting rates of T2 as a function of animacy, threat, and lag in a three-way ANOVA. Consistent with typical AB studies, we only included T2 trials in which participants correctly reported T1 (Shapiro et al., 1997)2; trials in which participants responded with “?” (which occurred in 75.1 % of trials without a correct identification) or with an incorrect label for T2 (which occurred in 24.9 % of trials without a correct identification) were omitted. The mean reporting rates (and 95 % CIs) for all conditions are displayed in Fig. 2. In the three-way ANOVA, we found a significant effect of animacy, F(1, 66) = 98.55, p < 0.001, ηp² = 0.60, no significant effect of threat, F(1, 66) = 0.18, p = 0.667, ηp² = 0.00, and a significant effect of lag, F(1, 66) = 224.75, p < 0.001, ηp² = 0.77. Animate objects (M = 0.33, 95 % CI [0.28, 0.37]) were reported more frequently than inanimate objects (M = 0.20, 95 % CI [0.16, 0.23]), and T2s were reported more frequently at lag 4 (M = 0.41, 95 % CI [0.36, 0.46]) than at lag 2 (M = 0.11, 95 % CI [0.08, 0.14]). Threatening objects (M = 0.26, 95 % CI [0.22, 0.30]) were reported at rates very similar to nonthreatening objects (M = 0.26, 95 % CI [0.23, 0.30]).
Fig. 2

Mean accurate reporting rate for T2 by animacy, threat, and lag (error bars show 95 % CIs)

Several interactions were significant in the three-way ANOVA. The two-way interactions between animacy and lag, F(1, 66) = 13.66, p < 0.001, ηp² = 0.17, and between threat and lag, F(1, 66) = 10.31, p = 0.002, ηp² = 0.14, were significant, whereas the two-way interaction between animacy and threat was not, F(1, 66) = 2.01, p = 0.161, ηp² = 0.03. Finally, the three-way interaction of animacy, threat, and lag was significant, F(1, 66) = 10.11, p = 0.002, ηp² = 0.13. We conducted simple effects tests for the two-way interactions. For the interaction between animacy and lag, we analyzed the effect of animacy at each lag. The effect of animacy was significant at both lags, but it was slightly larger at lag 4, t(66) = 8.20, p < 0.001, d = 0.75, than at lag 2, t(66) = 6.13, p < 0.001, d = 0.58. For the interaction between threat and lag, we analyzed the effect of threat at each lag. At lag 2, threatening objects were accurately reported significantly more frequently than nonthreatening objects, t(66) = 2.43, p = 0.018, d = 0.17, whereas at lag 4, nonthreatening objects were accurately reported marginally more frequently than threatening objects, t(66) = 1.74, p = 0.087, d = 0.15. To examine the three-way interaction, we conducted separate two-way ANOVAs for each lag. The three-way interaction appears to have emerged because the two-way interaction between animacy and threat was significant at lag 2, F(1, 66) = 15.65, p < 0.001, ηp² = 0.19, but not at lag 4, F(1, 66) = 0.75, p = 0.391, ηp² = 0.01.
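An analysis of this shape can be reproduced with standard tools. The sketch below runs a 2 × 2 × 2 repeated-measures ANOVA with statsmodels' AnovaRM on synthetic data shaped like the study's design (the paper does not report which software computed the ANOVA, and the variable names are ours):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic data shaped like the study's design: 67 participants, one mean
# reporting rate per cell of the 2 x 2 x 2 within-subjects design.
rng = np.random.default_rng(0)
rows = []
for subj in range(67):
    for animacy in ('animate', 'inanimate'):
        for threat in ('threatening', 'nonthreatening'):
            for lag in (2, 4):
                rows.append({'subj': subj, 'animacy': animacy,
                             'threat': threat, 'lag': lag,
                             'acc': rng.uniform(0, 1)})
df = pd.DataFrame(rows)

# Three-way repeated-measures ANOVA on T2 reporting accuracy:
# yields the 3 main effects, 3 two-way interactions, and the
# three-way interaction reported in the text.
res = AnovaRM(df, depvar='acc', subject='subj',
              within=['animacy', 'threat', 'lag']).fit()
print(res.anova_table)
```

AnovaRM requires exactly one observation per subject per cell, which matches aggregating each participant's reporting rate within each condition before analysis.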

We also analyzed T1 reporting rates (in all trials) to determine if they varied as a function of the characteristics of T2 (animacy, threat, and lag). The mean T1 reporting rates (and 95 % CIs) for all conditions are displayed in Fig. 3. In a three-way ANOVA, we found significant main effects of animacy, F(1, 66) = 35.45, p < 0.001, ηp² = 0.35, threat, F(1, 66) = 12.43, p = 0.001, ηp² = 0.16, and lag, F(1, 66) = 51.62, p < 0.001, ηp² = 0.44. T1 was accurately reported more frequently when T2 was animate (M = 0.63, 95 % CI [0.59, 0.66]) than when it was inanimate (M = 0.58, 95 % CI [0.54, 0.62]), more frequently when it was threatening (M = 0.62, 95 % CI [0.58, 0.66]) than when it was nonthreatening (M = 0.59, 95 % CI [0.56, 0.63]), and more frequently with lag 2 (M = 0.64, 95 % CI [0.60, 0.67]) than with lag 4 (M = 0.57, 95 % CI [0.53, 0.61]). The two-way interaction between threat and lag was significant, F(1, 66) = 5.70, p = 0.020, ηp² = 0.08, whereas the two-way interactions between animacy and threat, F(1, 66) = 1.05, p = 0.310, ηp² = 0.02, and between animacy and lag, F(1, 66) = 1.20, p = 0.277, ηp² = 0.02, and the three-way interaction, F(1, 66) = 3.72, p = 0.058, ηp² = 0.05, failed to reach significance. We explored the two-way interaction between threat and lag with simple effects tests on the effect of threat at each lag. T1 was accurately reported significantly more frequently when T2 was threatening than when it was nonthreatening at lag 4, t(66) = 3.88, p < 0.001, d = 0.26, but not at lag 2, t(66) = 0.40, p = 0.690, d = 0.02.
Fig. 3

Mean accurate reporting rate for T1 by animacy, threat, and lag (error bars show 95 % CIs)


Discussion

Consistent with previous AB studies (Raymond et al., 1992), we found that participants reported more T2s at the longer lag (lag 4) than at the shorter lag (lag 2). The novel contribution of the present study is that animate objects were reported more frequently than inanimate objects. This effect of animacy occurred at both lags.3 Threatening objects were not reported more frequently than nonthreatening objects overall but did show advantages in some conditions. Specifically, threatening objects were reported more frequently than nonthreatening objects at lag 2 (an effect driven by the inanimate objects); however, this pattern reversed at lag 4, producing an interaction between threat and lag. The lack of a main effect of threat appears inconsistent with previous studies, which found that angry faces (Maratos et al., 2008) and threatening animals, such as snakes and spiders, increase the detection of T2 (Reinecke et al., 2008; Trippe et al., 2007). However, the threatening objects in those studies were animate, whereas animacy and threat were manipulated independently in the present study.

The animacy advantage in accurately reporting T2 found in the current study is consistent with the animate monitoring hypothesis, which claims that animate objects receive attentional priority because of their importance in ancestral, hunter-gatherer societies (New et al., 2007). New et al. (2007) claimed that the attentional advantage of animate objects occurs because information about other human beings and animals was important for the survival of our ancestors; being able to notice changes in the environment concerning enemies, predators, or food sources was important for the survival of our species. Modern threats, such as guns and cars, may be more important for survival in humans’ current environment, but ancient threats, such as snakes and spiders, have shaped the human visual system to prioritize the detection of these objects (Isbell, 2006). Consistent with the animate monitoring hypothesis, the results of the present study suggest that all animate objects, not just threatening species, may capture attention. Threatening animate objects were not reported more frequently than nonthreatening animate objects at either lag (there was a small, nonsignificant trend at both lags in the opposite direction).

The findings of the present study add to studies showing that animate objects capture visual attention in visual search, change blindness, and inattentional blindness tasks (Calvillo & Jackson, 2014; Jackson & Calvillo, 2013; New et al., 2007). These previous findings suggest that animate objects are prioritized in visual scenes. There is a general consensus that AB occurs because of a limitation in the ability to sequentially process targets (Kawahara & Enns, 2009). In the present study, the processing of T1 interfered less with the ability to identify T2 when T2 was animate than when it was inanimate. This suggests that animate objects also enhance the processing of targets across time, demonstrating the robustness of animate monitoring.

Some potential confounding variables were controlled for in the present study. The effect of animacy cannot be attributed to animate objects being more threatening than inanimate objects: pilot threat ratings of the animate and inanimate threatening objects were very similar to each other, as were those of the animate and inanimate nonthreatening objects. The animacy advantage also cannot be attributed to the animate objects being more familiar than the inanimate objects. Using the ratings provided by Moreno-Martinez and Montoro (2012), there was, if anything, a tendency for the inanimate objects to be rated as more familiar than the animate objects.

The present results can be explained by existing theories of AB. One explanation of AB is that limited attentional resources are allocated to T1 leaving too few resources left to detect T2 (Ward et al., 1996). If this is the case, animate objects appear to have a lower threshold for detection than inanimate objects. That is, animate objects are detected with fewer resources than inanimate objects. Similar explanations have been posited for the animacy advantage in inattentional and change blindness studies (Calvillo & Jackson, 2014; New et al., 2007). An alternative explanation for AB is that the process of target identification results in a temporary loss of attentional control (Di Lollo et al., 2005). When the first target is identified, the attentional set is disrupted and the second target may go undetected. The results of the present study may have occurred because animate objects enable the attentional system to regain control more easily than inanimate objects. Animate objects capture attention, even when they are not part of the current attentional set (Calvillo & Hawkins, 2016), and thus, animate objects at T2 may not be as affected by a disruption of a participant’s attentional set.

Animacy also increased accurate reporting of T1. Importantly, this suggests that the animacy advantage for reporting T2 did not come at the cost of T1 accuracy. When T2 was animate, it did not attract attention away from the processing of T1; rather, it appeared to enhance it. In their second experiment, Reinecke et al. (2008) reported that T1 accuracy was greater when T2 was a spider than when it was any other T2 object. Similarly, Trippe et al. (2007) found that snakes and spiders as T2s led to some of the greatest T1 reporting rates, although some differences failed to reach significance. Thus, there are previous reports of the content of T2 affecting T1 accuracy. Reinecke et al. (2008) suggested that spiders at T2 did not disrupt T1 processing because they may require less processing for their encoding. A similar explanation applies to the present study: animate objects at T2 may require fewer resources for encoding, leaving more resources for the continued processing of T1 and thus greater T1 accuracy.

The present study had a few noteworthy limitations. One is that there could be differences in low-level characteristics across conditions that influence reporting of the second targets. Cole and Wilkins (2013) found that images that lead to fear responses tend to have higher contrast energy at midrange spatial frequencies than do images that do not lead to fear responses. We did not conduct a spectral analysis of our images to rule out this potential confound. Another limitation is that all animate objects were naturally occurring objects (animals), whereas all inanimate objects were artifacts. The differences in T2 reporting rates could therefore reflect differences in detecting natural objects versus artifacts rather than animate versus inanimate objects. Future studies could attempt to tease apart animacy and naturalness by including natural inanimate objects (e.g., rocks) and, perhaps, artifacts that are animate (e.g., robots). A final limitation is that both lag times were within the time frame of AB (i.e., within 500 ms of T1). If a longer lag were included, we could determine whether animacy reduces AB or simply increases the reporting of T2 at all lags. Future studies could examine these possibilities.

Future research could also examine the specific features of animate objects that lead to superior reporting in AB tasks. New and German (2015), for instance, found that features of a spider, even when scrambled, are detected more frequently than needles and houseflies. Research should also examine physiological responses to animate and inanimate objects in a rapid serial visual presentation task. Wiemer, Gerdes, and Pauli (2013) found that participants have a greater skin conductance response to a spider than to a flower in an inattentional blindness task, even when the spider was not reported. This physiological indicator of arousal may help explain why animate objects are better detected in visual search tasks (Jackson & Calvillo, 2013) and better remembered in memory tasks (Nairne et al., 2013).

In conclusion, the present study was, to our knowledge, the first to examine the effects of animacy and threat on the accurate reporting of T2. We found that animacy increases T2 reporting in a rapid serial visual presentation task and that this advantage occurred at both the shorter and the longer lag. Threat did not have an overall effect on T2 reporting rates (although it interacted with lag). These results are consistent with the animate monitoring hypothesis and add to a growing body of supporting evidence across a variety of paradigms.


Footnotes

  1. This set of stimuli also contains several ratings for each image, including familiarity, which may influence the reporting of objects. Familiarity did not appear to be confounded with any of our manipulations: the mean familiarity rating was 3.20 for the 10 threatening animate objects, 3.34 for the 10 nonthreatening animate objects, 3.08 for the 10 threatening inanimate objects, and 4.04 for the 10 nonthreatening inanimate objects.

  2. The pattern of T2 reporting did not change when all trials were analyzed; the effect sizes were slightly larger before the inaccurate-T1 trials were omitted.

  3. The interaction between animacy and lag emerged because the effect of animacy was larger at lag 4 than at lag 2. Accurate reporting rates were low at lag 2, which appears to have restricted the effect of animacy at that lag.



Acknowledgments

The authors thank Jennifer Gomez and Briana Peralta for assistance with data collection.


References

  1. Beanland, V., & Pammer, K. (2012). Minds on the blink: The relationship between inattentional blindness and attentional blink. Attention, Perception, & Psychophysics, 74, 322–330. doi: 10.3758/s13414-011-0241-4
  2. Bredemeier, K., & Simons, D. J. (2012). Working memory and inattentional blindness. Psychonomic Bulletin & Review, 19, 239–244. doi: 10.3758/s13423-011-0204-8
  3. Calvillo, D. P., & Hawkins, W. (2016). Animate objects are detected more frequently than inanimate objects in inattentional blindness tasks independently of threat. Journal of General Psychology, 143, 101–115. doi: 10.1080/00221309.2016.1163249
  4. Calvillo, D. P., & Jackson, R. E. (2014). Animacy, perceptual load, and inattentional blindness. Psychonomic Bulletin & Review, 21, 670–675.
  5. Cole, G. G., & Wilkins, A. J. (2013). Fear of holes. Psychological Science, 24, 1980–1985. doi: 10.1177/0956797613484937
  6. Colzato, L. S., Spapé, M. M. A., Pannebakker, M. M., & Hommel, B. (2007). Working memory and the attentional blink: Blink size is predicted by individual differences in operation span. Psychonomic Bulletin & Review, 14, 1051–1057. doi: 10.3758/BF03193090
  7. Di Lollo, V., Kawahara, J.-I., Ghorashi, S. M. S., & Enns, J. T. (2005). The attentional blink: Resource depletion or temporary loss of control? Psychological Research, 69, 191–200. doi: 10.1007/s00426-004-0173-x
  8. Fox, E., Griggs, L., & Mouchlianitis, E. (2007). The detection of fear-relevant stimuli: Are guns noticed as quickly as snakes? Emotion, 7, 691–696.
  9. Isbell, L. A. (2006). Snakes as agents of evolutionary change in primate brains. Journal of Human Evolution, 51, 1–35. doi: 10.1016/j.jhevol.2005.12.012
  10. Jackson, R. E., & Calvillo, D. P. (2013). Evolutionary relevance facilitates visual information processing. Evolutionary Psychology, 11, 1011–1026.
  11. Kawahara, J., & Enns, J. T. (2009). Selection difficulty and inter-item competition are independent factors in rapid visual stream perception. Journal of Experimental Psychology: Human Perception and Performance, 35, 146–158. doi: 10.1037/a0013164
  12. Keil, A., & Ihssen, N. (2004). Identification facilitation for emotionally arousing verbs during the attentional blink. Emotion, 4, 23–35. doi: 10.1037/1528-3542.4.1.23
  13. Maratos, F. A., Mogg, K., & Bradley, B. P. (2008). Identification of angry faces in the attentional blink. Cognition and Emotion, 22, 1340–1352. doi: 10.1080/02699930701774218
  14. Martens, S., & Johnson, A. (2008). Working memory capacity, intelligence, and the magnitude of the attentional blink revisited. Experimental Brain Research, 192, 43–52. doi: 10.1007/s00221-008-1551-1
  15. Moreno-Martinez, F. J., & Montoro, P. R. (2012). An ecological alternative to Snodgrass & Vanderwart: 360 high quality colour images with norms for seven psycholinguistic variables. PLoS ONE, 7, e37527. doi: 10.1371/journal.pone.0037527
  16. Nairne, J. S., VanArsdall, J. E., Pandeirada, J. N. S., Cogdill, M., & LeBreton, J. M. (2013). Adaptive memory: The mnemonic value of animacy. Psychological Science, 24, 2099–2105. doi: 10.1177/0956797613480803
  17. New, J., Cosmides, L., & Tooby, J. (2007). Category-specific attention for animals reflects ancestral priorities, not expertise. Proceedings of the National Academy of Sciences, 104, 16598–16603.
  18. New, J. J., & German, T. C. (2015). Spiders at the cocktail party: An ancestral threat that surmounts inattentional blindness. Evolution and Human Behavior, 36, 165–173. doi: 10.1016/j.evolhumbehav.2014.08.004
  19. Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130, 466–478. doi: 10.1037/0096-3445.130.3.466
  20. Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849–860. doi: 10.1037/0096-1523.18.3.849
  21. Reinecke, A., Rinck, M., & Becker, E. S. (2008). How preferential is the preferential encoding of threatening stimuli? Working memory biases in specific anxiety and the attentional blink. Journal of Anxiety Disorders, 22, 655–670. doi: 10.1016/j.anxdis.2007.06.004
  22. Richards, A., Hannon, E. M., & Derakshan, N. (2010). Predicting and manipulating the incidence of inattentional blindness. Psychological Research, 74, 513–523. doi: 10.1007/s00426-009-0273-8
  23. Shapiro, K. L., Arnell, K. M., & Raymond, J. E. (1997). The attentional blink. Trends in Cognitive Sciences, 1, 291–296.
  24. Shapiro, K. L., Caldwell, J., & Sorensen, R. E. (1997). Personal names and the attentional blink: A visual “cocktail party” effect. Journal of Experimental Psychology: Human Perception and Performance, 23, 504–514. doi: 10.1037/0096-1523.23.2.504
  25. Soares, S. C., Esteves, F., & Flykt, A. (2009). Fear, but not fear-relevance, modulates reaction times in visual search with animal distractors. Journal of Anxiety Disorders, 23, 136–144. doi: 10.1016/j.janxdis.2008.05.002
  26. Trippe, R. H., Hewig, J., Heydel, C., Hecht, H., & Miltner, W. R. (2007). Attentional blink to emotional and threatening pictures in spider phobics: Electrophysiology and behavior. Brain Research, 1148, 149–160.
  27. Ward, R., Duncan, J., & Shapiro, K. (1996). The slow time-course of visual attention. Cognitive Psychology, 30, 79–109. doi: 10.1006/cogp.1996.0003
  28. Wiemer, J., Gerdes, A. B. M., & Pauli, P. (2013). The effects of an unexpected spider stimulus on skin conductance responses and eye movements: An inattentional blindness study. Psychological Research, 77, 155–166. doi: 10.1007/s00426-011-0407-7
  29. Yang, J., Wang, A., Yan, M., Zhu, Z., Chen, C., & Wang, Y. (2012). Distinct processing for pictures of animals and objects: Evidence from eye movements. Emotion, 12, 540–551. doi: 10.1037/a0026848

Copyright information

© Psychonomic Society, Inc. 2016

Authors and Affiliations

  1. Psychology Department, California State University San Marcos, San Marcos, CA, USA
