Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias


This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement—that is, better long-term memory for emotional than for neutral stimuli—and the emotion-induced recognition bias—that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account—that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.

Over the last two decades, a growing number of studies have employed spatial-frequency-filtered images (often faces) to study the processing of emotional stimuli. Figure 1 shows an example of a facial stimulus that is split into the low-spatial-frequency (LSF) components and the high-spatial-frequency (HSF) components. The theoretical background of those studies is twofold. First, it is known that our visual system is sensitive to and uses spatial frequencies for information processing (De Valois & De Valois, 1988). People can selectively attend to either high- or low-spatial-frequency information, and often do so automatically, depending on presentation time, distance, and most importantly, the diagnosticity of the information (e.g., Schyns & Oliva, 1999; for a review of spatial frequencies and face processing, see Ruiz-Soler & Beltran, 2006). Second, it is often assumed that low and high spatial frequencies differ in their capacities to automatically trigger emotion-related processes (e.g., Bannerman, Hibbard, Chalmers, & Sahraie, 2012; Holmes, Green, & Vuilleumier, 2005; Vuilleumier, Armony, Driver, & Dolan, 2003). It is hypothesized that a fast, potentially subcortical processing route triggers the amygdala by means of magnocellular processing, with greater sensitivity for low spatial frequencies (Morris, Öhman, & Dolan, 1999; Vuilleumier et al., 2003; see also Tamietto & de Gelder, 2010; but see Pessoa & Adolphs, 2010).

Fig. 1

Schematic depiction of the stimulus filtering, depicting a high-spatial-frequency-filtered stimulus (HSF) on the left and a low-spatial-frequency-filtered stimulus (LSF) on the right.

For example, in a seminal article, Vuilleumier et al. (2003) reported evidence for differential activation of the amygdala in response to HSF, LSF, and unfiltered (i.e., with the full broadband spatial-frequency [BSF] spectrum) fearful and neutral faces. Amygdala activity was stronger for emotional than for neutral faces under both BSF and LSF conditions during a gender categorization task, whereas there was no differential response for HSF faces. Hence, the amygdala as an emotion-related brain area seems to respond more strongly to emotional information only when this information contains low spatial frequencies, and this differential engagement can modulate activity in further brain areas (but see Morawetz, Baudewig, Treue, & Dechent, 2011). Likewise, nonconsciously presented emotional LSF information has been found to influence implicit (but not explicit) behavioral judgments (Laeng et al., 2010) and to elicit brain activity comparable to that elicited by visible emotional faces (Prete, Capotosto, Zappasodi, Laeng, & Tommasi, 2015), corroborating the assumption that such information is capable of triggering emotion-related processes. Differential processing of HSF and LSF information has also been observed behaviorally in studies focusing on fast and early processes of perception, attention, and spontaneous judgments (Bannerman et al., 2012; Holmes et al., 2005). This evidence suggests that conditions that promote automatic processing, such as short or nonconscious presentation durations, are advantageous for detecting processing differences between high and low spatial frequencies (Langner, Becker, & Rinck, 2012; Rohr & Wentura, 2014), in accordance with the assumption that the supposed processing pathways are especially important for this kind of processing (i.e., fast and early, nonconscious; Barrett & Bar, 2009; Pourtois, Schettino, & Vuilleumier, 2013; Tamietto & de Gelder, 2010).

This may be a reason why a recent review by De Cesarei and Codispoti (2013) reported only mixed evidence with regard to the link between spatial frequencies and emotion processing. Indeed, several studies—especially the ones focusing on intentional processing of spatial frequency information—found no LSF advantage in the processing of emotional stimuli. An argument that seems especially disputable is that the variation of LSF versus HSF is simply a means to differentially trigger magno- versus parvocellular neural pathways or subcortical versus cortical routes, which in turn differentially activate the amygdala. Skottun and Skoyles (2008) noted that magno- versus parvocellular processing cannot be separated using different spatial frequencies, at least for suprathreshold presentations. Likewise, given that several processing routes contribute to visual information processing, behavioral studies cannot inform the discussion about the underlying cortical or subcortical processing routes (Pessoa & Adolphs, 2010; Pourtois et al., 2013). Moreover, as far as differences in very fast processing are concerned, it can be questioned whether observed LSF advantages are more than a global precedence effect. That is, since the visual system often operates in a coarse-to-fine processing mode, an LSF advantage might simply reflect a temporal processing advantage of “coarse” visual information (De Cesarei & Codispoti, 2013).

For these reasons, one should be careful in the interpretation of spatial frequency effects; in particular, the manipulation of spatial frequencies should not be considered a direct proxy of nonobservable processing paths. However, as a substantial number of studies do provide evidence for a link between the manipulation of spatial frequencies and emotion processing, it seems worthwhile to further explore the impact of spatial frequency manipulations.

The present set of experiments broadened the range of applications to a yet unexplored area, that is, the emotional enhancement of long-term memory. Specifically, we presented LSF and HSF emotional and neutral facial stimuli in an incidental recognition memory task. If emotion-related processes are differentially triggered by high and low spatial frequencies, this differential processing should also lead to differences in emotion-enhanced memory (Talmi, 2013; see also below), and thus, the emotion-enhanced memory advantage should be especially pronounced for LSF faces. Moreover, a second, related phenomenon—the emotion-induced recognition bias (e.g., Windmann & Krüger, 1998; see also below)—might also be influenced by this manipulation. In the following sections, we give brief sketches of both phenomena and delineate how they should be affected by the manipulation of spatial frequencies.

Emotion-enhanced memory

Emotion typically enhances long-term memory, in the sense that emotional stimuli are better recognized than neutral ones. This effect is often explained with reference to its adaptive value for survival: Efficient storage and retrieval of personally relevant stimuli aids orientation in a complex world. In this vein, the effect is believed to be general in nature and has been found with diverse stimuli, such as pictorial scenes, faces, or words (Adelman & Estes, 2013; Gupta & Srinivasan, 2009; Kensinger & Corkin, 2004; Ochsner, 2000; Talmi & Moscovitch, 2004; Talmi, Schimmack, Paterson, & Moscovitch, 2007; Wang, 2013), and under intentional as well as incidental learning conditions (e.g., Righi et al., 2012; Wang, 2013).

Regarding faces, Wang (2013), for example, reported better recognition of emotional than of neutral faces immediately after learning and after a 24-h delay. This effect was especially pronounced for negative faces. Similarly, Sergerie, LePage, and Armony (2006) observed higher recognition rates for fearful than for happy and neutral faces after a short retention interval. Gupta and Srinivasan (2009), by contrast, reported better recognition of sad and happy faces (vs. neutral faces) after a 24-h delay. Thus, emotional faces can have an impact on recognition memory at short as well as long retention intervals, and the effect seems most robust for faces of negative valence but can be found for happy faces as well.

More specific explanations of this emotion-related recognition advantage refer to the specific characteristics of emotional relative to neutral stimuli. On a basic level, the affective value of an emotional stimulus and the associated physiological reaction increase the number of stimulus attributes, and thus the amount of detail potentially available for recollection, improving the likelihood of a vividly experienced memory (Kensinger, 2009; Mather & Sutherland, 2009; Talmi, 2013). Furthermore, emotional stimuli trigger specific emotion-related processes that mediate and moderate memory (for reviews, see, e.g., Dolcos, LaBar, & Cabeza, 2006; McGaugh & Cahill, 2003; Phelps & Sharot, 2008; see also above), leading to enhanced encoding, greater consolidation, and better retrieval. Of central importance in this regard is the amygdala (McGaugh, 2004; Phelps et al., 1998), which “registers” the emotional meaning of a stimulus, in particular stimulus-induced arousal (Zald, 2003). Increased amygdala activation at encoding is thought to moderate later memory for emotional events (Dolcos, LaBar, & Cabeza, 2004). In this vein, patients with amygdala damage typically do not show the same amount of memory enhancement for arousing stimuli as healthy participants do (Adolphs, Russell, & Tranel, 1999; LaBar & Phelps, 1998; Phelps et al., 1998; Richardson, Strange, & Dolan, 2004).

Some studies, however, have not shown a general advantage of emotional content on recognition accuracy, but—with reference to dual process theories of recognition memory that distinguish between recollection-based and familiarity-based recognition processes (see Yonelinas, 2002, for a review)—better recollection for emotional stimuli (for faces, see Johansson, Mecklinger, & Treese, 2004; Patel, Girard, & Green, 2012; for pictures, see Ochsner, 2000; Sharot & Yonelinas, 2008).

Thus, emotional stimuli, including faces, typically have a recognition advantage in memory. This effect seems most robust for negative faces, and amygdala activation seems critical for the effect to emerge (at least with regard to arousing stimuli; see Adolphs et al., 1999, and Phelps et al., 1998, for cases of preserved memory for valenced but deficient memory for arousing stimuli). Furthermore, an emotional memory advantage can—additionally or alternatively—be reflected in increased remember responses or greater recollection.

Thus, in line with former studies, our expectation was that emotional faces (i.e., happy and fearful ones) or, alternatively, only the negative ones, would produce better recognition performance than neutral faces. The main question of interest was whether the variation of spatial frequencies (i.e., LSF vs. HSF) would moderate the emotion-enhanced memory effect.

Note that, in comparison with most former studies employing LSF and HSF stimuli, in our study faces were presented for a rather sustained period of time (i.e., several seconds). Therefore, possible processing differences in LSF and HSF stimuli cannot be explained in terms of a temporal processing advantage for LSF stimuli. Specifically, it could be that LSF stimuli showed an advantage in emotion processing in previous studies because of the short presentation times employed. Visual processing is assumed to proceed from coarse to fine (Hegdé, 2008), and the “low route” advantage could also rely on fast processing conditions (Rohr & Wentura, 2014). However, after several seconds of presentation, differences in processing speed should not play a role. Thus, if we find no differential effect for LSF versus HSF stimuli on long-term memory enhancement for emotional stimuli (despite sufficient power), this would indirectly support the notion of fast processing routes that are differentially triggered by LSF and HSF stimuli (i.e., fast presentation times or speeded responses might be necessary to observe effects of spatial frequency on emotion processing). However, if we do find a differential effect in the long-term memory paradigm, temporal interpretations of the observed spatial-frequency effects are impeded. In this case, explanations must focus, first of all, on the concrete differences between LSF and HSF stimuli, which might then again be cautiously linked to hypotheses about differential activation of emotion-related processes and the underlying neuro-cognitive processing pathways.

Emotion-induced recognition bias

A second robust phenomenon, which cannot be ignored when studying emotion-related memory effects, is the emotion-induced recognition bias (Dougal & Rotello, 2007; Maratos, Allan, & Rugg, 2000; McNeely, Dywan, & Segalowitz, 2004; Windmann & Krüger, 1998; Windmann & Kutas, 2001). The bias has most often been studied with words: Negative words are associated with a more liberal response criterion; that is, participants are more likely to believe that subjectively negative words were presented in a study phase than to believe that neutral words were presented. Because this effect seems most pronounced with words, at least part of it can be attributed to the larger semantic cohesion of negative word lists, such that false recognition might result from spreading-activation processes in the semantic network of negative words (see, e.g., Maratos et al., 2000). However, the recognition bias has also been found with pictures (Bowen, Spaniol, Patel, & Voss, 2016) and faces (Johansson et al., 2004, for highly intense faces; but see Windmann & Chmielewski, 2008, for an inconclusive result with faces). These results suggest that the semantic-cohesion explanation cannot be the whole story. Emotional stimuli indeed seem to induce an illusory feeling of familiarity that leads participants to respond “old” (Windmann & Chmielewski, 2008). Thus, as an ancillary research question, we tested for the emotion-induced recognition bias in our study. Since the bias has not been consistently found with faces, this aspect of the present study can be seen as an exploratory contribution to this area of research.

Aim and overview of the present studies

In four experiments, we presented low- and high-spatial-frequency-filtered facial stimuli in incidental learning contexts. We included positive (happy faces), negative (fearful faces), and neutral stimuli (neutral-expression faces). Thus, we were able to test for a general advantage of emotional stimuli (i.e., happy and fearful as compared to neutral ones), as well as for a possible specific advantage of negative stimuli (i.e., fearful faces as compared to happy ones).Footnote 1

All experiments followed the same general procedure and used the same materials but employed different encoding task instructions (see below). Participants first completed an incidental learning task. After a retention interval of 12 min, filled with an unrelated filler task, there was a surprise old/new recognition task. At the end of the experiment, participants were debriefed and thanked for their participation.

We varied the encoding instructions across experiments for two reasons: (a) It has been shown that the task instructions critically influence the attentional focus on HSF or LSF information (e.g., Schyns & Oliva, 1999; Smith & Merlusca, 2014). Participants focus on the information that is most diagnostic for the task at hand. To ensure that our results were not driven by the specific attentional focus or other task demands, participants completed several incidental encoding tasks, which focused on different perceptual features of the faces. (b) It has also been shown that the emotional memory-enhancement effect can vary depending on encoding instructions (see, e.g., Ritchey, LaBar, & Cabeza, 2011). In our experiments, all encoding instructions were related to the appearance of the faces. Participants completed an age rating task in Experiment 1, a regional provenance classification task in Experiment 2, a gender categorization task in Experiment 3, and an emotion intensity rating task in Experiment 4.Footnote 2 Because, in line with our intentions, across-task variations in the results were negligible, we collapsed the results across experiments, with Task as a between-subjects factor.

We also included a remember/know/guess procedure in all experiments (see Yonelinas, 2002, for a review). That is, following classification of a stimulus as old, participants had to indicate the phenomenal base of their judgment. This procedure was included mainly as a precaution: As we discussed earlier, emotion-enhanced memory has sometimes been found only in terms of an increased rate of remember responses (i.e., recollection) for emotional faces rather than generally enhanced recognition accuracy (Johansson et al., 2004; Patel et al., 2012). Thus, to be prepared for this potential outcome, we required data from the remember/know/guess procedure.



A total of 250 undergraduate students from various faculties of Saarland University participated in the studies (N = 64 in Exp. 1, N = 67 in Exp. 2, N = 62 in Exp. 3, and N = 57 in Exp. 4), either in exchange for course credit or for a payment of 8 euros (180 females; age: Md = 21 years, range 18–46). The data of an additional three participants could not be analyzed due to an incorrect assignment of test stimuli (i.e., wrong list assignment in the recognition test; see below).


The study was based on a 3 (Emotion: joy, fear, neutral) × 2 (Spatial Frequency: HSF, LSF) within-subjects design. Incidental Study Task (age, provenance, gender, emotion intensity) was treated as a between-subjects (control) factor, varied across experiments.

A priori power calculations focused on specific contrasts (see also the Results section): The contrast fear/joy versus neutral represents the hypothesis of a general emotion advantage, whereas fear versus joy represents the hypothesis of a negativity advantage. We wanted to have enough power (a) to detect frequency-related differences in these contrasts and (b) to find (or refute with some legitimacy) possible moderations of this result by task. Regarding option (a), given a sample size of N = 250 and an α value of .05 (two-tailed), an effect of size d_z = 0.23 (i.e., a small effect according to Cohen, 1988) could be detected with a probability of 1 − β = .95. With regard to option (b), given the same parameter settings of N and α, a between-participants effect of size f = 0.26 (i.e., a medium-sized effect according to Cohen, 1988) could be detected with a probability of 1 − β = .95. Power calculations were done with G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007).
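The within-subjects power figure can be reproduced directly from the noncentral t distribution. The following sketch uses Python with SciPy rather than the G*Power tool the authors used; the function name is ours.

```python
from math import sqrt

from scipy.stats import nct, t


def paired_t_power(dz, n, alpha=0.05):
    """Two-tailed power of a paired (one-sample) t test.

    dz    : standardized mean difference (Cohen's d_z)
    n     : number of participants
    alpha : two-tailed significance level
    """
    df = n - 1
    nc = dz * sqrt(n)                  # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)  # two-tailed critical value
    # P(|T| > t_crit) when T follows a noncentral t with parameter nc
    return 1 - nct.cdf(t_crit, df, nc) + nct.cdf(-t_crit, df, nc)
```

For d_z = 0.23 and N = 250 this yields a power of approximately .95, in agreement with the reported calculation.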


We used 72 face images (24 per emotional category; i.e., joy, fear, neutral) of different people (half men, half women for each emotion category). Half of them were taken from the Radboud Faces Database (Langner et al., 2010), and half were taken from the Karolinska Directed Emotional Faces database (KDEF; Lundqvist, Flykt, & Öhman, 1998). Emotion × Gender subsets were balanced with regard to set (Radboud/Karolinska). Ten additional neutral face images served as practice, primacy, and recency stimuli (see below for further details on the study and test lists). Appendix Table 6 shows the recognition rates, mean arousal ratings (KDEF only), and intensity ratings for our selection of stimuli.

All images were set to a size of 199 × 254 pixels (6 × 8 cm on the screen; approx. 4.9 × 6.5 deg of visual angle) and converted to 8-bit grayscale. Shoulders and distracting features (e.g., long hair) were removed using Adobe Photoshop. The mean pixel intensity (i.e., luminance) of these initial images was M = 148.5 (SD = 85.6); the mean Michelson contrast was M = 0.94 (SD = 0.0009). The stimuli were then filtered in MATLAB (MathWorks Inc.) using a FIR filter (order = 50)Footnote 3 with cutoff frequencies of 24 cycles per face (cpf)/3.7 cycles per degree (cpd) for high spatial frequencies and 6 cpf/0.9 cpd for low spatial frequencies (see Vuilleumier et al., 2003, for similar cutoffs). After filtering, the Michelson contrast of all pictures was equalized to have a minimum luminance of 20% (i.e., value 51 in the 0–255 code) and a maximum luminance of 85% (i.e., value 217), resulting in a Michelson contrast of 0.62 for all images. Correspondingly, luminance differed only slightly across pictures: M = 141.87, SD = 2.49, for LSF stimuli, with emotion-category means ranging from 140.77 for fear to 142.46 for neutral stimuli; M = 134.54, SD = 7.66, for HSF stimuli, with means ranging from 133.78 for fear to 135.78 for joy.Footnote 4
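The filtering pipeline can be sketched as follows. This is not the authors' MATLAB code; it is a Python approximation using a separable order-50 FIR kernel, with the cutoff in cycles per face converted to a normalized digital frequency and the contrast equalization applied afterward. The function names and the exact kernel design are our assumptions.

```python
import numpy as np
from scipy.signal import firwin, sepfir2d


def sf_filter(image, cutoff_cpf, mode="low", order=50):
    """Separable FIR spatial-frequency filter (sketch, not the original code).

    image      : 2-D grayscale array
    cutoff_cpf : cutoff in cycles per face width (6 for LSF, 24 for HSF here)
    mode       : "low" keeps frequencies below the cutoff, "high" those above
    """
    width = image.shape[1]
    # cycles/face -> cycles/pixel, normalized to Nyquist (0.5 cycles/pixel)
    norm_cutoff = (cutoff_cpf / width) / 0.5
    taps = firwin(order + 1, norm_cutoff, pass_zero=(mode == "low"))
    return sepfir2d(image.astype(np.float64), taps, taps)


def equalize_contrast(image, lo=51.0, hi=217.0):
    """Rescale to the 20%-85% luminance range, i.e. 0.62 Michelson contrast."""
    scaled = (image - image.min()) / (image.max() - image.min())
    return lo + scaled * (hi - lo)
```

Note that (217 − 51)/(217 + 51) ≈ 0.62, which reproduces the Michelson contrast reported above.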

The 144 resulting experimental images (i.e., 72 LSF and 72 HSF images) were sorted into four study lists of 36 stimuli. Each list comprised 18 HSF and 18 LSF pictures, balanced with regard to emotion, gender, and set (Radboud/Karolinska). Items were assigned to lists such that List 1 and List 2 (as well as List 3 and List 4) comprised the same identities with the same emotion expression but complementary filters (i.e., if an image was included in List 1 [List 3] in its LSF form, its HSF form was included in List 2 [List 4], and vice versa). Each participant received only one list in the encoding phase, supplemented by a further list that served as distractors in the subsequent old/new recognition test: Participants who studied List 1 or List 2 received List 3 or List 4, respectively, in the recognition test, whereas participants who studied List 3 or List 4 received List 1 or List 2, respectively. Item positions on the lists were fixed (i.e., we created one random sequence, with the restriction that no frequency or emotion appeared more than twice in succession). Additionally, four neutral pictures (one LSF and one HSF face of each gender) were inserted at the beginning and the end of each list to remove the impact of primacy and recency effects. The same pictures served as primacy and recency fillers on all lists. Two further neutral faces (one LSF, one HSF) served as practice stimuli for the incidental encoding task. The experiment was run in E-Prime 1.2 on a standard PC with a 17-in. CRT monitor. All stimuli were presented on a uniformly black background.

To ensure that potential memory effects could be unambiguously interpreted (i.e., to ensure that any effects would not reflect simple perception-based differences), all filtered stimuli were presented to a separate sample of participants (N = 29) in a forced-choice categorization task. The results of this pilot study are reported in Appendix 2.


Participants were tested in groups of up to five participants, with individual participants separated by partition walls. Participants were informed that they would participate in several short, unrelated experiments, and that all instructions would be given on the screen. Distance to the screen was adjusted to 70 cm, and participants were prompted to hold this distance throughout the experiment.

First, participants performed the incidental study task. They were informed that they would be presented with two types of face images (i.e., “blurry” and “sketchy” versions) and that their task was (a) to estimate the person’s age (Exp. 1), (b) to guess the person’s regional provenance (Exp. 2), (c) to categorize the faces on the basis of gender (Exp. 3), or (d) to rate the intensity of the displayed emotion (Exp. 4). As a cover story, we told participants that we were interested in the influence of appearance (i.e., “blurriness” or “sketchiness”) on their estimates.

Each trial began with the presentation of a fixation cross for 500 ms, followed by the face for 5 s. Then, participants gave their response by clicking one out of several buttons on the screen. For the age rating task (Exp. 1), participants were given eight response options, ranging from “18–19” to “32–33.” For the regional provenance task (Exp. 2), participants could choose between five regions (i.e., Central Europe, Eastern Europe, Southern Europe, North America, Scandinavia). For the gender task (Exp. 3), the choices were “male” and “female.” For the emotion intensity-rating task (Exp. 4), participants were given a scale from 1 (not intense at all) to 7 (extremely intense).

Initially, participants were given two practice trials with neutral stimuli. Immediately after the incidental study task, they performed a 12-min filler task unrelated to either emotion or face processing.Footnote 5 Participants were then administered the surprise old/new recognition test. They were informed that they would be presented with faces, some of which they had already seen. Each trial started with a 500-ms fixation cross, followed by a face image with two response boxes labeled “old” and “new.” For faces classified as old, participants were prompted to additionally indicate whether they remembered the face, whether they just knew they had seen the face but could not retrieve specific details, or whether they had just guessed. Participants gave their remember/know/guess response by pressing one of three labeled keys (i.e., either the “S” [guess], “G” [know], or “K” [remember] key on a standard QWERTZ keyboard). After the recognition task, participants filled in a questionnaire to check whether they had suspected at any time during the study or filler tasks that a memory test would be conducted.Footnote 6 Afterward, participants were debriefed and thanked for their participation.

Data analyses

For the analyses of recognition sensitivity and bias, we used the signal detection parameters d’ and c as dependent variables.Footnote 7 Because several cases had hit or false-alarm rates of 0 or 1, we applied the correction suggested by Snodgrass and Corwin (1988) to d’ and c, to include all cases in the analyses.
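For concreteness, the corrected signal detection measures can be computed as follows (a Python sketch; the Snodgrass–Corwin correction adds 0.5 to each response count and 1 to each item total before converting counts to rates, so that rates of exactly 0 or 1 cannot occur):

```python
from scipy.stats import norm


def corrected_rates(hits, n_old, fas, n_new):
    """Snodgrass & Corwin (1988) correction: (count + 0.5) / (n + 1)."""
    return (hits + 0.5) / (n_old + 1), (fas + 0.5) / (n_new + 1)


def d_prime_and_c(hits, n_old, fas, n_new):
    """Equal-variance sensitivity d' and response criterion c."""
    h, f = corrected_rates(hits, n_old, fas, n_new)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -0.5 * (zh + zf)
```

With this correction even a perfect scorer (e.g., 18 hits and 0 false alarms out of 18 items each) receives a finite d’; a more negative c indicates more liberal responding.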

On a more critical note, it can be shown (see, e.g., Rotello, Masson, & Verde, 2008; Verde & Rotello, 2003) that d’ differences between conditions are biased if (a) they are accompanied by differences in response bias c and (b) the equal-variance assumption of the signal detection model for the familiarity distributions of new and old items does not hold. Differences in c were observed here (see below), and the equal-variance assumption can be questioned for recognition data (see, e.g., Ratcliff, Sheu, & Gronlund, 1992): Typically, the variance for old items is larger than that for new items. Under the unequal-variance signal detection model, the sensitivity measure d_a is defined as (see, e.g., Macmillan & Creelman, 2005):

$$ d_a = \sqrt{\frac{2}{1+s^2}}\cdot \left( z(\mathit{Hits}) - s\cdot z(\mathit{FA})\right), $$

with s being the ratio of the standard deviation of the new-item distribution to that of the old-item distribution. (As can easily be seen, d_a is equivalent to d’ for s = 1.) The ratio s can be estimated by ROC analyses (e.g., based on confidence ratings). We did not have ROC dataFootnote 8; however, as a reasonable estimate of s for these kinds of data, the value s = 0.8 has been suggested (see Martin et al., 2011; Ratcliff et al., 1992; Verde & Rotello, 2003). Replacing d’ by d_a (with s = 0.8) allowed for a second, more conservative analysis of sensitivity.
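Under these definitions, the unequal-variance measure is a one-liner (Python sketch, with s = 0.8 as the assumed zROC slope):

```python
from math import sqrt

from scipy.stats import norm


def d_a(hit_rate, fa_rate, s=0.8):
    """Unequal-variance sensitivity (Macmillan & Creelman, 2005).

    s is the ratio of new-item to old-item standard deviations;
    for s = 1 the expression reduces exactly to z(Hits) - z(FA), i.e., d'.
    """
    return sqrt(2.0 / (1.0 + s ** 2)) * (norm.ppf(hit_rate) - s * norm.ppf(fa_rate))
```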

Finally, to give a condensed report on the remember/know/guess data, we calculated measures of recollection and familiarity (see, e.g., Sharot & Yonelinas, 2008; Yonelinas & Jacoby, 1994). Recollection was defined as the proportion of correct “remember” responses to an item set (i.e., remember hits) minus the corresponding false alarm rate—that is, recollection = p(rememberold) – p(remembernew). Familiarity was defined as the proportion of “know” responses divided by the proportion of non-“remember” responses, corrected for false alarms—that is, familiarity = p(knowold)/[1 – p(rememberold)] – p(knownew)/[1 – p(remembernew)].
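The two estimates above translate directly into code (a sketch; the inputs are per-participant response proportions, with the variable names ours):

```python
def recollection(p_rem_old, p_rem_new):
    """Remember hits minus remember false alarms."""
    return p_rem_old - p_rem_new


def familiarity(p_know_old, p_rem_old, p_know_new, p_rem_new):
    """Know responses conditional on non-remember responses,
    corrected for the corresponding rate on new items."""
    return (p_know_old / (1.0 - p_rem_old)
            - p_know_new / (1.0 - p_rem_new))
```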

We used the multivariate approach to repeated measures analysis with a-priori-specified contrasts according to our hypotheses, thereby transforming the tripartite factor of emotion into a vector of two orthogonal contrast variables (see, e.g., O’Brien & Kaiser, 1985; for applications, see Petrova & Wentura, 2012; Rohr, Degner, & Wentura, 2012). That is, the first contrast compared the averages for happy and fearful stimuli with those for neutral stimuli. This contrast represents the hypothesis that emotional stimuli (in general) produce larger memory effects than do neutral stimuli. The second contrast was the contrast between happy and fearful stimuli, representing the hypothesis of larger memory effects for negative than for positive items. Furthermore, task was included as a control factor in all analyses. For all interaction tests involving the Task factor, we report the F approximation that corresponds to the Pillai–Bartlett statistic (see Olson, 1976).
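The two orthogonal contrasts amount to per-participant difference scores tested against zero; a minimal Python sketch of this step (our variable names; the multivariate test of the full emotion factor is not reproduced here):

```python
import numpy as np
from scipy.stats import ttest_1samp


def emotion_contrasts(joy, fear, neutral):
    """Orthogonal contrasts on per-participant d' scores.

    joy, fear, neutral : 1-D arrays with one d' value per participant.
    Returns the one-sample t-test results for both contrasts.
    """
    c1 = (joy + fear) / 2.0 - neutral  # emotional vs. neutral
    c2 = fear - joy                    # negative vs. positive
    return ttest_1samp(c1, 0.0), ttest_1samp(c2, 0.0)
```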


Unless otherwise noted, all effects referred to as statistically significant throughout the text are associated with p values below .05, two-tailed. Table 1 shows the relative hit rates and false alarm rates as a function of frequency, emotional expression, and task.

Table 1 Relative hit rates and false alarm rates as a function of frequency, emotional expression, and task

Recognition sensitivity

With d’ scores as the dependent variable, we conducted a 2 (Frequency) × 3 (Emotion) × 4 (Task) MANOVA for repeated measures, with Task as a between-subjects factor (see Fig. 2). The analysis yielded four significant effects (see Table 2 for the inferential statistics). First, we observed a main effect of frequency. On average, performance was better for HSF than for LSF faces. Second, there was a main effect of emotion, which was entirely due to the first orthogonal contrast (joy/fear vs. neutral). Third and most important, an interaction of emotion and frequency emerged: Both orthogonal contrasts were significant. Finally, task yielded a main effect. As can be seen in Table 1 and Fig. 2, the overall performance differed between tasks. Because of the significant Emotion × Frequency interaction, we report separate analyses for low- and high-frequency stimuli. Note that no significant interactions involved the Task factor.

Fig. 2

Mean d’ recognition scores for high (left) and low (right) spatial frequencies for neutral, happy, and fearful faces across all experiments. The thin lines represent the results of each experiment; the thick lines represent the aggregated results.

Table 2 Inferential statistics for memory performance (d’) for the overall analysis and the analysis of low and high frequency stimuli

Low frequencies

A 3 (Emotion) × 4 (Task) MANOVA for repeated measures yielded a main effect of emotion (see Table 2 for the inferential statistics). Remarkably, this effect was present across all four tasks, Fs(2, 55–65) > 4.30, ps < .019, ηp²s > .125. The first contrast (joy/fear vs. neutral) was significant and was associated with a difference between d’ for emotional faces and d’ for neutral faces of M = .32 (SD = .82), which corresponds to an effect size of dZ = .39 (i.e., an effect just below “medium size” according to Cohen, 1988). Again, this effect held for all four tasks, Fs(1, 56–66) > 6.23, ps < .016, ηp²s > .085. Thus, as hypothesized, emotional LSF stimuli were recognized better than neutral LSF stimuli. The second contrast (joy vs. fear) was significant as well, suggesting that memory performance was slightly better for fearful than for happy LSF faces. The effect was, however, very small, M = .13 (SD = .87, dZ = .15), and it was significant for only one task (regional provenance), F(1, 66) = 5.34, p = .023, ηp² = .076; for the emotion task it was marginal, F(1, 56) = 3.69, p = .060, ηp² = .062, and for the remaining tasks, Fs < 1.43, ps > .236. Again, no significant interactions involved task. Thus, recognition memory was relatively comparable for happy and fearful faces.

High frequencies

A 3 (Emotion) × 4 (Task) MANOVA for repeated measures yielded a main effect of emotion as well (see Table 2 for the inferential statistics). Again, the first contrast (joy/fear vs. neutral) was significant. Thus, the emotional memory advantage was also present for HSF stimuli. However, as already shown by the interaction effect in the overall analysis, this effect was significantly smaller than the corresponding effect for LSF stimuli. The difference between d’ for emotional faces and d’ for neutral faces was M = .14 (SD = .75), which corresponds to an effect size of dZ = .19 (i.e., a “small” effect according to Cohen, 1988). Moreover, it was significant only for the gender task, F(1, 61) = 5.67, p = .020, ηp² = .085; Fs < 3.24, ps > .076, for the remaining tasks. The second contrast was not significant. Again, no significant interactions involved task. Thus, in line with our hypotheses, the emotional memory advantage in the high spatial frequencies was relatively small and did not hold across tasks.

Assuming unequal variances: da instead of d’

As we outlined above, we conducted alternative analyses of sensitivity by using da (with s = 0.8) instead of d’. Table 7 shows the inferential statistics for the 2 (Frequency) × 3 (Emotion) × 4 (Task) MANOVA for repeated measures with Task as a between-subjects factor. As can easily be seen by comparing Table 7 with Table 2 (i.e., the analyses using d’ as the dependent variable), everything was essentially the same except for one detail: The difference between happy and fearful low-frequency faces disappeared if we assumed unequal variances (with s = 0.8). This result leads us to conclude that the small difference in recognition memory between happy and fearful faces reported above was not reliable; rather, recognition memory was comparable in the happy and fearful face conditions.
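The index da generalizes d’ to unequal old/new evidence variances, with s the ratio of the two standard deviations. A sketch under one standard signal-detection definition (the rates below are made up, not the paper's data):

```python
from math import sqrt
from statistics import NormalDist

def d_a(hit_rate: float, fa_rate: float, s: float = 0.8) -> float:
    """Sensitivity assuming unequal variances; s is the slope (SD ratio).
    Standard signal-detection definition assumed here, not taken from
    the paper. With s = 1 this reduces to d'."""
    z = NormalDist().inv_cdf
    return sqrt(2.0 / (1.0 + s ** 2)) * (z(hit_rate) - s * z(fa_rate))

equal_var = d_a(0.80, 0.20, s=1.0)    # ≈ 1.68, identical to d' for the same rates
unequal_var = d_a(0.80, 0.20, s=0.8)  # ≈ 1.67
```

With s close to 1 the two indices barely differ for symmetric rates, which is why the analyses mostly converge; the indices can diverge when hit and false-alarm rates are asymmetric.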

Recollection and familiarity

The remember/know/guess data are shown in Table 8; Table 3 shows descriptive statistics for the indices of recollection and familiarity. A 2 (Index: recollection vs. familiarity) × 2 (Frequency) × 3 (Emotion) × 4 (Task) MANOVA for repeated measures, using the recollection and familiarity indices as dependent variables, yielded a significant Index × Frequency × Emotion interaction, F(2, 245) = 3.76, p = .025, ηp² = .030, which was not further moderated by task, F < 1, indicating that recollection and familiarity were influenced by both emotion and spatial frequency. In parallel to the recognition results, the interaction was mainly due to the first contrast (i.e., neutral vs. fear/joy), F(1, 246) = 4.64, p = .032, ηp² = .019; F < 1.06, for the moderation by task. The second contrast was nonsignificant, F(1, 246) = 2.54, p = .113, ηp² = .010; F < 1, for the moderation by task. Thus, recollection and familiarity differed depending on emotionality and frequency. To investigate these differences in more detail, we again analyzed low and high spatial frequencies separately.
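The recollection and familiarity indices are derived from the remember/know/guess proportions. One common convention is the independence remember/know (IRK) correction, sketched below; we assume this convention purely for illustration, since the paper's exact formulas are specified in its Method section, not in this excerpt.

```python
def irk_indices(p_remember: float, p_know: float):
    """Independence remember/know (IRK) correction (assumed convention,
    not necessarily the one used in this study): recollection is the
    remember rate; familiarity is the know rate conditional on the
    absence of recollection."""
    recollection = p_remember
    familiarity = p_know / (1.0 - p_remember)
    return recollection, familiarity

rec, fam = irk_indices(0.50, 0.25)  # -> (0.5, 0.5)
```

The conditionalization reflects the assumption that "know" responses can only be observed on trials where recollection failed.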

Table 3 Mean familiarity and recollection values as a function of frequency and emotional expression

Low frequencies

As can be seen in Table 3, the 2 (Index) × 3 (Emotion) × 4 (Task) repeated measures MANOVA for low frequencies indicated that the first contrast of the emotion effect was predominantly familiarity-based, F(1, 246) = 5.38, p = .021, ηp² = .021, for the Index × Emotion interaction. Nevertheless, the contrast (emotion vs. neutral) was significant for familiarity, F(1, 246) = 20.19, p < .001, ηp² = .076, as well as for recollection, F(1, 246) = 8.30, p = .004, ηp² = .033.

High frequencies

The corresponding analysis for high spatial frequencies showed that the first contrast was numerically more based on recollection differences, F(1, 246) < 1, for the Index × Emotion interaction. The contrast was significant for familiarity, F(1, 246) = 7.73, p = .006, ηp² = .030, as well as for recollection, F(1, 246) = 24.83, p < .001, ηp² = .092.

Thus, familiarity as well as recollection contributed to the emotional memory advantage for both LSF and HSF stimuli, albeit to different degrees. The LSF emotional memory advantage was primarily due to familiarity; the (relatively small) HSF emotional memory advantage was primarily based on recollection. From the perspective of emotion memory, one might have expected that the LSF emotion advantage would also be driven by recollection. However, it is not clear which encoding and mnemonic processing stages are affected, and in what way, by the spatial-frequency manipulation. Thus, the observed outcome might plausibly be explained by the specific filtering conditions; we will return to possible reasons in the Discussion.

Response bias

Mean values of the response bias c are given in Table 4. Positive values indicate a more conservative criterion, negative values a more liberal response criterion. To examine whether response criteria differed between the emotional and neutral stimuli, we conducted a 2 (Frequency) × 3 (Emotion) × 4 (Task) MANOVA for repeated measures, with Task as a between-subjects factor and all other factors as within-subjects variables, using c as the dependent variable (see Table 5 for the inferential statistics). The analysis yielded main effects of spatial frequency and emotion, as well as a significant interaction of emotion and spatial frequency (which was not further moderated by task). Thus, response bias differed depending on frequency and emotion. Again, we analyzed high and low spatial frequencies separately to further disentangle the interaction.
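The response criterion c follows from the same z-transformed rates as d’; negative values indicate a liberal ("old"-prone) criterion. A minimal sketch with made-up rates:

```python
from statistics import NormalDist

def criterion_c(hit_rate: float, fa_rate: float) -> float:
    """Response bias c = -(z(hit rate) + z(false-alarm rate)) / 2.
    c < 0: liberal ("old"-prone); c > 0: conservative."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2.0

neutral_bias = criterion_c(0.80, 0.20)  # 0.0 (symmetric rates, unbiased)
liberal_bias = criterion_c(0.90, 0.50)  # ≈ -0.64 (many "old" responses)
```

Note that c is independent of d’: two conditions can be equally discriminable yet differ in how readily participants respond "old", which is exactly the dissociation examined here.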

Table 4 Mean response bias scores (c) as a function of frequency, emotional expression, and task (SD in parentheses)
Table 5 Inferential statistics for response bias (c) for the overall analysis and for the separate analyses of low- and high-frequency stimuli

Low frequencies

For low spatial frequencies, a 3 (Emotion) × 4 (Task) MANOVA for repeated measures yielded a main effect of emotion, which was solely due to the second contrast (and which was not further moderated by task; see Footnote 9). The response criterion for negative faces was significantly more conservative than those for positive and for neutral faces. However, we should not put too much weight on this result; it is the only one reported in this section that can potentially be reduced to differences in categorization performance (see Appendix 2).

High frequencies

For high spatial frequencies, a 3 (Emotion) × 4 (Task) repeated measures MANOVA yielded a main effect of emotion—which, however, was due solely to the first contrast: The response criterion for emotional faces was significantly more liberal than the one for neutral faces.


The present study provides the first evidence that emotional memory enhancement can indeed be manipulated by means of spatial-frequency filtering. Across all tasks, low-spatial-frequency emotional faces elicited a significantly larger emotional memory enhancement than did high-spatial-frequency faces, which showed only a small enhancement. In addition, the emotion-induced recognition bias was also influenced by the spatial-frequency manipulation: Whereas high-frequency emotional faces were associated with a more liberal response criterion, fearful low-frequency faces were associated with a more conservative one. The effects were not moderated by the type of encoding task. That is, we cannot attribute the results to the demands of any particular task (e.g., that a given task is better supported by high or by low frequencies, thereby creating different depths of encoding for the different frequencies). In the remainder of this Discussion, we will discuss the two phenomena—emotion-enhanced memory and emotion-induced recognition bias—separately.

Emotion-enhanced memory

The larger sensitivity advantage (relative to neutral faces) for emotional low-frequency faces than for emotional high-frequency faces held across tasks and for different alternative dependent variables (i.e., d’, da with s = 0.8, and Pr) and can therefore be considered a robust finding. In addition, in the analyses of d’ (as well as Pr), a small fear-over-happy advantage was found for low-frequency faces (but not for high-frequency faces). However, this result is more equivocal, since it was not confirmed in the alternative analyses using da as the dependent variable and was not found for all tasks; we will thus refrain from further elaborating on this effect.

Although differences in emotional processing provide the most straightforward explanation for the observed results, there is a potential caveat: A critical reader might think that the Emotion × Frequency interaction might have been caused by differences in recognition performance—specifically, by comparable levels of recognition for emotional LSF and HSF faces, but worse recognition of neutral LSF faces than of neutral HSF faces. However, this notion (1) cannot explain why neutral faces in particular should show this difference and (2) completely ignores the main effect of recognition for LSF versus HSF faces. LSF and HSF facial stimuli differ with regard to their level of distinctiveness, with HSF stimuli providing more details for encoding a distinctive representation and more details to act as a more distinctive retrieval cue. Slightly different levels of overall recognition performance for HSF and LSF stimuli can thus be expected a priori. Consequently, the only way to adequately interpret the Emotion × Frequency interaction is to describe it as a larger recognition advantage for emotional stimuli in the LSF condition than in the HSF condition.

Given the main effect of frequency, a second caveat would be to presume a ceiling effect for emotional HSF faces. However, the overall level of recognition performance was far from ceiling, even for HSF stimuli. Moreover, the different tasks were associated with slightly different levels of overall recognition performance, but the presence versus absence of a recognition advantage for emotional HSF faces across tasks did not vary with the overall level of recognition performance (see Fig. 2). Thus, we take the Emotion × Frequency interaction to indicate a genuinely larger effect of emotional content in LSF than in HSF stimuli.

This pattern of results suggests that spatial-frequency filtering can indeed be used to selectively engage processes involved in the interaction of emotion and memory. The findings corroborate the importance of low spatial frequencies for emotion processing (Laeng et al., 2010; Vuilleumier et al., 2003) and extend these findings from perception and judgment to memory. By means of additional categorization data (see Appendix 1), we could rule out that differences in the emotional categorizability of the filtered faces could account for the memory effects.

Importantly, the results cannot be attributed solely to fast and early perceptual processes, given the relatively long (i.e., 5-s) presentation time at encoding. Returning to our critical discussion from the introduction, we see the following routes of interpretation. First, we can interpret the results in line with the existing literature on frequency-filtered stimuli. That is, we assume that LSF and HSF stimuli differentially engage emotion-related processing paths, with LSF stimuli activating these paths more than HSF stimuli. However, this interpretation requires the additional assumption that the differential engagement is not constrained to the first fractions of a second but sustained over a longer period. If we take this stance, it is interesting to note that positive and negative low-frequency emotional faces were associated with a comparable recognition advantage, suggesting no difference in the processing of positive and negative valence. Given that our stimuli were carefully matched on arousal, this result implies that the previously observed influence of valence on emotional memory (i.e., a greater recognition advantage for negative vs. positive items, relative to neutral items) may reflect differences in stimulus arousal rather than differences in underlying memory processes (for this argument, see also Mather & Sutherland, 2009).

Second, we can take a step back and consider other differences between LSF and HSF facial stimuli (apart from the triggering of different pathways) that could account for the results. Indeed, the following argument can, in principle, explain the results without referring to different processing paths: Earlier, we explained the main effect of frequency by referring to differences in distinctiveness between high- and low-frequency faces. In an abstract sense, faces can be characterized by x features (on average) that distinguish the exemplars within the set of HSF faces, and by y features (on average) that distinguish the exemplars within the set of LSF faces, with x being larger than y. Everything else being equal, a larger feature vector, that is, a more distinct pattern, promotes recognition success—hence the main effect of frequency. If we assume (1) that emotional connotation is one of the features, and (2) that the presence of a feature (i.e., “looks happy” or “looks fearful”) is represented in memory but not its absence (i.e., “neutral-looking” is not a feature—take a beauty spot as an analogue), we can easily explain the Emotion × Frequency interaction: Neutral faces “miss” one feature and are thus less distinct than emotional faces—hence the general recognition advantage of emotional faces. Finally, the relative importance of a distinguishing feature directly determines differences in recognition success: The ratio 1/y is larger than the ratio 1/x—hence, the larger emotional advantage for LSF than for HSF stimuli.
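The 1/x versus 1/y argument can be made concrete with toy numbers; the feature counts below are ours and purely illustrative, not estimates from the stimuli.

```python
# Assumed feature counts (illustrative): HSF faces carry more
# distinguishing features (x) than the blurrier LSF faces (y)
x_hsf = 20
y_lsf = 5

# Relative weight of one additional (emotional) feature in each set
gain_hsf = 1 / x_hsf  # 0.05: small relative gain in distinctiveness
gain_lsf = 1 / y_lsf  # 0.20: large relative gain in distinctiveness

# Hence a larger emotional memory advantage is expected for LSF stimuli
assert gain_lsf > gain_hsf
```

On this account, the same single emotional feature matters more when the remaining feature vector is sparse, which is exactly the pattern observed in the recognition data.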

This explanation fits the analyses of familiarity and recollection as well. The emotional memory advantage in the low spatial frequencies was primarily based on familiarity, whereas the small emotional memory advantage in the high spatial frequencies was to a larger extent due to recollection. The reduced stimulus distinctiveness of LSF stimuli might have prompted fewer associations at encoding (e.g., “She reminds me of my school friend Jane.” or “His nose looks like an arrow.”)—in turn meaning that fewer details may have been available at retrieval to warrant a “remember” response. Thus, it seems plausible that the emotional memory enhancement in the low spatial frequencies was primarily driven by familiarity. Alternatively, the “blurry” perceptual impression of LSF faces might have decreased confidence in the old/new judgments, thereby leading to a shift in the response criterion toward a “know” rather than a “remember” response. The resulting remember/know/guess response pattern might thus also indicate differences in memory strength rather than differences in the underlying memory processes (cf. Wixted & Mickes, 2010).

The explanation of the Emotion × Frequency interaction in terms of distinctiveness does not refer to different affective processing paths for both high- and low-frequency faces. It does not even refer to affective processing of emotional faces at all. Certainly, this explanation cannot account for the memory advantage of emotional stimuli in general, given the vast amount of evidence regarding the impact of specific emotion-related processes on memory (i.e., physiological influences and different neuronal processing). However, the distinctiveness notion is able to explain the memory advantage of emotional faces in behavioral studies, since it rests on the assumption that an emotional expression constitutes a recognition-relevant feature whereas neutrality does not. If the explanation accounts for such results, it can potentially be tested by creating encoding contexts that switch the asymmetry (i.e., that neutrality is encoded as a feature, emotion not). For example, imagine an encoding context that presents happy and neutral faces as depicting guests of a party. Then, “neutrality” of a facial expression might be better encoded than the emotional “happy” expression, which would be considered the default in this context (see Schmidt, 1991, regarding the assumption that unusual or unexpected items are perceived as more distinct; McDaniel & Geraci, 2006, for a review). Regarding the present study, this would mean that distinctiveness and diagnosticity are the driving factors underlying the present results, not a specific link between low spatial frequencies and emotion processing.

Emotion-induced recognition bias

Although not the central focus of our experiments, our analyses revealed that spatial frequencies not only had an impact on emotional memory, but also moderated the effect of emotion on the response criterion. For high frequencies, emotional faces (i.e., positive and negative ones) were associated with a more liberal response criterion than neutral faces. This result corresponds with an emotion-induced recognition bias that is typically found with negative words but has also been reported with faces (Johansson et al., 2004; but see Windmann & Chmielewski, 2008) and positive words (Windmann & Chmielewski, 2008). For low frequencies, however, the result did not replicate. Indeed, negative LSF faces were associated with a significantly more conservative response criterion than were positive faces. However, as already indicated in the Results section, we should not put too much weight on this detail, since it can potentially be reduced to differences in categorization performance (i.e., it is difficult to recognize whether a face is fearful or neutral, leading to a shift in response criterion; see Appendix 2).

The more liberal response criterion for emotional HSF faces (than for neutral ones) is only a small effect. However, its size matches the corresponding (nonsignificant) effect found for faces by Windmann and Chmielewski (2008). There are three main explanations of the emotion-induced recognition bias (see, e.g., Windmann & Chmielewski, 2008), two of which attribute it to genuine emotion processing. First, the executive-control account (Windmann & Krüger, 1998) claims that there is an asymmetry between “old” and “new” responses: An “old” response is the preactivated response tendency that is (often) executed on the basis of automatic processes; a “new” response is the result of more controlled processes. Emotional stimuli (especially threatening ones) tend to interrupt or impair ongoing controlled processes, and it follows that the tendency to give the preactivated “old” response increases. Second, according to a variant of the memory bias account, “emotion circuits in the brain…boost the activity of currently activated sensory and mnemonic representations in such a way that emotional stimuli appear clearer and more vivid than nonemotional stimuli” (Windmann & Chmielewski, 2008, p. 764). This clarity and vividness is misattributed to familiarity, thus increasing the rate of “old” responses.

Neither account, however, can explain why, in our results, the HSF stimuli in particular were associated with the recognition bias, for two reasons. First, although it may seem plausible that LSF faces triggered emotional processes to a larger degree than HSF faces, given the long presentation duration, it would seem far-fetched to claim the opposite. Second, if the recognition bias is explained with reference to genuine emotional processes, then consequentially (though not necessarily) the emotion-enhanced recognition performance should be explained with reference to emotional processes as well. Doing so, however, creates a dilemma, because one would need to assume stronger emotional processing of HSF faces in the case of the recognition bias, but stronger emotional processing of LSF faces in the case of emotion-enhanced recognition performance.

However, a third account of the recognition bias—the semantic cohesion explanation (Maratos et al., 2000)—focuses on the fact that the effect has been found most consistently for negative word stimuli. Typically, the negative words in a study have stronger semantic interrelations. Therefore, semantic priming processes among old and new words can make negative words appear more familiar, inducing a bias toward “old” responses. Of course, due to its specific focus, this explanation cannot be straightforwardly applied to facial stimuli. However, a reinterpretation of this account might make it applicable. If we assume that the cohesion does not result in priming processes (i.e., a mutual facilitation of old and new negative words), but simply means that there is greater similarity (overlap) in the feature vectors that represent negative word concepts (as compared to positive and neutral ones), we can explain the recognition bias by the process of retrieval confusion: Due to overlap, a new negative word has an increased likelihood of activating the feature vector of an old word. This reinterpretation of the semantic cohesion account can, in principle, be applied to our materials as well, if we make the additional post-hoc assumption that happy HSF faces (and, likewise, fearful HSF faces) are on average perceived as more similar to one another than are neutral HSF faces (i.e., both might be perceived as expressive, and therefore more similar to each other than to neutral faces, whereas LSF faces might provide generally fewer associations, as we noted above). However, this assumption remains speculative at present. Further research will be necessary to clarify what underlies the emotion-induced recognition bias in our study. We wanted, however, to highlight that the bias might also be explained by nonemotional processes.


Taken together, our study provides the first behavioral evidence for a differential enhancement of emotional memory due to spatial-frequency filtering. An emotional memory enhancement was consistently observed only for the low spatial frequencies. This is in line with the assumption that these frequencies are important for the automatic activation of emotion-related processes. However, this interpretation in terms of genuine emotion-related differences between LSF and HSF faces can be challenged in several ways. First, the effect was found despite the long presentation duration of the stimuli in the encoding phase, which suggests that the impact of spatial frequencies on emotional processing does not necessitate short and fast presentation conditions. Second, the result can potentially be explained in terms of differences in the number of mnemonically relevant stimulus features and resulting differences in item discriminability. Third, a further result—the emotion-induced recognition bias found with HSF but not LSF faces—produces a dilemma if it is interpreted as a genuine emotion effect at the same time as the emotional memory enhancement effect. Thus, the observed results can also be explained by nonemotional aspects related to perception (i.e., distinctiveness, diagnosticity). Clearly, more research is needed to determine whether or not the observed effects are based on emotion-related processing differences. Potentially, the observed results emerge from several underlying processes (emotion-related and non-emotion-related ones), each of which is affected by the spatial-frequency manipulation.

In this regard, we also wish to note several limitations of the present study. First, it would have been desirable to include a nonfiltered comparison condition. In this way, we could have tested whether recognition memory for spatial-frequency-filtered stimuli generally differs from recognition memory for unfiltered stimuli; the present results only allow us to compare HSF and LSF stimuli. Second, the specific encoding tasks created some (although negligible) variability in the results. It would be desirable to control for this variance, for example, by including performance in the encoding tasks as a further control factor. Unfortunately, this was not possible in the present study, because information about age and provenance was not available for the stimuli; responses in these tasks were thus arbitrary, and emotion intensity was a subjective judgment. Future studies in this field could include encoding performance as a further factor in the analyses (if the employed tasks provide this possibility) to enhance the clarity of the results. Furthermore, the inclusion of neuropsychological methods could help to clarify the exact underlying processes (e.g., differential activation of emotion-related brain areas at encoding and retrieval could help to decide which of the two alternative explanations is correct). At present, we can only speculate about the underlying processes: Whether emotion-related processes or perceptual ones drive the results cannot be determined on the basis of these results alone. Nevertheless, our study can serve as a starting point for research on emotion and memory using frequency-filtered stimuli. It shows that spatial frequencies play a role in recognition memory.
If emotion-related processes underlie the present results, this would extend the link of emotion and spatial frequencies to memory; if perceptual processes underlie the present results, our study would provide interesting insights into the emotional memory advantage in general. We thus hope that our study encourages further research in this field.


  1.

    We have chosen these two specific emotions because of theoretical considerations (e.g., Öhman & Mineka, 2001) and because of empirical evidence that (especially negative) high-arousing emotions might preferentially trigger subcortical, emotion-related brain areas (e.g., Mattavelli et al., 2014). More specifically, increased amygdala activation relative to neutral stimuli has been observed not only for BSF (i.e., nonfiltered) but also for LSF fearful faces (Vuilleumier et al., 2003).

  2.

    We have chosen these specific tasks because all of them are based on perceptual characteristics of the face and typically are automatically inferred (i.e., people automatically infer emotion, age, gender, and provenance from a face). Moreover, with regard to gender, emotion, and race, evidence already exists of the diagnosticity of a specific spatial frequency range (e.g., Aguado, Serrano-Pedraza, Rodríguez, & Román, 2010; Fiset, Blais, Gosselin, Bub, & Tanaka, 2008; Harel & Bentin, 2009; Schyns & Oliva, 1999). Thus, by employing different tasks, we could ensure that our results were not driven by the diagnosticity of the specific frequency range.

  3.

    More precisely, we used MATLAB’s fir1 filter (with parameters 50 and 6/100 [low] or 24/100 [high], Nyquist frequency of 100 [i.e., the horizontal plane]) to create a 2-D mask, which was then multiplied with the Fourier representation of the image.
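    A rough Python/SciPy analogue of this pipeline is sketched below: scipy.signal.firwin plays the role of MATLAB's fir1, and the radially symmetric construction of the 2-D mask is our guess at one plausible implementation, since the footnote does not specify how the 1-D filter was extended to two dimensions.

```python
import numpy as np
from scipy.signal import firwin, freqz

size = 200  # image side length; Nyquist = 100 cycles per image

# 1-D low-pass FIR, analogous to MATLAB's fir1(50, 6/100):
# order 50 -> 51 taps; cutoff 6/100 relative to Nyquist
taps = firwin(51, 6 / 100)
_, response = freqz(taps, worN=size // 2)
profile = np.abs(response)  # magnitude response from DC to Nyquist

# Radially symmetric 2-D mask indexed by frequency radius (in cycles)
fx = np.fft.fftfreq(size) * size
fy = np.fft.fftfreq(size) * size
radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2).astype(int)
mask = profile[np.clip(radius, 0, profile.size - 1)]

# Multiply the mask with the Fourier representation of the image
rng = np.random.default_rng(0)
image = rng.standard_normal((size, size))  # stand-in for a face image
lsf_image = np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

    Swapping the 6/100 cutoff for 24/100 and using a high-pass design would give the HSF counterpart.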

  4.

    An anonymous reviewer correctly pointed out that we did not adjust the root mean square (RMS) contrast, which is a better indicator of perceived contrast (Bex & Makous, 2002). Thus, it could still be possible that differences in contrast contributed to our results. However, first, RMS contrast did not differ across emotion categories within one frequency spectrum, and second, the pattern of observed results would not be predicted from a contrast explanation. Specifically, if RMS contrast had driven the results, one would have expected better recognition and better memory for the high-contrast LSF stimuli, because contrast has been observed to correlate with activity in V1 (Boynton, Engel, Glover, & Heeger, 1996). Moreover, neural responses saturate at medium to high contrast levels. Although the specific contrast–response functions vary across brain areas and specific cells, many neuronal responses are nearly saturated at Michelson contrasts above 0.3 (e.g., V1 and LGN; Sclar, Maunsell, & Lennie, 1990) or at less than 40% RMS contrast [in V1 and the fusiform face area (FFA); Yue, Cassidy, Devaney, Holt, & Tootell, 2011]. A recent study by Maher et al. (2016) also showed that the FFA already responds to faces at very low contrasts, and that the response does not change with increasing contrast. Thus, it is very unlikely that differences in the RMS contrasts of the stimuli had an influence on the results.

  5.

    The filler task was a distractor–response binding paradigm (see Frings & Moeller, 2012). In this task, letters were presented as either targets or distractors in a flanker configuration. Thus, the task was cognitively demanding, but not related to emotion.

  6.

    Only n = 19 participants across all experiments stated that they had an idea that a memory test was going to be conducted (n = 4 in the age task, n = 7 in the provenance task, n = 8 in the emotion task, and n = 0 in the gender task). Moreover, these responses seemed partially due to the demand character of the question. Several participants made further comments like “I guessed it,” “Now that I know, I find it not surprising,” “It was intuition,” and no participant mentioned having explicitly encoded the faces. Therefore, all participants were kept for further analyses. Excluding these participants did not change the results.

  7.

    Thus, we acknowledge the ongoing debate regarding the adequacy of continuous versus discrete models of recognition processes (see, e.g., Kellen, Klauer, & Bröder, 2013; Klauer & Kellen, 2015; and Pazzaglia, Dube, & Rotello, 2013, for recent contributions). Note that the most often used alternative to d’ and c—namely Pr and Br from double high-threshold theory (Snodgrass & Corwin, 1988)—yielded essentially the same results as d’ and c with our data.

  8.

    One might argue that the remember/know/guess differentiation might be taken as a proxy for confidence ratings. However, Martin and colleagues (2011) compared ROC analyses based on confidence ratings and on remember/know/guess differentiation. The two analyses diverged especially with regard to the ratio s: ROC analyses based on confidence yielded values of s < 1, whereas ROC analyses based on remember/know/guess yielded values of s that did not significantly deviate from 1. Thus, a critic would not be convinced by ROC analyses based on remember/know/guess data.

  9.

    The first contrast interacted significantly with task (see Table 3). This was due to the comparatively large variability of the neutral condition across tasks (see Table 2). We refrain from discussing this detail due to its questionable replicability.


  1. Adelman, J. S., & Estes, Z. (2013). Emotion and memory: A recognition advantage for positive and negative words independent of arousal. Cognition, 129, 530–535. doi:10.1016/j.cognition.2013.08.014

  2. Adolphs, R., Russell, J. A., & Tranel, D. (1999). A role for the human amygdala in recognizing emotional arousal from unpleasant stimuli. Psychological Science, 10, 167–171. doi:10.1111/1467-9280.00126

  3. Aguado, L., Serrano-Pedraza, I., Rodríguez, S., & Román, F. J. (2010). Effects of spatial frequency content on classification of face gender and expression. Spanish Journal of Psychology, 13, 525–537. doi:10.1017/S1138741600002225

  4. Bannerman, R. L., Hibbard, P. B., Chalmers, K., & Sahraie, A. (2012). Saccadic latency is modulated by emotional content of spatially filtered face stimuli. Emotion, 12, 1384–1392. doi:10.1037/a0028677

  5. Barrett, L. F., & Bar, M. (2009). See it with feeling: Affective predictions during object perception. Philosophical Transactions of the Royal Society, 364, 1325–1334. doi:10.1098/rstb.2008.0312

  6. Bex, P. J., & Makous, W. (2002). Spatial frequency, phase, and the contrast of natural images. Journal of the Optical Society of America A, 19, 1096–1106. doi:10.1364/JOSAA.19.001096

  7. Bowen, H. J., Spaniol, J., Patel, R., & Voss, A. (2016). A diffusion model analysis of decision biases affecting delayed recognition of emotional stimuli. PLoS ONE, 11, e0146769. doi:10.1371/journal.pone.0146769

  8. Boynton, G. M., Engel, S. A., Glover, G. H., & Heeger, D. J. (1996). Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience, 16, 4207–4221.

  9. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. doi:10.4324/9780203771587

  10. De Cesarei, A., & Codispoti, M. (2013). Spatial frequencies and emotional perception. Reviews in the Neurosciences, 24, 89–104. doi:10.1515/revneuro-2012-0053

  11. De Valois, R. L., & De Valois, K. K. (1988). Spatial vision. New York, NY: Oxford University Press.

  12. Dolcos, F., LaBar, K. S., & Cabeza, R. (2004). Interaction between the amygdala and the medial temporal lobe memory system predicts better memory for emotional events. Neuron, 42, 855–863. doi:10.1016/S0896-6273(04)00289-2

  13. Dolcos, F., LaBar, K. S., & Cabeza, R. (2006). The memory enhancing effect of emotion: Functional neuroimaging evidence. In B. Uttl, N. Ohta, & A. L. Siegenthaler (Eds.), Memory and emotion: Interdisciplinary perspectives (pp. 107–133). Malden, MA: Blackwell. doi:10.1002/9780470756232.ch6

  14. Dougal, S., & Rotello, C. M. (2007). "Remembering" emotional words is based on response bias, not recollection. Psychonomic Bulletin & Review, 14, 423–429. doi:10.3758/BF03194083

  15. Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. doi:10.3758/BF03193146

  16. Fiset, D., Blais, C., Gosselin, F., Bub, D., & Tanaka, J. (2008). Potent features for the categorization of Caucasian, African American, and Asian faces in Caucasian observers [Abstract]. Journal of Vision, 8(6), 258. doi:10.1167/8.6.258

  17. Frings, C., & Moeller, B. (2012). The horserace between distractors and targets: Retrieval-based probe responding depends on distractor–target asynchrony. Journal of Cognitive Psychology, 24, 582–590. doi:10.1080/20445911.2012.666852

  18. Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: A validation study. Cognition and Emotion, 22, 1094–1118. doi:10.1080/02699930701626582

  19. Gupta, R., & Srinivasan, N. (2009). Emotions help memory for faces: Role of whole and parts. Cognition and Emotion, 23, 807–816. doi:10.1080/02699930802193425

  20. Harel, A., & Bentin, S. (2009). Stimulus type, level of categorization, and spatial-frequencies utilization: Implications for perceptual categorization hierarchies. Journal of Experimental Psychology: Human Perception and Performance, 35, 1264–1273. doi:10.1037/a0013621

  21. Hegdé, J. (2008). Time course of visual perception: Coarse-to-fine processing and beyond. Progress in Neurobiology, 84, 405–439. doi:10.1016/j.pneurobio.2007.09.001

  22. Holmes, A., Green, S., & Vuilleumier, P. (2005). The involvement of distinct visual channels in rapid attention towards fearful facial expressions. Cognition and Emotion, 19, 899–922. doi:10.1080/02699930441000454

  23. Johansson, M., Mecklinger, A., & Treese, A. (2004). Recognition memory for emotional and neutral faces: An event-related potential study. Journal of Cognitive Neuroscience, 16, 1840–1853.

  24. Kellen, D., Klauer, K. C., & Bröder, A. (2013). Recognition memory models and binary-response ROCs: A comparison by minimum description length. Psychonomic Bulletin & Review, 20, 693–719. doi:10.3758/s13423-013-0407-2

  25. Kensinger, E. A. (2009). Remembering the details: Effects of emotion. Emotion Review, 1, 99–113. doi:10.1177/1754073908100432

  26. Kensinger, E. A., & Corkin, S. (2004). Two routes to emotional memory: Distinct neural processes for valence and arousal. Proceedings of the National Academy of Sciences, 101, 3310–3315. doi:10.1073/pnas.0306408101

  27. Klauer, K. C., & Kellen, D. (2015). The flexibility of models of recognition memory: The case of confidence ratings. Journal of Mathematical Psychology, 67, 8–25.

  28. LaBar, K. S., & Phelps, E. A. (1998). Arousal-mediated memory consolidation: Role of the medial temporal lobe in humans. Psychological Science, 9, 490–493. doi:10.1111/1467-9280.00090

  29. Laeng, B., Profeti, I., Sæther, L., Adolfsdottir, S., Lundervold, A., Vangberg, T., … Waterloo, K. (2010). Invisible expressions evoke core impressions. Emotion, 10, 573–586. doi:10.1037/a0018689

  30. Langner, O., Becker, E. S., & Rinck, M. (2012). Higher sensitivity for low spatial frequency expressions in social anxiety: Evident in indirect but not direct tasks? Emotion, 12, 847–851. doi:10.1037/a0028761

  31. Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H. J., Hawk, S. T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud faces database. Cognition and Emotion, 24, 1377–1388. doi:10.1080/02699930903485076

  32. Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces. Stockholm, Sweden: Department of Clinical Neuroscience, Psychology Section, Karolinska Institute.

  33. Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide (2nd ed.). Mahwah, NJ: Erlbaum.

  34. Maher, S., Ekstrom, T., Tong, Y., Nickerson, L. D., Frederick, B., & Chen, Y. (2016). Greater sensitivity of the cortical face processing system to perceptually-equated face detection. Brain Research, 1631, 13–21. doi:10.1016/j.brainres.2015.11.011

  35. Maratos, E. J., Allan, K., & Rugg, M. D. (2000). Recognition memory for emotionally negative and neutral words: An ERP study. Neuropsychologia, 38, 1452–1465. doi:10.1016/S0028-3932(00)00061-0

  36. Martin, C. D., Baudouin, J., Franck, N., Guillaume, F., Guillem, F., Huron, C., & Tiberghien, G. (2011). Comparison of RK and confidence judgement ROCs in recognition memory. Journal of Cognitive Psychology, 23, 171–184. doi:10.1080/20445911.2011.476722

  37. Mather, M., & Sutherland, M. (2009). Disentangling the effects of arousal and valence on memory for intrinsic details. Emotion Review, 1, 118–119. doi:10.1177/1754073908100435

  38. Mattavelli, G., Sormaz, M., Flack, T., Asghar, A. R., Fan, S., Frey, J., … Andrews, T. J. (2014). Neural responses to facial expressions support the role of the amygdala in processing threat. Social Cognitive and Affective Neuroscience, 9, 1684–1689. doi:10.1093/scan/nst162

  39. McDaniel, M. A., & Geraci, L. (2006). Encoding and retrieval processes in distinctiveness effects: Toward an integrative framework. In R. R. Hunt & J. B. Worthen (Eds.), Distinctiveness and memory (pp. 65–88). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780195169669.003.0004

  40. McGaugh, J. L. (2004). The amygdala modulates the consolidation of memories of emotionally arousing experiences. Annual Review of Neuroscience, 27, 1–28. doi:10.1146/annurev.neuro.27.070203.144157

  41. McGaugh, J. L., & Cahill, L. (2003). Emotion and memory: Central and peripheral contributions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 93–116). New York, NY: Oxford University Press.

  42. McNeely, H. E., Dywan, J., & Segalowitz, S. J. (2004). ERP indices of emotionality and semantic cohesiveness during recognition judgments. Psychophysiology, 41, 117–129. doi:10.1111/j.1469-8986.2003.00137.x

  43. Morawetz, C., Baudewig, J., Treue, S., & Dechent, P. (2011). Effects of spatial frequency and location of fearful faces on human amygdala activity. Brain Research, 1371, 87–99. doi:10.1016/j.brainres.2010.10.110

  44. Morris, J. S., Öhman, A., & Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating "unseen" fear. Proceedings of the National Academy of Sciences, 96, 1680–1685. doi:10.1073/pnas.96.4.1680

  45. O'Brien, R. G., & Kaiser, M. K. (1985). MANOVA method for analyzing repeated measures designs: An extensive primer. Psychological Bulletin, 97, 316–333.

  46. Ochsner, K. N. (2000). Are affective events richly "remembered" or simply familiar? The experience and process of recognizing feelings past. Journal of Experimental Psychology: General, 129, 242–261. doi:10.1037/0096-3445.129.2.242

  47. Öhman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108, 483–522. doi:10.1037/0033-295X.108.3.483

  48. Olson, C. L. (1976). Practical considerations in choosing a MANOVA test statistic: A rejoinder to Stevens. Psychological Bulletin, 86, 1350–1352. doi:10.1037/0033-2909.86.6.1350

  49. Patel, R., Girard, T. A., & Green, R. A. (2012). The influence of indirect and direct emotional processing on memory for facial expressions. Cognition and Emotion, 26, 1143–1152. doi:10.1080/02699931.2011.642848

  50. Pazzaglia, A. M., Dube, C., & Rotello, C. M. (2013). A critical comparison of discrete-state and continuous models of recognition memory: Implications for recognition and beyond. Psychological Bulletin, 139, 1173–1203. doi:10.1037/a0033044

  51. Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: From a "low road" to "many roads" of evaluating biological significance. Nature Reviews Neuroscience, 11, 773–783. doi:10.1038/nrn2920

  52. Petrova, K., & Wentura, D. (2012). Upper–lower visual field asymmetries in oculomotor inhibition of emotional distractors. Vision Research, 62, 209–219. doi:10.1016/j.visres.2012.04.010

  53. Phelps, E. A., LaBar, K. S., Anderson, A. K., O'Connor, K. J., Fulbright, R. J., & Spencer, D. D. (1998). Specifying the contributions of the human amygdala to emotional memory: A case study. Neurocase, 4, 527–540. doi:10.1080/13554799808410645

  54. Phelps, E. A., & Sharot, T. (2008). How (and why) emotion enhances the subjective sense of recollection. Current Directions in Psychological Science, 17, 147–152. doi:10.1111/j.1467-8721.2008.00565.x

  55. Pourtois, G., Schettino, A., & Vuilleumier, P. (2013). Brain mechanisms for emotional influences on perception and attention: What is magic and what is not. Biological Psychology, 92, 492–512. doi:10.1016/j.biopsycho.2012.02.007

  56. Prete, G., Capotosto, P., Zappasodi, F., Laeng, B., & Tommasi, L. (2015). The cerebral correlates of subliminal emotions: An electroencephalographic study with emotional hybrid faces. European Journal of Neuroscience, 42, 2952–2962. doi:10.1111/ejn.13078

  57. Ratcliff, R., Sheu, C., & Gronlund, S. D. (1992). Testing global memory models using ROC curves. Psychological Review, 99, 518–535. doi:10.1037/0033-295X.99.3.518

  58. Richardson, M. P., Strange, B. A., & Dolan, R. J. (2004). Encoding of emotional memories depends on amygdala and hippocampus and their interactions. Nature Neuroscience, 7, 278–285. doi:10.1038/nn1190

  59. Righi, S., Marzi, T., Toscani, M., Baldassi, S., Ottonello, S., & Viggiano, M. P. (2012). Fearful expressions enhance recognition memory: Electrophysiological evidence. Acta Psychologica, 139, 7–18. doi:10.1016/j.actpsy.2011.09.015

  60. Ritchey, M., LaBar, K. S., & Cabeza, R. (2011). Level of processing modulates the neural correlates of emotional memory formation. Journal of Cognitive Neuroscience, 23, 757–771. doi:10.1162/jocn.2010.21487

  61. Rohr, M., Degner, J., & Wentura, D. (2012). Masked emotional priming beyond global valence activations. Cognition and Emotion, 26, 224–244. doi:10.1080/02699931.2011.576852

  62. Rohr, M., & Wentura, D. (2014). Spatial frequency filtered images reveal differences between masked and unmasked processing of emotional information. Consciousness and Cognition, 29, 141–158. doi:10.1016/j.concog.2014.08.021

  63. Rotello, C. M., Masson, M. J., & Verde, M. F. (2008). Type I error rates and power analyses for single-point sensitivity measures. Perception & Psychophysics, 70, 389–401. doi:10.3758/PP.70.2.389

  64. Ruiz-Soler, M., & Beltran, F. (2006). Face perception: An integrative review of the role of spatial frequencies. Psychological Research, 70, 273–292. doi:10.1007/s00426-005-0215-z

  65. Schmidt, S. R. (1991). Can we have a distinctive theory of memory? Memory & Cognition, 19, 523–542. doi:10.3758/BF03197149

  66. Schyns, P., & Oliva, A. (1999). Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69, 243–265. doi:10.1016/S0010-0277(98)00069-9

  67. Sclar, G., Maunsell, J. H., & Lennie, P. (1990). Coding of image contrast in central visual pathways of the macaque monkey. Vision Research, 30, 1–10. doi:10.1016/0042-6989(90)90123-3

  68. Sergerie, K., Lepage, M., & Armony, J. L. (2006). A process-specific functional dissociation of the amygdala in emotional memory. Journal of Cognitive Neuroscience, 18, 1359–1367. doi:10.1162/jocn.2006.18.8.1359

  69. Sharot, T., & Yonelinas, A. P. (2008). Differential time-dependent effects of emotion on recollective experience and memory for contextual information. Cognition, 106, 538–547. doi:10.1016/j.cognition.2007.03.002

  70. Skottun, B. C., & Skoyles, J. R. (2008). Spatial frequency and the magno-parvocellular distinction—Some remarks. Neuro-Ophthalmology, 32, 179–186. doi:10.1080/01658100802274952

  71. Smith, M. L., & Merlusca, C. (2014). How task shapes the use of information during facial expression categorizations. Emotion, 14, 478–487. doi:10.1037/a0035588

  72. Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50. doi:10.1037/0096-3445.117.1.34

  73. Talmi, D. (2013). Enhanced emotional memory: Cognitive and neural mechanisms. Current Directions in Psychological Science, 22, 430–436. doi:10.1177/0963721413498893

  74. Talmi, D., & Moscovitch, M. (2004). Can semantic relatedness explain the enhancement of memory for emotional words? Memory & Cognition, 32, 742–751. doi:10.3758/BF03195864

  75. Talmi, D., Schimmack, U., Paterson, T., & Moscovitch, M. (2007). The role of attention and relatedness in emotionally enhanced memory. Emotion, 7, 89–102. doi:10.1037/1528-3542.7.1.89

  76. Tamietto, M., & de Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nature Reviews Neuroscience, 11, 697–709. doi:10.1038/nrn2889

  77. Verde, M. F., & Rotello, C. M. (2003). Does familiarity change in the revelation effect? Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 739–746. doi:10.1037/0278-7393.29.5.739

  78. Vuilleumier, P., Armony, J., Driver, J., & Dolan, R. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624–631. doi:10.1038/nn1057

  79. Wagner, H. L. (1993). On measuring performance in category judgment studies of nonverbal behavior. Journal of Nonverbal Behavior, 17, 3–28. doi:10.1007/BF00987006

  80. Wang, B. (2013). Facial expression influences recognition memory for faces: Robust enhancement effect of fearful expression. Memory, 21, 301–314. doi:10.1080/09658211.2012.725740

  81. Windmann, S., & Chmielewski, A. (2008). Emotion-induced modulation of recognition memory decisions in a Go/NoGo task: Response bias or memory bias? Cognition and Emotion, 22, 761–776. doi:10.1080/02699930701507899

  82. Windmann, S., & Krüger, T. (1998). Subconscious detection of threat as reflected by an enhanced response bias. Consciousness and Cognition, 7, 603–633. doi:10.1006/ccog.1998.0337

  83. Windmann, S., & Kutas, M. (2001). Electrophysiological correlates of emotion-induced recognition bias. Journal of Cognitive Neuroscience, 13, 577–592. doi:10.1162/089892901750363172

  84. Wixted, J. T., & Mickes, L. (2010). A continuous dual-process model of remember/know judgments. Psychological Review, 117, 1025–1054. doi:10.1037/a0020874

  85. Yonelinas, A. P. (2002). The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language, 46, 441–517. doi:10.1006/jmla.2002.2864

  86. Yonelinas, A. P., & Jacoby, L. L. (1994). Dissociations of processes in recognition memory: Effects of interference and of response speed. Canadian Journal of Experimental Psychology, 48, 516–535. doi:10.1037/1196-1961.48.4.516

  87. Yue, X., Cassidy, B. S., Devaney, K. J., Holt, D. J., & Tootell, R. H. (2011). Lower-level stimulus features strongly influence responses in the fusiform face area. Cerebral Cortex, 21, 35–47. doi:10.1093/cercor/bhq050

  88. Zald, D. H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Research Reviews, 41, 88–123. doi:10.1016/S0165-0173(02)00248-5


Author note

This research was supported by a grant from the German Research Foundation (DFG; No. WE 2284/9). The authors thank Ullrich Ecker for his helpful comments and Gerrit Großmann for his support concerning stimulus preparation and filtering.

Author information



Corresponding authors

Correspondence to Michaela Rohr or Dirk Wentura.


Appendix 1

Table 6 Norm rating data of the images used, taken from Langner et al. (2010) for the RAFD set and Goeleven et al. (2008) for the KDEF set
Table 7 Inferential statistics for memory performance (d_a) for the overall analysis and the analysis of low- and high-frequency stimuli
Table 8 “Remember” and “know” responses to old items (false alarms to new items are in parentheses) as a function of frequency, emotional expression, and task

Appendix 2

All filtered stimuli were presented to a separate sample of participants (N = 29) in a forced-choice categorization task. Table 9 shows the mean categorization rates for each emotion and response category, averaged across participants.

Table 9 Mean categorization rates and mean unbiased hit rates (in percentages) for each emotion category and spatial frequency

All emotions were recognized clearly above chance (i.e., >33%). Moreover, all emotions except LSF fear were recognized almost perfectly. Fearful LSF expressions were to some extent mistaken for neutral ones (i.e., there was a conservative response bias for the category fearful). To examine differences between the categorization rates statistically, we calculated the unbiased hit rate (Wagner, 1993) for each participant, thereby taking potential response biases into account. As Table 9 indicates, the unbiased hit rates for fearful and neutral faces are somewhat lower than the raw hit rates, because neutral faces were sometimes mistaken for fearful ones or vice versa. The arcsine-transformed unbiased hit rates were analyzed in a repeated-measures MANOVA with emotion and spatial frequency as within-subjects factors. This analysis yielded a main effect of emotion, F(2, 27) = 144.10, p < .001, ηp² = .914, a main effect of frequency, F(1, 28) = 46.65, p < .001, ηp² = .625, and a significant Emotion × Frequency interaction, F(2, 27) = 16.87, p < .001, ηp² = .555. High-spatial-frequency faces were generally categorized better, and happy faces were recognized better than neutral and fearful faces. Whereas we observed no difference between neutral and fearful faces in the high spatial frequencies, fearful faces were categorized significantly worse than neutral faces in the low-spatial-frequency condition, t(28) = 5.00, p < .001 (i.e., they were confused with neutral faces).
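The unbiased hit rate and the arcsine transform used in this analysis can be sketched as follows. This is a minimal illustration with a made-up confusion matrix, not the authors' analysis code:

```python
import math

def unbiased_hit_rates(confusion):
    """Unbiased hit rate Hu (Wagner, 1993) for each category:
    Hu_i = n_ii**2 / (row_i * col_i), where n_ii is the number of correct
    categorizations of category i, row_i the number of stimuli of that
    category, and col_i the number of times response i was given."""
    k = len(confusion)
    rows = [sum(r) for r in confusion]
    cols = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    return [confusion[i][i] ** 2 / (rows[i] * cols[i]) for i in range(k)]

def arcsine_transform(p):
    """Variance-stabilizing arcsine-square-root transform applied to the
    proportions before the repeated-measures MANOVA."""
    return math.asin(math.sqrt(p))
```

For instance, with hypothetical rows happy/fearful/neutral in which fearful stimuli often receive the response "neutral" (as for the LSF faces above), the fearful Hu drops well below its raw hit rate, because both its misses and the false "fearful" responses to neutral faces enter the denominator.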

Thus, the pattern of categorization differences does not mimic the pattern of recognition performance reported in the Results section. It is therefore unlikely that the memory results reduce to differences in categorization accuracy. Nevertheless, to corroborate this claim, we correlated recognition performance (d′) and response bias (c) with categorization performance, using items as data units, both aggregated over spatial frequency and separately for the HSF and LSF versions. Aggregated over spatial frequencies, we found no correspondence for recognition performance (r = .00, p = .99) or response bias (r = –.07, p = .43). Separate analyses for low and high spatial frequencies yielded nonsignificant correlations as well (r = –.15, p = .21, for LSF; r = .19, p = .11, for HSF). Given the different signs and the nonsignificance of these correlations, they cannot account for the emotional memory advantage.

However, correlations of the emotion-induced recognition bias with the categorization data indicated that the observed conservative bias in the low spatial frequencies might be due at least in part to categorization difficulty: LSF categorization performance was negatively related to the response criterion (r = –.23, p = .05). To shed further light on this issue, we conducted a hierarchical multiple regression with the response criterion as the dependent variable; two coding variables corresponding to the contrasts neutral versus emotional faces (K1) and happy versus fearful faces (K2) were entered as predictors in Step 1, and categorization performance was added in Step 2. In line with the findings reported in the Results section, K2 (but not K1) was a significant predictor in Step 1. In Step 2, however, none of the predictors was significant, indicating that K2 and categorization performance are mutually redundant. Thus, we cannot refute the argument that categorization performance is responsible for the response-bias result in the low spatial frequencies. The corresponding correlation for HSF was nonsignificant and, more importantly, had the opposite sign (r = .20, p = .08). Thus, the more liberal response criterion for emotional HSF faces cannot be explained by differences in categorization accuracy.
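The two-step logic of such a hierarchical regression can be sketched as follows. This is a minimal illustration with hypothetical data, not the authors' analysis code; the names `k1`, `k2`, and `cat` stand in for the two contrast codes and item-level categorization performance:

```python
import numpy as np

def hierarchical_r2(y, *steps):
    """R² after each step of a hierarchical OLS regression.
    Each step is a list of predictor vectors; an intercept is added
    automatically, and predictors accumulate across steps."""
    y = np.asarray(y, dtype=float)
    r2s, predictors = [], []
    for step in steps:
        predictors.extend(step)
        X = np.column_stack(
            [np.ones_like(y)] + [np.asarray(p, dtype=float) for p in predictors]
        )
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2s.append(1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2))
    return r2s

# Hypothetical usage: criterion c per item, contrast codes, categorization
# k1, k2, cat, c = ...  (item-level vectors)
# r2_step1, r2_step2 = hierarchical_r2(c, [k1, k2], [cat])
```

The increment from Step 1 to Step 2 (ΔR²) quantifies what categorization performance adds beyond the emotion contrasts; mutual redundancy of K2 and categorization performance shows up as a small ΔR² together with neither predictor reaching significance once both are in the model.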

Cite this article

Rohr, M., Tröger, J., Michely, N. et al. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias. Mem Cogn 45, 699–715 (2017).



  • Emotion
  • Memory
  • Spatial frequencies