Behavior Research Methods, Volume 46, Issue 2, pp 596–610

The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database

  • Artur Marchewka
  • Łukasz Żurawski
  • Katarzyna Jednoróg
  • Anna Grabowska
Open Access Article

Abstract

Selecting appropriate stimuli to induce emotional states is essential in affective research. Only a few standardized affective stimulus databases have been created for auditory, language, and visual materials. Numerous studies have extensively employed these databases using both behavioral and neuroimaging methods. However, some limitations of the existing databases have recently been reported, including limited numbers of stimuli in specific categories or poor picture quality of the visual stimuli. In the present article, we introduce the Nencki Affective Picture System (NAPS), which consists of 1,356 realistic, high-quality photographs that are divided into five categories (people, faces, animals, objects, and landscapes). Affective ratings were collected from 204 mostly European participants. The pictures were rated according to the valence, arousal, and approach–avoidance dimensions using computerized bipolar semantic slider scales. Normative ratings for the categories are presented for each dimension. Validation of the ratings was obtained by comparing them to ratings generated using the Self-Assessment Manikin and the International Affective Picture System. In addition, the physical properties of the photographs are reported, including luminance, contrast, and entropy. The new database, with accompanying ratings and image parameters, allows researchers to select a variety of visual stimulus materials specific to their experimental questions of interest. The NAPS system is freely accessible to the scientific community for noncommercial use by request at http://naps.nencki.gov.pl.

Keywords

Emotion induction · Affective visual stimuli · Affective ratings · Picture database · Physical properties · Gender differences · International Affective Picture System · Nencki Affective Picture System

One of the most important tasks for experimenters when studying the influences of emotions on various cognitive processes (e.g., memory or attention) is the selection of appropriate and controlled stimuli for inducing specific emotional states (Gerrards-Hesse, Spies, & Hesse, 1994; Horvat, Popović, & Ćosić, 2012, 2013). Emotionally charged materials in different modalities (auditory, lexical, and visual) have been widely used in both behavioral and neuroimaging research for both healthy and clinical populations (Grabowska et al., 2011; Marchewka & Nowicka, 2007; Posner et al., 2009; Viinikainen, Kätsyri, & Sams, 2011). Currently, several sets of standardized, emotionally charged stimuli are freely available to researchers worldwide. These sets are standardized on the basis of either dimensional or discrete category theories of emotion (Barrett, 2006; Dalgleish, 2004; Ekman, 1992). Dimensional theories of emotion claim that affective experiences can be characterized by several fundamental dimensions. These dimensions might include valence, arousal (sometimes referred to as activation), dominance, and approach–avoidance (see Mauss & Robinson, 2009, for a review), and each dimension has its own range. Valence ranges from highly positive to highly negative, and arousal from excited/aroused to relaxed/unaroused (Lang, Greenwald, Bradley, & Hamm, 1993; Russell, 1980). Approach–avoidance, also known as “motivation direction,” ranges from tendency to approach to tendency to avoid a stimulus. Finally, dominance represents the degree of perceived control over the affective stimulus, and ranges from feeling in control to feeling out of control. Although the range is defined, there is disagreement as to whether approach and avoidance are more synonymous with positive or negative states (Watson, Wiese, Vaidya, & Tellegen, 1999). Furthermore, it has been suggested that motivational direction and affective valence are independent. 
This has been demonstrated for the case of anger: In spite of being a negative affective state, anger can nevertheless be associated with approach tendencies (Carver & Harmon-Jones, 2009; Gable & Harmon-Jones, 2010).

As compared to dimensional-category theories of emotion, discrete-category theories assume that the above-mentioned dimensions are too simple to accurately reflect the neural systems underlying emotional responses. Instead, discrete-category theories propose the presence of at least five basic universal emotions (e.g., happiness, anger, fear, disgust, and sadness), as was originally suggested by Darwin (1872).

Typically, to collect normalized ratings for arousal, dominance, or valence, the Self-Assessment Manikin (SAM) scale is employed (Bradley & Lang, 1994). Recent studies have also employed computer-based sliders that are moved along a gradually colored bar in order to collect ratings (Dan-Glauser & Scherer, 2011). For basic emotions, the most common approach is to directly ask participants to name emotions by using predefined labels with an indication of intensity level (Briesemeister, Kuchinke, & Jacobs, 2011; Fujimura, Matsuda, Katahira, Okada, & Okanoya, 2011; Mikels et al., 2005). A number of neuroimaging studies have shown distinct neuronal patterns related to ratings based on both dimensional- and discrete-category theories of emotions (Tettamanti et al., 2012; Viinikainen et al., 2011).

Emotionally charged stimuli and databases

When studying emotions, researchers can choose stimuli from existing standardized databases of auditory, verbal, and visual materials. The International Affective Digitized Sounds (IADS) is one of the most frequently used databases of emotionally charged auditory stimuli, with sounds being characterized according to emotional valence and arousal (Bradley & Lang, 1999), as well as to discrete emotional categories (Stevenson & James, 2008). Other sets of audio stimuli include the Montreal Affective Voices (Belin, Fillion-Bilodeau, & Gosselin, 2008), Portuguese sentences and pseudosentences for research on emotional prosody (Castro & Lima, 2010), vocal emotional stimuli in Mandarin Chinese (Liu & Pell, 2012), and musical excerpts (Vieillard, Peretz, Khalfa, Gagnon, & Bouchard, 2008).

Standardized emotionally charged verbal stimulus materials are also available for several languages. Ratings according to dimensional- and/or discrete-category theories have been collected for languages such as English (the Affective Norms for English Words [ANEW]; Bradley & Lang, 1999; Stevenson, Mikels, & James, 2007), German (Berlin Affective Word List [DENN–BAWL]; Briesemeister et al., 2011; Võ et al., 2009), Finnish (Eilola & Havelka, 2010), Spanish (Redondo, Fraga, Padrón, & Comesaña, 2007), and French (Bonin, Méot, Aubert, Niedenthal, & Capelle-Toczek, 2003).

Several databases of static emotional faces have also been developed, which consist of pictures of models or actors from various backgrounds. These databases include the following: the Karolinska Directed Emotional Faces (KDEF; Lundqvist, Flykt, & Öhman, 1998), using Caucasian models; the Japanese and Caucasian Facial Expressions of Emotion (JACFEE; Ekman & Matsumoto, 1993–2004), with Caucasian and Japanese models; the Montreal Set of Facial Displays of Emotion (Beaupre, Cheung, & Hess, 2000), incorporating French Canadian, Chinese, and sub-Saharan African models; and finally, the NimStim (Tottenham et al., 2009), which provides a uniform set of Asian-American, African-American, European-American, and Latino-American actors, all photographed under identical conditions.

Finally, at present three databases contain static visual affective stimuli with various content and validated normative ratings: the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 1999), the Geneva Affective Picture Database (GAPED; Dan-Glauser & Scherer, 2011), and the Emotional Picture System (EmoPicS; Wessa et al., 2010).

IAPS is the most widely used database of natural pictures of emotionally charged stimuli. Numerous cross-validation studies have shown the reliable induction of expressive and physiological emotion responses by these stimuli (Greenwald, Cook, & Lang, 1989; Lang et al., 1993; Modinos et al., 2012; Weinberg & Hajcak, 2010). The original norms and their updates (Lang, Bradley, & Cuthbert, 2008) were created according to a dimensional-category theory of affect, including valence, arousal, and dominance. The data set has also been characterized to some extent using a discrete-category theory of emotion (Davis, Rahman, Smith, & Burns, 1995; Mikels et al., 2005). Hundreds of behavioral and neuroimaging studies have been conducted using IAPS. However, as has been pointed out, certain issues relate to the use of this database (Colden, Bruder, & Manstead, 2008; Dan-Glauser & Scherer, 2011; Grabowska et al., 2011; Mikels et al., 2005). One constraint is the limited number of pictures belonging to specific content categories. This might lead to situations in which participants are presented with the same materials twice: for example, when the participants must be recruited from limited or specific cohorts (especially in the case of fMRI studies). As a consequence, the power of the emotional induction might be lowered. Similarly, if one wants to study reactions to “new” emotionally charged stimuli and their influence on cognitive processes (e.g., to study the old–new effect, repetition effect, or false recognition) (Marchewka, Jednoróg, Nowicka, Brechmann, & Grabowska, 2009; Michałowski, Pané-Farré, Löw, Weymar, & Hamm, 2011; Rozenkrants, Olofsson, & Polich, 2008), the number of images in a particular category should be large enough to avoid uncontrolled stimulus repetition. Moreover, the quality of IAPS images is not always satisfactory, which might introduce uncontrolled factors in the experimental design. 
This is especially the case when photographs in one category have significantly poorer quality than those in others. Several studies have also shown that the physical properties of the image, such as size, luminance, and complexity, might influence the affective processing of visual stimuli (Bradley, Hamby, Löw, & Lang, 2007; Codispoti & De Cesarei, 2007; Nordström & Wiens, 2012; Olofsson, Nordin, Sequeira, & Polich, 2008; Wiens, Sand, & Olofsson, 2011). Last, but not least, it has been shown that content category matters. For example, social versus nonsocial photographs elicit different behavioral and neural responses (Colden et al., 2008; Wiens et al., 2011).

The GAPED (Dan-Glauser & Scherer, 2011) database was recently introduced to increase the availability of visual emotion stimuli, and it can be divided into six categories. Negative pictures are divided into four specific content categories: spiders, snakes, and two types of scenes that induce emotions related to the violation of moral or legal norms (human rights violations and animal mistreatment). Positive pictures represent mainly human and animal babies and nature scenery, whereas neutral pictures mainly depict inanimate objects. GAPED can be particularly useful in studies evaluating phobic reactions (Aue, Hoeppli, & Piguet, 2012) or other research dealing with spider and snake presentations or militant-related exposure, in which multiple presentations of stimuli of the same type are required. The main limitation of this database is its asymmetry—it contains many more negative than positive pictures, and the content of negative pictures is more specific. This makes it difficult to balance content across valences. Another limitation is that the pictures are relatively small—640 × 480 pixels.

The EmoPicS database (Wessa et al., 2010) was developed as a supplement to IAPS and provides an additional pool of validated emotion-inducing pictures for scientific studies. However, the database is relatively small, including a total of only 378 affective photographs with different semantic content (a variety of social situations, animals, and plants) selected from public Internet photo libraries and archives. The images have a resolution of 800 × 600 (landscape orientation only), which is significantly lower than the typical resolutions encountered in digital photography and display today (1,600 × 1,200). In addition to the dimensional category ratings of valence and arousal, the authors have also provided the physical parameters of each picture, including luminance, contrast, and color composition. This is a useful feature of the database, since the physical properties of pictures can influence early stages of visual processing (Chammat, Jouvent, Dumas, Knoblauch, & Dubal, 2011).

New database – The Nencki Affective Picture System

Taking into consideration the constantly growing number of behavioral and neuroimaging studies on emotion, we anticipate a demand for additional affective pictorial databases that provide researchers with information about the physical properties of the stimuli. In the present work, we provide high-quality emotionally charged photographs that are grouped into five general categories. For each picture, we have collected ratings of valence, arousal, and motivational direction (avoidance–approach), using the dimensional-category theory of emotions that has also been employed for previous databases (IAPS, GAPED, EmoPicS). Additional physical properties of the pictures, including luminance, contrast, and color composition, are also provided. This newly developed set of photographs, called the Nencki Affective Picture System (NAPS), is available to the scientific community for noncommercial use.

Method

Material

The pictures were assembled from photographs either taken by the coauthors in various places and situations around the world over the last 6 years (2006–2012) or obtained from the noncommercial photography stock of the Polish newspaper group. The latter source contains mostly pictures of mutilation and accidents that were not published in print or online. An initial set of images contained around 5,000 pictures, from which 1,356 images were selected. The main rule for inclusion during the selection process was that pictures contain no visible commercial logotypes or widely known places. Likewise, images containing large written words in any language were removed in order to make the database less culture-specific. In addition, we excluded blurred photographs and pictures with a resolution lower than 1,600 × 1,200 (landscape) or 1,200 × 1,600 (portrait) pixels. Pictures were preliminarily divided by the authors into five broad categories: people, faces, animals, objects, and landscapes. The images were then resized and cropped using constant proportions of 4:3 (landscape) or 3:4 (portrait). In addition, each picture was automatically color/contrast adjusted using David’s Batch Processor (open-source GIMP software, Version 2.6). Following this selection procedure, the preliminary categorization was confirmed by three independent judges, who were provided with descriptive rules and examples (Colden et al., 2008). Pictures in the “people” category were described as those containing visible alive, injured, or dead human bodies or isolated parts of the human body. This category could not contain pictures uniquely displaying faces. In contrast, the “faces” category had to contain facial information, with at least the eyes or the mouth region being clearly visible. Pictures in the “animals” category were described as containing visible animals, dead or alive.
Pictures in this category could contain human body parts in the background (e.g., an animal in the hands of a person). The “object” category was described as a very broad class in which a wide range of clearly visible objects, foods, or vehicles were depicted without humans or animals present. Finally, the “landscapes” category was described as images depicting a wide range of natural and manmade scenery, panoramas, or terrain without humans or animals visible. Judges were asked to assign each photograph to one of the categories or to indicate that they were not sure. In approximately 99% of the cases, all of the judges classified the pictures into the same category (Cronbach’s α = .99). Examples of negative, neutral, and positive pictures from each category are depicted in Fig. 1.
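The resizing and cropping step described above (fixed 4:3 or 3:4 proportions depending on orientation) can be sketched as a center crop. This is an illustrative reimplementation only — the authors used David's Batch Processor in GIMP, not this code — and the function name and defaults are ours:

```python
from PIL import Image

def crop_to_aspect(img: Image.Image) -> Image.Image:
    """Center-crop an image to 4:3 (landscape) or 3:4 (portrait),
    mirroring the constant proportions used for the NAPS pictures."""
    w, h = img.size
    aw, ah = (4, 3) if w >= h else (3, 4)   # pick target by orientation
    target = aw / ah
    if w / h > target:                      # too wide: trim width
        new_w = int(round(h * target))
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:                                   # too tall: trim height
        new_h = int(round(w / target))
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)
    return img.crop(box)
```

For example, a 1,920 × 1,200 photograph would be cropped to 1,600 × 1,200, matching the minimum resolution criterion stated above.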
Fig. 1

Examples of negative, neutral, and positive Nencki Affective Picture System (NAPS) pictures from each category. Their ratings for valence (V), arousal (A), and approach–avoidance (A-A) are as follows: Faces_362_v, V = 2.53, A = 6.89, A-A = 2.22; Faces_192_h, V = 5.43, A = 4.78, A-A = 5.31; Faces_116_h, V = 7.57, A = 5.11, A-A = 6.57; People_125_h, V = 2.69, A = 6.14, A-A = 3.08; People_150_h, V = 5.21, A = 4.64, A-A = 5.24; People_172_v, V = 8.02, A = 4.53, A-A = 7.67; Animals_073_h, V = 3.80, A = 7.35, A-A = 3.79; Animals_148_h, V = 5.35, A = 5.28, A-A = 5.48; Animals_177_h, V = 8.09, A = 4.37, A-A = 7.82; Landscapes_025_h, V = 3.73, A = 6.10, A-A = 3.66; Landscapes_084_v, V = 5.40, A = 4.69, A-A = 5.10; Landscapes_121_h, V = 7.74, A = 3.20, A-A = 8.02; Objects_125_h, V = 2.02, A = 6.20, A-A = 1.47; Objects_239_v, V = 5.00, A = 4.74, A-A = 5.15; Objects_192_h, V = 7.18, A = 4.11, A-A = 6.55

Participants

A total of 204 healthy volunteers took part in the study (119 women, 85 men; mean age = 23.9 years, SD = 3.4). The participants were mainly college students and young employees recruited from the University of Warsaw and the Nencki Institute of Experimental Biology. Sixty percent of the participants were Polish (N = 123); the rest belonged to other, mostly European nationalities (exchange students). The local Research Ethics Committee in Warsaw approved the experimental protocol of the study, and written informed consent was obtained from all participants prior to the study.

Rating scales and stimuli presentation

Before the experimental session, the participants were given details about the contents of the images and familiarized themselves with the dimensions through the use of example stimuli. In addition, participants were informed that if they felt any discomfort during the session, they should report it immediately in order to stop the experiment.

Each participant was presented with 362 images chosen pseudorandomly from all of the categories, with the constraint that no more than three stimuli of one category were presented in succession. In all, 12 different sets of stimuli were prepared on the basis of this rule. On average, 55 ratings were collected for each picture. The sessions started with an instructional screen and 12 practice trials, with a longer time limit for the first seven of these trials. In the main experiment, each picture was presented in full-screen view for 3 s. After the first presentation of each stimulus, rating scales were displayed on a new screen to the right, and a smaller version of the image was presented on the left side of the screen. The small picture and rating scales remained available to the participant until she or he had completed all three ratings. The participants had 3 s to complete ratings on each dimension, amounting to 9 s in total. After the participant had completed all ratings, the picture and scales disappeared and were immediately replaced by the next picture in the series.
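The ordering constraint above (no more than three stimuli of one category in succession) can be satisfied by simple rejection sampling: shuffle, check the longest same-category run, and reshuffle if the constraint is violated. A minimal sketch with hypothetical names — the authors do not describe their actual set-generation code:

```python
import random

def constrained_sequence(items, max_run=3, seed=0, max_tries=10000):
    """Shuffle (category, image) pairs so that no more than `max_run`
    items from the same category appear consecutively."""
    rng = random.Random(seed)
    items = list(items)
    for _ in range(max_tries):
        rng.shuffle(items)
        run, ok = 1, True
        for prev, cur in zip(items, items[1:]):
            run = run + 1 if cur[0] == prev[0] else 1
            if run > max_run:       # constraint violated: reshuffle
                ok = False
                break
        if ok:
            return items
    raise RuntimeError("no valid ordering found")

# e.g. 5 categories x 20 images, as a stand-in for one 362-image set
sequence = constrained_sequence([(c, i) for c in "ABCDE" for i in range(20)])
```

With five roughly balanced categories, a valid ordering is typically found within a handful of shuffles.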

Three continuous bipolar semantic sliding scales were shown, each ranging from 1 to 9. Participants indicated their ratings by moving a bar over a horizontal scale using a standard computer mouse. On the valence scale, participants were asked to complete the sentence, “You are judging this image as . . .” (from 1 = very negative to 9 = very positive, with 5 = neutral). Next, participants judged motivational direction by completing the sentence, “My reaction to this image is . . .” (from 1 = to avoid to 9 = to approach, with 5 = neutral). Finally, participants judged the degree of arousal elicited by pictures with the introductory sentence, “Confronted with this image, you are feeling: …” (from 1 = relaxed to 9 = aroused, with 5 = neutral/ambivalent).

We decided to use semantic bipolar scales in the present study because it has been shown that the SAM arousal scale may lead to misinterpretations (Ribeiro, Pompéia, & Bueno, 2005). In the original technical manual of IAPS and SAM (Lang et al., 1999), the description of one of the extremes of the arousal scale uses the terms relaxed, calm, sluggish, and unaroused. However, the affective space obtained from stimuli in American (Lang et al., 1999) and Spanish (Moltó et al., 1999; Vila et al., 2001) populations showed that the standardized rating is “boomerang-shaped,” with one extreme of the arousal scale being referred to as no reaction. As a result, this extreme anchor of the scale was used only for neutral photographs, whereas the opposite extreme was used to describe both pleasant and unpleasant pictures (arousing, value = 9). On the other hand, Brazilians (Ribeiro et al., 2005) and Germans (Grühn & Scheibe, 2008) interpreted the SAM arousal scale differently, and attributed less arousal to pleasant photographs and more arousal to neutral and negative ones. Pleasant images of landscapes, flowers, and babies were rated as being relaxing and calming. This led to a more linear distribution of scores in the affective space.

The present experiment lasted approximately 1 h. An obligatory 10-min break was taken after half of the stimuli had been presented, during which participants were asked to leave the experimental room. The study was conducted on standard PC computers using 24-in. LCD monitors. The core software for stimulus presentation and data acquisition was created using Presentation software (Version 14.6, www.neurobs.com). All responses were analyzed further using the statistical package SPSS (2009).

Results

Ratings for each picture of the database, together with a short description of its content, are presented in the supplemental materials, Table S1. Descriptive statistics for the dimensions and categories by gender are presented in Tables 1 and 2.
Table 1

Descriptive statistics, calculated separately for each dimension in women, men, and both groups, for all NAPS photographs

Gender/Dimension        Min    Max    Mean   SD
Women: Arousal          1.53   8.08   5.12   1.12
Women: Valence          1.24   8.59   5.36   1.70
Women: AvAp             1.35   8.53   5.32   1.55
Men: Arousal            1.78   8.38   5.08   1.03
Men: Valence            1.44   8.44   5.44   1.56
Men: AvAp               1.44   8.44   5.42   1.42
Both groups: Arousal    2.04   8.05   5.10   1.06
Both groups: Valence    1.33   8.54   5.39   1.63
Both groups: AvAp       1.43   8.46   5.36   1.48

All scales range from 1 to 9, in which 1 represents negative, relaxing, or tendency to avoid, depending on the dimension being considered. AvAp, avoidance–approach; Min, minimal value; Max, maximal value; SD, standard deviation.

Table 2

Descriptive statistics, calculated separately for each dimension and category in women and men

                        Women                                 Men
Category                Arousal    Valence    AvAp           Arousal    Valence    AvAp
Faces       Mean        5.17       5.58       5.44           5.10       5.53       5.40
            SD          0.97       1.65       1.35           0.89       1.49       1.22
            Min–Max     2.97–7.81  1.63–8.40  1.48–8.08      3.00–7.72  1.54–7.90  1.47–7.84
            N           372        372        372            372        372        372
People      Mean        5.72       4.62       4.70           5.62       4.79       4.91
            SD          1.20       1.99       1.84           1.12       1.83       1.69
            Min–Max     2.53–8.03  1.24–8.29  1.41–8.32      2.44–8.38  1.44–8.29  1.35–8.15
            N           250        250        250            250        250        250
Animals     Mean        5.20       5.47       5.32           5.17       5.66       5.53
            SD          1.21       1.71       1.63           1.11       1.55       1.48
            Min–Max     2.43–7.83  1.50–8.45  1.55–8.18      2.25–7.50  2.04–8.00  1.92–7.79
            N           221        221        221            221        221        221
Objects     Mean        5.12       5.15       5.12           5.08       5.31       5.32
            SD          0.69       1.31       1.27           0.68       1.22       1.18
            Min–Max     2.93–7.30  1.85–7.91  1.35–7.96      2.67–7.04  2.14–7.91  1.59–7.74
            N           328        328        328            328        328        328
Landscapes  Mean        4.10       6.14       6.26           4.19       6.12       6.19
            SD          1.12       1.55       1.38           1.00       1.47       1.34
            Min–Max     1.53–7.17  2.57–8.59  2.66–8.53      1.78–6.95  2.15–8.44  2.35–8.44
            N           185        185        185            185        185        185

All scales range from 1 to 9, in which 1 represents negative, relaxing, or tendency to avoid, depending on the dimension being considered

Correlation analyses of emotional dimensions for gender and content categories

The importance of sex differences has been documented in cognitive processes such as memory, emotion, and vision (see Cahill, 2006, for a review). It has been shown that the same visual stimuli may elicit different levels of arousal and valence in males and females. Relative to men, women react more strongly to unpleasant materials. They rate IAPS pictures as being more unpleasant and arousing, and react with higher corrugator electromyographic activity and greater event-related potential amplitudes (Bradley et al., 2001; Lithari et al., 2010; McManis et al., 2001). On the other hand, men tend to rate pleasant pictures, especially erotica, as more pleasant and more arousing than women do, and show significantly greater electrodermal activity. Finally, pleasant and unpleasant IAPS stimuli have been shown to activate different neuronal structures in women and men (Wrase et al., 2003).

Taking images as cases, we used Pearson’s correlations to examine the relationships between ratings of valence, arousal, and approach–avoidance for each category and sex separately (see Table 3 and Figs. 2, 3 and 4). Correlation coefficients for men were also directly compared to those for women using a calculation for the test of the difference between two independent correlation coefficients (Preacher, 2002). This calculation involves converting the two correlation coefficients into z scores using Fisher’s r-to-z transformation. Then, making use of the sample size employed to obtain each coefficient, these z scores are compared using Formula 2.8.5 from Cohen and Cohen (1983, p. 54):
Table 3

Correlation coefficients resulting from correlations of ratings of valence, arousal, and approach–avoidance with one another, listed for each category and gender

                          Women                  Men
Category     Dimension    Valence    Arousal     Valence    Arousal
Faces        Arousal      –.814                  –.696
             AP-AV        .962       –.802       .939       –.647
People       Arousal      –.869                  –.753
             AP-AV        .975       –.853       .954       –.696
Animals      Arousal      –.861                  –.757
             AP-AV        .977       –.863       .967       –.757
Objects      Arousal      –.768                  –.502
             AP-AV        .967       –.755       .941       –.414
Landscapes   Arousal      –.896                  –.824
             AP-AV        .974       –.875       .965       –.788

AP-AV, approach–avoidance

Fig. 2

Behavioral ratings for valence (y-axis) and arousal (x-axis) in each category, for women exclusively, men exclusively, and both groups together. Each single dot represents the rating for a particular image on a two-dimensional scale, with standard deviations (SDs) in gray

Fig. 3

Behavioral ratings for avoidance–approach (y-axis) and arousal (x-axis) in each category, for women exclusively, men exclusively, and both groups together. Each single dot represents the rating for a particular image on a two-dimensional scale, with standard deviations (SDs) in gray

Fig. 4

Behavioral ratings for valence (y-axis) and avoidance–approach (x-axis) in each category, for women exclusively, men exclusively, and both groups together. Each single dot represents the rating for a particular image on a two-dimensional scale, with standard deviations (SDs) in gray

$$ Z=\frac{Z_1-Z_2}{SD_Z}, $$
where $SD_Z = \sqrt{1/(N_1-3)+1/(N_2-3)}$, and $N_1$ and $N_2$ are the sample sizes.
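The comparison of two independent correlation coefficients described above can be written out in a few lines. As a check on the logic (assuming, per Table 2, 328 object pictures per group when images are taken as cases), comparing the women's and men's valence–arousal correlations for objects (r = –.768 vs. r = –.502) reproduces the z ≈ –5.91 reported in the text:

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of the difference between two independent
    correlation coefficients (Fisher r-to-z; Cohen & Cohen, 1983,
    Formula 2.8.5)."""
    z1 = math.atanh(r1)                 # Fisher r-to-z transform
    z2 = math.atanh(r2)
    sd_z = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / sd_z
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-tailed normal p value
    return z, p

z, p = compare_correlations(-0.768, 328, -0.502, 328)  # objects, women vs. men
```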

All dimensions were highly correlated in both men and women (all ps < .001; see Table 3 for the correlation coefficients). However, women had higher correlation coefficients than did men in all cases, except for the correlations between valence and approach–avoidance for animals and landscapes. In the case of the correlations between valence and arousal, the differences between women and men were the strongest for objects (z = –5.91, p < .001, effect size = 0.46) and people (z = –3.88, p < .001, effect size = 0.35), and the weakest for landscapes (z = –2.69, p = .007, effect size = 0.28). Similarly, in the case of arousal correlated with approach–avoidance, the strongest difference between genders was visible for objects (z = –6.94, p < .001, effect size = 0.54), and the weakest for landscapes (z = –2.75, p = .006, effect size = 0.29). For the correlations between valence and approach–avoidance, differences were found for objects (effect size = 0.29), people (effect size = 0.31), and faces (effect size = 0.24) (3.29 ≤ z ≤ 3.79, ps ≤ .001).

Physical properties of images

The properties of each image were computed with Python-based (www.python.org) in-house software using SciPy (Version 0.10.1, www.scipy.org) and the Python Imaging Library (for JPEG compression; Version 1.1.7). Luminance was defined as the average pixel value of the grayscaled image, and the contrast was defined as the standard deviation across all pixels of the grayscaled image (Bex & Makous, 2002). JPEG size can be used as an index of the overall complexity of an image (Donderi, 2006). Perceptually simple images are highly compressible, therefore resulting in smaller file size. The JPEG sizes of the color images were determined with a compression quality setting of 80 (on a scale from 1 to 100). As an additional index of image complexity, the entropy of each grayscaled image was determined. Entropy, H, is computed from the histogram distribution of the 8-bit gray-level intensity values x: H = –Σp(x)log p(x), where p represents the probability of an intensity value x. Entropy varies with the “randomness” of an image—low-entropy images have rather large uniform areas with limited contrast (e.g., a dark sky), whereas images with high entropy are images that are more “noisy” and have a high degree of contrast from one pixel to the next. In addition, each picture was converted to the CIE L*a*b* color space. This space, unlike RGB color space, is based on the opponent-process theory of color vision and approximates characteristics of the human visual system. In this system, the L* dimension corresponds to luminance (range: 0–100), and a* and b* correspond to two chromatic channels ranging from red (positive values) to green (negative values), and from blue (negative values) to yellow (positive values) (Tkalcic & Tasic, 2003). For each image and channel, the mean across all pixels was calculated. 
For example, a high positive value in the a* dimension indicates that a particular picture contains a large amount of “red color.” Values for the different physical properties for each picture are listed in Table S1 of the supplemental materials.
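The luminance, contrast, entropy, and JPEG-size measures defined above can be computed with NumPy and Pillow. This is a sketch of the definitions given in the text, not the authors' in-house software; the logarithm base for entropy is not stated in the text, so base 2 is assumed here, and the L*a*b* means would additionally require a color-space conversion (e.g., skimage.color.rgb2lab), omitted for brevity:

```python
import io
import numpy as np
from PIL import Image

def physical_properties(img: Image.Image) -> dict:
    """Luminance, contrast, entropy, and JPEG size of one picture."""
    gray = np.asarray(img.convert("L"), dtype=np.float64)

    luminance = float(gray.mean())   # average grayscale pixel value
    contrast = float(gray.std())     # SD across all grayscale pixels

    # Shannon entropy H = -sum p(x) log p(x) over the 8-bit histogram
    counts = np.bincount(gray.astype(np.uint8).ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())

    # JPEG file size as a complexity index (compression quality 80)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=80)
    jpeg_size = buf.tell()

    return {"luminance": luminance, "contrast": contrast,
            "entropy": entropy, "jpeg_size": jpeg_size}
```

A uniform gray image yields zero contrast and zero entropy and compresses to a very small file, whereas pixel noise yields entropy near the 8-bit maximum and a much larger JPEG, illustrating the complexity interpretation above.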

Methodology validation

The affective spaces of NAPS and IAPS are presented in Fig. 5. Ratings from the NAPS database show a more linear association between the valence and arousal dimensions, as compared to the “boomerang-shaped” relation found in the IAPS database (Lang et al., 2008). It appears that this difference is due to the arousal dimension. In the case of IAPS, both positive and negative pictures were rated as arousing, whereas neutral pictures were rated as falling at the other extreme of the scale. In NAPS, however, negative images were rated as arousing, positive as relaxing, and neutral at the middle of the arousal scale.
Fig. 5

The affective spaces of the International Affective Picture System (IAPS) and Nencki Affective Picture System (NAPS), with image descriptions

In order to directly compare the slider ratings obtained with the methodology applied in the present study to the ratings obtained using the SAM scale (Lang et al., 1999), two additional experiments were conducted. In these experiments, a subset of images from the NAPS (n = 48) and IAPS (n = 48) databases was chosen. First, the IAPS pictures were selected to cover the whole affective space (excluding erotic images), and then NAPS pictures were matched to them for content: landscapes, smiling faces, objects, snakes, wild animals, accidents, mutilated faces, and so forth. The full list of images is presented in the supplemental materials as Table S2.

A total of 96 images in each study were presented in pseudorandom order with respect to image content and source database. Each image was displayed for 3 s, after which a smaller version of the picture and the rating scales remained available until the participant had completed all of the ratings. The images from the NAPS were downsampled to match the lower resolution of the IAPS images. Two separate procedures were conducted with different groups of volunteers. In the first study, 14 participants (eight women, six men; mean age = 23.5 years, SD = 1.4) underwent the procedure described above with slider scales, but with the valence and arousal scales only. In the second study, 14 participants (eight women, six men; mean age = 23.7 years, SD = 1.2) rated the stimuli using a computerized version of the SAM, again for the valence and arousal scales only. A power analysis indicated that a sample size of 13 was sufficient to detect a significant correlation (effect size = 0.7) with a power of .80 and an alpha of .05. Both studies were conducted using English instructions and scale descriptions. The participants assigned to the two groups were matched for age and education; they were mostly Polish and other European students of the Warsaw International Studies in Psychology program.
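The power analysis for a correlation can be approximated with the standard Fisher r-to-z formula; this sketch (not the authors' software, whose rounding conventions are unknown) uses the usual two-tailed normal quantiles and gives a required sample size of about 13.4, in line with the reported figure.

```python
import math

def n_for_correlation(r, z_alpha=1.959964, z_beta=0.841621):
    """Approximate sample size needed to detect a correlation r with a
    two-tailed test at alpha = .05 and power = .80, via the Fisher
    r-to-z approximation: n = ((z_alpha/2 + z_beta) / C)^2 + 3,
    where C = atanh(r)."""
    c = math.atanh(r)
    return ((z_alpha + z_beta) / c) ** 2 + 3

print(n_for_correlation(0.7))  # ~13.4
```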

For the IAPS images, we obtained strong correlations between the ratings from the slider scale and those from the SAM for both valence (r = .929, p < .001) and arousal (r = .753, p < .001). Table 4 presents the correlations between the ratings gathered in the present study and previous normative ratings for the IAPS images (Lang et al., 2008; Libkuman, Otani, Kern, Viger, & Novak, 2007). Again, the ratings obtained using the slider scale correlated strongly with the previous norms for valence (r = .944, p < .001, and r = .842, p < .001, for the Lang et al. and Libkuman et al. norms, respectively). For the arousal scale, a stronger correlation was obtained with the Lang et al. (r = .833, p < .001) than with the Libkuman et al. (r = .303, p < .05; difference z = 4.2, p < .001) norms. However, the ratings obtained with the slider scale showed a significantly stronger linear correlation between the valence and arousal dimensions (r = –.793, p < .001) than did the ratings obtained with the SAM (r = –.583, p < .003, z = 1.96, p = .05, for the present sample; r = –.516, p < .001, z = 2.41, p = .016, for Lang et al.; and r = –.390, p < .05, z = 3.17, p = .002, for Libkuman et al.).
Table 4

Correlations for a subset of 48 IAPS pictures

 

IAPS pictures

                     1        2        3        4        5        6        7
1  VAL_Lang
2  VAL_Libkuman   .842**
3  VAL_SAM        .909**   .886**
4  VAL_Slider     .944**   .842**   .929**
5  ARO_Lang      –.516**  –.487**  –.603**  –.630**
6  ARO_Libkuman  –.107    –.390*   –.219    –.170     .567**
7  ARO_SAM       –.512**  –.493**  –.583**  –.588**   .883**   .542**
8  ARO_Slider    –.753**  –.652**  –.759**  –.793**   .833**   .303*    .753**

VAL, valence; ARO, arousal. *p < .05, **p < .001
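The z statistics comparing pairs of correlation coefficients in the text were computed with Preacher's (2002) calculator for two independent correlations; the underlying Fisher r-to-z test can be sketched as follows. Treating the two samples of 48 pictures as independent (as that calculator does) reproduces the reported values.

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed z test for the difference between two independent
    correlation coefficients, via Fisher's r-to-z transformation."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (math.atanh(r1) - math.atanh(r2)) / se
    # two-tailed p from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Arousal ratings: slider vs. Lang norms r = .833, slider vs. Libkuman
# norms r = .303, each over 48 pictures.
z, p = compare_correlations(0.833, 48, 0.303, 48)
print(round(z, 1), p < 0.001)  # 4.2 True
```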

For the NAPS pictures, we also obtained strong correlations between the ratings obtained with the slider scale and SAM, for both valence (r = .962, p < .001) and arousal (r = .745, p < .001). Again, the correlation between valence and arousal was slightly, but not significantly, higher for the ratings obtained using the slider scale (r = –.747, p < .001) than for those with the SAM (r = –.649, p < .001).

Discussion

In the present study, we have presented a comprehensive battery of static, emotionally charged and emotionally neutral visual stimuli that is available for use by the scientific community. Following empirical suggestions to divide emotionally charged stimuli into meaningful content categories (Weinberg & Hajcak, 2010), the database provides images in five content categories—people, faces, animals, objects, and landscapes—which distinguishes it from other databases. The database should facilitate examination of the influence of context on emotion elicitation by giving researchers greater control over stimulus content. All pictures in each category are of high quality, with a minimum resolution of 1,600 × 1,200 pixels.

We examined the influence of gender on the correlations between valence, arousal, and approach–avoidance for the different categories of pictures. The between-gender differences in the correlation coefficients, together with the scatterplots, indicate that the correlations between arousal and valence were stronger in women than in men, especially for pictures of people, faces, and objects. This finding might reflect a rating bias reported by Bradley and Lang (2007) for the IAPS pictures, which was particularly evident for unpleasant pictures (Schaaff, 2008). Women appear to be more susceptible to this bias than men for pictures depicting humans and objects; in other words, they tend to rate unpleasant pictures, in particular, as more arousing. Consistent with this, electrophysiological studies have shown that women exhibit greater event-related potential amplitudes for unpleasant and highly arousing stimuli than do men (Lithari et al., 2010).

In the present study, we used a computerized slider moved along a gradually colored bar to obtain 9-point ratings on several dimensions. To validate this methodology, we conducted two additional experiments using subsets of images from the NAPS and IAPS, comparing the ratings obtained with the slider scale to those obtained with a computerized version of the SAM. For both the valence and arousal dimensions, we found strong correlations between the ratings collected using the slider scale and the SAM. These correlations were significant for the SAM ratings gathered from the present sample, as well as for previously obtained norms (Lang et al., 2008; Libkuman et al., 2007). In contrast to the SAM, however, the semantic slider scale produced a more linear relationship between valence and arousal. This finding most probably reflects the fact that the arousal scale was more bipolar in the slider version (going from relaxed to aroused) than in the SAM (from unaroused to aroused; Ribeiro, Teixeira-Silva, Pompéia, & Bueno, 2007), and it might explain why the affective space of the NAPS does not show the boomerang shape seen in the affective space of the IAPS. Alternatively, adding highly arousing pleasant (erotic) images to the NAPS might change the distribution of ratings in the affective space. Researchers using the NAPS should be aware that this procedure, like the GAPED rating method (Dan-Glauser & Scherer, 2011), may yield rating values different from those obtained with the SAM scale. We therefore suggest that researchers collect separate SAM-based ratings for specific sets of NAPS images if they want to compare their results directly to those obtained using the IAPS. We also advise an additional rating procedure when combining images from different static affective image sets (GAPED, EmoPicS, or IAPS), or when the physical properties of the images must be controlled, as in studies using electroencephalography (Wiens et al., 2011).

An additional limitation of NAPS is that the present version of the database lacks very positive (high-valence) pictures with high arousal (e.g., pictures with erotic content). However, we are in the process of adding these images to the database. Further analysis aimed at segregating basic emotions (Mikels et al., 2005) within NAPS is also in progress. Images without standardized ratings can also be provided to researchers on request. The database, together with the dimensional ratings and physical properties of each image, is available to the scientific community, for noncommercial use only, on request.

Footnotes

  1.

    Note that additional stimulus databases are available to the scientific community. Here, we focus on those that have most often been employed and that have been validated according to the emotional theories. For more information on neutral and emotionally charged stimuli, see the website www.cla.temple.edu/cnl/STIMULI/index.html.

Notes

Author note

We are grateful to Anna Jaworek for helpful comments on the construction of the NAPS and the interpretation of the results. We are also grateful to Katarzyna Paluch, Małgorzata Wierzba, and Łukasz Okruszek for help with participant recruitment. This study was supported by the Polish Ministry of Science and Higher Education (Iuventus Plus Grant Nos. IP2010 024070 and IP2011 033471). The authors have declared that no competing interests exist.

Supplementary material

13428_2013_379_MOESM1_ESM.xls (834 kb)
Supplementary Table 1 (XLS 834 kb)
13428_2013_379_MOESM2_ESM.xlsx (16 kb)
Supplementary Table 2 (XLSX 15.6 kb)

References

  1. Aue, T., Hoeppli, M. E., & Piguet, C. (2012). The sensitivity of physiological measures to phobic and nonphobic fear intensity. Journal of Psychophysiology, 26, 154–167.
  2. Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1, 28–58. doi:10.1111/j.1745-6916.2006.00003.x
  3. Beaupré, M. G., Cheung, N., & Hess, U. (2000). The Montreal Set of Facial Displays of Emotion. Montreal, Quebec, Canada.
  4. Belin, P., Fillion-Bilodeau, S., & Gosselin, F. (2008). The Montreal Affective Voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behavior Research Methods, 40, 531–539. doi:10.3758/BRM.40.2.531
  5. Bex, P. J., & Makous, W. (2002). Spatial frequency, phase, and the contrast of natural images. Journal of the Optical Society of America, 19, 1096–1106.
  6. Bonin, P., Méot, A., Aubert, L., Niedenthal, P. M., & Capelle-Toczek, M.-C. (2003). Normes de concrétude, de valeur d’imagerie, de fréquence subjective et de valence émotionnelle pour 866 mots [Norms of concreteness, imagery value, subjective frequency, and emotional valence for 866 words]. L’Année Psychologique, 103, 655–694.
  7. Bradley, M., & Lang, P. J. (2007). The International Affective Picture System (IAPS) in the study of emotion and attention. In J. A. Coan & J. J. B. Allen (Eds.), The handbook of emotion elicitation and assessment (pp. 29–46). New York, NY: Oxford University Press.
  8. Bradley, M. M., Codispoti, M., Sabatinelli, D., & Lang, P. J. (2001). Emotion and motivation II: Sex differences in picture processing. Emotion, 1, 300–319.
  9. Bradley, M. M., Hamby, S., Löw, A., & Lang, P. J. (2007). Brain potentials in perception: Picture complexity and emotional arousal. Psychophysiology, 44, 364–373. doi:10.1111/j.1469-8986.2007.00520.x
  10. Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The Self-Assessment Manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25, 49–59.
  11. Bradley, M. M., & Lang, P. J. (1999). International Affective Digitized Sounds (IADS): Stimuli, instruction manual and affective ratings (Technical Report No. B-2). Gainesville, FL: University of Florida, Center for Research in Psychophysiology.
  12. Briesemeister, B. B., Kuchinke, L., & Jacobs, A. M. (2011). Discrete emotion norms for nouns: Berlin Affective Word List (DENN-BAWL). Behavior Research Methods, 43, 441–448. doi:10.3758/s13428-011-0059-y
  13. Cahill, L. (2006). Why sex matters for neuroscience. Nature Reviews Neuroscience, 7, 477–484. doi:10.1038/nrn1909
  14. Carver, C. S., & Harmon-Jones, E. (2009). Anger is an approach-related affect: Evidence and implications. Psychological Bulletin, 135, 183–204. doi:10.1037/a0013965
  15. Castro, S. L., & Lima, C. F. (2010). Recognizing emotions in spoken language: A validated set of Portuguese sentences and pseudosentences for research on emotional prosody. Behavior Research Methods, 42, 74–81. doi:10.3758/BRM.42.1.74
  16. Chammat, M., Jouvent, R., Dumas, G., Knoblauch, K., & Dubal, S. (2011). Interactions between luminance contrast and emotionality in visual pleasure and contrast appearance. Perception, 40(ECVP Abstract Suppl.), 22.
  17. Codispoti, M., & De Cesarei, A. (2007). Arousal and attention: Picture size and emotional reactions. Psychophysiology, 44, 680–686.
  18. Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
  19. Colden, A., Bruder, M., & Manstead, A. S. R. (2008). Human content in affect-inducing stimuli: A secondary analysis of the International Affective Picture System. Motivation and Emotion, 32, 260–269.
  20. Dalgleish, T. (2004). The emotional brain. Nature Reviews Neuroscience, 5, 583–589.
  21. Dan-Glauser, E. S., & Scherer, K. R. (2011). The Geneva Affective Picture Database (GAPED): A new 730-picture database focusing on valence and normative significance. Behavior Research Methods, 43, 468–477.
  22. Darwin, C. (1872). The expression of the emotions in man and animals. London, UK: John Murray.
  23. Davis, W. J., Rahman, M. A., Smith, L. J., & Burns, A. (1995). Properties of human affect induced by static color slides (IAPS): Dimensional, categorical and electromyographic analysis. Biological Psychology, 41, 229–253.
  24. Donderi, D. C. (2006). Visual complexity: A review. Psychological Bulletin, 132, 73–97. doi:10.1037/0033-2909.132.1.73
  25. Eilola, T. M., & Havelka, J. (2010). Affective norms for 210 British English and Finnish nouns. Behavior Research Methods, 42, 134–140. doi:10.3758/BRM.42.1.134
  26. Ekman, P. (1992). Are there basic emotions? Psychological Review, 99, 550–553. doi:10.1037/0033-295X.99.3.550
  27. Ekman, P., & Matsumoto, D. (1993–2004). Japanese and Caucasian Facial Expressions of Emotion (JACFEE). Palo Alto, CA: Consulting Psychologists Press.
  28. Fujimura, T., Matsuda, Y.-T., Katahira, K., Okada, M., & Okanoya, K. (2011). Categorical and dimensional perceptions in decoding emotional facial expressions. Cognition and Emotion, 26, 587–601. doi:10.1080/02699931.2011.595391
  29. Gable, P., & Harmon-Jones, E. (2010). The motivational dimensional model of affect: Implications for breadth of attention, memory, and cognitive categorisation. Cognition and Emotion, 24, 322–337.
  30. Gerrards-Hesse, A., Spies, K., & Hesse, F. W. (1994). Experimental inductions of emotional states and their effectiveness: A review. British Journal of Psychology, 85, 55–78.
  31. Grabowska, A., Marchewka, A., Seniów, J., Polanowska, K., Jednoróg, K., Królicki, L., & Członkowska, A. (2011). Emotionally negative stimuli can overcome attentional deficits in patients with visuo-spatial hemineglect. Neuropsychologia, 49, 3327–3337.
  32. Greenwald, M. W., Cook, E. W., & Lang, P. J. (1989). Affective judgment and psychological response: Dimensional covariation in the evaluation of pictorial stimuli. Journal of Psychophysiology, 3, 51–64.
  33. Grühn, D., & Scheibe, S. (2008). Age-related differences in valence and arousal ratings of pictures from the International Affective Picture System (IAPS): Do ratings become more extreme with age? Behavior Research Methods, 40, 512–521. doi:10.3758/BRM.40.2.512
  34. Horvat, M., Popović, S., & Ćosić, K. (2012). Towards semantic and affective coupling in emotionally annotated databases. In Proceedings of the 35th International ICT Convention MIPRO 2012 (pp. 1003–1008).
  35. Horvat, M., Popović, S., & Ćosić, K. (2013). Multimedia stimuli databases usage patterns: A survey report. In Proceedings of the 36th International ICT Convention MIPRO 2013 (pp. 1265–1269).
  36. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1999). International Affective Picture System (IAPS): Instruction manual and affective ratings (Technical Report No. A-4). Gainesville, FL: University of Florida, Center for Research in Psychophysiology.
  37. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Technical Report No. A-8). Gainesville, FL: University of Florida, Center for Research in Psychophysiology.
  38. Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at pictures: Affective, facial, visceral, and behavioral reactions. Psychophysiology, 30, 261–273.
  39. Libkuman, T. M., Otani, H., Kern, R., Viger, S. G., & Novak, N. (2007). Multidimensional normative ratings for the International Affective Picture System. Behavior Research Methods, 39, 326–334. doi:10.3758/BF03193164
  40. Lithari, C., Frantzidis, C. A., Papadelis, C., Vivas, A. B., Klados, M. A., Kourtidou-Papadeli, C., & Bamidis, P. D. (2010). Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions. Brain Topography, 23, 27–40. doi:10.1007/s10548-009-0130-5
  41. Liu, P., & Pell, M. D. (2012). Recognizing vocal emotions in Mandarin Chinese: A validated database of Chinese vocal emotional stimuli. Behavior Research Methods, 44, 1042–1051.
  42. Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces—KDEF [CD-ROM]. Stockholm, Sweden: Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet. ISBN 91-630-7164-9.
  43. Marchewka, A., Jednoróg, K., Nowicka, A., Brechmann, A., & Grabowska, A. (2009). Grey-matter differences related to true and false recognition of emotionally charged stimuli—A voxel based morphometry study. Neurobiology of Learning and Memory, 92, 99–105. doi:10.1016/j.nlm.2009.03.003
  44. Marchewka, A., & Nowicka, A. (2007). Emotionally negative stimuli are resistant to repetition priming. Acta Neurobiologiae Experimentalis, 67, 83–92.
  45. Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition and Emotion, 23, 209–237.
  46. McManis, M. H., Bradley, M. M., Berg, W. K., Cuthbert, B. N., & Lang, P. J. (2001). Emotional reactions in children: Verbal, physiological, and behavioral responses to affective pictures. Psychophysiology, 38, 222–231.
  47. Michałowski, J. M., Pané-Farré, C. A., Löw, A., Weymar, M., & Hamm, A. O. (2011). Modulation of the ERP repetition effects during exposure to phobia-relevant and other affective pictures in spider phobia. International Journal of Psychophysiology, 85, 55–61.
  48. Mikels, J. A., Fredrickson, B. L., Larkin, G. R., Lindberg, C. M., Maglio, S. J., & Reuter-Lorenz, P. A. (2005). Emotional category data on images from the International Affective Picture System. Behavior Research Methods, 37, 626–630.
  49. Modinos, G., Pettersson-Yeo, W., Allen, P., McGuire, P. K., Aleman, A., & Mechelli, A. (2012). Multivariate pattern classification reveals differential brain activation during emotional processing in individuals with psychosis proneness. NeuroImage, 59, 3033–3041. doi:10.1016/j.neuroimage.2011.10.048
  50. Moltó, J., Montañés, S., Poy, R., Segarra, P., Pastor, M. C., & Tormo, M. P. (1999). Un nuevo método para el estudio experimental de las emociones: El International Affective Picture System (IAPS). Adaptación española [A new method for the experimental study of emotions: The International Affective Picture System (IAPS). Spanish adaptation]. Revista de Psicología General y Aplicada, 52, 55–87.
  51. Nordström, H., & Wiens, S. (2012). Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention. BMC Neuroscience, 13, 49.
  52. Olofsson, J. K., Nordin, S., Sequeira, H., & Polich, J. (2008). Affective picture processing: An integrative review of ERP findings. Biological Psychology, 77, 247–265. doi:10.1016/j.biopsycho.2007.11.006
  53. Posner, J., Russell, J. A., Gerber, A., Gorman, D., Colibazzi, T., Yu, S., … Peterson, B. S. (2009). The neurophysiological bases of emotion: An fMRI study of the affective circumplex using emotion-denoting words. Human Brain Mapping, 30, 883–895. doi:10.1002/hbm.20553
  54. Preacher, K. J. (2002). Calculation for the test of the difference between two independent correlation coefficients [Computer software]. Retrieved from http://quantpsy.org
  55. Redondo, J., Fraga, I., Padrón, I., & Comesaña, M. (2007). The Spanish adaptation of ANEW (Affective Norms for English Words). Behavior Research Methods, 39, 600–605. doi:10.3758/BF03193031
  56. Rozenkrants, B., Olofsson, J. K., & Polich, J. (2008). Affective visual event-related potentials: Arousal, valence, and repetition effects for normal and distorted pictures. International Journal of Psychophysiology, 67, 114–123.
  57. Ribeiro, R. L., Pompéia, S., & Bueno, O. F. A. (2005). Comparison of Brazilian and American norms for the International Affective Picture System (IAPS). Revista Brasileira de Psiquiatria, 27, 208–215. doi:10.1590/S1516-44462005000300009
  58. Ribeiro, R. L., Teixeira-Silva, F., Pompéia, S., & Bueno, O. F. (2007). IAPS includes photographs that elicit low-arousal physiological responses in healthy volunteers. Physiology and Behavior, 91, 671–675.
  59. Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161–1178.
  60. Schaaff, K. (2008). Challenges on emotion induction with the International Affective Picture System. Karlsruhe, Germany: Universität Karlsruhe (TH).
  61. SPSS. (2009). PASW Statistics for Windows (Version 18.0). Chicago, IL: SPSS Inc.
  62. Stevenson, R. A., & James, T. W. (2008). Affective auditory stimuli: Characterization of the International Affective Digitized Sounds (IADS) by discrete emotional categories. Behavior Research Methods, 40, 315–321. doi:10.3758/BRM.40.1.315
  63. Stevenson, R. A., Mikels, J. A., & James, T. W. (2007). Characterization of the Affective Norms for English Words by discrete emotional categories. Behavior Research Methods, 39, 1020–1024. doi:10.3758/BF03192999
  64. Tettamanti, M., Rognoni, E., Cafiero, R., Costa, T., Galati, D., & Perani, D. (2012). Distinct pathways of neural coupling for different basic emotions. NeuroImage, 59, 1804–1817.
  65. Tkalcic, M., & Tasic, J. F. (2003). Colour spaces: Perceptual, historical and applicational background. In Proceedings of EUROCON 2003: Computer as a Tool (Vol. 1, pp. 304–308). Piscataway, NJ: IEEE.
  66. Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., & Nelson, C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168, 242–249.
  67. Vieillard, S., Peretz, I., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition and Emotion, 22, 720–752.
  68. Viinikainen, M., Kätsyri, J., & Sams, M. (2011). Representation of perceived sound valence in the human brain. Human Brain Mapping, 33, 2295–2305.
  69. Vila, S., Sánchez, M., Ramírez, I., Fernández, M. C., Cobos, P., Rodríguez, S., & Moltó, J. (2001). El Sistema Internacional de Imágenes Afectivas (IAPS): Adaptación española. Segunda parte [The International Affective Picture System (IAPS): Spanish adaptation. Second part]. Revista de Psicología General y Aplicada, 54, 635–657.
  70. Võ, M. L.-H., Conrad, M., Kuchinke, L., Urton, K., Hofmann, M. J., & Jacobs, A. M. (2009). The Berlin Affective Word List Reloaded (BAWL-R). Behavior Research Methods, 41, 534–538. doi:10.3758/BRM.41.2.534
  71. Watson, D., Wiese, D., Vaidya, J., & Tellegen, A. (1999). The two general activation systems of affect: Structural findings, evolutionary considerations, and psychobiological evidence. Journal of Personality and Social Psychology, 76, 820–838.
  72. Weinberg, A., & Hajcak, G. (2010). Beyond good and evil: The time-course of neural activity elicited by specific picture content. Emotion, 10, 767–782. doi:10.1037/a0020242
  73. Wessa, M., Kanske, P., Neumeister, P., Bode, K., Heissler, J., & Schönfelder, S. (2010). EmoPics: Subjektive und psychophysiologische Evaluationen neuen Bildmaterials für die klinisch-biopsychologische Forschung [EmoPics: Subjective and psychophysiological evaluation of new picture material for clinical-biopsychological research]. Zeitschrift für Klinische Psychologie und Psychotherapie, 39(Suppl. 1/11), 77.
  74. Wiens, S., Sand, A., & Olofsson, J. K. (2011). Nonemotional features suppress early and enhance late emotional electrocortical responses to negative pictures. Biological Psychology, 86, 83–89.
  75. Wrase, J., Klein, S., Gruesser, S. M., Hermann, D., Flor, H., Mann, K., & Heinz, A. (2003). Gender differences in the processing of standardized emotional visual stimuli in humans: A functional magnetic resonance imaging study. Neuroscience Letters, 348, 41–45.

Copyright information

© The Author(s) 2013

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  • Artur Marchewka (1, 3)
  • Łukasz Żurawski (2)
  • Katarzyna Jednoróg (2)
  • Anna Grabowska (2)

  1. Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, Warsaw, Poland
  2. Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland
  3. Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, Warsaw, Poland