Functional connectivity dynamics during film viewing reveal common networks for different emotional experiences

  • Gal Raz
  • Alexandra Touroutoglou
  • Christine Wilson-Mendenhall
  • Gadi Gilam
  • Tamar Lin
  • Tal Gonen
  • Yael Jacob
  • Shir Atzil
  • Roee Admon
  • Maya Bleich-Cohen
  • Adi Maron-Katz
  • Talma Hendler
  • Lisa Feldman Barrett
Article

Abstract

Recent theoretical and empirical work has highlighted the role of domain-general, large-scale brain networks in generating emotional experiences. These networks are hypothesized to process aspects of emotional experiences that are not unique to a specific emotional category (e.g., “sadness,” “happiness”), but rather that generalize across categories. In this article, we examined the dynamic interactions (i.e., changing cohesiveness) between specific domain-general networks across time while participants experienced various instances of sadness, fear, and anger. We used a novel method for probing the network connectivity dynamics between two salience networks and three amygdala-based networks. We hypothesized, and found, that the functional connectivity between these networks covaried with the intensity of different emotional experiences. Stronger connectivity between the dorsal salience network and the medial amygdala network was associated with more intense ratings of emotional experience across six different instances of the three emotion categories examined. Also, stronger connectivity between the dorsal salience network and the ventrolateral amygdala network was associated with more intense ratings of emotional experience across five out of the six different instances. Our findings demonstrate that a variety of emotional experiences are associated with dynamic interactions of domain-general neural systems.

Keywords

Stimulus-induced functional connectivity · Network cohesion · Amygdala · Emotions

Evidence increasingly suggests that coordinated, large-scale networks are implicated across a variety of emotional experiences (Barrett, 2006, 2012; Barrett & Satpute, 2013; Lindquist, Wager, Kober, Bliss-Moreau, & Barrett, 2012; Touroutoglou, Lindquist, Dickerson, & Barrett, 2015; Wilson-Mendenhall, Barrett, & Barsalou, 2015). Situating recent meta-analytic evidence on the brain basis of emotion within a growing systems neuroscience literature reveals that the regions involved in emotion are distributed across multiple, anatomically constrained “resting-state” networks that contribute to many psychological phenomena (Barrett & Satpute, 2013; Kober et al., 2008; Lindquist & Barrett, 2012; Lindquist et al., 2012). Specific regions that had initially appeared in meta-analytic data to be more active for one emotion category than for others (e.g., more active for fear than for sadness or disgust; Fusar-Poli et al., 2009; Vytal & Hamann, 2010) did not replicate in a more comprehensive meta-analysis (Lindquist & Barrett, 2012); instead, these regions operate in large-scale networks that are not specific to any given emotion category (Touroutoglou et al., 2015), or even to the domain of emotion (Anderson, 2015). These networks support domain-general functions, such as executive function, affiliation, and salience detection, and thus contribute to constructing emotional experiences (as well as other kinds of experiences; Barrett & Satpute, 2013; Lindquist & Barrett, 2012). When regional or network patterns emerge for one emotion category versus another, it is because that category of emotional experiences tends to draw more heavily on certain domain-general functions than others do (Saarimäki et al., 2015; Tettamanti et al., 2012; Wager et al., 2015; Wilson-Mendenhall, Barrett, Simmons, & Barsalou, 2011).

Consistent with meta-analytic evidence and other recent neuroscience findings, emerging psychological-construction approaches emphasize that a domain-general network approach is powerful because the interplay of networks could account for the wide variety of emotions that people experience in real life (e.g., Barrett, 2013; Wilson-Mendenhall et al., 2015). In contrast to traditional “basic” emotion approaches (e.g., Ekman, 1999; Panksepp, 2004), which remain focused on identifying the biological signatures of five or so emotion categories, psychological-construction approaches motivate a new empirical focus on the role of domain-general networks in different emotional experiences. Here, we examined whether the large-scale resting-state networks that have been associated with domain-general processes dynamically coordinate during the emergence of different emotional experiences.

Neuroimaging studies have identified several networks that appear to be involved in constructing a variety of emotional experiences: (1) a “salience network” encompassing the anterior insula and anterior cingulate cortex (Hermans et al., 2011; Kober et al., 2008; Lindquist et al., 2012; Seeley et al., 2007; Touroutoglou, Bickart, Barrett, & Dickerson, 2014; Touroutoglou, Hollenbeck, Dickerson, & Barrett, 2012) and (2) three amygdala-based networks whose seeds are located in the medial, ventrolateral, and dorsal aspects of the amygdala and are associated with the domain-general processes of affiliation, social perception, and aversive response, respectively (Bickart, Hollenbeck, Barrett, & Dickerson, 2012).

The term “salience” refers to the allocation of attention to sensory signals that are relevant for allostasis (i.e., the process of keeping various physiological systems in balance; for a definition of “salience,” see Barrett & Simmons, 2015; for a discussion of allostasis, see Sterling, 2012). Both the salience and amygdala-based networks are important for predicting and attending to stimuli that will influence allostasis. Visceromotor limbic cortices (notably the cingulate cortices, the anterior insula, and medial prefrontal cortices) are key regions in the brain that regulate allostasis (for a review, see Barrett & Simmons, 2015; Chanes & Barrett, 2016), and all can be found in networks that regulate attention. According to recent theoretical accounts that use an active inference approach, as well as anatomical detail from tract-tracing studies, these visceromotor regions within the salience and amygdala-based networks send visceromotor commands to the subcortical structures that control the autonomic systems (e.g., the hypothalamus and periaqueductal gray matter); they also anticipate the sensory consequences of these visceromotor changes by sending interoceptive predictions to the primary interoceptive cortex in the posterior insula, which are then corrected by actual interoceptive inputs from the body (Barrett & Simmons, 2015). These interoceptive predictions (and their adjustments) are the basis of affective feelings. As the intensity of emotional experience increases, greater coupling is observed between these networks, as they function to keep the body’s systems in balance in the face of an evocative stimulus.

Although the importance of the salience and the amygdala-based networks in affective experience has been established individually, the goal of the present investigation was to examine the dynamic connectivity between these networks in relation to the fluctuating intensity of subjective emotional experience induced by movies. Dynamic interactions have been increasingly highlighted in recent neuroscientific accounts, conceptualizing emotions less as punctate phenomena and more as events that unfold over time (Barrett, 2013; Lewis, 2005; Scherer, 2009). We have hypothesized that emotions are generated through the interactions of networks involved in domain-general processes that are not specific to emotion (e.g., Barrett, 2013; Barrett & Satpute, 2013; Barrett & Simmons, 2015). In light of these hypotheses, we examined time-varying functional connectivity during the dynamic emergence of emotional experiences induced by the film clips. Naturalistic stimuli such as film clips elicit strong subjective and physiological changes by introducing dynamic, real-world social situations (Gross & Levenson, 1995; Schaefer, Nils, Sanchez, & Philippot, 2010). We hypothesized a correspondence between the temporal patterns of network connectivity and the intensity of emotional experiences both within and between different emotion categories.

To examine this hypothesis, we used a newly developed approach for extracting a continuous network cohesion index (NCI; Fig. 1) (Raz et al., 2014; Raz et al., 2012). The NCI is a sliding-window estimator of the connectivity between brain networks’ nodes derived from functional magnetic resonance imaging (fMRI). NCI time courses can be compared with time-varying indices of emotional experiences to investigate the relevance of network dynamics to the emotional experience. The present study is unique in taking a dynamic approach to measuring time-varying functional connectivity and time-varying subjective reports of emotional experience.
Fig. 1

a Illustration of the sensitivity of the network cohesion index (NCI) to phasic coupling of signals. Each of the colored lines represents the blood oxygen level dependent (BOLD) time course of a node in a specific network defined on the basis of prior knowledge (schematically represented as points on a glass brain in panel b). The data presented here were taken from a random, representative participant. The upper gray curve indicates the average signal at each time point. The gray curve at the bottom represents the NCI computed for this network. The color bar areas mark the intervals of increased NCI. Note that during these intervals, no global peaks of the mean signal are evident, but rather fluctuations of the signals that follow similar temporal trends. This indicates that the NCI is indeed sensitive to the extent to which the fluctuations are homogeneous, as expected. b The NCIs were computed as the t statistics for a set of Fisher z-transformed pairwise correlations between the signals of the nodes either within a network (intra-NCI) or between networks (inter-NCI)

We specifically examined the internetwork cohesion of the salience and amygdala-based networks during experiences of anger, sadness, and fear while participants viewed film clips that induced these emotion categories (Rottenberg, Ray, & Gross, 2007). The participants in four different samples first viewed each film during scanning and then watched each clip a second time outside the scanner while making continuous intensity ratings of the most relevant subjective emotional experience that was induced by the clip (see Table 1). We predicted that as the cohesion of the salience and amygdala-based networks increased, so too would the self-reported intensity of the sadness, fear, and anger experienced.
Table 1

Details on the cinematic stimuli and the sample groups

Film Title | Duration (min:sec) | Sample Size | Target Emotion Category | Content Description
Sophie’s Choice | 10:00 | 44 | Sadness | Sophie tells her friend about her traumatic experience: she was forced by a Nazi officer at the Auschwitz concentration camp to choose which of her two children would be taken from her.
Stepmom | 8:21 | 43 | Sadness | A mother talks with each of her children separately about her impending death from a terminal disease.
The X-Files, “Home” episode | 5:00 | 28 | Fear | A couple is attacked by a group of zombies at their home.
The Ring 2 | 8:15 | 14 | Fear | A mother looks for her lost son in a bazaar; a couple is attacked by a deer while driving their car.
Avenge but One of My Two Eyes | 5:27 | 74* / 42** | Anger | A left-wing activist confronts soldiers who delay the return home of Palestinian children at a checkpoint.

*Time Point I, **Time Point II

Method

Participants and materials

Data were collected from four independent samples of healthy volunteers who had no known history of neurological or psychiatric disorder, had at least 12 years of education, and spoke Hebrew. The data for all of the movies were part of larger datasets collected for projects examining hypotheses unrelated to this study. All of the participants signed an informed consent form approved by the ethics committees of the Tel Aviv Sourasky Medical Center.
  • Sample 1: Participants watched a film clip taken from Sophie’s Choice (Pakula, 1983; 10:00 min) and one taken from Stepmom (Columbus, 1998; 8:21 min). The target emotion category in both cases was sadness. However, since the clips presented different numbers of dramatic peaks, they were expected to induce different temporal profiles of emotional intensity. Valid ratings and fMRI data were obtained from 44 (25 females, 19 males; mean age = 26.73 ± 4.69 years, range = 21–37) individuals while watching the Sophie’s Choice clip, and 43 (22 females, 21 males; mean age = 26.93 ± 4.86 years, range = 21–37) individuals while watching the Stepmom clip. These were previously analyzed in the context of empathy-related processing (Raz et al., 2012).

  • Sample 2: Valid data were obtained from 28 participants (13 females, 15 males; mean age = 23.48 ± 1.01 years, range = 22–26) while watching an excerpt from The X-Files (Manners,“Home” episode, 1996; 5:00 min) with the target emotion of fear.

  • Sample 3: Valid data were collected from 14 participants (two females, 12 males; mean age = 25.5 ± 4.15 years, range = 22–39) while watching an excerpt from The Ring 2 (Nakata, 2005; 8:15 min) with the target emotion of fear.

  • Sample 4: Participants watched an excerpt from the documentary Avenge but One of My Two Eyes (Mograbi, 2005; 5:21 min) twice, at two time points with a one-year gap between viewings. The target emotion category was anger. In a pilot study examining anger-inducing clips, we found that only the clip from Avenge effectively and distinctively elicited anger; other candidate clips, such as one from Schindler’s List, elicited mixed responses in which anger was not the dominant emotion category. Valid data were collected from 74 individuals (males only; mean age = 19.51 ± 1.45 years, range = 18–21.5) at Time Point 1, and from 42 individuals at Time Point 2 (mean age = 19.7 ± 0.82 years, range = 19–22).

Behavioral data acquisition

Movie task

All participants were instructed to passively view the films and pay attention to the cinematic events. In Sample 1, the two sadness clips (Sophie’s Choice and Stepmom) were presented in a counterbalanced order, with a 10-min break in between. Each film was preceded and followed by an epoch during which the participants passively gazed at an all-black slide; this epoch lasted 30 s for the X-Files and Avenge clips and 2 min for the Ring 2, Sophie’s Choice, and Stepmom clips.

Emotion label rating inventory

Outside of the scanner, after scanning was complete, participants were asked to provide a detailed report of their emotional experience by completing an emotion category label inventory. The aim of the emotion label inventory was to provide an emotional profile of the clips rather than to capture individual responses to the cinematic material. The emotion labeling provided a richer description of the emotional experiences constructed during the films and complementary information on the relevance of the target emotion categories that were preselected for the continuous emotional-experience intensity-rating task. The inventory contained 76 emotion labels that were adopted from Shaver, Schwartz, Kirson, and O’Connor (1987), translated into Hebrew, and presented along with their corresponding annotations, adapted from the Rav-Milim Hebrew dictionary (see also Fig. S2). The participants rated how intensely they experienced each emotion category on a 7-point Likert-like scale (1 = negligible, 7 = very high intensity). In Samples 2 and 3, the emotion label inventory was completed by independent samples of 16 participants each (mean age = 31.81 ± 6.61 years, range = 22–48, and 29.87 ± 5.57 years, range = 22–48, respectively).

To test whether the three film categories used in the study differed in how they were emotionally tagged by the participants, we performed a multivariate analysis of variance (MANOVA). The subjective ratings of the 76 emotion labels were pooled for each category (i.e., “fear,” “sadness,” and “anger”) and used as dependent variables in this test. A post-hoc Wilcoxon rank-sum test was used to specifically verify cross-category differences in the three target emotion categories.
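As an illustration of the post-hoc step, the rank-sum comparison can be sketched in Python with NumPy (an illustrative stand-in for the statistics software actually used; the normal approximation and the simple tie handling are our simplifying assumptions):

```python
import numpy as np
from math import erf, sqrt

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie-variance correction) - a sketch of the post-hoc test."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    allv = np.concatenate([x, y])
    # 1-based ranks; tied values receive the average of their ranks
    order = allv.argsort()
    ranks = np.empty(len(allv))
    ranks[order] = np.arange(1, len(allv) + 1)
    for v in np.unique(allv):
        mask = allv == v
        ranks[mask] = ranks[mask].mean()
    w = ranks[:n1].sum()                       # rank sum of the first group
    mu = n1 * (n1 + n2 + 1) / 2.0              # expected rank sum under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p
```

For example, ratings of a target label that are clearly higher in its own film category than in another yield a large positive z and a small p.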

To further test the predominance of the target emotion and the related emotion labels within the relevant categories, we performed a comparison based on factor analysis. Ten factors were extracted from the labeling data, pooled over all of the cinematic conditions, using the maximum likelihood estimation procedure (Harman, 1976) as implemented in the MATLAB “factoran” function. For each category we selected the factor for which the target emotion had one of the three highest loadings. The weighted target factors were compared with the two other factors (after z-scoring the loadings) within each of the three categories using a paired two-sided Wilcoxon rank-sum test. The ratings of the two cinematic conditions were pooled within the categories before testing.
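The factor-selection rule can be sketched as follows (Python/NumPy as an illustrative stand-in; the loadings matrix itself would come from MATLAB's factoran, and the function and variable names here are ours):

```python
import numpy as np

def select_target_factor(loadings, labels, target):
    """Pick the factor (column of the loadings matrix) on which the
    target emotion label has one of its three highest loadings; among
    candidates, choose the one where the target loads highest.
    A sketch of the selection rule described in the text."""
    loadings = np.asarray(loadings, dtype=float)
    row = labels.index(target)
    candidates = []
    for f in range(loadings.shape[1]):
        top3 = np.argsort(loadings[:, f])[-3:]   # rows of 3 highest loadings
        if row in top3:
            candidates.append(f)
    return max(candidates, key=lambda f: loadings[row, f])

def zscore_loadings(loadings):
    """z-score each factor's loadings before weighting, as in the
    within-category comparison."""
    l = np.asarray(loadings, dtype=float)
    return (l - l.mean(axis=0)) / l.std(axis=0, ddof=0)
```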

Continuous emotional experience intensity ratings

In the postscan viewing session, the participants rewatched the designated movie clips while continuously reporting shifts in the intensity of a single target emotional experience (sadness, fear, or anger in Samples 1, 2–3, and 4, respectively). Participants were instructed to report the intensity of their emotional experience as they had experienced it while watching the film clip during the first viewing in the scanner. Ratings were made retrospectively to avoid contaminating the fMRI recording; retrospections made this close in time to the original experience contain little bias and correlate highly with the original experience (Raz et al., 2012; Robinson & Clore, 2002). Each rating was sampled at 10 Hz using in-house software. The participants used a vertical scale indicating seven levels of intensity, from neutral to very intense (each level contained three sublevels, so participants had to press three times to advance to the next level; see Fig. S1). Two participants in Sample 4 and four in Sample 3 had missing data due to technical difficulties.

fMRI data acquisition and preprocessing

All scans during film viewing were obtained by a GE 3-T Signa Excite echo speed scanner with an eight-channel head coil located at the Wohl Institute for Advanced Imaging at the Tel Aviv Sourasky Medical Center. The structural scans included a T1-weighted 3-D axial spoiled gradient echo (SPGR) pulse sequence (repetition time [TR]/echo time [TE] = 7.92/2.98 ms, 150 slices, slice thickness = 1 mm, flip angle = 15°, pixel size = 1 mm, field of view [FOV] = 256 × 256 mm). Functional whole-brain scans were performed in interleaved order with a T2*-weighted gradient echo planar imaging pulse sequence (TR/TE = 3,000/35 ms, flip angle = 90°, pixel size = 1.56 mm, FOV = 200 × 200 mm, slice thickness = 3 mm, 39 slices per volume). Active noise-cancelling headphones (Optoacoustics) were used.

Preprocessing was performed using Brain Voyager QX version 2.4 (Goebel, Esposito, & Formisano, 2006). Head motions were detected and corrected using trilinear and sinc interpolations, respectively, applying rigid-body transformations with three translation and three rotation parameters. The data were high-pass filtered at 0.008 Hz. Spatial smoothing with a 6-mm full width at half maximum (FWHM) kernel was applied. To avoid the confounding effect of fluctuations in the whole-brain blood oxygenation level dependent (BOLD) signal, for each TR, each voxel was scaled by the global mean at that time point. The anatomical SPGR data were standardized to 1×1×1 mm and transformed into Talairach space after manual co-registration with the corresponding functional maps.
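The global-mean scaling step can be sketched as follows (a minimal Python/NumPy illustration of the idea, not the Brain Voyager implementation; the array layout is an assumption):

```python
import numpy as np

def scale_by_global_mean(bold):
    """Scale each voxel by the whole-brain mean at each time point.

    bold: (n_voxels, n_timepoints) array. At every TR, each voxel's
    value is divided by the global mean over voxels at that TR,
    removing whole-brain signal fluctuations."""
    global_mean = bold.mean(axis=0)      # one value per TR
    return bold / global_mean[None, :]   # broadcast over voxels
```

After scaling, the whole-brain mean at every TR equals 1, so global fluctuations no longer drive apparent covariation between regions.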

In all, 21, 20, 27, 3, and 6 datasets were discarded due to various technical failures and excessive head motion (deviations greater than 1.5 mm or 1.5° from the reference point) for Stepmom, Sophie’s Choice, Avenge, The Ring 2, and The X-Files, respectively (see Table 1 for the final numbers of analyzed datasets).

Definition of regions of interest (ROIs) within the salience and amygdala-based networks

The coordinates of the major nodes within the salience network used for the network cohesion analysis had been previously identified in Touroutoglou et al. (2012; see Table 2 and Fig. 2). We used ROIs within each of the two anatomically separable and functionally distinct salience subnetworks—the dorsal and ventral salience subnetworks (Nelson et al., 2010; Touroutoglou et al., 2012). Similarly, the major nodes within the three amygdala-based networks—the medial, ventrolateral, and dorsal amygdala networks—had been previously identified in Bickart et al. (2012; see Table 2 and Fig. 2). The determination of the coordinates of the major nodes within the amygdala-based networks is reported in the supplementary materials.
Table 2

Network nodes and the Talairach coordinates of their centers

Region | x | y | z

Ventral Salience Network
 Right ventral insula | 25 | 15 | –7
 Left ventral insula | –36 | 12 | –4
 Right superior frontal gyrus | 19 | 46 | 34
 Right pregenual ACC | 1 | 30 | 22
 Right putamen | 16 | 6 | –2
 Left putamen | –18 | 5 | –3

Dorsal Salience Network
 Right dorsal insula | 32 | 18 | 7
 Left dorsal insula | –38 | –2 | 5
 Right dorsal ACC | –3 | 7 | 46
 Right middle frontal gyrus | 36 | 24 | 49
 Left middle frontal gyrus | –35 | 27 | 39
 Right supramarginal gyrus | 52 | –34 | 45
 Left supramarginal gyrus | –55 | –39 | 38

Medial Amygdala Network
 Right medial amygdala | 12 | –4 | –14
 Left medial amygdala | –14 | –4 | –14
 Right anterior hippocampus | 21 | –13 | –15
 Left anterior hippocampus | –21 | –13 | –15
 Right nucleus accumbens | 6 | 6 | –2
 Left nucleus accumbens | –8 | 6 | –2
 Ventromedial prefrontal cortex | 0 | 39 | –6
 Right temporal pole | 36 | 22 | –27
 Left temporal pole | –32 | 20 | –25
 Subgenual ACC | 1 | 23 | 1

Dorsal Amygdala Network
 Right dorsal amygdala | 19 | –5 | –7
 Left dorsal amygdala | –21 | –4 | 7
 Caudal ACC | –1 | 6 | 37
 Right middle insula | 34 | –3 | –1
 Right hypothalamus | 5 | –5 | –3
 Right red nucleus | 1 | –17 | –17
 Right putamen | 25 | 2 | 0
 Left putamen | –25 | 3 | –1

Ventral Amygdala Network
 Right ventral amygdala | 25 | –4 | –15
 Left ventral amygdala | –26 | –3 | –16
 Right temporal pole | 36 | 16 | –26
 Left temporal pole | –35 | 16 | –25
 Right superior temporal sulcus | 46 | –2 | –15
 Left superior temporal sulcus | –47 | 0 | –11
 Right fusiform gyrus | 33 | –12 | –27
 Left fusiform gyrus | –34 | –12 | –30
 Right lateral orbitofrontal cortex | 33 | 29 | –9
 Left lateral orbitofrontal cortex | –34 | 28 | –10

ACC, anterior cingulate cortex

Fig. 2

Nodes of the amygdala and the salience connectivity networks

Network cohesion index analysis

To compute the NCIs (Raz et al., 2014; Raz et al., 2012), the BOLD signal was first extracted for each node within each network (the dorsal salience, ventral salience, medial amygdala, ventrolateral amygdala, and dorsal amygdala networks) using a Gaussian mask with a 3-mm radius around the seed coordinates, including only voxels whose Gaussian weight was higher than 1 %. Pearson correlations between the signals of all pairs of cross-network nodes were then computed in sliding 30-s (ten-TR) time windows. The Pearson coefficients of the relevant nodes were Fisher z-transformed for each time window within each participant, and a Student’s t test was performed on their values to assess the strength of internetwork connectivity. Unlike in Raz et al. (2012), we did not introduce a lag between the NCIs. NCIs were computed only for networks whose nodes did not overlap spatially, since such an overlap introduces strong dependencies between the signals. The dorsal amygdala network was found to overlap spatially with both the ventral and dorsal salience networks, so the cross-network NCIs involving these networks were excluded from the analysis.
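The core NCI computation can be sketched in Python with NumPy (an illustrative re-expression of the sliding-window procedure, not the authors' MATLAB code; function names and the clipping safeguard are ours):

```python
import numpy as np
from itertools import product

def inter_nci(net_a, net_b, win=10):
    """Sliding-window inter-network cohesion index (sketch).

    net_a, net_b: (n_nodes, n_timepoints) BOLD signals of two
    non-overlapping networks. For each `win`-TR window, Pearson
    correlations between every cross-network node pair are Fisher
    z-transformed and summarized as a one-sample t statistic
    (mean / standard error), following the NCI definition in the text."""
    n_t = net_a.shape[1]
    pairs = list(product(range(net_a.shape[0]), range(net_b.shape[0])))
    nci = []
    for start in range(n_t - win + 1):           # overlap of win-1 TRs
        zs = []
        for i, j in pairs:
            r = np.corrcoef(net_a[i, start:start + win],
                            net_b[j, start:start + win])[0, 1]
            # clip guards arctanh against |r| == 1 in short windows
            zs.append(np.arctanh(np.clip(r, -0.999999, 0.999999)))
        zs = np.asarray(zs)
        nci.append(zs.mean() / (zs.std(ddof=1) / np.sqrt(len(zs))))
    return np.asarray(nci)
```

When the two networks' node signals fluctuate coherently, the cross-network correlations (and hence the t statistic) rise, yielding a high NCI in that window.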

Before comparing the emotional ratings with the NCIs, we excluded an interval of seven TRs (approximately the span of the hemodynamic response) from the onset of each time series, to minimize novelty effects related to the change from the rest to the movie condition (Raz et al., 2014; Raz et al., 2012). The rating time series were preprocessed so that their length matched that of the NCI series: median values of the rated emotion intensity were computed in sliding windows of ten TRs with an overlap of nine TRs (as in the NCI computation). Using a Spearman’s rank test, the temporal pattern of the NCI was then correlated with the ratings for each individual participant. The correlations were computed over series including only nonoverlapping time windows.
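The rating windowing and the nonoverlapping-window correlation can be sketched as follows (Python/NumPy as an illustrative stand-in for the in-house analysis; the simple ranking here assumes untied values, which is an assumption of this toy sketch):

```python
import numpy as np

def sliding_median(rating, win=10):
    """Median rating in sliding windows of `win` TRs (overlap win-1),
    mirroring the NCI windowing described in the text."""
    return np.array([np.median(rating[i:i + win])
                     for i in range(len(rating) - win + 1)])

def spearman_nonoverlapping(x, y, step=10):
    """Spearman rank correlation restricted to every `step`-th window,
    so only nonoverlapping windows enter the correlation. Ranks are
    obtained via double argsort, which is valid when values are unique."""
    xs, ys = x[::step], y[::step]
    rx = xs.argsort().argsort().astype(float)
    ry = ys.argsort().argsort().astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))
```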

To test the statistical significance of the association between emotion intensity ratings and NCI indices, a two-tailed z test was performed on the Fisher z-transformed Spearman correlation coefficients, which were normally distributed.
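This group-level test can be sketched like so (a hedged illustration: we render it as a one-sample test on the Fisher-transformed coefficients against zero, using the normal CDF; the authors' exact test statistic may differ in detail):

```python
import numpy as np
from math import erf, sqrt

def group_z_test(rhos):
    """Two-tailed test on Fisher z-transformed Spearman coefficients
    across participants (sketch): transform, then test whether the
    mean differs from zero using the sample standard error.
    Returns (z statistic, two-tailed p)."""
    z = np.arctanh(np.asarray(rhos, dtype=float))
    zstat = z.mean() / (z.std(ddof=1) / np.sqrt(len(z)))
    # two-tailed p from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(zstat) / sqrt(2.0))))
    return float(zstat), float(p)
```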

Estimating the specificity of the results in the brain space

To control for the possibility that the association between the rating and the NCI results from a nonspecific source (e.g., head motions or other whole-brain physiological noise), a spatial bootstrapping method was employed (Raz et al., 2014; Raz et al., 2012). The sets of coordinates of the dorsal salience and medial amygdala networks were randomly translocated using translation, rotation, and mirror-flip transformations. These transformations preserve the set of distances between the nodes of the original networks. The ICBM 452 probability map (www.loni.usc.edu/atlases/Atlas_Detail.php?atlas_id=6) was used to generate a gray-matter mask. A Gaussian sphere with a 3-mm radius was generated around the node coordinates, and the weighted probability of each of the nodes included in the gray matter was computed. A random network was discarded if any of its nodes had a weighted gray-matter probability lower than 25 % (this threshold was selected because it allows for the inclusion of brainstem regions).

After the exclusion of overlapping randomized networks, 405 sets of coordinates were further used to test the specificity of the results. For each of the cinematic conditions and randomized networks, the analysis of NCI–behavioral associations described above was repeated. The z values were calculated for the comparisons between the emotional ratings and the NCIs of the randomized networks. The index of specificity of the dorsal salience–medial amygdala NCI was defined as the percentage of the randomized instances with a z value lower than the original result.
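The randomization and the specificity index can be sketched as follows (Python/NumPy, an illustration under stated assumptions: gray-matter masking is omitted, and the translation range is arbitrary):

```python
import numpy as np

def random_rigid_transform(coords, rng):
    """Apply a random rotation (via QR orthogonalization of a random
    matrix), an optional mirror flip, and a random translation to a
    set of node coordinates. Pairwise distances between nodes are
    preserved, as required by the spatial bootstrapping in the text."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if rng.random() < 0.5:            # mirror flip half the time
        q[:, 0] *= -1.0
    shift = rng.uniform(-20, 20, size=3)   # arbitrary range (assumption)
    return coords @ q.T + shift

def specificity(original_z, randomized_z):
    """Percentage of randomized networks whose z value is lower than
    the original result - the specificity index described in the text."""
    randomized_z = np.asarray(randomized_z)
    return 100.0 * np.mean(randomized_z < original_z)
```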

Simple ROI regression analyses of BOLD–rating correlations

Network dynamics analysis was compared with a standard general linear model (GLM) analysis in which ratings of the intensity of emotion experience were used as parametric regressors for the fMRI data. The ratings were convolved with a canonical hemodynamic response function, and a separate design matrix was generated for each of the films.

An additional analysis was performed to examine the possibility that the emotional ratings could be predicted by the network nodes’ BOLD signals alone, regardless of the cohesion index. Spearman’s coefficients for the nodes’ signals and the ratings were computed in two alternative ways. The first involved resampling the BOLD signal, similarly to the resampling used for the cohesion analysis: BOLD signals were extracted for each node of the networks of interest and averaged in sliding windows corresponding to the nonoverlapping windows used for the cohesion analysis. The second, as in standard fMRI regression analysis, involved no resampling: the BOLD signal of each node was compared with the emotional rating after the latter was convolved with a canonical hemodynamic response function.
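The convolution of a rating time series with a canonical HRF can be sketched as follows (the double-gamma parameters below are the commonly used SPM-style defaults, which the text does not specify, so they are an assumption of this sketch):

```python
import numpy as np
from math import gamma

def double_gamma_hrf(tr=3.0, duration=30.0):
    """Canonical double-gamma hemodynamic response function sampled
    at the TR (peak ~5 s, undershoot ~15 s; assumed defaults)."""
    t = np.arange(0.0, duration, tr)
    def gpdf(t, shape, scale):
        # gamma probability density function
        return (t ** (shape - 1) * np.exp(-t / scale)
                / (gamma(shape) * scale ** shape))
    hrf = gpdf(t, 6.0, 1.0) - gpdf(t, 16.0, 1.0) / 6.0
    return hrf / hrf.sum()                 # normalize to unit sum

def hrf_regressor(rating, tr=3.0):
    """Convolve a continuous rating time series with the HRF and trim
    to the original length, as in a standard GLM parametric regressor."""
    hrf = double_gamma_hrf(tr)
    return np.convolve(rating, hrf)[:len(rating)]
```

The resulting regressor lags and smooths the rating, reflecting the delayed hemodynamic response to the rated events.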

The Fisher z-transformed Spearman’s coefficients for the individual comparisons of the brain–behavioral indices were z-tested as described above. False discovery rate (FDR) correction was applied across 33 comparisons (the total number of nodes) for the five cinematic conditions.
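The FDR step can be sketched with the Benjamini-Hochberg procedure (a standard formulation offered here for illustration; the text does not specify which FDR variant was used for this step):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure: returns a boolean mask of
    rejected hypotheses. Reject the k smallest p values, where k is
    the largest i such that p_(i) <= q * i / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # per-rank thresholds
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest qualifying rank
        reject[order[:k + 1]] = True
    return reject
```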

Partial conjunction analysis

To examine the consistency of the link between the emotion ratings and the NCI (as well as other univariate fMRI measures), we applied partial conjunction analysis (Heller, Golland, Malach, & Benjamini, 2007). This method allows for testing a hypothesis using repeated tests under positive dependency. We applied the algorithm (available at www.math.tau.ac.il/~ruheller/Software.html), using a version that is compatible with positive dependencies. This procedure also controls the FDR (Benjamini & Yekutieli, 2001). Partial conjunction analysis was applied to test the consistency of the association between the ratings of subjective emotional experience and the ROI signals, as well as the whole-brain GLM, as described in Heller et al. (2007).

A post-hoc test of the link between the medial amygdala–dorsal salience NCI and a pattern of monotonic growth

A post-hoc analysis was designed to examine the possibility that the correlation of the medial amygdala–dorsal salience NCI with the emotional intensity ratings was confounded by the tendency of this neural index to increase monotonically during the scan. We assumed that if this confound were relevant, the replacement of the “double hump” pattern of the rating in Stepmom with a sham rating pattern, which is closer to monotonic growth, would increase the correlation between the medial amygdala–dorsal salience NCI and the rating.

The median of the emotional intensity ratings of sadness experienced during the Sophie’s Choice clip, which increased gradually over time, was resampled using the standard MATLAB resample function to fit the duration of the emotional rating in Stepmom. This regressor was then compared with the individual medial amygdala–dorsal salience NCIs, as described above.
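The length-matching step can be sketched with linear interpolation (a simple NumPy stand-in for MATLAB's resample, which uses polyphase filtering; for a slowly varying rating series the two give similar results):

```python
import numpy as np

def resample_rating(rating, new_len):
    """Linearly resample a rating time series to a new length by
    interpolating on a common normalized time axis."""
    old = np.linspace(0.0, 1.0, len(rating))
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, rating)
```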

Results

Emotion labeling and continuous rating

We first tested whether the specific film clips differed in terms of the reported emotional experiences they elicited. A MANOVA revealed that the profiles of rated emotion categories significantly differed across the emotional film categories (the transformed Wilks’s lambda corresponds to χ2 = 17,648 with 75 degrees of freedom, p < 5 × 10–32). FDR-corrected post-hoc contrast analyses indicated that the clips in each of the target emotion categories were rated higher for that category than were the clips in any of the other categories [Fear (FEAR) > Fear (SADNESS): z = 3.64, p < .0005; Fear (FEAR) > Fear (ANGER): z = 6.34, p < 5 × 10–10; Sadness (SADNESS) > Sadness (FEAR): z = 6.28, p < 5 × 10–10; Sadness (SADNESS) > Sadness (ANGER): z = 4.84, p < 5 × 10–6; Anger (ANGER) > Anger (FEAR): z = 5.6, p < 5 × 10–8; Anger (ANGER) > Anger (SADNESS): z = 5.72, p < 5 × 10–8; Fig. 3a].
Fig. 3

Post-hoc emotional labeling analysis. a Between-category comparison. The box plots indicate the target emotion ratings for each of the emotional categories. b Within-category comparison of the weighted factors in which fear, sadness, and anger are dominant. FEAR, SADNESS, and ANGER indicate the movie categories by the target emotion. *p < .0005, **p < 5 × 10–6, ***p < 5 × 10–10, ****p < 5 × 10–15, *****p < 5 × 10–20

For five out of the six film clips, participants rated their experience as being most intense according to the target emotion label (sadness while watching the Sophie’s Choice and Stepmom clips; fear while watching the X-Files clip; anger while watching the Avenge clip; Table 3). The exception was The Ring 2, for which the target emotion category, fear, was rated second only to the related label fright. The continuous emotional experience ratings confirmed that each film was effective in eliciting the target emotional experience during the dramatic peaks of the clips. The peak sadness intensity scores were 19.5 and 13 out of 21 during the Sophie’s Choice and Stepmom clips, respectively (corresponding to the labels very high sadness and moderate to high sadness, respectively). The peak fear ratings were 12/21 (moderate fear) and 13/21 (moderate to high fear) during the X-Files and Ring 2 clips, respectively. And the peak anger intensity ratings were 19/21 (very high anger) and 18/21 (high anger) during the viewings of the Avenge clip.
Table 3

Emotional tagging of the cinematic conditions

Label            Median Rating   Frequency* (%)

Sophie’s Choice (N = 60)
 Sadness         5               91.67
 Compassion      5               83.33
 Mercy           5               81.67
 Anger           4               73.33
 Hate            4               56.67
 Fear            4               68.33
 Horror          4               65

Stepmom (N = 61)
 Sadness         5               86.89
 Compassion      5               81.97
 Mercy           4               85.26
 Sympathy        4               81.97

The X-Files (N = 16)
 Fear            5               81.25
 Fright          4               68.75
 Dread           4               62.5

The Ring 2 (N = 16)
 Fright          5               93.75
 Fear            4               87.5
 Dread           4               62.5

Avenge Time Point I (N = 88)
 Anger           5               92.55
 Hostility       5               80.85
 Resentment      5               56.38
 Nervousness     5               75.53
 Abhorrence      5               64.89
 Contempt        5               75.53
 Shame           4.5             67.02
 Hate            4               70.21
 Rage            4               70.21
 Compassion      4               63.83
 Sympathy        4               59.57
 Mercy           4               61.70
 Disappointment  4               72.34
 Humiliation     4               59.57
 Affection       4               61.70
 Frustration     4               50
 Agitation       4               60.63
 Pride           4               57.45
 Worry           4               44.68
 Insult          4               48.94
 Preparedness    4               64.89
 Unrest          4               57.45
 Aggression      4               64.89
 Interest        4               50
 Self-control    4               63.83
 Admiration      4               61.70

Avenge Time Point II (N = 50)
 Anger           4.5             94
 Shame           4               68
 Disgust         4               70
 Hostility       4               74
 Insult          4               68
 Contempt        4               72

*Frequency of cases in which the rating of the label was “moderate” or higher

We validated the predominance of the target emotion and related labels within each category by comparing the relevant factors computed by maximum likelihood estimation. As expected, we found three distinct, semantically coherent factors in which fear, sadness, and anger were dominant. Thus, the five highest loadings on the three relevant factors were given to the labels (1) anger, hate, rage, hostility, and abhorrence on Factor 1; (2) fright, fear, dread, horror, and shock on Factor 6; and (3) compassion, sadness, sympathy, mercy, and gloominess on Factor 7 (see Fig. S2 in the supplementary materials for the full composition of the factors). As expected, Factor 1, in which anger was dominant, was higher than Factors 6 (z = 10.37, p < 5 × 10–25) and 7 (z = 8.9, p < 5 × 10–18) for the anger-inducing films; Factor 6, in which fear was dominant, was higher than Factors 1 (z = 4.57, p < 5 × 10–6) and 7 (z = 4.51, p < 5 × 10–5) for the fear-inducing films; and Factor 7, in which sadness was dominant, was higher than Factors 1 (z = 9.1, p < 5 × 10–19) and 6 (z = 8.01, p < 5 × 10–15) for the sadness-inducing films (Fig. 3b).
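The step of fitting a maximum-likelihood factor model to the label ratings and reading off which labels load highest on each factor can be sketched as below. This is an illustrative sketch under stated assumptions, not the authors' pipeline: the rating matrix is synthetic, the label set is reduced to nine, and a varimax rotation is assumed for readability of the loadings.

```python
# Illustrative sketch (not the authors' pipeline): maximum-likelihood factor
# analysis of emotion-label ratings, then reading off the labels with the
# highest loadings on each factor. All data are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis  # ML estimation via EM

rng = np.random.default_rng(1)
labels = ["anger", "hate", "rage",
          "fright", "fear", "dread",
          "sadness", "compassion", "mercy"]

# Simulate 300 raters whose ratings are driven by three latent factors
# (anger-like, fear-like, sadness-like) of unequal strength, plus noise.
latent = rng.normal(size=(300, 3)) * np.array([1.5, 1.2, 1.0])
true_loadings = np.zeros((9, 3))
true_loadings[0:3, 0] = 1.0   # anger cluster
true_loadings[3:6, 1] = 1.0   # fear cluster
true_loadings[6:9, 2] = 1.0   # sadness cluster
ratings = latent @ true_loadings.T + 0.3 * rng.normal(size=(300, 9))

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

top_labels = []
for k in range(3):
    order = np.argsort(-np.abs(fa.components_[k]))
    top_labels.append({labels[i] for i in order[:3]})
    print(f"Factor {k}: {sorted(top_labels[-1])}")
```

With well-separated clusters, each rotated factor concentrates its loadings on one semantically coherent group of labels, mirroring the anger, fear, and sadness factors reported above.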

Network interactions correlate with the intensity of emotional experiences

As predicted, the internetwork cohesion indices were consistently related to the intensities of two different experiences of sadness, as well as to the intensities of fear and anger experiences. Specifically, the NCI reflecting connectivity between the dorsal salience and medial amygdala networks was significantly positively correlated with the intensity ratings of the target emotional experience across all six cinematic conditions (Fig. 4a and Table 4). Partial conjunction analysis indicated a significantly consistent link between the medial amygdala–dorsal salience NCI and the ratings across all six conditions (qFDR < .05). In the case of the dorsal salience–ventrolateral amygdala coefficient (see Fig. S3 in the supplementary materials), significant correlations with the emotional experience intensity rating were found for five out of the six conditions. Significant correlations with the ratings were also found for the ventral salience–ventrolateral amygdala and ventral salience–medial amygdala NCIs in two and one out of six conditions, respectively.
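The core brain–behavior test here is a rank correlation between two time series: an NCI time course and the continuous intensity ratings. A minimal sketch of that computation, with synthetic series standing in for the real data and one common large-sample z approximation, is shown below (the paper's exact inferential procedure may differ).

```python
# Illustrative sketch (not the authors' code): correlating an NCI time course
# with continuous emotion-intensity ratings via Spearman's rho. Both series
# are synthetic. Note that autocorrelation in real fMRI/rating series calls
# for more careful null models (e.g., phase randomization).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_tr = 240                                        # e.g., number of TRs in a clip

rating = np.cumsum(rng.normal(size=n_tr))         # smooth hypothetical rating course
nci = rating + rng.normal(scale=3.0, size=n_tr)   # hypothetical NCI tracking it

rho, p = spearmanr(rating, nci)

# Large-sample z value via the Fisher transform of rho.
z = np.arctanh(rho) * np.sqrt((n_tr - 3) / 1.06)
print(f"rho = {rho:.2f}, z = {z:.2f}, p = {p:.2g}")
```

Per-condition z values of this kind are the quantities tabulated in Table 4; the consistency across conditions is then assessed with partial conjunction analysis.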
Fig. 4

Relations between rated emotional intensity and the dorsal salience–medial amygdala NCI across five emotional cinematic conditions. a Time courses of the median ratings of emotion intensity (colored) overlaid on the dorsal salience–medial amygdala NCI (black). The colored and gray dashed lines indicate rating interquartile ranges and standard errors, respectively. The gray areas indicate the standard errors. b The specificity of the results in the brain space was assessed using a bootstrapping method. A frequency histogram presents the z values resulting from comparisons between the ratings and NCIs computed for random networks. The black arrows indicate the z values of the original comparisons. *p < .05, **p < .01, ***p < 5 × 10–4, ****p < 5 × 10–9

Table 4

Calculated z values for tests of individual Spearman’s coefficients between rated emotional intensity and network cohesion indices

Networks                                   Sophie’s Choice   Stepmom   The X-Files   The Ring 2   Avenge I   Avenge II
Dorsal salience–Medial amygdala            6.45§             4.02***   2.94***       2.73**       3.04***    3.4***
Dorsal salience–Ventrolateral amygdala     5.23§             2.77**    0.68          3.06***      3.09***    3.17***
Ventral salience–Medial amygdala           6.2§              1.65      3.19**        2.35*        2.78**     0.82
Ventral salience–Ventrolateral amygdala    5.45§             1.77      1.47          3.14***      0.83       0.44

*p < .05; **p < .01, qFDR < .05; ***p < .005, qFDR < .05; §p < 5 × 10–5, qFDR < .05

We employed a bootstrapping method to assess whether the observed brain–experience relationship was unique to the specific constellation of the dorsal salience and medial amygdala networks, or rather reflected a global brain phenomenon that was not specific to our networks of interest. For each film clip, the association between the intensity ratings and the NCI of the dorsal salience and medial amygdala networks showed high specificity in brain space (i.e., there was a low chance of obtaining similar or stronger correlations between the ratings and the cohesion of sets of gray-matter regions when those regions were selected randomly; Fig. 4b). We found specificity of 100 % for the Sophie’s Choice clip, 99.11 % for the Stepmom clip, 97.54 % for the X-Files clip, 99.55 % for the Ring 2 clip, 97.07 % for the first display of Avenge, and 99.33 % for the second display of Avenge. Furthermore, when we replaced the median rating of sadness during the Stepmom clip with the median rating of sadness for the Sophie’s Choice clip (which showed a monotonic ascent), we did not observe a significant correlation between the dorsal salience–medial amygdala NCI and the intensity ratings. This indicates that the correlation between internetwork cohesion and the intensity of sadness was not due to a general tendency of the NCI to correlate with a pattern of monotonic growth.
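The specificity logic (comparing the observed rating–index correlation to a null distribution built from randomly drawn region sets) can be sketched as follows. This is an illustrative sketch, not the authors' code: all signals are synthetic, and the region-set "index" is a simple mean signal standing in for the NCI, which is actually a time-varying connectivity measure.

```python
# Illustrative sketch (not the authors' code) of the bootstrap specificity
# test: compare the observed rating-index correlation against a null
# distribution from randomly selected region sets. Everything is synthetic,
# and the mean signal is a stand-in for the connectivity-based NCI.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_tr, n_regions, n_boot, net_size = 200, 400, 1000, 20

rating = np.cumsum(rng.normal(size=n_tr))        # hypothetical rating course
signals = rng.normal(size=(n_regions, n_tr))     # hypothetical regional signals
true_net = np.arange(net_size)
signals[true_net] += rating                      # give the "true" network a real link

def index_for(region_idx):
    # Stand-in for an NCI: mean signal over the region set.
    return signals[region_idx].mean(axis=0)

obs_rho, _ = spearmanr(rating, index_for(true_net))

null = np.empty(n_boot)
for b in range(n_boot):
    rand_net = rng.choice(n_regions, size=net_size, replace=False)
    null[b], _ = spearmanr(rating, index_for(rand_net))

# Specificity: percentage of random region sets with a weaker correlation.
specificity = 100.0 * np.mean(null < obs_rho)
print(f"observed rho = {obs_rho:.3f}, specificity = {specificity:.1f} %")
```

A specificity near 100 % means that almost no randomly assembled set of gray-matter regions tracks the ratings as well as the networks of interest do, which is the pattern reported above for all six conditions.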

Further supporting our predictions, the NCI reflecting connectivity between the dorsal salience and the ventrolateral amygdala networks was significantly correlated with the rating of emotional intensity across five of the six conditions (Table 4).

The ventral salience NCIs were not consistently associated with continuous ratings of emotional experience, although they were statistically significant during the Sophie’s Choice film clip (Table 4).

Finally, we performed tests to confirm that the NCI analysis indeed captured a relationship between network connectivity and emotional experience that could not be captured by simpler measures, such as the BOLD signal. In a whole-brain GLM analysis (Fig. 5, Table S1, Fig. S4), the most consistent brain–behavior correlations were found in regions implicated in visual scene processing along both the ventral and dorsal pathways. The peak effects (consistent in four out of six conditions at qFDR < .05) were found bilaterally in the fusiform gyrus (19 and 14 voxels in the right and left hemispheres, respectively) and the ventral–caudal intraparietal sulcus (three and one voxels, part of a larger cluster, in the right and left hemispheres, respectively). Other effects were found in frontal regions, mainly in the orbitofrontal cortex and frontal eye field, but their consistency level was lower. Additionally, we examined whether the ratings of emotional experience intensity correlated with the BOLD signal alone in the nodes of the networks of interest. Although several nodes’ BOLD signals correlated with the ratings of emotional experience (see Tables S1 and S2 in the supplementary materials), these correlations were not statistically significant across all six movie-viewing conditions. For example, the BOLD signal within the right ventral anterior insula correlated with the ratings of emotional experience during the fear- and anger-inducing clips, but not during the sadness-inducing clips.
Fig. 5

General linear model analysis with emotion intensity ratings as parametric regressors over the six film clips. Partial conjunction analysis was used to examine the consistency of the associations between these indices. The colors indicate the robustness of the results (the numbers of conditions in which a significant effect was found, out of the total number of conditions), as regions with high probability scores (red in online figure) are more consistently correlated with the ratings across cinematic conditions. Only the effects that were consistent in two or more conditions are shown. The results were thresholded at qFDR < .05

Discussion

Inspired by psychological-construction approaches to emotion (Barrett, 2006, 2012), in our study we examined the hypothesis that dynamic interactions of domain-general, large-scale intrinsic brain networks support the emergence of a variety of experiences both within and between different emotion categories (Barrett, 2006, 2012; Barrett & Satpute, 2013; Lindquist & Barrett, 2012; Touroutoglou et al., 2015). We found a consistent relationship between time-varying internetwork functional connectivity measures and time-varying self-reported emotional intensity across several different categories of emotional experiences. More specifically, the salience network, anchored by dorsal anterior insula, showed increased connectivity to regions of the medial amygdala network involved in social affiliation as cinematic experiences of anger, sadness, and fear became more intense. We also observed that the connectivity strength between the dorsal salience network and the ventrolateral amygdala network increased as emotional experiences became more intense in five of the cinematic conditions. Our findings highlight the added value of analyzing time-varying connectivity dynamics as a complementary approach to a static, intrinsic-network analysis. Using a newly developed functional connectivity method during film viewing, we have demonstrated for the first time that the time-varying dynamic coupling of networks correlates with emotional intensity. Because the ratings did not show a consistent relationship with any given network node’s BOLD signals, our results suggest that the time-varying connectivity between networks is what is important for understanding emotional intensity (rather than the activity of single nodes).

These findings are also in line with a growing body of evidence that supports a domain-general, constructionist approach to emotion categories (Kober et al., 2008; Lindquist et al., 2012). The insula-based network regions associated with salience and the amygdala-based network regions associated with social affiliation (Bickart et al., 2012) have consistently demonstrated increases in activation in meta-analyses of neuroimaging studies of anger, sadness, fear, disgust, and happiness (Kober et al., 2008; Lindquist et al., 2012). Interestingly, the medial amygdala “social-affiliation” network shares many overlapping regions with the “default mode” network, which includes midline cortical, lateral prefrontal, and temporal lobe regions (Andrews-Hanna, Reidler, Huang, & Buckner, 2010; Buckner, Andrews-Hanna, & Schacter, 2008) that are consistently active during varieties of emotion categories such as anger, fear, happiness, and sadness (Lindquist et al., 2012; Wilson-Mendenhall et al., 2015; Wilson-Mendenhall et al., 2011). The ventromedial prefrontal cortex, an overlapping node in both the default mode and medial amygdala networks, appears to be particularly important for understanding the meaning of moment-to-moment changes in affective and social cues (for empirical evidence, see Roy, Shohamy, & Wager, 2012; for a discussion, see also Barrett, 2012; Barrett & Satpute, 2013).

The lack of specificity for the salience network suggests that it plays a more domain-general function across instances of anger, fear, and sadness, perhaps by representing the feeling of arousal that is common across different emotion categories. Salience network regions are implicated in a variety of tasks that involve unpleasant affect (Hayes & Northoff, 2011). More specifically, our findings are consistent with other recent work showing that salience network intrinsic connectivity is associated with negative affect (Touroutoglou et al., 2014; Touroutoglou et al., 2012; Touroutoglou et al., 2015) across different categories of negative emotions (Touroutoglou et al., 2015), as well as with evidence that it supports empathy (Decety & Jackson, 2004) and contains key regions responsible for visceromotor control (Craig, 2011; Ongur, Ferry, & Price, 2003; Vogt, 2005). Nodes within the dorsal salience network are also engaged during attention and executive function tasks (Corbetta, Patel, & Shulman, 2008; Nelson et al., 2010), and it has been suggested that the network helps guide “switching” between internally and externally focused events (Corbetta et al., 2008; Menon & Uddin, 2010). Finally, the ventral salience NCIs were associated with emotional experience in only two conditions: fear when watching The Ring 2 and sadness when watching Sophie’s Choice. It is possible that emotional intensity in these two dynamic movie clips was specifically related to intense changes in interoception (i.e., the perception of internal sensations from the core of the body; Barrett & Simmons, 2015). This is an important observation, because it shows clearly that each emotion category is a population of diverse instances that are constructed using different interacting networks (i.e., different varieties of fear [when watching The Ring 2 vs. The X-Files] and sadness [when watching Sophie’s Choice vs. Stepmom]). Future studies should investigate the distinct contributions of the dorsal and ventral salience networks to the dynamic interactions of the networks involved in constructing emotional experiences.

Similarly, nodes within the medial amygdala network involved in social affiliation are engaged during emotional experiences (Lindquist et al., 2012), and also appear to serve more domain-general functions. In particular, affiliation network nodes such as the ventromedial prefrontal cortex and hippocampus are engaged in autobiographical and semantic memory (Buckner et al., 2008), context-based object perception (Binder, Desai, Graves, & Conant, 2009), and moral reasoning (Bzdok et al., 2012), suggesting that this network might serve a more general function in understanding the feelings, desires, and needs of others.

In light of the evidence that individual instances of the anger, fear, and sadness categories each arise from interactions of domain-general networks, it would seem valuable for future fMRI studies of emotion to employ a systems neuroscience perspective. Because our study examined only negatively valenced experiences, it remains to be explored whether these findings would generalize to positive emotions. Although it is possible that the interaction of dorsal salience network connectivity with the amygdala networks is specifically related to various types of negative affect (Eryilmaz, Van De Ville, Schwartz, & Vuilleumier, 2011), evidence supporting the involvement of regions included in the salience and amygdala networks in positive affect suggests that these networks play a broader role in emotional experiences (Kober et al., 2008; Lindquist et al., 2012).

The high replicability of our findings suggests that the medial amygdala–dorsal insula and ventrolateral amygdala–dorsal insula NCIs may be useful in future research as neural markers of emotion-related, domain-general processes. Such a research agenda fits well with the constructionist framework, which emphasizes that a domain-general network approach is powerful because the interplay of networks can account for the wide variety of emotions that people experience in real life. It is worth noting that, in line with Kober and colleagues’ meta-analytic findings (Kober et al., 2008), our whole-brain GLM findings indicate that the intensity of emotional experience is also consistently related to other domain-general functions, namely the activation of neural circuits involved in perceptual processing.

Our findings show a strikingly consistent relation between subjective emotional experience and interactions of the domain-general salience and amygdala-based networks (i.e., the association between these measures replicated in up to six different instances of emotion). We found that different emotional experiences are often grounded in different distributed patterns, and that each distributed pattern reflects the interaction of domain-general processes. Much emphasis has been placed on identifying the neural patterns that distinguish one emotion category from another (e.g., Saarimäki et al., 2015; Tettamanti et al., 2012; Wager et al., 2015), but the domain-general emphasis specified in psychological construction suggests the potential for much more variability in emotional life (both within and between emotion categories; see, e.g., Wilson-Mendenhall et al., 2015; Wilson-Mendenhall et al., 2011). Because our findings are based on a composite dataset in which emotion category was a between-subjects manipulation, this study was not optimal for examining differences in the variety of emotional experiences. But our findings do provide the groundwork for expanding a dynamic, domain-general framework for understanding the many emotions that people experience.

Despite the fact that dorsal salience–ventrolateral amygdala network connectivity consistently predicted the intensity of emotional experience across five of our six films, the significance of the relationship did not hold for the X-Files clip. The ventrolateral amygdala network, which includes association areas in the superior temporal sulcus and orbitofrontal regions, is specifically implicated in the perception of social cues (see Bickart et al., 2012). Thus, the lack of a relation here may be linked to the relative lack of visual social cues in this clip. Whereas in the other cinematic clips social cues such as direction of gaze, facial gestures, and hand action tended to occur more frequently in highly emotional moments, in the X-Files clip, the drama intensified in a night scene in almost complete darkness. Future studies should examine whether the intensity of social cues modulates the connectivity between the ventrolateral amygdala and dorsal salience networks.

One limitation of our study was that we did not directly examine whether the salience and amygdala-based networks are domain-general. However, it is not unreasonable to assume so, on the basis of both the structural and functional evidence published in other studies. For example, major nodes in these networks are members of the brain’s “rich club” (van den Heuvel & Sporns, 2013), which consists of the most densely interconnected regions of the human brain (van den Heuvel & Sporns, 2011). Activation foci in these hubs within the salience network are commonly found in neuroimaging studies using tasks from a variety of psychological domains, spanning emotion, cognition, perception, and action (see Fig. 2 in Clark-Polner, Wager, Satpute, & Barrett, in press, and Fig. 1 in Nelson et al., 2010; see also Yeo et al., 2016). The medial amygdala network overlaps extensively with the default mode network, which also contains rich-club hubs and is widely appreciated as a domain-general network (e.g., Andrews-Hanna et al., 2010; Barrett & Satpute, 2013; Buckner et al., 2008; Lindquist & Barrett, 2012; Yeo et al., 2016). With respect to emotional experience, the default mode network is routinely engaged across all categories of emotional experience (Lindquist et al., 2012; Wager et al., 2015), during typical and atypical instances of emotion (Wilson-Mendenhall, Barrett, & Barsalou, 2013), and in the representation of emotion concepts (Skerry & Saxe, 2015; Wilson-Mendenhall et al., 2011), as well as of concepts more generally (Binder et al., 2009).

Consistent with calls for future work to move beyond the handful of categories typically studied in the emotion literature, the richness and granularity of emotional experiences during each film clip (as evidenced by the emotion-tagging procedure) point to the limitations of studies in which single emotion labels reductively designate complex emotional experiences. This caveat also applies to the moment-to-moment rating procedure adopted in our study, which was limited to single target emotions. It is possible that people were experiencing several emotions intensely during Sophie’s Choice (because several labels in addition to sadness, fear, and anger were rated relatively high; Table 3, Fig. 3). However, our results indicate that people tended to experience a dominant emotion in the other clips used in the study (see Fig. 3). In the future, it will be important to explore the technical feasibility and validity of multi-emotion continuous rating, as well as the use of dimension-reduction methods (such as factor analysis and principal component analysis) in this context.

Notes

Author note

We thank Alaina Baker for her assistance with editing the manuscript. This work was supported by a National Institutes of Health Director’s Pioneer Award (DP1OD003312) to L.F.B., a National Institute on Aging grant (R01 AG030311-06A1) to L.F.B., and Shared Instrumentation grants (1S10RR023401, 1S10RR019307, and 1S10RR023043) from the National Center for Research Resources. This study was also supported by a Dan David Scholarship to G.R., and by grants from the University of Chicago’s Arete Initiative “The Science of Virtues,” to T.H., G.R., and G.G.; the Israeli Defense Forces Medical Corps, to T.H. and R.A.; the U.S Department of Defense (W81XWH-11-2-0008) to T.H.; and BRAINTRAIN under the EU FP7 Health Cooperation Work Program (602186), to T.H. and G.R. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources, the National Institutes of Health, or the National Institute on Aging. G.R., A.T., C.W.-M., S.A., T.H., and L.F.B. designed the study. G.R., A.T., T.H., and L.F.B. wrote the manuscript. G.R., A.T., T.H., L.F.B., G.G., C.W.-M., T.L., T.G., Y.J., S.A., R.A., and A.M.-K. analyzed the data. L.F.B. and T.H. contributed to the grant funding. The authors declare no conflicts of interest.

Supplementary material

ESM 1 (DOCX 2583 kb)

References

  1. Anderson, M. L. (2015). Précis of after phrenology: Neural reuse and the interactive brain. Behavioral and Brain Sciences, 16, 1–22.Google Scholar
  2. Andrews-Hanna, J. R., Reidler, J. S., Huang, C., & Buckner, R. L. (2010). Evidence for the default network’s role in spontaneous cognition. Journal of Neurophysiology, 104, 322–335.CrossRefPubMedPubMedCentralGoogle Scholar
  3. Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1, 28–58. doi:10.1111/j.1745-6916.2006.00003.x CrossRefPubMedGoogle Scholar
  4. Barrett, L. F. (2012). Emotions are real. Emotion, 12, 413–429.CrossRefPubMedGoogle Scholar
  5. Barrett, L. F. (2013). Psychological construction: A Darwinian approach to the science of emotion. Emotion Review, 5.Google Scholar
  6. Barrett, L. F., & Satpute, A. B. (2013). Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain. Current Opinion in Neurobiology, 23, 361–372.CrossRefPubMedPubMedCentralGoogle Scholar
  7. Barrett, L. F., & Simmons, W. K. (2015). Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16, 419–429.CrossRefPubMedPubMedCentralGoogle Scholar
  8. Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29, 1165–1188.CrossRefGoogle Scholar
  9. Bickart, K. C., Hollenbeck, M. C., Barrett, L. F., & Dickerson, B. C. (2012). Intrinsic amygdala–cortical functional connectivity predicts social network size in humans. Journal of Neuroscience, 32, 14729–14741.CrossRefPubMedPubMedCentralGoogle Scholar
  10. Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.CrossRefPubMedPubMedCentralGoogle Scholar
  11. Buckner, R. L., Andrews-Hanna, J. R., & Schacter, D. L. (2008). The brain’s default network. Annals of the New York Academy of Sciences, 1124, 1–38.CrossRefPubMedGoogle Scholar
  12. Bzdok, D., Schilbach, L., Vogeley, K., Schneider, K., Laird, A. R., Langner, R., & Eickhoff, S. B. (2012). Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure and Function, 217, 783–796.CrossRefPubMedPubMedCentralGoogle Scholar
  13. Chanes, L., & Barrett, L. F. (2016). Redefining the role of limbic areas in cortical processing. Trends in Cognitive Sciences, 20, 96–106. doi:10.1016/j.tics.2015.11.005 CrossRefPubMedGoogle Scholar
  14. Clark-Polner, E., Wager, T. D., Satpute, A. B., & Barrett, L. F. (in press). Neural fingerprinting: Meta-analysis, variation, and the search for brain-based essences in the science of emotion. In L. F. Barrett, M. Lewis, & J. M. Haviland-Jones (Eds.), The handbook of emotion (4th ed.). New York, NY: Guilford.Google Scholar
  15. Columbus, C. (1998). Stepmom. USA.Google Scholar
  16. Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58, 306–324.CrossRefPubMedPubMedCentralGoogle Scholar
  17. Craig, A. D. (2011). Significance of the insula for the evolution of human awareness of feelings from the body. Annals of the New York Academy of Sciences, 1225, 72–82. doi:10.1111/j.1749-6632.2011.05990.x CrossRefPubMedGoogle Scholar
  18. Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Review, 3, 71–100.CrossRefGoogle Scholar
  19. Ekman, P. (1999). Facial expressions. In T. Dalgleish & M. J. Power (Eds.), Handbook of cognition and emotion (pp. 301–320). New York, NY: Wiley.Google Scholar
  20. Eryilmaz, H., Van De Ville, D., Schwartz, S., & Vuilleumier, P. (2011). Impact of transient emotions on functional connectivity during subsequent resting state: A wavelet correlation approach. NeuroImage, 54, 2481–2491. doi:10.1016/j.neuroimage.2010.10.021 CrossRefPubMedGoogle Scholar
  21. Fusar-Poli, P., Placentino, A., Carletti, F., Landi, P., Allen, P., Surguladze, S., … Politi, P. (2009). Functional atlas of emotional faces processing: a voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. Journal of Psychiatry and Neuroscience, 34, 418–432.Google Scholar
  22. Goebel, R., Esposito, F., & Formisano, E. (2006). Analysis of functional image analysis contest (FIAC) data with brainvoyager QX: From single-subject to cortically aligned group general linear model analysis and self-organizing group independent component analysis. Human Brain Mapping, 27, 392–401.CrossRefPubMedGoogle Scholar
  23. Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition and Emotion, 9, 87–108.CrossRefGoogle Scholar
  24. Harman, H. H. (1976). Modern factor analysis. Chicago, IL: University of Chicago Press.Google Scholar
  25. Hayes, D. J., & Northoff, G. (2011). Identifying a network of brain regions involved in aversion-related processing: A cross-species translational investigation. Frontiers in Integrative Neuroscience, 5, 49. doi:10.3389/fnint.2011.00049 CrossRefPubMedPubMedCentralGoogle Scholar
  26. Heller, R., Golland, Y., Malach, R., & Benjamini, Y. (2007). Conjunction group analysis: An alternative to mixed/random effect analysis. NeuroImage, 37, 1178–1185.CrossRefPubMedGoogle Scholar
  27. Hermans, E. J., van Marle, H. J. F., Ossewaarde, L., Henckens, M. J. A. G., Qin, S., van Kesteren, M. T. R., … Fernandez, G. (2011). Stress-related noradrenergic activity prompts large-scale neural network reconfiguration. Science, 334, 1151.Google Scholar
  28. Kober, H., Barrett, L. F., Joseph, J., Bliss-Moreau, E., Lindquist, K., & Wager, T. D. (2008). Functional grouping and cortical–subcortical interactions in emotion: A meta-analysis of neuroimaging studies. NeuroImage, 42, 998–1031. doi:10.1016/j.neuroimage.2008.03.059 CrossRefPubMedPubMedCentralGoogle Scholar
  29. Lewis, M. D. (2005). Bridging emotion theory and neurobiology through dynamic systems modeling. Behavioral and Brain Sciences, 28, 169–194.PubMedGoogle Scholar
  30. Lindquist, K. A., & Barrett, L. F. (2012). A functional architecture of the human brain: Emerging insights from the science of emotion. Trends in Cognitive Sciences, 16, 533–540.CrossRefPubMedPubMedCentralGoogle Scholar
  31. Lindquist, K. A., Wager, T. D., Kober, H., Bliss-Moreau, E., & Barrett, L. F. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35, 121–143. doi:10.1017/S0140525X11000446 CrossRefPubMedPubMedCentralGoogle Scholar
  32. Manners, K. (1996). Home. Season 2, episode 4 in The X-Files. USA.Google Scholar
  33. Menon, V., & Uddin, L. Q. (2010). Saliency, switching, attention and control: A network model of insula function. Brain Structure and Function, 214, 655–667.CrossRefPubMedPubMedCentralGoogle Scholar
  34. Mograbi, A. (2005). Avenge but one of my two eyes. Israel.Google Scholar
  35. Nakata, H. (2005). The ring two. USA.Google Scholar
  36. Nelson, S., Dosenbach, N., Cohen, A., Wheeler, M., Schlaggar, B., & Petersen, S. (2010). Role of the anterior insula in task-level control and focal attention. Brain Structure and Function, 214, 669–680.
  37. Ongur, D., Ferry, A. T., & Price, J. L. (2003). Architectonic subdivision of the human orbital and medial prefrontal cortex. Journal of Comparative Neurology, 460, 425.
  38. Pakula, A. J. (1983). Sophie’s choice. USA.
  39. Panksepp, J. (2004). Affective neuroscience: The foundations of human and animal emotions. New York, NY: Oxford University Press.
  40. Raz, G., Jacob, Y., Gonen, T., Winetraub, Y., Flash, T., Soreq, E., & Hendler, T. (2014). Cry for her or cry with her: Context-dependent dissociation of two modes of cinematic empathy reflected in network cohesion dynamics. Social Cognitive and Affective Neuroscience, 9, 30–38. doi:10.1093/scan/nst052
  41. Raz, G., Winetraub, Y., Jacob, Y., Kinreich, S., Maron-Katz, A., Shaham, G., … Hendler, T. (2012). Portraying emotions at their unfolding: A multilayered approach for probing dynamics of neural networks. NeuroImage, 60, 1448–1461.
  42. Robinson, M. D., & Clore, G. L. (2002). Belief and feeling: Evidence for an accessibility model of emotional self-report. Psychological Bulletin, 128, 934.
  43. Rottenberg, J., Ray, R. R., & Gross, J. J. (2007). Emotion elicitation using films. In J. A. Coan & J. J. B. Allen (Eds.), The handbook of emotion elicitation and assessment (pp. 9–28). Oxford, UK: Oxford University Press.
  44. Roy, M., Shohamy, D., & Wager, T. D. (2012). Ventromedial prefrontal–subcortical systems and the generation of affective meaning. Trends in Cognitive Sciences, 16, 147–156.
  45. Saarimäki, H., Gotsopoulos, A., Jääskeläinen, I. P., Lampinen, J., Vuilleumier, P., Hari, R., … Nummenmaa, L. (2015). Discrete neural signatures of basic emotions. Cerebral Cortex. Advance online publication. doi:10.1093/cercor/bhv086
  46. Schaefer, A., Nils, F., Sanchez, X., & Philippot, P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition and Emotion, 24, 1153–1172.
  47. Scherer, K. R. (2009). The dynamic architecture of emotion: Evidence for the component process model. Cognition and Emotion, 23, 1307–1351.
  48. Seeley, W. W., Menon, V., Schatzberg, A. F., Keller, J., Glover, G. H., Kenna, H., … Greicius, M. D. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. Journal of Neuroscience, 27, 2349–2356. doi:10.1523/JNEUROSCI.5587-06.2007
  49. Shaver, P., Schwartz, J., Kirson, D., & O’Connor, C. (1987). Emotion knowledge: Further exploration of a prototype approach. Journal of Personality and Social Psychology, 52, 1061–1086.
  50. Skerry, A. E., & Saxe, R. (2015). Neural representations of emotion are organized around abstract event features. Current Biology, 25, 1945–1954.
  51. Sterling, P. (2012). Allostasis: A model of predictive regulation. Physiology and Behavior, 106, 5–15. doi:10.1016/j.physbeh.2011.06.004
  52. Tettamanti, M., Rognoni, E., Cafiero, R., Costa, T., Galati, D., & Perani, D. (2012). Distinct pathways of neural coupling for different basic emotions. NeuroImage, 59, 1804–1817. doi:10.1016/j.neuroimage.2011.08.018
  53. Touroutoglou, A., Bickart, K. C., Barrett, L. F., & Dickerson, B. C. (2014). Amygdala task-evoked activity and task-free connectivity independently contribute to feelings of arousal. Human Brain Mapping, 35, 5316–5327.
  54. Touroutoglou, A., Hollenbeck, M., Dickerson, B. C., & Barrett, L. F. (2012). Dissociable large-scale networks anchored in the right anterior insula subserve affective experience and attention. NeuroImage, 60, 1947–1958.
  55. Touroutoglou, A., Lindquist, K. A., Dickerson, B. C., & Barrett, L. F. (2015). Intrinsic connectivity in the human brain does not reveal networks for “basic” emotions. Social Cognitive and Affective Neuroscience, 10, 1257–1265. doi:10.1093/scan/nsv013
  56. van den Heuvel, M. P., & Sporns, O. (2011). Rich-club organization of the human connectome. Journal of Neuroscience, 31, 15775–15786.
  57. van den Heuvel, M. P., & Sporns, O. (2013). Network hubs in the human brain. Trends in Cognitive Sciences, 17, 683–696. doi:10.1016/j.tics.2013.09.012
  58. Vogt, B. A. (2005). Pain and emotion interactions in subregions of the cingulate gyrus. Nature Reviews Neuroscience, 6, 533–544.
  59. Vytal, K., & Hamann, S. (2010). Neuroimaging support for discrete neural correlates of basic emotions: A voxel-based meta-analysis. Journal of Cognitive Neuroscience, 22, 2864–2885.
  60. Wager, T. D., Kang, J., Johnson, T. D., Nichols, T. E., Satpute, A. B., & Barrett, L. F. (2015). A Bayesian model of category-specific emotional brain responses. PLoS Computational Biology, 11, e1004066.
  61. Wilson-Mendenhall, C. D., Barrett, L. F., Simmons, W. K., & Barsalou, L. W. (2011). Grounding emotion in situated conceptualization. Neuropsychologia, 49, 1105–1127. doi:10.1016/j.neuropsychologia.2010.12.032
  62. Wilson-Mendenhall, C. D., Barrett, L. F., & Barsalou, L. W. (2013). Neural evidence that human emotions share core affective properties. Psychological Science, 24, 947–956.
  63. Wilson-Mendenhall, C. D., Barrett, L. F., & Barsalou, L. W. (2015). Variety in emotional life: Within-category typicality of emotional experiences is associated with neural activity in large-scale brain networks. Social Cognitive and Affective Neuroscience, 10, 62–71. doi:10.1093/scan/nsu037
  64. Yeo, B. T., Krienen, F. M., Eickhoff, S. B., Yaakub, S. N., Fox, P. T., Buckner, R. L., … Chee, M. W. (2016). Functional specialization and flexibility in human association cortex. Cerebral Cortex, 26, 465. doi:10.1093/cercor/bhv260

Copyright information

© Psychonomic Society, Inc. 2016

Authors and Affiliations

  • Gal Raz (1, 2, 3)
  • Alexandra Touroutoglou (4, 5)
  • Christine Wilson-Mendenhall (1, 5, 6)
  • Gadi Gilam (1, 7)
  • Tamar Lin (1, 7)
  • Tal Gonen (1, 7)
  • Yael Jacob (1, 8)
  • Shir Atzil (5, 6)
  • Roee Admon (1, 9)
  • Maya Bleich-Cohen (1)
  • Adi Maron-Katz (1, 2)
  • Talma Hendler (1, 2, 7)
  • Lisa Feldman Barrett (5, 6, 10)
  1. The Tel Aviv Center for Brain Functions, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
  2. Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
  3. Steve Tisch School of Film and Television, Tel Aviv University, Tel Aviv, Israel
  4. Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, USA
  5. MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, USA
  6. Department of Psychology, Northeastern University, Boston, USA
  7. School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
  8. Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
  9. Center for Depression, Anxiety and Stress Research, McLean Hospital, Harvard Medical School, Belmont, USA
  10. Psychiatric Neuroimaging Division, Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Charlestown, USA