The role of anterior insular cortex in social emotions
Cite this article as: Lamm, C. & Singer, T. Brain Struct Funct (2010) 214: 579. doi:10.1007/s00429-010-0251-3
Functional neuroimaging investigations in the fields of social neuroscience and neuroeconomics indicate that the anterior insular cortex (AI) is consistently involved in empathy, compassion, and interpersonal phenomena such as fairness and cooperation. These findings suggest that AI plays an important role in social emotions, here defined as affective states that arise when we interact with other people and that depend on the social context. After we link the role of AI in social emotions to interoceptive awareness and the representation of current global emotional states, we will present a model suggesting that AI is not only involved in representing current states, but also in predicting emotional states relevant to the self and others. This model also proposes that AI enables us to learn about emotional states as well as about the uncertainty attached to events, and implies that AI plays a dominant role in decision making in complex and uncertain environments. Our review further highlights that dorsal and ventro-central, as well as anterior and posterior subdivisions of AI potentially subserve different functions and guide different aspects of behavioral regulation. We conclude with a section summarizing different routes to understanding other people’s actions, feelings and thoughts, emphasizing the notion that the predominant role of AI involves understanding others’ feeling and bodily states rather than their action intentions or abstract beliefs.
Anterior insula and social emotions: an overview
The aim of this paper is to highlight the role of the anterior insular cortex (AI) in social emotions, here defined as affective states that are not only related to the self, but depend on the social context and arise when we interact with other people. Numerous functional neuroimaging and neuropsychological investigations suggest that AI plays a prominent role in emotional processing. For example, a recent meta-analysis of 162 functional neuroimaging studies of emotion shows that dorsal and ventral subdivisions of the AI are—along with the amygdala and the ventral striatum—among the most consistently activated regions in studies of emotion (Kober et al. 2008). Consistent emotion-related activation, though to a lesser degree than for the anterior parts, was also reported for middle insula/posterior AI. The majority of studies integrated in this meta-analysis investigated basic emotions, such as anger, sadness, or disgust, but not social emotions. However, recent investigations in the fields of social neuroscience and neuroeconomics indicate that AI is of similar importance for emotions that are relevant during social interaction. Given that most of these findings originate from research on empathy and vicarious emotions, we will first review this evidence and then examine AI involvement in compassion and other social emotions. At the end of this section, we will discuss recent results from neuroeconomics, which uses paradigms from behavioral economics to unveil the neural correlates of social phenomena such as cooperation and fairness.
AI and the experience of empathy
In the last few years, the field of social neuroscience has rapidly advanced its understanding of the neural mechanisms underlying empathy for others, that is, our ability to share feelings with other people [for a more detailed definition of terms, see de Vignemont and Singer (2006), Decety and Lamm (2006), Singer and Lamm (2009)]. The study of empathy-related brain responses in the domain of pain quickly emerged as a dominant experimental paradigm because the neural bases of the direct experience of transient pain are fairly well-understood, both in terms of the neural circuits involved and in terms of how activation of these circuits can be detected in vivo in humans using functional neuroimaging techniques (Singer and Lamm 2009). The so-called pain matrix (i.e., the network of brain areas responsive to pain experienced in oneself) consists of a variety of cortical and subcortical networks and structures coding the various concomitants of nociception (Apkarian et al. 2005; Derbyshire 2000; Peyron et al. 2000). The pain matrix can be subdivided into areas coding for the sensory-discriminative component of the pain experience and other areas coding for the motivational-affective components of pain. While the former predominantly involve primary and secondary somatosensory cortices and dorsal posterior insula, the latter mainly consist of AI and dorsal anterior cingulate cortex (dACC). Based on the results of direct intracerebral stimulation experiments, for example, it has been proposed that the functions of AI and dACC are not specifically related to nociception, but reflect rather general-purpose visceromotor and viscero-sensitive mechanisms associated with affective experiences (Ostrowsky et al. 2000, 2002).
In contrast, the fact that stimulation of dorsal posterior insula triggered painful sensations supports the existence of a posterior–anterior gradient in insular cortex in which primary nociceptive information is processed in the posterior insula and re-mapped to the anterior insula to form integrated affective feeling states (see also below and Craig 2003a, b, 2009, 2010).
The most consistent finding from empathy-for-pain studies is that observing pain in others activates parts of the affective-motivational neural network that is also activated when we experience pain in ourselves. For example, Singer et al. (2004b) recruited couples to measure neural responses related to direct and vicarious painful experiences. In one condition, the female partner who was lying in the scanner received painful electric stimulation. In another condition, the same stimuli were delivered to the male partner who was seated next to the MRI scanner and whose hand could be seen via a mirror system. Comparing brain activations when participants experienced pain themselves with activations during empathizing with their partners while they were feeling pain revealed overlapping neural activity in bilateral middle to anterior insula, the dACC, brainstem, and the cerebellum. Thus, both the first-hand experience of pain and the knowledge that a beloved person is experiencing pain activated the affective component of the pain matrix, which suggests that representations of participants’ own affective states were engaged when empathizing with the negative affect of their suffering partners.
The observation of overlapping brain areas involved in nociceptive processing for self and others has resulted in the so-called “shared networks” account of empathy and affective sharing. The foundation of this account lies in Simulation Theory as developed in cognitive science and philosophy of mind (Gallese 2003; Gallese and Goldman 1998; Goldman 2006), which proposes that we understand other people’s minds by using our own mental states to simulate how we might feel or what we might think in a given situation, and to infer from this what the other person may actually feel or think. Accordingly, the basic assumption of the shared networks account is that perceiving or imagining someone else’s state activates neural representations coding this state when we experience it ourselves.
Activation in AI during affective sharing is not confined to pain, but has also been observed for other negative affective states, such as disgust (Jabbi et al. 2008; Wicker et al. 2003). Furthermore, shared activation networks have also been identified for positive emotions. For example, Hennenlotter et al. (2005) obtained overlapping activation in left AI during the observation and execution of smiles. In a similar vein, both self-related positive affect and the observation of pleasant affect resulting from food intake or listening to an amusing story activated similar regions in AI (Hennenlotter et al. 2005; Jabbi et al. 2007; van der Gaag et al. 2007). These findings speak against the widespread assumption that AI reflects predominantly negatively valenced feeling states (see also below).
Finally, it is important to stress that a number of non-overlapping activations exist between self- and other-related experiences. In addition to the posterior-to-anterior gradient within the insular cortex mentioned above, activation during empathy is restricted to subdivisions in the cingulate cortex associated with affective-motivational functions, whereas directly experiencing pain activates a much larger portion of the cingulate cortex, including areas that are explicitly related to action control (see also Jackson et al. 2006). Similar findings are reported for executing facial expressions of emotion and observing them in others, and by studies investigating the functional connectivity between shared and non-shared areas (Hennenlotter et al. 2005; Jabbi et al. 2008; Zaki et al. 2007). These findings are of particular importance for “simulationist” accounts of intersubjectivity. While shared networks certainly constitute an important mechanism to understand intersubjectivity, differences in neural responses during self- and other-related experiences might be just as crucial because they allow us to distinguish between these qualitatively very distinct experiences.
Compassion, admiration, and love
Several studies indicate that AI is also recruited during positive, approach-related emotions resulting from the interaction with others, such as compassion, admiration, and the experience of romantic or maternal love. For example, a recent functional magnetic resonance imaging (fMRI) study reported consistent engagement of brain regions such as bilateral AI, dACC, hypothalamus, and mesencephalon during compassion and admiration (Immordino-Yang et al. 2009). This study also demonstrated that the insular responses accompanying admiration for virtue and skills, as well as compassion for social pain, occur significantly later than the very immediate response triggered by the observation of physical pain in others. According to the authors, this suggests that more complex social emotions, because they require more contextual appraisal and rely on socially learned responses, are less efficient in engaging affective responses in interaction partners than more “basic” nociceptive and emotional experiences. In addition, “social” versus immediate responses might be subserved by different mechanisms, show stronger individual differences, and be driven by personal preferences as well as social learning.
The latter aspect is also indicated by neuroscientific investigations of long-term compassion meditators such as Buddhist monks. Their aim is, by means of continuous and long-term mental training using specific meditation techniques, to develop an attitude towards others characterized by compassion and loving kindness. Recently, Lutz et al. (2008) showed that this type of mental training resulted in stronger neural responses in anterior and middle insular cortex in expert meditators, as compared to novice meditators, when they performed a compassion meditation exercise during which meditators were exposed to emotions conveyed in human vocalizations (such as crying or laughter). Notably, while higher activation was observed independently of the valence of the vocalizations, the strongest difference between monks and novices was observed for negative emotions. In addition, heart rate increases were higher in Buddhist monks and showed a stronger coupling with hemodynamic responses in left middle insular cortex (Lutz et al. 2009), indicating not only a cerebral but also a physiological-autonomic correlate of their heightened state of compassion. Interestingly, recent behavioral evidence indicates that different types of meditation can also affect one’s sensitivity to directly experienced pain (Grant and Rainville 2009; Perlman et al. 2010; Zeidan et al. 2009). The neural correlates of this reduced sensitivity were mainly associated with reduced activation in somatosensory cortices in a single-case study (Kakigi et al. 2005) and with structural changes in cortical areas including primary somatosensory cortex, but also AI, as a result of long-term Zen meditation practice (Grant et al. 2010).
The affective and mental states accompanying compassion are best described by profound feelings of care and concern for others and their welfare (Singer and Steinbeis 2009). As such, they are most similar to feelings of maternal and romantic love. Notably, fMRI investigations consistently suggest similar activations in middle insular cortex as well as in AI when participants are exposed to pictures of their beloved partners or children or when asked to generate feelings of unconditional love towards individuals with intellectual disabilities (Bartels and Zeki 2000, 2004; Beauregard et al. 2009; Leibenluft et al. 2004).
Finally, when reviewing the evidence for insular involvement during compassion and love versus during empathy (for both positive and negative emotions), we noticed a tendency of the former to activate the middle subdivision of insular cortex while the latter predominantly recruited more anterior insular subdivisions. A possible explanation of this finding is that compassionate and romantic love might be associated with parasympathetic responses, in contrast to the predominantly sympathetic responses elicited in the empathy studies reviewed above, a possible dissociation that should be addressed by future studies. Note in this respect the proposal (Craig 2005) that autonomic responses are differentially linked to hemispheric asymmetries in insular involvement, and that such insular asymmetry might encode differences in affective valence (see also below; and Craig 2003a, b, 2009; Saper 2002 for the link between central and autonomic nervous system functions). While the left insular cortex is assigned to positive, parasympathetically dominated responses, the right insula is thought to engage predominantly in negative, sympathetically dominated affective processing. Although formal analyses of asymmetry are required to test this hypothesis (see also below), the activation patterns in the studies on compassion and love speak more for bilateral engagement of the insular cortex during these mental and affective states.
Fairness, cooperation, and punishment
Another stream of evidence for the consistent involvement of AI in social emotions stems from studies in the emerging field of neuroeconomics in which interpersonal phenomena such as cooperation and fairness are investigated using paradigms stemming from game theory and behavioral economics. In one study, participants were playing the so-called ultimatum game (Sanfey et al. 2003). In this game, a proposer is given a certain amount of money and asked to split it with the second player, the responder. The responder can either reject the proposed split, in which case neither player wins any money, or accept it, in which case each player keeps the share of money proposed. Behavioral evidence shows that fair proposals (with a 50:50 split being the fairest) have higher chances of being accepted, while unfair proposals are more likely to be rejected. When participants played this game in the fMRI scanner, activation in bilateral AI, dACC, and dorsolateral prefrontal cortex preceded the decision to defect against unfair but not fair players (Sanfey et al. 2003). This pattern of neural activation was specifically related to the social context as activation was significantly higher when participants played against an intentional agent as compared to a computer “proposing” equally low offers. In addition, the response in AI was positively correlated with the amount of unfairness, and heightened activation in AI during the rejection but not during the acceptance of unfair offers indicated a direct link of AI activity to social decision making.
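The rules of the ultimatum game described above can be sketched in a few lines. This is a minimal illustration only: the fixed acceptance threshold is a hypothetical behavioral parameter standing in for the responder's free decision, not part of the design of Sanfey et al. (2003).

```python
def ultimatum_round(total, offer_to_responder, acceptance_threshold=0.3):
    """One round of the ultimatum game.

    The responder is assumed to accept any offer at or above a fixed
    fraction of the total (a simplifying behavioral assumption);
    rejection leaves both players with nothing.
    Returns (proposer_payoff, responder_payoff).
    """
    if offer_to_responder >= acceptance_threshold * total:
        return total - offer_to_responder, offer_to_responder
    return 0, 0  # rejected: neither player wins any money

# A fair 50:50 split of 10 units is accepted...
print(ultimatum_round(10, 5))  # (5, 5)
# ...while a highly unfair offer is rejected.
print(ultimatum_round(10, 1))  # (0, 0)
```

The key property for the neuroimaging findings is that rejection is costly for both players, so rejecting an unfair offer is an economically "irrational" but emotionally driven choice.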
In a related experiment, unreciprocated cooperation, as compared to reciprocated cooperation, exhibited in a prisoner’s dilemma game was associated with greater activity in bilateral AI, left hippocampus, and left lingual gyrus (Rilling et al. 2008). In this game, two players independently chose to either cooperate with each other, or not, and received a payoff that depended upon the combination of their choices. Again, this finding was specific to social interaction as activations were higher than during a gambling control task with no social interaction. Additionally, functional connectivity between bilateral AI and lateral orbitofrontal cortex in response to unreciprocated cooperation predicted defection in the subsequent round, again indicating the link between AI and social decision making. A similar conclusion can be derived from an fMRI study revealing that dysfunctional patterns of dorsal bilateral AI activity in patients with borderline personality disorder were associated with their incapacity to maintain cooperation, in particular to repair broken cooperation (King-Casas et al. 2008).
While mutual cooperation usually results in feelings of trust and friendship, a lack of cooperation results in anger and indignation. Interestingly, similar mechanisms might have been at play in an fMRI study in which participants observed fair or unfair players receiving painful shocks. This study (Singer et al. 2006) showed that empathy-related neural responses in AI and dACC were significantly reduced when participants observed unfair players, but only in the male subsample. In addition, this effect was accompanied by activation in reward-related areas correlated with an expressed desire for revenge. Furthermore, social exclusion reliably activated AI in both adults and adolescents, and activity levels were increased during pharmacologically induced negative mood (Eisenberger et al. 2003, 2009; Masten et al. 2009). Finally, AI activation was not only elicited by unfair or non-cooperative behavior associated with negative emotions such as anger, disappointment, or sadness, but also when participants viewed pictures of people who had proven to be intentionally cooperative during social interaction (Singer et al. 2004a). This, again, points to a crucial role of AI in positive and negative social emotions. Notably, the bilateral engagement of AI in the latter study did not confirm earlier indications of asymmetric responses in AI tracking the trustworthiness of faces, which were evaluated on the basis of photographs and without prior social interaction (Winston et al. 2002).
In sum, the present review suggests an important role of AI in social emotions ranging from empathic experiences in the domains of disgust, pain, and pleasant emotions to compassion, admiration, and fairness. The finding of AI involvement in both negative and positive affective states further poses the important question about how the valence of emotional experiences is encoded in AI (see also Ackermann and Riecker 2010; Garavan 2010, reviewing evidence for AI activity during positive and approach-related affect outside the social domain). One possible answer to this question is that the left and the right AI, respectively, are preferentially encoding positive and negative affect (Craig 2005, 2009), a proposal paralleling similar suggestions based on asymmetries in prefrontal cortex activation (Davidson 2004). At first glance, the results of our meta-analysis of empathy for pain studies suggest functional lateralization (Fig. 1). Activation clusters in the left hemisphere are larger and also encompass more ventral parts of AI, while activation in the right hemisphere seems to be more confined to the dorsal subdivision of AI. However, summary reports from functional neuroimaging might be misleading in evaluating hemispheric asymmetry because they rely on thresholded statistical images. Therefore, we performed a formal test of asymmetry by computing lateralization indices (LIs, Wilke and Lidzba 2007) for the contrast empathy for pain > no pain, in the 168 participants who had been entered into our meta-analysis. The region of interest mask for this analysis encompassed voxels in anterior and middle insular cortex. Contrast images were individually and adaptively thresholded (Wilke and Lidzba 2007) and LIs for both strength and extent of activation within this custom mask were calculated. Neither statistical test revealed a significant difference between activation in left and right AI (P(strength) = 0.09, P(extent) = 0.097). 
Besides failing to confirm the hypothesis of hemispheric asymmetry, our analysis shows that evaluations of asymmetry should not be based on thresholded statistical parametric maps, but that formal tests such as the one we performed are necessary for valid assessments of asymmetry hypotheses.
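The lateralization indices reported above follow the standard form LI = (L - R)/(L + R). The following is a minimal sketch of such a computation, assuming voxel-wise contrast values within the left and right insular masks have already been extracted; the adaptive thresholding procedure of Wilke and Lidzba (2007) is simplified here to a single fixed threshold.

```python
import numpy as np

def lateralization_index(left_vals, right_vals, threshold=0.0):
    """Compute strength- and extent-based lateralization indices.

    LI = (L - R) / (L + R), ranging from -1 (fully right-lateralized)
    to +1 (fully left-lateralized). 'Strength' sums suprathreshold
    contrast values; 'extent' counts suprathreshold voxels.
    """
    left = np.asarray(left_vals, dtype=float)
    right = np.asarray(right_vals, dtype=float)
    l_supra = left[left > threshold]
    r_supra = right[right > threshold]
    li_strength = (l_supra.sum() - r_supra.sum()) / (l_supra.sum() + r_supra.sum())
    li_extent = (l_supra.size - r_supra.size) / (l_supra.size + r_supra.size)
    return li_strength, li_extent

# Identical activation in both hemispheres yields LI = 0 on both measures.
lateralization_index([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

Because the index is computed from all suprathreshold voxel values rather than from a thresholded statistical map, it avoids the threshold-dependence that makes visual comparisons of activation maps unreliable.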
An alternative explanation of how valence is encoded in AI is via the pattern of connectivity with other brain areas coding for positive or negative affect, such as the ventral striatum, orbitofrontal cortex, and the amygdala—which have been linked to reward, positive affect, and approach behavior (Knutson et al. 2001; Knutson and Wimmer 2007), and negative affect and withdrawal, respectively (but see also Cunningham et al. 2008; Sergerie et al. 2008). Alternatively, different neuronal ensembles in AI lying in close proximity to each other might code for negative and positive affect (Matsumoto and Hikosaka 2009), but the coarse resolution of fMRI may not be able to separate them. As yet, we do not have sufficient information about how positive and negative valence is encoded in AI and in other emotion-relevant brain areas such as striatum and amygdala. An answer to this question is of utmost relevance for future neuroscientific models of emotion.
The role of AI in representing bodily and global emotional state(s)
The studies reviewed above indicate that AI plays a central role in emotional processing related to social interactions. However, this role does not seem to be restricted to social emotions. Rather, AI is essentially recruited during most basic and complex emotional processes (Kober et al. 2008; see also Ackermann and Riecker 2010; Craig 2010; Garavan 2010, in this special issue), and these findings need to be integrated into our account of the role and function of AI in social interactions.
What are the putative neuronal computations implemented in this brain structure? Insular cortex is broadly acknowledged as viscerosensory cortex and as such underpins the neural representation of, e.g., body temperature, muscular and visceral sensations, and arousal (Craig 2002, 2003a, b; Critchley 2005; Critchley et al. 2004; Damasio 1994). Based on evidence from neuroanatomical, functional neuroimaging, and electrophysiological studies, Bud Craig (2002, 2009) has developed a detailed account of the role of insular cortex in interoceptive awareness. This account is also presented in this special issue of Brain Structure and Function (Craig 2010). Basically, it outlines a posterior-to-anterior progression of increasingly complex representations of interoceptive signals in the human insula. These signals are integrated and re-represented in AI where they become consciously accessible (enter our awareness) and generate a unified meta-representation of global emotional moments. This meta-representation might explain the activation of AI during most subjective feeling states.
The role of interoceptive awareness in affective processing is in line with “two-stage” theories of emotion. These theories argue that visceral and somatic feelings of the bodily state—such as our stomach cramping up when we have to walk down a deserted and badly lit alley at night—are at the root of emotional experiences (Schachter and Singer 1962). However, the initial bodily response is translated into a full-blown emotional experience only after we have appraised it. In fact, the way in which bodily information is integrated and consciously appraised (“interpreted”) determines whether the perception of one’s stomach tension will result in a fearful experience or a positive thrill: a phenomenon dealt with extensively by appraisal theories of emotion (Scherer et al. 2001) and whose influence might be so powerful that it can even affect physiological functions such as temperature regulation (Moseley et al. 2008).
A model for the role of AI in actual and predictive feeling states related to self and others
In an extension of Bud Craig’s model, Singer et al. (2009) suggested a role of AI beyond representing physiological or emotional information related to the self and to the current state (see also Craig 2009, 2010, concerning the relationship between time, awareness, and insular function). In this model, error-based learning of emotional states is based on current and predictive feeling states, and this learning mechanism has a dual function as it can be applied to self and others (Singer et al. 2009). Crucially, the model incorporates findings from neuroeconomics about insular responses to uncertainty and risk. In the following sections, the central aspects of this model and its relevance for AI involvement in social emotions will be discussed.
Furthermore, the proposed dual function of AI implies that deficits in awareness of one’s own bodily and feeling states should also affect our predictions of how others will feel in a certain situation. Such deficits are a hallmark feature of alexithymia, a subclinical phenomenon marked by difficulties in identifying and describing feelings and in distinguishing feelings from the bodily sensations of emotional arousal. Recent evidence indeed suggests that people with higher levels of alexithymia show lower trait empathy and a lower response in AI when they are asked to feel their own emotions (Silani et al. 2008). Remarkably, this hypoactivation in AI is also observed when highly alexithymic subjects are asked to empathize with others who are experiencing pain (Bird et al. 2010). In a similar vein, it has been shown that patients with fronto-temporal lobe degeneration (Seeley 2010), which usually also includes degeneration of the ventral anterior insula, not only show self-related emotional and social impairments, but also a reduction in trait empathy (Rankin et al. 2005; Sturm et al. 2006). These findings corroborate the assumption that deficits in experiencing or understanding self-related emotional states result in deficits in sharing affective states of other people, and that these functions are associated with representations in AI.
Importantly, accurate simulations and predictions will facilitate appropriate and adaptive self- and other-related behaviors, while incorrect ones result in maladaptive behavior. A recent neurobiologically motivated model of anxiety stresses the relevance of this point. This model suggests that neurons in AI compute an “interoceptive prediction error” which signals a mismatch between anticipated and actually experienced bodily responses to a potentially aversive stimulus (Paulus and Stein 2006). Malfunction of this mechanism might result in anxiety, a hypothesis supported by studies showing increased insular activation during emotion processing in anxiety-prone individuals (Stein et al. 2007; see also Paulus and Stein 2010, in this special issue). Also, when “interoceptive mismatch” was induced experimentally using false feedback of one’s heartbeat, activity in dorsal AI was enhanced and correlated with increased emotional intensity/salience attributed to stimuli previously rated as neutral (Gray et al. 2007). Furthermore, recent evidence suggests an amplification of AI and amygdala responses to aversive pictures when their anticipation involved uncertainty (Sarinopoulos et al. 2010). Jointly regarded, the findings from empathy and anxiety research imply a role of AI beyond representations of current physiological or emotional states and suggest a role in behaviorally relevant computations of predictive states and prediction errors. In addition, a connection between feelings and uncertainty is implicated by recent results in neuroeconomics on risk processing, showing that AI is closely related to monitoring uncertainty and risk, as well as to computing prediction errors (e.g., Preuschoff et al. 2008). These findings will be reviewed in detail elsewhere in this special issue (Bossaerts 2010).
Together with the observations outlined in the preceding paragraphs about representations of current and predictive states, these results were integrated into a unified model of AI function (Singer et al. 2009). According to this model, AI subserves learning about modality-specific feeling states (current and predicted ones, and associated prediction errors), such as learning about pain or touch, as well as learning about uncertainty via a parallel mechanism that involves a corresponding set of representations: actual uncertainty, predictive uncertainty, and uncertainty prediction errors (greater or less uncertainty than predicted). In the case of pain, for example, the latter mechanism provides a measure of how uncertain one is about the upcoming pain stimulus within the current environment. It reflects the variance of the pain (and not its mean), that is, how certain one is about the occurrence and magnitude of the painful stimulus and the resulting feeling state and—after pain delivery—how accurate, for example, one’s prediction about the link between stimulus and feeling state was. The model further predicts that these different streams of information are integrated into a general subjective feeling state within AI which is further modulated by individual preferences such as risk aversion and contextual appraisal. Such mechanisms allow for affective learning and regulation of body homeostasis, and they can guide decision making in complex and uncertain environments, with social environments being a prevalent example (see also Behrens et al. 2008; Kuo et al. 2009 for examples demonstrating that intuitive evaluations of complex multidimensional experiences jointly engage AI and dACC).
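The parallel learning of feeling states and of uncertainty that the model proposes can be sketched as a pair of delta-rule updates. This is a minimal illustration under simplifying assumptions: the Rescorla-Wagner-style learning rate and the use of the squared prediction error as the uncertainty signal are our choices for the sketch, not parameters specified by Singer et al. (2009).

```python
class FeelingStateLearner:
    """Minimal sketch: error-based learning of a predicted feeling state
    and of the uncertainty (expected squared error) attached to it."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha                # learning rate (illustrative)
        self.predicted_state = 0.0        # predicted intensity of the feeling state
        self.predicted_uncertainty = 0.0  # expected squared prediction error

    def update(self, actual_state):
        # Feeling-state prediction error: actual minus predicted intensity.
        state_pe = actual_state - self.predicted_state
        # Uncertainty prediction error: was the outcome more or less
        # surprising (larger or smaller squared error) than expected?
        uncertainty_pe = state_pe ** 2 - self.predicted_uncertainty
        self.predicted_state += self.alpha * state_pe
        self.predicted_uncertainty += self.alpha * uncertainty_pe
        return state_pe, uncertainty_pe

learner = FeelingStateLearner(alpha=0.5)
learner.update(1.0)  # first painful stimulus: large positive prediction errors
```

In this sketch the first prediction tracks the mean intensity of the stimulus (e.g., pain), while the second tracks its variance, mirroring the model's distinction between learning what to expect and learning how certain that expectation is.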
The hypothesis that global feeling states in AI integrate information from contextual appraisal and individual preferences helps us explain a variety of findings demonstrating the strong malleability of empathic responses (see also the review by Hein and Singer 2008). Modulation of empathic responses in AI has been observed for a variety of situations. For example and as mentioned above, neural responses in AI are strongly reduced when male participants observe people suffering pain who played unfairly in previous economic exchange games, but not when they observe fair players suffering pain (Singer et al. 2006). Furthermore, activity in AI and dACC was significantly modulated when participants observed the painful stimulation of patients who responded to pain and touch differently than the participants themselves (Lamm et al. 2010). The latter study also indicated that cortical networks involved in cognitive control and response inhibition played an important role in the appraisal-based modulation of neural activity. Moreover, recent investigations suggest that behavioral, neural, and autonomic responses are altered when the people in pain are members of another race (Xu et al. 2009), when they are socially stigmatized (Decety et al. 2010), or when appraising their pain results in heightened personal distress (Lamm et al. 2007a, 2008). Most notably, a recent study indicated that, after participants observe ingroup or outgroup members receiving painful electric shocks, the response in AI predicts whether they will help those people in a later stage of the experiment (Hein et al. 2010). Finally, the above-mentioned findings of a strong association between empathic brain responses in AI and empathy trait measures as well as alexithymia measures suggest that empathic brain responses are not only modulated by participants’ contextual appraisal of the situation and attitudes, but also by their personality characteristics (see also de Vignemont and Singer 2006).
In sum, the integration of these findings into a unifying model of AI function suggests that AI subserves learning both about feeling states and about uncertainty, and that it integrates this information into multi-modally integrated global emotional states, which in turn guide adaptive decision making and homeostatic regulation with respect to self- and other-related affective states. However, to date, there is no direct test of the validity of such a unifying model (see also Singer et al. 2009, for future experiments). Future investigations combining research on predictions, uncertainty, and the processing of direct versus vicarious experiences are therefore required to establish its validity, and we hope the present review can provide an impetus in this direction.
Segregation of AI: different functions in ventral and dorsal AI?
In the previous sections, we described a posterior–anterior gradient in insular cortex, with more posterior parts processing primary nociceptive representations and more anterior parts processing higher-order integrated feeling states. However, we have so far referred to AI in a rather unspecific manner, without taking its subdivisions into account. Of particular importance, investigations of insular cytoarchitectonics suggest that AI can be divided into a dorsal dysgranular and a ventral agranular part. The dysgranular part comprises the superior portion of AI, which is characterized by an incomplete laminar structure with incipient, rudimentary granularity, in contrast to the more ventral agranular part, which lacks the granule (stellate) cells usually found in cortical input layers (Mesulam and Mufson 2004; see also Kurth et al. 2010). These two subdivisions are not only distinct with respect to their cellular structure, but also show differences in functional and structural connectivity. In rhesus monkeys, the dorsal part of AI has dense connections with cortical motor structures, such as lateral and medial prefrontal cortex, and similar connectivity has been suggested for humans (Augustine 1996; Mesulam and Mufson 1985). In contrast, the ventral subdivision is characterized by connectivity with limbic and paralimbic structures, such as the amygdaloid nuclei, orbitofrontal cortex, and hippocampus. Although AI in humans and non-human primates might not be equivalent, recent in vivo evidence combining functional connectivity with probabilistic white matter tracking using diffusion tensor imaging suggests a similar pattern of connectivity in humans (Baliki et al. 2009). Based on differences in hemodynamic activation and functional connectivity, the latter study also indicated a distinction between these areas with respect to encoding nociception versus attention and task control.
Together, these differences in structural and functional neuroanatomy suggest that dorsal and ventral AI subserve different yet closely related functions relevant for homeostatic regulation and adaptive behavior. We speculate that the ventral subdivision may be predominantly engaged in internal and bodily homeostatic regulation, whereas the dorsal subdivision may be engaged more in executive mechanisms related to adaptive behavior, including the “translation” of emotional states into action tendencies (see also Wager and Feldman Barrett 2004). In line with such a view are recent meta-analytic findings indicating specific activation of dorsal AI in executive control tasks (Nee et al. 2007; Wager et al. 2004; Wager and Smith 2003; see also Nelson et al. 2010). Future studies should therefore focus on a further partitioning of insular cortex in general, and AI in particular, with respect to their functions in affective learning, homeostatic regulation, and adaptive behavior. Such accounts would also have to take into consideration that only the ventral bank of AI seems to contain von Economo neurons (Allman 2010). These cells have so far been observed in only a few species, including humans, great apes, elephants, and dolphins (Butti et al. 2009; Hakeem et al. 2009). This observation suggests that some functions subserved by AI may have emerged only late in evolution and may be specific to those mammalian species for which social interaction and collaboration are particularly important. Such a view would be in line with the crucial role of AI in the processing of more complex social emotions such as empathy, compassion, and fairness, as highlighted in the present paper.
Neural routes to understanding the minds of others
So far, we have reviewed evidence for an important role of AI in empathy and in understanding other people’s feeling states. However, the insular cortex is by no means the only structure underlying our ability to understand other people’s affective and mental states. Recent research in social neuroscience suggests at least three distinct routes to understanding other people’s thoughts, actions, and emotions (Blair 2005; de Vignemont and Singer 2006; Decety and Lamm 2006; Keysers and Gazzola 2007; Preston and de Waal 2002; Singer 2006). The first route is the one reviewed in this article, which allows access to other people’s feelings. The second route is related to cognitive inferences about the mental states of other people, referred to as “theory of mind,” “mentalizing,” or “mindreading” (Baron-Cohen 1995; Frith and Frith 2003; Premack and Woodruff 1978). The third route refers to our ability to understand motor actions and action intentions (Gallese 2009; Rizzolatti et al. 1996). Whereas empathy is mostly associated with activation in limbic and paralimbic areas such as insular and anterior cingulate cortex, mentalizing and thinking about others are predominantly accompanied by activation in medial prefrontal regions, the superior temporal sulcus (STS) extending into the parietal lobe (temporo-parietal junction), and sometimes also the temporal poles (Amodio and Frith 2006; Frith and Frith 2006; Mitchell 2009). Finally, action understanding has been closely linked to the discovery of “mirror neurons,” that is, neurons which are active both during the execution of an action and during its observation (Gallese et al. 1996; Grèzes and Decety 2001; Rizzolatti and Craighero 2004; Rizzolatti et al. 1996). In a highly influential paper, Gallese et al. (1996) reported that approximately 17% of the neurons recorded in ventral premotor area F5 of the macaque monkey responded both when the monkey executed a particular movement, for example, grasping, placing, or manipulating an object, and when the monkey observed someone else performing that same movement. Later, neurons with similar visuomotor properties were discovered in the anterior intraparietal area (Fogassi et al. 2005), and recently also in primary motor cortex (Tkach et al. 2007). The primary function of mirror neurons was proposed to be action understanding, although recent accounts question this functionality [see Hickok (2009), for a critical review]. Note also that evidence for the existence of mirror neurons in humans is indirect and principally relies on functional neuroimaging studies demonstrating an overlap in activation between observation and action conditions in regions homologous to areas of the monkey brain in which mirror neurons have been found (see, e.g., Dinstein et al. 2008; Lingnau et al. 2009).
While the enthusiasm about the discovery of mirror neurons in the motor domain initially led to suggestions that mirror neurons lie at the root of intersubjectivity and the ability to understand others in general, recent research in the field of social neuroscience has clarified that mirror neuron systems cannot be the sole mechanism enabling us to form models of other people’s states and, in particular, of their affective states. Prominent examples are consistent results showing that social judgment and empathy tasks do not engage any of the brain areas classically associated with the “motor mirror neuron system” (Mitchell 2009; Mitchell et al. 2006). Furthermore, studies investigating neural responses in situations in which the sensory, motor, and affective consequences of an action are not shared (“mirrored”) between observer and target have shown activations similar to those in cases where action consequences could be “mirrored”. Therefore, at least in those situations, action understanding derived from direct mirror-matching mechanisms seems unlikely (Lamm et al. 2007b, 2010).
Based on such observations and theoretical arguments, it has been proposed that the representations of global emotional states in insular cortex elicited when empathizing with the suffering of others might be flexibly triggered by various interacting mechanisms related to action understanding as well as mentalizing (de Vignemont and Singer 2006; Decety and Lamm 2006; Singer 2006). Evidence for this theoretical stance, in the realm of empathy, has recently been provided by the above-mentioned meta-analysis (Lamm et al. 2010). This analysis revealed that, depending on whether empathy was triggered by visual cues or by abstract symbols, either areas underpinning action understanding (inferior parietal/ventral premotor cortices) or areas associated with the inference of mental states (precuneus, ventral medial prefrontal cortex, superior temporal cortex/temporo-parietal junction) were differentially involved. Notably, however, subdivisions of AI and dACC were activated irrespective of the task context, suggesting that the functions implemented in these areas might be accessed and utilized by different neurofunctional pathways. In a similar vein, Jabbi and Keysers (2008) explored the link between emotion recognition from facial expressions and activity in mirror-neuron-related areas (inferior frontal gyrus, IFG; inferior parietal lobule, IPL) as well as in emotion-related areas (insula). Based on Granger causality analyses of effective connectivity, their data suggest that IFG and IPL are relevant when observing facial muscle movements, whereas the insula comes into play when emotional quality must be inferred from facial expressions.
In sum, these lines of research suggest that the generation of predictive models of other people’s actions, feeling states, and beliefs involves different neural routes, and that the degree of interaction between these routes is determined by the nature of the task and the information available in the specific situation. The present review has, however, pointed out the importance of AI in generating and predicting self- and other-related feeling states. Our review also outlines several open questions, such as the need to distinguish between neural functions in dorsal and ventral, anterior and middle, and left and right insular cortex, to name only a few. These questions need to be addressed by future research, and we are optimistic that this will ultimately lead to a more in-depth understanding of this fascinating, multifaceted neural structure and its role in affective processes of both private and social provenience.
We gratefully acknowledge support from the University of Zurich (Research Priority Program on the Foundations of Human Social Behavior) and the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013/ERC Grant Agreement No. 205557 [EMPATHICBRAIN]).