This study tested the functional involvement of the angular gyrus (AG) in semantic cognition, focusing on three key issues: (1) response polarity (activation vs. deactivation) and its relation to task difficulty, (2) lateralization (left vs. right AG), and (3) functional–anatomical subdivision (PGa vs. PGp). To this end, we combined and re-analyzed the fMRI data of five studies on semantic processing from our laboratory. For each study, we extracted the response profiles from the same anatomical regions-of-interest (ROIs) for left and right PGa and PGp.
We found that the AG was consistently deactivated during non-semantic conditions, as compared to the resting baseline. In contrast, response polarity was inconsistent during semantic conditions, involving both deactivation and activation in different studies, conditions and AG-ROIs. However, we consistently found relative response differences between semantic and non-semantic conditions, as well as between different semantic conditions. These effects were always observed in left AG, and often but not always in right AG.
A combined linear-mixed-model analysis across all studies revealed that these response profiles could be best explained by both task difficulty (as measured by response times; RT) and semantic processing demand (Fig. 4). AG activity decreased with increasing task difficulty and was relatively higher for semantic than non-semantic conditions. Difficulty effects were stronger in PGa than PGp, irrespective of hemisphere. Semantic effects were stronger in left than right AG, regardless of subregion.
Theories of AG function
Our findings support the view that the AG is engaged in semantic processing (Binder and Desai 2011; Seghier 2013; Kuhnke et al. 2020b), while they oppose the view that the AG is exclusively a domain-general region showing task-difficulty-related deactivation (Lambon Ralph et al. 2016; Humphreys et al. 2021). The domain-general view would have predicted that the AG is consistently deactivated during both semantic and non-semantic conditions, and that any response differences between semantic and non-semantic conditions can be completely explained by task difficulty differences. In contrast, the semantics view would have predicted that AG responses cannot be explained by task difficulty alone, but that it is crucial to consider semantic processing demand (i.e., whether the task involves semantic processing).
In line with the domain-general view, we found consistent deactivation of the AG during non-semantic conditions. Moreover, AG activity was related to task difficulty in both semantic and non-semantic conditions. However, response polarity during semantic conditions was inconsistent and involved positive activations (e.g., in studies A and C). Crucially, contrary to the domain-general view and in support of the semantics view, AG activity levels could not be explained by task difficulty alone, but semantic processing demand proved essential: Models based on task difficulty alone were substantially outperformed by models that also included semantic processing demand. The optimal model included two-way interactions of ROI × RT and ROI × Semantics, but no RT × Semantics interaction, indicating that effects of RT and semantics on activity levels in each AG-ROI were independent. These results strongly support the view that the AG is engaged in semantic processing.
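The logic of this model comparison can be illustrated with a simplified sketch. The code below simulates activity values with a negative difficulty (RT) effect and a positive semantic effect, then compares an RT-only model against a model that also includes semantics using AIC. For simplicity it uses ordinary least squares rather than the full linear mixed model, and all variable names and values are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated predictors (hypothetical values)
rt = rng.normal(1.0, 0.2, n)        # response time (s): task difficulty
semantic = rng.integers(0, 2, n)    # 1 = semantic, 0 = non-semantic condition

# Simulated AG activity: decreases with difficulty, higher for semantic conditions
activity = -0.8 * rt + 0.5 * semantic + rng.normal(0, 0.3, n)

def fit_aic(X, y):
    """Least-squares fit; AIC = n*log(RSS/n) + 2k under Gaussian errors."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    k = X.shape[1] + 1              # coefficients + error variance
    return len(y) * np.log(float(rss[0]) / len(y)) + 2 * k

ones = np.ones(n)
aic_rt_only = fit_aic(np.column_stack([ones, rt]), activity)
aic_full = fit_aic(np.column_stack([ones, rt, semantic]), activity)

print(aic_full < aic_rt_only)       # the model including semantics fits better
```

Because AIC penalizes the extra parameter, the full model wins only if semantics explains enough additional variance beyond RT, which mirrors the comparison reported above.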
Response polarity and semantic processing in the AG
The inconsistency of AG response polarity during semantic conditions suggests that responses vs. rest are unreliable evidence to assess the AG’s role in semantic processing, contrary to the arguments in some previous work (Humphreys et al. 2015, 2021). More generally, comparisons against rest are problematic as the resting baseline itself is not process-neutral (Stark and Squire 2001; Morcom and Fletcher 2007). “Resting” can involve mind wandering, autobiographical memory, as well as self-referential and introspective processes (Andrews-Hanna 2012). It is particularly problematic that all these processes may involve the retrieval of semantic information (Binder et al. 1999, 2009). Thus, the AG might be “deactivated” during attention-demanding tasks as the semantic processing that occurs during rest is interrupted (Seghier 2013). In other words, AG deactivation may indeed reflect its involvement in semantic processing. Under this view, it is not surprising that AG activity in our semantic conditions often did not significantly differ from the resting baseline. A semantic region would only be predicted to show positive activation (above rest) when the task involves semantic processing to a greater extent than during the resting state. Indeed, we found positive activation for some semantic conditions (e.g., in studies A and C).
Compared to absolute responses vs. rest, relative responses between different experimental conditions were much more consistent: The AG consistently showed differential activity between semantic and non-semantic conditions, and between different semantic conditions. This strongly suggests that the AG is sensitive to semantics. Moreover, these findings corroborate the view that relative responses between conditions, not absolute responses vs. rest, should constitute the main focus of neuroimaging studies on AG function (Stark and Squire 2001; Finn 2021).
The AG generally showed relatively higher activity for semantic than non-semantic conditions. The only exception was study D (Martin et al. 2021), where a semantic fluency task induced relatively lower activity than a counting task. Several possibilities may explain this result: First, counting involves the production of number words, which are often considered a type of abstract concept (Hauk and Tschentscher 2013; Desai et al. 2018). Thus, counting may indeed involve abstract semantic processes to a greater extent than a semantic fluency task on everyday object concepts (e.g., flowers, animals). This view is supported by the fact that counting is frequently employed in clinical contexts for pre-operative language mapping (Duffau et al. 1999, 2004). Second, the AG response pattern in study D may be mainly driven by task difficulty: Behavioral analyses revealed that the counting task was easier than the semantic fluency task. Therefore, the domain-general task difficulty effect may have overshadowed the semantic effect in this study.
Overall, our results suggest that AG responses are modulated by both stimulus characteristics and task demands. Regarding stimulus characteristics, the AG is more engaged for meaningful than meaningless stimuli, even when presented in the same task (e.g., words > pseudowords in studies B, C and E; related > unrelated object pairs in study A; meaningful > meaningless phrases in study B). Other stimulus variables known to modulate AG responses, such as concreteness (Binder et al. 2005), frequency (Graves et al. 2010) or familiarity (Woodard et al. 2007), were well-matched between conditions within each study, and potential between-study differences were controlled for in the random effects structure of our linear-mixed-effects model. Regarding task demands, the AG might be expected to show stronger activity in tasks that explicitly require the retrieval of semantic information (e.g., semantic judgments) than in tasks that only implicitly probe semantic processing (e.g., lexical decision). Notably, however, a supplementary linear-mixed-model analysis that distinguished explicit and implicit semantic tasks yielded no improvement in model fit (Table S38), suggesting that the AG is similarly involved in both types of task. Nonetheless, fine-grained differences between explicit and implicit semantic tasks were observed in individual studies: In study B (Graessner et al. 2021), left PGa selectively showed higher activity for meaningful phrases (e.g., "fresh apple") than anomalous phrases (e.g., "awake apple") during explicit meaningfulness judgments, but not during implicit lexical comparisons. In study C (Kuhnke et al. 2020b), left PGa was selectively engaged for sound features of word meaning during sound judgments, and for action features during action judgments, but for neither during lexical decisions. Moreover, we observed positive activation (above rest) in the left AG in studies A and C when the task explicitly required the retrieval of individual semantic features.
Taken together, these results suggest that left AG responds most strongly to task-relevant semantic information. This view is in line with theories that assume semantic processing to rely on a flexible, task-dependent architecture (Hoenig et al. 2008; Kemmerer 2015; Kuhnke et al. 2020b, 2021).
Task difficulty effects in the AG
Several recent studies have claimed that semantic effects in the AG are likely to be an artifact of difficulty rather than semantic processing per se (Humphreys et al. 2015, 2021). That is, the authors point out that some typical semantic contrasts that reveal larger relative activation in the AG may be confounded with difficulty (e.g., words vs. pseudowords, concrete vs. abstract words). Indeed, multiple studies found that harder conditions yield stronger AG deactivation (Hahn et al. 2007; Humphreys et al. 2015). The current study reveals that task difficulty and semantic processing demand show separable effects in the bilateral AG: AG activity decreased with increasing task difficulty across semantic and non-semantic conditions. Independently, AG activity was relatively higher during semantic than non-semantic conditions. Notably, difficulty and semantic effects were also orthogonal in regional preference: Difficulty effects were stronger in PGa than PGp, regardless of hemisphere. Semantic effects were stronger in left than right AG, regardless of subregion. Thus, our results are consistent with the claim that the AG indexes task difficulty, but they provide strong evidence against the claim that semantic effects in the AG are explained by difficulty.
The separability of difficulty and semantic effects provides evidence that the AG is likely to have a domain-general role in addition to being sensitive to semantic processing. This view is supported by a previous fMRI study which revealed that the same regions of the default mode network (DMN), including AG, can show domain-general task difficulty effects (activation for pseudowords vs. words) and domain-specific semantic effects (decoding of high vs. low imageability words) (Mattheiss et al. 2018). Moreover, this view is in line with modern network-based views of cognitive neuroscience, suggesting that the function of a brain region depends on its interactions with other areas in a given task (Seghier 2013). Under this view, the same region can have multiple different functions by virtue of being connected to different regions in different tasks (Bassett and Sporns 2017). Indeed, the AG exhibits particularly flexible functional connectivity in different cognitive tasks (Chai et al. 2016; Kuhnke et al. 2021). The AG seems to be involved in both domain-general task-difficulty-related processes and domain-specific semantic processes. These are not mutually exclusive.
Relatedly, it is important to consider whether semantics and task difficulty were correlated in our study, and if so, how the linear-mixed-model analysis could distinguish their effects on AG activity. Firstly, non-semantic conditions were not always harder than semantic conditions. For example, in study C (Kuhnke et al. 2020b), lexical decisions on pseudowords were easier than semantic judgments. Nonetheless, pseudowords induced the strongest AG deactivation. Secondly, the semantics and RT variables were not strongly correlated (Table S37), allowing for both variables to explain unique variance in AG responses. Finally, if semantic effects in the AG could be completely explained by RT, then a model based only on RT should be optimal. However, this was clearly not the case: The RT-only model was substantially outperformed by models that also included semantics. Overall, our results indicate that task difficulty and semantics explain separable parts of the variance in AG activity.
Notably, while the linear-mixed-model analysis across all studies indicated that effects of task difficulty and semantics on AG responses were independent, we found an interaction between semantics and difficulty in study D (Martin et al. 2021). Specifically, right AG showed a selective difficulty effect in the semantic fluency task, but not in the counting task. However, behavioral analyses also revealed a selective difficulty effect in the semantic fluency task, whereas forward (“easy”) and backward (“difficult”) counting did not differ in behavioral performance. Therefore, the AG response pattern corresponded to the behavioral pattern, consistent with the view that difficulty effects in the AG are domain-general.
Lateralization of semantic processing in the AG
Left AG showed stronger semantic effects than right AG. Our combined analysis across all studies revealed that left AG exhibits larger activity differences between semantic and non-semantic conditions. In individual studies, left AG always showed activity differences between semantic and non-semantic conditions, as well as between different semantic conditions. Right AG also often showed these effects, but not always. For example, in study C (Kuhnke et al. 2020b), left but not right AG was engaged for sound and action features of word meaning. In study E (Turker et al. 2021), left but not right AG was sensitive to word complexity. In study B (Graessner et al. 2021), right PGa did not distinguish pseudoword and real-word phrases.
These results are in line with a previous neuroimaging meta-analysis demonstrating a more consistent recruitment of left than right AG during semantic processing (Binder et al. 2009). However, they are inconsistent with meta-analyses suggesting exclusive recruitment of left but not right AG (Jackson 2021; Hodgson et al. 2021). Our findings suggest that right AG is also sensitive to semantics, but plays a weaker role than left AG, at least under “normal” conditions in young and healthy human adults. In support of this view, Jung-Beeman (2005) summarized evidence that both the left and right hemispheres are engaged in semantic cognition; however, the right hemisphere seems to perform coarser computations than the left.
As a hypothesis for future work, we propose that right AG might compensate when left AG is perturbed or even damaged. Such potential mechanisms of adaptive plasticity could be investigated in future studies combining non-invasive brain stimulation (e.g., TMS) with a neuroimaging readout (e.g., fMRI) (Bergmann et al. 2016; Hartwigsen and Volz 2021).
Functional–anatomical subdivision of the AG (PGa vs. PGp)
We observed distinct response profiles for the cytoarchitectonic subregions of the AG, PGa and PGp. Specifically, our combined analysis across all studies revealed that domain-general task difficulty effects were stronger in PGa than PGp, regardless of hemisphere.
These results partially support and partially refute previous functional–anatomical subdivisions of the AG. Noonan et al. (2013) performed a meta-analysis of functional neuroimaging studies on executive control during semantic processing. They found that left dorsal AG (~ PGa) and adjacent intraparietal sulcus (IPS) showed increased activity for semantic tasks with a higher executive demand. In contrast, left ventral AG (~ PGp) was engaged for semantic vs. phonological tasks, but insensitive to executive demands. Accordingly, the “controlled semantic cognition” (CSC) framework proposes that dorsal AG/IPS supports the controlled retrieval of semantic representations, rather than semantic representation per se (Jefferies 2013). Specifically, under this framework, left dorsal AG/IPS is associated with the multiple demand network (MDN) involved in domain-general executive control (Duncan 2010), whereas ventral AG supports semantic integration (Jefferies 2013). Similarly, Humphreys et al. (2021) argued that dorsal AG / IPS and ventral AG have distinct functions as dorsal AG shows a greater response when difficulty is increased, whereas ventral AG shows lower activity for harder tasks. In line with these proposals, we found stronger difficulty effects in PGa than PGp, suggesting a stronger contribution of PGa to domain-general task-difficulty-related processes. However, in contrast to these previous views, we found the same negative relationship between neural activity and task difficulty in all AG-ROIs: Activity decreased with increasing difficulty.
This response pattern contradicts the expected response pattern of a control-related MDN region (i.e., increased activity for increased difficulty) and is more consistent with the response of a DMN region (cf. Noonan et al. 2013). Previous reports of control-related activation in PGa may have reflected activation “spillover” from nearby MDN regions, such as IPS (Duncan 2010; Whitney et al. 2012). This is especially plausible in a coordinate-based meta-analysis (as in Noonan et al. 2013), which involves smoothing activation peaks in standard space using Gaussian kernels (Eickhoff et al. 2009). Indeed, a more recent meta-analysis of semantic control, which updated the Noonan et al. (2013) study with novel data and methodology, found no consistent AG engagement (Jackson 2021; also see Hodgson et al. 2021). Moreover, a high-resolution subject-specific parcellation study revealed that IPS is part of the core MDN, whereas the AG is anti-correlated with the MDN and more likely to belong to the DMN (Assem et al. 2020). Together with our finding of lower AG activity for harder tasks, these results oppose the view of the CSC framework that (part of) the AG is involved in executive control processes during semantic processing. Overall, if the semantic system is indeed composed of representation and control regions as the CSC framework proposes (Lambon Ralph et al. 2016), our results suggest that the AG supports semantic representation, rather than control. However, it is unclear whether representation and control can be strictly divided (Chapman et al. 2020), and there may be further subdivisions of the semantic system, such as long-term (semantic memory) vs. short-term (working memory) representation (Martin et al. 1994; Vigneau et al. 2006).
Crucially, in our study, all AG subregions—including bilateral PGa—also showed domain-specific semantic effects, which were separable from their domain-general task difficulty effects. These findings strongly suggest that bilateral PGa supports not only domain-general processes, but also domain-specific semantic processes (Mattheiss et al. 2018). Indeed, while the combined analysis across all studies indicated similar semantic effects (i.e., activity differences between semantic and non-semantic conditions) for PGa and PGp, left PGa seemed to exhibit the highest sensitivity to fine-grained semantic manipulations in individual studies. For example, in the study by Graessner et al. (2021), left PGa was the only AG region that showed an activity difference between meaningful and anomalous phrases when this subtle semantic difference was task-relevant. In Kuhnke et al. (2020b), left PGa was the only AG subregion that selectively responded to action and sound features of word meaning when these were task-relevant. These results suggest that left PGa might be the AG subregion that is most relevant for semantic cognition.
This view of left PGa is supported by Seghier (2013) who subdivided the left AG into anterior-dorsal and ventral subregions based on four functional neuroimaging studies (Sharp et al. 2009; Nelson et al. 2010; Seghier et al. 2010; Price and Ansari 2011). The anterior-dorsal subregion shows a remarkably close and consistent correspondence with area PGa (see Fig. 4 in Seghier 2013). Moreover, in all four studies, this area was sensitive to semantic variables. The other (more ventral) subregions were more variable across studies, both in location and in relation to semantic processing. Taken together, these previous findings and our current results support the view that left PGa constitutes a functional unit that can be functionally distinguished from its neighbors (e.g., PGp, IPS) and that is engaged in semantic cognition.
The role of the AG in semantics
A common view holds that the AG acts as a cross-modal convergence zone or “hub” that binds and integrates semantic features related to various sensory-motor modalities (Damasio 1989; Mesulam 1998; Binder and Desai 2011). This view is supported by the AG’s location at the junction between several sensory-motor processing streams (e.g., somatomotor, auditory, visual; Seghier 2013; Margulies et al. 2016). Moreover, the AG shows extensive structural (Hagmann et al. 2008; Bonner and Price 2013) and functional (Tomasi and Volkow 2011; Kuhnke et al. 2021) connectivity with various sensory-motor cortices. Crucially, functional neuroimaging studies indicate that AG activity increases with the amount of semantic information that can be extracted from a given input (Binder 2016). At the level of individual concepts, AG activity is modulated by thematic associations (Bar and Aminoff 2003), concreteness (Binder et al. 2005), frequency (Graves et al. 2010), and familiarity (Woodard et al. 2007). Beyond the single-concept level, the AG is sensitive to compositionality at the phrase (Price et al. 2015b), sentence (Obleser et al. 2007), and narrative (Ferstl et al. 2008) levels. Taken together, these results suggest that the AG integrates different semantic features into a coherent conceptual representation.
However, the role of a cross-modal hub of the semantic system is classically associated with the anterior temporal lobe (ATL) (Lambon Ralph et al. 2016). In support of this view, evidence from semantic dementia (Patterson et al. 2007; Jefferies 2013), functional neuroimaging (Visser et al. 2010; Rice et al. 2015), and TMS (Pobric et al. 2010a, b) indicates a crucial role of the ATL in semantic processing across virtually all types of concepts. If the ATL already acts as a cross-modal hub, what could be the function of the AG?
We propose that the AG constitutes a “multimodal” hub, whereas the ATL is an “amodal” hub. As an “amodal” hub, the ATL integrates semantic features into highly abstract representations that do not retain modality-specific information. As a “multimodal” hub, on the other hand, the AG binds different semantic features associated with the same concept, while retaining modality-specific information. In other words, the amodal ATL combines features in a complex, non-linear fashion (cf. Lambon Ralph 2014; Patterson and Lambon Ralph 2016), whereas the multimodal AG binds features linearly. During online processing, the AG could thereby enable efficient access to task-relevant semantic features (Kuhnke et al. 2020b, 2021).
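The contrast between linear, modality-preserving binding and non-linear, modality-abstracting combination can be made concrete with a toy sketch. This is purely illustrative: the feature vectors, the concatenation scheme, and the tanh projection are arbitrary choices for demonstration, not claims about actual AG or ATL computations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors of one concept in two modalities
action = rng.normal(size=4)   # e.g., action-related features
sound = rng.normal(size=4)    # e.g., sound-related features

# "Multimodal" binding (AG-like in this sketch): linear concatenation;
# each modality remains recoverable as a separate slice of the bound vector
multimodal = np.concatenate([action, sound])
print(np.allclose(multimodal[:4], action))   # True: modality info retained

# "Amodal" combination (ATL-like in this sketch): non-linear mixing;
# the individual modality vectors are no longer separable components
W = rng.normal(size=(4, 8))
amodal = np.tanh(W @ multimodal)
```

In the linear case, a downstream process can read out the task-relevant modality directly; in the non-linear case, modality-specific structure is absorbed into an abstract representation.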
Similar proposals have been put forth previously. For instance, Seghier (2013) argued that while integration and amodality have been associated with the ATL, the AG might support “first-order” integration that provides direct access to conceptual representations. Similarly, Reilly et al. (2016) proposed the AG to constitute a “low-order hub” engaged in multimodal feature binding, whereas the ATL acts as a “high-order” hub performing symbolic transformations on the bound features. During these non-linear transformations, modality-specific information is abstracted away.
The multimodal–amodal hub theory is supported by several studies. Fernandino et al. (2016) found that AG activity during concreteness judgements correlated with the strength of sensory-motor associations for all modalities tested (action, sound, shape, color, motion). In contrast, ATL activity did not correlate with individual sensory-motor associations. In line with these results, Kuhnke et al. (2020b) demonstrated that left AG responds to both sound and action features of concepts when these are task-relevant. Again, the ATL did not show modality-specific effects. However, the ATL was engaged for abstract semantic information (i.e., words vs. pseudowords). In a follow-up study (Kuhnke et al. 2021), left AG was functionally coupled with auditory brain regions during sound feature retrieval, and with somatomotor regions during action feature retrieval. This suggests that left AG guides the retrieval of task-relevant semantic features via flexible coupling with different sensory-motor cortices. In contrast, the ATL interacted with other high-level cross-modal areas, but not sensory-motor regions. Finally, TMS over left AG can selectively disrupt the retrieval of individual task-relevant semantic features (Kuhnke et al. 2020a; also see Pobric et al. 2010a; Ishibashi et al. 2011). In contrast, TMS over ATL typically impairs semantic processing for all types of concepts (Pobric et al. 2010a, b).
Overall, our and previous findings suggest that AG and ATL play distinct, complementary roles during semantic processing. The ATL acts as an “amodal” hub that represents an abstract similarity structure transcending individual modalities (Patterson and Lambon Ralph 2016). In contrast, the AG acts as a “multimodal” hub that binds different semantic features of the same concept, enabling efficient access to task-relevant features (Seghier 2013; Reilly et al. 2016; Kuhnke et al. 2020b, 2021).