Psychonomic Bulletin & Review, Volume 23, Issue 4, pp 1096–1108

In defense of abstract conceptual representations

Brief Report

Abstract

An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level “convergence zones” that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient “chunking” of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.

Keywords

Semantic memory · Concept representation · Neuroimaging · Association cortex

There is now strong empirical evidence that modal perception, action, and emotion systems play a large role in concept retrieval (Fischer & Zwaan, 2008; Kiefer & Pulvermüller, 2012; Meteyard, Rodriguez Cuadrado, Bahrami, & Vigliocco, 2012). Concepts are generalizations derived from sensory, motor, and affective experiences, and the principle that modal brain systems responsible for these experiences are also involved in knowledge retrieval provides a parsimonious account of concept acquisition and storage (Barsalou, 1999; Damasio, 1989). Embodiment of conceptual knowledge provides a natural mechanism for grounding concepts in perception and action and thus the critical means by which concepts can refer to the external world (Harnad, 1990).

Much more research is needed to clarify the extent to which different levels of sensory, motor, affective and other hierarchical processing systems are involved in concept representation, the role of bimodal and multimodal areas, the involvement of these systems in representing temporal and spatial event concepts, their role in abstract concepts, and so on. As this research unfolds, it is also useful to keep in mind that not all brain areas that process concepts are content-specific. Before the “embodiment revolution,” it was not uncommon to study conceptual processing in the brain without reference to any specific type of semantic content. Many functional imaging studies, for example, compared neural responses evoked by unselected words with responses evoked by pseudowords (Binder et al., 2003; Binder, Medler, Desai, Conant, & Liebenthal, 2005; Cappa, Perani, Schnur, Tettamanti, & Fazio, 1998; Démonet et al., 1992; Henson, Price, Rugg, Turner, & Friston, 2002; Ischebeck et al., 2004; Kotz, Cappa, von Cramon, & Friederici, 2002; Kuchinke et al., 2005; Mechelli, Gorno-Tempini, & Price, 2003; Orfanidou, Marslen-Wilson, & Davis, 2006; Rissman, Eliassen, & Blumstein, 2003; Xiao et al., 2005). The assumption was that meaningful words would engage concept retrieval to a greater degree than meaningless pseudowords, regardless of the specific content of the word meanings. 
A similar logic applied to studies contrasting related word pairs with unrelated word pairs (Assaf et al., 2006; Graves, Binder, Desai, Conant, & Seidenberg, 2010; Mashal, Faust, Hendler, & Jung-Beeman, 2007; Mechelli, Josephs, Lambon Ralph, McClelland, & Price, 2007; Raposo, Moss, Stamatakis, & Tyler, 2006) and studies contrasting sentences with random word strings (Humphries, Binder, Medler, & Liebenthal, 2006; Humphries, Willard, Buchsbaum, & Hickok, 2001; Kuperberg et al., 2000; Mashal, Faust, Hendler, & Jung-Beeman, 2009; Obleser & Kotz, 2010; Obleser, Wise, Dresner, & Scott, 2007; Pallier, Devauchelle, & Dehaene, 2011; Stringaris, Medford, Giampietro, Brammer, & David, 2007). In each case, a “semantic system” was expected to respond more strongly to the more meaningful stimulus than to the less meaningful stimulus, regardless of the specific type of content that was represented.

Somewhat surprisingly, given the unselected nature of the stimuli and the wide variety of tasks that were used, these studies yielded very reproducible results. My colleagues and I performed an activation likelihood estimate (ALE) meta-analysis of 87 such studies (Binder, Desai, Conant, & Graves, 2009). To be included, each experiment had to include a comparison task that provided controls for orthographic, phonological, and general cognitive demands of the semantic task. The results (see Fig. 1) revealed a distributed, left-lateralized network comprised of seven nodes: (1) inferior parietal cortex (angular gyrus and portions of the supramarginal gyrus); (2) middle and inferior temporal gyri, extending into the anterior temporal lobe; (3) ventromedial temporal cortex (fusiform and parahippocampal gyri); (4) dorsomedial prefrontal cortex (superior frontal gyrus and posterior middle frontal gyrus); (5) ventromedial prefrontal cortex; (6) inferior frontal gyrus (mainly pars orbitalis); and (7) the posterior cingulate gyrus and precuneus.
Fig. 1

A supramodal “conceptual hub” network identified by quantitative meta-analysis of 87 neuroimaging studies of semantic processing. The studies all included a manipulation of stimulus meaningfulness but no manipulation of modality-specific content. Note. DMPFC = dorsomedial prefrontal cortex; FG/PH = fusiform gyrus/parahippocampus; IFG = inferior frontal gyrus; IPC = inferior parietal cortex; PC = posterior cingulate/precuneus; VMPFC = ventromedial prefrontal cortex. Adapted with permission from Binder et al. (2009). (Color figure online.)

Some anatomical characteristics of this network are noteworthy. Without exception, all nodes in the network are high-level multimodal/supramodal areas distant from primary sensory and motor cortices (Mesulam, 1985; Sepulcre, Sabuncu, Yeo, Liu, & Johnson, 2012). Each has been identified as a “hub” with a dense and widely distributed pattern of connectivity (Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Buckner et al., 2009). A conspicuous feature of the parietal and temporal regions is that they are sandwiched between multiple modal association cortices. For example, the angular gyrus lies at a confluence of visual, somatosensory, and auditory processing streams. Macaque area PG/7a, the closest monkey homologue of the angular gyrus, receives inputs exclusively from secondary visual, auditory, and multimodal regions (Andersen, Asanuma, Essick, & Siegel, 1990; Cavada & Goldman-Rakic, 1989; Jones & Powell, 1970). The ventral anterior temporal lobe, which has probably been under-represented in fMRI studies of semantic processing due to difficulty obtaining MRI signals from this region (Devlin et al., 2000), is another case in point. This region receives inputs from a broad range of modal association cortices (Jones & Powell, 1970; Van Hoesen, 1982), and patients with damage to this general region show multimodal (visual, auditory, motor) knowledge deficits (Patterson et al., 2007). Such facts suggest that these temporal and parietal nodes occupy positions at the top of a multimodal, convergent sensory-motor-affective hierarchy (Damasio, 1989). Their activation across a wide range of meaningful stimuli regardless of sensory-motor-affective content suggests that the information processed in these regions is not strongly tied to a particular perceptual or motor modality.

But what is the precise nature of the information represented in these high-level convergence zones, and what role might these representations play in semantic cognition? Standard models of cognitive processing certainly depend on amodal symbolic representations (Newell & Simon, 1976; Pylyshyn, 1984), but are these abstract representations necessary for actual conceptual processing in the brain or merely a convenience for creating computational models? Evidence that sensory, motor, and affective systems play a role in conceptual processing is increasingly difficult to deny, and the principle of modality-specific knowledge representation provides an elegant account of concept acquisition and grounding. If the conceptual content of actual human consciousness can be fully specified by activation of sensory-motor-affective information, what need is there for highly abstract representations (Barsalou, 1999; Gallese & Lakoff, 2005; Martin, 2007; Prinz, 2002)? In addition to their possible redundancy, abstract representations are usually conceived as having fixed content, such that models composed entirely of abstract symbols are often criticized as inflexible and unable to account for context effects (Barsalou, 1982; McCarthy & Hayes, 1969; Murphy & Medin, 1985). In contrast, distributed modal representations of conceptual knowledge are capable of context-sensitive variation in the pattern and relative strength of activation of component modal features, enabling dynamically flexible conceptual representation (Barsalou, 2003).

In the following brief discussion, I propose a way of thinking about abstract conceptual representations as high-level conjunctions rather than amodal symbols, and discuss some specific functions these representations might have. A variety of empirical neuroimaging findings are then explained in terms of the predicted responses of such representations to particular stimulus and task manipulations. The formulation owes much to previous convergence zone theories (Damasio, 1989; Simmons & Barsalou, 2003) and pluralistic representational accounts (Andrews, Frank, & Vigliocco, 2014; Dove, 2009; Louwerse & Jeuniaux, 2010; Meteyard et al., 2012; Patterson et al., 2007). The principal aims here are to expand the list of potential computational advantages conferred by high-level conjunctive representations and to review in some detail the neuroimaging evidence specifically relevant to these proposed processes.

The utility of broadly conjunctive conceptual representations

Some clarification of terminology is first necessary. Symbolic representations in traditional computational theories of cognition are “abstract” by definition: They refer to concepts via an arbitrary relationship and have no intrinsic content aside from links to other symbols (Harnad, 1990). The theory presented here is rather different. Abstract representations in the brain arise from a process of hierarchical conjunctive coding, and it is their combinatorial nature that is important rather than their abstractness per se. Conjunctive representation occurs when a neuron or neural ensemble responds preferentially to a particular combination of inputs. The essential function of neurons is to collect and combine information, and conjunctive representation seems to be a ubiquitous feature of perceptual systems in the brain (Barlow, 1995). Abstraction occurs at the level of a conjunctive representation because the representation codes the simultaneous occurrence of two or more inputs, say A and B, and not, in general, all of the particulars of A or B. These particulars are retrieved as needed by top-down activation of A and B by the conjunctive representation (Damasio, 1989).
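
The logic of conjunctive coding can be made concrete with a toy sketch (illustrative only; the class, names, and values are invented for exposition, not a claim about neural implementation). A unit fires when its inputs co-occur, coding the conjunction of A and B but not their particulars; those can be recovered "top-down" by reinstating the stored input identities, as in Damasio's (1989) retroactivation proposal.

```python
# Toy sketch of conjunctive coding (illustrative, not a neural model).
# A unit responds to the co-occurrence of its inputs, abstracting over
# their internal details; top_down() reinstates the input identities.

class ConjunctiveUnit:
    def __init__(self, input_names, threshold=None):
        self.inputs = list(input_names)
        # By default, fire only when every input is active (a full conjunction).
        self.threshold = threshold if threshold is not None else len(self.inputs)

    def activation(self, active_inputs):
        """Return 1.0 if enough of this unit's inputs are active, else 0.0."""
        hits = sum(1 for name in self.inputs if name in active_inputs)
        return 1.0 if hits >= self.threshold else 0.0

    def top_down(self):
        """Reinstate the input identities this conjunction binds together."""
        return set(self.inputs)

# A unit coding the conjunction of two inputs, A and B
ab = ConjunctiveUnit(["A", "B"])
print(ab.activation({"A", "B"}))   # conjunction present -> 1.0
print(ab.activation({"A"}))        # only one input -> 0.0
print(ab.top_down())               # top-down retrieval of the particulars
```

Note that the unit's state carries no information about what A or B are internally, which is exactly the sense in which abstraction occurs at the level of the conjunction.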

Rather than “abstract representation,” a term closely tied to nonbiological models of cognition, I will use the term “crossmodal conjunctive representation” (CCR) to emphasize the essential combinatorial function of these representations and their origin in neurobiological systems. Another advantage of this term is that it offers flexibility regarding how “abstract” a particular representation is relative to low-level sensory-motor representations. All indications are that conjunctive representations are arranged hierarchically in perception and action systems (Felleman & Van Essen, 1991; Graziano & Aflalo, 2007; Hubel & Wiesel, 1968; Iwamura, 1998; Kobatake & Tanaka, 1994), with multiple levels of representational complexity, where “complexity” refers to the number or range of low-level inputs contributing to activation of the conjunctive representation. The degree to which lower-level information (e.g., information coding a particular shape, color, or body action) is retained at higher levels of representation (e.g., banana) presumably varies depending on the salience of the information and level of representation. At very high levels of this convergent hierarchy, CCRs might retain so little representation of actual experiential information that they functionally resemble arbitrary symbols. The key point, however, is that CCRs are not theoretical constructs; they arise through neurobiological convergences of information. They are as abstract as they “need to be” to represent a combination of inputs. On this view, there is no absolute demarcation between embodied/perceptual and abstract/conceptual representation in the brain.

It is important to stress here that the CCR terminology is adopted purely as a convenient, descriptive label intended to bring to mind the basic neural computational process of conjunctive coding, and should not be taken as a novel proposal. A number of previous authors have proposed models of knowledge representation based on hierarchical conjunctive coding in convergence zones at different levels of complexity (Damasio, 1989; Simmons & Barsalou, 2003). A CCR is equivalent to the content represented in a crossmodal convergence zone (Simmons & Barsalou, 2003).

Another important clarification is that CCRs are not necessarily highly localist in their neural realization. The critical aspect of CCRs is that they represent broad combinations of inputs. In theory such representations could be instantiated in single, dedicated cells, and such sparse, highly localized representations have been observed in the medial temporal lobe (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005). Given the almost infinite number of concepts and concept variations that are possible, however, it is more likely that CCRs are instantiated as distributed neural ensembles or networks, and that a given neural ensemble represents a range of related concepts through variation in a distributed pattern of activation (O’Reilly & Busby, 2001).

The role of conjunctive coding has been explored, under various guises, in multiple sensory and motor domains (Fitzgerald, Lane, Thakur, & Hsiao, 2006; Graziano & Aflalo, 2007; Hubel & Wiesel, 1968; Kobatake & Tanaka, 1994; Schreiner, Read, & Sutter, 2000; Suga, 1988) and in episodic memory encoding (O’Reilly & Rudy, 2001; Lin, Osan, & Tsien, 2006; Rudy & Sutherland, 1995). In the domain of semantic cognition, Rogers, Patterson, and colleagues argued that broadly convergent, supramodal conceptual representations allow the brain to recognize underlying object similarity structure in the face of variably overlapping and conflicting features (Rogers et al., 2004; Rogers & McClelland, 2004; Patterson et al., 2007). For example, people know that apples, oranges, bananas, grapes, and lemons are all fruit despite salient differences in their appearance, taste, associated actions, and names. In computer simulations in which neural networks were trained to map between sensory, motor, and verbal features of objects, only networks containing highly convergent representations were able to capture semantic similarity relationships between the objects (Rogers & McClelland, 2004). Thus, CCRs that capture multimodal convergences appear to be necessary for learning taxonomic category relationships.

A related and equally ubiquitous phenomenon for which CCRs provide a much-needed explanation is thematic association. Consider the statement “The boy walked his dog in the park.” The inference that the dog is likely wearing a leash cannot be made purely on the basis of the sensory-motor features of dog, walk, park, or leash. Rather, the leash is a thematic or situation-specific association based on co-occurrence experiences. Thematic associations of this kind (dog-bone, coffee-cup, paper-pencil, shoe-lace, etc.) are pervasive in everyday experience and provide much of the foundation for our pragmatic knowledge (Estes et al., 2011). What kind of neural mechanism would support such associations? A mechanism that is sensitive only to sensory-motor feature similarity would find this a hard problem. Any association between coffee and cup based on feature content would be unlikely to generalize to other associations of coffee (e.g., cream, sugar, café, barista). The problem is that thematic associations primarily reflect situational co-occurrence rather than the structure of feature content, and the enormous number and variety of such associations would seem to make links based solely on a linear function of overlapping features impossible.

CCRs solve this problem by providing highly abstract conceptual representations activated by conjunctions of features, which can then “wire together” with other highly abstract conceptual representations with which they co-occur. That is, activation of the concept leash in the context of walk, dog, and park results from direct activation of the CCR for leash by the CCRs for the other concepts, independent of the sensory-motor feature overlap between these concepts. Mapping between concepts that have little or no systematic feature overlap, like dog and leash, is conceptually similar to other arbitrary mapping problems, such as mapping between orthographic or phonological word forms and meaning. In such cases, the output is not a simple linear combination of features of the input, and intermediate representations that combine information across multiple features are necessary to enable nonlinear transformations (Rumelhart, Hinton, & Williams, 1986). Thus, another principal function of high-level CCRs is to provide a neural mechanism for activating a field of thematically associated concepts independent of any shared sensory-motor feature structure.
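
A minimal sketch of this "wiring together" through situational co-occurrence (illustrative only; the situations and the simple counting rule are invented stand-ins for experience-driven Hebbian learning, not the proposed neural mechanism itself):

```python
# Illustrative sketch: concept-level units become associated through
# situational co-occurrence, independent of any feature overlap.
# Association weights here are bare co-occurrence counts (a Hebbian-style rule).
from collections import defaultdict
from itertools import combinations

weights = defaultdict(float)

def experience(situation):
    """Strengthen links between every pair of concepts co-active in a situation."""
    for a, b in combinations(sorted(situation), 2):
        weights[(a, b)] += 1.0

def associates(concept):
    """Concepts linked to `concept`, ordered by association strength."""
    linked = {}
    for (a, b), w in weights.items():
        if concept == a:
            linked[b] = linked.get(b, 0.0) + w
        elif concept == b:
            linked[a] = linked.get(a, 0.0) + w
    return sorted(linked, key=linked.get, reverse=True)

# dog and leash share almost no sensory-motor features, but they co-occur:
experience({"boy", "dog", "walk", "park", "leash"})
experience({"dog", "leash", "walk"})
experience({"coffee", "cup", "sugar"})

print(associates("dog"))  # leash and walk rank as the strongest associates
```

The point of the sketch is that the dog–leash link depends only on co-activation of the two concept-level representations, never on any comparison of their feature content.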

Learning and retrieving taxonomic and thematic associations, however, is not an end in itself. The ability to learn and retrieve associations between concepts makes possible a range of other abilities. Prominent among these is the ability to mentally retrieve a typical situation or context in which a concept occurs. Thematic association underlies, for example, our ability to retrieve the context kitchen when presented with the concept oven, and to retrieve a set of other concepts thematically related to ovens and kitchens. This rich associative retrieval in turn enables more efficient and more complete comprehension of oven, and it primes the processing of any items in the thematically related field that might subsequently appear (Estes et al., 2011; Hare, Jones, Thomson, Kelly, & McRae, 2009; Metusalem et al., 2012). Thus, thematic association can be thought of as a form of prediction that allows anticipation of future events and extensive inference about current situations (Bar, 2007).

Although associations derived from experience offer important predictive advantages, human conceptual abilities are not limited to retrieval of frequent associations. A defining feature of human thought is its generativity and creative capacity. This generative capacity depends on the ability to compute mental representations of situations (i.e., events, states, and other propositional content). A situation, in the most general sense, can be thought of as simply a configuration of concepts, generally including entities, actions, properties, and relationships. For illustration purposes, take any two objects O1 and O2, two intentional agents A1 and A2, an intransitive action I, a transitive action T, a locative preposition L, and a property P:
The O1 was P. = property state (e.g., The ball was heavy.)

The O1 was L the O2. = spatial relationship state (e.g., The ball was in the box.)

The A1 did I. = intransitive event (e.g., The girl ran.)

The A1 did T to O1. = transitive object event (e.g., The girl hit the ball.)

The A1 did T to A2. = transitive social event (e.g., The girl hit the boy.)

As these schematic examples illustrate, propositional content is constructed of configurations of concepts. For a situation to be represented in awareness, all of the constituent concepts must be simultaneously activated and in some sense bound together, with each concept assigned its thematic role. It is difficult to see how such complex conceptual combinations could be instantiated using sensory-motor representations alone. This would require a flexible representation of thematic roles within sensory-motor systems that would distinguish, say, the concept of girl as an agent versus girl as a patient in a social situation. Such a distinction would depend on relationships between the girl and the other entities comprising the situation, which by definition arise de novo from the particulars of the situation and so could not be contained within the sensory-motor content of girl. High-level CCRs provide a schematic, or “chunked” representation of concepts to which roles can be assigned flexibly, based on context.
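
The idea that a role is a property of the configuration rather than of the concept can be sketched as follows (illustrative only; the role inventory and the dictionary representation are expository conveniences, not a claim about how binding is neurally implemented):

```python
# Illustrative sketch: a situation as a configuration of concept
# representations with flexibly assigned thematic roles. The same concept
# ("girl") fills different roles in different situations; the role lives
# in the configuration, not in the concept's own content.

def situation(action, agent=None, patient=None, theme=None, location=None):
    roles = {"action": action, "agent": agent, "patient": patient,
             "theme": theme, "location": location}
    # Keep only the roles this particular situation fills
    return {role: concept for role, concept in roles.items() if concept}

hit_ball = situation("hit", agent="girl", theme="ball")   # The girl hit the ball.
hit_girl = situation("hit", agent="boy", patient="girl")  # The boy hit the girl.

# girl appears as agent in one configuration and as patient in another
print(hit_ball["agent"], hit_girl["patient"])  # girl girl
```

Because the concept token is the same in both configurations, nothing in the sensory-motor content of girl needs to change for her role to change.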

The specific mechanisms by which such conceptual composition occurs are still largely unknown, and a detailed discussion of these processes is beyond the scope of this review. In a language comprehension context, syntax obviously provides important sources of information for constraining conceptual composition. The present theory, however, is about conceptual processing in general, whether in a linguistic or a nonlinguistic “mental imagery” context. Even in language tasks it seems clear that a conceptual composition must be computed independent of language prior to comprehension or overt expression (Bransford & Johnson, 1973; Kintsch & van Dijk, 1978; Metusalem et al., 2012; Tanenhaus et al., 1995). One general idea is that CCRs are associated with other concepts that “afford” particular kinds of roles and relationships. As one example, the concept of intentionality is strongly associated with concepts of individual people and groups of people, and to some extent with intelligent animals. Activation of this associated concept biases interpretation toward a role as agent in a situation. As another example, very large, inanimate objects (parks, buildings, etc.) are associated with the concept of being fixed in space, which affords a role as a spatial reference point and a geographical ‘container’ in which activities can occur. Verb concepts, too, have associations that constrain the types of subjects and objects with which they can sensibly combine (e.g., a car can hit a tree but a car cannot eat a tree) and specify the spatial, temporal, body action, mental experience, social, and other schemata contained in the event that is being represented (Jackendoff, 1990; Levin, 1993).

According to this theory, then, another principal function of high-level CCRs is to create mental representations of situations. The importance of this process for human cognition is hard to overstate, as it provides the semantic content for our episodic memory, imagination of future events, evaluation of propositions for truth value, moral judgments, goal setting and problem solving, daydreaming and mind wandering, and all other thought processes that involve forming relational configurations of concepts. One often-discussed problem for which such configurations might provide a general solution is the representation of very abstract concepts, such as justice, evil, truth, loyalty, and idea. Many such concepts seem to be learned by experience with complex social and introspective situations that unfold over time and involve multiple agents, physical events, and mental events (Barsalou & Wiemer-Hastings, 2005; Borghi, Flumini, Cimatti, Marocco, & Scorolli, 2011; Wiemer-Hastings & Xu, 2005). Thus, Barsalou has proposed that such concepts seem “abstract” because their content is distributed across multiple components of situations (Barsalou & Wiemer-Hastings, 2005). According to this view, then, the ability to build mental representations of situations through relational configuration of high-level CCRs is central to the representation of many abstract concepts.

A frequently noted limitation of symbolic representations is their static nature, which rules out contextual flexibility in concept retrieval (Barsalou, 1982; McCarthy & Hayes, 1969; Murphy & Medin, 1985; Wittgenstein, 1958). It is important to realize, however, that this problem arises only in models composed entirely of static symbols. Hierarchical convergence zone models contain a mixture of (subsymbolic) distributed modal representations and more abstract conjunctive codes, and permit interactions between and within levels. Context effects could arise in these structures through two mechanisms. First, interactions at high levels between CCRs representing the context (call them “context CCRs”) and CCRs representing the topic concept could modulate activation of other high-level CCRs associated with the topic. For example, in the context of the question, “What color is your dog?”, the context CCR color activates a field of color concepts, one of which is associated with my dog and thus receives additional activation. Second, context CCRs could cause top-down activation of modal components of the topic CCR. In the context of the question, “What does your dog sound like?”, the context CCR sound interacts with the topic CCR my dog to produce top-down activation of a perceptual simulation of the sounds produced by my dog.
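
The first of these mechanisms can be sketched in a few lines (illustrative only; the association table, the "kind" tags, and the boost parameter are invented for exposition, not part of the theory's specification):

```python
# Illustrative sketch: a context CCR modulating which associates of a topic
# CCR become most active. Each associate of "my dog" is tagged with the kind
# of knowledge it carries; a context concept ("color", "sound") boosts the
# matching associates.

associations = {
    "my dog": [
        ("brown", "color", 0.6),
        ("bark", "sound", 0.7),
        ("leash", "thematic", 0.8),
    ],
}

def retrieve(topic, context=None, boost=1.0):
    """Return the most active associate of `topic`, given an optional context."""
    scored = {}
    for concept, kind, strength in associations.get(topic, []):
        gain = 1.0 + (boost if kind == context else 0.0)
        scored[concept] = strength * gain
    return max(scored, key=scored.get)

print(retrieve("my dog", context="color"))   # "What color is your dog?" -> brown
print(retrieve("my dog", context="sound"))   # "What does your dog sound like?" -> bark
```

The same topic representation yields different retrieved content under different contexts, which is the flexibility that purely static symbol systems lack.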

Some neuroimaging evidence for broadly conjunctive conceptual representations

Given the hypothesis that nodes in the “conceptual hub” network shown in Fig. 1 contain high-level CCRs, several fairly straightforward predictions are possible regarding modulation of activity in these nodes. The first is that activation in these areas should reflect the number of CCRs that are active (and their intensity of activation) at any given moment, which in turn depends on the number of associations that these CCRs have. Distributed neural ensembles in these regions are literally equivalent to CCRs, each of which can activate a set of associated CCRs. (The exact set activated and the strength of activation of each member in the set is assumed to vary with context and individual experience.) All else being equal, a CCR that activates many other associated CCRs (causing, in turn, activation of the CCRs associated with those CCRs, and so on) will produce greater activation in these areas than a CCR with relatively few or relatively weak associations. This prediction was verified by Bar and colleagues (Bar, 2007) in a series of studies contrasting object concepts that have strong thematic associations (e.g., microscope) with objects that have weaker or less consistent thematic associations (e.g., camera). Relative to low-association concepts, high-association concepts produced greater activation of the posterior cingulate/precuneus region, the medial prefrontal cortex, and a left parieto-occipital focus that is probably in the posterior angular gyrus (Talairach coordinates -49, -72, 13).
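
The prediction that hub activity tracks the size and strength of a concept's associative field can be sketched with a simple spreading-activation model (illustrative only; the graph, weights, decay, and depth are invented parameters chosen to mirror the microscope/camera contrast, not fitted to data):

```python
# Illustrative sketch: spreading activation over a weighted association graph.
# A concept with many strong associates produces more total network activation
# than one with few, matching the predicted modulation of hub-region activity.

def spread(graph, start, decay=0.5, depth=2):
    """Total activation after activating `start` and letting activation
    propagate through weighted links for `depth` steps."""
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(depth):
        nxt = {}
        for node, act in frontier.items():
            for neigh, w in graph.get(node, {}).items():
                nxt[neigh] = nxt.get(neigh, 0.0) + act * w * decay
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return sum(activation.values())

# A hypothetical high-association concept vs. a low-association one
graph = {
    "microscope": {"lab": 0.9, "slide": 0.8, "scientist": 0.7, "cell": 0.8},
    "lab": {"scientist": 0.6},
    "camera": {"photo": 0.5},
}

print(spread(graph, "microscope") > spread(graph, "camera"))  # True
```

On this toy model, as in the Bar (2007) findings, the high-association concept drives more total activity because its associates recruit further associates in turn.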

The Bar et al. experiment demonstrates the specific effect of association strength and set size on activity at high levels of the conceptual network, but the same principle accounts for a wide range of similar results from studies that did not explicitly manipulate this variable. For example, nodes in the conceptual hub network are activated by single words relative to pseudowords (Binder, Medler, et al., 2005; Binder et al., 1999; Binder et al., 2003; Cappa et al., 1998; Démonet et al., 1992; Henson et al., 2002; Ischebeck et al., 2004; Kotz et al., 2002; Kuchinke et al., 2005; Mechelli et al., 2003; Orfanidou et al., 2006; Rissman et al., 2003; Xiao et al., 2005; see Fig. 2, top left). According to the present theory, this is due to the fact that pseudowords have no strong associations with concepts, and therefore evoke little, if any, high-level CCR activation. Very similar results were obtained in studies comparing responses to familiar and unfamiliar proper names (Sugiura et al., 2006; Woodard et al., 2007; see Fig. 2, lower left). Like pseudowords relative to words, unfamiliar names, which refer to no known individual, have far fewer associations than familiar names, which refer to actual people about whom one has associated knowledge. An important related observation is that activation of conceptual hub regions by known words is stronger when a semantic retrieval task is required than when a non-semantic task (e.g., phonological or orthographic decision) is required (Craik et al., 1999; Devlin, Matthews, & Rushworth, 2003; Gitelman, Nobre, Sonty, Parrish, & Mesulam, 2005; Miceli et al., 2002; Mummery, Patterson, Hodges, & Price, 1998; Otten & Rugg, 2001; Price, Moore, Humphreys, & Wise, 1997; Roskies, Fiez, Balota, Raichle, & Petersen, 2001; Scott, Leff, & Wise, 2003). This indicates that activation of CCRs and spread of activation to associated CCRs is not an entirely automatic process, but depends in part on cognitive control.
Fig. 2

Activation of the conceptual hub network by four manipulations of associative content. Top left: Activation by words (hot colors) relative to pseudowords (cool colors) during an oral reading task. (Adapted with permission from Binder, Westbury, et al., 2005.) Lower left: Activation by familiar (i.e., publicly famous) person names relative to unfamiliar names during a famous/unfamiliar decision task. (Adapted with permission from Woodard et al., 2007.) Top right: Activation by concrete words (hot colors) relative to abstract words (cool colors) averaged across three studies using lexical decision, semantic decision, and oral reading tasks. (Adapted with permission from Binder, 2007.) Lower right: Activation by high-frequency (hot colors) relative to low-frequency (cool colors) words during an oral reading task. (Adapted with permission from Graves, Binder, et al., 2010.) Areas activated in common across all studies include the angular gyrus, posterior cingulate gyrus/precuneus, and superior frontal gyrus (dorsomedial prefrontal cortex). (Color figure online.)

Another observation explained by the general principle of association is the activation of conceptual hubs, particularly the angular gyrus, ventral temporal lobe, and posterior cingulate region, by concrete relative to abstract concepts (Bedny & Thompson-Schill, 2006; Binder, Medler, et al., 2005; Binder, Westbury, et al., 2005; Binder et al., 2009; Jessen et al., 2000; Fliessbach, Weis, Klaver, Elger, & Weber, 2006; Graves, Desai, Humphries, Seidenberg, & Binder, 2010; Sabsevitz et al., 2005; Wallentin, Østergaard, Lund, Østergaard, & Roepstorff, 2005; see Fig. 2, top right). Concrete words show a variety of behavioral processing advantages over abstract words, including faster response times in lexical and semantic decision tasks and better recall in episodic memory tasks. Paivio explained these advantages as due to the availability of visual and other sensory associations in the case of concrete concepts and not in the case of abstract concepts (Paivio, 1986). Schwanenflugel and colleagues proposed that concrete concepts have greater “context availability” (Schwanenflugel, 1991), meaning that they more readily or automatically activate a network of situational and contextual associations than abstract concepts. Thus, these theories have in common the idea that abstract concepts produce less activation of associated knowledge than concrete concepts. This claim might initially seem to contradict other proposals, mentioned above, that abstract concepts depend on complex situational knowledge to a greater degree than concrete concepts (Barsalou & Wiemer-Hastings, 2005). However, the idea that abstract concepts depend more on situational knowledge does not mean that this knowledge is more available. Recent work by Hoffman et al. using latent semantic analysis of text corpora suggests that abstract concepts actually tend to occur in a wider variety of semantic contexts than concrete words (Hoffman et al., 2011).
However, high contextual variability is also associated with reduced distinctiveness of meaning (Hoffman et al., 2013), which presumably makes retrieval of associations less automatic in the case of abstract concepts (Schwanenflugel, 1991). The greater activation of conceptual hub nodes by concrete concepts is therefore consistent with the idea that activation of these nodes reflects the overall intensity of associated concept activation rather than just their sheer number.

Word frequency is another variable correlated with number and strength of associations (Nelson & McEvoy, 2000). Frequency of use is an approximate indicator of the familiarity of a concept (Baayen, Feldman, & Schreuder, 2006; Graves, Desai, et al., 2010; Toglia & Battig, 1978) and the variety of contexts in which it is used (Adelman, Brown, & Quesada, 2006; Hoffman et al., 2011). Frequency was positively correlated with the number of semantic features subjects produced in a feature listing procedure (McRae, Cree, Seidenberg, & McNorgan, 2005). Several studies (Carreiras, Riba, Vergara, Heldmann, & Münte, 2009; Graves, Desai, et al., 2010; Prabhakaran, Blumstein, Myers, Hutchison, & Britton, 2006) have now reported activation of conceptual hub nodes (angular gyrus, posterior cingulate gyrus, and dorsomedial prefrontal cortex) as a function of increasing word frequency (see Fig. 2, lower right). Assuming that words with higher frequency of use automatically activate a larger number of associations, this result is consistent with the aforementioned word-pseudoword, familiar-unfamiliar name, and concrete-abstract effects, all of which can be accounted for by a common underlying mechanism (i.e., relative differences in the overall intensity of activation of associated concepts).

Note that these modulatory effects are, strictly speaking, "supramodal" in the sense that they are not related to any particular sensory, motor, or affective content; thus, it is unclear how they could be explained in terms of modal representations. Vigliocco and colleagues (Kousta, Vigliocco, Vinson, Andrews, & Del Campo, 2011; Vigliocco et al., 2014) have pointed out a correlation between abstractness and affective content, but this correlation would explain only activation differences favoring abstract words, not the converse. Whereas associative networks of high-level CCRs provide a unified account of all of these phenomena, it is unclear how theories that deny or minimize the role of such representations (e.g., Barsalou, 1999; Gallese & Lakoff, 2005; Martin, 2007; Prinz, 2002) can account for any of them.

A closely related modulatory effect is predicted for stimulus contrasts involving different levels of compositionality. “Compositionality” refers here to the degree to which a combination of concepts expresses a coherent meaning. The word strings below, for example, illustrate different degrees of compositionality:
  (1) the man on a vacation lost a bag and a wallet
  (2) on vacation a lost a and bag wallet a man the
  (3) the freeway on a pie watched a house and a window
  (4) a ball the a the spilled librarian in sign through fire
In (1), the constituent concepts can be combined to represent a semantically coherent, plausible situation, and the lawful syntactic structure assists the formation of this representation by indicating thematic roles. In (2), the same constituents are present but without a supporting syntactic structure. In (3), thematic roles are clear from the syntax, but the constituents have no semantic relationship to a common theme that would enable the construction of a coherent situation. In (4), there is neither a clear thematic relationship among the constituents nor any syntactic cue to indicate thematic roles.

The importance of compositionality is that it permits a wide range of additional associations to be activated. Once the situation depicted in (1) is represented in the conceptual hub network, for example, we can activate representations of how the man in the situation might feel having lost these valuable items, possible scenarios that led to the losses, what repercussions the losses might have, and what actions he might then take. Each of these associated representations can then lead to activation of other relevant associations, such as representations of possible objects that were in the lost bag and wallet, the possible locations of the missing items, and the likelihood that they will be found. Activation of such associated concepts and situations is much less likely to occur in response to (2) because of the relative difficulty in retrieving a coherent representation of the situation in the absence of syntactic cues, although a partial representation might still be possible as a result of interactions between the thematically related concepts without explicit role assignment. Activation of associated concepts and situations is also less likely in response to (3) because the situation described does not correspond to any plausible real-world event (a freeway cannot be located on a pie, and a freeway cannot watch something), although the combination of freeway, house, and window might evoke a partial representation of a house situated near a freeway. Similarly, string (4) might conceivably activate a partial representation of a fire in a library, but the absence of a clear theme linking all the constituents and the lack of thematic roles would likely result in a rather weak and noisy representation.

Compositionality-related modulation of neural activity in the conceptual hub network has been demonstrated across several levels of linguistic complexity. At the simplest level, semantically related word pairs have been shown in multiple studies to produce stronger activation of conceptual hubs than semantically unrelated pairs (Assaf et al., 2006; Graves, Binder, et al., 2010; Kotz et al., 2002; Mashal et al., 2007; Mechelli et al., 2007; Raposo et al., 2006). As pointed out by Raposo et al. (2006), this "semantic enhancement" effect is unexpected based on neural models of priming, which predict less neural activity when words are primed by feature overlap or repetition (Buckner, Koutstaal, Schacter, & Rosen, 2000; Copland et al., 2003; Masson, 1995). Unlike repetition priming, however, in which no new information is provided by the second stimulus in a pair, semantically related pairs provide an opportunity for conceptual combination, in which the pair of words now refers to a new concept or situation (Downing, 1977; Gagné & Shoben, 1997; Graves, Binder, et al., 2010; Smith & Osherson, 1984). At the sentence level, conceptual hubs respond more strongly to semantically meaningful, grammatical sentences than to semantically anomalous sentences and word strings (Humphries et al., 2001; Humphries et al., 2006; Kuperberg et al., 2000; Mashal et al., 2009; Obleser & Kotz, 2010; Obleser et al., 2007; Pallier et al., 2011; Stringaris et al., 2007). Fig. 3 illustrates a typical response pattern using sentence and word string conditions like those in examples (1–4) above. Finally, the principle of compositionality can be extended to the level of discourse and narrative text comprehension, which are characterized by representation of multiple situations in thematically related temporal sequences.
Just as sentences elicit activation of a wider range of associated concepts than isolated words, connected text can depict events in much richer detail and complexity than isolated sentences, thereby eliciting a larger and richer set of associated concepts. As expected, conceptual hubs respond more strongly to text narratives and discourse than to isolated sentences (Ferstl, Neumann, Bogler, & von Cramon, 2008; Fletcher et al., 1995; Homae, Yahata, & Sakai, 2003; Martín-Loeches, Casado, Hernández-Tamames, & Álvarez-Linera, 2008; Xu, Kemeny, Park, Frattali, & Braun, 2005; Yarkoni, Speer, & Zacks, 2008).
Fig. 3

Activation of the conceptual hub network by sentence compositionality effects. The map shows areas activated by a contrast between semantically and syntactically coherent sentences (Sem+ Syn+), exemplified by item (1) in the text, and semantically random word strings (Sem- Syn-), exemplified by item (4) in the text. Graphs show activation levels (in arbitrary units of BOLD signal change relative to the "resting" baseline) for the four conditions exemplified by items (1–4) in the text: coherent sentences (Sem+ Syn+), thematically associated word strings (Sem+ Syn-), semantically random sentences (Sem- Syn+), and semantically random word strings (Sem- Syn-). A graded response is observed reflecting varying levels of compositionality, weighted more toward semantic than syntactic structure. Note. Adapted with permission from Humphries et al. (2006).

To summarize, the hypothesis that neural activity in conceptual hub areas reflects activation of associated networks of CCRs accounts for a wide range of empirical data. At the simplest level, this hypothesis explains effects of lexicality, familiarity, concreteness, and frequency observed in single word studies. With more complex conceptual structures, the same basic mechanism accounts for successively greater activation by sentences and phrases relative to unrelated word strings, and by connected text relative to isolated sentences. Finally, the same principles can be applied to account for the "spontaneous" activity that occurs in these regions during the conscious "resting" state. This state is now generally recognized to include rich and dynamically changing conceptual content in the form of mental representations of situations pertaining to the past, present, and future (Andreasen et al., 1995; Andrews-Hanna, 2012; Antrobus, 1968; Binder et al., 1999; McKiernan, D'Angelo, Kaufman, & Binder, 2006; Pope & Singer, 1976; Smallwood & Schooler, 2006). The adaptive and other intrinsic properties of such representations have made them an independent focus of study, but even such complex mental representations must arise from simpler neurobiological processes. The proposal offered here is that the conceptual content of these representations arises through activation of associated combinations of CCRs, the conceptual building blocks for representing situations in conscious awareness.

Summary

I have argued for the importance of a type of abstract conceptual representation derived from convergences of information at crossmodal levels. High-level CCRs capture broad conjunctions of inputs and retain variable amounts of experiential information content, and thus are not equivalent to amodal symbols. Broadly conjunctive conceptual representations perform an essential "chunking" function that is useful for capturing taxonomic similarity structure, making possible thematic association, and enabling situation building in conscious awareness, three ubiquitous conceptual processes that seem difficult to explain using purely modal representations. The neurobiological importance of abstract CCRs is supported by empirical evidence for a network of high-level convergence zones (conceptual hubs) whose neural activity depends on the general associative richness (i.e., meaningfulness) of stimuli but not on the presence or absence of particular modal sensory-motor content. The need for abstract conceptual representations has been questioned by some proponents of a pure embodied knowledge view, perhaps in part as a reaction to traditional nonbiological, symbolic models of conceptual processing. Although some versions of embodiment theory explicitly recognize the need for conjunctive representations (Barsalou, 1999; Damasio, 1989; Simmons & Barsalou, 2003), the computational advantages of high-level, supramodal conjunctions and the proportion of cortex devoted to their processing are often underestimated. The theory promoted here is that neural representations at different levels of abstraction contribute to conceptual knowledge in different ways. Whereas modal sensory, motor, and affective representations serve to ground concepts by enabling reference to the external world, abstract CCRs enable associative and generative processes that support mental simulation, recall, deduction, prediction, and other phenomena dependent on the representation of situations.
These associative and generative processes represent a large component of everyday conceptual cognition and depend on a large, distributed, dedicated brain network.

Acknowledgments

My thanks to Colin Humphries for generously providing the data shown in Fig. 3. Thanks also to Lisa Conant and Rutvik Desai for providing comments on an early draft of this paper.

References

  1. Achard, S., Salvador, R., Whitcher, B., Suckling, J., & Bullmore, E. (2006). A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience, 26, 63–72.
  2. Adelman, J. S., Brown, G. D. A., & Quesada, J. F. (2006). Contextual diversity, not word frequency, determines word-naming and lexical decision times. Psychological Science, 17, 814–823.
  3. Andersen, R. A., Asanuma, C., Essick, G., & Siegel, R. M. (1990). Corticocortical connections of anatomically and physiologically defined subdivisions within the inferior parietal lobule. Journal of Comparative Neurology, 296, 65–113.
  4. Andreasen, N. C., O’Leary, D. S., Cizadlo, T., Arndt, S., Rezai, K., Watkins, G. L., . . . Hichwa, R. D. (1995). Remembering the past: Two facets of episodic memory explored with positron emission tomography. American Journal of Psychiatry, 152, 1576–1585.
  5. Andrews, M., Frank, S., & Vigliocco, G. (2014). Reconciling embodied and distributional accounts of meaning in language. Topics in Cognitive Science, 6, 359–370.
  6. Andrews-Hanna, J. R. (2012). The brain’s default network and its adaptive role in internal mentation. The Neuroscientist, 18, 251–270.
  7. Antrobus, J. S. (1968). Information theory and stimulus-independent thought. British Journal of Psychology, 59, 423–430.
  8. Assaf, M., Calhoun, V. D., Kuzu, C. H., Kraut, M. A., Rivkin, P. R., Hart, J., & Pearlson, G. D. (2006). Neural correlates of the object-recall process in semantic memory. Psychiatry Research: Neuroimaging, 147, 115–126.
  9. Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55, 290–313.
  10. Bar, M. (2007). The proactive brain: Using analogies and associations to generate predictions. Trends in Cognitive Sciences, 11, 280–289.
  11. Barlow, H. (1995). The neuron doctrine in perception. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 415–435). Cambridge: MIT Press.
  12. Barsalou, L. W. (1982). Context-independent and context-dependent information in concepts. Memory and Cognition, 10, 82–93.
  13. Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.
  14. Barsalou, L. W. (2003). Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal Society of London B, 358, 1177–1187.
  15. Barsalou, L. W., & Wiemer-Hastings, K. (2005). Situating abstract concepts. In D. Pecher & R. Zwaan (Eds.), Grounding cognition: The role of perception and action in memory, language, and thought (pp. 129–163). New York: Cambridge University Press.
  16. Bedny, M., & Thompson-Schill, S. L. (2006). Neuroanatomically separable effects of imageability and grammatical class during single-word comprehension. Brain and Language, 98, 127–139.
  17. Binder, J. R. (2007). Effects of word imageability on semantic access: Neuroimaging studies. In J. Hart & M. A. Kraut (Eds.), Neural basis of semantic memory (pp. 149–181). Cambridge: Cambridge University Press.
  18. Binder, J. R., Frost, J. A., Hammeke, T. A., Bellgowan, P. S. F., Rao, S. M., & Cox, R. W. (1999). Conceptual processing during the conscious resting state: A functional MRI study. Journal of Cognitive Neuroscience, 11, 80–93.
  19. Binder, J. R., McKiernan, K. A., Parsons, M., Westbury, C. F., Possing, E. T., Kaufman, J. N., & Buchanan, L. (2003). Neural correlates of lexical access during visual word recognition. Journal of Cognitive Neuroscience, 15, 372–393.
  20. Binder, J. R., Medler, D. A., Desai, R., Conant, L. L., & Liebenthal, E. (2005a). Some neurophysiological constraints on models of word naming. NeuroImage, 27, 677–693.
  21. Binder, J. R., Westbury, C. F., Possing, E. T., McKiernan, K. A., & Medler, D. A. (2005b). Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17, 905–917.
  22. Binder, J. R., Desai, R., Conant, L. L., & Graves, W. W. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.
  23. Borghi, A. M., Flumini, A., Cimatti, F., Marocco, D., & Scorolli, C. (2011). Manipulating objects and telling words: A study on concrete and abstract words acquisition. Frontiers in Psychology, 2, 15.
  24. Bransford, J. D., & Johnson, M. K. (1973). Considerations of some problems of comprehension. In W. G. Chase (Ed.), Visual information processing (pp. 383–438). New York: Academic Press.
  25. Buckner, R. L., Koutstaal, W., Schacter, D. L., & Rosen, B. R. (2000). Functional MRI evidence for a role of frontal and inferior temporal cortex in amodal components of priming. Brain, 123, 620–640.
  26. Buckner, R. L., Sepulcre, J., Talukdar, T., Krienen, F. M., Liu, H., Hedden, T., . . . Johnson, K. A. (2009). Cortical hubs revealed by intrinsic functional connectivity: Mapping, assessment of stability, and relation to Alzheimer’s disease. Journal of Neuroscience, 29, 1860–1873.
  27. Cappa, S. F., Perani, D., Schnur, T., Tettamanti, M., & Fazio, F. (1998). The effects of semantic category and knowledge type on lexical-semantic access: A PET study. NeuroImage, 8, 350–359.
  28. Carreiras, M., Riba, J., Vergara, M., Heldmann, M., & Münte, T. F. (2009). Syllable congruency and word frequency effects on brain activation. Human Brain Mapping, 30, 3079–3088.
  29. Cavada, C., & Goldman-Rakic, P. S. (1989). Posterior parietal cortex in the rhesus monkey: II. Evidence for segregated corticocortical networks linking sensory and limbic areas with the frontal lobe. Journal of Comparative Neurology, 287, 422–445.
  30. Copland, D. A., de Zubicaray, G. I., McMahon, K., Wilson, S. J., Eastburn, M., & Chenery, H. J. (2003). Brain activity during automatic semantic priming revealed by event-related functional magnetic resonance imaging. NeuroImage, 20, 302–310.
  31. Craik, F. I. M., Moroz, T. M., Moscovitch, M., Stuss, D. T., Winocur, G., Tulving, E., & Kapur, S. (1999). In search of the self: A positron emission tomography study. Psychological Science, 10, 26–34.
  32. Damasio, A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 25–62.
  33. Démonet, J.-F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous, J.-L., Wise, R., . . . Frackowiak, R. (1992). The anatomy of phonological and semantic processing in normal subjects. Brain, 115, 1753–1768.
  34. Devlin, J. T., Russell, R. P., Davis, M. H., Price, C. J., Wilson, J., Moss, H. E., . . . Tyler, L. K. (2000). Susceptibility-induced loss of signal: Comparing PET and fMRI on a semantic task. NeuroImage, 11, 589–600.
  35. Devlin, J. T., Matthews, P. M., & Rushworth, M. F. S. (2003). Semantic processing in the left inferior prefrontal cortex: A combined functional magnetic resonance imaging and transcranial magnetic stimulation study. Journal of Cognitive Neuroscience, 15, 71–84.
  36. Dove, G. O. (2009). Beyond perceptual symbols: A call for representational pluralism. Cognition, 110, 412–431.
  37. Downing, P. (1977). On the creation and use of English compound nouns. Language, 53, 810–842.
  38. Estes, Z., Golonka, S., & Jones, L. L. (2011). Thematic thinking: The apprehension and consequences of thematic relations. Psychology of Learning and Motivation, 54, 249–294.
  39. Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
  40. Ferstl, E. C., Neumann, J., Bogler, C., & von Cramon, D. Y. (2008). The extended language network: A meta-analysis of neuroimaging studies on text comprehension. Human Brain Mapping, 29, 581–593.
  41. Fischer, M. H., & Zwaan, R. A. (2008). Embodied language: A review of the role of the motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825–850.
  42. Fitzgerald, P. J., Lane, J. W., Thakur, P. H., & Hsiao, S. S. (2006). Receptive field (RF) properties of the macaque second somatosensory cortex: RF size, shape, and somatotopic organization. Journal of Neuroscience, 26, 6485–6495.
  43. Fletcher, P. C., Happé, F., Frith, U., Baker, S. C., Dolan, R. J., Frackowiak, R. S. J., & Frith, C. D. (1995). Other minds in the brain: A functional imaging study of “theory of mind” in story comprehension. Cognition, 57, 109–128.
  44. Fliessbach, K., Weis, S., Klaver, P., Elger, C. E., & Weber, B. (2006). The effect of word concreteness on recognition memory. NeuroImage, 32, 1413–1421.
  45. Gagné, C., & Shoben, E. J. (1997). Influence of thematic relations on the comprehension of modifier–noun combinations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 71–87.
  46. Gallese, V., & Lakoff, G. (2005). The brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22, 455–479.
  47. Gitelman, D. R., Nobre, A. C., Sonty, S., Parrish, T. B., & Mesulam, M. M. (2005). Language network specializations: An analysis with parallel task designs and functional magnetic resonance imaging. NeuroImage, 26, 975–985.
  48. Graves, W. W., Binder, J. R., Desai, R. H., Conant, L. L., & Seidenberg, M. S. (2010a). Neural correlates of implicit and explicit combinatorial semantic processing. NeuroImage, 53, 638–646.
  49. Graves, W. W., Desai, R., Humphries, C., Seidenberg, M. S., & Binder, J. R. (2010b). Neural systems for reading aloud: A multiparametric approach. Cerebral Cortex, 20, 1799–1815.
  50. Graziano, M., & Aflalo, T. (2007). Mapping behavioral repertoire onto the cortex. Neuron, 56, 239–251.
  51. Hare, M., Jones, M. N., Thomson, C., Kelly, S., & McRae, K. (2009). Activating event knowledge. Cognition, 111, 151–167.
  52. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.
  53. Henson, R. N. A., Price, C. J., Rugg, M. D., Turner, R., & Friston, K. J. (2002). Detecting latency differences in event-related BOLD responses: Application to words versus nonwords and initial versus repeated face presentations. NeuroImage, 15, 83–97.
  54. Hoffman, P., Rogers, T. T., & Lambon Ralph, M. A. (2011). Semantic diversity accounts for the “missing” word frequency effect in stroke aphasia: Insights using a novel method to quantify contextual variability in meaning. Journal of Cognitive Neuroscience, 23, 2432–2446.
  55. Hoffman, P., Lambon Ralph, M. A., & Rogers, T. T. (2013). Semantic diversity: A measure of semantic ambiguity based on variability in the contextual usage of words. Behavior Research Methods, 45, 718–730.
  56. Homae, F., Yahata, N., & Sakai, K. L. (2003). Selective enhancement of functional connectivity in the left prefrontal cortex during sentence processing. NeuroImage, 20, 578–586.
  57. Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology (London), 195, 215–243.
  58. Humphries, C., Willard, K., Buchsbaum, B., & Hickok, G. (2001). Role of anterior temporal cortex in auditory sentence comprehension: An fMRI study. Neuroreport, 12, 1749–1752.
  59. Humphries, C., Binder, J. R., Medler, D. A., & Liebenthal, E. (2006). Syntactic and semantic modulation of neural activity during auditory sentence comprehension. Journal of Cognitive Neuroscience, 18, 665–679.
  60. Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16, 727–741.
  61. Iwamura, Y. (1998). Hierarchical somatosensory processing. Current Opinion in Neurobiology, 8, 522–528.
  62. Jackendoff, R. (1990). Semantic structures. Cambridge: MIT Press.
  63. Jessen, F., Heun, R., Erb, M., Granath, D. O., Klose, U., Papassotiropoulos, A., & Grodd, W. (2000). The concreteness effect: Evidence for dual-coding and context availability. Brain and Language, 74, 103–112.
  64. Jones, E. G., & Powell, T. S. P. (1970). An anatomical study of converging sensory pathways within the cerebral cortex of the monkey. Brain, 93, 793–820.
  65. Kiefer, M., & Pulvermüller, F. (2012). Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex, 48, 805–825.
  66. Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394.
  67. Kobatake, E., & Tanaka, K. (1994). Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. Journal of Neurophysiology, 71, 856–867.
  68. Kotz, S. A., Cappa, S. F., von Cramon, D. Y., & Friederici, A. D. (2002). Modulation of the lexical-semantic network by auditory semantic priming: An event-related functional MRI study. NeuroImage, 17, 1761–1772.
  69. Kousta, S.-T., Vigliocco, G., Vinson, D. P., Andrews, M., & Del Campo, E. (2011). The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General, 140, 14–34.
  70. Kuchinke, L., Jacobs, A. M., Grubich, C., Vo, M. L. H., Conrad, M., & Herrmann, M. (2005). Incidental effects of emotional valence in single word processing: An fMRI study. NeuroImage, 28, 1022–1032.
  71. Kuperberg, G. R., McGuire, P. K., Bullmore, E. T., Brammer, M. J., Rabe-Hesketh, S., Wright, I., . . . David, A. S. (2000). Common and distinct neural substrates for pragmatic, semantic, and syntactic processing of spoken sentences: An fMRI study. Journal of Cognitive Neuroscience, 12, 321–341.
  72. Levin, B. (1993). English verb classes and alternations: A preliminary investigation. Chicago: University of Chicago Press.
  73. Lin, L., Osan, R., & Tsien, J. Z. (2006). Organizing principles of real-time memory encoding: Neural clique assemblies and universal neural codes. Trends in Neurosciences, 29, 48–57.
  74. Louwerse, M. M., & Jeuniaux, P. (2010). The linguistic and embodied nature of conceptual processing. Cognition, 114, 96–104.
  75. Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45.
  76. Martín-Loeches, M., Casado, P., Hernández-Tamames, J. A., & Álvarez-Linera, J. (2008). Brain activation in discourse comprehension: A 3T fMRI study. NeuroImage, 41, 614–622.
  77. Mashal, N., Faust, M., Hendler, T., & Jung-Beeman, M. (2007). An fMRI investigation of the neural correlates underlying the processing of novel metaphoric expressions. Brain and Language, 100, 115–126.
  78. Mashal, N., Faust, M., Hendler, T., & Jung-Beeman, M. (2009). An fMRI study of processing novel metaphoric sentences. Laterality, 14, 30–54.
  79. Masson, M. E. J. (1995). A distributed memory model of semantic priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 3–23.
  80. McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 4, 463–502.
  81. McKiernan, K. A., D’Angelo, B. R., Kaufman, J. N., & Binder, J. R. (2006). Interrupting the “stream of consciousness”: An fMRI investigation. NeuroImage, 29, 1185–1191.
  82. McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature norms for a large set of living and nonliving things. Behavior Research Methods, Instruments, & Computers, 37, 547–559.
  83. Mechelli, A., Gorno-Tempini, M. L., & Price, C. J. (2003). Neuroimaging studies of word and pseudoword reading: Consistencies, inconsistencies, and limitations. Journal of Cognitive Neuroscience, 15, 260–271.
  84. Mechelli, A., Josephs, O., Lambon Ralph, M. A., McClelland, J. L., & Price, C. J. (2007). Dissociating stimulus-driven semantic and phonological effect during reading and naming. Human Brain Mapping, 28, 205–217.
  85. Mesulam, M. (1985). Patterns in behavioral neuroanatomy: Association areas, the limbic system, and hemispheric specialization. In M. Mesulam (Ed.), Principles of behavioral neurology (pp. 1–70). Philadelphia: F. A. Davis.
  86. Meteyard, L., Rodriguez Cuadrado, S., Bahrami, B., & Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48, 788–804.
  87. Metusalem, R., Kutas, M., Urbach, T. P., Hare, M., McRae, K., & Elman, J. L. (2012). Generalized event knowledge activation during online sentence comprehension. Journal of Memory and Language, 66, 545–567.
  88. Miceli, G., Turriziani, P., Caltagirone, C., Capasso, R., Tomaiuolo, F., & Caramazza, A. (2002). The neural correlates of grammatical gender: An fMRI investigation. Journal of Cognitive Neuroscience, 14, 618–628.
  89. Mummery, C. J., Patterson, K., Hodges, J. R., & Price, C. J. (1998). Functional neuroanatomy of the semantic system: Divisible by what? Journal of Cognitive Neuroscience, 10, 766–777.
  90. Murphy, G., & Medin, D. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289–316.
  91. Nelson, D. L., & McEvoy, C. L. (2000). What is this thing called frequency? Memory and Cognition, 28, 509–522.
  92. Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19, 113–126.
  93. O’Reilly, R. C., & Busby, R. S. (2001). Generalizable relational binding from coarse-coded distributed representations. Advances in Neural Information Processing Systems, 14, 75–82.
  94. O’Reilly, R. C., & Rudy, J. W. (2001). Conjunctive representations in learning and memory: Principles of cortical and hippocampal function. Psychological Review, 108, 311–345.
  95. Obleser, J., & Kotz, S. A. (2010). Expectancy constraints in degraded speech modulate the language comprehension network. Cerebral Cortex, 20, 633–640.
  96. Obleser, J., Wise, R. J. S., Dresner, M. A., & Scott, S. K. (2007). Functional integration across brain regions improves speech perception under adverse listening conditions. Journal of Neuroscience, 27, 2283–2289.
  97. Orfanidou, E., Marslen-Wilson, W. D., & Davis, M. H. (2006). Neural response suppression predicts repetition priming of spoken words and pseudowords. Journal of Cognitive Neuroscience, 18, 1237–1252.
  98. Otten, L. J., & Rugg, M. D. (2001). Task-dependency of the neural correlates of episodic encoding as measured by fMRI. Cerebral Cortex, 11, 1150–1160.
  99. Paivio, A. (1986). Mental representations: A dual-coding approach. New York: Oxford University Press.
  100. Pallier, C., Devauchelle, A.-D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences of the United States of America, 108, 2522–2527.
  101. Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987.
  102. Pope, K. S., & Singer, J. L. (1976). Regulation of the stream of consciousness: Toward a theory of ongoing thought. In G. E. Schwartz & D. Shapiro (Eds.), Consciousness and self-regulation (pp. 101–135). New York: Plenum Press.Google Scholar
  103. Prabhakaran, R., Blumstein, S. E., Myers, E. B., Hutchison, E., & Britton, B. (2006). An event-related fMRI investigation of phonological-lexical competition. Neuropsychologia, 44, 2209–2221.PubMedCrossRefGoogle Scholar
  104. Price, C. J., Moore, C. J., Humphreys, G. W., & Wise, R. J. S. (1997). Segregating semantic from phonological processes during reading. Journal of Cognitive Neuroscience, 9, 727–733.PubMedCrossRefGoogle Scholar
  105. Prinz, J. J. (2002). Furnishing the mind: Concepts and their perceptual basis. Cambridge: MIT Press.Google Scholar
  106. Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge: MIT Press.Google Scholar
  107. Quiroga, R., Reddy, L., Kreiman, G., Koch, C., & Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435, 1102–1107.PubMedCrossRefGoogle Scholar
  108. Raposo, A., Moss, H. E., Stamatakis, E. A., & Tyler, L. K. (2006). Repetition suppression and semantic enhancement: An investigation of the neural correlates of priming. Neuropsychologia, 44, 2284–2295.PubMedCrossRefGoogle Scholar
  109. Rissman, J., Eliassen, J. C., & Blumstein, S. E. (2003). An event-related fMRI investigation of implicit semantic priming. Journal of Cognitive Neuroscience, 15, 1160–1175.PubMedCrossRefGoogle Scholar
  110. Rogers, T. T., & McClelland, J. L. (2004). Semantic cognition: A parallel distributed processing approach. Cambridge: MIT Press.Google Scholar
  111. Rogers, T. T., Garrard, P., McClelland, J. L., Lambon Ralph, M. A., Bozeat, S., Hodges, J. R., & Patterson, K. (2004). Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review, 111, 205–235.PubMedCrossRefGoogle Scholar
  112. Roskies, A. L., Fiez, J. A., Balota, D. A., Raichle, M. E., & Petersen, S. E. (2001). Task-dependent modulation of regions in the left inferior frontal cortex during semantic processing. Journal of Cognitive Neuroscience, 13, 829–843.PubMedCrossRefGoogle Scholar
  113. Rudy, J. W., & Sutherland, R. J. (1995). Configural association theory and the hippocampal formation: An appraisal and reconfiguration. Hippocampus, 5, 375–389.PubMedCrossRefGoogle Scholar
  114. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing, Volume 1: Foundations (pp. 318–362). Cambridge: MIT Press.Google Scholar
  115. Sabsevitz, D. S., Medler, D. A., Seidenberg, M., & Binder, J. R. (2005). Modulation of the semantic system by word imageability. NeuroImage, 27, 188–200.PubMedCrossRefGoogle Scholar
  116. Schreiner, C. E., Read, H. L., & Sutter, M. L. (2000). Modular organization of frequency integration in primary auditory cortex. Annual Review of Neuroscience, 23, 501–529.PubMedCrossRefGoogle Scholar
  117. Schwanenflugel, P. (1991). Why are abstract concepts hard to understand? In P. Schwanenflugel (Ed.), The psychology of word meanings (pp. 223–250). Hillsdale: Erlbaum.Google Scholar
  118. Scott, S. K., Leff, A. P., & Wise, R. J. S. (2003). Going beyond the information given: A neural system supporting semantic interpretation. NeuroImage, 19, 870–876.PubMedCrossRefGoogle Scholar
  119. Sepulcre, J., Sabuncu, M. R., Yeo, T. B., Liu, H., & Johnson, K. A. (2012). Stepwise connectivity of the modal cortex reveals the multimodal organization of the human brain. Journal of Neuroscience, 32, 10649–10661.PubMedPubMedCentralCrossRefGoogle Scholar
  120. Simmons, W. K., & Barsalou, L. W. (2003). The similarity-in-topography principle: Reconciling theories of conceptual deficits. Cognitive Neuropsychology, 20, 451–486.PubMedCrossRefGoogle Scholar
  121. Smallwood, J., & Schooler, J. W. (2006). The restless mind. Psychological Bulletin, 132, 946–958.PubMedCrossRefGoogle Scholar
  122. Smith, E. E., & Osherson, D. N. (1984). Conceptual combination with prototype concepts. Cognitive Science, 8, 337–361.CrossRefGoogle Scholar
  123. Stringaris, A. K., Medford, N. C., Giampietro, V., Brammer, M. J., & David, A. S. (2007). Deriving meaning: Distinct neural mechanisms for metaphoric, literal, and non-meaningful sentences. Brain and Language, 100, 150–162.PubMedCrossRefGoogle Scholar
  124. Suga, N. (1988). Auditory neuroethology and speech processing: Complex sound processing by combination-sensitive neurons. In G. M. Edelman, W. E. Gall, & W. M. Cowan (Eds.), Functions of the auditory system (pp. 679–720). New York: Wiley.Google Scholar
  125. Sugiura, M., Sassa, Y., Watanabe, J., Akitsuki, Y., Maeda, Y., Matsue, Y., . . . Kawashima, R. (2006). Cortical mechanisms of person representation: Recognition of famous and personally familiar names. NeuroImage, 31, 853–860.Google Scholar
  126. Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 632–634.CrossRefGoogle Scholar
  127. Toglia, M. P., & Battig, W. F. (1978). Handbook of semantic word norms. Hillsdale: Erlbaum.Google Scholar
  128. Van Hoesen, G. W. (1982). The parahippocampal gyrus: New observations regarding its cortical connections in the monkey. Trends in Neuroscience, 5, 345–350.CrossRefGoogle Scholar
  129. Vigliocco, G., Kousta, S.-T., Della Rosa, P. A., Vinson, D. P., Tettamanti, M., Devlin, J. T., & Cappa, S. F. (2014). The neural representation of abstract words: The role of emotion. Cerebral Cortex, 24, 1767–1777.PubMedCrossRefGoogle Scholar
  130. Wallentin, M., Østergaarda, S., Lund, T. E., Østergaard, L., & Roepstorff, A. (2005). Concrete spatial language: See what I mean? Brain and Language, 92, 221–233.PubMedCrossRefGoogle Scholar
  131. Wiemer-Hastings, K., & Xu, X. (2005). Content differences for abstract and concrete concepts. Cognitive Science, 29, 719–736.CrossRefGoogle Scholar
  132. Wittgenstein, L. (1958). Philosophical investigations (3rd ed.). New York: Macmillan.Google Scholar
  133. Woodard, J. L., Seidenberg, M., Nielson, K. A., Miller, S. K., Franczak, M., Antuono. P., . . . Rao, S. M. (2007). Temporally graded activation of neocortical regions in response to memories of different ages. Journal of Cognitive Neuroscience, 19, 1–12.Google Scholar
  134. Xiao, Z., Zhang, J. X., Wang, X., Wu, R., Hu, X., Weng, X., & Tan, L. H. (2005). Differential activity in left inferior frontal gyrus for pseudowords and real words: An event-related fMRI study on auditory lexical decision. Human Brain Mapping, 25, 212–221.PubMedCrossRefGoogle Scholar
  135. Xu, J., Kemeny, S., Park, G., Frattali, C., & Braun, A. (2005). Language in context: Emergent features of word, sentence, and narrative comprehension. NeuroImage, 25, 1002–1015.PubMedCrossRefGoogle Scholar
  136. Yarkoni, T., Speer, N. K., & Zacks, J. M. (2008). Neural substrates of narrative comprehension and memory. NeuroImage, 41, 1408–1425.PubMedPubMedCentralCrossRefGoogle Scholar

Copyright information

© Psychonomic Society, Inc. 2015

Authors and Affiliations

  1. Department of Neurology, Medical College of Wisconsin, Milwaukee, USA