In defense of abstract conceptual representations
An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level “convergence zones” that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient “chunking” of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.
Keywords: Semantic memory · Concept representation · Neuroimaging · Association cortex
There is now strong empirical evidence that modal perception, action, and emotion systems play a large role in concept retrieval (Fischer & Zwaan, 2008; Kiefer & Pulvermüller, 2012; Meteyard, Rodriguez Cuadrado, Bahrami, & Vigliocco, 2012). Concepts are generalizations derived from sensory, motor, and affective experiences, and the principle that modal brain systems responsible for these experiences are also involved in knowledge retrieval provides a parsimonious account of concept acquisition and storage (Barsalou, 1999; Damasio, 1989). Embodiment of conceptual knowledge provides a natural mechanism for grounding concepts in perception and action and thus the critical means by which concepts can refer to the external world (Harnad, 1990).
Much more research is needed to clarify the extent to which different levels of sensory, motor, affective and other hierarchical processing systems are involved in concept representation, the role of bimodal and multimodal areas, the involvement of these systems in representing temporal and spatial event concepts, their role in abstract concepts, and so on. As this research unfolds, it is also useful to keep in mind that not all brain areas that process concepts are content-specific. Before the “embodiment revolution,” it was not uncommon to study conceptual processing in the brain without reference to any specific type of semantic content. Many functional imaging studies, for example, compared neural responses evoked by unselected words with responses evoked by pseudowords (Binder et al., 2003; Binder, Medler, Desai, Conant, & Liebenthal, 2005; Cappa, Perani, Schnur, Tettamanti, & Fazio, 1998; Démonet et al., 1992; Henson, Price, Rugg, Turner, & Friston, 2002; Ischebeck et al., 2004; Kotz, Cappa, von Cramon, & Friederici, 2002; Kuchinke et al., 2005; Mechelli, Gorno-Tempini, & Price, 2003; Orfanidou, Marslen-Wilson, & Davis, 2006; Rissman, Eliassen, & Blumstein, 2003; Xiao et al., 2005). The assumption was that meaningful words would engage concept retrieval to a greater degree than meaningless pseudowords, regardless of the specific content of the word meanings. 
A similar logic applied to studies contrasting related word pairs with unrelated word pairs (Assaf et al., 2006; Graves, Binder, Desai, Conant, & Seidenberg, 2010; Mashal, Faust, Hendler, & Jung-Beeman, 2007; Mechelli, Josephs, Lambon Ralph, McClelland, & Price, 2007; Raposo, Moss, Stamatakis, & Tyler, 2006) and studies contrasting sentences with random word strings (Humphries, Binder, Medler, & Liebenthal, 2006; Humphries, Willard, Buchsbaum, & Hickok, 2001; Kuperberg et al., 2000; Mashal, Faust, Hendler, & Jung-Beeman, 2009; Obleser & Kotz, 2010; Obleser, Wise, Dresner, & Scott, 2007; Pallier, Devauchelle, & Dehaene, 2011; Stringaris, Medford, Giampietro, Brammer, & David, 2007). In each case, a “semantic system” was expected to respond more strongly to the more meaningful stimulus than to the less meaningful stimulus, regardless of the specific type of content that was represented.
Some anatomical characteristics of this network are noteworthy. Without exception, all nodes in the network are high-level multimodal/supramodal areas distant from primary sensory and motor cortices (Mesulam, 1985; Sepulcre, Sabuncu, Yeo, Liu, & Johnson, 2012). Each has been identified as a “hub” with a dense and widely distributed pattern of connectivity (Achard, Salvador, Whitcher, Suckling, & Bullmore, 2006; Buckner et al., 2009). A conspicuous feature of the parietal and temporal regions is that they are sandwiched between multiple modal association cortices. For example, the angular gyrus lies at a confluence of visual, somatosensory, and auditory processing streams. Macaque area PG/7a, the closest monkey homologue of the angular gyrus, receives inputs exclusively from secondary visual, auditory, and multimodal regions (Andersen, Asanuma, Essick, & Siegel, 1990; Cavada & Goldman-Rakic, 1989; Jones & Powell, 1970). The ventral anterior temporal lobe, which has probably been under-represented in fMRI studies of semantic processing due to difficulty obtaining MRI signals from this region (Devlin et al., 2000), is another case in point. This region receives inputs from a broad range of modal association cortices (Jones & Powell, 1970; Van Hoesen, 1982), and patients with damage to this general region show multimodal (visual, auditory, motor) knowledge deficits (Patterson et al., 2007). Such facts suggest that these temporal and parietal nodes occupy positions at the top of a multimodal, convergent sensory-motor-affective hierarchy (Damasio, 1989). Their activation across a wide range of meaningful stimuli regardless of sensory-motor-affective content suggests that the information processed in these regions is not strongly tied to a particular perceptual or motor modality.
But what is the precise nature of the information represented in these high-level convergence zones, and what role might these representations play in semantic cognition? Standard models of cognitive processing certainly depend on amodal symbolic representations (Newell & Simon, 1976; Pylyshyn, 1984), but are these abstract representations necessary for actual conceptual processing in the brain or merely a convenience for creating computational models? Evidence that sensory, motor, and affective systems play a role in conceptual processing is increasingly difficult to deny, and the principle of modality-specific knowledge representation provides an elegant account of concept acquisition and grounding. If the conceptual content of actual human consciousness can be fully specified by activation of sensory-motor-affective information, what need is there for highly abstract representations (Barsalou, 1999; Gallese & Lakoff, 2005; Martin, 2007; Prinz, 2002)? In addition to their possible redundancy, abstract representations are usually conceived as having fixed content, such that models composed entirely of abstract symbols are often criticized as inflexible and unable to account for context effects (Barsalou, 1982; McCarthy & Hayes, 1969; Murphy & Medin, 1985). In contrast, distributed modal representations of conceptual knowledge are capable of context-sensitive variation in the pattern and relative strength of activation of component modal features, enabling dynamically flexible conceptual representation (Barsalou, 2003).
In the following brief discussion, I propose a way of thinking about abstract conceptual representations as high-level conjunctions rather than amodal symbols, and discuss some specific functions these representations might have. A variety of empirical neuroimaging findings are then explained in terms of the predicted responses of such representations to particular stimulus and task manipulations. The formulation owes much to previous convergence zone theories (Damasio, 1989; Simmons & Barsalou, 2003) and pluralistic representational accounts (Andrews, Frank, & Vigliocco, 2014; Dove, 2009; Louwerse & Jeuniaux, 2010; Meteyard et al., 2012; Patterson et al., 2007). The principal aims here are to expand the list of potential computational advantages conferred by high-level conjunctive representations and to review in some detail the neuroimaging evidence specifically relevant to these proposed processes.
The utility of broadly conjunctive conceptual representations
Some clarification of terminology is first necessary. Symbolic representations in traditional computational theories of cognition are “abstract” by definition: They refer to concepts via an arbitrary relationship and have no intrinsic content aside from links to other symbols (Harnad, 1990). The theory presented here is rather different. Abstract representations in the brain arise from a process of hierarchical conjunctive coding, and it is their combinatorial nature that is important rather than their abstractness per se. Conjunctive representation occurs when a neuron or neural ensemble responds preferentially to a particular combination of inputs. The essential function of neurons is to collect and combine information, and conjunctive representation seems to be a ubiquitous feature of perceptual systems in the brain (Barlow, 1995). Abstraction occurs at the level of a conjunctive representation because the representation codes the simultaneous occurrence of two or more inputs, say A and B, and not, in general, all of the particulars of A or B. These particulars are retrieved as needed by top-down activation of A and B by the conjunctive representation (Damasio, 1989).
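As a purely illustrative caricature (not a model proposed in the text), conjunctive coding can be pictured as a threshold unit that fires for a specific combination of inputs while discarding their particulars; the function name and numbers below are hypothetical:

```python
# Toy sketch of conjunctive coding (illustrative only): a unit that fires
# when its inputs co-occur, representing "A AND B" without the details of A or B.

def conjunctive_unit(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# A and B stand in for activation arriving from two lower-level (modal) ensembles.
print(conjunctive_unit([1, 1], weights=[1, 1], threshold=2))  # both present -> 1
print(conjunctive_unit([1, 0], weights=[1, 1], threshold=2))  # one absent  -> 0
```

In cortex the "unit" would of course be a distributed ensemble with graded activation; the sketch only makes concrete the claim that the higher level encodes the conjunction itself, not the particulars of its constituents.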
Rather than “abstract representation,” a term closely tied to nonbiological models of cognition, I will use the term “crossmodal conjunctive representation” (CCR) to emphasize the essential combinatorial function of these representations and their origin in neurobiological systems. Another advantage of this term is that it offers flexibility regarding how “abstract” a particular representation is relative to low-level sensory-motor representations. All indications are that conjunctive representations are arranged hierarchically in perception and action systems (Felleman & Van Essen, 1991; Graziano & Aflalo, 2007; Hubel & Wiesel, 1968; Iwamura, 1998; Kobatake & Tanaka, 1994), with multiple levels of representational complexity, where “complexity” refers to the number or range of low-level inputs contributing to activation of the conjunctive representation. The degree to which lower-level information (e.g., information coding a particular shape, color, or body action) is retained at higher levels of representation (e.g., banana) presumably varies depending on the salience of the information and level of representation. At very high levels of this convergent hierarchy, CCRs might retain so little representation of actual experiential information that they functionally resemble arbitrary symbols. The key point, however, is that CCRs are not theoretical constructs; they arise through neurobiological convergences of information. They are as abstract as they “need to be” to represent a combination of inputs. On this view, there is no absolute demarcation between embodied/perceptual and abstract/conceptual representation in the brain.
It is important to stress here that the CCR terminology is adopted purely as a convenient, descriptive label intended to bring to mind the basic neural computational process of conjunctive coding, and should not be taken as a novel proposal. A number of previous authors have proposed models of knowledge representation based on hierarchical conjunctive coding in convergence zones at different levels of complexity (Damasio, 1989; Simmons & Barsalou, 2003). A CCR is equivalent to the content represented in a crossmodal convergence zone (Simmons & Barsalou, 2003).
Another important clarification is that CCRs are not necessarily highly localist in their neural realization. The critical aspect of CCRs is that they represent broad combinations of inputs. In theory such representations could be instantiated in single, dedicated cells, and such sparse, highly localized representations have been observed in the medial temporal lobe (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005). Given the almost infinite number of concepts and concept variations that are possible, however, it is more likely that CCRs are instantiated as distributed neural ensembles or networks, and that a given neural ensemble represents a range of related concepts through variation in a distributed pattern of activation (O’Reilly & Busby, 2001).
The role of conjunctive coding has been explored, under various guises, in multiple sensory and motor domains (Fitzgerald, Lane, Thakur, & Hsiao, 2006; Graziano & Aflalo, 2007; Hubel & Wiesel, 1968; Kobatake & Tanaka, 1994; Schreiner, Read, & Sutter, 2000; Suga, 1988) and in episodic memory encoding (O’Reilly & Rudy, 2001; Lin, Osan, & Tsien, 2006; Rudy & Sutherland, 1995). In the domain of semantic cognition, Rogers, Patterson, and colleagues argued that broadly convergent, supramodal conceptual representations allow the brain to recognize underlying object similarity structure in the face of variably overlapping and conflicting features (Rogers et al., 2004; Rogers & McClelland, 2004; Patterson et al., 2007). For example, people know that apples, oranges, bananas, grapes, and lemons are all fruit despite salient differences in their appearance, taste, associated actions, and names. In computer simulations in which neural networks were trained to map between sensory, motor, and verbal features of objects, only networks containing highly convergent representations were able to capture semantic similarity relationships between the objects (Rogers & McClelland, 2004). Thus, CCRs that capture multimodal convergences appear to be necessary for learning taxonomic category relationships.
A related and equally ubiquitous phenomenon for which CCRs provide a much-needed explanation is thematic association. Consider the statement “The boy walked his dog in the park.” The inference that the dog is likely wearing a leash cannot be made purely on the basis of the sensory-motor features of dog, walk, park, or leash. Rather, the leash is a thematic or situation-specific association based on co-occurrence experiences. Thematic associations of this kind (dog-bone, coffee-cup, paper-pencil, shoe-lace, etc.) are pervasive in everyday experience and provide much of the foundation for our pragmatic knowledge (Estes et al., 2011). What kind of neural mechanism would support such associations? A mechanism that is sensitive only to sensory-motor feature similarity would find this a hard problem. Any association between coffee and cup based on feature content would be unlikely to generalize to other associations of coffee (e.g., cream, sugar, café, barista). The problem is that thematic associations primarily reflect situational co-occurrence rather than the structure of feature content, and the enormous number and variety of such associations would seem to make links based solely on a linear function of overlapping features impossible.
CCRs solve this problem by providing highly abstract conceptual representations activated by conjunctions of features, which can then “wire together” with other highly abstract conceptual representations with which they co-occur. That is, activation of the concept leash in the context of walk, dog, and park results from direct activation of the CCR for leash by the CCRs for the other concepts, independent of the sensory-motor feature overlap between these concepts. Mapping between concepts that have little or no systematic feature overlap, like dog and leash, is conceptually similar to other arbitrary mapping problems, such as mapping between orthographic or phonological word forms and meaning. In such cases, the output is not a simple linear combination of features of the input, and intermediate representations that combine information across multiple features are necessary to enable nonlinear transformations (Rumelhart, Hinton, & Williams, 1986). Thus, another principal function of high-level CCRs is to provide a neural mechanism for activating a field of thematically associated concepts independent of any shared sensory-motor feature structure.
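The need for intermediate representations in arbitrary mappings can be seen in the classic minimal case of an XOR-style mapping, in the spirit of Rumelhart et al. (1986). This sketch is illustrative only; the weights are hand-set rather than learned:

```python
import numpy as np

# Toy sketch: a dog->leash style association has no linear solution over
# shared surface features (analogous to XOR). A hand-wired hidden
# ("conjunctive") layer makes the mapping possible.

def step(x):
    return (x >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input feature patterns
y = np.array([0, 1, 1, 0])                       # XOR-like target association

# Hidden units code the conjunctions (x1 AND NOT x2) and (x2 AND NOT x1).
W1 = np.array([[1, -1], [-1, 1]])
b1 = np.array([-0.5, -0.5])
W2 = np.array([1, 1])
b2 = -0.5

h = step(X @ W1.T + b1)    # conjunctive recoding of the input
out = step(h @ W2 + b2)    # linear readout now succeeds
print(out)                  # matches y: [0 1 1 0]
```

No single-layer (purely feature-overlap-based) readout can produce this input-output relation; the hidden conjunctive layer is what licenses the nonlinear transformation.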
Learning and retrieving taxonomic and thematic associations, however, is not an end in itself. The ability to learn and retrieve associations between concepts makes possible a range of other abilities. Prominent among these is the ability to mentally retrieve a typical situation or context in which a concept occurs. Thematic association underlies, for example, our ability to retrieve the context kitchen when presented with the concept oven, and to retrieve a set of other concepts thematically related to ovens and kitchens. This rich associative retrieval in turn enables more efficient and more complete comprehension of oven, and it primes the processing of any items in the thematically related field that might subsequently appear (Estes et al., 2011; Hare, Jones, Thomson, Kelly, & McRae, 2009; Metusalem et al., 2012). Thus, thematic association can be thought of as a form of prediction that allows anticipation of future events and extensive inference about current situations (Bar, 2007).
- The O1 was P. = property state (e.g., The ball was heavy.)
- The O1 was L the O2. = spatial relationship state (e.g., The ball was in the box.)
- The A1 did I. = intransitive event (e.g., The girl ran.)
- The A1 did T to O1. = transitive object event (e.g., The girl hit the ball.)
- The A1 did T to A2. = transitive social event (e.g., The girl hit the boy.)
As these schematic examples illustrate, propositional content is constructed of configurations of concepts. For a situation to be represented in awareness, all of the constituent concepts must be simultaneously activated and in some sense bound together, with each concept assigned its thematic role. It is difficult to see how such complex conceptual combinations could be instantiated using sensory-motor representations alone. This would require a flexible representation of thematic roles within sensory-motor systems that would distinguish, say, the concept of girl as an agent versus girl as a patient in a social situation. Such a distinction would depend on relationships between the girl and the other entities comprising the situation, which by definition arise de novo from the particulars of the situation and so could not be contained within the sensory-motor content of girl. High-level CCRs provide a schematic, or “chunked” representation of concepts to which roles can be assigned flexibly, based on context.
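The "chunking" point can be made with a trivial data-structure analogy (purely illustrative, not a neural model): a situation is a configuration that assigns thematic roles to concept tokens, so the same token can be an agent in one situation and a patient in another without any change to its internal content:

```python
# Illustrative analogy only: situations as role -> concept configurations.
# The concept token "girl" is identical in both; only its assigned role differs.
s1 = {"agent": "girl", "action": "hit", "patient": "ball"}  # The girl hit the ball.
s2 = {"agent": "boy", "action": "hit", "patient": "girl"}   # The boy hit the girl.

print(s1["agent"], s2["patient"])  # same token, different roles
```

The role assignment lives in the configuration, not inside the concept, which is why a schematic, chunked token is what flexible role binding seems to require.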
The specific mechanisms by which such conceptual composition occurs are still largely unknown, and a detailed discussion of these processes is beyond the scope of this review. In a language comprehension context, syntax obviously provides important sources of information for constraining conceptual composition. The present theory, however, is about conceptual processing in general, whether in a linguistic or a nonlinguistic “mental imagery” context. Even in language tasks it seems clear that a conceptual composition must be computed independent of language prior to comprehension or overt expression (Bransford & Johnson, 1973; Kintsch & van Dijk, 1978; Metusalem et al., 2012; Tanenhaus et al., 1995). One general idea is that CCRs are associated with other concepts that “afford” particular kinds of roles and relationships. As one example, the concept of intentionality is strongly associated with concepts of individual people and groups of people, and to some extent with intelligent animals. Activation of this associated concept biases interpretation toward a role as agent in a situation. As another example, very large, inanimate objects (parks, buildings, etc.) are associated with the concept of being fixed in space, which affords a role as a spatial reference point and a geographical ‘container’ in which activities can occur. Verb concepts, too, have associations that constrain the types of subjects and objects with which they can sensibly combine (e.g., a car can hit a tree but a car cannot eat a tree) and specify the spatial, temporal, body action, mental experience, social, and other schemata contained in the event that is being represented (Jackendoff, 1990; Levin, 1993).
According to this theory, then, another principal function of high-level CCRs is to create mental representations of situations. The importance of this process for human cognition is hard to overstate, as it provides the semantic content for our episodic memory, imagination of future events, evaluation of propositions for truth value, moral judgments, goal setting and problem solving, daydreaming and mind wandering, and all other thought processes that involve forming relational configurations of concepts. One often-discussed problem for which such configurations might provide a general solution is the representation of very abstract concepts, such as justice, evil, truth, loyalty, and idea. Many such concepts seem to be learned by experience with complex social and introspective situations that unfold over time and involve multiple agents, physical events, and mental events (Barsalou & Wiemer-Hastings, 2005; Borghi, Flumini, Cimatti, Marocco, & Scorolli, 2011; Wiemer-Hastings & Xu, 2005). Thus, Barsalou has proposed that such concepts seem “abstract” because their content is distributed across multiple components of situations (Barsalou & Wiemer-Hastings, 2005). According to this view, then, the ability to build mental representations of situations through relational configuration of high-level CCRs is central to the representation of many abstract concepts.
A frequently noted limitation of symbolic representations is their static nature, which rules out contextual flexibility in concept retrieval (Barsalou, 1982; McCarthy & Hayes, 1969; Murphy & Medin, 1985; Wittgenstein, 1958). It is important to realize, however, that this problem arises only in models composed entirely of static symbols. Hierarchical convergence zone models contain a mixture of (subsymbolic) distributed modal representations and more abstract conjunctive codes, and permit interactions between and within levels. Context effects could arise in these structures through two mechanisms. First, interactions at high levels between CCRs representing the context (call them “context CCRs”) and CCRs representing the topic concept could modulate activation of other high-level CCRs associated with the topic. For example, in the context of the question, “What color is your dog?”, the context CCR color activates a field of color concepts, one of which is associated with my dog and thus receives additional activation. Second, context CCRs could cause top-down activation of modal components of the topic CCR. In the context of the question, “What does your dog sound like?”, the context CCR sound interacts with the topic CCR my dog to produce top-down activation of a perceptual simulation of the sounds produced by my dog.
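The first mechanism can be caricatured as context-weighted retrieval over an association network. All names and weights below are hypothetical, chosen only to show how a context CCR can bias which associate of a topic wins:

```python
# Toy sketch (hypothetical items and weights): context CCRs multiplicatively
# boost those associates of the topic CCR that they are themselves linked to.

associations = {
    "my_dog": {"brown": 0.6, "bark_sound": 0.7, "leash": 0.5},
    "color": {"brown": 1.0},
    "sound": {"bark_sound": 1.0},
}

def retrieve(topic, context):
    """Return the topic associate receiving the most context-boosted activation."""
    scores = {}
    for assoc, weight in associations[topic].items():
        boost = 1.0 + associations.get(context, {}).get(assoc, 0.0)
        scores[assoc] = weight * boost
    return max(scores, key=scores.get)

print(retrieve("my_dog", "color"))  # the color question favors 'brown'
print(retrieve("my_dog", "sound"))  # the sound question favors 'bark_sound'
```

The same topic representation yields different retrievals under different contexts, without any change to the topic's stored content.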
Some neuroimaging evidence for broadly conjunctive conceptual representations
Given the hypothesis that nodes in the “conceptual hub” network shown in Fig. 1 contain high-level CCRs, several fairly straightforward predictions are possible regarding modulation of activity in these nodes. The first is that activation in these areas should reflect the number of CCRs that are active (and their intensity of activation) at any given moment, which in turn depends on the number of associations that these CCRs have. Distributed neural ensembles in these regions are literally equivalent to CCRs, each of which can activate a set of associated CCRs. (The exact set activated and the strength of activation of each member in the set is assumed to vary with context and individual experience.) All else being equal, a CCR that activates many other associated CCRs (causing, in turn, activation of the CCRs associated with those CCRs, and so on) will produce greater activation in these areas than a CCR with relatively few or relatively weak associations. This prediction was verified by Bar and colleagues (Bar, 2007) in a series of studies contrasting object concepts that have strong thematic associations (e.g., microscope) with objects that have weaker or less consistent thematic associations (e.g., camera). Relative to low-association concepts, high-association concepts produced greater activation of the posterior cingulate/precuneus region, the medial prefrontal cortex, and a left parieto-occipital focus that is probably in the posterior angular gyrus (Talairach coordinates -49, -72, 13).
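The prediction can be sketched as spreading activation over an associative graph, in which total evoked activity grows with the number and depth of a concept's associations. The graphs, decay parameter, and items below are hypothetical illustrations, not data from the studies cited:

```python
# Toy sketch: total network activation as a function of associative "fan".

def total_activation(graph, seed, decay=0.5, depth=2):
    """Sum of activation reaching all nodes after spreading from the seed."""
    act = {seed: 1.0}
    frontier = {seed: 1.0}
    for _ in range(depth):
        nxt = {}
        for node, a in frontier.items():
            for neighbor in graph.get(node, []):
                nxt[neighbor] = nxt.get(neighbor, 0.0) + a * decay
        for node, a in nxt.items():
            act[node] = act.get(node, 0.0) + a
        frontier = nxt
    return sum(act.values())

high = {"microscope": ["lab", "slide", "scientist"], "lab": ["scientist"]}
low = {"camera": ["photo"]}
print(total_activation(high, "microscope") > total_activation(low, "camera"))  # True
```

A richly associated concept recruits more downstream activation than a sparsely associated one, which is the qualitative pattern reported for high- versus low-association objects.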
Another observation explained by the general principle of association is the activation of conceptual hubs, particularly the angular gyrus, ventral temporal lobe, and posterior cingulate region, by concrete relative to abstract concepts (Bedny & Thompson-Schill, 2006; Binder, Medler, et al., 2005; Binder, Westbury, et al., 2005; Binder et al., 2009; Fliessbach, Weis, Klaver, Elger, & Weber, 2006; Graves, Desai, Humphries, Seidenberg, & Binder, 2010; Jessen et al., 2000; Sabsevitz et al., 2005; Wallentin, Østergaard, Lund, Østergaard, & Roepstorff, 2005; see Fig. 2, top right). Concrete words show a variety of behavioral processing advantages over abstract words, including faster response times in lexical and semantic decision tasks and better recall in episodic memory tasks. Paivio explained these advantages as due to the availability of visual and other sensory associations in the case of concrete concepts and not in the case of abstract concepts (Paivio, 1986). Schwanenflugel and colleagues proposed that concrete concepts have greater “context availability” (Schwanenflugel, 1991), meaning that they more readily or automatically activate a network of situational and contextual associations than abstract concepts. Thus, these theories have in common the idea that abstract concepts produce less activation of associated knowledge than concrete concepts. This claim might initially seem to contradict other proposals, mentioned above, that abstract concepts depend on complex situational knowledge to a greater degree than concrete concepts (Barsalou & Wiemer-Hastings, 2005). However, the idea that abstract concepts depend more on situational knowledge does not mean that this knowledge is more available. Recent work by Hoffman et al. using latent semantic analysis of text corpora suggests that abstract concepts actually tend to occur in a wider variety of semantic contexts than concrete words (Hoffman et al., 2011).
However, high contextual variability is also associated with reduced distinctiveness of meaning (Hoffman et al., 2013), which presumably makes retrieval of associations less automatic in the case of abstract concepts (Schwanenflugel, 1991). The greater activation of conceptual hub nodes by concrete concepts is therefore consistent with the idea that activation of these nodes reflects the overall intensity of associated concept activation rather than just their sheer number.
Word frequency is another variable correlated with number and strength of associations (Nelson & McEvoy, 2000). Frequency of use is an approximate indicator of the familiarity of a concept (Baayen, Feldman, & Schreuder, 2006; Graves, Desai, et al., 2010; Toglia & Battig, 1978) and the variety of contexts in which it is used (Adelman, Brown, & Quesada, 2006; Hoffman et al., 2011). Frequency was positively correlated with the number of semantic features subjects produced in a feature listing procedure (McRae, Cree, Seidenberg, & McNorgan, 2005). Several studies (Carreiras, Riba, Vergara, Heldmann, & Münte, 2009; Graves, Desai, et al., 2010; Prabhakaran, Blumstein, Myers, Hutchison, & Britton, 2006) have now reported activation of conceptual hub nodes (angular gyrus, posterior cingulate gyrus, and dorsomedial prefrontal cortex) as a function of increasing word frequency (see Fig. 2, lower right). Assuming that words with higher frequency of use automatically activate a larger number of associations, this result is consistent with the aforementioned word-pseudoword, familiar-unfamiliar name, and concrete-abstract effects, all of which can be accounted for by a common underlying mechanism (i.e., relative differences in the overall intensity of activation of associated concepts).
Note that these modulatory effects are, strictly speaking, “supramodal” in the sense that they are not related to any particular sensory, motor, or affective content; it is thus unclear how they could be explained in terms of modal representations. Vigliocco and colleagues (Kousta, Vigliocco, Vinson, Andrews, & Del Campo, 2011; Vigliocco et al., 2014) have pointed out a correlation between abstractness and affective content, but this correlation would explain only activation differences favoring abstract words, not the converse. Whereas associative networks of high-level CCRs provide a unified account of all of these phenomena, it is unclear how theories that deny or minimize the role of such representations (e.g., Barsalou, 1999; Gallese & Lakoff, 2005; Martin, 2007; Prinz, 2002) can account for any of them.
(1) the man on a vacation lost a bag and a wallet
(2) on vacation a lost a and bag wallet a man the
(3) the freeway on a pie watched a house and a window
(4) a ball the a the spilled librarian in sign through fire
In (1), the constituent concepts can be combined to represent a semantically coherent, plausible situation, and the lawful syntactic structure assists the formation of this representation by indicating thematic roles. In (2), the same constituents are present but without a supporting syntactic structure. In (3), thematic roles are clear from the syntax, but the constituents have no semantic relationship to a common theme that would enable the construction of a coherent situation. In (4) there is no clear thematic relationship among the constituents and no syntactic cues to indicate thematic roles.
The importance of compositionality is that it permits a wide range of additional associations to be activated. Once the situation depicted in (1) is represented in the conceptual hub network, for example, we can activate representations of how the man in the situation might feel having lost these valuable items, possible scenarios that led to the losses, what repercussions the losses might have, and what actions he might then take. Each of these associated representations can then lead to activation of other relevant associations, such as representations of possible objects that were in the lost bag and wallet, the possible locations of the missing items, and the likelihood that they will be found. Activation of such associated concepts and situations is much less likely to occur in response to (2) because of the relative difficulty in retrieving a coherent representation of the situation in the absence of syntactic cues, although a partial representation might still be possible as a result of interactions between the thematically-related concepts without explicit role assignment. Activation of associated concepts and situations is also less likely in response to (3) because the situation described does not correspond to any plausible real-world event (a freeway cannot be located on a pie, and a freeway cannot watch something), although the combination of freeway, house, and window might evoke a partial representation of a house situated near a freeway. Similarly, string (4) might conceivably activate a partial representation of a fire in a library, but the absence of a clear theme linking all the constituents and the lack of thematic roles would likely result in a rather weak and noisy representation.
To summarize, the hypothesis that neural activity in conceptual hub areas reflects activation of associated networks of CCRs accounts for a wide range of empirical data. At the simplest level, this hypothesis explains effects of lexicality, familiarity, concreteness, and frequency observed in single word studies. With more complex conceptual structures, the same basic mechanism accounts for successively greater activation by sentences and phrases relative to unrelated word strings, and connected text relative to unrelated sentences. Finally, the same principles can be applied to account for the “spontaneous” activity that occurs in these regions during the conscious “resting” state. This state is now generally recognized to include rich and dynamically changing conceptual content in the form of mental representations of situations pertaining to the past, present, and future (Andreasen et al., 1995; Andrews-Hanna, 2012; Antrobus, 1968; Binder et al., 1999; McKiernan, D’Angelo, Kaufman, & Binder, 2006; Pope & Singer, 1976; Smallwood & Schooler, 2006). The adaptive and other intrinsic properties of such representations have made them an independent focus of study, but even such complex mental representations must arise from simpler neurobiological processes. The proposal offered here is that the conceptual content of these representations arises through activation of associated combinations of CCRs, the conceptual building blocks for representing situations in conscious awareness.
I have argued for the importance of a type of abstract conceptual representation derived from convergences of information at crossmodal levels. High-level CCRs capture broad conjunctions of inputs and retain variable amounts of experiential information content; thus, they are not equivalent to amodal symbols. Broadly conjunctive conceptual representations perform an essential ‘chunking’ function that is useful for capturing taxonomic similarity structure, making possible thematic association, and enabling situation building in conscious awareness, three ubiquitous conceptual processes that seem difficult to explain using purely modal representations. The neurobiological importance of abstract CCRs is supported by empirical evidence for a network of high-level convergence zones (conceptual hubs) whose neural activity depends on the general associative richness (i.e., meaningfulness) of stimuli but not on the presence or absence of particular modal sensory-motor content. The need for abstract conceptual representations has been questioned by some proponents of a pure embodied knowledge view, perhaps in part as a reaction to traditional nonbiological, symbolic models of conceptual processing. While some versions of embodiment theory explicitly recognize the need for conjunctive representations (Barsalou, 1999; Damasio, 1989; Simmons & Barsalou, 2003), the computational advantages of high-level, supramodal conjunctions and the proportion of cortex devoted to their processing are often underestimated. The theory promoted here is that neural representations at different levels of abstraction contribute to conceptual knowledge in different ways. Whereas modal sensory, motor, and affective representations serve to ground concepts by enabling reference to the external world, abstract CCRs enable associative and generative processes that support a range of mental simulation, recall, deduction, prediction, and other phenomena dependent on the representation of situations.
These associative and generative processes represent a large component of everyday conceptual cognition and depend on a large, distributed, dedicated brain network.
My thanks to Colin Humphries for generously providing the data shown in Fig. 3. Thanks also to Lisa Conant and Rutvik Desai for providing comments on an early draft of this paper.
- Andreasen, N. C., O’Leary, D. S., Cizadlo, T., Arndt, S., Rezai, K., Watkins, G. L., . . . Hichwa, R. D. (1995). Remembering the past: Two facets of episodic memory explored with positron emission tomography. American Journal of Psychiatry, 152, 1576–1585.
- Barlow, H. (1995). The neuron doctrine in perception. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 415–435). Cambridge: MIT Press.
- Buckner, R. L., Sepulcre, J., Talukdar, T., Krienen, F. M., Liu, H., Hedden, T., . . . Johnson, K. A. (2009). Cortical hubs revealed by intrinsic functional connectivity: Mapping, assessment of stability, and relation to Alzheimer’s disease. Journal of Neuroscience, 29, 1860–1873.
- Démonet, J.-F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous, J.-L., Wise, R., . . . Frackowiak, R. (1992). The anatomy of phonological and semantic processing in normal subjects. Brain, 115, 1753–1768.
- Devlin, J. T., Russell, R. P., Davis, M. H., Price, C. J., Wilson, J., Moss, H. E., . . . Tyler, L. K. (2000). Susceptibility-induced loss of signal: Comparing PET and fMRI on a semantic task. NeuroImage, 11, 589–600.
- Fischer, M. H., & Zwaan, R. A. (2008). Embodied language: A review of the role of the motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825–850.
- Gagné, C., & Shoben, E. J. (1997). Influence of thematic relations on the comprehension of modifier–noun combinations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 71–87.
- Jackendoff, R. (1990). Semantic structures. Cambridge: MIT Press.
- Kuperberg, G. R., McGuire, P. K., Bullmore, E. T., Brammer, M. J., Rabe-Hesketh, S., Wright, I., . . . David, A. S. (2000). Common and distinct neural substrates for pragmatic, semantic, and syntactic processing of spoken sentences: An fMRI study. Journal of Cognitive Neuroscience, 12, 321–341.
- Levin, B. (1993). English verb classes and alternations: A preliminary investigation. Chicago: University of Chicago Press.
- Masson, M. E. J. (1995). A distributed memory model of semantic priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 3–23.
- McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 4, 463–502.
- Mesulam, M. (1985). Patterns in behavioral neuroanatomy: Association areas, the limbic system, and hemispheric specialization. In M. Mesulam (Ed.), Principles of behavioral neurology (pp. 1–70). Philadelphia: F. A. Davis.
- O’Reilly, R. C., & Busby, R. S. (2001). Generalizable relational binding from coarse-coded distributed representations. Advances in Neural Information Processing Systems, 14, 75–82.
- Paivio, A. (1986). Mental representations: A dual-coding approach. New York: Oxford University Press.
- Pallier, C., Devauchelle, A.-D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences of the United States of America, 108, 2522–2527.
- Pope, K. S., & Singer, J. L. (1976). Regulation of the stream of consciousness: Toward a theory of ongoing thought. In G. E. Schwartz & D. Shapiro (Eds.), Consciousness and self-regulation (pp. 101–135). New York: Plenum Press.
- Prinz, J. J. (2002). Furnishing the mind: Concepts and their perceptual basis. Cambridge: MIT Press.
- Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge: MIT Press.
- Rogers, T. T., & McClelland, J. L. (2004). Semantic cognition: A parallel distributed processing approach. Cambridge: MIT Press.
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing, Volume 1: Foundations (pp. 318–362). Cambridge: MIT Press.
- Schwanenflugel, P. (1991). Why are abstract concepts hard to understand? In P. Schwanenflugel (Ed.), The psychology of word meanings (pp. 223–250). Hillsdale: Erlbaum.
- Suga, N. (1988). Auditory neuroethology and speech processing: Complex sound processing by combination-sensitive neurons. In G. M. Edelman, W. E. Gall, & W. M. Cowan (Eds.), Functions of the auditory system (pp. 679–720). New York: Wiley.
- Sugiura, M., Sassa, Y., Watanabe, J., Akitsuki, Y., Maeda, Y., Matsue, Y., . . . Kawashima, R. (2006). Cortical mechanisms of person representation: Recognition of famous and personally familiar names. NeuroImage, 31, 853–860.
- Toglia, M. P., & Battig, W. F. (1978). Handbook of semantic word norms. Hillsdale: Erlbaum.
- Wittgenstein, L. (1958). Philosophical investigations (3rd ed.). New York: Macmillan.
- Woodard, J. L., Seidenberg, M., Nielson, K. A., Miller, S. K., Franczak, M., Antuono, P., . . . Rao, S. M. (2007). Temporally graded activation of neocortical regions in response to memories of different ages. Journal of Cognitive Neuroscience, 19, 1–12.