
Psychonomic Bulletin & Review, Volume 23, Issue 4, pp 979–990

GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain

Alex Martin

Theoretical Review

Abstract

In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed.

Keywords

Concepts and categories · Cognitive neuroscience of memory · Embodied cognition · Neuroimaging and memory

For the past two decades, my colleagues and I have studied the neural foundation for conceptual representations of common objects, actions, and their properties. This work has been guided by a framework that I have previously referred to as the “sensory–motor model” (Martin, 1998, Martin, Ungerleider, & Haxby, 2000), and that I will refer to here by the acronym GRAPES (standing for “grounding representations in action, perception, and emotion systems”). This framework is a variant of the sensory/functional model outlined by Warrington, Shallice, and colleagues in the mid-1980s (Warrington & McCarthy, 1987, Warrington & Shallice, 1984) that has dominated neuropsychological (e.g., Damasio, Tranel, Grabowski, Adolphs, & Damasio, 2004, Humphreys & Forde, 2001), cognitive (e.g., Cree & McRae, 2003), and computational (e.g., McClelland & Rogers, 2003, Plaut, 2002) models of concept representation (see also Allport, 1985).

In this article, I will describe the GRAPES model and discuss its implications for understanding how conceptual knowledge is organized in the human brain.

Preliminary concerns

For the present purposes, an object concept refers to the information an individual possesses that defines a basic-level object category (roughly equivalent to the name typically assigned to an object category, such as “dog,” “hammer,” or “apple”; see Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976, for details). Concepts play a central role in cognition because they eliminate the need to rediscover or relearn an object’s properties with each encounter (Murphy, 2002). Identifying an object or a word as a “hammer” allows us to infer that this is an object that is typically made of a hard substance, grasped in one hand, used to pound nails—that it is, in fact, a tool—and so forth. It takes only brief reflection (or a glance at a dictionary) to realize that object information is not limited to perception-, action-, or emotion-related properties. We know, for example, that “dogs” like to play fetch, carpenters use “hammers,” and “apples” grow on trees. In fact, most of the information that we possess about objects is this type of associative or encyclopedic knowledge. This knowledge is typically expressed verbally, is unlimited (there is no intrinsic limit on how much information we can acquire), and is often idiosyncratic (e.g., some people know things about “dogs” that others do not). In contrast, another level of representation, often referred to as semantic or conceptual “primitives,” is accessed automatically, constrained in number, and universal to everyone who possesses the concept. Conceptual primitives are object-associated properties that underpin our ability to quickly and efficiently identify objects at the basic-category level (e.g., as a “dog,” “hammer,” or “apple”), regardless of the modality of presentation (visual, auditory, tactile, or internally generated) or the stimulus format (verbal, nonverbal). 
Conceptual primitives provide a scaffolding or foundation to support both the explicit retrieval of object-associated information (e.g., enabling us to answer “orange” when asked, What color are carrots?), as well as to gain access to information from our large stores of associative/encyclopedic object knowledge (e.g., allowing us to answer “rabbit” when asked, What animal likes to eat carrots?). For a more detailed discussion of this and related issues, see Martin (1998, 2007, 2009).

The GRAPES model

The central claim is that information about the salient properties of an object—such as what it looks like, how it moves, how it is used, as well as our affective response to it—is stored in our perception, action, and emotion systems. The use of the terminology “perception, action, and emotion systems” rather than “sensory-motor regions” is a deliberate attempt to guard against an unfortunate and unintended consequence of the sensory–motor terminology that has given some the mistaken impressions that concepts could be housed in primary sensory and motor cortices (e.g., V1, S1, A1, M1) and that an object concept could be stored in a single brain region (e.g., a “tool” region). Nothing could be further from the truth. As is described below, my position is, and has always been, that the regions where we store information about specific object-associated properties are located within (i.e., overlap with) perceptual and action systems, specifically excluding primary sensory–motor regions (Martin, Haxby, Lalonde, Wiggs, & Ungerleider, 1995; however, the role of primary sensory and motor regions in conceptual-processing tasks is an important issue that will be addressed below).

It is also assumed that this object property information is acquired and continually updated through innately specified learning mechanisms (for a discussion, see Caramazza & Shelton, 1998, Carey, 2009). These mechanisms allow for the acquisition and storage of object-associated properties—form, color, motion, and the like. Although the architecture and circuitry of the brain dictates where these learning mechanisms are located, they are not necessarily tied to a single modality of input (i.e., they are property-specific, not modality-specific). For example, a mechanism specialized for learning about object shape or form will typically work upon visual input because that is the modality through which object form information is commonly acquired. As a result, this mechanism will be located in the ventral occipitotemporal visual object-processing stream. However, as has been convincingly demonstrated by studies of typically developing (Amedi, Malach, Hendler, Peled, & Zohary, 2001, Amedi, Jacobson, Hendler, Malach, & Zohary, 2002) as well as congenitally blind (Amedi et al., 2007, Pietrini et al., 2004) individuals, this mechanism can work upon tactile input, as well. Thus, information about the physical shape or form of objects will be stored in the same place in both normally sighted and blind individuals (e.g., ventral occipitotemporal cortex), regardless of the modality through which that information was acquired (see also Mahon et al., 2009, Noppeney, Friston, & Price, 2003). Relatedly, this information can be accessed through multiple modalities as well (e.g., information about how dogs look is accessed automatically when we hear a bark, or when we read or hear the word “dog”; Tranel et al., 2003a).

There are two major consequences of this formulation. Firstly, from a cognitive standpoint, it provides a potential solution for the grounding problem: How do mental representations become connected to the things they refer to in the world (Harnad, 1990)? Within GRAPES and related frameworks, representations are grounded by virtue of their being situated within (i.e., partially overlapping with) the neural system that supports perceiving and interacting with our external and internal environments.

Secondly, from a neurobiological standpoint, it provides a strong, testable—and easily falsifiable—claim about the spatial organization of object information in the brain. Not only is object property information distributed across different locations, but these locations are also highly predictable on the basis of our knowledge of the spatial organization of the perceptual, action, and affective processing systems. Conceptual information is not spread across the cortex in a seemingly random, arbitrary fashion (Huth, Nishimoto, Vu, & Gallant, 2012), but rather follows a systematic plan.

The representation of object-associated properties: The case of color

According to the GRAPES model, object property information is stored within specific processing streams, but downstream from primary sensory, and upstream from motor, cortices. The overwhelming majority of functional brain-imaging studies support this claim (Kiefer & Pulvermüller, 2012, Martin, 2009, Thompson-Schill, 2003). Here I will concentrate on a single property, color, to illustrate the main findings and points.

Early brain-imaging studies showed that retrieving the name of a color typically associated with an object (e.g., “yellow” in response to the word “pencil”), relative to retrieving a word denoting an object-associated action (e.g., “write” in response to the word “pencil”), elicited activity in a region of the fusiform gyrus in ventral temporal cortex anterior to the region in occipital cortex associated with perceiving colors (Martin et al., 1995; and see Chao & Martin, 1999, and Wiggs, Weisberg, & Martin, 1999, for similar findings). Converging evidence to support this claim has come from studies of color imagery generation in control subjects (Howard et al., 1998) and of color–word synesthetes responding to heard words (Paulesu et al., 1995).

Importantly, these findings were also consistent with clinical studies documenting a double dissociation between patients with achromatopsia—acquired color blindness concurrent with a preserved ability to generate color imagery (commonly associated with lesions of the lingual gyrus in the occipital lobe; e.g., Shuren, Brott, Schefft, & Houston, 1996)—and color agnosia—impaired knowledge of object-associated colors concurrent with normal color vision (commonly associated with lesions of posterior ventral temporal cortex, although these lesions can also include occipital cortex; e.g., Miceli et al., 2001, Stasenko, Garcea, Dombovy, & Mahon, 2014).

We interpreted our findings as supporting a grounded-cognition view based on the fact that the region active when retrieving color information was anatomically close to the region previously identified as underpinning color perception, whereas retrieving object-associated action words yielded activity in lateral temporal areas close to the site known to support motion perception (see Martin et al., 1995, for details). However, these data could just as easily be construed as being consistent with “amodal” frameworks that maintain that conceptual information is autonomous or separate from sensory processing (e.g., Wilson & Foglia, 2011). The grounded-cognition position maintains that the neural substrates for conceptual, perceptual, and sensory processing are all part of a single, anatomically broad system supporting both perceiving and knowing about object-associated information. Thus, evidence in support of grounded cognition would require showing functional overlap between, for example, the neural systems supporting sensory/perceptual and conceptual processing of color.

Although early attempts to demonstrate such a link failed (Chao & Martin, 1999), subsequent investigations have yielded strong, converging evidence to support that claim. Beauchamp, Haxby, Jennings, and DeYoe (1999) showed that when color-selective cortex was mapped by having subjects passively view colored versus grayscale stimuli, as had typically been done in previous studies (e.g., Chao & Martin, 1999, McKeefry & Zeki, 1997, Zeki et al., 1991), neural activity was restricted to the occipital cortex. However, when color-selective cortex was mapped using a more demanding task requiring subjects to make subtle judgments about differences in hue (modeled after the classic Farnsworth–Munsell 100-Hue Test used in the clinical evaluation of color vision), activity extended downstream from occipital cortex to the fusiform gyrus in ventral posterior temporal cortex (Beauchamp et al., 1999). We replicated this finding and further observed that this downstream region of the fusiform gyrus was also active when subjects retrieved information about object-associated color using a verbal property-verification task (Simmons et al., 2007). These data provided the best of both worlds: continued support for the double dissociation between color agnosia and achromatopsia (because the color perceptual task, but not the color conceptual task, activated occipital cortex), now coupled with evidence consistent with grounded cognition (because both tasks activated the same downstream region of the fusiform gyrus) (Simmons et al., 2007; see Martin, 2009, and Stasenko et al., 2014, for discussions).

Additional supporting evidence has come from a completely different source—electrophysiological recording and stimulation of the human cortex. Recording from posterior brain regions prior to neurosurgery, Murphey, Yoshor, and Beauchamp (2008) identified a site in the fusiform gyrus that not only was color-responsive, but was preferentially tuned to viewing a particular blue-purple color. Moreover, when electrical stimulation was applied to that site, the patient reported vivid, blue-purple color imagery (see Murphey et al., 2008, for details). The location of this region corresponded closely to the region active in previous imaging studies of color information retrieval and, as is illustrated in Fig. 1, corresponded remarkably well to the region active during both perceiving and retrieving color information in the Simmons et al. (2007) study.
Fig. 1

Regions of ventral occipitotemporal cortex responsive to perceiving and knowing about color. (A) Ventral view of the right hemisphere of a single patient. The red dot shows the location of the electrode that responded most strongly to blue-purple color and that produced blue-purple visual imagery when stimulated (reprinted with permission; see Murphey et al., 2008, for details). (B) Ventral view of the left hemisphere from the group study on perceiving and knowing about color (Simmons et al., 2007). Regions active when distinguishing subtle differences in hue are shown in yellow. The black circle indicates the approximate location of the lingual gyrus region active when passively viewing colors. The region responding to both perceiving and retrieving information about color is shown in red. Note the close correspondence between that region and the location of the electrode in panel A

Thus, in support of the grounded-cognition framework, these data indicate that the processing system supporting color perception includes both lower-level regions that mediate the conscious perception—or more appropriately, the “sensation” of color—and higher-order regions that mediate both perceiving and storing color information. Moreover, as will be discussed below, these posterior and anterior regions are in a dynamic, interactive state to support contextual, task-dependent demands.

The effect of context 1: Conceptual task demands influence responses in primary sensory (color) cortex

The Simmons et al. (2007) study using the modified version of the Farnsworth–Munsell 100-Hue Test demonstrated that increasing perceptual processing demands resulted in activity that extended downstream from low-level into higher-order color-processing regions. Thompson-Schill and colleagues have provided evidence that the reverse effect also holds (Hsu, Frankland, & Thompson-Schill, 2012); that is, increasing conceptual-processing demands can produce activity that feeds back upstream into early, primary-processing areas in order to solve the task at hand. These investigators also used the modified Farnsworth–Munsell 100-Hue Test to map color-responsive cortex. However, in contrast to the property verification task used by Simmons and colleagues, which required a “yes/no” response to such probes as “eggplant–purple” (Simmons et al., 2007), the study by Hsu et al. (2012) used a conceptual-processing task requiring subjects to make subtle distinctions in hue, thereby more closely matching the demands of the color perception task (e.g., Which object is “lighter”? lemon, basketball; see Hsu et al., 2012, for details). Under these conditions, both the color perception and color knowledge tasks yielded overlapping activity in a region of the lingual gyrus in occipital cortex associated with the sensory processing of color. Moreover, this effect seems to be tied to similarity in the demands of the perceptual and conceptual tasks, since previous work by these investigators had shown that simply making the conceptual task more attention-demanding increased activity in the fusiform, but not the lingual, gyri (Hsu, Kraemer, Oliver, Schlichting, & Thompson-Schill, 2011).
These findings suggest that, in order to meet specific task demands, higher-level regions in the fusiform gyrus that store information about object-associated color can reactivate early, lower-level areas in occipital cortex that underpin the sensory processing of color (and see Amsel, Urbach, & Kutas, 2014, for more evidence for the tight linkage between low-level perceptual and high-level conceptual processes in the domain of color).

As will be discussed next, low-level sensory regions can also show effects of conceptual processing when the modulating influence arises from the demands of our internal, rather than the external, environment.

The effect of context 2: The body’s homeostatic state influences responses in primary sensory (gustatory) cortex to pictures of appetizing food

A number of functional brain-imaging studies have shown that identifying pictures of appetizing foods activates a site located in the anterior portion of the insula (as well as other brain areas, such as orbitofrontal cortex; e.g., Killgore et al., 2003, Simmons, Martin, & Barsalou, 2005; see van der Laan, de Ridder, Viergever, & Smeets, 2011, for a review). Because the human gustatory system sends inputs to the insula, we interpreted this activity as reflecting inferences about taste generated automatically when viewing food pictures (Simmons et al., 2005). We have now obtained direct evidence in support of this proposal by mapping neural activity associated with a pleasant taste (apple juice, relative to a neutral liquid solution) and inferred taste (images of appetizing foods, relative to nonfood pictures) (Simmons et al., 2013). Juice delivery elicited activity in primary gustatory cortex, located in the mid-dorsal region of the insula (Small, 2010), as well as in the more anterior region of the insula identified in the previous studies of appetizing food picture identification (representing inferred taste). Viewing pictures of appetizing foods yielded activity in the anterior, but not mid-dorsal, insula. Thus, these results followed the same pattern as our study on perceiving and knowing about color (Simmons et al., 2007). Whereas gustatory processing activated both primary (mid-dorsal insula) and more anterior insula sites, higher-order representations associated with viewing pictures of food were limited to the more anterior region of insular cortex.

However, a unique feature of our study was that, because it was part of a larger investigation of dietary habits, we were able to acquire data on our subjects’ metabolic states immediately prior to the scanning session. Analyses of those data revealed that the amount of glucose circulating in peripheral blood was negatively correlated with the neural response to food pictures in the mid-dorsal, primary gustatory region of the insula—the lower the glucose level, the stronger the insula response. This unexpected finding indicated that bodily input could modulate the brain’s response to visual images of one category of objects (appetizing foods) but not others (nonfood objects; see Simmons et al., 2013, for details). When the body’s energy resources are low (as indexed by low glucose levels), pictures of appetizing foods become more likely to activate primary gustatory cortex, perhaps as a signal to act (i.e., to obtain food; more on this later). Moreover, this modulatory effect of glucose on the neural response to food pictures occurred in primary gustatory cortex—an area, like primary color-responsive cortex in the occipital lobe, assumed not to be involved in processing higher-order information (Fig. 2).
Fig. 2

Regions of insular cortex responsive to perceived and inferred taste: Sagittal view of the left hemisphere showing regions in the insular cortex responsive to a pleasant taste (green) and to viewing pictures of appetizing foods (blue). The histogram shows activation levels for food and nonfood objects in the taste-responsive anterior insula (red area). The graph shows the level of each subject’s response in primary gustatory cortex (mid-dorsal insula, green) as a function of peripheral blood glucose level. The correlation between glucose and the mid-dorsal insula response to food pictures was significant (r = –.51) and significantly stronger than the corresponding correlation for nonfood objects (r = –.04; see Simmons et al., 2013, for details)

Overall, these findings suggest a dynamic, interactive relationship between lower-level sensory and higher-order conceptual processing components of perceptual processing streams. Activity elicited in higher-order processing areas (fusiform gyrus for color, anterior insula for taste) may reflect the retrieval of properties associated with stable conceptual representations (invariant representations needed for understanding and communicating). In contrast, feedback from these regions to primary, low-level sensory processing areas may reflect contextual effects as a function of specific task requirements (as in the case of color) or bodily states (as in the case of taste). Neural activity elicited during conceptual processing is determined by both the content of the information retrieved and the demands of our external and internal environments.

What does overlapping activity mean?

The goal of these studies was to determine whether the neural activity selectively associated with retrieving object property information overlapped with the activity identified (independently localized) by a sensory or motor task. This approach has been used successfully multiple times. Some recent examples include showing that reading about motion activates motion-processing regions (Deen & McCarthy, 2010, Saygin, McCullough, Alac, & Emmorey, 2010), viewing pictures of graspable objects activates somatosensory cortex (Smith & Goodale, 2015), and viewing pictures of sound-implying objects (musical instruments, animals) activates auditory cortex (using an anatomical rather than a functional localizer; Meyer et al., 2010). The implication of these findings is that sensory/perceptual and conceptual processes are tightly linked. Demonstrating that retrieving information about color shows partial overlap with regions active when processing color licenses conclusions about where object property information is stored in the brain. This information is stored right in the processing system active when that information was acquired and updated. The alternative would be, for example, that we learn about the association between a particular object and its color in one place and then ship that information off to a different location for storage. The neuroimaging data provide clear evidence against that scenario.

The fact that overlapping activity is associated with perceptual and conceptual task performance does not mean, however, that the representations underpinning these processes—or their neural substrates—are identical. In fact, although functional brain-imaging data cannot address this issue,1 it is highly likely that the representations are substantially different. Perceiving, imagining, and knowing, after all, are very different things, and so must be their neural instantiations. Even at the level of the neural column, bottom-up and top-down inputs show distinct patterns of laminar connectivity (e.g., Felleman & van Essen, 1991, Foxworthy, Clemo, & Meredith, 2013) and rely on different oscillatory frequencies (Buffalo, Fries, Landman, Buschman, & Desimone, 2011, van Kerkoerle et al., 2014). Nevertheless, the fact that perceptual and conceptual representations differ leaves open the possibility that their formats are the same.

Content and format

There seems to be strong, if not unanimous, agreement about the content and relative location in the brain of perception- and action-related object property information. Hotly debated, however, is the functional significance of this information (see below) and, most especially, its format. Is conceptual information stored in a highly abstract, “amodal,” language-like propositional format? Or is it stored in a depictive, iconic, picture-like format? The chief claim of many advocates of embodied and/or grounded cognition is that object and action concepts are represented exclusively in depictive, modality-specific formats (e.g., Barsalou, 1999, Glenberg & Gallese, 2012, Zwaan & Taylor, 2006; and see Carey, 2009, for a discussion from a nonembodied perspective of why the representational format of all of “core cognition” is likely to be iconic). Others have argued forcefully that the representations are abstract, amodal, and disembodied (although necessarily interactive with sensory–motor information; see, e.g., the “grounding by interaction” hypothesis proposed by Mahon & Caramazza, 2008).

The importance of the distinction between the content and format of mental representations was raised by Caramazza and colleagues (Caramazza, Hillis, Rapp, & Romani, 1990) in their argument against the “multiple, modality-specific semantics hypothesis” advocated by Shallice (Shallice, 1988; and see Shallice, 1993, for his reply, and Mahon, 2015, for more on the format argument). Before that, the issue of format was, and it continues to be, the central focus of the lengthy debate over whether the format of mental imagery is propositional (e.g., Pylyshyn, 2003) or depictive (e.g., Kosslyn, Thompson, & Ganis, 2006).

The problem, however, is that we do not know how to determine the format of a representation (if we did, we would not still be debating the issue). And, knowing where in the brain information is stored, and/or what regions are active when that information is retrieved, offers no help at all. Even in the earliest, lowest-level regions of the visual-processing stream, the format could be depictive on the way up, and propositional on the way back down. What we do know is that at the biological level of description, mental representations are in the format of the neural code. No one knows what that is, and no one knows how it maps onto the cognitive descriptions of representational formats (i.e., amodal, propositional, depictive, iconic, and the like), nor even if those descriptions are appropriate for such mapping. What is missing from this debate is agreed-upon procedures for determining the format of a representation. Until then, the format question will remain moot. It has no practical significance.

Object property information is integrated within category-specific neural circuits: The case of “tools”

Functional brain imaging has provided a major advance in our thinking about how the brain responds to the environment by showing that viewing objects triggers a cascade of activity in multiple brain regions that, in turn, represent properties associated with that category of objects. Viewing faces, for example, elicits activity that extends beyond the fusiform face area to regions associated with perceiving biological motion (the posterior region of the superior temporal sulcus) and affect (the amygdala), even when the face images are static and posed with neutral expressions (Haxby, Hoffman, & Gobbini, 2000). Similarly, viewing images of common tools (objects with a strong link between how they are manipulated and their function; Mahon et al., 2007) elicits activity that extends beyond the ventral object-processing stream to include left hemisphere regions associated with object motion (posterior middle temporal gyrus) and manipulation (intraparietal sulcus, ventral premotor cortex) (Beauchamp, Lee, Haxby, & Martin, 2002, 2003, Chao & Martin, 2000, Grafton, Fadiga, Arbib, & Rizzolatti, 1997, Kellenbach, Brett, & Patterson, 2003, Mahon et al., 2007, Mahon, Kumar, & Almeida, 2013; and see Chouinard & Goodale, 2010, for a review).

Thus, specific object categories are associated with unique networks or circuits composed of brain regions that code for different object properties. There are several important points to note about these circuits. Firstly, they reflect some, but certainly not all, of the properties associated with a particular category. Tools and animals, for example, have distinctive sounds (hammers bang, lions roar), yet the auditory system is not automatically engaged when viewing or naming tools or animals. Certain properties are more salient than others for representing a category of objects—a result that agrees well with behavioral data (e.g., Cree & McRae, 2003). Secondly, the regions comprising a circuit do not come online in piecemeal fashion as they are required to perform a specific task, but rather seem to respond in an automatic, all-or-none fashion, as if they were part of the intrinsic, functional neural architecture of the brain. Indeed, studies of spontaneous, slowly fluctuating neural activity recorded when subjects are not engaged in performing a task (i.e., task-independent or resting-state functional imaging) strongly support this possibility. These studies have shown that during the so-called resting state, there is strong covariation among the neural signals spontaneously generated from each of the regions active when viewing and identifying certain object categories, including faces (O’Neil, Hutchison, McLean, & Köhler, 2014, Turk-Browne, Norman-Haignere, & McCarthy, 2010), scenes (Baldassano, Beck, & Fei-Fei, 2013, Stevens, Buckner, & Schacter, 2010), and tools (Hutchison, Culham, Everling, Flanagan, & Gallivan, 2014, Simmons & Martin, 2012, Stevens, Tessler, Peng, & Martin, 2015; see Fig. 3). Certain object categories are associated with activity in a specific network of brain regions, and these regions are in constant communication, over and above the current task requirements.
Fig. 3

Intrinsic circuitry for perceiving and knowing about “tools.” (A) Task-dependent activations: Sagittal view of the left hemisphere showing regions in posterior middle temporal gyrus, posterior parietal cortex, and premotor cortex that are more active when viewing tools than when viewing animals (blue regions, N = 34) (Stevens et al., 2015). (B) Task-independent data: Covariation of slowly fluctuating neural activity recorded at “rest” in a single subject (blue regions). Seeds were in the medial region of the left fusiform gyrus and in the right lateral fusiform gyrus (not shown), identified by the comparison of tools versus animals, respectively (independent localizer). Resting-state time series in the colored regions were significantly more correlated with fluctuations in the left medial fusiform gyrus than with those in the right lateral fusiform gyrus (for details, see Stevens et al., 2015). (C) Covariation of slowly fluctuating neural activity recorded at “rest” in a group study (blue regions, N = 25). Seeds were in the left posterior middle temporal gyrus and the right posterior superior temporal sulcus, identified by independent localizer scans (see Simmons & Martin, 2012, for details)

Although the function of this slowly fluctuating, spontaneous activity remains largely unknown, one possibility is that it allows information about different properties to be shared across regions of the network. If so, then each region may act as a convergence zone (Damasio, 1989, Simmons & Barsalou, 2003) or “hub,” representing its primary property and, to a lesser extent, the properties of one or more of the other regions in the circuit—depending, perhaps, on its spatial relation to the other regions in the circuit (Power, Schlaggar, Lessov-Schlaggar, & Petersen, 2013). The more centrally located a region, the more hub-like its function. This seems to be the case for tools, for which a lesion of the most centrally located component of its circuitry, the posterior region of the left middle temporal gyrus, produces a category-specific knowledge deficit for tools and their associated actions (Brambati et al., 2006, Campanella, D’Agostini, Skrap, & Shallice, 2010, Mahon et al., 2007, Tranel, Damasio, & Damasio, 1997, Tranel, Manzel, Asp, & Kemmerer, 2008).2

According to this view, information about a property is not strictly localized to a single region (as is suggested by the overlap approach), but rather is a manifestation of local computations performed in that region as well as a property of the circuit as a whole (cf. Behrmann & Plaut, 2013). Moreover, regions vary in their global connectivity, or “hubness” (i.e., the extent to which a region is interconnected with other brain regions) (see Buckner et al., 2009, Cole, Pathak, & Schneider, 2010, Gotts et al., 2012; and Power, Schlaggar, Lessov-Schlaggar, & Petersen, 2013, for approaches and data on the brain’s hub structure).
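Global connectivity, or "hubness," is often operationalized as a region's degree in a thresholded correlation network: the number of other regions with which it is strongly correlated. A minimal sketch on toy data (the region count, threshold, and coupling structure are arbitrary illustrative choices, not values from the studies cited):

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 6, 300

# Toy data: regions 0-3 form a tightly coupled circuit (shared signal);
# regions 4-5 fluctuate independently. Purely illustrative values.
data = rng.standard_normal((n_regions, n_timepoints))
data[:4] += rng.standard_normal(n_timepoints)

# "Hubness" as a simple degree count: how many other regions each
# region correlates with above an (arbitrary) threshold.
corr = np.corrcoef(data)
np.fill_diagonal(corr, 0.0)
hubness = (np.abs(corr) > 0.3).sum(axis=1)
print(hubness)
```

Members of the coupled circuit accumulate high degree counts, while the isolated regions score near zero; real hub analyses (e.g., Buckner et al., 2009; Power et al., 2013) use far richer graph measures, but the underlying intuition is the same.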

An advantage of this view is that it provides a framework for understanding how a lesion to a particular region or node of a circuit can sometimes produce a deficit for retrieving one type of category-related information, but not others, whereas other lesions seem to produce a true category-specific disorder characterized by a failure to retrieve all types of information about a particular category (Capitani, Laiacona, Mahon, & Caramazza, 2003). For example, in the domain of tools, some apraxic patients with damage to left posterior parietal cortex can no longer demonstrate an object’s use, but can still name it, whereas other patients with damage to left middle temporal gyrus seem to have more general losses of knowledge about tools and their actions (e.g., Tranel, Kemmerer, Adolphs, Damasio, & Damasio, 2003b), presumably as a result of disrupted connectivity or functional diaschisis (He et al., 2007, Price, Warburton, Moore, Frackowiak, & Friston, 2001; see Carrera & Tononi, 2014, for a recent review).

Once we accept that different forms of knowledge about a single object category (e.g., tools) can be dissociated, we are left with an additional puzzle. The neuropsychological evidence clearly shows that damage to left posterior parietal cortex can result in an inability to correctly use an object, without affecting the ability to visually recognize and name that object (Johnson-Frey, 2004, Negri et al., 2007, Rothi, Ochipa, & Heilman, 1991). If so, then why is parietal cortex active when subjects simply view and/or name tools? What is the functional role of that activity? One possibility is that this parietal activity does not reflect any function at all. Rather, it is simply due to activity that automatically propagates from other parts of the circuit necessary to perform the task at hand. Naming tools requires activity in temporal cortex. Thus, regions in posterior parietal cortex may become active merely as a by-product of temporal–parietal lobe connectivity; that activity might have no functional significance. Although this theory is logically possible, I do not think it is a serious contender. It takes a lot of metabolic energy to run a brain, and I doubt that systems have evolved to waste it (Raichle, 2006). Neural activity is never epiphenomenal; it always reflects some function, even though that function may not be readily apparent.

I think that there are two, non-mutually-exclusive purposes behind activity in the dorsal processing stream when naming tools. One possibility is that this activation is, in fact, part of the “full” representation of the concept of a tool (Mahon & Caramazza, 2008). Under that view, perception- and action-related properties are both constitutive, essential components of the full concept of a particular tool. Removal of one of these components—for example, action-related information—as a consequence of brain injury or disease would result in an impoverished concept. The concept of that tool would nevertheless remain grounded, but now by perceptual systems alone (for a different interpretation, see Mahon & Caramazza, 2008).

Another possibility is that parietal activity reflects the spread of activity to a function that typically occurs following object identification. For example, I have previously argued that the hippocampus is active when we name objects not because it is necessary to name them (it is not), but rather because it is necessary to be able to recall having named them (Martin, 1999). In a similar fashion, and consistent with the well-established role of the dorsal stream in action representation (Goodale & Milner, 1992), parietal as well as premotor activity associated with viewing tools might reflect a prediction or prime for future action (Martin, 2009, Simmons & Martin, 2012). Experience has taught us that seeing some objects is followed by an action. Activating the dorsal stream when viewing a tool may be a prime to use it. Activating the insula when viewing an appetizing food may be a prime to eat it—a phenomenon that the advertising industry has long been aware of.

Concluding comment

The GRAPES model provides a framework for understanding how information about object-associated properties is organized in the brain. A central advance of this model over previous formulations is a deeper recognition and understanding of the role played by the brain’s large-scale, intrinsic circuitry in providing dynamic links between regions representing the salient properties associated with specific object categories.

Many of these properties are situated within the ventral and dorsal processing streams that play a fundamental role in object and action representation. An ever-increasing body of data from monkey neuroanatomy and neurophysiology, and from human neuroimaging, is providing a more detailed understanding of this circuitry. One major implication of these findings is that the notion of serial, hierarchically organized processing streams is no longer tenable. Instead, these large-scale systems are best characterized by discrete, yet highly interactive circuits, which, in turn, are composed of multiple, recurrent feedforward and feedback loops (see Kravitz, Saleem, Baker, & Mishkin, 2011, Kravitz, Saleem, Baker, Ungerleider, & Mishkin, 2013, for a detailed overview and compelling synthesis of these findings). This type of architecture is assumed to characterize the category-specific circuits discussed here and to underpin the dynamic interaction between higher-order conceptual, perceptual, and lower-order sensory regions in the service of specific task and bodily demands.

The emphasis on grounded circuitry may also inform our understanding of how abstract concepts are organized. Imaging studies of social and social–emotional concepts (such as “brave,” “honor,” “generous,” “impolite,” and “convince”) have consistently implicated the most anterior extent of the superior temporal gyrus/sulcus (STG/STS; Simmons, Reddish, Bellgowan, & Martin, 2010, Wilson-Mendenhall, Simmons, Martin, & Barsalou, 2013, Zahn et al., 2007; for reviews, see Olson, McCoy, Klobusicky, & Ross, 2013, Simmons & Martin, 2009, Wong & Gallate, 2012), as well as medial—especially ventromedial—prefrontal cortex (Mitchell, Heatherton, & Macrae, 2002, Roy, Shohamy, & Wager, 2012, Wilson-Mendenhall et al., 2013). One might think that any conceptual information represented in these very anterior brain regions would be disconnected from, rather than grounded in, action and perceptual systems. Yet the circuitry connecting these regions with other areas of the brain suggests otherwise.

Tract-tracing studies of the macaque brain (Saleem, Kondo, & Price, 2008) and task-based (Burnett & Blakemore, 2009) and resting-state (Gotts et al., 2012, Simmons & Martin, 2012, Simmons et al., 2010) functional connectivity studies of the human brain have revealed strong connectivity between these anterior temporal and prefrontal regions. For example, anterior STG/STS, but not anterior ventral temporal cortex, is strongly connected to medial prefrontal cortex (Saleem et al., 2008). In addition, human functional-imaging studies have shown that both of these regions are part of a broader circuit implicated in multiple aspects of social functioning in typically developing individuals (for reviews, see Adolphs, 2009, Frith & Frith, 2007) and in social dysfunction in autistic subjects (e.g., Ameis & Catani, 2015, Gotts et al., 2012, Libero et al., 2014, Uddin et al., 2011, Wallace et al., 2010).

So, how are these social and social–emotional concepts grounded? They are grounded by virtue of being situated within circuitry that includes regions for perceiving and representing biological form (lateral region of the fusiform gyrus) and biological motion (posterior STS) and for recognizing emotion (the amygdala) (Burnett & Blakemore, 2009, Gotts et al., 2012, Simmons & Martin, 2012, Simmons et al., 2010). Clearly, much work remains to be done in uncovering the roles of the anterior temporal and frontal cortices in representing our social world. Nevertheless, these data provide an example of how even abstract concepts may be grounded in our action, perception, and emotion systems.

Footnotes

  1.

    Multivariate pattern analysis methods (Haxby, Connolly, & Guntupalli, 2014) can be used to make claims about the degree of similarity in underlying representations, but they cannot determine whether the representations are identical.

  2.

    It is of interest to note that this is one case in which the neuroimaging data preceded the lesion data; the prominence of the left posterior middle temporal gyrus for identifying tools and retrieving action knowledge was established prior to the neuropsychological findings. It is also noteworthy that these patient lesion data on loss of knowledge about tools stand as a serious challenge to proponents of a single, amodal semantic hub (e.g., Lambon Ralph, Sage, Jones, & Mayberry, 2010, Patterson, Nestor, & Rogers, 2007).

Notes

Acknowledgements

I thank Chris Baker, Alfonso Caramazza, Anjan Chatterjee, Dwight Kravitz, Brad Mahon, and the members of my laboratory for their critical comments on earlier versions of this manuscript. This work was supported by the National Institute of Mental Health, National Institutes of Health, Division of Intramural Research (1 ZIA MH 002588-25; NCT01031407). The views expressed in this article do not necessarily represent the views of the NIMH, NIH, HHS, or the United States Government.

References

  1. Adolphs, R. (2009). The social brain: Neural basis of social knowledge. Annual Review of Psychology, 60, 693–716. doi: 10.1146/annurev.psych.60.110707 PubMedPubMedCentralCrossRefGoogle Scholar
  2. Allport, D. A. (1985). Distributed memory, modular subsystems and dysphasia. In S. P. Newman & R. Epstein (Eds.), Current perspectives in dysphasia (pp. 32–60). New York, NY: Churchill Livingstone.Google Scholar
  3. Amedi, A., Malach, R., Hendler, T., Peled, S., & Zohary, E. (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience, 4, 687–689.CrossRefGoogle Scholar
  4. Amedi, A., Jacobson, G., Hendler, T., Malach, R., & Zohary, E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12, 1202–1212.PubMedCrossRefGoogle Scholar
  5. Amedi, A., Stern, W. M., Camprodon, J. A., Bermpohl, F., Merabet, L., Rotman, S., . . . Pascual-Leone, A. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience, 10, 324–330.Google Scholar
  6. Ameis, S. H., & Catani, M. (2015). Altered white matter connectivity as a neural substrate for social impairment in Autism Spectrum Disorder. Cortex, 62, 158–181. doi: 10.1016/j.cortex.2014.10.014 PubMedCrossRefGoogle Scholar
  7. Amsel, B. D., Urbach, T. P., & Kutas, M. (2014). Empirically grounding grounded cognition: The case of color. NeuroImage, 99, 149–157. doi: 10.1016/j.neuroimage.2014.05.025 PubMedPubMedCentralCrossRefGoogle Scholar
  8. Baldassano, C., Beck, D. M., & Fei-Fei, L. (2013). Differential connectivity within the parahippocampal place area. NeuroImage, 75, 228–237.PubMedPubMedCentralCrossRefGoogle Scholar
  9. Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660. doi: 10.1017/S0140525X99002149.
  10. Beauchamp, M. S., Haxby, J. V., Jennings, J. E., & DeYoe, E. A. (1999). An fMRI version of the Farnsworth–Munsell 100-hue test reveals multiple color-selective areas in human ventral occipitotemporal cortex. Cerebral Cortex, 9, 257–263.PubMedCrossRefGoogle Scholar
  11. Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2002). Parallel visual motion processing streams for manipulable objects and human movements. Neuron, 34, 149–159.PubMedCrossRefGoogle Scholar
  12. Beauchamp, M. S., Lee, K. E., Haxby, J. V., & Martin, A. (2003). fMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience, 15, 991–1001. doi: 10.1162/089892903770007380 PubMedCrossRefGoogle Scholar
  13. Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual cognition. Trends in Cognitive Sciences, 17, 210–219.PubMedCrossRefGoogle Scholar
  14. Brambati, S. M., Myers D., Wilson A., Rankin, K. P., Allison, S. C., Rosen H. J., . . . Gorno-Tempini, M. L. (2006). The anatomy of category-specific object naming in neurodegenerative diseases. Journal of Cognitive Neuroscience, 18, 1644–1653.Google Scholar
  15. Buckner, R. L., Sepulcre, J., Talukdar, T., Krienen, F. M., Liu, H., Hedden, T., . . . Johnson, K. A. (2009). Cortical hubs revealed by intrinsic functional connectivity: Mapping, assessment of stability, and relation to Alzheimer’s disease. Journal of Neuroscience, 29, 1860–1873. doi: 10.1523/JNEUROSCI.5062-08.2009
  16. Buffalo, E. A., Fries, P., Landman, R., Buschman, T. J., & Desimone, R. (2011). Laminar differences in gamma and alpha coherence in the ventral stream. Proceedings of the National Academy of Sciences, 108, 11262–11267.CrossRefGoogle Scholar
  17. Burnett, S., & Blakemore, S.-J. (2009). Functional connectivity during a social emotion task in adolescents and adults. European Journal of Neuroscience, 29, 1294–1301.PubMedPubMedCentralCrossRefGoogle Scholar
  18. Campanella, F., D’Agostini, S., Skrap, M., & Shallice, T. (2010). Naming manipulable objects: Anatomy of a category specific effect in left temporal tumours. Neuropsychologia, 48, 1583–1597.PubMedCrossRefGoogle Scholar
  19. Capitani, E., Laiacona, M., Mahon, B., & Caramazza, A. (2003). What are the facts of semantic category-specific deficits? A critical review of the clinical evidence. Cognitive Neuropsychology, 20, 213–261.PubMedCrossRefGoogle Scholar
  20. Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the brain: The animate-inanimate distinction. Journal of Cognitive Neuroscience, 10, 1–34.PubMedCrossRefGoogle Scholar
  21. Caramazza, A., Hillis, A. E., Rapp, B. C., & Romani, C. (1990). The multiple semantics hypothesis: Multiple confusions? Cognitive Neuropsychology, 7, 161–189.CrossRefGoogle Scholar
  22. Carey, S. (2009). The origin of concepts. New York, NY: Oxford University Press.CrossRefGoogle Scholar
  23. Carrera, E., & Tononi, G. (2014). Diaschisis: Past, present, future. Brain, 137, 2408–2422. doi: 10.1093/brain/awu101 PubMedCrossRefGoogle Scholar
  24. Chao, L. L., & Martin, A. (1999). Cortical regions associated with perceiving, naming, and knowing about colors. Journal of Cognitive Neuroscience, 11, 25–35.PubMedCrossRefGoogle Scholar
  25. Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. NeuroImage, 12, 478–484.PubMedCrossRefGoogle Scholar
  26. Chouinard, P. A., & Goodale, M. A. (2010). Category-specific neural processing for naming pictures of animals and naming pictures of tools: An ALE meta-analysis. Neuropsychologia, 48, 409–418. doi: 10.1016/j.neuropsychologia.2009.09.032 PubMedCrossRefGoogle Scholar
  27. Cole, M. W., Pathak, S., & Schneider, W. (2010). Identifying the brain’s most globally connected regions. NeuroImage, 49, 3132–3148.PubMedCrossRefGoogle Scholar
  28. Cree, G. S., & McRae, K. (2003). Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General, 132, 163–201. doi: 10.1037/0096-3445.132.2.163 CrossRefGoogle Scholar
  29. Damasio, A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 25–62.PubMedCrossRefGoogle Scholar
  30. Damasio, H., Tranel, D., Grabowski, T., Adolphs, R., & Damasio, A. (2004). Neural systems behind word and concept retrieval. Cognition, 92, 179–229. doi: 10.1016/j.cognition.2002.07.001 PubMedCrossRefGoogle Scholar
  31. Deen, B., & McCarthy, G. (2010). Reading about the actions of others: Biological motion imagery and action congruency influence brain activity. Neuropsychologia, 48, 1607–1615.PubMedPubMedCentralCrossRefGoogle Scholar
  32. Felleman, D. J., & van Essen, D. C. (1991). Distributed hierarchical processing in the primate cortex. Cerebral Cortex, 1, 1–47.PubMedCrossRefGoogle Scholar
  33. Foxworthy, W. A., Clemo, H. R., & Meredith, M. A. (2013). Laminar and connectional organization of a multisensory cortex. Journal of Comparative Neurology, 521, 1867–1890.PubMedPubMedCentralCrossRefGoogle Scholar
  34. Frith, C. D., & Frith, U. (2007). Social cognition in humans. Current Biology, 17, R724–R732.PubMedCrossRefGoogle Scholar
  35. Glenberg, A. M., & Gallese, V. (2012). Action-based language: A theory of language acquisition, comprehension, and production. Cortex, 48, 905–922. doi: 10.1016/j.cortex.2011.04.010 PubMedCrossRefGoogle Scholar
  36. Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.PubMedCrossRefGoogle Scholar
  37. Gotts, S. J., Simmons, W. K., Milbury, L. A., Wallace, G. L., Cox, R. W., & Martin, A. (2012). Fractionation of social brain circuits in autism spectrum disorders. Brain, 135, 2711–2725. doi: 10.1093/brain/aws160 PubMedPubMedCentralCrossRefGoogle Scholar
  38. Grafton, S. T., Fadiga, L., Arbib, M. A., & Rizzolatti, G. (1997). Premotor cortex activation during observation and naming of familiar tools. NeuroImage, 6, 231–236.PubMedCrossRefGoogle Scholar
  39. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.CrossRefGoogle Scholar
  40. Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). A distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.PubMedCrossRefGoogle Scholar
  41. Haxby, J. V., Connolly, A. C., & Guntupalli, J. S. (2014). Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience, 37, 435–456. doi: 10.1146/annurev-neuro-062012-170325 PubMedCrossRefGoogle Scholar
  42. He, B. J., Snyder, A. Z., Vincent, J. L., Epstein, A., Shulman, G. L., & Corbetta, M. (2007). Breakdown of functional connectivity in frontoparietal networks underlies behavioral deficits in spatial neglect. Neuron, 53, 905–918.PubMedCrossRefGoogle Scholar
  43. Howard, R. J., ffytche, D. H., Barnes, J., McKeefry, D., Ha, Y., Woodruff, P. W., . . . Brammer, M. (1998). The functional anatomy of imagining and perceiving colour. NeuroReport, 9, 1019–1023.Google Scholar
  44. Hsu, N. S., Kraemer, D. J. M., Oliver, R. T., Schlichting, M. L., & Thompson-Schill, S. L. (2011). Color, context, and cognitive style: Variations in color knowledge retrieval as a function of task and subject variables. Journal of Cognitive Neuroscience, 23, 2544–2557.PubMedCrossRefGoogle Scholar
  45. Hsu, N. S., Frankland, S. M., & Thompson-Schill, S. L. (2012). Chromaticity of color perception and object color knowledge. Neuropsychologia, 50, 327–333. doi: 10.1016/j.neuropsychologia.2011.12.003 PubMedCrossRefGoogle Scholar
  46. Humphreys, G., & Forde, E. M. E. (2001). Hierarchies, similarity, and interactivity in object recognition: “Category-specific” neuropsychological deficits. Behavioral and Brain Sciences, 24, 453–509.PubMedGoogle Scholar
  47. Hutchison, R. M., Culham, J. C., Everling, S., Flanagan, J. R., & Gallivan, J. P. (2014). Distinct and distributed functional connectivity patterns across cortex reflect the domain-specific constraints of object, face, scene, body, and tool category-selective modules in the ventral visual pathway. NeuroImage, 96, 216–236. doi: 10.1016/j.neuroimage.2014.03.068 PubMedCrossRefGoogle Scholar
  48. Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76, 1210–1224. doi: 10.1016/j.neuron.2012.10.014 PubMedPubMedCentralCrossRefGoogle Scholar
  49. Johnson-Frey, S. H. (2004). The neural bases of complex tool use in humans. Trends in Cognitive Sciences, 8, 71–78.PubMedCrossRefGoogle Scholar
  50. Kellenbach, M. L., Brett, M., & Patterson, K. (2003). Actions speak louder than functions: The importance of manipulability and action in tool representation. Journal of Cognitive Neuroscience, 15, 20–46. doi: 10.1162/089892903321107800 CrossRefGoogle Scholar
  51. Kiefer, M., & Pulvermüller, F. (2012). Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex, 48, 805–825. doi: 10.1016/j.cortex.2011.04.006 PubMedCrossRefGoogle Scholar
  52. Killgore, W. D. S., Young, A. D., Femia, L. A., Bogorodzki, P., Rogowska, J., & Yurgelun-Todd, D. A. (2003). Cortical and limbic activation during viewing of high- versus low-calorie foods. NeuroImage, 19, 1381–1394.PubMedCrossRefGoogle Scholar
  53. Kosslyn, S. M., Thompson, W. L., & Ganis, G. (2006). The case for mental imagery. New York, NY: Oxford University Press.CrossRefGoogle Scholar
  54. Kravitz, D. J., Saleem, K. S., Baker, C. I., & Mishkin, M. (2011). A new neural framework for visuospatial processing. Nature Reviews Neuroscience, 12, 217–230. doi: 10.1038/nrn3008 PubMedPubMedCentralCrossRefGoogle Scholar
  55. Kravitz, D. J., Saleem, K. S., Baker, C. I., Ungerleider, L. G., & Mishkin, M. (2013). The ventral visual pathway: An expanded neural framework for the processing of object quality. Trends in Cognitive Sciences, 17, 26–49. doi: 10.1016/j.tics.2012.10.011 PubMedCrossRefGoogle Scholar
  56. Lambon Ralph, M. A., Sage, K., Jones, R. W., & Mayberry, E. J. (2010). Coherent concepts are computed in the anterior temporal lobes. Proceedings of the National Academy of Sciences, 107, 2717–2722.CrossRefGoogle Scholar
  57. Libero, L. E., DeRamus, T. P., Deshpande, H. D., & Kana, R. K. (2014). Surface-based morphometry of the cortical architecture of autism spectrum disorders: Volume, thickness, area, and gyrification. Neuropsychologia, 62, 1–10. doi: 10.1016/j.neuropsychologia.2014.07.001 PubMedCrossRefGoogle Scholar
  58. Mahon, B. Z. (2015). What is embodied about cognition? Language, Cognition and Neuroscience, 30, 420–429. doi: 10.1080/23273798.2014.987791 PubMedCrossRefGoogle Scholar
  59. Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70. doi: 10.1016/j.jphysparis.2008.03.004 PubMedGoogle Scholar
  60. Mahon, B. Z., Milleville, S. C., Negri, G. A. L., Rumiati, R. I., Caramazza, A., & Martin, A. (2007). Action-related properties shape object representations in the ventral stream. Neuron, 55, 507–520. doi: 10.1016/j.neuron.2007.07.011 PubMedPubMedCentralCrossRefGoogle Scholar
  61. Mahon, B. Z., Anzellotti, S., Schwarzbach, J., Zampini, M., & Caramazza, A. (2009). Category-specific organization in the human brain does not require visual experience. Neuron, 63, 397–405.PubMedPubMedCentralCrossRefGoogle Scholar
  62. Mahon, B. Z., Kumar, N., & Almeida, J. (2013). Spatial frequency tuning reveals interactions between the dorsal and ventral visual systems. Journal of Cognitive Neuroscience, 25, 862–871.PubMedPubMedCentralCrossRefGoogle Scholar
  63. Martin, A. (1998). The organization of semantic knowledge and the origin of words in the brain. In N. G. Jablonski & L. C. Aiello (Eds.), The origins and diversification of language (pp. 69–88). San Francisco, CA: California Academy of Sciences.Google Scholar
  64. Martin, A. (1999). Automatic activation of the medial temporal lobe during encoding: Lateralized influences of meaning and novelty. Hippocampus, 9, 62–70.PubMedCrossRefGoogle Scholar
  65. Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45. doi: 10.1146/annurev.psych.57.102904.190143 PubMedCrossRefGoogle Scholar
  66. Martin, A. (2009). Circuits in mind: The neural foundations for object concepts. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (4th ed., pp. 1031–1045). Cambridge, MA: MIT Press.Google Scholar
  67. Martin, A., Haxby, J. V., Lalonde, F. M., Wiggs, C. L., & Ungerleider, L. G. (1995). Discrete cortical regions associated with knowledge of color and knowledge of action. Science, 270, 102–105. doi: 10.1126/science.270.5233.102 PubMedCrossRefGoogle Scholar
  68. Martin, A., Ungerleider, L. G., & Haxby, J. V. (2000). Category specificity and the brain: The sensory–motor model of semantic representations of objects. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (2nd ed., pp. 1023–1036). Cambridge, MA: MIT Press.Google Scholar
  69. McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4, 310–322.PubMedCrossRefGoogle Scholar
  70. McKeefry, D. J., & Zeki, S. (1997). The position and topography of the human colour centre as revealed by functional magnetic resonance imaging. Brain, 120, 2229–2242.PubMedCrossRefGoogle Scholar
  71. Meyer, K., Kaplan, J. T., Essex, R., Webber, C., Damasio, H., & Damasio, A. (2010). Predicting visual stimuli on the basis of activity in auditory cortices. Nature Neuroscience, 13, 667–668.PubMedCrossRefGoogle Scholar
  72. Miceli, G., Fouch, E., Capasso, R., Shelton, J. R., Tomaiuolo, F., & Caramazza, A. (2001). The dissociation of color from form and function knowledge. Nature Neuroscience, 4, 662–667.PubMedCrossRefGoogle Scholar
  73. Mitchell, J. P., Heatherton, T. F., & Macrae, C. N. (2002). Distinct neural systems subserve person and object knowledge. Proceedings of the National Academy of Sciences, 99, 15238–15243.CrossRefGoogle Scholar
  74. Murphey, D. K., Yoshor, D., & Beauchamp, M. S. (2008). Perception matches selectivity in the human anterior color center. Current Biology, 18, 216–220. doi: 10.1016/j.cub.2008.01.013 PubMedCrossRefGoogle Scholar
  75. Murphy, G. L. (2002). The big book of concepts. Cambridge, MA: MIT Press.Google Scholar
  76. Negri, G. A. L., Rumiati, R. I., Zadini, A., Ukmar, M., Mahon, B. Z., & Caramazza, A. (2007). What is the role of motor simulation in action & object recognition? Evidence from apraxia. Cognitive Neuropsychology, 24, 795–816.PubMedCrossRefGoogle Scholar
  77. Noppeney, U., Friston, K. J., & Price, C. J. (2003). Effects of visual deprivation on the organization of the semantic system. Brain, 126, 1620–1627.PubMedCrossRefGoogle Scholar
  78. O’Neil, E. B., Hutchison, R. M., McLean, D. A., & Köhler, S. (2014). Resting-state fMRI reveals functional connectivity between face-selective perirhinal cortex and the fusiform face area related to face inversion. NeuroImage, 92, 349–355. doi: 10.1016/j.neuroimage.2014.02.005 PubMedCrossRefGoogle Scholar
  79. Olson, I. R., McCoy, D., Klobusicky, E., & Ross, L. A. (2013). Social cognition and the anterior temporal lobes: A review and theoretical framework. Social Cognitive and Affective Neuroscience, 8, 123–133. doi: 10.1093/scan/nss119 PubMedPubMedCentralCrossRefGoogle Scholar
  80. Patterson, K., Nestor, P. J., & Rogers, T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8, 976–987. doi: 10.1038/nrn2277 PubMedCrossRefGoogle Scholar
  81. Paulesu, E., Harrison, J., Baron-Cohen, S., Watson, J. D., Goldstein, L., Heather, J., . . . Frith, C. D. (1995). The physiology of coloured hearing. A PET activation study of colour–word synaesthesia. Brain, 118, 661–676.Google Scholar
  82. Pietrini, P., Furey, M. L., Ricciardi, E., Gobbini, M. I., Wu, W.-H. C., Cohen, L., . . . Haxby, J. V. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences, 101, 5658–5663. doi: 10.1073/pnas.0400707101
  83. Plaut, D. C. (2002). Graded modality-specific specialization in semantics: A computational account of optic aphasia. Cognitive Neuropsychology, 19, 603–639.PubMedCrossRefGoogle Scholar
  84. Power, J. D., Schlaggar, B. L., Lessov-Schlaggar, C. N., & Petersen, S. E. (2013). Evidence for hubs in human functional brain networks. Neuron, 79, 798–813. doi: 10.1016/j.neuron.2013.07.035 PubMedCrossRefGoogle Scholar
  85. Price, C. J., Warburton, E. A., Moore, C. J., Frackowiak, R. S. J., & Friston, K. J. (2001). Dynamic diaschisis: Anatomically remote and context-sensitive human brain lesions. Journal of Cognitive Neuroscience, 13, 419–429.PubMedCrossRefGoogle Scholar
  86. Pylyshyn, Z. (2003). Return of the mental image: Are there really pictures in the brain? Trends in Cognitive Sciences, 7, 113–118. doi: 10.1016/S1364-6613(03)00003-2 PubMedCrossRefGoogle Scholar
  87. Raichle, M. E. (2006). The brain’s dark energy. Science, 314, 1249–1250. doi: 10.1126/science.1134405 PubMedCrossRefGoogle Scholar
  88. Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439. doi: 10.1016/0010-0285(76)90013-X CrossRefGoogle Scholar
  89. Rothi, L. J., Ochipa, Z. C., & Heilman, K. M. (1991). A cognitive neuropsychological model of limb apraxis. Cognitive Neuropsychology, 8, 443–458.CrossRefGoogle Scholar
  90. Roy, M., Shohamy, D., & Wager, T. D. (2012). Ventromedial prefrontal–subcortical systems and the generation of affective meaning. Trends in Cognitive Sciences, 16, 147–156. doi: 10.1016/j.tics.2012.01.005
  91. Saleem, K. S., Kondo, H., & Price, J. L. (2008). Complementary circuits connecting the orbital and medial prefrontal networks with the temporal, insular, and opercular cortex in the macaque monkey. Journal of Comparative Neurology, 506, 659–693.
  92. Saygin, A. P., McCullough, S., Alac, M., & Emmorey, K. (2010). Modulation of BOLD response in motion-sensitive lateral temporal cortex by real and fictive motion sentences. Journal of Cognitive Neuroscience, 22, 2480–2490. doi: 10.1162/jocn.2009.21388
  93. Shallice, T. (1988). Specialization within the semantic system. Cognitive Neuropsychology, 5, 133–142. doi: 10.1080/02643298808252929
  94. Shallice, T. (1993). Multiple semantics: Whose confusions? Cognitive Neuropsychology, 10, 251–261. doi: 10.1080/02643299308253463
  95. Shuren, J. E., Brott, T. G., Schefft, B. K., & Houston, W. (1996). Preserved color imagery in an achromatopsic. Neuropsychologia, 34, 485–489.
  96. Simmons, W. K., & Barsalou, L. W. (2003). The similarity-in-topography principle: Reconciling theories of conceptual deficits. Cognitive Neuropsychology, 20, 451–486. doi: 10.1080/02643290342000032
  97. Simmons, W. K., & Martin, A. (2009). The anterior temporal lobes and the functional architecture of semantic memory. Journal of the International Neuropsychological Society, 15, 645–649.
  98. Simmons, W. K., & Martin, A. (2012). Spontaneous resting-state BOLD fluctuations map domain-specific neural networks. Social Cognitive and Affective Neuroscience, 7, 467–475. doi: 10.1093/scan/nsr018
  99. Simmons, W. K., Martin, A., & Barsalou, L. W. (2005). Pictures of appetizing foods activate gustatory cortices for taste and reward. Cerebral Cortex, 15, 1602–1608.
  100. Simmons, W. K., Ramjee, V., Beauchamp, M. S., McRae, K., Martin, A., & Barsalou, L. W. (2007). A common neural substrate for perceiving and knowing about color. Neuropsychologia, 45, 2802–2810. doi: 10.1016/j.neuropsychologia.2007.05.002
  101. Simmons, W. K., Reddish, M., Bellgowan, P. S. F., & Martin, A. (2010). The selectivity and functional connectivity of the anterior temporal lobes. Cerebral Cortex, 20, 813–825.
  102. Simmons, W. K., Rapuano, K. M., Kallman, S. J., Ingeholm, J. E., Miller, B., Gotts, S. J., . . . Martin, A. (2013). Category-specific integration of homeostatic signals in caudal but not rostral human insula. Nature Neuroscience, 16, 1551–1552. doi: 10.1038/nn.3535
  103. Small, D. M. (2010). Taste representation in the human insula. Brain Structure and Function, 214, 551–561.
  104. Smith, F. W., & Goodale, M. A. (2015). Decoding visual object categories in early somatosensory cortex. Cerebral Cortex, 25, 1020–1031. doi: 10.1093/cercor/bht292
  105. Stasenko, A., Garcea, F. E., Dombovy, M., & Mahon, B. Z. (2014). When concepts lose their color: A case of object color knowledge impairment. Cortex, 58, 217–238.
  106. Stevens, W. D., Buckner, R. L., & Schacter, D. L. (2010). Correlated low-frequency BOLD fluctuations in the resting human brain are modulated by recent experience in category-preferential visual regions. Cerebral Cortex, 20, 1997–2006.
  107. Stevens, W. D., Tessler, M. H., Peng, C. S., & Martin, A. (2015). Functional connectivity constrains the category-related organization of human ventral occipitotemporal cortex. Human Brain Mapping. doi: 10.1002/hbm.22764
  108. Thompson-Schill, S. L. (2003). Neuroimaging studies of semantic memory: Inferring “how” from “where.” Neuropsychologia, 41, 280–292. doi: 10.1016/S0028-3932(02)00161-6
  109. Tranel, D., Damasio, H., & Damasio, A. R. (1997). A neural basis for retrieving conceptual knowledge. Neuropsychologia, 35, 1319–1327.
  110. Tranel, D., Damasio, H., Eichhorn, G. R., Grabowski, T., Ponto, L. L. B., & Hichwa, R. D. (2003a). Neural correlates of naming animals from their characteristic sounds. Neuropsychologia, 41, 847–854. doi: 10.1016/S0028-3932(02)00223-3
  111. Tranel, D., Kemmerer, D., Adolphs, R., Damasio, H., & Damasio, A. R. (2003b). Neural correlates of conceptual knowledge for actions. Cognitive Neuropsychology, 20, 409–432. doi: 10.1080/02643290244000248
  112. Tranel, D., Manzel, K., Asp, E., & Kemmerer, D. (2008). Naming dynamic and static actions: Neuropsychological evidence. Journal of Physiology-Paris, 102, 80–94.
  113. Turk-Browne, N. B., Norman-Haignere, S. V., & McCarthy, G. (2010). Face-specific resting functional connectivity between the fusiform gyrus and posterior superior temporal sulcus. Frontiers in Human Neuroscience, 4, 176. doi: 10.3389/fnhum.2010.00176
  114. Uddin, L. Q., Menon, V., Young, C. B., Ryali, S., Chen, T., Khouzam, A., . . . Hardan, A. Y. (2011). Multivariate searchlight classification of structural magnetic resonance imaging in children and adolescents with autism. Biological Psychiatry, 70, 833–841.
  115. van der Laan, L. N., de Ridder, D. T. E., Viergever, M. A., & Smeets, P. A. M. (2011). The first taste is always with the eyes: A meta-analysis on the neural correlates of processing visual food cues. NeuroImage, 55, 296–303.
  116. van Kerkoerle, T., Self, M. W., Dagnino, B., Gariel-Mathis, M.-A., Poort, J., van der Togt, C., & Roelfsema, P. R. (2014). Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proceedings of the National Academy of Sciences, 111, 14332–14341. doi: 10.1073/pnas.1402773111
  117. Wallace, G. L., Dankner, N., Kenworthy, L., Giedd, J. N., & Martin, A. (2010). Age-related temporal and parietal cortical thinning in autism spectrum disorders. Brain, 133, 3745–3754. doi: 10.1093/brain/awq279
  118. Warrington, E. K., & McCarthy, R. (1987). Categories of knowledge—Further fractionations and an attempted integration. Brain, 110, 1273–1296.
  119. Warrington, E. K., & Shallice, T. (1984). Category specific semantic impairments. Brain, 107, 829–853. doi: 10.1093/brain/107.3.829
  120. Wiggs, C. L., Weisberg, J. A., & Martin, A. (1999). Neural correlates of semantic and episodic memory retrieval. Neuropsychologia, 37, 103–118.
  121. Wilson, A., & Foglia, L. (2011). Embodied cognition. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2011 ed.). Retrieved from http://plato.stanford.edu/archives/fall2011/entries/embodied-cognition/
  122. Wilson-Mendenhall, C., Simmons, W. K., Martin, A., & Barsalou, L. W. (2013). Contextual processing of abstract concepts reveals neural representations of non-linguistic semantic content. Journal of Cognitive Neuroscience, 25, 920–935.
  123. Wong, C., & Gallate, J. (2012). The function of the anterior temporal lobe: A review of the empirical evidence. Brain Research, 17, 94–116. doi: 10.1016/j.brainres.2012.02.017
  124. Zahn, R., Moll, J., Krueger, F., Huey, E. D., Garrido, G., & Grafman, J. (2007). Social concepts are represented in the superior anterior temporal cortex. Proceedings of the National Academy of Sciences, 104, 6430–6435.
  125. Zeki, S., Watson, J. D. G., Lueck, C. J., Friston, K. J., Kennard, C., & Frackowiak, R. S. J. (1991). A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11, 641–649.
  126. Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11. doi: 10.1037/0096-3445.135.1.1

Copyright information

© Psychonomic Society, Inc. (outside the USA) 2016

Authors and Affiliations

  1. Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, USA
