1 Introduction

The question of how we represent the meanings of concepts has been the subject of a large body of work in cognitive science. Researchers working in semantics and conceptual representations have taken a variety of approaches [1, 2] to this hotly debated question. Distributed models of conceptual representations are one group of models that have been suggested to describe the processes involved in understanding concepts. According to these models, the meanings of concepts are essentially componential: the whole meaning of a concept consists of smaller units of meaning (components of meaning) in an interconnected semantic network. These smaller units of meaning are called semantic features [3]. Distributed models of conceptual representations hold that every feature of a concept is represented by a node in a connectionist network, and that understanding the concept involves the co-activation of its feature nodes [4,5,6,7,8,9,10]. For example, the features < has eyes > and < has nose > are represented by their own units or nodes; when a concept that has these two features is processed in the mind, these nodes are co-activated.

Among the distributed models of conceptual representations, the Conceptual Structure Account (CSA) has been particularly influential. The CSA holds that the statistical characteristics of a concept's features structure the conceptual space of that concept [7, 9, 11,12,13]. These statistical characteristics give an internal semantic structure to the concept [3]. Two factors, feature distinctiveness and feature co-occurrence, have been claimed to interact to determine conceptual processing [7, 13,14,15,16,17]. Feature distinctiveness refers to how widely a feature is shared across a set of concepts: distinctive features are those that occur in only one or a few concepts. Feature co-occurrence is the extent to which two features occur together across concepts. Distinctive features allow people to distinguish among concepts that belong to the same category. For example, while the feature < has mane > is highly distinctive among living things, the feature < has eyes > occurs in a large number of living things and is thus a non-distinctive feature.
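As a rough illustration, both statistics can be computed from a small set of feature norms. The concepts and features below are hypothetical stand-ins for real property-norm data, and the 1/n operationalization of distinctiveness is one common convention rather than the CSA's only formal definition.

```python
# Toy feature norms: each concept maps to the set of features listed for it.
# These entries are hypothetical, not drawn from an actual norming study.
concepts = {
    "lion":  {"has eyes", "has fur", "has mane"},
    "tiger": {"has eyes", "has fur", "has stripes"},
    "horse": {"has eyes", "has fur", "has mane"},
}

def distinctiveness(feature):
    """1 / number of concepts possessing the feature (1.0 = fully distinctive)."""
    n = sum(feature in feats for feats in concepts.values())
    return 1.0 / n

def cooccurrence(f1, f2):
    """Proportion of concepts having f1 that also have f2."""
    with_f1 = [feats for feats in concepts.values() if f1 in feats]
    return sum(f2 in feats for feats in with_f1) / len(with_f1)

print(distinctiveness("has eyes"))          # shared by all three concepts -> low
print(distinctiveness("has stripes"))       # occurs in only one concept -> 1.0
print(cooccurrence("has mane", "has fur"))  # maned concepts that also have fur
```

On this toy data, < has stripes > is maximally distinctive while < has eyes > is not, mirroring the < has mane > versus < has eyes > contrast in the text.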

One of the claims of the CSA is that patterns of distinctive features in nonliving things are different from those in living things [14]. This model assumes that distinctive features of nonliving things are more strongly correlated than distinctive features of living things [16]. Summarizing the nature of the relationship among features in living and nonliving things from the perspective of the CSA, it has been argued that distinctive features of living things are weakly correlated with other features, while non-distinctive features are strongly correlated [14]. For nonliving things, both distinctive and non-distinctive features tend to be strongly correlated (p. 394). In these descriptions, all semantic features are considered to be at the same level. A question that might be raised is how the features of concepts are hierarchically structured within their semantic space. This article intends to answer this question by presenting a proposal for the hierarchical structure of semantic features and sub-features within the semantic space of concepts. Based on this proposal, the metaphorical understanding of abstract concepts in terms of concrete concepts is then discussed, and it is suggested that when an abstract concept is processed, the activation of low-level sub-features may take place in a variety of ways.

2 Sub-features in the semantic space

When we talk about < has eyes > as a semantic feature that is represented by a unit or node, we should not ignore that this semantic feature has a set of sub-features. For example, < color of eyes > can be seen as a sub-feature of < has eyes >: eyes might be black, blue, green, or many other colors. Similarly, < size of eyes > can be considered a sub-feature of < has eyes >: eyes come in a variety of sizes. An eye consists of a pupil, a retina, an eyelid, and many other parts, all of which can be considered sub-features of the single feature < has eyes >. Each sub-feature might itself have a set of lower-level sub-features. For example, the sub-feature < blue pupil > may be realized in different forms: it might be very light blue or very dark blue, and between these two extremes there is an effectively infinite number of shades, all of which count as blue. Likewise, the sub-feature < size of eyes > may be realized in different forms, from very small to very large, with an effectively infinite number of sizes in between. In fact, the structure of semantic space has a hierarchical order. Every component (semantic feature) has a set of sub-components (sub-features). In this hierarchical structure, every semantic feature, which is represented by a unit or node in the neural network, has a set of sub-features that can be realized in a variety of forms. In other words, sub-features can be seen as parameters that can take different values. This makes the structure of semantic space extremely complex: the semantic space of a concept can be seen as an endless hierarchical network of relations among features and sub-features.
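The hierarchy described above can be sketched as a small tree. The particular sub-features and values below are hypothetical, and real feature hierarchies would of course be far deeper and denser.

```python
# Hypothetical fragment of the hierarchical semantic space of < has eyes >:
# a feature with sub-features, each of which is a parameter whose values
# may themselves be refined at a still lower level.
has_eyes = {
    "color of eyes": {"blue": ["light blue", "dark blue"],
                      "green": [],
                      "black": []},
    "size of eyes":  {"small": [], "large": []},
    "parts of eye":  {"pupil": [], "retina": [], "eyelid": []},
}

def depth(node):
    """Number of hierarchical levels below a node (strings are leaf values)."""
    if isinstance(node, str) or not node:
        return 0
    children = node.values() if isinstance(node, dict) else node
    return 1 + max(depth(c) for c in children)

print(depth(has_eyes))  # -> 3: sub-feature -> value -> finer-grained value
```

Even this tiny fragment is three levels deep, which illustrates why the full network of features and sub-features becomes extremely complex.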

When a concept is processed in the mind of an individual, the nodes that represent some of these features and sub-features are activated. For example, when the concept of ‘human being’ is processed, the node that represents < has eyes > is activated. However, this feature has a large number of sub-features: the eyes of a human being may be brown, black, blue, or many other colors. If we assume that each color (each sub-feature) is represented by a sub-unit, the processing of this concept might be realized through a variety of neural activities. In other words, the neural activities that correspond to the concept of ‘human being’ cannot be completely the same for two different individuals, although there might be many similarities between them. For one individual, the node that represents ‘green eye’ might be activated, while for another it might be the node that represents ‘blue eye’. It cannot be said, therefore, that the processing of a concept corresponds to the activation of a fixed set of nodes. Of course, some nodes may be activated in the neural networks of all individuals; for example, the node that represents the feature < has eyes > is activated in every individual's neural network when the concept of ‘human being’ is processed. However, the nodes that represent the sub-features of this semantic feature may vary from one individual to another. One reason for this variation may be the differences between the retrospective experiences of individuals. For example, for someone who has mostly interacted with blue-eyed people, the node that represents ‘blue eye’ is activated when the concept of ‘human being’ is processed, whereas for someone who has mostly interacted with black-eyed people, the node that represents ‘black eye’ is activated. In other words, although many of the nodes activated during the understanding of this concept are shared by these two individuals, there are some differences between the nodes that represent sub-features. These parametric variations in the understanding of the same concept in the minds of two different individuals might be the result of the differences between their retrospective experiences.
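The contrast between shared feature nodes and experience-dependent sub-feature nodes can be made concrete with a minimal sketch; all node labels are hypothetical.

```python
# Activated nodes for the concept 'human being' in two hypothetical individuals:
# top-level feature nodes are shared, sub-feature nodes differ with experience.
individual_a = {"has eyes", "has nose", "eye color: blue"}
individual_b = {"has eyes", "has nose", "eye color: black"}

shared = individual_a & individual_b     # feature nodes activated in both
divergent = individual_a ^ individual_b  # sub-feature nodes that differ

print(sorted(shared))     # the common feature-level nodes
print(sorted(divergent))  # the experience-dependent sub-feature nodes
```

The intersection captures what is stable across individuals, while the symmetric difference captures the parametric variation attributed here to differing retrospective experiences.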

Distributed models of conceptual representations have mainly focused on the structure of semantic space in concrete concepts. How these models can be applied to describe the understanding of abstract concepts is a question that will be discussed throughout the following sections.

3 Abstract concepts and distributed models

Another important question concerns the representation of abstract concepts. An abstract concept is defined as a concept that cannot be pinned down to a concretely identifiable or clearly perceivable referent [18]. For example, we cannot imagine any perceivable referent for the concept of ‘freedom’. Although this concept might be associated with some concrete concepts, it does not have a concretely identifiable referent. Concrete concepts, on the other hand, have clearly perceivable features: a ‘chair’, for example, can be seen and touched through our sensory organs. Among the views that have been suggested to describe the process of understanding abstract concepts, the Context Availability Theory [19, 20] and the Dual Coding Theory [21, 22] have been particularly influential. The Context Availability Theory holds that concrete concepts are strongly associated with a small set of contexts, while abstract concepts are loosely associated with a much larger set of contexts [19]. The Dual Coding Theory [21, 22] posits a positive correlation between concreteness and imageability. According to this theory, abstract concepts are less imageable than concrete concepts: while there is a direct relationship between concrete concepts and images, there is no such direct relationship for abstract concepts.

In the previous sections, the description of concrete concepts by distributed models of conceptual representations was discussed. However, these models may also be used to describe the process of understanding abstract concepts. Since abstract concepts do not have a concretely identifiable referent, it is difficult to imagine any directly perceivable semantic features for them. Therefore, in order to describe the understanding of abstract concepts from the perspective of distributed models of conceptual representations, we have to rely on the semantic associations between abstract concepts and related concrete concepts. For example, the abstract concept of ‘democracy’ is associated with a set of concrete concepts such as jail, open protest, parliament, political leaders, dictators, the Statue of Liberty, and so on. All of these concrete concepts can be related to the abstract concept of ‘democracy’ in one way or another. Therefore, when this abstract concept is processed in the mind of an individual, a set of associated concrete concepts and their semantic components are activated. The important point is that the set of associated concrete concepts activated during the processing of an abstract concept may vary considerably from one individual to another, and so, therefore, may the nodes that represent the semantic features of those associated concrete concepts. In fact, when a given abstract concept is understood by two different individuals, the sets of activated nodes that represent the related semantic features may be very different.
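This associative route can be sketched as follows. The association sets and feature lists are entirely hypothetical; the point is only that two individuals with different associations end up activating different feature nodes for the same abstract concept.

```python
# Hypothetical associations from the abstract concept 'democracy' to concrete
# concepts, differing between two individuals, and hypothetical feature sets
# for those concrete concepts.
associations_a = {"democracy": {"parliament", "open protest"}}
associations_b = {"democracy": {"Statue of Liberty", "political leaders"}}

features = {
    "parliament":        {"building", "debate"},
    "open protest":      {"crowd", "slogans"},
    "Statue of Liberty": {"statue", "torch"},
    "political leaders": {"person", "speech"},
}

def activated(assoc, concept):
    """Union of the feature nodes of the concrete concepts associated with an abstract concept."""
    return set().union(*(features[c] for c in assoc[concept]))

print(activated(associations_a, "democracy"))
print(activated(associations_b, "democracy"))
```

Here the two individuals' activated feature sets for ‘democracy’ are disjoint, an extreme case of the cross-individual variation described above.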

Even the same concept might be processed differently in the mind of an individual in two different situations. This has been demonstrated by the findings of a number of priming studies (e.g., [23,24,25,26]). Depending on the context, the same word may be interpreted literally or metaphorically. For example, the word shark is interpreted metaphorically in the sentence My lawyer is a shark. In this sentence, the word shark represents a category that is defined by the semantic feature of ‘being aggressive and tenacious’ [27]. The rest of the semantic features of ‘shark’, which are metaphorically irrelevant, are inhibited [24]. In other words, those nodes that represent metaphorically-irrelevant features of ‘shark’ are inhibited when this term is interpreted in its metaphorical sense [28]. On the other hand, when this term is interpreted in its literal sense, a large number of nodes are activated, as the literal class of this term is defined by a large number of semantic features [29, 30]. Therefore, two very different mechanisms of neural activity may take place to derive two different meanings of the same word, one literal and one metaphorical. A receptive-oriented mode of processing, in which a large number of semantic features are activated, produces literal meaning; on the other hand, a suppressive-oriented mode of processing, in which a large number of semantic features are suppressed, produces metaphorical meaning [30].
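The two modes of processing described above can be contrasted in a toy sketch. The feature inventory for ‘shark’ and the relevance set are hypothetical simplifications of the account in [27, 28].

```python
# Literal vs. metaphorical interpretation of 'shark': the literal mode
# activates the full (hypothetical) feature set, while the metaphorical mode
# suppresses the metaphor-irrelevant features.
shark_features = {"has fins", "lives in water", "has sharp teeth",
                  "is aggressive", "is tenacious"}
metaphor_relevant = {"is aggressive", "is tenacious"}

def interpret(features, mode):
    """Return the activated feature nodes under each mode of processing."""
    if mode == "literal":
        return set(features)                # receptive-oriented: activate all
    return features & metaphor_relevant    # suppressive-oriented: inhibit the rest

print(interpret(shark_features, "literal"))
print(interpret(shark_features, "metaphorical"))
```

The metaphorical reading of My lawyer is a shark thus corresponds to the small surviving subset of nodes, consistent with the suppressive-oriented mode described in [30].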

4 Difference between abstract and concrete concepts

Another important difference between abstract and concrete concepts lies in the range of their interpretations across contexts. Concrete concepts are relatively fixed, and their interpretations in different cultural contexts are not very different. For example, the way the concept of ‘chair’ is understood among people living in southwest Asia is not very different from the way it is understood in Europe, because the features of this concept are received directly through sensory channels. Two people therefore cannot have very different interpretations of the whole structure of a chair, or imagine it in two very different ways. Although two chairs may differ in color, size, shape, and material, their overall structures and functions are very similar. Abstract concepts, however, are different in this regard. The abstract concept of ‘freedom’, for example, may be interpreted differently in various cultural contexts: for someone who lives in a rural area, ‘freedom’ may be understood as being free from the restrictions of modern life, the pressure of time, and the regulations imposed by an industrialized society. In other words, our understanding of abstract concepts is largely dependent on our retrospective experiences and the context in which the word is used. That is, depending on the context, the same abstract word may activate a different range of associated concepts, and each associated concept may have its own specific set of semantic features. Therefore, a wide range of semantic features can be associated with a single abstract concept. Depending on who understands the concept and in what context the word is used, some of these features, and the nodes that represent them, are activated in the neural network. Here, the important point is the dynamism involved in the understanding of abstract concepts.
The understanding of an abstract concept does not correspond to the activation of a fixed set of associated concepts or the activation of a clearly-defined set of semantic features. The interpretation of an abstract concept is dynamic and highly sensitive to the individual as well as the context of its use. It has been suggested that abstract concepts are understood through simulation of specific situations [31,32,33,34]. For example, while the concrete concept of ‘apple’ has many context-independent features (round, tart, etc.), the abstract concept of ‘power’ is much more context-dependent [34]. It has been reported that the understanding of abstract concepts is facilitated when contextual information is provided for the individual [35, 36]. These findings are consistent with the idea that situations are important for the understanding of abstract concepts [37].

However, when we talk about abstractness and concreteness, we must keep in mind that a concept may not be fully abstract or fully concrete [18]. It has been proposed that the relationship between concreteness and abstractness of concepts is not binary; rather, it is graded on a continuum [38]. Some concepts seem to be fully abstract, yet some degree of concreteness is revealed in their semantic features when they are closely examined. Conversely, the concept of ‘Euro’ may be seen as a fully concrete concept, with the concrete characteristics of size, color, and weight; however, its value cannot be defined by concrete characteristics that are perceived by our senses [39]. The extent to which a concept approaches absolute abstractness can be a factor that determines the width of the associated contexts and the strength of association between that concept and those contexts [21, 22]. As we move toward the abstract end of the continuum, the range of associated contexts becomes wider; therefore, the range of semantic features that may be activated during the understanding of the concept also becomes wider, and a large variety of feature sets may become activated. As we move toward the concrete end of the continuum, the set of associated semantic features becomes narrower and narrower. To summarize, the scope of interpretation and the range of associated semantic features seem to be two key differences between abstract and concrete concepts: abstract concepts are interpreted within wide scopes and in association with large sets of semantic features, while concrete concepts are interpreted within narrower scopes and in association with relatively small sets of semantic features. This difference is shown in Fig. 1, where the area of each circle shows the scope of interpretation, the number of squares shows the range of associated semantic features, and the width of the lines connecting the squares to the circles indicates the strength of association between the concepts and their associated features.

Fig. 1

Difference between abstract and concrete concepts

5 Metaphor and distributed models

Another critically important point concerns the tools we employ to understand abstract and concrete concepts. In many metaphors, an abstract concept is understood in terms of, and through the mediation of, a concrete concept. Although it might be difficult to imagine any similarity between the abstract (target) and concrete (source) concepts in such metaphors, they are easily understood. For example, in the metaphors Life is a journey and Anger is a pressurized container, there is no observable similarity between the source and target domains. However, these two metaphors are near-universal and are used across many languages of the world. Here, an abstract concept is understood through the mediation of a concrete concept, and this mediatory route may lead to the activation of the semantic features associated with the concrete concept. This idea is supported by the findings of a number of studies indicating that the processing of abstract domains involves the activation of concrete domains [40,41,42,43,44,45]. For example, the results of one experiment suggested that when subjects read words associated with powerful or powerless people, they were better at recognizing letters in metaphor-congruent than in metaphor-incongruent spatial locations [46].

A question that may be raised is how distributed models of conceptual representations can be applied to describe the understanding of metaphors. The metaphor He grasped the idea is perhaps a good example to discuss here. It has been proposed that the metaphorical use of the verb grasp and actual grasping involve the activation of the same sensory-motor areas [47]. This is the view taken by the strong version of embodied cognition, according to which imagining the action of grasping and actually performing it involve the same neural substrate [48]. In the metaphor He grasped the idea, the understanding of an idea is described in terms of grasping something: the concept of ‘understanding’ is understood as ‘grasping something’. From the perspective of distributed models of conceptual representations and the strong version of embodied cognition, the nodes that represent the semantic features of ‘grasping’ are activated when the metaphorical phrase grasping the idea is understood. When a metaphor is understood, the target domain (understanding) is mapped onto the source domain (grasping): a more-difficult-to-understand domain is understood in terms of an easier-to-understand domain. In this mapping between domains, the same neural substrates are used to understand both the source and target domains. In fact, it can be said that the source and target domains are isomorphic with each other at an abstract conceptual level, and the same neural networks are activated when the source and target domains of a metaphor are processed. This is a kind of deep isomorphic relationship between two concepts (or two domains) that might be very different at a superficial concrete level. This deep abstract relationship between the source and target domains of a metaphor is realized in the form of similar neural activities in the mind of an individual who understands the metaphor.
In fact, the target of a metaphor is understood through the activation of the nodes that represent the semantic features of the source.

In many metaphors, an abstract domain is understood in terms of a concrete domain: concretely perceivable features of the source domain are attributed to an abstract domain that has neither a clearly identifiable referent nor directly perceivable semantic features. In this process of mediatory understanding, the role of low-level sub-features may be critical. Low-level sub-features are small components of semantic features. Since abstract concepts are believed to be associated with a wider range of contexts, the activation of low-level sub-features may take place in a variety of ways when an abstract concept is processed. In other words, when an abstract concept is metaphorically understood in terms of a concrete concept, the ways in which low-level sub-features are activated may differ considerably from one individual to another. Cross-individual differences in understanding abstract concepts, and the effects of age, culture, and some mental disorders on abstract concept processing, have been discussed in some works (e.g., [49,50,51,52,53]). The role of sub-features that sit at a very low level in the hierarchical order of features may thus be especially important in the understanding of abstract concepts, and particularly of highly abstract ones.

6 Conclusion

The proposal discussed in this paper has some potentially important implications for the metaphorical understanding of concepts. Although the CSA has been used to describe the semantic representations of concepts, it has not specifically been used to describe the process of metaphorically understanding abstract concepts in terms of concrete concepts. The proposal discussed in this article suggests that the CSA can explain some aspects of metaphor comprehension that have not been properly explained by other theories of metaphor comprehension. Metaphor comprehension is a multi-dimensional and complex process, which is why a large number of theories have been suggested to describe how metaphors are processed in the mind. In this article, we discussed how distributed models in general, and the CSA in particular, can help us account for some specific aspects of metaphor comprehension. It must be noted, however, that these models cannot give a comprehensive picture of all dimensions of metaphor comprehension for all types of metaphor. Like other models of metaphor comprehension, they have limitations in explaining the understanding of some metaphors. Addressing this limitation of distributed models, and of other models of metaphor comprehension, is a challenging question for future research.