Minds and Machines

Volume 23, Issue 1, pp 105–121

An Ontology-Based Approach to Metaphor Cognitive Computation

Authors

  • X. Huang
    • The Institute of Computer Application Technology, Hangzhou Dianzi University
  • Huaxin Huang
    • Center for the Study of Language and Cognition, Zhejiang University
  • Beishui Liao
    • Center for the Study of Language and Cognition, Zhejiang University
  • Cihua Xu
    • Center for the Study of Language and Cognition, Zhejiang University
Article

DOI: 10.1007/s11023-012-9269-z

Cite this article as:
Huang, X., Huang, H., Liao, B. et al. Minds & Machines (2013) 23: 105. doi:10.1007/s11023-012-9269-z

Abstract

Language understanding is one of the most important capacities of human beings. As a pervasive phenomenon in natural language, metaphor is not only an essential mode of thinking but also an ingredient of the human conceptual system: many of our ways of thinking and experiencing are represented metaphorically. With the development of cognitive research on metaphor, it has become urgent to formulate a computational model of metaphor understanding based on its cognitive mechanism, especially with a view to promoting natural language understanding. Much work has been done in pragmatics and cognitive linguistics, notably on the metaphor understanding process in pragmatics and on the representation of metaphor mappings in cognitive linguistics. In this paper, a theoretical framework for metaphor understanding based on the embodied mechanism of concept inquiry is proposed. Within this framework, ontology is introduced as the knowledge representation method for metaphor understanding, and metaphor mapping is formulated as ontology mapping. In line with conceptual blending theory, a revised conceptual blending framework is presented by adding a lexical ontology and context as a fifth mental space, and a metaphor mapping algorithm is proposed.

Keywords

Metaphor · Ontology · Conceptual blending · Cognitive science · Embodied cognition

Introduction

Language understanding is the process of grasping the concepts expressed by the words of an utterance and the relationships between those concepts; it is one of the most important capacities of human beings. Metaphor is a pervasive phenomenon in language, and it is also among the most vigorous offspring of the creative mind (Steinhart 2001). Metaphor understanding is an intricate part of language understanding, involving many factors such as the transfer from literal sense to metaphorical sense and the role of background knowledge and experience in metaphor mappings. With the development of research on metaphor cognition, it is urgent to construct computational models of metaphor understanding based on the cognitive mechanism of metaphor, especially for improving current natural language understanding systems (Zhou et al. 2007; Mason 2004).

Over the last few decades, a great deal of attention has been paid to the computation of metaphor understanding. Researchers have realized that metaphor computation will play an active role in discourse understanding and machine translation (Zhou 2003, 66–69). To build a human-like natural language processing (NLP) system, automatic processing of metaphor is an unavoidable task, involving both metaphor identification and metaphor interpretation. Zhou et al. (2007) and Shutova (2010) reviewed the existing computational models of metaphor, along with issues of metaphor annotation and resource building.

Since metaphors play an important role in our cognition, one of the long-term goals of metaphor research in NLP and AI is to build a computational intelligence model accounting for the way metaphors organize our conceptual system, in terms of which we think and act (Shutova 2010). From this view, we present a theoretical framework for metaphor understanding that combines metaphor cognitive theory with knowledge representation theory in AI, and propose an ideal model of metaphor in accordance with the cognitive mechanism of metaphor understanding.

The rest of this paper is organized as follows. First, background theories of the metaphor understanding process are presented to give a brief introduction to the cognitive mechanism of metaphor. Then, a theoretical framework for metaphor understanding based on this cognitive mechanism is proposed. In the next section, ontology is introduced as the method of knowledge representation for metaphor understanding, and a revised conceptual blending method for metaphor mapping based on ontology is discussed. The last section contains the conclusion and future work.

Background

Cognitive Perspective on Metaphor

Traditionally, metaphor is regarded as a figure of speech in which one thing is compared with another by saying that one is the other, as in "He is a lion"; on this view, people use metaphor to achieve artistic and rhetorical effects (Kovecses 2010; Miller 1993; Kittay 1987). A new perspective on metaphor that challenged the traditional theory in a coherent and systematic way was first developed by Lakoff and Johnson (1980) and has become known as conceptual metaphor theory (CMT).

Lakoff and Johnson (1980) showed convincingly that metaphor is pervasive in both thought and everyday language. Metaphor is not only a device of creative literary imagination but also a valuable cognitive tool with which we think about and perceive the world; indeed, our conceptual system is metaphorical in nature.

Metaphor is defined as understanding one conceptual domain in terms of another. Metaphor mapping involves the following processes (Lakoff and Turner 1989): first, slots in the source domain are projected onto slots in the target domain; second, the relations between elements in the source are mapped; third, the features of elements in the source are mapped; finally, knowledge in the source domain is transferred to the target domain.

In sum, from the perspective of cognitive linguistics, metaphor mapping is regarded as a set of constraints that provide the selectivity of concepts projected from source to target. Specifically, a metaphor mapping consists of three parts. The first is image schemas: recurring structures within our cognitive processes that establish patterns of understanding and reasoning, formed from our bodily interactions, linguistic experience, and historical context (Johnson 1989; Lakoff 1987). The second is basic correlations, including causal relations, similarity relations, action-result relations, goal-goal relations, and so on. The third is culture-dependent evaluations, which demonstrate the role of context in metaphor understanding.

In the study of metaphor cognition, embodiment is an unavoidable notion. Embodied cognition means that the nature of the human mind is largely determined by the form of the human body (Lakoff and Johnson 1999). In cognitive semantics, researchers find that most abstract concepts are expressed through embodied concepts (Lakoff and Johnson 1980). Embodied concepts are basic concepts formed through the dynamic interaction of the brain and the wider world (Anderson 2003). From the view of embodied cognition, our conceptual system is extended from embodied concepts through metaphor mapping.

Conceptual Blending Theory and Metaphor

Conceptual blending theory (CBT), also known as conceptual integration theory (Fauconnier and Turner 1998), was proposed by Fauconnier and Turner (2002) and was initially motivated by specific cognitive phenomena such as metaphor, metonymy and counterfactual reasoning. CBT builds complex networks by linking two (or more) input spaces by means of a generic space (see Fig. 1), thereby generating a further space called the blended space. The new space maintains partial structure from the input spaces and adds emergent structure of its own. Conceptual blending is a partial cross-domain mapping. In Fig. 1, the whole figure is a conceptual integration network; the circles are mental spaces, comprising one generic space, two input spaces and one blended space; the solid lines between input spaces denote correspondence relations; and the dashed lines among circles denote the links among the generic space, input spaces and blended space. The square in the blended space denotes the emergent structure of the blending.
Fig. 1 Conceptual integration network

There are three basic cognitive operators in conceptual blending: composition, completion, and elaboration. Grady et al. (1997) explored the relationship between CBT and CMT, arguing that the two approaches are complementary. In our view, CMT provides a reasonable and systematic description of the cognitive mechanism of metaphor, while CBT gives an operable framework for metaphor understanding.
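To make the network structure concrete, the spaces and correspondences of Fig. 1 can be sketched as a small data structure. This is an illustrative sketch, not the authors' implementation; all class and field names are hypothetical, and the "He is a lion" elements are chosen only as an example.

```python
from dataclasses import dataclass, field

@dataclass
class MentalSpace:
    # A mental space holding a named set of elements (minimal hypothetical form).
    name: str
    elements: set = field(default_factory=set)

@dataclass
class IntegrationNetwork:
    # One generic space, two input spaces, one blended space (as in Fig. 1).
    generic: MentalSpace
    input1: MentalSpace
    input2: MentalSpace
    blend: MentalSpace
    # the solid-line correspondences between input spaces, as (e1, e2) pairs
    counterparts: set = field(default_factory=set)

    def compose(self):
        # Composition operator: project input-space elements into the blend.
        self.blend.elements |= self.input1.elements | self.input2.elements

net = IntegrationNetwork(
    generic=MentalSpace("generic", {"agent", "activity"}),
    input1=MentalSpace("source", {"lion", "ferocity"}),
    input2=MentalSpace("target", {"he", "character"}),
    blend=MentalSpace("blend"),
    counterparts={("lion", "he"), ("ferocity", "character")},
)
net.compose()
```

After composition the blend holds the elements of both inputs; completion and elaboration, which add and develop emergent structure, are left out of this sketch.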

Theoretical Framework for Metaphor Understanding

Taking the above theories and views together, we see that metaphor understanding involves many factors. Besides the transference of literal meaning and utterance meaning in general language understanding, there are the acquisition of embodied concept knowledge (embodied cognition), the taxonomy of conceptual knowledge (categorization theory), constraints on conceptual mapping (image schemas, basic relations, context), the subjective factors of the agent, and the analogical association mechanism in conceptual mapping. In light of these factors, we propose a general framework to describe the process of metaphor understanding.

In Fig. 2, we show a model that takes most of the above aspects into consideration. The main components include:
  • Multi-domain Knowledge Base It is impossible to understand metaphor without knowledge. The multi-domain knowledge base includes the linguistic knowledge base for parsing and literal meaning extraction, the basic conceptual knowledge base for embodied concepts, and the conceptual knowledge base for each conceptual domain.

  • Parsing and Literal Meaning Extraction This component parses a metaphorical sentence and extracts its literal meaning.

  • Associative Mechanism Association is a psychological process that brings ideas or concepts together in memory or imagination. From the view of cross-domain mapping in metaphor, the associative mechanism is actually a bisociation (Boden 2003; Pereira 2007).

  • Reasoning Mechanism Reasoning is built on the knowledge bases. It acts as a constraint on the associative mechanism and is used to select among the mapping results.

  • Epistemic State The epistemic state denotes the agent's mental state, capturing the subjective factors of the agent in metaphor understanding (Huang and Zhou 2005).

Fig. 2 General framework for metaphor understanding

Therefore, the process of metaphor understanding can be described as follows. First, in a given context, analyze the linguistic form and literal meaning of the metaphorical utterance to obtain its metaphorical elements and their corresponding conceptual domains. Second, under the current epistemic state, and with the context and conceptual knowledge base, generate analogical mappings between concepts through the association mechanism and select similarities between the source domain and the target domain (Gentner 1983; Steinhart 2001). Finally, derive the metaphorical meaning through the reasoning mechanism. The reasoning results can then be fed back into the embodied conceptual knowledge base and the conceptual knowledge base, reflecting the fact that knowledge in metaphor cognition is accumulative.
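The three-stage process can be sketched as a pipeline skeleton. Everything here is a hypothetical stand-in: the tokenizer, the domain lookup, and the selection test are placeholders for the real parsing, association and reasoning components described above.

```python
def understand_metaphor(utterance, kb):
    # Stage 1: parsing and literal meaning extraction (stand-in: tokenization).
    words = utterance.lower().rstrip(".").split()
    # Stand-in domain lookup: kb maps a word to the concepts of its domain.
    domains = [kb[w] for w in words if w in kb]
    target, source = domains[0], domains[-1]   # crude "X is Y" heuristic
    # Stage 2: associative mechanism proposes cross-domain candidate pairs.
    candidates = [(t, s) for t in target for s in source]
    # Stage 3: reasoning mechanism selects mappings (stand-in: keep pairs
    # whose two concepts are distinct).
    return [(t, s) for (t, s) in candidates if t != s]

# toy knowledge base for the "Market is sea" example discussed later
kb = {"market": ["trading_place", "goods"], "sea": ["waters", "gulf"]}
mappings = understand_metaphor("Market is sea.", kb)
```

In the full framework each stand-in would be replaced by the corresponding component of Fig. 2, and the selected mappings would be written back to the knowledge bases.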

Metaphor Knowledge Representation

We have seen that knowledge plays a crucial role in metaphor understanding and that metaphor understanding involves cross-domain mappings, so it is important to design a multi-domain knowledge base. The representation of concepts and their relations is the key issue in knowledge representation (Sowa 2000). In this paper, we adopt ontology as the tool to organize concepts; in what follows, we introduce conceptual blending to describe metaphor mapping based on ontology.

Ontology

Ontology (Gruber 1993) is a notion from philosophy, and research on ontology has become widespread in the computer science community. In philosophy, an ontology is a particular system of categories accounting for a certain vision of the world; in AI, an ontology is an engineering artifact, constituted by a specific vocabulary used to describe a certain reality, plus a set of explicit assumptions regarding the intended meaning of the vocabulary words (Guarino 1998).

The logical foundation for ontology in AI is description logic (DL). Baader et al. (2003) introduced the origins, theoretical foundations and applications of DL in detail. Unlike first-order predicate logic, DL provides a decidable reasoning mechanism while retaining considerable expressive power.

Ontological Model

Ontology can be specified with five components (Gomez-Perez and Corcho 2002): (a) concepts; (b) relations; (c) functions; (d) axioms; (e) instances. Gomez-Perez and Corcho (2002) also defined four kinds of basic relations: part-of, kind-of, instance-of and attribute-of.

We extend these five primitives to conceptual primitives applicable to knowledge representation for metaphor understanding.

Definition 1 Conceptual primitives include:

  1. Conceptual Constituent: a concept can be the description of a task, function, action and so on, denoted C1, C2, etc.
  2. Relation Name: relations are interactions between concepts, denoted R1, R2, etc. The two concepts in the arguments of a relation are called the domain and the range respectively, and each is assigned a theta role, which restricts the domain or range to certain concepts; e.g. R1(T1:C1, T2:C2) denotes that the domain of R1 comes from concept C1 under theta role T1, and the range from concept C2 under theta role T2.
  3. Theta Role Name: a theta role is used to restrict the arguments of a relation.
  4. Individual Name: an individual name is the description of an object.
  5. Taxonomy of Concepts and Relations: the inclusion relation over concepts and relations, corresponding to kind-of in Gomez-Perez and Corcho (2002), denoted ⊑ or isa.
  6. Part-Whole Relation: the part-whole relation between concepts, corresponding to part-of, denoted @ or partOf.
  7. Instance: the relation between an individual and a concept, corresponding to instance-of.
  8. Equivalence: equivalence means that two concepts are the same, denoted ≡.
  9. Metaphorical Relation: a metaphorical relation means that one concept is regarded as another in some contexts, denoted ∼.

Obviously, we can divide the primitives in Definition 1 into two types: nodes and links. Conceptual constituents, relation names, theta role names and individual names are nodes; the others are links. From the view of graph theory, a node is a vertex of a graph, and a link is an edge between vertices.
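The graph-theoretic reading of Definition 1 can be sketched as follows; the class name and the example edges are hypothetical, but the node/link split mirrors the definition.

```python
class OntologyGraph:
    # Nodes: conceptual constituents, theta roles, individuals.
    # Links (edges): isa, partOf, instanceOf, equivalence, metaphorical.
    def __init__(self):
        self.nodes = set()
        self.edges = set()   # triples (source_node, link_label, destination_node)

    def add_edge(self, src, label, dst):
        self.nodes |= {src, dst}
        self.edges.add((src, label, dst))

    def related(self, src, label):
        # All nodes reachable from src via a link with the given label.
        return {d for (s, l, d) in self.edges if s == src and l == label}

g = OntologyGraph()
g.add_edge("sea", "isa", "waters")          # taxonomy (kind-of)
g.add_edge("gulf", "partOf", "sea")         # part-whole relation (@)
g.add_edge("Pacific", "instanceOf", "sea")  # individual-to-concept
```

Representing an ontology this way makes the later blending step a pure graph problem: the Triangulation and Squaring Rules query edges by label.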

Combining these primitives with the syntax and semantics of description logic, we define concepts and formulas as follows.

Definition 2 Concept and Formula

Let C1 and C2 be conceptual constituents, a and b individual names, R a relation name, and T1 and T2 theta role names. Then:
  • C1 and C2 are concepts;

  • C1 ⊓ C2, ¬C1 and ∃R.C1 are concepts;

  • T1:C1 is a concept;

  • C1(a), R(a, b), C1 ∼ C2, C1 @ C2, C1 ⊑ C2 and C1 ≡ C2 are all atomic formulas;

  • any boolean combination of atomic formulas is a formula; e.g., if α and β are formulas, then α ∧ β and ¬α are formulas.

Definition 3 Semantic Interpretation

The semantic model for conceptual primitives is a tuple \(<\triangle^{\mathcal{I}},\cdot^{\mathcal{I}}>\), where \(\triangle^{\mathcal{I}}\) is a non-empty set (the domain of the interpretation) and \(\cdot^{\mathcal{I}}\) is an interpretation function, which assigns to every atomic concept C a set \(C^{\mathcal{I}}\subseteq \triangle^{\mathcal{I}}\), to every relation R a binary relation \(R^{\mathcal{I}} \subseteq \triangle^{\mathcal{I}}\times\triangle^{\mathcal{I}}\), to every theta role T an element \(T^{\mathcal{I}} \in N_{T}\) (where \(N_{T}\subseteq\triangle^{\mathcal{I}}\)), and to every individual a an element \(a^{\mathcal{I}}\in \triangle^{\mathcal{I}}\). The interpretation function is extended to concept descriptions and formulas by the following inductive definitions:
  • \((C_{1}\sqcap C_{2})^{\mathcal{I}} = C_{1}^{\mathcal{I}}\cap C_{2}^{\mathcal{I}}\);

  • \((\neg C)^{\mathcal{I}} = \triangle^{\mathcal{I}}\setminus C^{\mathcal{I}}\);

  • \((\exists R.C)^{\mathcal{I}} =\{x \in \triangle^{\mathcal{I}}|\exists y \in C^{\mathcal{I}}\), and \(R^{\mathcal{I}}(x,y)\}\);

  • \((T_{1}:C_{1})^{\mathcal{I}} =\{x\in \triangle^{\mathcal{I}}|\lambda(x)=T_{1}^{\mathcal{I}}\}\), where λ is an assignment function, assigns the theta role to an instance;

  • \(\mathcal{I}\models C(a)\), iff \(a^{\mathcal{I}}\in C^{\mathcal{I}}\);

  • \(\mathcal{I}\models R(a,b)\), iff \(R^{\mathcal{I}}(a^{\mathcal{I}},b^{\mathcal{I}})\);

  • \(\mathcal{I}\models C_{1} \sqsubseteq C_{2}\), iff \(C_{1}^{\mathcal{I}} \subseteq C_{2}^{\mathcal{I}}\);

  • \(\mathcal{I}\models C_{1}\,@\,C_{2}\), iff there exist \(x\in C_{1}^{\mathcal{I}}\) and \(y\in C_{2}^{\mathcal{I}}\) such that partOf(x, y);

  • \(\mathcal{I}\models \varphi\wedge \phi\), iff \(\mathcal{I}\models\varphi\) and \(\mathcal{I}\models\phi\);

  • \(\mathcal{I}\models \neg\varphi\), iff \(\mathcal{I}\not\models \varphi\).

If a formula φ is true under \(\mathcal{I}\), denoted \(\mathcal{I}\models \varphi\), we call \(\mathcal{I}\) a model of φ. If there is a model of φ, we say φ is satisfiable. For a concept C, if there is a model \(\mathcal{I}\) such that \(C^{\mathcal{I}}\neq \emptyset\), we say C is satisfiable.
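Over a finite domain, the interpretation clauses of Definition 3 can be checked directly with set operations. This is an illustrative sketch; the domain, the concepts C and D, and the relation R below are made up.

```python
# Concepts are interpreted as subsets of the domain, relations as sets of pairs.
DOMAIN = {"a", "b", "c"}
C = {"a", "b"}                  # C^I
D = {"b", "c"}                  # D^I
R = {("a", "b"), ("b", "c")}    # R^I

def conj(c1, c2):                                  # (C1 ⊓ C2)^I = C1^I ∩ C2^I
    return c1 & c2

def neg(c):                                        # (¬C)^I = Δ^I \ C^I
    return DOMAIN - c

def exists(r, c):                                  # (∃R.C)^I
    return {x for (x, y) in r if y in c}

def subsumed(c1, c2):                              # I ⊨ C1 ⊑ C2 iff C1^I ⊆ C2^I
    return c1 <= c2
```

A concept is satisfiable in this model exactly when its interpreted set is non-empty, matching the definition above.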

Following Guarino (1998), given a domain space \(<\mathcal{D},\mathcal{W}>\), where \(\mathcal{D}\) is a domain and \(\mathcal{W}\) is a set of maximal states of affairs of that domain (also called possible worlds), we define the ontological model as follows:

Definition 4 Ontological Model

An ontology is a tuple \(\mathcal{O} = <\mathcal{C},\mathcal{H_{C}},\mathcal{R},\mathcal{H_{R}},\mathcal{T},\mathcal{A},\mathcal{D}>\), where \(\mathcal{C}\) is the set of concepts; \(\mathcal{H_{C}} \subseteq \mathcal{C} \times \mathcal{C}\) denotes the inclusion relation between concepts; \(\mathcal{R} \subseteq \mathcal{C} \times \mathcal{C}\) is the set of properties, denoting the binary relations between concepts; \(\mathcal{H_{R}} \subseteq \mathcal{R} \times \mathcal{R}\) denotes the inclusion relation between properties; \(\mathcal{T} \subseteq \mathcal{C} \times \mathcal{TH}\) denotes the theta role property, where \(\mathcal{TH} \subseteq \mathcal{D}\) is the set of theta roles; \(\mathcal{A}\) is the axiom set provided by the description language; and \(\mathcal{D}\) is the domain.

In natural language, concepts are represented in the form of words. In order to efficiently process various concepts involved in metaphor understanding, it is necessary to associate words appearing in natural language text with the elements of ontology. Based on the Text-to-Onto (Maedche and Volz 2001) approach, we define a mapping from text to ontology elements:
$$ \Gamma =\{ \Gamma^{\mathcal{C}},\Gamma^{\mathcal{R}},\mathcal{F},\mathcal{G} \} $$
where \(\Gamma^{\mathcal{C}}\) and \(\Gamma^{\mathcal{R}}\) are two sets of lexical entries for concepts and relations, respectively; \(\mathcal{F}\subseteq \Gamma^{\mathcal{C}} \times \mathcal{C}\) and \(\mathcal{G}\subseteq \Gamma^{\mathcal{R}} \times \mathcal{R}\) are the reference relations for concepts and relations, respectively. For \(l \in \Gamma^{\mathcal{C}}\) and \(c\in \mathcal{C}\), define \(\mathcal{F}(l)=\{ c\in \mathcal{C}\mid(l,c)\in \mathcal{F}\}\) and \(\mathcal{F}^{-1}(c) =\{l\in \Gamma^{\mathcal{C}}\mid(l,c)\in \mathcal{F}\}\) (\(\mathcal{G}\) and \(\mathcal{G}^{-1}\) are defined analogously).
In brief, the mapping from text to ontology elements can be considered a lexicon for an ontology. The words in \(\Gamma^{\mathcal{C}}\) and \(\Gamma^{\mathcal{R}}\) may be multilingual text, and in view of polysemy and synonymy in natural language, \(\mathcal{F}\) and \(\mathcal{G}\) are both many-to-many mappings. We can further explain these many-to-many mappings from the view of the word sense matrix model (see Table 1) in WordNet (Fellbaum 1998).
Table 1
Relations between concept and word

        l1      l2      l3      l4
c1      E1,1
c2              E2,2    E2,3
c3      E3,1    E3,2            E3,4
c4                              E4,4
In Table 1, Ei,j means that there is a mapping between lexical entry li and concept cj, i.e. \((l_{i},c_{j})\in \mathcal{F}\). From the table, we can see that l1, l2 and l4 are polysemous words, while l2 and l3 are synonyms. We can also use the semantic triangle (Ogden and Richards 1989) to explain the relations among words, concepts and objects; see Fig. 3.
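The many-to-many reference \(\mathcal{F}\) of Table 1 can be sketched directly as a set of word-concept pairs; the helper names below are hypothetical.

```python
# F as a set of (lexical entry, concept) pairs, filled from Table 1.
F = {("l1", "c1"), ("l1", "c3"),
     ("l2", "c2"), ("l2", "c3"),
     ("l3", "c2"),
     ("l4", "c3"), ("l4", "c4")}

def F_of(l):
    # F(l): the concepts a word can refer to.
    return {c for (w, c) in F if w == l}

def F_inv(c):
    # F^{-1}(c): the words that can name a concept.
    return {w for (w, cc) in F if cc == c}

def is_polysemous(l):
    # A polysemous word refers to more than one concept.
    return len(F_of(l)) > 1

def synonyms_of(l):
    # Words that share at least one concept with l.
    return {w for c in F_of(l) for w in F_inv(c)} - {l}
```

Running these helpers on the Table 1 data reproduces the observations in the text: l1, l2 and l4 come out polysemous, and l2 and l3 come out as synonyms via c2.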
Fig. 3 Semantic triangle

On the basis of the mapping from text to ontology elements, we introduce an auxiliary ontology, called the lexical ontology, as follows:

Definition 5 Lexical Ontology is a tuple \(\Omega=(L, LR, M, \eta, SR)\), where:

  • L is the set of words, called the vocabulary of \(\Omega\).

  • LR is the set of lexical relations, describing relations between words such as antonymy, participle, derivation, and so on.

  • M is the set of meanings (or senses) of \(\Omega\); every \(m \in M\) is a meaning represented by a word, and every meaning has a corresponding concept in the ontology.

  • \(\eta : L \rightarrow 2^{M}\) is a function: for any word \(l\in L\), η(l) = {m1, m2, …, mk} denotes that l has the k meanings m1, m2, …, mk.

  • SR is the set of relations between concepts, such as hyponymy and part-whole, the same as in the ontology.

Analyzing the structure of WordNet, we formalize a lexical ontology \(\Omega=(L, LR, M, \eta, SR)\), where L and M consist of all the words and meanings in WordNet, respectively; LR and SR consist of the lexical relations and semantic relations in WordNet (see Huang and Zhou 2007), respectively; and the function η returns all the meanings of a word. For example, η("bank") = {bank#n#1, bank#n#2, bank#n#3, bank#n#4, bank#n#5, bank#n#6, bank#n#7, bank#n#8, bank#n#9, bank#n#10, bank#v#1, bank#v#2, bank#v#3, bank#v#4, bank#v#5, bank#v#6, bank#v#7, bank#v#8}, which means "bank" has 18 meanings: 10 as a noun and 8 as a verb.
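A minimal sketch of η as a lookup table, assuming hand-filled sense sets: the "bank" entry mirrors the WordNet counts quoted above, while the "sea" entry is invented for illustration.

```python
# η as a dictionary from words to sense sets; sense identifiers follow the
# word#pos#number convention used in the text.
eta = {
    "bank": {f"bank#n#{i}" for i in range(1, 11)}    # 10 noun senses
          | {f"bank#v#{i}" for i in range(1, 9)},    # 8 verb senses
    "sea":  {"sea#n#1", "sea#n#2"},                  # made-up entry
}

def meanings(word):
    # η(l); a word outside the vocabulary has no recorded senses.
    return eta.get(word, set())
```

In a real system η would be backed by the full WordNet database rather than a hand-built dictionary.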

Metaphor Mapping

As we know, metaphor involves a mapping between two conceptual domains. On the basis of the above ontological model and lexical ontology, we can discuss the formalization of metaphor on two levels: the linguistic level and the conceptual level. On the linguistic level, the formalization of metaphor amounts to tagging metaphor roles on the parsing result of a metaphorical utterance, as discussed in Yang et al. (2009) and Huang et al. (2011). The conceptual level is the focus of this paper: we introduce mental spaces as the source domain and target domain, and represent mental spaces with the ontological model.

Revised Conceptual Integration Network

In conceptual blending theory, mental spaces are represented as frames (Fauconnier and Turner 2002). Here, we use ontology to describe mental spaces, and design different ontological models for specific mental spaces according to their functions in the blending.

The generic space is described as a general ontology, providing background knowledge for conceptual mapping in the blending. The generic space ontology is defined as \(GO=<\mathcal{C,H_{C},R,H_{R},T,A,D}>\). The main functions of GO are: (a) providing world knowledge; (b) supporting mappings between input spaces, such as similarity computation, semantic relations, and so on.

We construct the generic space on the basis of HowNet, an online Chinese-English bilingual commonsense knowledge base built by Dong and Dong (2006), and the method for extending HowNet proposed by Chen et al. (2005). Following Extended-HowNet, we reorganize the sememe taxonomy in HowNet. The components of GO are built as follows: the sememes under {entity|实体} and {event|事件} are all built into the concept set \(\mathcal{C}\); the hierarchical structure of concepts \(\mathcal{H_{C}}\) is shown in Fig. 4; the relation set \(\mathcal{R}\) consists of entity roles and properties; and the theta role set \(\mathcal{TH}\) consists of the event roles used to describe the semantic roles of verbs. Entity roles and event roles mainly come from Event Role and Feature and Secondary Feature in HowNet and the adjective taxonomy of the Modern Mandarin Semantic Dictionary developed by the Center of Chinese Linguistics of Peking University. Properties are the sememes under {attribute|属性}.
Fig. 4 Conceptual hierarchy of the generic space ontology (in part)

Input spaces represent the knowledge of the source concept and target concept, called the source ontology IOS and the target ontology IOT. According to conceptual blending theory (Fauconnier and Turner 2002), an input space is an online structure generated from the ingredients of a metaphor while an agent understands it, so there are two steps to form an input space. First, obtain the corresponding metaphorical words from the tagged sentence produced by the metaphor identification and tagging module, which was implemented in Huang (2009) and Huang et al. (2011). Second, for the words tagged as the source of the metaphor, obtain the corresponding concepts through the Text-to-Onto approach, based on the lexical ontology and the generic ontology, and then create the source ontology IOS. Analogously, the target ontology IOT is created from the words tagged as the target of the metaphor.

Therefore, the four-space model of the conceptual integration network can be extended to a five-space network, as in Fig. 5. In this revised network model, the lexical ontology and context information (i.e. the added space) act as constraints on the input spaces.
Fig. 5 Revised conceptual integration network for metaphor mapping

Implementation of Blending

For convenience, following the definitions of conceptual primitives (Definition 1) and the ontological model (Definition 4), we regard an ontological model as a graph whose nodes are conceptual constituents, theta role names and individual names, and whose edges are the other primitives. Conceptual blending can then be converted into looking for isomorphism, match and mapping relations between graphs.

There are three cognitive operators in conceptual blending theory (Fauconnier and Turner 2002): composition, completion and elaboration. Before these three operators run, elements of the two input spaces are projected into the blended space.

We define a mapping operator \(\Phi\) to represent the correspondence relation between input spaces, namely \(\Phi:O_{1}\times O_{2}\rightarrow\Pi\), where O1 and O2 are mental spaces denoted by ontologies, and \(\Pi\) is the set of all candidate element mappings between O1 and O2. To generate as many candidate mappings as possible, following the Triangulation Rule and Squaring Rule in Sapper (Veale 1995), the mapping operator consists of three steps corresponding to the cognitive operators of conceptual blending: (1) generate all possible candidate mappings through the Triangulation Rule and Squaring Rule; (2) evaluate the candidates; (3) project selected elements into the blended space.

In this procedure, the Triangulation Rule means that if element a in O1 and element b in O2 are both connected to c through relation R, then we can build a mapping between a and b, denoted \(\Phi(a,b)\). The Squaring Rule means that if a and b in O1 are connected by relation R, c and d in O2 are also connected by R, and there is a mapping between b and d, namely \(\Phi(b,d)\), then we can build a mapping between a and c, denoted \(\Phi(a,c)\). The first step applies the Triangulation Rule and Squaring Rule repeatedly until no new mapping is generated; in effect, this step realizes the Composition operator of conceptual blending.

The first step of the mapping operator \(\Phi\) is implemented in GenerateCandidates. Algorithm 1 shows the pseudo code for the first loop of GenerateCandidates, which implements the Triangulation Rule. The second loop of GenerateCandidates can be implemented by replacing lines 12–15 with the corresponding code for the Squaring Rule.
Algorithm 1 GenerateCandidates

Inputs: Target T, Source S, Generic Ontology G
Output: List of all candidate mapping relations between T and S

Procedure:
 1  ArrayList<Pair> candidates = new ArrayList<Pair>();
 2  // for every node in the target domain
 3  for (Node nodeT : T)
 4    // get the edges related to the node
 5    ArrayList<Edge> curEdges = nodeT.getRelation();
 6    // for every edge related to the node in the target domain
 7    for (Edge edgeT : curEdges)
 8      // get the end of the current edge
 9      Node desc = edgeT.getEnd();
10      // for every node in the source domain
11      for (Node nodeS : S)
12        // apply the Triangulation Rule
13        if (nodeS.hasRelation(edgeT.name, desc))
14          candidates.add(new Pair(nodeT, nodeS));
15        end if
16      end for
17    end for
18  end for

In the second step, according to the optimality principles and governing principles for compression in conceptual blending theory, we present a quantitative measure to evaluate a candidate mapping, called the integrated degree. Suppose m = (nodeT, nodeS) is a candidate mapping; then the integrated degree of m is defined as:
$$ I(m)=\frac{(m.size-1)\times 2}{nodeT.rSize + nodeS.rSize} $$
where the property rSize of a node denotes the number of edges connected to it, and m.size is the number of candidate mappings raised from nodeT and nodeS.

In the last step, the candidate mappings whose integrated degree exceeds a given threshold are projected into the blended space.
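Steps two and three of the mapping operator can be sketched together, assuming hypothetical node pairs and made-up sizes (in the full system the real values come from the input ontologies):

```python
def integrated_degree(m_size, r_size_t, r_size_s):
    # I(m) = (m.size - 1) * 2 / (nodeT.rSize + nodeS.rSize)
    return (m_size - 1) * 2 / (r_size_t + r_size_s)

def project(candidates, threshold=0.5):
    # candidates: (nodeT, nodeS, m.size, nodeT.rSize, nodeS.rSize) tuples;
    # keep only the pairs whose integrated degree reaches the threshold.
    return [(t, s) for (t, s, m, rt, rs) in candidates
            if integrated_degree(m, rt, rs) >= threshold]

# hypothetical candidate pairs with invented sizes
cands = [("place", "waters", 2, 1, 2),   # I = 2/3 ≈ 0.6667, kept
         ("market", "sea", 2, 2, 3)]     # I = 2/5 = 0.4, dropped
blended = project(cands, threshold=0.5)
```

With these made-up sizes, only the first pair survives the 0.5 threshold; iterating the Squaring Rule can raise m.size for a rejected pair and let it pass on a later round.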

Example

We illustrate the above procedure with a detailed example.

Example 1

"Market is sea" (the original sentence is in Chinese)

After parsing and metaphorical role tagging, the source of this metaphor is "sea" and the target is "market" (both originally Chinese words). According to the method of input space ontology construction, we first access the lexical ontology to obtain the input ontologies for "sea" and "market", denoted IOS and IOT and shown in Figs. 6 and 7, respectively. In Figs. 6 and 7, the rectangles denote concepts; the circles denote properties; the ovals denote the sets of possible values of a property, connected by the relation val; edges labeled isa between rectangles denote the hyponymy of concepts, and edges labeled partOf represent the part-whole relation; and a dashed line between a rectangle and a circle means the concept has that property.
Fig. 6

The source domain ontology with "Sea" (in part)

Fig. 7

The target domain ontology with "Market" (in part)

If we set the threshold to 0.5, then through the triangulation rule we get the following mapping pairs: m1 = ("place", "waters"), with integrated degree 0.6667, and m2 = ("market", "sea"), with integrated degree 0.4. So, in the first loop, m1 is a candidate mapping. Then, continuing to execute the squaring rule through m1, the integrated degree of m2 becomes 0.6, larger than the threshold, so m2 is also a candidate mapping. Further execution of the triangulation and squaring rules through m2 yields the new candidates m3 = ("goods", "gulf") and m4 = ("shelf", "gulf"), each with integrated degree 0.6667. The resulting candidate mapping list is shown in Table 2.
Table 2

Candidate mappings in Example 1

Target concept | Source concept | Integrated degree | Similarities
---------------|----------------|-------------------|---------------------------
place          | waters         | 0.6667            | situation, space
market         | sea            | 0.6               | situation, extent, waters
goods          | gulf           | 0.6667            | sea
shelf          | gulf           | 0.6667            | sea

If we set the number of similarities as a second threshold, e.g., requiring at least two similarities per mapping, then only m2 is valid. Metaphor understanding in Example 1 can then proceed from the perspectives of the properties "situation" and "extent", and the common superordinate concept "waters". For example, from the perspective of "situation", understanding "Market is sea" means selecting the value of "situation" in IOS (Fig. 6) as the value of "situation" for "market". The selection depends on the agent's epistemic state and the context: if an agent thinks the situation in a sea is surging, and a surging situation means danger, then she may comprehend the metaphor "Market is sea" as "Market is full of danger".
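This final interpretive step, projecting a source-domain property value onto the target concept, can be sketched as below. The property values and the agent's epistemic reading are illustrative assumptions, not data from the paper's lexical ontology.

```python
# Hypothetical sketch: interpret a metaphor by projecting the value of a
# shared property from the source concept onto the target concept, then
# translating that value through an agent-dependent epistemic reading.

source_props = {"sea": {"situation": "surging", "extent": "vast"}}
epistemic_map = {"surging": "full of danger"}  # agent-dependent association

def interpret(target, source, prop):
    """Return a reading of '<target> is <source>' via property prop."""
    value = source_props[source][prop]
    reading = epistemic_map.get(value, value)
    return f"{target.capitalize()} is {reading}"
```

Under these assumptions, interpreting "market is sea" through the property "situation" yields "Market is full of danger", matching the reading discussed above.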

Conclusion

In this paper, we have discussed the cognitive mechanism of metaphor understanding in the light of conceptual metaphor theory (Lakoff and Johnson 1980) and conceptual blending theory (Fauconnier and Turner 2002), and proposed a theoretical framework for metaphor understanding based on embodiment in metaphor cognition.

In the preliminary implementation of our framework for metaphor understanding, we introduce ontology as the knowledge representation method and present an ontological model to formalize metaphor knowledge. Based on this model, we formalize metaphor mapping within the conceptual blending framework, where mental spaces are represented as ontologies and their construction methods are also presented. We further propose an algorithmic description of blending with a quantitative measure, the Integrated Degree, and demonstrate its effectiveness with an example.

In future work, we will further investigate the formalization of knowledge for embodied concepts and their inquiry.

Acknowledgments

This research is supported in part by research grants from the National Natural Science Foundation of China for Young Scientists (No. 61103101), the Major Program of the National Social Science Foundation of China (No. 11&ZD088), the Humanities and Social Sciences Foundation for Young Scholars of China's Ministry of Education (No. 10YJCZH052), the Zhejiang Provincial Natural Science Foundation of China (No. Y1080606), and the China Postdoctoral Science Foundation (No. 20100481443 and No. 201104743).

Copyright information

© Springer Science+Business Media B.V. 2012