
1 Introduction

Multi-agent systems emerged in the 1990s as one of the most important areas of research and development in information technology. Since then, the theme has received the attention of specialists, and a number of research topics have been investigated, such as cooperative distributed problem solving, mechanism design, auctions, game theory, multi-agent planning, negotiation protocols, multi-agent learning, conflict resolution, agent-oriented software engineering (including implementation languages and frameworks), e-business agents, and novel computing paradigms (autonomic, grid, P2P, and ubiquitous computing), among many others.

In this paper we focus on the problem of conflict resolution among agents. For this task we need a language suitable for representing agents’ communication, together with inference rules for deriving results of interest; in other words, we need an underlying logical system in which to represent agents’ interaction.

Agents’ conflicts arise for different reasons, involve different concepts, and are dealt with in different ways, depending on the kind of agents and on the domain where they are considered.

For example,

  • incompleteness and uncertainty of the agents’ knowledge or beliefs: in dynamic contexts, an agent may have more recent or more complete information than the others, and the differences in the agents’ knowledge create knowledge conflicts;

  • limited or unavailable resources: not all agents have access to the same resources, thus resulting in resource conflicts;

  • differences in the agents’ skills and points of view: autonomous and heterogeneous agents have different abilities, or even different preferences, which can cause conflicts if the agents’ pieces of information are not comparable, if they come up with different answers to the same questions, or if they are strongly committed to their own preferences.

Up till now, the focus has been much on how to avoid, solve or get rid of conflicts. However, recent research has shown that conflicts have positive effects in so far as they can generate original solutions and be a basis for a global enrichment of the knowledge within a multi-agent system.

Since more and more concern is attached to agents’ teamwork and agents’ dialogue, conflicts naturally arise as a key issue to be dealt with, not only with application dedicated techniques, but also with more formal and generic tools.

2 Paraconsistent, Paracomplete, and Non-alethic Logics

In what follows, we sketch the non-classical logics discussed in the paper, establishing some conventions and definitions. Let T be a theory whose underlying logic is L. T is called inconsistent when it contains theorems of the form A and ¬A (the negation of A). If T is not inconsistent, it is called consistent. T is said to be trivial if all formulas of the language of T are also theorems of T. Otherwise, T is called non-trivial.

When L is classical logic (or one of several others, such as intuitionistic logic), T is inconsistent iff T is trivial. So, in trivial theories the extensions of the concepts of formula and theorem coincide. A paraconsistent logic is a logic that can be used as the basis for inconsistent but non-trivial theories. A theory is called paraconsistent if its underlying logic is a paraconsistent logic.

Issues such as those described above have been appreciated by many logicians. In 1910, the Russian logician Nikolaj A. Vasil’év (1880–1940) and the Polish logician Jan Łukasiewicz (1878–1956) independently glimpsed the possibility of developing such logics. Nevertheless, Stanisław Jaśkowski (1906–1965) was, in 1948, effectively the first logician to develop a paraconsistent system, at the propositional level. His system is known as the ‘discussive’ (or discursive) propositional calculus. Independently, some years later, the Brazilian logician Newton C.A. da Costa (1929–) constructed for the first time hierarchies of paraconsistent propositional calculi C i, 1 ≤ i ≤ ω, of paraconsistent first-order predicate calculi (with and without equality), of paraconsistent description calculi, and of paraconsistent higher-order logics (the systems NF i, 1 ≤ i ≤ ω). Another important class of non-classical logics are the paracomplete logics. A logical system is called paracomplete if it can function as the underlying logic of theories in which there are formulas such that these formulas and their negations are simultaneously false. Intuitionistic logic and several systems of many-valued logics are paracomplete in this sense (and the dual of intuitionistic logic, Brouwerian logic, is therefore paraconsistent).

As a consequence, paraconsistent theories do not satisfy the principle of non-contradiction, which can be stated as follows: of two contradictory propositions, i.e., one of which is the negation of the other, one must be false. And, paracomplete theories do not satisfy the principle of the excluded middle, formulated in the following form: of two contradictory propositions, one must be true.

Finally, logics which are simultaneously paraconsistent and paracomplete are called non-alethic logics.

3 A Logical Framework for Representing Impreciseness, Conflicts and Paracompleteness

We present, in this section, the multimodal predicate calculi Mτ, based on annotated logics extensively studied by Abe [1, 4, 6, 13] and multimodal systems considered in [7, 8, 9, 10, 11, 12].

The symbol τ = ⟨∣τ∣, ≤, ~⟩ denotes a fixed finite lattice with operator, called the lattice of truth-values. We use the symbol ≤ for the ordering under which τ is a complete lattice, and ⊥ and ┬ for the bottom and top elements of τ, respectively. Also, ∧ and ∨ denote, respectively, the greatest-lower-bound and least-upper-bound operators with respect to subsets of ∣τ∣. The operator ~: ∣τ∣ → ∣τ∣ works as the “meaning” of the negation of the system Mτ.
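As a concrete illustration, the lattice of truth-values can be encoded directly. The sketch below is ours, not the paper’s: it uses the four-valued lattice FOUR often paired with annotated logics, with elements ⊥ (no information), t, f, and ┬ (inconsistency), and a ~ operator that swaps t and f (an assumption about ~, since the paper leaves τ abstract).

```python
# Illustrative encoding (ours) of a finite lattice of truth-values
# tau = <|tau|, <=, ~>, here the four-valued lattice FOUR:
# bot (no information), t (true), f (false), top (inconsistency).
ELEMENTS = ["bot", "t", "f", "top"]

def leq(a, b):
    """The lattice ordering: bot <= t <= top and bot <= f <= top."""
    return a == b or a == "bot" or b == "top"

def lub(a, b):
    """Least upper bound (the operator written 'v' in the text)."""
    uppers = [c for c in ELEMENTS if leq(a, c) and leq(b, c)]
    return next(c for c in uppers if all(leq(c, d) for d in uppers))

def glb(a, b):
    """Greatest lower bound (the operator written '^' in the text)."""
    lowers = [c for c in ELEMENTS if leq(c, a) and leq(c, b)]
    return next(c for c in lowers if all(leq(d, c) for d in lowers))

# An assumed ~ operator: swaps t and f, fixes bot and top.
NEG = {"bot": "bob", "t": "f", "f": "t", "top": "top"}
NEG["bot"] = "bot"  # fix bot explicitly
```

In FOUR, t and f are incomparable, so their least upper bound is ┬ and their greatest lower bound is ⊥.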

The language of Mτ has the following primitive symbols:

  1. Individual variables: a denumerably infinite set of variable symbols: x 1, x 2, …

  2. Logical connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction), and → (implication).

  3. For each natural number n, zero or more n-ary function symbols.

  4. For each n ≠ 0, n-ary predicate symbols.

  5. The equality symbol: =

  6. Annotational constants: each member of τ is called an annotational constant.

  7. Modal operators: []1, []2, …, []n (n ≥ 1), [] G , []\( _{G}^{C} \), []\( _{G}^{D} \) (for every nonempty subset G of {1, …, n}).

  8. Quantifiers: ∀ (for all) and ∃ (there exists).

  9. Auxiliary symbols: parentheses and comma.

A 0-ary function symbol is called a constant. We suppose that Mτ possesses at least one predicate symbol.

We define the notion of term as usual. Given a predicate symbol p of arity n and n terms t 1, …, t n, a basic formula is an expression of the form p(t 1, …, t n). An annotated atomic formula is an expression of the form p λ(t 1, …, t n), where λ is an annotational constant. We introduce the general concept of (annotated) formula in the standard way. For instance, if A is a formula, then []1 A, []2 A, …, []n A, [] G A, []\( _{G}^{C} A\), and []\( _{G}^{D}A\) are also formulas [7].

Among several intuitive readings, an atomic annotated formula p λ(t 1, …, t n) can be read: it is believed that p(t 1, …, t n)’s truth-value is at least λ.

Definition 1.

Let A and B be formulas. We put A ↔ B =Def. (A → B) ∧ (B → A) and ¬*A =Def. A → ((A → A) ∧ ¬(A → A)). The symbol ‘↔’ is called the biconditional and ‘¬*’ is called strong negation.

Let A be a formula. Then: ¬0 A =Def. A, ¬1 A =Def. ¬A, and ¬k A =Def. ¬(¬k−1 A) (k ∈ N, k > 0). Also, if μ ∈ ∣τ∣, ~0 μ =Def. μ, ~1 μ =Def. ~μ, and ~k μ =Def. ~(~k−1 μ) (k ∈ N, k > 0). If A is an atomic formula p λ(t 1, …, t n), then a formula of the form ¬k p λ(t 1, …, t n) (k ≥ 0) is called a hyper-literal. A formula other than a hyper-literal is called a complex formula.
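The iterated operation ~k can be sketched directly. The helper below is hypothetical and again assumes the FOUR lattice with ~ swapping t and f and fixing ⊥ and ┬; under that assumption ~ is an involution, so for hyper-literals only the parity of k matters on t and f.

```python
# Hypothetical helper for Definition 1's iterated annotations:
# ~^0 mu = mu, ~^k mu = ~(~^{k-1} mu).  Assumes the FOUR lattice
# with ~ swapping t and f and fixing bot and top.
NEG = {"bot": "bot", "t": "f", "f": "t", "top": "top"}

def iter_neg(mu, k):
    """Compute ~^k mu by k-fold application of ~."""
    for _ in range(k):
        mu = NEG[mu]
    return mu
```

Under this reading, a hyper-literal ¬k p μ behaves like the annotated atom p with annotation ~k μ.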

The postulates (axiom schemata and primitive rules of inference) of Mτ are the same as those of the logics Qτ [1], plus the ones listed below [7], where A, B, and C are any formulas whatsoever, p(t 1, …, t n) is a basic formula, and λ, μ, μj are annotational constants.

  (M1) []i(A → B) → ([]i A → []i B), i = 1, 2, …, n

  (M2) []i A → []i[]i A, i = 1, 2, …, n

  (M3) ¬*[]i A → []i¬*[]i A, i = 1, 2, …, n

  (M4) []i A → A, i = 1, 2, …, n

  (M5) \( \frac{A}{{[]_{i} A}} \), i = 1, 2, …, n

  (M6) [] G A ↔ ⋀ i∈G []i A

  (M7) []\( _{G}^{C}A \) → [] G (A ∧ []\( _{G}^{C}A \))

  (M8) []\( _{{\{ i\} }}^{D}A \) ↔ []i A, i = 1, 2, …, n

  (M9) []\( _{G}^{D}A \) → []\( _{G'}^{D}A \) if G ⊆ G′

  (M10) \( \frac{{A \to []_{G} (B \wedge A)}}{{A \to []_{G}^{C} B}} \)

  (M11) ∀x[]i A → []i∀x A, i = 1, 2, …, n

  (M12) ¬*(x = y) → []i¬*(x = y), i = 1, 2, …, n

Mτ is an extension of the logic Qτ. As Qτ contains classical predicate logic, Mτ contains the classical modal logic S5, as well as the multimodal system studied in [7], in at least two directions. So all the usual valid schemes and rules of classical positive propositional logic hold. In particular, the deduction theorem is valid in Mτ, and Mτ contains positive intuitionistic logic.

Theorem 1.

Mτ is non-trivial.

Now we introduce a semantical analysis by using Kripke models [3, 5].

Definition 2.

A Kripke model for Mτ (or Mτ-structure) is a set-theoretical structure K = [W, R 1, R 2, …, R n , I], where W is a nonempty set of elements called ‘worlds’; each R i (i = 1, 2, …, n) is an equivalence relation on W; and I is an interpretation function with the usual properties, except that with each n-ary predicate symbol p we associate a function p I: W n → ∣τ∣.

Given a Kripke model K for the language L of Mτ, the diagram language L(K) is obtained as usual.

Definition 3.

If A is a closed formula of Mτ and w ∈ W, we define the relation K,w ╟ A (read: K,w forces A) by recursion on A:

  1. If A is atomic of the form p λ(t 1, …, t n), then K,w ╟ A iff p I(K(t 1), …, K(t n)) ≥ λ.

  2. If A is of the form ¬k p λ(t 1, …, t n) (k ≥ 1), then K,w ╟ A iff K,w ╟ ¬k−1 p ~λ(t 1, …, t n).

  3. Let A and B be formulas. Then, K,w ╟ (A ∧ B) iff K,w ╟ A and K,w ╟ B; K,w ╟ (A ∨ B) iff K,w ╟ A or K,w ╟ B; K,w ╟ (A → B) iff it is not the case that K,w ╟ A, or K,w ╟ B.

  4. If F is a complex formula, then K,w ╟ ¬F iff it is not the case that K,w ╟ F.

  5. If A is of the form (∃x)B, then K,w ╟ A iff K,w ╟ B x[i] for some i in L(K).

  6. If A is of the form (∀x)B, then K,w ╟ A iff K,w ╟ B x[i] for all i in L(K).

  7. If A is of the form []i B, then K,w ╟ A iff K,w′ ╟ B for each w′ ∈ W such that wR i w′, i = 1, 2, …, n.
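Clauses 1, 2, and 7 of this definition can be sketched for a propositional-style fragment. The encoding below is ours (the paper fixes none of these names): a world assigns each atom a lattice value, negation on an annotated atom moves ~ into the annotation, and []i quantifies over R i-successors; we again assume the FOUR lattice.

```python
# Sketch (ours, propositional fragment only) of clauses 1, 2 and 7 of
# Definition 3 over the FOUR lattice.
NEG = {"bot": "bot", "t": "f", "f": "t", "top": "top"}

def leq(a, b):
    return a == b or a == "bot" or b == "top"

def forces(model, w, formula):
    """model = {'val': {(world, atom): lattice value}, 'R': {i: set of world pairs}}."""
    kind = formula[0]
    if kind == "atom":                       # clause 1: w forces p_lam iff p_I >= lam
        _, atom, lam = formula
        return leq(lam, model["val"][(w, atom)])
    if kind == "neg":                        # clause 2: ¬ p_lam behaves as p_{~lam}
        _, (_, atom, lam) = formula
        return forces(model, w, ("atom", atom, NEG[lam]))
    if kind == "box":                        # clause 7: []_i A holds in all R_i-successors
        _, i, sub = formula
        return all(forces(model, w2, sub)
                   for (w1, w2) in model["R"][i] if w1 == w)
    raise ValueError(f"unknown formula kind: {kind}")

# A one-world model echoing Theorem 6 below: p interpreted as top, q as bot.
model = {"val": {("w", "p"): "top", ("w", "q"): "bot"}, "R": {1: {("w", "w")}}}
```

With p interpreted as ┬, both p ┬ and ¬p ┬ are forced at w (an “inconsistent” world), while q interpreted as ⊥ does not force q ┬.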

Definition 4.

Let K = [W, R 1, R 2, …, R n , I] be a Kripke structure for Mτ. The Kripke structure K forces a formula A (in symbols, K ╟ A) if K,w ╟ A for each w ∈ W. A formula A is called Mτ-valid, or simply valid, if K ╟ A for every Mτ-structure K. We symbolize this fact by ╟ A.

Theorem 2.

Let K = [W, R 1, R 2, …, R n , I] be a Kripke structure for Mτ. Then

  1. If A is an instance of a propositional tautology, then K ╟ A.

  2. If K ╟ A and K ╟ A → B, then K ╟ B.

  3. K ╟ []i(A → B) → ([]i A → []i B), i = 1, 2, …, n.

  4. K ╟ []i A → []i[]i A, i = 1, 2, …, n.

  5. K ╟ []i A → A, i = 1, 2, …, n.

  6. If K ╟ A, then K ╟ []i A, i = 1, 2, …, n.

Theorem 3.

Let K be a Kripke model for Mτ and F a complex formula. Then we do not have simultaneously K,w ╟ ¬F and K,w ╟ F.

Theorem 4.

Let p(t 1, …, t n) be a basic formula and λ, μ, ρ ∈ ∣τ∣. We have ╟ p ⊥(t 1, …, t n); ╟ p λ(t 1, …, t n) → p μ(t 1, …, t n), if λ ≥ μ; and ╟ p λ(t 1, …, t n) ∧ p μ(t 1, …, t n) → p ρ(t 1, …, t n), where ρ = λ ∨ μ.
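These three validities can be checked by brute force over a concrete finite lattice. The check below is our encoding over FOUR, with v playing the role of the value p I assigns at a world: p ⊥ is always forced, p λ entails p μ whenever λ ≥ μ, and joint forcing of p λ and p μ yields p ρ for ρ = λ ∨ μ.

```python
# Brute-force check (our encoding) of the three validities of Theorem 4
# over the FOUR lattice.
ELEMENTS = ["bot", "t", "f", "top"]

def leq(a, b):
    return a == b or a == "bot" or b == "top"

def lub(a, b):
    uppers = [c for c in ELEMENTS if leq(a, c) and leq(b, c)]
    return next(c for c in uppers if all(leq(c, d) for d in uppers))

def check_theorem4():
    for v in ELEMENTS:                  # v plays the role of p_I at a world
        assert leq("bot", v)            # |= p_bot
        for lam in ELEMENTS:
            for mu in ELEMENTS:
                if leq(mu, lam) and leq(lam, v):
                    assert leq(mu, v)               # p_lam -> p_mu when lam >= mu
                if leq(lam, v) and leq(mu, v):
                    assert leq(lub(lam, mu), v)     # conjunction yields the lub
    return True
```

The second assertion is just transitivity of ≤; the third holds because v is an upper bound of {λ, μ} and ρ is the least one.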

Theorem 5.

Let A and B be arbitrary formulas and F a complex formula. Then:

╟ ((A → B) → ((A → ¬*B) → ¬*A)); ╟ (A → (¬*A → B)); ╟ (A ∨ ¬*A); ╟ (¬F ↔ ¬*F); ╟ A ↔ ¬*¬*A; ╟ ¬*∀xA ↔ ∃x¬*A; ╟ (A ∧ B) ↔ ¬*(¬*A ∨ ¬*B); ╟ ¬*∃xA ↔ ∀x¬*A; ╟ (∀xA → B) ↔ ∃x(A → B); ╟ (A ∨ ∃xB) ↔ ∃x(A ∨ B).

Corollary 5.1.

Under the same conditions as in the preceding theorem, we do not have simultaneously K ╟ ¬*A and K ╟ A. The set of all formulas, together with the connectives ∧, ∨, →, and ¬*, has all the properties of classical logic.

Theorem 6.

There are Kripke models K such that, for some hyper-literals A and B and some worlds w, w′ ∈ W, we have K,w ╟ ¬A and K,w ╟ A, and it is not the case that K,w′ ╟ B.

Proof.

Let W = {{a}} and R = {({a}, {a})} (that is, w = {a}), and let p(t 1, …, t n) and q(t′ 1, …, t′ n) be basic (closed) formulas such that I(p) ≡ ┬ and I(q) ≡ ⊥. As ┬ ≥ ┬, it follows that K,w ╟ p ┬(t 1, …, t n). Also, ┬ ≥ ~┬, so p I ≥ ~┬ and therefore K,w ╟ p ~┬(t 1, …, t n). By condition 2 of Definition 3, it follows that K,w ╟ ¬p ┬(t 1, …, t n). On the other hand, as it is false that ⊥ ≥ ┬, it is not the case that q I ≥ ┬, and so it is not the case that K,w ╟ q ┬(t′ 1, …, t′ n).

Theorem 7.

For some systems Mτ there are Kripke models K such that for some hyper-literal formula A and some world wW, we don’t have K,wA nor K, w ╟ ¬A.

Corollary 7.1.

For some systems Mτ there are Kripke models K such that for some hyper-literal formulas A and B, and some worlds w, w′W, we have K,w ╟ ¬A and K,wA and we don’t have K,wB nor K,w ╟ ¬B.

The earlier results show us that there are systems Mτ such that we have “inconsistent” worlds, “paracomplete” worlds, or both.

Now we present a stronger version of these results, linking them with paraconsistent, paracomplete, and non-alethic logics.

Definition 5.

A Kripke model K is called paraconsistent if there are basic formulas p(t 1, …, t n), q(t 1, …, t n) and annotational constants λ, μ ∈ ∣τ∣ such that, for some world w, K,w ╟ p λ(t 1, …, t n), K,w ╟ ¬p λ(t 1, …, t n), and it is not the case that K,w ╟ q μ(t 1, …, t n).

Definition 6.

A system Mτ is called paraconsistent if there is a Kripke model K for Mτ such that K is paraconsistent.

Theorem 8.

Mτ is a paraconsistent system iff #∣τ∣≥ 2.

Proof.

Define a structure K = [{w}, {(w, w)}, I] such that q I = ⊥ and p I = ┬.

It is clear that p I ≥ ┬, and so K ╟ p ┬(t 1, …, t n). Also, p I ≥ ~┬, and so K ╟ p ~┬(t 1, …, t n), i.e., K ╟ ¬p ┬(t 1, …, t n). Finally, it is false that ⊥ ≥ ┬, so it is not the case that q I ≥ ┬, and hence it is not the case that K,w ╟ q ┬(t 1, …, t n).

Definition 7.

A Kripke model K is called paracomplete if there are a basic formula p(t 1, …, t n) and an annotational constant λ ∈ ∣τ∣ such that it is false that K,w ╟ p λ(t 1, …, t n) and it is false that K,w ╟ ¬p λ(t 1, …, t n). A system Mτ is called paracomplete if there is a Kripke model K for Mτ such that K is paracomplete.
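A toy witness for this definition can be built in FOUR (our construction, not the paper’s): interpret p as ⊥, i.e. “no information”. Then neither p t nor ¬p t (which behaves as p ~t = p f) is forced, so the single world is paracomplete.

```python
# Toy paracomplete witness (our construction) in the FOUR lattice:
# p is interpreted as bot at the only world w.
NEG = {"bot": "bot", "t": "f", "f": "t", "top": "top"}

def leq(a, b):
    return a == b or a == "bot" or b == "top"

p_value = "bot"                              # p_I = bot
forces_p_t = leq("t", p_value)               # K,w forces p_t   iff  p_I >= t
forces_not_p_t = leq(NEG["t"], p_value)      # K,w forces ¬p_t  iff  p_I >= ~t = f
```

Since ⊥ dominates neither t nor f, both checks fail, which is exactly the paracompleteness condition.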

Definition 8.

A Kripke model K is called non-alethic if K is both paraconsistent and paracomplete. A system Mτ is called non-alethic if there is a Kripke model K for Mτ such that K is non-alethic.

Theorem 9.

If #∣τ∣ ≥ 2, then there are systems Mτ which are paracomplete and systems Mτ′ which are not paracomplete.

Corollary 9.1.

If #∣τ∣ ≥ 2, then there are systems Mτ which are non-alethic and systems Mτ′ which are not non-alethic.

Theorem 10.

Let U be a maximal non-trivial (with respect to inclusion of sets) subset of the set of all formulas, and let A and B be any formulas. Then: if A is an axiom of Mτ, then A ∈ U; A ∧ B ∈ U iff A ∈ U and B ∈ U; A ∨ B ∈ U iff A ∈ U or B ∈ U; A → B ∈ U iff A ∉ U or B ∈ U; if p μ(t 1, …, t n) ∈ U and p λ(t 1, …, t n) ∈ U, then p ρ(t 1, …, t n) ∈ U, where ρ = μ ∨ λ; ¬k p λ(t 1, …, t n) ∈ U iff ¬k−1 p ~λ(t 1, …, t n) ∈ U; if A ∈ U and A → B ∈ U, then B ∈ U; A ∉ U iff ¬*A ∈ U, and moreover A ∈ U or ¬*A ∈ U; if A is a complex formula, A ∉ U iff ¬A ∈ U, and moreover A ∈ U or ¬A ∈ U; if A ∈ U, then []i A ∈ U.

Proof.

Let us show only 3. In fact, if p μ(t 1, …, t n) ∈ U and p λ(t 1, …, t n) ∈ U, then p μ(t 1, …, t n) ∧ p λ(t 1, …, t n) ∈ U by 2. But p μ(t 1, …, t n) ∧ p λ(t 1, …, t n) → p ρ(t 1, …, t n), where ρ = μ ∨ λ, is an axiom; it follows that this axiom belongs to U, and so p ρ(t 1, …, t n) ∈ U, by 6.

Given a set U of formulas, define U/[]i = {A ∣ []i A ∈ U}, i = 1, 2, …, n. Let us consider the canonical structure K = [W, R i, I], where W = {U ∣ U is a maximal non-trivial set}, and the interpretation function is as usual except that with each n-ary predicate symbol p we associate the function p I: W n → ∣τ∣ defined by p I(t° 1, …, t° n) =Def. ∨{μ ∈ ∣τ∣ ∣ p μ(t 1, …, t n) ∈ U} (such a function is well defined, since p ⊥(t 1, …, t n) ∈ U). Moreover, define R i =Def. {(U, U′) ∣ U/[]i ⊆ U′}.

Lemma 1.

For every predicate symbol p and every maximal non-trivial set of formulas U, we have p λ(t 1, …, t n) ∈ U, where λ = p I(t° 1, …, t° n).

Proof.

It is a simple consequence of the previous theorem, item 5.

Theorem 11.

For any formula A and for any non-trivial maximal set U, we have (K, U) ╟ A iff A ∈ U.

Proof.

Let us suppose that A is p λ(t 1, …, t n) and (K, U) ╟ p λ(t 1, …, t n). By the previous lemma, p μ(t 1, …, t n) ∈ U, where μ = p I(t° 1, …, t° n). It follows also that p I(t° 1, …, t° n) ≥ λ. It is an axiom that p μ(t 1, …, t n) → p λ(t 1, …, t n). Thus, p λ(t 1, …, t n) ∈ U. Now, let us suppose that p λ(t 1, …, t n) ∈ U. By the previous lemma, p μ(t 1, …, t n) ∈ U with μ = p I(t° 1, …, t° n), and it follows that p I(t° 1, …, t° n) ≥ λ. Thus, by definition, (K, U) ╟ p λ(t 1, …, t n). By Theorem 10, ¬k p λ(t 1, …, t n) ∈ U iff ¬k−1 p ~λ(t 1, …, t n) ∈ U. Thus, by Definition 3, (K, U) ╟ ¬k p λ(t 1, …, t n) iff (K, U) ╟ ¬k−1 p ~λ(t 1, …, t n). So, by induction on k, the assertion is true for hyper-literals.

In the other cases, the proof is as in the classical case.

Corollary 11.1.

A is a provable formula of Mτ iff ╟ A.

4 Concluding Remarks

It is quite interesting to observe the role of conflict within a multi-agent system, i.e., how the system may evolve thanks to, despite, or because of conflicts. This concept receives different ‘interpretations’ or characterizations depending on the domain considered. Some considerations regarding it:

  • It is easier not to be in conflict than to be in conflict. The former may mean that the agents are not even interacting. The latter supposes that the agents are within the same context.


  • A conflict between two agents (e.g., two robots) does not seem to be necessarily symmetric: conflict(a, b) does not imply conflict(b, a). When two robots roam a 2D space, for instance, the notion of spatial conflict appears only when one robot attempts to move to the location of the other; the latter does not see this conflict.

  • Are conflicts useful? The answer depends on the problem. To be useful, a conflict must be observed. For example, if an agent thinks about a solution and asks two other agents (experts) for an opinion, and they give contradictory opinions, the former agent can decide, for instance, to consult a third agent.

  • What we learn from a conflict depends on the situation. Learning is possible if agents are aware of the conflict.

  • Conflicts are positive in certain cases, e.g. they may create specific behaviours, create competition, or stimulate inference.

As noted in the introduction, the focus has long been on how to avoid, solve, or get rid of conflicts, even though conflicts can also generate original solutions and enrich the knowledge within a multi-agent system. This work is a contribution in that direction, showing, for instance, that it is unnecessary to try to avoid conflicts; on the contrary, with a logical knowledge representation of conflicts we can manage them mathematically, and so it becomes possible to better understand their nature [2].