Measuring Inconsistency in Multi-Agent Systems

Abstract

We introduce and investigate formal quantitative measures of inconsistency between the beliefs of agents in multi-agent systems. We start by recalling a well-known model of belief in multi-agent systems, and then, using this model, present two classes of inconsistency metrics. First, we consider metrics that attempt to characterise the overall degree of inconsistency of a multi-agent system in a single numeric value, where inconsistency is understood as individuals within the system holding contradictory beliefs. While such a metric is useful as a high-level indicator of the degree of inconsistency between the beliefs of members of a multi-agent system, it is of limited value for understanding the structure of inconsistency in a system: it gives no indication of the sources of inconsistency. We therefore introduce metrics that quantify, for a given individual, the extent to which that individual is in conflict with other members of the society. These metrics are based on power indices, which were developed within the cooperative game theory community in order to understand the power that individuals wield in cooperative settings.
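
To make the two classes of metrics concrete, here is a minimal sketch under assumptions that are ours rather than the authors': each agent's beliefs are a toy set of propositional literals, the coalitional inconsistency measure `conflicts` simply counts complementary literal pairs, and the per-agent attribution is the Shapley value of that measure, a standard power index (cf. the Shapley inconsistency values of Hunter and Konieczny [12]). The names `beliefs` and `conflicts` are illustrative, not the paper's definitions.

```python
from itertools import permutations

# Toy belief bases: each agent holds a set of propositional literals
# ("p" is a positive literal, "~p" its negation). Illustrative only.
beliefs = {
    "a1": {"p", "q"},
    "a2": {"~p", "q"},
    "a3": {"p", "~q"},
}

def negate(lit):
    """Return the complementary literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def conflicts(coalition):
    """Coalitional inconsistency: number of complementary literal pairs
    in the pooled beliefs of the coalition (a stand-in measure)."""
    pooled = set().union(*(beliefs[a] for a in coalition)) if coalition else set()
    return sum(1 for lit in pooled if lit.startswith("~") and negate(lit) in pooled)

agents = list(beliefs)

# Class 1: a single numeric value for the whole society.
print("overall inconsistency:", conflicts(agents))   # -> 2

# Class 2: a power index. The Shapley value averages each agent's
# marginal contribution to the measure over all orders of arrival,
# so the per-agent "blame" sums to the overall inconsistency.
orders = list(permutations(agents))
blame = {a: 0.0 for a in agents}
for order in orders:
    joined = []
    for a in order:
        blame[a] += conflicts(joined + [a]) - conflicts(joined)
        joined.append(a)
blame = {a: v / len(orders) for a, v in blame.items()}
print("per-agent blame:", blame)   # a2 and a3 carry more blame than a1
```

On this toy input the blame values (1/3, 5/6, 5/6) sum to the overall measure of 2, illustrating why a Shapley-style attribution exposes the sources of inconsistency where the single aggregate number cannot.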

Notes

  1. In the present paper, we will not be concerned with the axiom known as additivity.

  2. Consider the case where \(\Delta = \{\lnot K p \rightarrow q, \lnot K q \rightarrow p\}\). In this case there are two expansions of \(\Delta \): one containing \(p\) but not \(q\), the other containing \(q\) but not \(p\). By contrast, the set \(\{K p\}\) has no expansions. (Both claims are checked mechanically in the sketch after these notes.)

  3. Exactly how they achieve this isn’t relevant here, but in essence they recursively construct a grounded extension [6] so that when the dialogue terminates both agents agree on the acceptability of a common set of beliefs.
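
The existence claims in note 2 can be verified by brute force in a small fragment. The sketch below is our simplification, not the paper's machinery: it assumes the \(K\) operator is applied only to atoms, guesses which atoms are believed, reduces each premise under that guess, and keeps exactly the guesses that reproduce themselves (the stable-expansion fixed point, restricted to atoms). An inconsistent reduct is taken to entail every atom.

```python
from itertools import product

ATOMS = ("p", "q")

def subsets(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(x for x, keep in zip(xs, bits) if keep)
            for bits in product((False, True), repeat=len(xs))]

# A premise is a function of (val, bel): val is a classical valuation of
# the atoms, bel is the guessed set of believed atoms (interpreting K).
# Note 2's first theory, { ~Kp -> q,  ~Kq -> p }:
delta1 = [
    lambda val, bel: ("p" in bel) or val["q"],   # ~Kp -> q
    lambda val, bel: ("q" in bel) or val["p"],   # ~Kq -> p
]
# Note 2's second theory, { Kp }:
delta2 = [lambda val, bel: "p" in bel]

def entailed_atoms(premises, bel):
    """Atoms true in every classical model of the premises reduced under bel."""
    models = [val for val in
              (dict(zip(ATOMS, bits))
               for bits in product((False, True), repeat=len(ATOMS)))
              if all(prem(val, bel) for prem in premises)]
    if not models:                    # inconsistent reduct entails everything
        return frozenset(ATOMS)
    return frozenset(a for a in ATOMS if all(val[a] for val in models))

def expansions(premises):
    """Guesses that reproduce themselves: bel is an expansion (restricted
    to atoms) iff exactly the atoms in bel follow from the reduced premises."""
    return [bel for bel in subsets(ATOMS)
            if entailed_atoms(premises, bel) == bel]

print(expansions(delta1))   # two expansions: {q} alone and {p} alone
print(expansions(delta2))   # no expansions: []
```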

References

  1. Ågotnes T, van der Hoek W, Wooldridge M (2011) Scientia potentia est. In: Proceedings of the Tenth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-2011), Taipei, Taiwan

  2. Besnard P, Hunter A (2008) Elements of argumentation. The MIT Press, Cambridge

  3. Bond AH, Gasser L (eds) (1988) Readings in distributed artificial intelligence. Morgan Kaufmann Publishers, San Mateo

  4. Brewka G, Dix J, Konolige K (eds) (1997) Nonmonotonic reasoning: an overview. Center for the Study of Language and Information

  5. Chalkiadakis G, Elkind E, Wooldridge M (2011) Computational aspects of cooperative game theory. Morgan-Claypool

  6. Dung PM (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif Intell 77:321–357

  7. Fagin R, Halpern JY, Moses Y, Vardi MY (1995) Reasoning about knowledge. The MIT Press, Cambridge

  8. Gottlob G (1992) Complexity results for nonmonotonic logics. J Logic Comput 2:397–425

  9. Grant J, Hunter A (2006) Measuring inconsistency in knowledge bases. J Intell Inf Syst 27:159–184

  10. Grant J, Hunter A (2011) Measuring consistency gain and information loss in stepwise inconsistency resolution. In: Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (LNCS 6717). Springer, Berlin, pp 362–373

  11. Hunter A (2006) How to act on inconsistent news: ignore, resolve, or reject. Data Knowl Eng 57:221–239

  12. Hunter A, Konieczny S (2011) On the measure of conflicts: Shapley inconsistency values. Artif Intell 174:1007–1026

  13. Knight KM (2002) Measuring inconsistency. J Philos Logic 31:77–98

  14. Konieczny S, Lang J, Marquis P (2003) Quantifying information and contradiction in propositional logic through epistemic tests. In: Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI’03), pp 106–111

  15. Konolige K (1986) A deduction model of belief. Pitman Publishing, London; Morgan Kaufmann, San Mateo

  16. Marek W, Truszczynski M (1991) Autoepistemic logic. J ACM 38(3):588–619

  17. Parsons S, Wooldridge M, Amgoud L (2003) Properties and complexity of some formal inter-agent dialogues. J Logic Comput 13(3):347–376

  18. Pigozzi G (2006) Belief merging and the discursive dilemma: an argument-based account to paradoxes of judgment aggregation. Synthese 152(2):285–298

  19. Rahwan I, Simari GR (eds) (2009) Argumentation in artificial intelligence. Springer, Berlin

Acknowledgments

Wooldridge gratefully acknowledges the support of the ERC under Advanced Investigator Grant 291528 (“RACE”).

Author information

Correspondence to M. Wooldridge.

Cite this article

Hunter, A., Parsons, S. & Wooldridge, M. Measuring Inconsistency in Multi-Agent Systems. Künstl Intell 28, 169–178 (2014). https://doi.org/10.1007/s13218-014-0306-3
