Toward Justifying Actions with Logically and Socially Acceptable Reasons

  • Hiroyuki Kido
  • Katsumi Nitta
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7094)


This paper formalizes argument-based reasoning for actions supported by believable reasons, characterized as nonmonotonic consequences, and by desirable reasons, characterized in terms of Pareto optimality and the maximization of social welfare functions. Our unified approach yields a four-layer practical argumentation framework built on a propositional modal language with defaults and on defeasible inference rules associated with practical reasoning. We show that the unified argument-based reasoning justifies an argument whose conclusion is supported by reasons that are Pareto optimal, social-welfare maximizing, and nonmonotonic consequences. Our formalization contributes to extending argument-based reasoning so that it can formally combine reasoning about logical believability and social desirability by drawing on economic notions.
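The two economic notions invoked in the abstract can be illustrated concretely. The following sketch is not the paper's formalism: it simply selects, from a set of candidate actions with hypothetical per-agent utilities, those that are Pareto optimal and those that maximize a utilitarian social welfare function (the sum of individual utilities).

```python
def dominates(u, v):
    """u Pareto-dominates v: at least as good for every agent,
    strictly better for at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_optimal(options):
    """Names of options (name -> utility profile) that no other option dominates."""
    return {name for name, u in options.items()
            if not any(dominates(v, u)
                       for other, v in options.items() if other != name)}

def welfare_maximizers(options):
    """Names of options maximizing the utilitarian social welfare sum_i u_i."""
    best = max(sum(u) for u in options.values())
    return {name for name, u in options.items() if sum(u) == best}

# Hypothetical utilities of two agents over three candidate actions.
actions = {"a1": (3, 1), "a2": (2, 2), "a3": (1, 1)}

print(sorted(pareto_optimal(actions)))     # a3 is dominated by both a1 and a2
print(sorted(welfare_maximizers(actions)))
```

Here a1 and a2 are both Pareto optimal and both maximize the sum of utilities, so either could back a socially acceptable reason in the sense of the abstract; a3 is ruled out on both criteria.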


Keywords: Inference rule · Pareto optimal solution · Pareto optimality · Social welfare function · Argumentation framework





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Hiroyuki Kido¹
  • Katsumi Nitta¹
  1. Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan
