Conditional Labelling for Abstract Argumentation

  • Guido Boella
  • Dov M. Gabbay
  • Alan Perotti
  • Leendert van der Torre
  • Serena Villata
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7132)


Agents engage in dialogues with the goal of making certain arguments acceptable or unacceptable. To do so, they may put forward new arguments, adding them to the argumentation framework. Argumentation semantics can relate a change in the framework to the resulting extensions, but it is not clear, given an argumentation framework and a desired acceptance state for a set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns three propositional formulae to each argument. These formulae describe which arguments an agent should attack in order to make a particular argument in, out, or undecided, respectively. Given a conditional labelling, agents have full knowledge of how the attacks they may raise affect the acceptability of each argument, without having to recompute the labelling of the whole framework for each possible set of attacks.
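Conditional labelling builds on the standard in/out/undecided argument labellings for Dung-style frameworks. As background only (this is not the paper's conditional-labelling algorithm), here is a minimal sketch of computing the grounded labelling of an abstract argumentation framework; the function name and the pair-based representation of attacks are illustrative assumptions.

```python
def grounded_labelling(arguments, attacks):
    """Compute the grounded labelling of a Dung framework.

    arguments: iterable of argument names
    attacks:   set of (attacker, target) pairs
    Returns a dict mapping each argument to 'in', 'out', or 'undec'.
    """
    # Index the attackers of each argument for quick lookup.
    attackers = {a: set() for a in arguments}
    for attacker, target in attacks:
        attackers[target].add(attacker)

    # Start with everything undecided and propagate labels to a fixpoint.
    label = {a: 'undec' for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != 'undec':
                continue
            # IN: every attacker (possibly none) is already labelled OUT.
            if all(label[b] == 'out' for b in attackers[a]):
                label[a] = 'in'
                changed = True
            # OUT: some attacker is already labelled IN.
            elif any(label[b] == 'in' for b in attackers[a]):
                label[a] = 'out'
                changed = True
    return label

# Example: a attacks b, b attacks c.
print(grounded_labelling(['a', 'b', 'c'], {('a', 'b'), ('b', 'c')}))
# -> {'a': 'in', 'b': 'out', 'c': 'in'}
```

In a mutual-attack cycle (x attacks y, y attacks x) neither label can be propagated, so both arguments stay undecided; it is exactly this kind of dependence that the paper's three per-argument formulae capture, telling an agent which attacks would flip a label without recomputing the whole labelling.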


Keywords: Belief Revision, Abstract Argumentation, Propositional Formula, Argumentation Framework, Dialogue Game





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Guido Boella (1)
  • Dov M. Gabbay (2)
  • Alan Perotti (1)
  • Leendert van der Torre (3)
  • Serena Villata (4)
  1. Dipartimento di Informatica, Università di Torino, Italy
  2. King’s College London, UK
  3. ICR, University of Luxembourg, Luxembourg
  4. INRIA Sophia Antipolis, France
