Conditional Labelling for Abstract Argumentation
Agents engage in dialogues with the goal of making certain arguments acceptable or unacceptable. To do so, they may put forward arguments, adding them to the argumentation framework. Argumentation semantics can relate a change in the framework to the resulting extensions, but it is not clear, given an argumentation framework and a desired acceptance state for a set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns three propositional formulae to each argument. These formulae describe which arguments the agent should attack in order to make a particular argument in, out, or undecided, respectively. Given a conditional labelling, agents have full knowledge of how the attacks they may raise affect the acceptability of each argument, without having to recompute the overall labelling of the framework for each possible set of attacks.
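To fix ideas, here is a minimal sketch (not the paper's conditional-labelling algorithm) of the kind of in/out/undec labelling the abstract refers to: computing the grounded labelling of an abstract argumentation framework given as a set of arguments and a set of attack pairs. The function name and representation are illustrative assumptions.

```python
def grounded_labelling(args, attacks):
    """Compute the grounded labelling of an abstract argumentation framework.

    args: list of argument names; attacks: set of (attacker, target) pairs.
    Returns a dict mapping each argument to "in", "out", or "undec".
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    label = {a: "undec" for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if label[a] != "undec":
                continue
            # An argument is IN when all of its attackers are OUT
            # (vacuously true for unattacked arguments).
            if all(label[b] == "out" for b in attackers[a]):
                label[a] = "in"
                changed = True
            # An argument is OUT when at least one attacker is IN.
            elif any(label[b] == "in" for b in attackers[a]):
                label[a] = "out"
                changed = True
    return label

# Chain a -> b -> c: a is in, b is out, c is reinstated (in).
print(grounded_labelling(["a", "b", "c"], {("a", "b"), ("b", "c")}))
```

A conditional labelling goes a step further: instead of a single status per argument, it records, as propositional formulae over possible attacks, the conditions under which each status would hold.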
Keywords: Belief Revision, Abstract Argumentation, Propositional Formula, Argumentation Framework, Dialogue Game