Dynamic epistemic logics for abstract argumentation

This paper introduces a multi-agent dynamic epistemic logic for abstract argumentation. Its main motivation is to build a general framework for modelling the dynamics of a debate, which entails reasoning about goals, beliefs, as well as policies of communication and information update by the participants. After locating our proposal and introducing the relevant tools from abstract argumentation, we proceed to build a three-tiered logical approach. At the first level, we use the language of propositional logic to encode states of a multi-agent debate. This language allows us to specify which arguments any agent is aware of, as well as their subjective justification status. We then extend our language and semantics to those of epistemic logic, in order to model individuals' beliefs about the state of the debate, which includes uncertainty about the information available to others. As a third step, we introduce a framework of dynamic epistemic logic and its semantics, which is essentially based on so-called event models with factual change. We provide completeness results for a number of systems and show how existing formalisms for argumentation dynamics and unquantified uncertainty can be reduced to their semantics. The resulting framework allows reasoning about subtle epistemic and argumentative updates, such as the effects of different levels of trust in a source, and, more generally, about the epistemic dimensions of strategic communication.


Introduction
When engaging in a debate, we not only exchange arguments; we also reason about the information available to others, and both things play crucial roles. On the one hand, acquiring and communicating new arguments can shift one's point of view on the issue of the debate, or make it more robust. On the other hand, beliefs about someone else's background information determine which arguments one is willing to put on the table and in which order, as in a game of incomplete information.
To understand how argumentation unfolds in real-life debates we need to reason, at least, about goals, beliefs, and information change. The latter involves communication moves by the speaker (sender), i.e. choosing and disclosing a certain piece of information, and information updates by the hearer (receiver), i.e. incorporating that piece into her knowledge base. 1 Our running example illustrates how strongly these elements interact with each other.
Example 1 Charlie wants to convince his mother that he has the right to have a chocolate candy (a). Mom rebuts that too much chocolate is not good for his teeth (b). Charlie may counterargue that he hasn't had chocolate since yesterday (d). Unfortunately for him, Mom saw him grabbing chocolate from the pantry just a few hours ago (e); by the way, she wrongly thinks that Charlie noticed this. Alternatively, Charlie may quote scientific evidence from a journal paper on Pscience claiming that eating chocolate is never too much (c). Mom does not know that this paper has been retracted (f) and, in principle, this would be a safe move for Charlie. 2 Charlie's goal is to make a justified in the eyes of his mother. To achieve this goal he needs to rebut b. He has several options to do so: he may put forward d, or c, or both, i.e. he has to select a communication move. To choose his strategy, he needs clues about Mom's background information, i.e. he has to form beliefs about her beliefs. Finally, success also depends on Mom's attitude towards the information she receives, i.e. her updating policy.
Logical languages and semantics provide a powerful tool to reason about these aspects of argumentation. 3 Here we aim to show that dynamic epistemic logic (DEL) can serve as a general framework to deal with many conceptual aspects of argumentation which are of interest in general argumentation theory and its more recent developments in AI and computer science, specifically in the study of computational models of argument (see Sect. 8).
We can see the language of DEL as structured in three layers. The first layer consists of the propositional language. The one we adopt enables us to encode the state of a multi-agent debate, which semantically constitutes a propositional valuation. Using tools from abstract argumentation (Dung 1995), such states are modelled here as multi-agent argumentation frameworks (MAF). They include (a) the description of a universal argumentation framework consisting of all the arguments (and conflicts among them) that are potentially available to the agents, and (b) the specific information of each agent, i.e. the part of the universal framework each agent is aware of. Languages of propositional logic are widely used to encode argumentation frameworks; see Besnard et al. (2020) for a survey. In many cases such encodings employ minimal resources as they are designed with efficiency in mind, e.g. to reduce computational problems in abstract argumentation to SAT-solving problems (Cerutti et al. 2013). The language and semantics we adopt are not tailored for computational purposes and are rather rich instead. On the other hand, they allow us to encode fine-grained argumentative notions such as the agents' subjective justification status of specific arguments, which, as we will see, is needed to talk about their goals.
The modal part of the language constitutes the second layer and includes epistemic (resp. doxastic) operators for knowledge (resp. belief). With these operators it is possible to express individual attitudes at any level of nesting, such as the second-level attitude 'Charlie believes that Mom believes that argument a is justified for Charlie'. At this stage, the language is interpreted in standard Kripke-style semantics where states are MAFs. The plurality of states serves to capture the uncertainty of agents about the actual state of the debate. As mentioned, modelling uncertainty is relevant to analyze the strategic aspects of argumentation. Recent approaches in formal argumentation model uncertainty by means of incomplete argumentation frameworks (Baumeister et al. 2018a, b), control argumentation frameworks (Dimopoulos et al. 2018), and opponent models (Oren and Norman 2009; Rienstra et al. 2013; Hadjinikolis et al. 2013; Thimm 2014; Black et al. 2017). These approaches provide efficient solutions for computational and application purposes, such as building automated persuasion systems (Hunter 2018). Our goal here is mainly one of conceptual analysis, for which we seek to achieve generality. Indeed, we show in Sect. 8 that it is possible to translate the central notions of these approaches by means of our language and semantics. Moreover, having the expressive power for talking about epistemic attitudes at any level, our language is able to frame agents' goals of complex kinds. In our running example, Charlie's goal amounts to inducing Mom to believe that a is justified, i.e. a first-level attitude. However, we shall see in Sect. 7 that goals and strategies for action may entail more articulated nestings. Furthermore, although we frame our main examples in contexts of strategic and persuasive argumentation, this framework is not conceptually limited to such contexts.
Other uses of argumentation entail different kinds of goals but, insofar as they can be phrased in terms of individual or collective beliefs, the DEL approach is useful there too. This holds, for example, for collective inquiry, where the aim is to reach common knowledge or shared belief. 4

The third layer of the language includes dynamic modalities to reason about the effect of argumentative actions (e.g. communicating an argument) and different belief updates by the agents. Here again, while the dynamics of argument communication is the focus of a well-established tradition in abstract argumentation (see Doutre and Mailly 2018 for a survey), belief updates are mostly confined to the tradition going from AGM belief revision (Alchourrón et al. 1985) to DEL (van Ditmarsch et al. 2007; van Benthem 2011). To the best of our knowledge, there is no unified logical framework for treating both aspects. 5 A general framework for reasoning about argumentative and epistemic actions becomes relevant insofar as agents are liable to revise their knowledge base in different ways, as is the case for Mom in our running example. For this purpose, we use a rather expressive language, that of DEL with factual change (van Ditmarsch et al. 2005; van Benthem et al. 2006), which comes at the price of a blow-up in computational complexity. 6 Modelling techniques from DEL provide, however, two additional features which are relevant in this context and left opaque in belief revision: (1) expressing higher-order beliefs of agents and (2) reasoning explicitly, in the object language, about how agents perceive changes.

4 The influential classification by Walton (1984) and Walton and Krabbe (1995) distinguishes six types of dialogical contexts depending on their respective goals, namely persuasion, negotiation, information seeking, deliberation, inquiry and quarrel. Some of them require other conceptual ingredients to be framed, such as desires and intentions, but beliefs are essential for all of them.

5 Some works (Booth et al. 2013; de Saint-Cyr et al. 2016, among others) integrate abstract argumentation with belief revision theory to incorporate an epistemic dimension.

The rest of this paper is organized as follows. In Sect. 2 we illustrate the background and the general motivations for our work. Section 3 presents the preliminary tools from abstract argumentation and introduces the notion of MAF. There are indeed several alternative ways to represent a multi-agent scenario of debate. Here we take a specific option and leave critical discussion of other possibilities to Sect. 9. In Sect. 4 we introduce a propositional language to encode MAFs and prove soundness for this encoding in Proposition 1. In Sect. 5, we develop the epistemic fragment for reasoning about knowledge and belief in abstract argumentation. We introduce the general semantics of epistemic argumentative models (Definition 8). After this, we isolate specific subclasses of models that capture a number of constraints on the awareness of arguments and attacks, as well as on epistemic accessibility. Then we provide axiomatisations for these subclasses and show their soundness and completeness in Theorem 1. In Sect. 6 we introduce the full language of DEL for argumentative models. Semantics are given in terms of event models and product updates as in Baltag and Moss (2004). Here we show how to model basic communication moves and information updates under full trust. We then provide completeness results via reduction axioms (Theorem 2). In Sect. 7, we exploit event models to encode the effects of more subtle policies of communication and information update under mixed trust. In Sect. 8 we show how this framework relates to other formalisms developed in the area of computational argumentation. We conclude with Sect. 9, by discussing conceptual alternatives to our modelling as well as open problems and future work. Given the length of the proofs of most of our results, and the substantial amount of tools they involve, we leave them for the final "Appendix", where we also prove additional results for an extended modal language.

Historical background and general motivations
By bringing together two different formal traditions, epistemic modal logic and abstract argumentation, we aim not only to provide results of interest for both, but also to show that their respective toolboxes provide powerful conceptual resources to think both traditions in a different light. At least since Aristotle, logic and the study of argumentation have run along separate lines, the latter being the exclusive competence of rhetoric. This separation contributed to crystallizing the notion of deductive inference from classical logic as the gold standard of correct reasoning. Classical inference is non-defeasible and typically abstracts away from the dialogical/adversarial dimension in which real-life argumentation takes place. From the philosophical side, major criticisms of this paradigm came in the twentieth century from the works of Toulmin (2003), Perelman and Olbrechts-Tyteca (1958), and Hamblin (1970). Yet, formal research was still dominated by traditional approaches, at least until the new-born field of artificial intelligence undertook modelling human-like reasoning, and eventually converged in the definition of systems of non-monotonic logics (Reiter 1980) and defeasible reasoning (Pollock 1987, 1991). A turning point was the introduction of abstract argumentation by Dung (1995). Here the main tool is the argumentation framework, i.e. a directed graph which represents a debate at an abstract level, where arguments are nodes and attacks from one argument to another, e.g. undercuts or rebuttals, are directed edges. The key semantic notion in abstract argumentation is that of a solution, i.e. a set of arguments that constitutes an acceptable opinion as the outcome of a debate. It turns out that the most relevant semantics for non-monotonic and defeasible reasoning can be expressed in terms of solution concepts for argumentation frameworks (Dung 1995), which thus provide a powerful mathematics for defeasible reasoning in dialogical scenarios.
Abstract argumentation can be seen as a very general theory of conflict that, in the words of Dung, captures the fact that the way humans argue is based on a very simple principle which is summarized succinctly by an old saying: "The one who has the last word laughs best" (Dung 1995, p. 322).
For our purposes, argumentation frameworks are a first adequate building block to model scenarios like Example 1, where solution concepts provide the essentials for defining agents' (defeasible) justification of an argument and their goals.
From the beginning of the 1980s, in the wake of the "dynamic turn" pushed by the introduction of propositional dynamic logic (Fischer and Ladner 1979), logicians have dedicated increasing interest to information change, the study of how information states transform under new data. The early approach that dominated the field was AGM belief revision (Alchourrón et al. 1985), later joined by DEL (Plaza 1989; Gerbrandy and Groeneveld 1997). Dynamic epistemic logics, endowed with plausibility models and operators of conditional belief, allow a systematic treatment of AGM-style belief revision and can model a wide range of information updates (Baltag and Smets 2008). A dominant part of the work in both areas has been shaped by a normative approach to the study of information change. AGM belief revision typically focuses on postulates encoding the properties that an update operation should satisfy in order to be considered rational. Although DEL has the flexibility to model a wide range of epistemic transformations, including the effects of lying and deception (Baltag and Moss 2004; van Ditmarsch et al. 2007), it is fair to say that the mainstream focus has been the update of information under new evidence, where the latter is intended as truthful information made available to the agent. The typical belief upgrades studied in DEL applied to belief revision, such as public announcement !P, lexicographic upgrade ⇑P and minimal upgrade ↑P, implicitly assume that the source of information is trusted as infallible (public announcement) or at least believed to be trustworthy (minimal upgrade) (Rodenhäuser 2014). However, most situations of real-life information exchange among individuals are of mixed trust: the source of information is taken to be trustworthy to a limited, or at least context-dependent, extent: we may trust Professor Bertrand Russell on logic matters, probably less so when he predicts the outcome of the next horse race.
With the exception of Rodenhäuser (2014), mixed trust of this and other kinds has received limited attention in DEL. We will handle situations of mixed trust with our formal machinery in Sect. 7.
From a normative perspective, many interesting real-life mechanisms of information update are deemed "descriptive" and left to psychologists, when not discarded as reasoning flaws of an imperfect reasoner. This holds for confirmation bias (Wason 1960), more adequately called myside bias (Perkins et al. 1986), that is, the tendency to evaluate strictly information disconfirming our prior opinions and, vice versa, to filter and search loosely for confirming evidence, and for the operation by which we reduce cognitive dissonance upon receiving information which is inconsistent with our prior beliefs (Festinger 1957). Scholars in logic can hardly be blamed for this attitude, since it is supported by most psychology of reasoning, as the extensive debate on, e.g., the Wason selection task witnesses (Wason 1966). More recently, Mercier and Sperber's argumentative theory of reasoning advances a different view, according to which these purported flaws are rather features of reasoning, having an evolutionary explanation in the social context of human communication (Mercier and Sperber 2011, 2017). The argumentative theory of reasoning is a naturalized approach that sees reasoning as a specific cognitive module which "evolved for the production and evaluation of arguments in communication" (Mercier and Sperber 2011, p. 58) rather than to perform sound logical and probabilistic inferences, or to enhance individual cognition. Seen from this angle, the myside bias serves the goals of convincing others and maintaining epistemic vigilance. Indeed, what we often blame as a bad attitude in everyday confrontations is a common, and mostly healthy, practice in scientific debate over new theories and explanations (Kelly 2008). In general, an argumentation-based approach to reasoning and communication can explain collective dynamics like groupthink and opinion polarization.
When individuals with similar opinions on a given issue discuss, they tend to mutually reinforce their views by providing each other novel and persuasive arguments in the same direction. 7 A further step in this direction is to investigate the triggering effect of more subtle mechanisms of information update, akin to the myside bias. Sect. 7 shows that DEL can be used for this purpose. Indeed, the notion we characterize as sceptic update provides one possible way of understanding biased assimilation of new arguments. Before getting there, however, a careful logical construction is needed, which we begin in the next section.

Multi-agent argumentation frameworks
The fundamental notion we employ is that of an argumentation framework, which is no more and no less than a directed graph.

Definition 1 (Argumentation framework (Dung 1995)) An argumentation framework (AF) is a pair F = (A, R), where A ≠ ∅ is a set of arguments and R ⊆ A × A is called the attack relation. We adopt the infix notation aRb to abbreviate (a, b) ∈ R. Given a set of arguments B ⊆ A, we denote by B+ the set of arguments attacked by B, that is B+ := {a ∈ A | ∃b ∈ B: bRa}.
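As an illustration, B+ can be computed directly from the definition. The sketch below (Python, with hypothetical helper names) represents an AF as a set of arguments and a set of attack pairs, using the attacks of our running example as read off Example 1.

```python
def attacked_by(B, R):
    """B+ : the set of arguments attacked by some member of B (Definition 1)."""
    return {a for (b, a) in R if b in B}

# The AF of Example 1, reading the attacks off the running example:
# b rebuts a, d counterargues b, e attacks d, c attacks b, f attacks c.
A = {"a", "b", "c", "d", "e", "f"}
R = {("b", "a"), ("d", "b"), ("e", "d"), ("c", "b"), ("f", "c")}

print(attacked_by({"e", "f"}, R))  # the arguments attacked by {e, f}: d and c
```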
An AF represents a full debate seen from a third-person point of view, where all potential arguments and attacks are on the table. Clearly, at a given moment of a debate, each participant is aware of a specific subset of arguments and attacks, i.e. her subjective information about the debate. This calls for the definition of a multi-agent AF. A number of alternative options are available in the literature, and many more are conceivable. Each choice depends on specific assumptions about the common ground of the debate and the awareness constraints on the agents' information. In our approach we assume the following: (a) the set of arguments that are potentially available to agents is finite; (b) it is fixed in advance; (c) there is an objective matter of fact, independent of subjective views, as to whether an argument attacks another; (d) agents can only be aware of arguments in set (a), i.e. there are no non-existing or virtual arguments (cf. Schwarzentruber et al. 2012; Rienstra et al. 2013); (e) agents can be aware of an attack between a and b only if they are aware of both a and b; (f) if an agent is aware of an attack then this attack holds; (g) if an objective attack holds between two arguments and some agent is aware of both, then she is also aware of the attack.
Together, (f) and (g) imply that agents have a (locally) sound and complete awareness of attacks (SCAA). In general, each of these choices has alternatives, and this gives a very large combinatorics of design possibilities, which we critically discuss in Sect. 9. It may seem at first sight that constraints (a)-(g) impose strict limitations on the agents' uncertainty, but we shall see (Sect. 5) that this is not quite so, since the modal component of our framework allows us to recapture all sorts of uncertainty. Based on our assumptions, we define a multi-agent argumentation framework as follows:

Definition 2 (Multi-agent argumentation framework) A multi-agent argumentation framework (MAF) for a non-empty and finite set of agents Ag is a 4-tuple MAF = (A, R, {A_i}_{i∈Ag}, {E_1, ..., E_n}), where (A, R) is a finite AF (the universal argumentation framework, UAF), A_i ⊆ A is the set of arguments agent i is currently aware of, and {E_1, ..., E_n} is a specific enumeration of the subsets of A, which we assume as fixed from now on. Given a MAF and an agent i ∈ Ag, agent i's partial information is defined as (A_i, R_i), where R_i := R ∩ (A_i × A_i).

Having A and R finite and fixed captures the constraints from (a) to (c). Constraint (d) amounts to A_i ⊆ A. Finally, the definition of R_i subsumes (e)-(g). The enumeration {E_1, ..., E_n} of ℘(A) is an important device for encoding, the use of which will be clarified in Sect. 4. Figure 1 provides a pictorial representation of a two-agent MAF describing Example 1.
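Under the SCAA assumption, an agent's partial information is obtained by simply restricting the universal framework to her awareness set. A minimal sketch (Python, hypothetical names), applied to Mom's awareness set from Example 1:

```python
def restrict(A_i, R):
    """Agent i's partial information (A_i, R_i), with R_i = R ∩ (A_i × A_i),
    as dictated by the SCAA constraints (e)-(g) of Definition 2."""
    return A_i, {(a, b) for (a, b) in R if a in A_i and b in A_i}

R = {("b", "a"), ("d", "b"), ("e", "d"), ("c", "b"), ("f", "c")}
A_2 = {"a", "b", "e"}       # Mom's awareness set in Example 1
# Only the attack (b, a) relates two arguments Mom is aware of:
print(restrict(A_2, R))
```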
Solution concepts from abstract argumentation are a key to subjective justification and goals. A solution is a set of arguments that meets intuitive constraints to constitute an acceptable point of view. 8

Fig. 1 A MAF for Example 1 (Charlie and Mom). The universal argumentation framework consists of nodes a-f and the corresponding attacks, as described in the Example. Agent 1 (Charlie) is aware of the entire universal argumentation framework (area in blue) while agent 2 (Mom) is only aware of {a, b, e} and the attacks between them (red ellipses). We omit the representation of an enumeration of ℘(A).

Several solution concepts have been introduced by Dung (1995) and subsequent work in abstract argumentation; see Baroni et al. (2018) for an extensive state of the art. For the sake of presentation, we focus on preferred solutions, but our approach can be straightforwardly extended to other admissibility-based semantics (i.e., grounded, complete and stable). 9

Definition 3 (Defence and preferred solutions) Given an AF F = (A, R), a set of arguments B ⊆ A, and an argument a ∈ A: B defends a iff for every c ∈ A: if cRa then c ∈ B+. Moreover, B is said to be a complete solution iff (1) it is conflict-free, i.e. B ∩ B+ = ∅, and (2) it contains precisely the arguments that it defends, i.e. b ∈ B iff B defends b. B is a preferred solution iff it is a maximal (w.r.t. set inclusion) complete solution. Given an AF F = (A, R), we denote by Pr(F) the set of all its preferred solutions.
In the UAF of Fig. 1, the only preferred solution is {b, e, f}. This also corresponds to agent 1's preferred solution, as his awareness set A_1 coincides with the entire framework. When we relativize to agent 2's awareness set A_2, we obtain instead {b, e} as the unique preferred solution. An AF may have more than one preferred solution. Plurality of solutions allows us to define, following Wu and Caminada (2010), the fine-grained justification status of an argument relative to an AF. The latter is key to express graded notions of acceptability (Beirlaen et al. 2018; Baroni et al. 2019) for reasoning about agents' goals and the degree of their opinion about the debated issue. 10 Moreover, it provides an elegant way of dealing with the phenomenon of floating conclusions (see Wu and Caminada 2010 for details).

8 Alternative names for solutions in the literature are extensions or semantics, which are almost interchangeable. The latter are indeed more standard in the field of abstract argumentation. We opt for the more neutral "solution" to avoid confusion with homonymous notions in logic. 9 Research with methods of experimental psychology (Rahwan et al. 2010) suggests that preferred solutions are, among those introduced by Dung (1995), the best predictor for human argument acceptance. More extended experiments by Cramer and Guillaume (2018) confirm this finding, but further speak in favour of so-called naive-based semantics such as CF2 (Baroni et al. 2005). However, the authors carefully warn that all the results may be influenced by the specific thematic contexts of the natural-language arguments chosen in the experimental setting (news reports, arguments based on scientific publications, and arguments based on the precision of a calculation tool). In particular, only one context (scientific publications) was used for the comparison between preferred and CF2 solutions. 10 This can be seen as a generalization of the classical concepts of credulous/sceptical acceptance and is probably the most immediate way to define graded acceptability (Baroni et al. 2019) of arguments.
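For the small frameworks used in this paper, Definition 3 can be checked by brute force over all subsets of A. The sketch below (Python, hypothetical names; exponential in |A|, so only suitable for illustration) recovers the unique preferred solution {b, e, f} of the UAF of Fig. 1 and {b, e} for Mom's restricted framework.

```python
from itertools import combinations

def attacked_by(B, R):
    """B+ : the arguments attacked by some member of B."""
    return {a for (b, a) in R if b in B}

def defends(B, a, A, R):
    """B defends a iff every attacker of a is counter-attacked by B."""
    return all(c in attacked_by(B, R) for c in A if (c, a) in R)

def complete_solutions(A, R):
    """All complete solutions: conflict-free sets containing exactly what they defend."""
    subsets = [set(c) for r in range(len(A) + 1) for c in combinations(sorted(A), r)]
    return [B for B in subsets
            if not (B & attacked_by(B, R))                      # conflict-free
            and B == {a for a in A if defends(B, a, A, R)}]     # fixed point

def preferred_solutions(A, R):
    """Maximal (w.r.t. set inclusion) complete solutions."""
    cs = complete_solutions(A, R)
    return [B for B in cs if not any(B < C for C in cs)]

A = {"a", "b", "c", "d", "e", "f"}
R = {("b", "a"), ("d", "b"), ("e", "d"), ("c", "b"), ("f", "c")}
print(preferred_solutions(A, R))                       # [{'b', 'e', 'f'}]
print(preferred_solutions({"a", "b", "e"}, {("b", "a")}))  # [{'b', 'e'}]
```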
We follow the extension-based characterization of this notion provided by Baroni et al. (2018). 11

Definition 4 (Fine-grained justification status) Given an AF F = (A, R) and an argument a ∈ A, then a is said to be:
strongly (or sceptically) accepted iff ∀E ∈ Pr(F): a ∈ E;
weakly accepted iff ∃E ∈ Pr(F): a ∈ E, ∃E ∈ Pr(F): a ∉ E, and ∀E ∈ Pr(F): a ∉ E+;
weakly rejected iff ∃E ∈ Pr(F): a ∈ E+, ∃E ∈ Pr(F): a ∉ E+, and ∀E ∈ Pr(F): a ∉ E;
strongly rejected iff ∀E ∈ Pr(F): a ∈ E+;
and borderline otherwise. 12

Note that the justification status of an argument is always relative to an AF F = (A, R), but we omit an explicit reference to F when the context is clear enough. Again, the notion can be straightforwardly relativised to agents. For instance, given MAF = (A, R, {A_i}_{i∈Ag}, {E_1, ..., E_n}), we say that a ∈ A is strongly accepted by agent j iff a ∈ A_j and a is strongly accepted w.r.t. (A_j, R_j). As an example, argument b of Fig. 1 is strongly accepted by 1 and 2, and argument a is strongly rejected by both agents.
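Definition 4 translates directly into a case analysis over the preferred solutions. The sketch below (Python, hypothetical names) takes the list of preferred solutions and the attack relation and returns the status of an argument; on the UAF of Fig. 1 it reproduces the statuses mentioned above for b and a.

```python
def status(a, preferred, R):
    """Fine-grained justification status of argument a (Definition 4),
    given the preferred solutions of an AF with attack relation R."""
    plus = [{x for (b, x) in R if b in E} for E in preferred]  # E+ for each E
    in_all   = all(a in E for E in preferred)
    in_some  = any(a in E for E in preferred)
    att_all  = all(a in Ep for Ep in plus)
    att_some = any(a in Ep for Ep in plus)
    if in_all:
        return "strongly accepted"
    if att_all:
        return "strongly rejected"
    if in_some and not att_some:
        return "weakly accepted"
    if att_some and not in_some:
        return "weakly rejected"
    return "borderline"

# The UAF of Fig. 1 has the unique preferred solution {b, e, f}:
R = {("b", "a"), ("d", "b"), ("e", "d"), ("c", "b"), ("f", "c")}
preferred = [{"b", "e", "f"}]
print(status("b", preferred, R))   # strongly accepted
print(status("a", preferred, R))   # strongly rejected
```

Note how the mutual-attack framework with Pr = {{a}, {b}} makes a borderline: a belongs to some preferred solution but is also attacked by another, so none of the four definite clauses applies.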

Encoding argumentative notions
Logical languages are a general tool to describe mathematical structures, and multi-agent AFs are one such structure. Compared to others, a propositional language has minimal descriptive power, though. 13 However, it turns out that, in the finite case, its expressivity is sufficient for our purpose of encoding the notions introduced in the previous section. 14 Furthermore, since we construct a Kripke semantics where multi-agent AFs are states (Sect. 5), a propositional language provides a natural fit with the techniques of epistemic logic.
11 The original definition by Wu and Caminada (2010) is presented in terms of labellings (Caminada 2006; Caminada and Gabbay 2009). Both ways are equivalent (Baroni et al. 2018). 12 The class of borderline arguments allows for further distinctions (Wu and Caminada 2010). We overlook them here as it is not our primary interest to provide a full classification. In the rest of our example we indeed only make use of strong acceptance and strong rejection, which are the only possible statuses of an argument in a well-founded graph such as that of Fig. 1. 13 There is already a significant amount of work on encoding abstract argumentation semantics with logical languages. Typical candidates are propositional logic (Besnard and Doutre 2004; Besnard et al. 2014; Doutre et al. 2014, 2017), modal logic (Grossi 2010a, b; Caminada and Gabbay 2009), first-order logic (de Saint-Cyr et al. 2016) and second-order logic (Dvořák et al. 2012). 14 To the best of our knowledge, ours is the first encoding of the (fine-grained) justification status of an argument in propositional logic.

The set of propositional variables V^A_Ag, where A is a set of arguments (intuitively, the domain of the UAF) and Ag is a set of agents, is defined as the union of the following sets:
{a⇀b | a, b ∈ A}; O := {aw_i(a) | i ∈ Ag, a ∈ A}; and {a_{E_k} | a ∈ A, 1 ≤ k ≤ n}, where n = |℘(A)|.

Each variable a⇀b reads "argument a attacks b" and aw_i(a) stands for "agent i is aware of a". The informal reading of the third kind of variable, a_{E_k}, is "argument a belongs to subset E_k". These variables are needed because the definition of (fine-grained) justification status quantifies over sets (Definition 4). The language L(V^A_Ag) is built from V^A_Ag using Boolean connectives ¬, ∧, ∨, → and ↔ as usual. The semantics of this propositional language is defined, as standard, by means of valuations of its propositional variables. Given a valuation v ⊆ V^A_Ag and a propositional variable p, we say that p is true at v iff p ∈ v. A valuation recursively determines the truth value of any formula φ ∈ L(V^A_Ag) in the usual way; v ⊨ φ stands for "φ is true at v".

Definition 5 (Associated valuation and theory of a MAF) Given MAF = (A, R, {A_i}_{i∈Ag}, {E_1, ..., E_n}), its associated valuation v_MAF ⊆ V^A_Ag is defined as:

v_MAF := {a⇀b | aRb} ∪ {aw_i(a) | i ∈ Ag, a ∈ A_i} ∪ {a_{E_k} | 1 ≤ k ≤ n, a ∈ E_k}.

Furthermore, the following Boolean formula Th_MAF, called the theory of MAF, encodes MAF, in the sense that v_MAF is the unique valuation such that v_MAF ⊨ Th_MAF:

Th_MAF := ⋀_{p ∈ v_MAF} p ∧ ⋀_{p ∈ V^A_Ag \ v_MAF} ¬p.
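The associated valuation is a purely mechanical object, and it can help to see it built concretely. The sketch below (Python) renders the variables a⇀b, aw_i(a) and a_{E_k} with our own hypothetical string encodings 'a>b', 'aw_i(a)' and 'a@Ek', and constructs v_MAF for a tiny two-argument MAF.

```python
def associated_valuation(R, awareness, enum):
    """v_MAF as a set of variable names (Definition 5). The string renderings
    'a>b', 'aw_i(a)' and 'a@Ek' are our own hypothetical encoding of the
    variables a⇀b, aw_i(a) and a_{E_k}."""
    v  = {f"{a}>{b}" for (a, b) in R}                                # attack variables
    v |= {f"aw_{i}({a})" for i, Ai in awareness.items() for a in Ai} # awareness variables
    v |= {f"{a}@E{k}" for k, Ek in enumerate(enum, 1) for a in Ek}   # subset variables
    return v

# A tiny MAF with A = {a, b}, R = {(b, a)}; agent 1 is aware of both
# arguments while agent 2 is only aware of a:
enum = [set(), {"a"}, {"b"}, {"a", "b"}]   # an enumeration of ℘(A)
v = associated_valuation({("b", "a")}, {1: {"a", "b"}, 2: {"a"}}, enum)
# A propositional variable is true at v_MAF iff it belongs to v_MAF:
print("b>a" in v, "aw_2(b)" in v)   # True False
```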
For what follows it is relevant to note that not every valuation is the associated valuation of some MAF. The reason is that subset variables may fail to represent a proper enumeration of subsets, in the sense of the following definition.
Definition 6 Let A be a finite set of arguments with |℘(A)| = n. We say that a valuation v ⊆ V^A_Ag represents an enumeration of ℘(A) iff for all k, m with 1 ≤ k < m ≤ n: {a ∈ A | a_{E_k} ∈ v} ≠ {a ∈ A | a_{E_m} ∈ v}.

The inequality of two sets E_k and E_m can be expressed in our propositional language as ⋁_{a∈A} ¬(a_{E_k} ↔ a_{E_m}). This allows us to encode the representation of an enumeration as the conjunction of these formulas over all pairs k < m. Most importantly, based on this language and semantics we can provide encodings for the relevant notions introduced in the previous section, as in the following list. The shorthand E_k ⊑ E_l (resp. E_k ⊏ E_l) stands for "E_k is a subset (resp. a proper subset) of E_l". conf_free_i(E_k) (resp. complete_i(E_k), preferred_i(E_k)) means "the set E_k is conflict-free (resp. complete, preferred) for agent i (i.e. w.r.t. (A_i, R_i))". stracc_i(a) encodes "argument a is strongly accepted by agent i" (Definition 4). Analogously, wekacc_i(a), strrej_i(a), wekrej_i(a) and border_i(a) stand respectively for "argument a is weakly accepted, strongly rejected, weakly rejected, borderline for agent i". 15 The following proposition shows that our encoding is sound, following the satisfiability approach of Besnard et al. (2014), in the sense that MAF has a given property if and only if its encoding is true at v_MAF.
Proposition 1 Let MAF be a multi-agent argumentation framework and let L(V^A_Ag) be the propositional language for MAF. The following holds, where 1 ≤ k, l ≤ n, i ∈ Ag, and a ∈ A: v_MAF ⊨ stracc_i(a) (resp. wekacc_i(a), wekrej_i(a), strrej_i(a), border_i(a)) iff a is strongly accepted (resp. weakly accepted, weakly rejected, strongly rejected, borderline) by i.
As mentioned, this is a fundamental step to talk about goals of communication, when these involve the justification status of a specific argument (the issue of the debate) that the speaker wants to induce in the hearer.

Epistemic logics for abstract argumentation
As our initial example shows, agents need to form beliefs about the awareness sets of other agents, and these beliefs may be more or less accurate. Agents may also have different capacities to detect whether an argument attacks another. To reason about agents' uncertainty we need to expand our language with epistemic modalities □_i, which stand for "agent i believes that" or sometimes "agent i knows that". For reasons explained below, we do not need to choose between the two readings at this stage.
Definition 7 Formulas of the language L(V^A_Ag, □) are given by the following grammar:

φ ::= p | ¬φ | (φ ∧ φ) | □_i φ, where p ∈ V^A_Ag and i ∈ Ag.

Other Boolean connectives (∨, →, ↔) and constants (⊤, ⊥) are defined as usual, and ♦_i is defined as ¬□_i¬, with the informal meaning "agent i considers it epistemically possible that…". In some axiomatisations, we will make use of the mutual belief (knowledge) modality, defined as □_Ag φ := ⋀_{i∈Ag} □_i φ, which reads "everyone in Ag believes (knows) that φ". Standard Kripke-style semantics, where states are MAFs over a given set A, provides a natural interpretation of this language and allows us to model uncertainty about other agents' information and about the presence of attacks. Uncertainty is captured by the accessibility of different states. Intuitively, each state is an alternative to the actual MAF, based on the same pool of arguments A and the same enumeration of its subsets, but with possibly different objective attacks, and where agents may be aware of different arguments. We name them epistemic argumentative models and define them as follows. 16

Definition 8 (Model) An epistemic argumentative model (EA-model) is a tuple M = (W, R, V), where W ≠ ∅ is a set of states, R assigns to each agent i ∈ Ag an accessibility relation R_i ⊆ W × W, and V assigns to each state a propositional valuation over V^A_Ag. The valuation V should satisfy two additional constraints: enumeration representation (ER) and subset uniformity (SU), explained below. The class of all EA-models is denoted by EA. When no confusion is possible, we simply refer to them as models.
Condition ER guarantees that some state w in the model has an unequivocally associated enumeration of ℘(A), while condition SU guarantees that the enumeration of subsets is constant over the whole model. Taken together, ER and SU guarantee that every state u ∈ W unequivocally represents a MAF. A pointed model is a pair (M, w), where w is a specific world representing the actual state of affairs. A pointed model for a given MAF is just a pointed model ((W, R, V), w) such that V(w) = v_MAF. As for the interpretation of formulas, truth in pointed models is defined recursively as usual:

Definition 9 (Truth) Given an EA-model M = (W, R, V) and a state w ∈ W, define the relation ⊨ as the smallest one satisfying the following clauses: M, w ⊨ p iff p is true under V(w); M, w ⊨ ¬ϕ iff M, w ⊭ ϕ; M, w ⊨ ϕ ∧ ψ iff M, w ⊨ ϕ and M, w ⊨ ψ; and M, w ⊨ □_i ϕ iff M, u ⊨ ϕ for every u such that wR_i u.

Remark 1 (Unawareness of attacks) Note that, according to Definition 8, it is possible to build a model M with a world w ∈ M[W] at which: 1. agent i is not aware of a (i.e. w ∉ V(aw_i(a))), and 2. she considers possible a state u (i.e. wR_i u) at which a ↣ b holds (i.e. u ∈ V(a ↣ b)), capturing uncertainty about the attack relation. Although this could seem a defect of Definition 8, it is not. The key is that, in the intended interpretation of EA-models, once sound and (locally) complete awareness of attacks (SCAA) is assumed, i is simply not aware of the attack a ↣ b (although this attack holds at u as a matter of fact). More formally, recall that, since we are assuming SCAA, we use R_i, defined as R ∩ (A_i × A_i), to denote the attacks that agent i is aware of in a multi-agent AF. The relation R_i can be easily captured in our object language by defining a ↣_i b := a ↣ b ∧ aw_i(a) ∧ aw_i(b), and then a ↣ b may hold at u while a ↣_i b fails. However, we do not need to make this distinction explicit in our object language, since it is already captured in the syntactic definitions of solution concepts/justification status for a given agent (see complete_i, preferred_i, stracc_i, etc. on page 12).
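Read operationally, the recursive truth clauses amount to a small model checker. The following sketch is only meant to make the recursion concrete and is not the paper's formalism: formulas are encoded ad hoc as nested tuples, and all names (holds, acc, val) are ours.

```python
# A minimal sketch of Kripke-style truth evaluation. A model is a triple
# (worlds, acc, val): acc[i] is agent i's accessibility relation (a set of
# pairs), val[w] is the set of propositional variables true at w.
# Formulas: a variable is a string; compound formulas are tuples such as
# ('not', phi), ('and', phi, psi), ('box', i, phi).

def holds(model, w, phi):
    worlds, acc, val = model
    if isinstance(phi, str):                 # propositional variable
        return phi in val[w]
    op = phi[0]
    if op == 'not':
        return not holds(model, w, phi[1])
    if op == 'and':
        return holds(model, w, phi[1]) and holds(model, w, phi[2])
    if op == 'box':                          # box_i phi: true at all accessible worlds
        i, sub = phi[1], phi[2]
        return all(holds(model, u, sub) for (v, u) in acc[i] if v == w)
    raise ValueError(op)

# A two-world toy model where agent 1 cannot tell w0 from w1.
M = ({'w0', 'w1'},
     {1: {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w1')}},
     {'w0': {'p'}, 'w1': {'p', 'q'}})

print(holds(M, 'w0', ('box', 1, 'p')))   # True: p holds at both accessible worlds
print(holds(M, 'w0', ('box', 1, 'q')))   # False: q fails at w0 itself
```

The diamond ♦_i is then derivable as ('not', ('box', i, ('not', phi))), mirroring its syntactic definition.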
General EA-models tell us very little about the constraints on agents' awareness of arguments and attacks. Even if SCAA holds at every point, agents may still be uncertain about attacks if they are not able to distinguish between two points with radically different underlying universal frameworks, as in the following example. Consider the pointed model (M_0, w_0) whose valuation is as indicated by Fig. 2; note that the valuation of attack variables is not uniform. The reader can check the satisfiability of some interesting facts at (M_0, w_0). Informally, agent 1 is not sure about which argument attacks b, but he knows that its justification status is strong rejection. To see this, note that both MAF_{w_0} and MAF_{w_1} have a unique preferred extension, namely {c, d}, and that, in both cases, it attacks b; hence M_0, w_0 ⊨ □_1 strrej_1(b) by applying Definition 4 and Proposition 1.
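The reasoning behind this example can be made concrete with a short sketch. The attack relations of the two worlds below are hypothetical, chosen only to match the description (the worlds disagree on which argument attacks b, yet {c, d} is the unique preferred extension in both), and strongly_rejected encodes our reading of strong rejection (an argument attacked by every preferred extension); the paper's Definition 4 may be stated differently.

```python
from itertools import combinations

def preferred(args, attacks):
    """Naive preferred semantics: subset-maximal admissible sets.
    An admissible set is conflict-free and defends each of its members."""
    def conflict_free(S):
        return not any((x, y) in attacks for x in S for y in S)
    def defends(S, x):
        return all(any((y, z) in attacks for y in S)
                   for z in args if (z, x) in attacks)
    subsets = [frozenset(c) for r in range(len(args) + 1)
               for c in combinations(sorted(args), r)]
    adm = [S for S in subsets
           if conflict_free(S) and all(defends(S, x) for x in S)]
    return [S for S in adm if not any(S < T for T in adm)]

def strongly_rejected(a, args, attacks):
    """Our reading of strong rejection: a is attacked by (a member of)
    every preferred extension."""
    return all(any((y, a) in attacks for y in E)
               for E in preferred(args, attacks))

# Hypothetical attack relations for the two worlds: they differ on which
# argument attacks b, but {c, d} is the unique preferred extension in
# both, so agent 1 knows strrej_1(b) without knowing why.
args = {'b', 'c', 'd'}
worlds = {'w0': {('c', 'b')}, 'w1': {('d', 'b')}}

for w, R in worlds.items():
    print(w, preferred(args, R), strongly_rejected('b', args, R))
```

Since strrej_1(b) holds at every world agent 1 considers possible, □_1 strrej_1(b) holds at w_0, exactly as argued above.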
EA-models can then be seen as minimal semantic devices for joint reasoning about argumentation and epistemic attitudes. We qualify them as minimal because they capture no assumption about the reasoning/awareness introspection capabilities of the formalised agents. We shall mostly focus on particular subclasses of EA which incrementally combine additional constraints.

Definition 10 (Properties of models)
We say that M satisfies the conditions AU, PIAw, NIAw and GNIAw as follows. Condition AU amounts to assuming that attacks are the same throughout all the states, and therefore that SCAA is common knowledge (belief). PIAw and NIAw are adapted versions of the introspective properties for general awareness (Fagin and Halpern 1987). Condition PIAw dictates that if one is aware of a specific argument, then he cannot consider it possible that he is not. Conversely, NIAw amounts to saying that if one is not aware of a specific argument, then he cannot think it possible that he is. They are respectively captured by the axioms aw_i(a) → □_i aw_i(a) and ¬aw_i(a) → □_i ¬aw_i(a). GNIAw is a stronger constraint, saying that if one is not aware of a specific argument, then he cannot think it possible that other agents are; therefore NIAw is just a special case of GNIAw. GNIAw is captured by the axiom ¬aw_i(a) → □_i ¬aw_j(a) or, maybe more intuitively, by its contrapositive ♦_i aw_j(a) → aw_i(a).

We denote by AoA (awareness of arguments) the class of all EA-models satisfying AU, PIAw and GNIAw, and refer to its elements as AoA-models. Clearly, the model in Fig. 2 is not an AoA-model. However, the class AoA is general enough to subsume scenarios like our Example 1. Figure 3 represents a pointed AoA-model (M_1, w_0) capturing the relevant epistemic features of Example 1. We assume that (M_1, w_0) is an AoA-model for the MAF of Fig. 1, that is, V(w_0) = v_MAF. Again, we assume some enumeration E of the set ℘(A) to be given and that the valuation of M_1 represents that enumeration. Condition AU in the definition of AoA-models allows dispensing with the graphical representation of the valuation of attack variables (as far as we keep in mind what the underlying universal framework is), since attack variables are uniform throughout the model. In the case of model M_1, depicted in Fig. 3, we assume that V matches the structure of the UAF of Fig. 1. Following Schwarzentruber et al. (2012), we also represent the valuation of awareness variables in a compact way, labelling each world with the arguments each agent is aware of there.

Fig. 3
A pointed AoA-model (M_1, w_0) representing the agents' uncertainty in Example 1. The actual world w_0 is within a double-line frame.

In this model, both agents agree on the justification status of a. However, this agreement is based on different reasons: agent 1's strong rejection is based on full awareness of the universal framework and is therefore not defeasible, while agent 2's rejection is based on partial awareness and is defeasible by new information.
We have not discussed specific properties of R_i so far, since we want to provide a comprehensive approach, taking into account both knowledge and belief. Moreover, there is no universal agreement about the properties of either notion.19 We do not intend to take a stand in this debate, and we are content to show that the different constraints on R_i do not pose any technical problem for completeness. Accordingly, given a class of models C, we denote by S4(C) (resp. S5(C), KD45(C)) the subclass of C where every R_i is a preorder (resp. an equivalence relation; a serial, transitive and euclidean relation).
We now provide sound and strongly complete axiomatisations for the relevant classes of models. Let us first define the corresponding proof systems:

Definition 11 (Proof systems)
-EA is the proof system containing all instances of Taut, K, PIS (positive introspection of subsets), NIS (negative introspection of subsets), ER (enumeration representation) and both inference rules from Table 1.20 S4(EA) (resp. S5(EA), KD45(EA)) extends EA with axioms T and 4 (resp. T, 4 and 5; D, 4 and 5) from Table 1.
-AoA (Awareness of Arguments) is the system extending EA with PIAt (positive introspection of attacks), NIAt (negative introspection of attacks), PIAw (positive introspection of awareness) and GNIAw (generalized negative introspection of awareness).

19 Concerning knowledge, computer scientists usually model it as an equivalence relation (Fagin et al. 2004; Meyer and van der Hoek 1995), which informally corresponds to factive, fully introspective knowledge. On the other hand, since Hintikka (1962), philosophers have argued against some counter-intuitive consequences of assuming euclideanness, which informally corresponds to negative introspection; see Stalnaker (2006) for a more detailed discussion. For belief, the situation is less controversial: it is mostly agreed that serial, transitive and euclidean relations capture the relevant features of belief, which informally correspond to consistent and fully introspective beliefs. Nonetheless, consistency of (rational) beliefs has been questioned in some cases, as e.g. in Parikh (2008, §2).
20 PIS and NIS are implied by SU. Although they capture the weaker condition that subsets are uniform along the agents' indistinguishability relations, they are together sufficient for proving completeness.
Let L be any of the proof systems defined above; we denote by C_L the corresponding class of models according to Table 2. For instance, C_S4(AoA) = S4(AoA).
Theorem 1 Let L be any of the proof systems defined above; then L is sound and strongly complete w.r.t. its corresponding class of models C_L.
Although the details of the proof are left for the "Appendix", some remarks are in order. Soundness results are straightforward by induction on the length of derivations, given that all axioms are valid and that rules preserve validity (in their corresponding class of models). Strong completeness will be proved using the canonical model technique. Note however that the canonical model for EA is not an EA-model-hence this problem is inherited by every system extending EA. More concretely, SU is violated by the canonical model for EA. This inconvenience is circumvented by taking its generated submodels, which, thanks to the constraints encoded by PIS and NIS, turn out to be EA-models (see Theorem 7.3. of Blackburn et al. 2002 for a similar proof). Furthermore, truth is preserved under generated submodels for our language and semantics, just as in the general modal case (Blackburn et al. 2002, Prop. 2.6.), even if we are not working with normal modal logics in the sense of Blackburn et al. (2002), because the rule of uniform substitution is not sound here.

Epistemic and argumentative dynamics
Standard approaches to the dynamics of AFs focus almost exclusively on changes generated by the addition and deletion of arguments and/or attacks, leaving epistemic updates aside (Doutre and Mailly 2018). Here, we present a framework where both dynamics (epistemic and argumentative) are encompassed. Moreover, this framework allows reasoning about different communication moves and complex information updates. For presentational purposes, we focus on completeness results for dynamic extensions of EA, AoA, S4(AoA), KD45(AoA) and, semantically, on transformations of the corresponding classes of models. Completeness proofs and conceptual considerations concerning the dynamic extensions of other systems can be easily extrapolated and are therefore not discussed. The main technical idea of our dynamic approach is to use event models (Baltag and Moss 2004) enriched with propositional assignments or substitutions (van Benthem et al. 2006; van Ditmarsch and Kooi 2008) to capture both kinds of dynamics.22 A key notion for defining these models is that of propositional substitution.
Definition 12 (Substitutions) A propositional EA-substitution (or an EA-substitution, for short) is a function σ from propositional variables to formulas such that: (i) for every p ∈ B it holds that σ(p) = p (i.e. subset variables are not substituted); and (ii) for every p ∈ AT ∪ O, either σ(p) = p or σ(p) = ⊤ or σ(p) = ⊥.23 We use SUB_EA to denote the set of all EA-substitutions, and λ to denote the identity substitution. Moreover, an AoA-substitution is an EA-substitution s.t.: (iii) for every p ∈ AT it holds that σ(p) = p (persistence of attacks). We use SUB_AoA to denote the set of all AoA-substitutions.
Intuitively, condition (i) ensures that the enumeration is kept fixed under update.24 In the general case of EA-substitutions, condition (ii) allows modifying both awareness and attack variables. The modification of awareness variables corresponds to the addition or deletion of arguments from the agents' awareness sets. Modification of attack variables is of interest in order to contextualize other formalisms we deal with in Sect. 8. Since modification of attacks is not relevant for our main focus, we will mostly deal with AoA-substitutions, where it is forbidden by condition (iii). We can also represent EA-substitutions (resp. AoA-substitutions) as maps of the form {p_1 ↦ *_1, …, p_n ↦ *_n}, where each *_k is the value substituted for p_k and, for all 0 ≤ k, m ≤ n, k ≠ m implies p_k ≠ p_m. With this notion at hand, we define event models as follows:

Definition 13 (Event model) An EA-event model (or an event model, for short) for a given language L(V^A_Ag, □) is a tuple E = (S, T, pre, pos), where S ≠ ∅ is a finite set of events; T: Ag → ℘(S × S) assigns to each agent i an indistinguishability relation T_i between events (intended to represent the uncertainty of agent i about which changes are happening); pre: S → L(V^A_Ag, □) is a function assigning a precondition to each event; and pos: S → SUB_EA assigns a substitution to each event, indicating its effect on awareness and attacks.

22 These structures are sometimes called event models with factual change [e.g. in van Benthem et al. (2006), van Ditmarsch and Kooi (2008)], but this term can be misleading in the current context, where the informal meaning of some propositional variables is agent-related.
23 As shown by van Ditmarsch and Kooi (2008), this is equivalent to using more general substitutions.
24 This will be relevant for complete axiomatisations.

Given an event model E = (S, T, pre, pos), we denote by E[S] its set of events S.
We denote by ea the class of all EA-event models. The next definition explains how EA-models and event models interact through action execution.
Informally, product update is meant to provide a new EA-model where the possible states are pairs (w, s) such that the precondition of s holds at w, accessibility holds between pairs iff it holds coordinatewise, and the valuation of variables is updated according to the substitution labelling s as its postcondition.
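The informal description above can be sketched as a product update with postconditions. All names here are ours, and we simplify by evaluating preconditions directly on propositional valuations (sets of true variables) rather than on arbitrary epistemic formulas:

```python
def product_update(worlds, acc, val, events, eacc, pre, pos):
    """States of the updated model are pairs (w, s) with val[w] satisfying
    pre(s); accessibility holds coordinatewise; pos(s) maps some variables
    to True/False, and the rest keep their old truth value."""
    new_worlds = {(w, s) for w in worlds for s in events if pre(s)(val[w])}
    new_acc = {i: {((w, s), (u, t))
                   for (w, s) in new_worlds for (u, t) in new_worlds
                   if (w, u) in acc[i] and (s, t) in eacc[i]}
               for i in acc}
    new_val = {}
    for (w, s) in new_worlds:
        v = set(val[w])
        for p, b in pos(s).items():          # apply the substitution
            (v.add if b else v.discard)(p)
        new_val[(w, s)] = frozenset(v)
    return new_worlds, new_acc, new_val

# Two-world static model where agent 2 may or may not be aware of a.
worlds = {'w0', 'w1'}
acc = {1: {('w0', 'w0'), ('w1', 'w1')},
       2: {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w0'), ('w1', 'w1')}}
val = {'w0': frozenset({'aw1_a'}), 'w1': frozenset({'aw1_a', 'aw2_a'})}

# A single public event with trivial precondition whose postcondition
# makes everyone aware of a (a Pub_a-style action).
W2, A2, V2 = product_update(worlds, acc, val,
                            {'e'}, {1: {('e', 'e')}, 2: {('e', 'e')}},
                            pre=lambda s: (lambda v: True),
                            pos=lambda s: {'aw1_a': True, 'aw2_a': True})
print(sorted(sorted(v) for v in V2.values()))  # aw2_a now holds everywhere
```

After the update, the valuation of every surviving pair (w, s) has been rewritten by the substitution attached to s, while accessibility is computed coordinatewise, exactly as in the informal gloss.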
We use the symbols •, ◦ to name events. Let us now look at two examples of event models.
Example 5 (Public addition of an argument) We define the event model for publicly adding an argument a as Pub_a := (S, T, pre, pos), a single-event model where the event is indistinguishable from itself for every agent, its precondition is ⊤, and its postcondition sets aw_i(a) to ⊤ for every i ∈ Ag. Pub_a is graphically represented in the left-hand side of Fig. 4, for the special case where Ag = {1, 2}.
Example 6 (Private addition of an argument) We define the event model for i privately adding an argument a as Pri^a_i := (S, T, pre, pos), where the definition of T captures the intuition of a completely private learning action for i, meaning that, after the execution of (Pri^a_i, •), everyone (except i) believes that nothing has happened. Pri^a_1 is pictorially represented in the right-hand side of Fig. 4 for the special case Ag = {1, 2}.
Both event models represent the same (well-studied) action of adding an argument to an argumentation framework (Cayrol et al. 2010), but DEL modelling allows us to account for the distinction between public and private communication, thus adding a relevant epistemic dimension.25 As an example of the execution of a product update, Fig. 5 illustrates the operation M ⊗ Pub_d, which we discuss in Example 7.
More in general, given a set of arguments B, the public addition of the whole set is captured by the action Pub_B, which only modifies the definition of Pub_a in its postcondition, setting aw_i(b) to ⊤ for every i ∈ Ag and b ∈ B. Analogously, the private addition of B by i is Pri^B_i and works as in Fig. 4.

The effects of updating EA-models with actions are described by the following dynamic languages. Let V^A_Ag be a set of propositional variables, and let C ⊆ ea be a class of event models for L(V^A_Ag, □). The formulas of the language L_C(V^A_Ag, □) (or simply L_C when the context is clear) are given by extending the grammar of Definition 7 with a clause [E, s]ϕ, where (E, s) is a pointed event model with E ∈ C, and [E, s]ϕ reads: "after executing (E, s), ϕ holds".26 We extend the truth relation to the new kind of formulas as follows: M, w ⊨ [E, s]ϕ iff M, w ⊨ pre(s) implies M ⊗ E, (w, s) ⊨ ϕ.

The flexibility of event models is well known. In the current epistemic-argumentative reading, they can be used to model the effects of acts of information exchange. As mentioned, these acts have two sides: (i) how the hearer decides to update her knowledge base with new information (information update) and (ii) what argument(s) the speaker decides to communicate in order to fulfill his goal (communication moves). In order to persuade their interlocutors, smart players choose (ii) based on their expectations about (i). Let us now illustrate this through the simplest combination of (i) and (ii) of Example 1. A deeper analysis is left for Sect. 7.

Example 7
We assume that (i) is as follows: Mom behaves credulously. This means that whenever Charlie communicates an argument, she simply adds it to her knowledge base. This is modelled through the event model for the public addition of an argument (left-hand side of Fig. 4). As for (ii), we assume that Charlie thinks that Mom is indeed behaving credulously. Recall that Charlie has three options: communicating c, communicating d or communicating {c, d}. Hence, his way of selecting the best set of arguments to communicate consists in reasoning about the effects of all options. Note that, although one of the three moves will not work (M_1, w_0 ⊨ [Pub_d, •] strrej_2(a); see Fig. 5), the other two options do achieve Charlie's goal.

Remark 3
Here, communication of an argument x to everybody is modelled by the operation of public addition and not, as is common in DEL, as the public announcement of the formula aw_i(x) (where i is the speaker) or of ⋀_{i∈Ag} aw_i(x).27 The usual event model for the public announcement of a formula ϕ is based on the same single-event structure of Fig. 4 (left), but with ϕ (instead of ⊤) as precondition and with no postconditions. If we did so, agents could never learn arguments whose (collective) awareness is not considered as a doxastic possibility before communication takes place; but this fails to capture what actually happens in most real-life debates. For instance, we could not model the communication of d by Charlie as the public announcement of aw_1(d).

26 Note that each dynamic language is parametrised not only by Ag and A but also by the class of event models. This is useful when defining action-friendly logics, in the sense of Baltag and Renne (2016, Sect. 3.3). Moreover, we fixed the range of pre in Definition 13, so that our event models always have static formulas as preconditions. This is by no means an essential limitation of the current framework, but rather a simplifying assumption for presentational purposes. The interested reader is referred to van Ditmarsch et al. (2007, Sect. 6).

We give axiomatisations and prove completeness for the dynamic extensions of EA, AoA, S4(AoA) and KD45(AoA).28 For this we use reduction axioms and an inside-out reduction (as described e.g. in Wang and Cao 2013). That is to say, we do not use axioms for event model composition, but we show how to eliminate all dynamic operators starting from their innermost occurrences. To do so, we need to prove that the rule of substitution of proven equivalents is sound w.r.t. all the systems considered. From a semantic perspective, this implies showing that the class of models we are working with is closed under product update. It is easy to show that this is in general not the case for AoA, S4(AoA) and S5(AoA).
One of the possible solutions to this shortcoming is to restrict the class of "allowed" event models, so as to ensure that we remain in the targeted class after the execution of the product update.
The general case of updating EA-models with EA-event models does not present any problem. Indeed, E_k-variables do not change their truth values, by constraint (i) of Definition 12, and this guarantees that subset uniformity (SU) and enumeration representation (ER) are trivially preserved. Updating an AoA-model with an event model that only uses AoA-substitutions further guarantees that attack uniformity (AU) is preserved, by constraint (iii) of Definition 12, since attack variables are also left untouched. The problem for updating AoA-models lies in the awareness constraints PIAw and GNIAw. We can, however, provide a set of sufficient conditions for their preservation. For this we need some additional notation: let pos+_i(s) denote the set of arguments whose awareness variables for agent i are set to ⊤ by pos(s), and pos−_i(s) the set of those set to ⊥. The conditions are:

EM1: for every i ∈ Ag, if s T_i t then pos+_i(s) ⊆ pos+_i(t) and pos−_i(t) ⊆ pos−_i(s);
EM2: for all i, j ∈ Ag, if s T_i t then pos−_i(s) ⊆ pos−_j(t) and pos+_j(t) ⊆ pos+_i(s).

Let us explain these conditions informally. In an event model satisfying EM1, if we suppose that s is the event that actually happens, then EM1 implies that any event t that agent i cannot tell from s is one where he gains at least the same new arguments and does not lose any argument he actually keeps. It is intuitive to see that EM1 preserves PIAw. Indeed, suppose that i is aware of a after the execution of s (antecedent aw_i(a) of PIAw). Two things are possible. Either a is a newly acquired argument (by the execution of s). Then, since any state accessible after the update is "filtered" by some indistinguishable event t, the condition pos+_i(s) ⊆ pos+_i(t) forces a to be acquired at that state too, and therefore the consequent □_i aw_i(a) is satisfied. Or else, i was already aware of a before the execution of s, and therefore he has not lost it. Here pos−_i(t) ⊆ pos−_i(s) guarantees that a is not lost at any state accessible after the execution of the event. An analogous informal reading, generalized to other agents, can be given for EM2: at any indistinguishable event, any other agent loses at least the same arguments as i and gains no more.
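Under our reading of the informal glosses, EM1 and EM2 can be stated set-theoretically and checked mechanically. In the sketch below, the names and the exact shape of the conditions are our reconstruction, not the paper's official definitions: pos_plus[i][s] and pos_minus[i][s] collect the arguments whose awareness variables for agent i are set to true, respectively false, by pos(s).

```python
def satisfies_em1(agents, eacc, pos_plus, pos_minus):
    """s T_i t implies pos+_i(s) <= pos+_i(t) and pos-_i(t) <= pos-_i(s):
    at indistinguishable events, i gains at least as much and loses no more."""
    return all(pos_plus[i][s] <= pos_plus[i][t] and
               pos_minus[i][t] <= pos_minus[i][s]
               for i in agents for (s, t) in eacc[i])

def satisfies_em2(agents, eacc, pos_plus, pos_minus):
    """s T_i t implies, for every agent j, pos-_i(s) <= pos-_j(t) and
    pos+_j(t) <= pos+_i(s): any other agent loses at least as much as i
    and gains no more."""
    return all(pos_minus[i][s] <= pos_minus[j][t] and
               pos_plus[j][t] <= pos_plus[i][s]
               for i in agents for (s, t) in eacc[i] for j in agents)

# A Pub_a-style model: a single public event adding a to everyone's
# awareness set; it satisfies both conditions.
agents = [1, 2]
eacc = {1: {('e', 'e')}, 2: {('e', 'e')}}
pp = {1: {'e': {'a'}}, 2: {'e': {'a'}}}
pm = {1: {'e': set()}, 2: {'e': set()}}
print(satisfies_em1(agents, eacc, pp, pm),
      satisfies_em2(agents, eacc, pp, pm))  # True True
```

Consistently with Remark 4, a Pri^a_i-style private addition (an extra "nothing happens" event that only the other agents consider possible) also passes both checks under this reading.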
By the same pattern as before, one can see that EM2 preserves GNIAw (see Lemma 1 for a detailed proof).
Let us now define some relevant classes of event models: Definition 16 (Classes of event models) We denote by em12, emS4 and pure the following classes of event models: -em12 is the class of event models satisfying EM1, EM2 and assigning AoA-substitutions (see Definition 12) to all their events. In other words, E = (S, T, pre, pos) ∈ em12 iff E satisfies EM1, EM2 and pos: S → SUB_AoA. -emS4 is the subclass of em12 where every T_i is a preorder.
-pure is the subclass of em12 s.t. pre(s) = ⊤ for every s ∈ E[S] and every T_i of E is serial, transitive and euclidean.29

Remark 4
Note that both Pub a and Pri a i (see Examples 5,6 and Fig. 4) are purely argumentative event models (i.e. they belong to pure) and, a fortiori, they also belong to em12.
We can then prove the following result.

Lemma 1 Let M = (W, R, V) be an EA-model and let E = (S, T, pre, pos) be an event model. Then: (i) if M ∈ EA, then M ⊗ E ∈ EA; (ii) if M ∈ AoA and E ∈ em12, then M ⊗ E ∈ AoA; (iii) if M ∈ S4(AoA) and E ∈ emS4, then M ⊗ E ∈ S4(AoA); and (iv) if M ∈ KD45(AoA) and E ∈ pure, then M ⊗ E ∈ KD45(AoA).

Remark 5 (The general value of EM1 and EM2) If we dispense with the argumentation/awareness interpretation of the current formalism, Lemma 1(ii) tells us that we can look at EM1 and EM2 as general, sufficient conditions that guarantee the preservation of certain constraints over propositional valuations after product update. Therefore, they can be reused in any framework including event models and indexed operators (awareness operators, in our case) ranging over atomic entities (arguments, in our case). As suggested before, PIAw and GNIAw characterize a de re reading of operators ranging over atomic entities. Therefore, EM1 and EM2 are structural event constraints that, taken together, work as a sufficient condition to preserve these de re operators.
General completeness results follow from Lemma 2. Let us first define the targeted axiom systems:

Definition 17 (Dynamic axiom systems)
-EA_ea extends EA with all axiom schemes and rules of Table 3 that can be written in L_ea (see Definitions 15 and 16). -AoA_em12 extends AoA with all axiom schemes and rules of Table 3 that can be written in L_em12. -S4(AoA)_emS4 extends S4(AoA) with all axiom schemes and rules of Table 3 that can be written in L_emS4. -KD45(AoA)_pure extends KD45(AoA) with all axiom schemes and rules of Table 3 that can be written in L_pure.
Let us remark that in the case of KD45(AoA) our completeness result is restricted to event models belonging to pure. Although pure is a rather simple class of event models, all the actions used in our analysis of Example 7, as well as the one that will be used in the next section, fall into it. Unfortunately, modelling certain complex scenarios requires mixing purely argumentative actions with other types, for instance public and private announcements of formulas, where preconditions are not trivial. One last axiomatisation, inspired by the works of Balbiani et al. (2012) and Aucher (2008), aims to fill this gap. The interested reader can find it in "Appendix A4" (Theorem 3). The axiomatisation is based on a modal language with a global modality. Interestingly, this more expressive language also allows us to provide necessary and sufficient conditions for the preservation of PIAw and GNIAw under product update.

Modelling persuasion, sceptic updates and conditional trust
In Example 7 of the previous section we unfolded the dynamics of our running example by assuming that Mom was open to accept whatever Charlie says at face value. In a more likely scenario this does not happen: Mom will filter the information received from Charlie, precisely because she does not trust him in such circumstances. Yet Charlie would still be confident, as kids often are, that he is fooling her. Although Mom is not immediately aware of the counter-argument against the Pscience publication, she can obtain it after a quick (private) search on Pscience's website. It is important to stress that Mom does not discard argument c; she rather accepts it, but eventually finds out the counter-argument f. This is a rather common mechanism of epistemic vigilance, of the kind we mentioned in Sect. 2. One possible way of capturing this epistemic action in our framework is what we call a sceptic update Scp^x_j, where the recipient j of an argument x privately and non-deterministically learns an attacker of x (if any). When our language contains Pub and Pri, it is possible to define a modality [Scp^x_j]ϕ, expressing that ϕ holds after j performs a sceptic update upon receiving argument x.30 As an example, the bottom part of Fig. 6 represents the outcome of Mom's sceptic update as the result of two consecutive actions, a public addition of c followed by Mom privately learning f, on the initial AoA-model M_1 (Fig. 6, top part). In our model, Charlie thinks that he has succeeded, M_1, w_0 ⊨ [Scp^c_2] □_1 stracc_2(a), while actually he has not, M_1, w_0 ⊨ [Scp^c_2] ¬stracc_2(a), and, moreover, agent 2 (Mom) believes all this, M_1, w_0 ⊨ [Scp^c_2] □_2 (□_1 stracc_2(a) ∧ ¬stracc_2(a)).

30 From a semantic perspective, Scp^x_j is an action encoded as the multi-pointed event model ⋃_{y : y ↣ x} ((Pub_x, •); (Pri^y_j, •)), where ∪ and ; respectively stand for non-deterministic choice and sequential composition.
See van Ditmarsch and Kooi (2008) for the precise meaning of both operators. We now have two scenarios with substantially different outcomes. The question is how closely they reflect the typical behaviour of players in a more or less adversarial exchange. In both cases, we assumed that Charlie is confident that Mom will accept everything he says without further inquiry. Is Charlie the prototype of a skilled debater? Clearly not: he still lacks some mindreading and the subtlety of anticipating easy counter-objections, skills that kids typically learn to use at an advanced stage of their cognitive development, and after a lot of trial and error. We further assumed that Mom has full trust in the first scenario and absolute distrust in the second. Distrust is driven by epistemic vigilance, but circumstances are not always black or white. After all, there are cases in which she has the right, or even the educational duty, to trust her kid.
We want to put our finger on the fact that trust is most of the time mixed, and it is such in a relevant sense. Not only does it vary with the source of information (Mom may trust Charlie and not Dad) or with the type of information we get from the source (she may trust Charlie more or less depending on the matter at stake); trust is often also conditional on the epistemic circumstances we find ourselves in, all other things being equal. In order to see this clearly, we introduce a different example, which we borrow from Kagemusha, a famous film directed by Akira Kurosawa.
Example 8 (Kagemusha) The warlord of the clan Takeda has been killed, unbeknownst to everybody except the members of his clan and his political decoy (kagemusha). It is vital that the warlord's death stay secret and that his double keep playing his role. Therefore, everybody outside the clan must be persuaded that a: "the warlord is alive". The warlord's funeral is then performed anonymously and in a peculiar way: a jar with the ashes is launched into lake Suwa on a raft. Unfortunately, spies from rival clans are around and, by snooping on this strange ritual, they start suspecting the truth; that is, they are provided with an evidential argument b that rebuts a. Now, by accident, the spies are spied upon, in turn, by the kagemusha, who reports this to the rest of the clan. The clan then decides to cook up an alternative (false) explanation of the ritual, an offering of sake to the god of the lake, and to tell it around. This alternative explanation c undercuts b and reinstates a. This has the effect of persuading the spies that they were wrong: the ritual was indeed not a funeral and the warlord is still alive.
As things stand, argument c is de facto undermined by a decisive argument d, to the effect that c does not hold water. But the spies have no access to d, and the clan's strategy has the effect of persuading them that a is reinstated and therefore acceptable. The following MAF captures the situation right after the spies have observed the funeral, where agent 1 represents Takeda's clan and agent 2 represents the spies. What is clear from the story is that the spies would never have accepted the fake explanation c at face value had they only suspected that the clan was aware of being spied upon. Instead, they would have easily resorted to d by performing a sceptic update. The only difference between success and failure lies in the initial epistemic state of the agents involved, as the modelling in Fig. 7 shows. In the first scenario (captured in model M_2), agent 2 believes that agent 1 believes that her goal is already achieved (2 is only aware of a at w_2), i.e. M_2, w_0 ⊨ □_2 □_1 stracc_2(a), while in the second case (captured in model M_2′), agent 2 believes that agent 1 believes that it is not (2 is aware of a and b at w_2), i.e. M_2′, w_0 ⊨ □_2 □_1 ¬stracc_2(a). What is also clear is that the different attitude displayed by the spies in the alternative situations depends neither on the source, which is the same, nor on the subject matter, again the same. It is fair to say that the trust they put in the information received is part of one and the same conditional plan for updating information. They sceptically process the information received if they believe that the clan believes that its goal is not achieved yet but will be after communicating c; otherwise they uncritically accept it. This condition can be generally defined as □_j (□_i ¬goal_i ∧ □_i [Pub_x] goal_i), where i is the speaker, goal_i ∈ L(V^A_Ag, □) is his goal, j is the hearer, and x is the communicated argument. The clan, on its side, is well aware of this: they know they can get away with a fake only because, given the circumstances, epistemic vigilance is defused.
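A quick way to check the story's argumentative core is to encode the (hypothetical) chain of attacks b ↣ a, c ↣ b, d ↣ c and compute acceptance with and without d. For brevity, this sketch uses the grounded extension rather than the paper's preferred-based statuses; on acyclic frameworks like this one, the grounded extension coincides with the unique preferred extension.

```python
def grounded(args, attacks):
    """Grounded extension as the least fixed point of the characteristic
    function: iterate 'the set of arguments all of whose attackers are
    counter-attacked by the current set', starting from the empty set."""
    E = set()
    while True:
        new = {x for x in args
               if all(any((y, z) in attacks for y in E)
                      for z in args if (z, x) in attacks)}
        if new == E:
            return E
        E = new

universal = {'a', 'b', 'c', 'd'}
attacks = {('b', 'a'), ('c', 'b'), ('d', 'c')}   # hypothetical encoding

# Spies aware only of {a, b, c}: c defeats b, so a is reinstated.
aware = {'a', 'b', 'c'}
r = {(x, y) for (x, y) in attacks if x in aware and y in aware}
print(grounded(aware, r))             # {'a', 'c'}: a is accepted

# Full framework: d undermines c, b is reinstated, and a is out again.
print(grounded(universal, attacks))   # {'b', 'd'}
```

This mirrors the story: the spies' acceptance of a stands or falls with their (un)awareness of d, which is exactly what the sceptic update would have repaired.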
It is possible to reason about the effects of conditional plans like the above in our language by defining more complex modalities. For instance, the effects of this kind of strategic update are captured by a modality [Str^x_j]ϕ, which behaves as [Scp^x_j]ϕ when the above trust condition holds and as [Pub_x]ϕ otherwise, where i is the speaker, j is the hearer, and x is the communicated argument. Note that M_2, w_0 ⊨ [Str^a_2] stracc_2(a) but M_2′, w_0 ⊨ ¬[Str^a_2] stracc_2(a). This kind of operation has received attention in semantically oriented belief revision (see e.g. Rodenhäuser 2014, §2.6.1, and the definition of mixed doxastic attitude). A thorough analysis of the subtleties of strategic communication seems to require powerful analytical tools akin to those currently developed in the area of epistemic planning (Andersen et al. 2012). This investigation goes beyond the scope of our paper and we leave it for future research.

Relation to other formalisms
Recently, uncertainty about AFs has been modelled through quantitative methods (Li et al. 2011) and qualitative ones within the formal argumentation community. Among the qualitative approaches, the use of incomplete argumentation frameworks (Coste-Marquis et al. 2007; Baumeister et al. 2018a, b) and control argumentation frameworks (Dimopoulos et al. 2018) has been prominent. Also, opponent modelling in strategic argumentation (Oren and Norman 2009) has been endowed with higher-order uncertainty about adversaries. Our logic can be naturally connected to these three lines of research.

Incomplete AFs
General models of incompleteness in abstract argumentation (Baumeister et al. 2018b) capture uncertainty by extending standard AFs with uncertain arguments A? and uncertain attacks R?. Their formal definition, following Baumeister et al. (2018b), is as follows.

Definition 18 (Incomplete AF and completions) An incomplete argumentation framework is a tuple IAF = (A, A?, R, R?), where A and A? are disjoint finite sets of (definite and uncertain) arguments and R, R? ⊆ (A ∪ A?) × (A ∪ A?) are disjoint sets of (definite and uncertain) attacks. A completion of IAF is any pair (A*, R*) s.t. A ⊆ A* ⊆ A ∪ A? and R ∩ (A* × A*) ⊆ R* ⊆ (R ∪ R?) ∩ (A* × A*).

Completions can be seen as possible ways of removing uncertainty by making some arguments and attacks definite. Here, the constraint on R* entails that definite attacks between a and b must be present in all completions where both a and b are present. Classic computational problems for AFs, such as sceptical or credulous acceptance, are easily generalized to incomplete AFs. As an example, consider two generalizations of the classic preferred reasoning tasks, as given in Baumeister et al. (2018a): Pr-Possible-Sceptical-Acceptance (Pr-PSA)

Given:
An incomplete argumentation framework IAF = (A, A ? , R, R ? ) and an argument a ∈ A Question: Is it true that there is a completion F * = (A * , R * ) of (A, A ? , R, R ? ) s.t. for all E ∈ Pr(F * ), a ∈ E?

Pr-Necessary-Credulous-Acceptance (Pr-NCA)

Given:
An incomplete argumentation framework IAF = (A, A?, R, R?) and an argument a ∈ A. Question: Is it true that for each completion F* = (A*, R*) of (A, A?, R, R?), there is an E ∈ Pr(F*) s.t. a ∈ E?
The Pr-Necessary-Sceptical-Acceptance and Pr-Possible-Credulous-Acceptance problems are obtained by swapping the quantifiers in the definitions above in the obvious way. Similarly, Pr can be replaced by any other solution concept. It is not difficult to show that the set of completions of an IAF is a single-agent S5(EA)-model in disguise, where A ∪ A? is the underlying pool of arguments. Consequently, the above computational problems can be regarded as model-checking problems in our framework. Let us make this claim more precise.
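The interplay between acceptance problems and completions can be made concrete with a small brute-force sketch in Python (our own illustration; all function names are hypothetical and the encoding is exponential, in line with the complexity of these tasks). It enumerates the completions of an incomplete AF, computes preferred extensions as maximal admissible sets, and decides Pr-PSA:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def completions(A, A_unc, R, R_unc):
    """Enumerate all completions (A*, R*): definite arguments and attacks are
    always present (when their endpoints are), uncertain ones may vary."""
    for extra_args in powerset(A_unc):
        A_star = set(A) | extra_args
        base = {e for e in R if e[0] in A_star and e[1] in A_star}
        optional = [e for e in R_unc if e[0] in A_star and e[1] in A_star]
        for extra_atts in powerset(optional):
            yield A_star, base | extra_atts

def preferred(A, R):
    """Preferred extensions: subset-maximal admissible sets of (A, R)."""
    def conflict_free(E):
        return not any((x, y) in R for x in E for y in E)
    def defends(E, x):
        return all(any((z, y) in R for z in E) for y in A if (y, x) in R)
    adm = [E for E in powerset(A)
           if conflict_free(E) and all(defends(E, x) for x in E)]
    return [E for E in adm if not any(E < F for F in adm)]

def pr_psa(A, A_unc, R, R_unc, a):
    """Pr-PSA: is there a completion where a is in every preferred extension?"""
    return any(all(a in E for E in preferred(As, Rs))
               for As, Rs in completions(A, A_unc, R, R_unc))
```

For instance, if a is attacked only by an uncertain argument b, the completion that drops b makes a sceptically accepted, so Pr-PSA answers yes.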

From incomplete AFs to EA-models
Given an incomplete argumentation framework IAF = (A, A?, R, R?), we can build a single-agent EA-model to reason about IAF using our object language. First, we fix some enumeration of ℘(A ∪ A?) = {E_1, . . . , E_n}. Then, we define the set of propositional variables associated to IAF as V_IAF = V^{A∪A?}_{1}. Since we have only one agent, we remove subindices from awareness and epistemic operators. We can then provide the following definition: V_IAF is defined for each kind of variable as follows: The above reduction allows us to obtain the following result. We have that: -The answer to Pr-PSA with input IAF and a ∈ A is yes iff M_IAF, w ⊨ ♦stracc(a). 33 -The answer to Pr-NCA with input IAF and a ∈ A is yes iff the corresponding formula is true at (M_IAF, w). Proof See "Appendix A5".
In other words, the main reasoning problems about incomplete AFs can be reduced to model-checking problems in our framework. 34

From EA-models to incomplete AFs
In the opposite direction, we can easily transform members of a specific class of EA-models into incomplete AFs, with a sound and systematic way to associate states with completions. This is provided by the following definition.
where C is any finite, non-empty set of arguments, 32 and such that V̄ represents the enumeration ℘(C) = {E_1, . . . , E_n}. We define the incomplete argumentation framework associated to M as the tuple IAF_M. 36 Under this restriction, we can prove the following correspondence result, analogous to Proposition 2.

Proposition 3 Let M be a total S5(EA)-model for V.

Remark 6 (AF spaces) Interestingly, if we drop the exhaustive valuation requirement, we obtain a one-to-one association from total S5(EA)-models to a more general class of structures, which we call AF spaces and which are worthy of interest. An AF space is a pair (IAF, X), where X is any set of completions of IAF. Incomplete AFs can be seen as a special case of AF spaces (those for which X is maximal w.r.t. set inclusion). The converse, however, does not hold. As an example, consider the set of completions associated to the worlds of the S5(EA)-model depicted in Fig. 2. It is easy to show that there is no IAF with such a set of completions. We can obviously redefine the main acceptance problems for AF spaces. As an example, the following is the variant of Pr-PSA:

32 See Definition 18 for the notion of completion. 33 See p. 12 for the meaning of stracc. 34 The above result is formulated just for two reasoning problems and preferred semantics for the sake of brevity, but it can easily be generalised to all other acceptance problems concerning any solution concept. 35 According to Definition 20, it is possible that A_M ∪ A?_M = ∅, but this is also not excluded by Baumeister et al. (2018b, Definition 22) or Baumeister et al. (2018a, Definition 4). If we require A_M ≠ ∅, as done by Baumeister et al. (2018b, Definition 4), then the underlying EA-model must satisfy V(aw(x)) = W for some x ∈ C. 36 As the familiar reader may notice, there is a tight connection between this restriction on Kripke models and the logic of visibility as presented, for instance, in Herzig et al. (2018). We leave a more detailed analysis for future work.

Given:
An AF space ((A, A ? , R, R ? ), X ) and an argument a ∈ A. Question: Is it true that there is a completion F * ∈ X s.t. for all E ∈ Pr(F * ), a ∈ E?
Intuitively, AF spaces drop the assumption that the agent perceives (A?, R?) as completely uncertain, i.e. that all combinations of its elements are possible (as long as they yield completions). This can be an inconvenience for some modelling purposes, since uncertainty need not be that homogeneous in many real-life argumentative scenarios.
Interestingly, within the class of total S5(EA)-models we can isolate two subclasses corresponding to the specific types of IAFs most discussed in the literature, simply by applying some of the restrictions we have axiomatised.
-If M satisfies PIAw and NIAw, then A?_M = ∅. In other words, IAF_M is an attack-incomplete AF, also called a partial AF, first studied by Cayrol et al. (2007). As should now be evident, EA-models are much more general than incomplete AFs. More concretely, there are three types of information that we can model with EA-models but that fall outside the scope of incomplete AFs: nested beliefs, multi-agent information and non-total uncertainty about the elements of A? and R?. Moreover, the basic modal language can be used to answer the queries of the main reasoning tasks regarding argument acceptability in incomplete AFs.

Control AFs
Control argumentation frameworks (Dimopoulos et al. 2018) are a more complex kind of structure for representing qualitative uncertainty. They enrich incomplete AFs in two different senses. First, they augment incomplete AFs with an additional uncertain attack relation, whose precise meaning will be clarified later on. Second, they include a dynamic component by considering yet another partition of the underlying AF (the control part), which is intuitively assumed to be modifiable by the agent. In this subsection, we provide a natural epistemic multi-agent interpretation of control argumentation frameworks (CAFs) using our logic. The intuitive picture behind this interpretation is that of an agent, PRO (the proponent), reasoning about how to convince another agent, OPP (the opponent). Here, the uncertain part of a CAF captures PRO's lack of complete knowledge about OPP's knowledge of the underlying AF. Moreover, the so-called control part of a CAF represents the private knowledge of PRO. We also provide a reduction of the main reasoning tasks regarding CAFs to our logic. Let us start with the main definitions concerning CAFs and their semantics (Dimopoulos et al. 2018).

Definition 21 (Control argumentation framework)
A control argumentation framework is a triple CAF = (F, C, U) where: -F = (A, R) is the fixed part; -U = (A?, R? ∪ ↔) is the uncertain part, where A and A? are two finite sets of arguments and ↔ is symmetric and irreflexive; -C = (A_C, R_C) is called the control part, where A_C is yet another finite set of arguments and A, A? and A_C are pairwise disjoint; and -R, R?, ↔ and R_C are pairwise disjoint.
We sometimes call A ∪ A_C ∪ A? the domain of CAF and denote it as Δ_CAF. Intuitively, the new components can be thought of as follows.
↔ is an attack relation s.t. the existence of its elements is known by the agent, but their direction is unknown. So, whenever (x, y) ∈ ↔, it intuitively means that the agent knows that there is an attack between x and y but does not know which argument attacks which. As for C = (A_C, R_C), it is supposed to be the part of the framework that depends on the actions of the agent. These intuitions are formally specified in the following definition of completion, which requires, among other conditions, that for every x, y: (x, y) ∈ ↔ and x, y ∈ A* implies (x, y) ∈ R* or (y, x) ∈ R*.
From an epistemic perspective, completions can be understood as possible knowledge bases that PRO attributes to OPP. Note that the control arguments A_C are a subset of every completion. Something similar happens with control attacks (conditionally on the domain A* of each completion). The intuition here is that (F, C, U) provides the picture of a finished debate seen from PRO's point of view, where she has communicated all her available arguments A_C. The spectrum of debate states between the initial one (where nothing has been said) and (F, C, U) is captured by the notion of control configuration: Definition 23 (Control configuration) Given CAF = (F, C, U), a control configuration is a subset of control arguments CFG ⊆ A_C; its associated CAF, denoted CAF_CFG, is obtained by restricting the control part to CFG. Once more, classical reasoning tasks regarding AFs can be naturally generalised to CAFs. As an example, consider the following one (Dimopoulos et al. 2018): Pr-Necessary-Sceptical-Controllability (Pr-NSCon)

Given:
A control argumentation framework CAF = (F, C, U ) and an argument a ∈ A. Question: Is it true that there is a configuration CFG ⊆ A C s.t. for every completion F * = (A * , R * ) of CAF CFG and for all E ∈ Pr(F * ), a ∈ E?
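In the same brute-force style, Pr-NSCon can be sketched as a search over configurations and completions. The following Python sketch is our own illustration with hypothetical names; for brevity it treats R ∪ R_C as definite within each completion and omits the direction-uncertain attack relation:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def preferred(A, R):
    """Preferred extensions: subset-maximal admissible sets of (A, R)."""
    def conflict_free(E):
        return not any((x, y) in R for x in E for y in E)
    def defends(E, x):
        return all(any((z, y) in R for z in E) for y in A if (y, x) in R)
    adm = [E for E in powerset(A)
           if conflict_free(E) and all(defends(E, x) for x in E)]
    return [E for E in adm if not any(E < F for F in adm)]

def pr_nscon(A, A_unc, R, R_unc, A_ctrl, R_ctrl, a):
    """Pr-NSCon: is there a configuration CFG of control arguments such that
    a is in every preferred extension of every completion of CAF_CFG?"""
    for cfg in powerset(A_ctrl):
        good = True
        for extra in powerset(A_unc):
            A_star = set(A) | cfg | extra
            base = {e for e in set(R) | set(R_ctrl)
                    if e[0] in A_star and e[1] in A_star}
            optional = [e for e in R_unc
                        if e[0] in A_star and e[1] in A_star]
            if not all(all(a in E for E in preferred(A_star, base | added))
                       for added in powerset(optional)):
                good = False
                break
        if good:
            return True
    return False
```

For instance, if a is attacked by a fixed argument b, adding a control argument that attacks b makes a sceptically accepted in every completion, so Pr-NSCon answers yes.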
We now show how to build a two-agent EA-model to reason about a given CAF. First, given CAF = (F, C, U), we define the set of variables of CAF as V_CAF := V^{A∪A_C∪A?}.
-R^CAF_PRO := W_CAF × W_CAF and R^CAF_OPP := ∅. 37 -V_CAF is defined for each kind of variable as follows: 38 Moreover, for any CFG ⊆ A_C, the set of completions of CAF_CFG can be characterised inside the associated model.

Remark 7 Note that the set of completions of CAF
The following proposition spells out this multi-agent epistemic interpretation of CAFs: Proposition 4 Let CAF = (F, C, U) be a CAF, let M_CAF be its associated model, and let w ∈ M_CAF[W]. We have that: i.e. the set of fixed arguments is the set of arguments that the proponent knows that the opponent is aware of.
Uncertain arguments are those for which PRO considers it possible both that OPP is aware of them and that OPP is not. 37 Actually, the accessibility relation for OPP is irrelevant. 38 Notice that the definitions of A_i(·) and R(·) (Definition 8) did not include the parameter M, since we were reasoning about different states in the same model. Nevertheless, for this subsection, it will be useful to recover the parameter, since we need to reason about awareness sets and attacks that hold at states across different models.
i.e. control arguments are the arguments that PRO knows that OPP is not aware of. R_M = {(x, y) | M ⊨ □_PRO((aw_OPP(x) ∧ aw_OPP(y)) → x ↣ y) ∧ ♦_PRO(aw_OPP(x) ∧ aw_OPP(y))}, i.e. fixed attacks are those that the proponent knows that the opponent is aware of (conditionally on the awareness of the involved arguments). Moreover, the second conjunct serves to distinguish R from R_C.
So if (x, y) ∈ ↔, then PRO knows (conditionally on OPP's awareness of x and y) that either x attacks y or vice versa. Moreover, the meaning of ↔ (provided by Definition 22) forces PRO to consider as epistemically possible situations where OPP is aware of both arguments but where (x, y) (resp. (y, x)) does not hold.
In words, uncertain attacks are those such that (i) PRO considers it possible both that OPP is aware of them and that OPP is unaware of them (first two conjuncts), and (ii) they are not members of ↔ (third conjunct).
i.e. control attacks are those such that: (i) they are private to PRO (meaning that PRO knows that OPP is unaware of some of the involved arguments), and (ii) PRO is sure that they hold.
Finally, we can reduce controllability of a given CAF to a model-checking problem in the associated EA-model. To do so, we will use the following shorthand, informally expressing that E_k is part of PRO's private knowledge (i.e. that E_k is a set of control arguments in the associated EA-model).
Let CAF be a control argumentation framework, M_CAF be its associated model, and w ∈ W_CAF. We have that the answer to Pr-NSCon with input CAF and a ∈ A is yes iff the corresponding formula is true at (M_CAF, w). Again, the proposition can easily be adapted to other controllability problems. Besides, the fact that the control part of a CAF is representable through public additions reveals that the very notion of control configuration assumes that the speaker (proponent) is sure about the effects of the communication. More refined forms of communication (like the ones we have studied in Sect. 7) seem to deserve future attention so as to develop variants of CAFs.

Reasoning about opponent models
Strategic argumentation (Thimm 2014) studies how agents should interact in adversarial dialogues in order to maximize their expected utility. A useful tool in this context is opponent modelling (Oren and Norman 2009; Rienstra et al. 2013), a well-known technique among AI researchers dealing with more general adversarial situations (Carmel and Markovitch 1996a, b). Opponent modelling for abstract argumentation assumes, as we do for MAFs, that there is an underlying UAF (A, R) which contains all arguments relevant to a particular discourse (Oren and Norman 2009; Rienstra et al. 2013; Thimm 2014; Black et al. 2017). Based on this, it provides a model of a proponent in a strategic dialogue. The central notion is that of a belief state of the proponent, which is defined in general as a pair (B, E), where B ⊆ A is the set of arguments the proponent is aware of and E ⊆ ℘(A) is the set of belief states the agent considers possible for its opponent. 39 The belief state can be more or less refined: at level 0 of refinement it only includes the arguments the proponent is aware of. At level 1 it also contains her beliefs about her opponent's awareness, at level 2 it includes her beliefs about her opponent's beliefs about her own awareness, and so on, up to an arbitrary level n of nesting.
In our semantics, any pointed EA-model (M, w) for two agents contains all information needed to define a belief state of any level n of refinement for agent i. To make this more precise we introduce some notation. Intuitively, View_i(w) consists of the arguments that agent i believes (knows) herself to be aware of at state w. Based on this, we can define the belief states of agent i at state w for an arbitrary level n. It is interesting to show that the actual definitions of a belief state provided by Oren and Norman (2009) and Rienstra et al. (2013) are a particular case of our definition, modulo the restriction to specific classes of pointed models. In the case of the simple agent models (Oren and Norman 2009, Definition 5; Rienstra et al. 2013, Definition 8), a belief state of level n has the form (B_0, (B_1, . . . (B_n, ∅) . . . )), where each B_i is an awareness set (of the proponent if i is even and of the opponent if i is odd), and where B_{i+1} ⊆ B_i. Here, B_0 contains the awareness set of the proponent, B_1 the awareness set the proponent attributes to the opponent, B_2 the awareness set the proponent thinks the opponent attributes to him, and so forth. From our model-theoretic perspective, this tacitly assumes that we are in an AoA-model where each R_i is functional. Indeed, functionality forces each R_i[{w}] to be a singleton. This implies that each BS^n_i(w) has a singleton set as its second element E. Moreover, combined positive and negative introspection guarantee the remaining coherence conditions presupposed in simple agent models. Furthermore, GNIAw forces B_{i+1} ⊆ B_i, as desired.
In the more general case of uncertain agent models (Rienstra et al. 2013, Definition 10), a belief state (B, E) instead consists of an awareness set B for agent i and a set of belief states E of the opponent, each one of the form (B′, E′) such that B′ ⊆ B. Again, the latter condition assumes that GNIAw holds. The fact that B is the awareness set of the actual state tacitly assumes PIAw as before, but functionality need not hold any more, and therefore we are in the more general class of (serial) AoA-models.
Yet a more general class of models, extended agent models, is defined by Rienstra et al. (2013, Definition 11). Here, virtual arguments are added as arguments the agent is not aware of but considers it possible that other agents are. From our point of view, this corresponds to the failure of GNIAw (while PIAw and NIAw still hold).
Applied to this approach to strategic argumentation, our logics and semantics provide a systematic way to reason about the effects of different kinds of argumentative events on the belief states of agents. This can be useful, in turn, for computing the best move for an agent at a given moment of a dialogue. Furthermore, an important part of the work in strategic argumentation using opponent models consists in finding appropriate ways to update belief states. More formally, given a class of belief states B and a universal set of arguments C, the challenge consists in finding functions of the form upd: B × ℘(C) → B. From this perspective, our Lemma 1 provides sufficient conditions for accomplishing this task under different constraints on B.
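A toy extraction of nested belief states from a pointed two-agent model can be sketched as follows (our own encoding with hypothetical names: awareness sets and accessibility relations are stored in plain dictionaries; level-0 states carry an empty second component, as in simple agent models):

```python
def belief_state(model, w, agent, level):
    """Belief state of `agent` at world w, refined to `level` (toy encoding,
    names ours): model['aw'] maps (agent, world) to a frozenset of arguments
    the agent is aware of there; model['R'] maps each agent to a set of
    accessibility pairs. Level 0 returns (B, empty); level n+1 pairs B with
    the opponent's level-n belief states at all accessible worlds."""
    opponent = 1 - agent
    B = model['aw'][(agent, w)]
    if level == 0:
        return (B, frozenset())
    successors = {v for (u, v) in model['R'][agent] if u == w}
    return (B, frozenset(belief_state(model, v, opponent, level - 1)
                         for v in successors))

# Two worlds; agent 0 is unsure which awareness set agent 1 has:
model = {
    'aw': {(0, 0): frozenset({'a', 'b'}), (0, 1): frozenset({'a', 'b'}),
           (1, 0): frozenset({'a'}), (1, 1): frozenset({'a', 'b'})},
    'R': {0: {(0, 0), (0, 1)}, 1: {(0, 0), (1, 1)}},
}
```

Note that if the accessibility relation were functional, each second component would collapse to a singleton, recovering the shape of simple agent models.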

Discussion, open problems and future research
As mentioned in Sect. 3, there are many alternative design choices for multi-agent argumentation frameworks, which are worth discussing. A first choice concerns the finiteness of the argumentative pool, i.e. (a) of p. 8. Indeed, the set A of potentially available arguments may well be infinite. In principle, this option is viable for a propositional language with a countable set of variables. However, a propositional language allows us to encode the standard solution concepts only in the finite case. 41 Like many other works in this field, we restrict ourselves to finite AFs, which is enough for modelling most real-life debates.
A second branching option for design concerns (b), the fact that A is fixed in advance. One can instead assume that it evolves through updates, as in Doutre and Mailly (2018, Sect. 1.3). Our choice is shared by Sakama (2012), Doutre et al. (2014), de Saint-Cyr et al. (2016) and Caminada and Sakama (2017), among others. The rationale behind it is that it imposes no limitation for modelling the acquisition of new arguments by an agent and other relevant dynamics of information change, at least when the propositional language is rich enough to encode subjective awareness of arguments (Sect. 4).
Another option is not to assume (c), the existence of an objective attack relation R between members of A. Proposals like Dyrkolbotn and Pedersen (2016) and Baumeister et al. (2018b) avoid (c). This goes hand in hand with the very minimal assumption that agents only share a "pool" of arguments A, with no constraint on how these arguments interact with each other. This amounts to eliminating the R component of our structures, and may be adequate in contexts where conflicts between arguments cannot be assessed even from a third-person perspective. We should, however, stress that this is just a special case of a MAF, one where R = ∅. In line with others, for instance Schwarzentruber et al. (2012), we decided to build assumption (c) into our design, since our Kripke semantics still allows us, in the general case, to model radical disagreement about attacks at the epistemic level. Besides, this assumption is acceptable in many applications and provides a straightforward way to define the more complex notions we are after. We note, however, that it is possible to perform the same constructions without assuming (c), with a slightly different language and semantics.
Regarding the nature of the subjective awareness of arguments ( A i ) and attacks (R i ), there are multiple choices to be made, which consist in accepting or rejecting the following constraints: (d) A i ⊆ A (agents are only aware of "real" arguments). (e) R i ⊆ A i × A i (agents are only aware of attacks among arguments they are aware of). (f) R i ⊆ R (sound awareness of attacks). (g) R ∩ (A i × A i ) ⊆ R i (complete awareness of attacks). (h) A ⊆ A i (agents are aware of all "real" arguments).
Recall that our choice for design (Definition 2) integrates (d), (e), (f), and (g), but all of them are open to discussion. Although strongly intuitive, (d) and (e) are questioned by Schwarzentruber et al. (2012), who define a logic for reasoning about "non-existent" or "virtual" arguments {?_0, ?_1, . . .}. We do not integrate constraint (h), as it discards the natural intuition that different agents are aware of different sets of arguments. 42 Under this assumption the agents' views can only differ with respect to the attack relations, as in Dyrkolbotn and Pedersen (2016) and Cayrol et al. (2007). Again, this condition isolates a specific subclass of our MAFs, those for which A_i = A, which can be captured axiomatically by imposing all awareness atoms as axioms.
Assuming both (f) and (g), i.e. SCAA, is common in the literature on multi-agent abstract argumentation (Caminada 2006; Sakama 2012; Schwarzentruber et al. 2012; Doutre et al. 2017; Rahwan and Larson 2009). However, SCAA may seem too idealized in many contexts, since it brings the notion of awareness of arguments closer to that of knowledge of arguments. 43 Agents may indeed have different abilities to spot conflicts between arguments, 44 or, even more, they may be entitled to radically different views about the nature of the attacks. 45 Here again, the just-mentioned differences in awareness of attacks can still be modelled in our Kripke semantics. Indeed, what matters here is the distinction between simple SCAA and common knowledge (belief) of SCAA. The latter is a much stronger assumption, and the difference between them becomes transparent in the language and semantics of epistemic logic (Sect. 5).
The aim of this paper has been to introduce a new DEL framework for reasoning about multi-agent abstract argumentation. This involves the setup of a three-layer logic: propositional, epistemic and dynamic. Our first goal was to encode the key argumentation-theoretic notions in the language of propositional logic, and we showed that this is possible in the finite case. Concerning the epistemic layer, we provided complete axiomatisations for a number of intuitive constraints on awareness of arguments and attacks. Moreover, specific constraints isolate different classes of structures already used in abstract argumentation to model qualitative uncertainty about AFs, and our logic is comprehensive enough to reason about them (Sect. 8.1). As for the third layer, its language and semantics allow modelling subtle forms of information change (Sect. 7) and reasoning about other formalisms for uncertainty and dynamics (Sects. 8.2, 8.3).
Although event models for DEL are apt to describe the effects of complex information updates, their language describes the agential component of a debate only indirectly. In more detail, the language allows us to reason about what happens after some combination of communicative act and information update has been performed, but it does not allow us to reason about what agents "see to it that" in a debate. This is likely to require additional tools from logics of agency and epistemic planning, which suggests promising avenues for future work.
Author Contributions Both authors contributed equally to this paper.
Funding This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 748421. Carlo Proietti would also like to thank the Swedish Foundation for Humanities and Social Sciences (Riksbankens Jubileumsfond) for funding received during the preparatory phase of this research (P16-0596:1). The research activity of A. Yuste-Ginel is supported by the Spanish Ministry of Universities through the predoctoral Grant MECD-FPU 2016/04113.

Declaration

43 In this sense, knowing an argument would mean being aware of all and only the right conflict relations in which it is involved. 44 I may be aware of some argument against anthropogenic forcing on climate change, and a climatologist may present me with the result of some scientific study that undermines such an argument. Yet I may not be able to see the attack from the second argument to the first. 45 Given two potentially conflicting arguments a and b, it may be the case that two individuals disagree on their relative weights and therefore on whether a attacks b or vice versa. See Dyrkolbotn and Pedersen (2016) for examples and discussion. A milder assumption is that awareness of attacks is shared, i.e. uniform among individuals. Under this assumption the possibility remains open that all agents are wrong with respect to the objective attacks.

Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Proposition 1
Proof All results are obtained via a chain of equivalences from the left-hand side of each item to its right-hand side. [1] For the other claim: ⇐⇒ E_k ⊆ E_l and for some a ∈ A: v_MAF ⊨ a_{E_l} and v_MAF ⊨ ¬a_{E_k} (by the semantics of Boolean connectives and the previous claim). ⇐⇒ E_k ⊆ E_l and for some a ∈ A: a ∈ E_l and a ∉ E_k (definition of v_MAF). ⇐⇒ E_k ⊂ E_l (definition of ⊂). For the equivalent formulation, note that Th_MAF is uniquely satisfied by v_MAF. [2] ⇐⇒ For every a ∈ A: a ∈ E_k implies (a ∈ A_i and for all b ∈ A: either b ∉ E_k or it is not the case that bRa) (definition of v_MAF).
⇐⇒ E_k ⊆ A_i and E_k is conflict-free (definition of ⊆ and conflict-freeness). [3] ⇐⇒ (E_k ⊆ A_i and E_k is conflict-free) and for all a ∈ A: (v_MAF ⊨ a_{E_k} iff for all b ∈ A: ((v_MAF ⊨ aw_i(b) and v_MAF ⊨ b ↣ a) implies that for some c ∈ A: (v_MAF ⊨ c_{E_k} and v_MAF ⊨ c ↣ b))) (semantics of Boolean connectives). ⇐⇒ (E_k ⊆ A_i and E_k is conflict-free) and for all a ∈ A: (a ∈ E_k iff for all b ∈ A: ((b ∈ A_i and bRa) implies that for some c ∈ A: (c ∈ E_k and cRb))) (definition of v_MAF).
(semantics of ∧, → and ¬ and propositional reasoning). ⇐⇒ (P) and for every 1 ≤ l ≤ n: implies that it is not the case that E k ⊂ E l (item 1, second claim). ⇐⇒ E k is preferred w.r.t. (A i , R i ) (Definition 3).

[5] [strong acceptance]
. ⇐⇒ a is strongly accepted for i (Definition 4). for some 1 ≤ k ≤ n : E k ∈ Pr(A i , R i ) and a ∈ E k ) and
Proofs for the other cases (weak rejection, strong rejection and borderline) run very similarly and are left to the reader.

A2. Proof of Theorem 1
To prove completeness, we need a few preliminaries. First, we need to define the class of structures that will fit the canonical characterization of EA (and its extensions).

For every w ∈ W, V̄(w) represents an enumeration of ℘(A).
For the sake of readability, we shorten ⋃_{i∈Ag} R_i as R_Ag. A quasi-AoA-model is a quasi-model satisfying positive and negative introspection of attacks, 46 PIAw and GNIAw. In general, a quasi-C_L-model is one that satisfies the constraints of C_L. For instance, a quasi-S4(AoA)-model is a quasi-AoA-model where every R_i is a preorder. The notion of truth in pointed quasi-models is the same as in pointed models (Definition 9). Given a pointed quasi-model ( sets. As for AU, its proof runs completely analogously to that of point 3, with attack variables instead of subset variables. Point 5 is proven by induction on ϕ. The steps for propositional variables and Boolean connectives are straightforward, hence we just show the one for modal formulas. Assume, as induction hypothesis, that the claim holds for every v ∈ W′. This is true iff for every u ∈ W: vR_i u implies M, u ⊨ ϕ (semantics of □_i). Note that, by Lemma 3.1, for every u ∈ W s.t. vR_i u, we have that u ∈ W′. Using the induction hypothesis and the last observation, we have that for every u ∈ W (vR_i u implies M, u ⊨ ϕ) iff for every u ∈ W′ (vR_i u implies M′, u ⊨ ϕ). The last assertion is equivalent to M′, v ⊨ □_i ϕ (by the semantics of □_i).
Let L be any of the proof systems under consideration. We say that ϕ is an L-theorem, in symbols ⊢_L ϕ, iff there is a sequence ϕ_1, . . . , ϕ_n s.t. each ϕ_i is either an instance of an L-axiom scheme or has been obtained from preceding formulas of the sequence by the application of an L-inference rule. We say that ϕ is L-deducible from a set of formulas in the standard way. Let us denote by MC_L the set of all L-MCs. Proofs of the two following lemmas are standard (see e.g. Blackburn et al. 2002, Section 4.1): Lemma 5 (Properties of MCs) Let Γ ∈ MC and let ϕ ∈ L(V^A_Ag, ↣); then: (i) if Γ ⊢_L ϕ, then ϕ ∈ Γ; (ii) ϕ ∈ Γ or ¬ϕ ∈ Γ.
The canonical model for L is defined as M_L := (W_L, R_L, V_L), where: Remark 8 Note that no M_L is a C_L-model. To see this, note that both {a_{E_k}} and {¬a_{E_k}} are L-consistent, and by the Lindenbaum Lemma they are members of different states of M_L. Hence V_L(a_{E_k}) ≠ W_L and V_L(a_{E_k}) ≠ ∅, i.e. SU is violated by M_L.

Lemma 6 M L is a quasi-C L -model.
Proof Suppose that L = EA. We have to show that M_EA satisfies the three conditions of Definition 25. For condition 1, suppose that Γ ∈ V_EA(a_{E_k}) and Γ R^EA_Ag Δ. From the first part of the hypothesis we have that a_{E_k} ∈ Γ (definition of V_EA). This implies that Γ ⊢ a_{E_k}. Since ⊢ a_{E_k} → □_i a_{E_k} for every i ∈ Ag (it is derivable from (PIS)), we have that Γ ⊢ a_{E_k} → □_i a_{E_k}, because ⊢ is monotonic. Using MP and Lemma 5(i) we have that (*) □_i a_{E_k} ∈ Γ for every i ∈ Ag. From the second part of the hypothesis, we have that Γ R^EA_i Δ for some i ∈ Ag (definition of R_Ag), which is equivalent, by the definition of R^EA, to (**) {ϕ | □_i ϕ ∈ Γ} ⊆ Δ for some i ∈ Ag. From (*) and (**) we obtain a_{E_k} ∈ Δ, which implies, by the definition of V_EA, that Δ ∈ V_EA(a_{E_k}).
As for condition 2, suppose that Γ ∉ V_EA(a_{E_k}) and Γ R^EA_Ag Δ. From the first part of the hypothesis we get that a_{E_k} ∉ Γ, and this implies, by Lemma 5(ii), that ¬a_{E_k} ∈ Γ. Then we get that (*) □_i ¬a_{E_k} ∈ Γ for every i ∈ Ag, by using a similar argument as for condition 1, but this time with axiom (NIS). From the second part of the hypothesis we obtain that Γ R^EA_i Δ for some i ∈ Ag (by definition of R^EA_Ag), which in turn implies (**) {ϕ | □_i ϕ ∈ Γ} ⊆ Δ for some i ∈ Ag. Finally, from (*) and (**), it is easy to deduce ¬a_{E_k} ∈ Δ, which is equivalent to Δ ∉ V_EA(a_{E_k}), by definition of V_EA.
For condition 3: from the last assertion, using the semantics of ↔ and Lemma 5(ii), it is easy to deduce that for every 1 ≤ k < m ≤ n there is some x ∈ A such that either (x_{E_k} ∈ Γ and x_{E_m} ∉ Γ) or (x_{E_k} ∉ Γ and x_{E_m} ∈ Γ). Applying the definitions of V_EA and V̄, as well as Definition 6, one can deduce that V̄_EA(Γ) represents an enumeration of ℘(A). Since Γ was an arbitrary state of M_EA, we have that every state of M_EA represents an enumeration of ℘(A).
Suppose that L = AoA. Proving that M AoA satisfies positive and negative introspection of attacks is completely analogous to proving that M EA is a quasi-EA-model.
As for PIAw, suppose Γ R^AoA_i Δ and a ∈ A_i(Γ), which implies Γ ∈ V^AoA(aw_i(a)) by definition of A_i. This entails {ϕ | □_i ϕ ∈ Γ} ⊆ Δ and aw_i(a) ∈ Γ, by definition of R^AoA_i and V^AoA, and therefore Γ ⊢ aw_i(a) (by definition of ⊢). It follows that □_i aw_i(a) ∈ Γ, by axiom PIAw, monotonicity of ⊢ and Lemma 5. Therefore aw_i(a) ∈ Δ holds, from which Δ ∈ V^AoA(aw_i(a)) follows (definition of V^AoA). The latter entails a ∈ A_i(Δ), by definition of A_i.
As for GNIAw, suppose Γ R^AoA_i Δ and a ∈ A_j(Δ) for some j ∈ Ag. The first assumption implies {ϕ | □_i ϕ ∈ Γ} ⊆ Δ (definition of R^AoA_i), and the second implies Δ ∈ V^AoA(aw_j(a)) (definition of A_j), and therefore aw_j(a) ∈ Δ (definition of V^AoA). Suppose, for the sake of contradiction, that a ∉ A_i(Γ). This entails Γ ∉ V^AoA(aw_i(a)) (definition of A_i), and therefore aw_i(a) ∉ Γ (definition of V^AoA), and consequently ¬aw_i(a) ∈ Γ (Lemma 5). From the latter, Γ ⊢ ¬aw_i(a) follows (definition of ⊢), which implies □_i ¬aw_j(a) ∈ Γ (axiom GNIAw, monotonicity of ⊢ and Lemma 5). Putting both lines of reasoning together we have that {ϕ | □_i ϕ ∈ Γ} ⊆ Δ, aw_j(a) ∈ Δ and □_i ¬aw_j(a) ∈ Γ, which yields aw_j(a), ¬aw_j(a) ∈ Δ, contradicting the consistency of Δ. Therefore a ∈ A_i(Γ).
The proof of the Truth Lemma (for every Γ ∈ MC_L, ϕ ∈ Γ iff M^L, Γ ⊨ ϕ) is exactly as in the basic modal case. We refer to Blackburn et al. (2002, Lemma 4.21) or Fagin et al. (2004, Theorem 3.1.3) for details.
Finally, we are able to prove Theorem 1 by contraposition. Let us show the case L = EA. Suppose Γ ⊬_EA ϕ. By Lindenbaum's Lemma and the Truth Lemma we obtain M^EA, Γ* ⊨ Γ ∪ {¬ϕ} for some maximal consistent Γ* ⊇ Γ ∪ {¬ϕ}. Since truth is preserved by generated submodels (Lemma 3.5), we have that M'^EA, Γ* ⊨ Γ ∪ {¬ϕ}, where M'^EA is the submodel of M^EA generated by Γ* using R^EA_Ag. Moreover, by Lemma 3.3, M'^EA is an EA-model, and therefore Γ ⊭_EA ϕ. The remaining cases (L = AoA, L = S4(EA), etc.) follow along the same lines, combining Lemmas 3.1, 3.2 and/or 3.3 in the corresponding way.
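The Lindenbaum step used above can be illustrated computationally. The following is a minimal, purely propositional sketch (not the paper's modal setting, and all names are ours): formulas are represented semantically as functions from valuations to truth values, `consistent` is a brute-force satisfiability check, and `lindenbaum` decides each formula of an enumeration in turn, adding it or its negation while preserving consistency.

```python
from itertools import product

# Toy propositional analogue of the Lindenbaum construction used in the
# completeness proof. Formulas are functions valuation -> bool.
ATOMS = ["p", "q"]

def consistent(formulas):
    """Brute-force consistency: some valuation satisfies all formulas."""
    for bits in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in formulas):
            return True
    return False

def lindenbaum(gamma, enumeration):
    """Extend the consistent set gamma, deciding each enumerated formula."""
    mcs = list(gamma)
    for f in enumeration:
        if consistent(mcs + [f]):
            mcs.append(f)
        else:
            mcs.append(lambda v, f=f: not f(v))  # add the negation instead
    return mcs

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]

mcs = lindenbaum([p, p_implies_q], [p, q, p_implies_q])
assert consistent(mcs)                               # extension stays consistent
assert all(f({"p": True, "q": True}) for f in mcs)   # a witnessing valuation
```

The `f=f` default argument in the negation lambda avoids Python's late-binding of closure variables; without it every negation would refer to the last enumerated formula.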

Lemma 1 (Closure)
Proof For (i), suppose that M ⊗ E is defined. It suffices to show that M ⊗ E satisfies ER and SU, and this is an almost direct consequence of the definition of EA-substitutions and the operation ⊗.
For item (ii), assume that E ∈ em12 and that M ⊗ E is defined. Showing that M ⊗ E satisfies ER, SU, and AU is straightforward by the definition of AoA-substitutions and the operation ⊗. Thus, letting M ⊗ E = (W', R', V'), we just need to show that it satisfies PIAw and GNIAw.
[PIAw] Suppose (w, s) R'_i (w', s') and (w, s) ∈ V'(aw_i(a)), which is equivalent to w R_i w', s T_i s' and M, w ⊨ pos(s)(aw_i(a)) (by definition of ⊗). We need to show that (w', s') ∈ V'(aw_i(a)), which is equivalent to M, w' ⊨ pos(s')(aw_i(a)). We prove this for every possible value of pos(s)(aw_i(a)).
Case A. Suppose that pos(s)(aw_i(a)) = ⊤. Then a ∈ pos^+_i(s) (by definition of pos^+_i). The last assertion, together with the fact that s T_i s' and the hypothesis that E satisfies EM_1, leads to a ∈ pos^+_i(s'), which is equivalent to pos(s')(aw_i(a)) = ⊤ (by definition of pos^+_i). Therefore, M, w' ⊨ pos(s')(aw_i(a)).
Case B. Suppose that pos(s)(aw_i(a)) = aw_i(a). Then M, w ⊨ aw_i(a) and, since M is an AoA-model that satisfies positive introspection of awareness and we know that w R_i w', we have that M, w' ⊨ aw_i(a). Now, consider the three possible values of pos(s')(aw_i(a)). If pos(s')(aw_i(a)) = aw_i(a), then we already know that M, w' ⊨ pos(s')(aw_i(a)). The case pos(s')(aw_i(a)) = ⊤ is trivial. Finally, the case pos(s')(aw_i(a)) = ⊥ is not possible, because EM_1 would force pos(s)(aw_i(a)) = ⊥, and we know that this is not the case.
Case C. The case pos(s)(aw_i(a)) = ⊥ is absurd, since we assumed that M, w ⊨ pos(s)(aw_i(a)).
[GNIAw] Suppose that (w, s) R'_i (w', s') and (w', s') ∈ V'(aw_j(a)) for an arbitrary j ∈ Ag. Applying the definition of ⊗, we have that w R_i w', s T_i s' and M, w' ⊨ pos(s')(aw_j(a)). We have to show that (w, s) ∈ V'(aw_i(a)), or equivalently that M, w ⊨ pos(s)(aw_i(a)). We prove this by cases on the value of pos(s')(aw_j(a)), reasoning as in the PIAw case, but this time using the fact that M satisfies GNIAw. For the validity preservation of SE within EA (resp. S4(AoA), KD45(AoA)), the proof follows the same lines, but additionally uses item (i) (resp. (iii), (iv)) of Lemma 1.
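The operation ⊗ at work in this proof can also be sketched operationally. The following single-agent Python sketch of product update with factual change is our simplification: postconditions map an atom to True, False, or None ("keep the old value"), standing in for the paper's substitutions pos(s); `product_update` and the example names are illustrative, not from the paper.

```python
# Minimal sketch of product update with factual change (the operation ⊗),
# for single-agent Kripke models over propositional atoms.
def product_update(model, event):
    """model = (worlds, rel, val); event = (actions, erel, pre, post).
    val[w] is the set of atoms true at w; pre[s] maps such a set to a bool;
    post[s] maps atoms to True/False/None (None = keep old value)."""
    worlds, rel, val = model
    actions, erel, pre, post = event
    new_worlds = [(w, s) for w in worlds for s in actions if pre[s](val[w])]
    new_rel = {((w, s), (w2, s2))
               for (w, s) in new_worlds for (w2, s2) in new_worlds
               if (w, w2) in rel and (s, s2) in erel}
    new_val = {}
    for (w, s) in new_worlds:
        atoms = set(val[w])
        for atom, value in post[s].items():
            if value is True:
                atoms.add(atom)
            elif value is False:
                atoms.discard(atom)
        new_val[(w, s)] = atoms
    return new_worlds, new_rel, new_val

# Example: an always-executable event that makes agent i aware of argument a.
worlds = ["w"]
rel = {("w", "w")}
val = {"w": set()}                      # aw_i(a) initially false
actions = ["s"]
erel = {("s", "s")}
pre = {"s": lambda v: True}             # trivial precondition
post = {"s": {"aw_i(a)": True}}         # factual change: aw_i(a) becomes true
W2, R2, V2 = product_update((worlds, rel, val), (actions, erel, pre, post))
assert V2[("w", "s")] == {"aw_i(a)"}
```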

Details of the proof of Theorem 2
Soundness follows from the soundness of the static systems and Lemma 2. The proof of strong completeness works, as is standard in DEL, via reduction axioms. In order to prove the key reduction lemma, we combine techniques from Kooi (2007) with the intuitive idea of defining a reduction function τ whose domain is the dynamic language and whose co-domain is its static fragment, as in van Ditmarsch et al. (2007), van Benthem et al. (2006) and Wang and Cao (2013). This provides a simplified proof of the reduction lemma.
Proof (sketched) We first need to show that the Lemma holds for the special case where Od(ϕ) = 1. So suppose that Od(ϕ) = 1 and continue by induction on d(ϕ). We skip details and just note that, in the definition of τ, the function is applied to a formula of smaller depth on the right-hand side of the equations. The only problematic case is that of nested dynamic modalities, ϕ = [E, s][F, t]ψ, which reduces to the case above.
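As an illustration of this technique, here is a toy reduction function over a propositional announcement operator (our simplification, with all names ours, not the paper's τ): formulas are encoded as tuples, `push` applies one reduction axiom per connective, and `tau` translates the innermost dynamic modalities first, mirroring the double induction on Od(ϕ) and d(ϕ).

```python
# Toy reduction: dynamic formulas ('ann', a, psi) are read as propositional
# announcements over a single valuation, so that [!a]psi <-> a -> psi.
def imp(x, y):                           # material implication as not(x and not y)
    return ('not', ('and', x, ('not', y)))

def push(a, g):
    """Reduce ('ann', a, g) for a dynamic-free g, one reduction axiom per case."""
    if g[0] == 'atom':
        return imp(a, g)                           # [!a]p   <-> a -> p
    if g[0] == 'not':
        return imp(a, ('not', push(a, g[1])))      # [!a]~psi <-> a -> ~[!a]psi
    if g[0] == 'and':
        return ('and', push(a, g[1]), push(a, g[2]))

def tau(f):
    """Translate a dynamic formula into the static fragment."""
    if f[0] == 'atom':
        return f
    if f[0] == 'not':
        return ('not', tau(f[1]))
    if f[0] == 'and':
        return ('and', tau(f[1]), tau(f[2]))
    if f[0] == 'ann':
        return push(tau(f[1]), tau(f[2]))          # innermost modalities first

def ev(f, v):                            # semantics over a valuation v
    if f[0] == 'atom':
        return v[f[1]]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    if f[0] == 'ann':
        return (not ev(f[1], v)) or ev(f[2], v)

def no_ann(f):
    return f[0] != 'ann' and all(no_ann(x) for x in f[1:] if isinstance(x, tuple))

# Nested dynamic modalities, as in the problematic case discussed above:
f = ('ann', ('atom', 'p'), ('ann', ('atom', 'q'), ('not', ('atom', 'r'))))
assert no_ann(tau(f))                    # tau lands in the static fragment
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    v = dict(zip('pqr', map(bool, bits)))
    assert ev(f, v) == ev(tau(f), v)     # truth is preserved
```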
Proof (sketched) We follow the same strategy as for the previous lemma, i.e. we prove it first for the special case where Od(ϕ) = 1, by induction on d(ϕ). We omit details. After that, we are able to prove it in general, again by induction on d(ϕ). We just illustrate the case of ϕ = [E, s][F, t]ψ.
As an immediate corollary of Lemmas 2 and 8, we have that for every ϕ ∈ L_ea (resp. ϕ ∈ L_em12, ϕ ∈ L_emS4, ϕ ∈ L_pure): ⊢_EA ϕ ↔ τ(ϕ) (resp. ⊢_AoA ϕ ↔ τ(ϕ), ⊢_S4(AoA) ϕ ↔ τ(ϕ), ⊢_KD45(AoA) ϕ ↔ τ(ϕ)). Finally, strong completeness follows from Theorem 1 in the usual way. Let us just show the case of AoA_em12 for illustration. Suppose Γ ⊨_AoA ϕ; then τ(Γ) ⊨_AoA τ(ϕ) (by the corollary mentioned above). By Lemma 7, we know that τ(Γ) ⊆ L and τ(ϕ) ∈ L, which implies, together with τ(Γ) ⊨_AoA τ(ϕ) and Theorem 1, that τ(Γ) ⊢_AoA τ(ϕ). But since AoA_em12 is an extension of AoA, we have that τ(Γ) ⊢_AoA_em12 τ(ϕ). Applying the definition of deduction from assumptions (see p. 47), Lemma 8 and SE, we have that Γ ⊢_AoA_em12 ϕ.

A4. Preserving doxastic relations via an enhanced truth condition

Balbiani et al. (2012) introduce an enhanced truth condition for public announcement operators, establishing as a necessary condition for ⟨ψ⟩ϕ to be true that the updated model M^ψ belongs to the targeted class. Their axiomatisations make use of the global modality and apply to public announcement logic. In a different work, Aucher (2008) captures a sufficient condition, expressible in a language with a global modality, for seriality to be preserved under product update. We can therefore obtain an axiomatisation of KD45(AoA) for a broader dynamic language than the one used in Theorem 2 by putting together both these results and our construction of Lemma 1. First of all, we need to augment our language with a global modality [U]. Formulas of the language L(V^A_Ag, □_Ag, em, [U]) (for any em ⊆ ea) are generated by the following grammar:
where [U]ϕ reads "it is everywhere the case that ϕ". Now, we define the enhanced truth condition, denoted by ⊩. The ⊩-truth clauses for propositional variables, Boolean connectives and epistemic operators are the same as for ⊨. As for dynamic operators, the enhanced clause additionally requires the updated model to belong to the targeted class. Since we want to apply a reduction argument again, a previous step is needed, namely a sound and strongly complete axiomatisation in the extended static language L(V^A_Ag, □_Ag, [U]) w.r.t. KD45(AoA). More concretely, the proof system we are looking for is KD45(AoA)_[U], which is the result of extending KD45(AoA) with (i) the S5 axiom schemes for [U]; (ii) the axiom scheme (INC) [U]ϕ → □_i ϕ; and (iii) the rule NEC_[U] (from ϕ, infer [U]ϕ). Soundness and strong completeness of KD45(AoA)_[U] w.r.t. KD45(AoA) follow from Blackburn et al. (2002, Theorem 7.3). The same comments made in the proof of Theorem 2 apply here. Note that axioms PIAt and NIAt (resp. PIS and NIS) can now be rewritten more transparently as a single axiom [U](a↣b) ∨ [U]¬(a↣b) (resp. [U]a_{E_k} ∨ [U]¬a_{E_k}). For the reduction, we need to capture the new precondition for dynamic modalities (the one imposed in the definition of ⊩) in the object language. As proved in Aucher (2008, Proposition 2), the formula displayed there is true in a pointed model M, w iff M ⊗ E is defined and serial.

Let emd45 denote the class of event models satisfying EM_1 and EM_2 and where each T_i is serial, transitive and euclidean. Using this result and Lemma 1(ii), the validity of the schemas PIAw(E) and GNIAw(E) can be proved.
The formula PIAw(E) says that, everywhere in the model, any point that meets the precondition of some action s and satisfies aw_i(a) as a postcondition for s has access only to states that satisfy the same postcondition for any t which is related to s and executable. Similarly, GNIAw(E) says that, everywhere, when aw_i(a) is not satisfied as a postcondition of some executable s, then aw_j(a) is not satisfied at any accessible state as a postcondition for any executable t related to s. It is then a matter of standard verification to prove the following two items.
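Aucher's condition characterises definedness and seriality in the object language; operationally, one can also just compute the update and test seriality directly. The following brute-force, single-agent sketch is our simplification (names are ours), not the object-language formula:

```python
# Compute the product update directly and test whether every surviving
# state has a successor (i.e. whether M (x) E is defined and serial).
def serial_after_update(worlds, rel, val, actions, erel, pre):
    """Return True iff the product model is nonempty and serial."""
    new_worlds = [(w, s) for w in worlds for s in actions if pre[s](val[w])]
    if not new_worlds:
        return False                     # the product is not defined
    new_rel = {(u, u2) for u in new_worlds for u2 in new_worlds
               if (u[0], u2[0]) in rel and (u[1], u2[1]) in erel}
    return all(any((u, u2) in new_rel for u2 in new_worlds) for u in new_worlds)

# A serial model updated with an event whose precondition removes the only
# successor of a surviving world breaks seriality:
worlds = ["w", "v"]
rel = {("w", "v"), ("v", "v")}           # serial relation
val = {"w": {"p"}, "v": set()}
actions = ["s"]
erel = {("s", "s")}
pre_p = {"s": lambda v: "p" in v}        # only p-worlds survive
assert serial_after_update(worlds, rel, val, actions, erel, pre_p) is False
pre_top = {"s": lambda v: True}          # trivial precondition: seriality kept
assert serial_after_update(worlds, rel, val, actions, erel, pre_top) is True
```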

A.5. Incomplete and control AFs
We close this technical appendix by proving the results stated in Sect. 8.

Proof of Proposition 2
Proof We just prove the first item (the other one is analogous).
Suppose that the answer to Pr-PSA with input IAF = (A, A^?, R, R^?) and a ∈ A is yes.

⇐⇒
There is a u ∈ W^IAF s.t. M^IAF, u ⊨ stracc(a) (Definition 9 and the fact that stracc(a) does not contain modal operators, see p. 12).
⇐⇒

M^IAF, w ⊨ ♦stracc(a) (by the fact that R^IAF is total (Definition 19) and the truth clause for ♦).
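Independently of the modal encoding, the decision problem itself can be sketched by brute force. The sketch below assumes the reading that Pr-PSA asks whether some completion of the incomplete AF makes a skeptically accepted under preferred semantics; all function names are ours.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def preferred_extensions(args, atts):
    """Maximal admissible sets of the AF (args, atts), by plain enumeration."""
    def conflict_free(S):
        return not any((x, y) in atts for x in S for y in S)
    def defended(S, x):
        return all(any((z, y) in atts for z in S)
                   for y in args if (y, x) in atts)
    adm = [set(S) for S in subsets(args)
           if conflict_free(S) and all(defended(set(S), x) for x in S)]
    return [S for S in adm if not any(S < T for T in adm)]

def completions(A, A_unc, R, R_unc):
    """All completions: choose uncertain arguments, then uncertain attacks."""
    for extra_args in subsets(A_unc):
        args = set(A) | set(extra_args)
        certain = {(x, y) for (x, y) in R if x in args and y in args}
        available = [(x, y) for (x, y) in R_unc if x in args and y in args]
        for extra_atts in subsets(available):
            yield args, certain | set(extra_atts)

def pr_psa(A, A_unc, R, R_unc, a):
    for args, atts in completions(A, A_unc, R, R_unc):
        if a in args and all(a in E for E in preferred_extensions(args, atts)):
            return True
    return False

# The completion omitting the uncertain attacker accepts "a" skeptically:
assert pr_psa({"a"}, {"b"}, set(), {("b", "a")}, "a") is True
# A certain mutual attack, no uncertainty: "a" misses the extension {"b"}:
assert pr_psa({"a", "b"}, set(), {("b", "a"), ("a", "b")}, set(), "a") is False
```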

Proof of Proposition 3
Proof We just prove the second item, the first runs analogously.
Suppose that M, w ⊨ □⋁_{1≤k≤n} (preferred(E_k) ∧ a_{E_k}).

⇐⇒
For all u ∈ M[W], M, u ⊨ ⋁_{1≤k≤n} (preferred(E_k) ∧ a_{E_k}) (because R is total by assumption).

⇐⇒

For all u ∈ M[W], V̂(u) ⊨ ⋁_{1≤k≤n} (preferred(E_k) ∧ a_{E_k}) (because no modal operator occurs in ⋁_{1≤k≤n} (preferred(E_k) ∧ a_{E_k}), see p. 12).

⇐⇒
For all u ∈ M[W], there is some 1 ≤ k ≤ n such that V̂(u) ⊨ preferred(E_k) ∧ a_{E_k} (truth clause for ∨).

⇐⇒
The answer to Pr-PSA with input IAF_M and a ∈ A_M is yes (definition of Pr-PSA).

Proof of Proposition 5
Proof Suppose that the answer to Pr-NSCon with input CAF and a ∈ A is yes.

⇐⇒
∃CFG ⊆ A_C: for every u ∈ M_CFG[W], for every E ∈ Pr((A*_u, R*_u)), a ∈ E (Remark 7).

⇐⇒

∃CFG ⊆ A_C: M_CFG, w ⊨ □_PRO stracc_OPP(a) (note that R^CAF_PRO is total in M_CFG because it is total in M^CAF (Definition 24) and the execution of Pub_CFG does not vary accessibility relations (see Definition 14 and Example 5)).

⇐⇒

M^CAF, w ⊨ ⋁_{1≤l≤n} ⟨Pub_{E_l}⟩ □_PRO stracc_OPP(a) (propositional reasoning, variable renaming (CFG by E_l) and the fact that ℘(Δ^CAF) has n elements by assumption).