Blame it on me

In this paper, we develop a formalisation of the main ideas of Van de Poel's work on responsibility. Using the basic concepts through which the meanings of responsibility are defined, we construct a logic which enables us to express sentences like “individual i is accountable for ϕ”, “individual i is blameworthy for ϕ” and “individual i has the obligation to see to it that ϕ”. This formalisation clarifies the definitions of responsibility given by Van de Poel and highlights their differences and similarities. It also helps to assess the consistency of the formalisation of responsibility, not only by showing that the definitions are not inconsistent, but also by providing a formal demonstration of the relation between three main meanings of responsibility (accountability, blameworthiness and obligation). The formal account can also be used to derive new properties of the concepts. With the help of the formalisation, we detect the occurrence of the problem of many hands (PMH) by defining a logical framework for reasoning about collective and individual responsibility. This logic extends Coalition Epistemic Dynamic Logic (CEDL) by adding to its semantics a notion of group knowledge, agent ability and knowing how, and by generalising the definitions of individual responsibility to groups of agents.


Introduction
The concept of responsibility is central to a theory of collective agency and organisations. Responsibility issues arise any time a group of agents acts collectively in order to achieve certain objectives. Plans are made for the collective action of the group and specific agents are stated to be "responsible" for certain tasks. If something goes wrong, certain agents might be found "responsible" for what happened; they might be held "accountable" and be "blamed". The notion of responsibility displays different nuances, all related to particular aspects of agency, such as agents' ability, obligation and knowledge.
Van de Poel [38,39] has presented a conceptual analysis of moral responsibility. He has introduced a taxonomy of the various normative meanings of responsibility. For example, when we say that someone is responsible for a consequence ϕ, it may mean that one is "accountable" for ϕ, or that one is "blameworthy" for ϕ, or that one has the "obligation to see to it that" ϕ. As we will see, the application of these various meanings of responsibility depends on more basic concepts, such as moral agency, causality, wrong-doing, freedom and knowledge. For instance, an individual i is accountable for ϕ if i is capable of acting as a moral agent, is obliged to realise ¬ϕ but instead acts so as to necessitate ϕ. In addition, i is blameworthy for ϕ if i is accountable for ϕ, knows (or could know) that ϕ would be the case, and acts freely, i.e., i could have chosen a morally acceptable plan which would not have necessitated ϕ. Finally, i has the obligation to see to it that ϕ if i must ensure the occurrence of ϕ.
Van de Poel has classified these different meanings of responsibility in two categories. Accountability and blameworthiness are "backward-looking responsibilities", i.e., responsibilities regarding the past, whereas "obligation to see to it that" is a "forward-looking responsibility", i.e., a responsibility regarding the future. Van de Poel [39] also argues that these three notions are related, and has given four conceptual relations between them. Roughly, (1) if an individual i has the obligation to see to it that ϕ but does not realise ϕ, then i is accountable for ¬ϕ; (2) if i is accountable for ϕ and does not provide a satisfactory account for ϕ, then i is blameworthy for ϕ; (3) if i is blameworthy for ϕ, then i is also accountable for ϕ; and (4) if ϕ is not the case and i is not accountable for ¬ϕ, then i had no obligation to see to it that ϕ.
Using the basic concepts upon which the meanings of responsibility are defined, we construct a logic which enables us to express sentences like "individual i is accountable for ϕ", "individual i is blameworthy for ϕ" and "individual i has the obligation to see to it that ϕ". Such an effort contributes to the discussion about responsibility in at least two ways. First, it clarifies the definitions and also their differences and similarities. Second, it assesses the consistency of the formalisation of responsibility, not only by showing that the definitions are not inconsistent, but also by providing a formalisation of the four relations between the three notions of responsibility. Moreover, the formal account can be used to derive new properties of the concepts, thus giving new insights that can be used to advance the discussion. Finally, the formalism proposed here provides a framework wherein criteria for ascribing responsibilities can be stated and, if individuals are to be held responsible for outcomes, then, at least, justifications can be made clear.
We also develop a formal theory in which the "problem of many hands" (PMH), a term coined by Thompson in [37], can be precisely understood. This problem has been studied in the context of organisations (e.g., by Bovens [8]) and has to do with the potential discrepancy between individual and collective responsibility: in some situations, a group of individuals can reasonably be held responsible for an undesirable outcome, while no member of the group can reasonably be held responsible for the outcome. Bovens argues that such situations are problematic because it "frustrates the need for compensation and retribution on the part of the victims" and, perhaps more importantly, because "the fact that no one can be meaningfully called to account after the event also means . . . that no one need feel responsible beforehand" [8]. In our formalisation, we are able to precisely define when a group is responsible (both in terms of accountability and blameworthiness) and also when individuals (or subgroups, more generally) share that responsibility.
In this paper, we define a logical framework for reasoning about collective and individual responsibility based on the Coalition Epistemic Dynamic Logic (CEDL) proposed in [13]. However, the formalisation found in de Lima et al. [13] was not completely adequate, because it misses, for example, indirect accountability, which is, especially in organisations, an essential notion with respect to responsibility. We show this here through the example "Light bulb and light switch". Consider a scenario inhabited by two agents: Alice and Betty. They live in a strange house: its interior is illuminated by a light bulb, but the corresponding switch is located outside the house. In this scenario, Alice is inside the house and Betty is outside it, close to the switch. Thus, Alice can see whether the light bulb is on or off and tell (or rather shout) this fact to Betty, but she cannot toggle the switch. Betty, on the other hand, can toggle the switch but she cannot see whether the light is on or off. If she toggles the switch with the light on, it will turn off, and if she toggles it with the light off, it will turn on. Suppose that the light bulb is off and that Alice and Betty have the obligation that the light be on. If Alice and Betty do nothing, they are in a violation state. Betty is accountable, indeed directly accountable, since she could perform an action (toggling the switch) that necessarily leads to the fact that the light bulb is on. On the other hand, Alice could have informed Betty that the light bulb is off. In that case Betty would have had the information to prevent a violation. So, one may conclude that if both do nothing, ending in a violation state, it is Alice's fault as well, because she is indirectly accountable. This intuitive conclusion cannot be reached using the formalisation proposed in [13]. Furthermore, we say that both individuals are accountable, but that only Alice is blameworthy, since she knew how the group (Alice and Betty) could have prevented the violation.
In this paper we will address these issues raised by this example, among others.
To provide a formal understanding of responsibility issues within groups of agents is of definite importance for the theory and development of multi-agent systems (MAS). In fact, many methodologies for MAS are based on organisational concepts as their cornerstones. A formal theory of responsibility would provide these methodologies with conceptual tools for interpreting, in organisational terms, faulty performances of a given MAS, and at the same time suggest guidelines for the design of MAS behaving in specific ways with respect to the assessment of responsibilities among the agents.
The rest of this paper is organised as follows. In the next section we introduce a variation of CEDL. Then, in Section 3, we define operators to express agent ability and obligation. Sections 4, 5 and 6 present the formalisation of three notions of responsibility, namely, obligation to see to it that, accountability, and blameworthiness. In Section 7, we provide the formalisation of Van de Poel's relations between the three notions of responsibility. The last section draws conclusions and discusses possible future work.

The Formal Framework
In this section we present a logic that will be the basis of our formalisation of individual responsibility. This logic is a variation of the coalition epistemic dynamic logic (CEDL), which, in turn, is an extension of the well-known propositional dynamic logic (PDL) [21]. PDL is a classical propositional logic augmented with modal operators '[a]'. A formula of the form [a]ϕ means 'after every possible occurrence of event a, ϕ is true'. Thus, it permits expressing what consequences are caused by the occurrence of some given event a, where such events can be actions executed by one agent, actions executed by several agents, exogenous events or even programs. But since PDL does not have agents in its language, it does not enable expressing what consequences are caused by which agents of the scenario. To be able to express agent causality, CEDL extends PDL with actions that are 'enacted' by agents, in a similar way as done, e.g., by Wieringa and Meyer [41], and more recently by Lorini and Herzig [22,26] and Ågotnes and Alechina [2]. In CEDL, one can write formulae of the form [(i, a)]ϕ, meaning 'after every possible occurrence of event (i, a), ϕ is true', where the event (i, a) is the action a executed by agent i. Moreover, to be able to express agent knowledge, CEDL has modal operators K_i, in a similar way as done, e.g., by Grossi et al. [20] and Herzig et al. [23]. In CEDL, one can also write formulae of the form K_i ϕ, meaning 'agent i knows that ϕ'.
The remainder of this section is organised as follows. The next subsection presents the models used to give semantics to CEDL. Then, Section 2.2 presents the CEDL language, its interpretation on the models and its complete axiomatisation. After that, Section 2.3 provides a formalisation of group knowledge. Finally, Section 2.4 compares the models of CEDL with the Concurrent Epistemic Game Models (CEGMs).

Models
The mathematical structures defined in this section aim at modeling environments inhabited by one or more entities called agents. These agents are able to execute actions in order to change the environment. These include epistemic actions, i.e., actions that change the knowledge of the agents of the environment, such as sensing actions, communication actions, etc., and also physical actions, i.e., actions that change the environment, such as toggling light switches, opening and closing doors, etc. The idea is to be able to describe scenarios where one may want to ascribe responsibilities to agents or simply want to verify whether agents are responsible or not for consequences. Then, ideally, these structures should be rich enough to express the important concepts that define the various conditions of responsibility discussed in [39], namely, moral agency, causality, wrong-doing, freedom and knowledge. However, as the name suggests, these structures are meant to "model" the reality and not to copy it. Hence, some simplifications are made.
A model is defined for a given vocabulary, which consists of a triple of disjoint sets ⟨P, N, A⟩ where:
- P is a countable (possibly infinite) set of propositional variables denoting propositional facts;
- N is a finite set of labels denoting agents; and
- A is a finite set of labels denoting the actions available to the agents.
We use Δ to denote the set of all joint actions available to the agents in N, which is defined as the set of all total functions δ with signature N → A. In other words, Δ = {δ1, δ2, . . .}, where each δj is a set of pairs of the form {(i1, a1), (i2, a2), . . . , (i|N|, a|N|)} (one pair for each agent in N) and where im ∈ N and am ∈ A for 1 ≤ m ≤ |N|.
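Since Δ is just the set of total functions from N to A, it can be enumerated mechanically. The following Python sketch builds Δ for a small two-agent vocabulary; the agent and action names are ours, purely for illustration, and are not part of the paper's formalism:

```python
from itertools import product

# A tiny illustrative vocabulary: two agents and two actions.
N = ["alice", "betty"]
A = ["none", "tog"]

# Delta: every joint action assigns one action to each agent,
# i.e. a total function N -> A, represented here as a dict.
Delta = [dict(zip(N, choice)) for choice in product(A, repeat=len(N))]

print(len(Delta))  # |A|^|N| = 4 joint actions
print(Delta[0])    # e.g. {'alice': 'none', 'betty': 'none'}
```

Each element of `Delta` is one row of the joint-action table: a pair (agent, action) for every agent in N, exactly as in the definition above.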
CEDL models are a specific kind of Kripke models, which consist of directed labelled graphs as structures, where nodes represent possible worlds and relations between worlds represent (in our case) either indistinguishability relations or transitions. The indistinguishability relation for agent i represents the knowledge of i: the more worlds are indistinguishable for i, the less agent i knows about the scenario. The transition relation for some action δ represents the set of possible outcomes of δ: if w 1 is in the transition relation of δ with w 0 then it means that the occurrence of δ at w 0 may lead to w 1 . A graphical representation of one such model is displayed in Fig. 1.
In such a structure, each possible world is labelled with a set of propositional variables from P, each indistinguishability relation is labelled with an agent from N, and each transition is labelled with a joint action from Δ. For example, consider the model displayed in Fig. 1. The propositional variable p is present in the label of w0 and w2 but it is not present in the label of w1. This means that the propositional fact represented by p is true at w0 and w2, but it is false at w1. Moreover, the indistinguishability relation between w0 and w1 is labelled with agent i, meaning that agent i cannot distinguish between these two worlds. Thus, in our example, if w0 (or w1) is the actual world, then i considers it possible that either w0 or w1 is the actual world. This means that, if w0 (or w1) is the actual world, then i does not know whether the fact represented by p is true or false. Finally, the transition between w0 and w2 is labelled with joint action δ, meaning that the occurrence of δ at w0 leads to w2.
The formal definition is as follows. Let the vocabulary ⟨P, N, A⟩ be given. A CEDL model is a quadruple ⟨W, K, T, V⟩, where:
- W is a non-empty set of possible worlds;
- K is a function with signature N → P(W × W), defining, for each agent i ∈ N, an equivalence relation between possible worlds, which represents the knowledge of agent i;
- T is a function with signature Δ → P(W × W), defining, for each joint action δ ∈ Δ, a relation between possible worlds, which represents the transition associated with the joint action δ; and
- V is a function with signature P → P(W), defining, for each p ∈ P, the interpretation of the propositional variable p in the model.
To simplify notation, we write K_i instead of K(i), and also use K_i(w) to denote the set of worlds that i considers possible at w. That is, K_i(w) abbreviates the set {w′ : (w, w′) ∈ K(i)}. Analogously, we write T_δ instead of T(δ), and also use T_δ(w) to denote the set of possible outcomes of the occurrence of δ at w. That is, T_δ(w) abbreviates the set {w′ : (w, w′) ∈ T(δ)}.
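The quadruple ⟨W, K, T, V⟩ and the abbreviations K_i(w) and T_δ(w) translate directly into a data structure. The Python sketch below is a minimal illustration under our own naming conventions (not the paper's); joint actions are represented as frozensets of (agent, action) pairs so that they can serve as dictionary keys:

```python
# A minimal sketch of a CEDL model as a plain data structure.
class CEDLModel:
    def __init__(self, worlds, K, T, V):
        self.worlds = worlds  # set of world names (W)
        self.K = K            # agent -> set of (w, w') pairs (equivalence relation)
        self.T = T            # joint action (frozenset of pairs) -> set of (w, w') pairs
        self.V = V            # proposition -> set of worlds where it is true

    def K_i(self, i, w):
        """Worlds agent i considers possible at w: {w' : (w, w') in K(i)}."""
        return {v for (u, v) in self.K[i] if u == w}

    def T_delta(self, delta, w):
        """Possible outcomes of joint action delta at w: {w' : (w, w') in T(delta)}."""
        return {v for (u, v) in self.T.get(delta, set()) if u == w}
```

For instance, in a two-world model where agent i cannot tell w0 from w1, `K_i("i", "w0")` returns both worlds, mirroring the abbreviation K_i(w) above.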
We impose the following constraint on CEDL models:

(C1) for every w ∈ W, there is some δ ∈ Δ such that T_δ(w) ≠ ∅.

Constraint (C1) is called 'activity'. It stipulates that, at all possible worlds of the model, there is at least one non-empty transition which is labelled by some joint action in Δ. In other words, at any moment, there is at least one executable action for each agent, which rules out degenerate structures in which nothing can be executed. The dynamic portion of our model is quite similar to the joint action models over Act_N discussed in [2]. There are two primary differences to note. First, joint action models are explicitly deterministic, a requirement we omit. We allow that the outcome of the whole group N may be non-deterministic, since agents may perform actions with unknown outcome (one member of the group may throw a ball at a target, for instance, with no guarantee that it hits the bullseye). Determinism, in our setting, amounts to the requirement that T_δ(w) is a singleton for every δ and w.
Secondly, joint action models satisfy the following (axiom A3 in ibid, also referred to as Independent Choice (IC)): if, in world w, Stan can perform a given action a S and Ollie, in the same world, can perform a O , then it is possible for Stan to perform a S and Ollie a O simultaneously. However, Stan may be able to walk through a doorway if and only if Ollie doesn't walk through the doorway at the same time (or else they get stuck and fail to perform the action "walk through the doorway"). Similarly, Will may write a play with a pencil or Abe may write an address with the same pencil, but they can't do these two actions simultaneously. We will discuss IC in some detail in Section 5.1 and in particular indicate why we find it unsuitable for our aims.
There is yet another constraint imposed on CEDL models:

(C2) for all i ∈ N, δ ∈ Δ and w, v, v′ ∈ W: if v ∈ T_δ(w) and v′ ∈ K_i(v), then there is some w′ ∈ K_i(w) such that v′ ∈ T_δ(w′).

This constraint corresponds to 'no-forgetting' in [23] and 'perfect recall' in [17]. It defines an interaction between accessibility relations and transitions. With this constraint, the knowledge of an agent either increases or stays the same after the execution of any action. This means that agents never lose information, i.e., once an agent knows something, this agent will never forget it. This is obviously very restrictive but, at the same time, useful. For instance, this assumption allows us to ignore models where agents may keep losing information for whatever reason. Such a possibility would make it much more difficult (if not impossible) to derive some interesting properties of our formalism. We also note that, because each K_i is an equivalence relation, constraint (C2) implies that action occurrences are perceived by all agents. The latter implies that the agents perceive the passage of time.
For example, the structure in Fig. 1 respects both (C1) and (C2). The first is respected because, for all possible worlds w, there is at least one transition from w to some possible world. The second is respected because the equivalence classes of the accessibility relation K_i do not increase in size when we follow a transition from one possible world to another.

Syntax and Semantics of CEDL
Given a vocabulary ⟨P, N, A⟩, the language of CEDL is the smallest set L generated by the following grammar:

ϕ ::= p | ⊤ | ¬ϕ | (ϕ ∧ ϕ) | K_i ϕ | [δ|G]ϕ

where p ∈ P, i ∈ N, δ ∈ Δ and G ∈ 2^N \ {∅}, and where δ|G is δ, but with its domain restricted to G, i.e., let δ be the joint action {(i1, a1), . . . , (i|N|, a|N|)}, then δ|G is the partial joint action formed by the set of pairs {(in, an) | (in, an) ∈ δ and in ∈ G}.
In what follows, the common abbreviations for the operators ∨, → and ↔ are also used, the symbol ⊥ (contradiction) abbreviates ¬⊤, and, to simplify notation, we sometimes write [i1:a1, . . . , in:an]ϕ instead of [{(i1, a1), . . . , (in, an)}]ϕ. Note also that, as Δ, N and 2^N are finite sets, existential and universal quantification over these sets abbreviate disjunctions and conjunctions, respectively.
The intended meaning of formulae of the form K_i ϕ is, as usual: 'agent i knows that ϕ'. The meaning of a partial joint action of the form {(i1, a1), . . . , (in, an)} is: 'the agents in {i1, . . . , in} execute their corresponding actions in {a1, . . . , an} simultaneously (and we do not consider what the other agents do at the same time)'. And the meaning of formulae of the form [δ|G]ϕ is: 'after every possible occurrence of δ|G, ϕ is true'. Furthermore, ⟨δ|G⟩ϕ abbreviates ¬[δ|G]¬ϕ.
To match their meanings, formulae from L are interpreted using 'pointed CEDL models'. The latter are pairs of the form (M, w), where M = ⟨W, K, T, V⟩ is a CEDL model and w ∈ W. The semantic interpretation of Boolean operators is the usual one. For formulae of the form K_i ϕ we use the accessibility relation K_i: K_i ϕ is true at world w if and only if ϕ is true at all possible worlds w′ accessible from w via K_i, i.e., it is true if and only if ϕ is true at all worlds that agent i considers possible at w. The interpretation of operators [δ|G] is more complex. A formula of the form [δ|G]ϕ is true at world w if and only if ϕ is true at all possible worlds w′ that are attained from w via some transition in T which is labelled with δ|G ∪ δ′|N\G for some δ′ ∈ Δ. In other words, to verify whether [δ|G]ϕ is true at some possible world w, we must verify whether ϕ is true at all worlds w′ belonging to T_δ′(w) for all δ′ ∈ Δ, provided that the restriction of δ′ to G is equal to the restriction of δ to G, i.e., provided that δ′|G = δ|G. That is, we must verify whether ϕ is true after all G-transitions from w, no matter what the agents outside the group G do.
Formally, the satisfaction relation |=, between pointed CEDL models and formulae from L, is inductively defined as follows:
- M, w |= p iff w ∈ V(p);
- M, w |= ⊤ always;
- M, w |= ¬ϕ iff not M, w |= ϕ;
- M, w |= ϕ ∧ ψ iff M, w |= ϕ and M, w |= ψ;
- M, w |= K_i ϕ iff M, w′ |= ϕ for all w′ ∈ K_i(w);
- M, w |= [δ|G]ϕ iff M, w′ |= ϕ for all δ′ ∈ Δ such that δ′|G = δ|G and all w′ ∈ T_δ′(w).

As usual, a formula ϕ is valid in a model M (notation: M |= ϕ) if and only if every pointed CEDL model (M, w) satisfies ϕ, and valid (notation: |= ϕ) if and only if it is valid in every CEDL model.
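The clauses of the satisfaction relation can be turned into a small recursive model checker. The sketch below is self-contained Python with illustrative names of our own; formulas are nested tuples, and a partial joint action is completed by every possible choice of the remaining agents, exactly as in the clause for [δ|G]:

```python
from itertools import product

def holds(M, w, phi):
    """M: dict with keys 'V', 'K', 'T', 'N', 'A'; w: a world; phi: a tuple."""
    op = phi[0]
    if op == "atom":                 # propositional variable
        return w in M["V"].get(phi[1], set())
    if op == "not":
        return not holds(M, w, phi[1])
    if op == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if op == "K":                    # K_i phi: phi holds in all worlds i considers possible
        _, i, sub = phi
        return all(holds(M, v, sub) for (u, v) in M["K"][i] if u == w)
    if op == "box":                  # [delta|G] phi: phi holds after every completion
        _, partial, sub = phi        # partial: dict agent -> action, for agents in G
        for choice in product(M["A"], repeat=len(M["N"])):
            delta = dict(zip(M["N"], choice))
            # only completions that agree with the partial joint action on G
            if all(delta[i] == a for i, a in partial.items()):
                key = frozenset(delta.items())
                for (u, v) in M["T"].get(key, set()):
                    if u == w and not holds(M, v, sub):
                        return False
        return True
    raise ValueError(op)
```

With `partial = {}` the operator quantifies over all joint actions, and with a total `partial` it fixes the whole group's move, matching the two extreme cases of [δ|G].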
Example 1 (Light Bulb and Light Switch) To better explain the definitions given so far, we now use the 'light bulb and light switch' scenario from the Introduction. Alice is inside the house and Betty is outside it, close to the switch. Thus, Alice can see whether the light bulb is on (denoted p) or off (denoted ¬p) and tell it to Betty (action tell), but she cannot toggle the switch. Betty, on the other hand, can toggle the switch (action tog) but she cannot see whether the light is on or off. If she toggles the switch with the light on, it will turn off, and if she toggles it with the light off, it will turn on. Let the vocabulary be P , N, A , where P = {p}, N = {a, b} and A = {none, tell, tog}. A CEDL model for this scenario is given by the structure in Fig. 2.
We assume that the light bulb is off, i.e., the actual world is w0. We then use the definition of |= to verify the truth of some formulae in the pointed CEDL model (M, w0). For instance, M, w0 |= ¬K_b p ∧ ¬K_b ¬p: Betty does not know whether the light is on or off. This is true because there is a world that Betty considers possible (namely w0) where the light is off and there is a world that Betty considers possible (namely w1) where the light is on.
Many other interesting (and more complex) formulae can be verified in this model. We leave it to the reader to verify the following formulae:
- M, w0 |= [a:tell, b:tog]p: after the parallel execution of Alice telling whether the light is on or off and Betty toggling the switch, the light is on.
- M, w0 |= [b:tog]p: after Betty toggles the switch, the light is on. (Note that here we consider only the action executed by Betty.)
- M, w0 |= ¬[a:tell]K_b ¬p: it is not the case that after Alice tells Betty that the light is off, Betty knows that the light is off. (This is because Betty might have toggled the switch at the same time, so the light might actually be on.)
- M, w0 |= [a:tell, b:none]K_b[b:tog](K_a p ∧ K_b p): after the parallel execution of Alice telling Betty whether the light bulb is on or off and Betty doing nothing, Betty knows that after she toggles the switch, the two agents know that the light is on. In other words, if Betty waits for the announcement of Alice, then she knows how to reach the state where the two agents know that the light is on.
We now turn our attention to some important CEDL validities, displayed in Table 1, together with the rules of inference, such as modus ponens (MP): from ϕ and ϕ → ψ, infer ψ. The axioms and rules in Table 1 are sound and complete with respect to the class of CEDL models. The axiomatisation in Table 1 reveals one additional assumption that is implicit in CEDL models. This assumption corresponds to Axiom (S), which is called 'superadditivity'. It stipulates that if group G obtains outcome ϕ by acting as determined by the partial joint action δ|G and H (disjoint from G) obtains outcome ψ by acting as determined by the partial joint action δ′|H, then the group of agents G ∪ H obtains outcome ϕ ∧ ψ by acting as determined by the union of their partial joint actions. In particular, this implies that the bigger the group, the more it can achieve. This seems to be an intuitive property.

Group Knowledge
To formalise collective responsibility, we have to provide a formalisation of 'group knowledge'. Unfortunately, there is no consensus in the literature on what group knowledge means (see, e.g., the discussion in [18]). Here, we choose to use the notion of 'distributed knowledge', found, e.g., in [17]. In the language, we replace operators K_i by the more general K_G, whose semantics is formally defined as follows. Let K_G = ⋂_{i∈G} K_i; then M, w |= K_G ϕ if and only if M, w′ |= ϕ for all w′ ∈ K_G(w). Distributed knowledge approximately describes the knowledge of someone who has complete knowledge of what each member of the community knows. For example, assume that agent i knows that p is true but does not know whether q is true or false, whereas agent j knows that q is true but does not know whether p is true or false. Then, the group {i, j} has distributed knowledge that p ∧ q is true.
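Semantically, distributed knowledge amounts to intersecting the members' indistinguishability relations: the group pools everyone's information, so together they can rule out more worlds than any member alone. A minimal Python sketch (illustrative names):

```python
def K_G(K, group):
    """K: agent -> set of (w, w') pairs; returns the group relation,
    i.e. the intersection of the members' indistinguishability relations."""
    relations = [K[i] for i in group]
    result = relations[0]
    for r in relations[1:]:
        result = result & r   # set intersection
    return result
```

In the p/q example above, each agent's relation partitions the four worlds by the one fact that agent knows; the intersection leaves only the identity pairs, so the group pins down the actual world and has distributed knowledge of p ∧ q.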

Proposition 1
The following schemata are valid in CEDL with distributed knowledge:

Proof. KK to 5 are easy, and left for the reader. PR holds because (C2) is preserved for groups G by the definition of K_G. And KS holds because K_G is the intersection ⋂_{i∈G} K_i. Note that from axioms KS and KK′, both converse directions can be derived. However, whether these principles constitute a complete axiomatic system for CEDL with distributed knowledge is left as an open question.

Comparison with CEGMs
The models of CEDL are very similar to Concurrent Epistemic Game Models (CEGMs), structures which are commonly used to model joint action of epistemic agents (see [1,9,16,24,25] for some examples). There are four principal differences, one of which is, from our perspective, significant.
- CEGMs do not necessarily satisfy (C2). This is not a deep difference, since we could either relax (C2) for CEDL models or require it for CEGMs.
- CEDL models do not necessarily satisfy Determinism (DET) (the outcome of full joint actions is deterministic, i.e. a singleton or empty), which CEGMs require. This is again not deep, for the same reasons as above.
- CEDL models do not necessarily satisfy Action Awareness (AA), the principle that if two states are indistinguishable for i, then the same actions are executable for i in each state. Again, we have the options of relaxing this condition for CEGMs or requiring it for CEDL models. See Example 3 for a CEDL model which does not satisfy AA.
- CEGMs, by their very specification, require a principle of Independent Choice (IC) [2], that is, that if an action a is executable by agent i in a given world w, then it is executable no matter what the other agents do. This is baked into the usual presentation of CEGMs (or, more generally, CGMs), since one specifies the individual actions i can do in w and then builds up joint actions from these. On the other hand, CEDL models start with joint actions and require no such independence principle.
Condition (IC) aside, the differences between CEGMs and CEDL models are not essential. The conditions (C2), (DET) and (AA) are requirements imposed in addition to the basic structure of these models, whereas (IC) occurs as a fundamental step in the construction of CEGMs. We will argue in Section 5.1 that (IC) is unsatisfactory for our needs: the most obvious way to enforce (IC) on CEDL models entails an unreasonable consequence for accountability. On the other hand, weakening (DET) and (AA) or requiring (C2) for CEGMs is not as essential to their usual definition.
It is not hard to show that, fixing the sets P of propositions, N of agents and A of actions, the category of CEGMs satisfying (C2) is isomorphic to the category of CEDL models satisfying (DET), AA and (IC). The proof is not difficult, but long and fairly tedious and so we omit it here.

Ability and Knowing How
Our aim in this section is to define operators to express agent ability. The first operator we define here expresses that "by executing δ|G, the group G of agents ensures an outcome satisfying ϕ in the next step". That is, formulae of the form E_{δ|G}ϕ should mean that G can execute δ|G (i.e. δ|G is executable) and that it necessarily leads to an outcome satisfying ϕ. Thus, we define E_{δ|G}ϕ := ⟨δ|G⟩⊤ ∧ [δ|G]ϕ. Sometimes, we will need to express that some group of agents ensures an outcome ϕ by executing a sequence of actions δ1|G; . . . ; δn|G. Hence, we define E_{δ1|G;...;δn|G}ϕ := ⟨δ1|G⟩ · · · ⟨δn|G⟩⊤ ∧ [δ1|G] · · · [δn|G]ϕ. The following properties relate group and subgroup ability and are used in proofs of theorems which follow.
The following are theorems of CEDL.
There is a technical detail regarding our definition of E which will require some attention. E_{δ|G}E_{δ1|G;...;δn|G}ϕ implies E_{δ|G;δ1|G;...;δn|G}ϕ, while the converse is not true in general. This is because E_{δ|G}E_{δ1|G;...;δn|G}ϕ entails that (1) δ1|G; . . . ; δn|G is executable in every world reachable by doing δ|G, and this implies, but is not implied by, (2) there is some world w′ reachable by doing δ|G such that δ1|G; . . . ; δn|G is executable from w′ (see Fig. 3). This difference will complicate some of the statements of our theorems. It would be possible to avoid these complications, of course, by defining E_{δ|G;δ1|G;...;δn|G} to be E_{δ|G}E_{δ1|G;...;δn|G}. Nonetheless, there are good reasons to define E as we have. It is common to define so-called "test actions" ϕ? in dynamic logics, where the test succeeds exactly in those worlds satisfying ϕ and fails elsewhere. In the presence of such actions, it is common to hit a "dead end" along some paths and not others, and in that context, our definition of E is more appropriate than the alternative. While we do not explicitly include test actions in our presentation, they are a natural addition to our model, and so we should not presume that every path can be executed all the way to the end of δ1|G; . . . ; δn|G. Let us call a sequence δ1|G; . . . ; δn|G uniform at w just in case the remainder of the sequence is executable in every world reached along it, so that conditions (1) and (2) coincide. In Fig. 3, δ1|G; δ2|G is uniform at w0, but not at w0′: both w0 and w0′ satisfy E_{δ1|G;δ2|G}⊤, but E_{δ1|G}E_{δ2|G}⊤ is true at w0 and not at w0′. The following properties are useful for some of the proofs hereafter; the second is used in the proof of Theorem 3, and the third is proved using a derived rule of inference.

Lemma 2 The following are theorems of CEDL.
With this definition of ability in hand, we turn our attention to know-how. There is a subtle distinction to be made here: one may know that there is a way to realise ϕ, but not know what the strategy is. In other words, she knows that ϕ is realisable, but she does not know how to realise ϕ. We can see this situation in our Light Bulb and Light Switch scenario (Example 2): let us recall that, at w0, Betty is able to ensure that the light is on after one step. In fact, this is true at w1 too. Hence, at w0, Betty knows that she can ensure p in one step, but Betty does not know that the action tog ensures p. Thus, Betty knows that she is able to ensure that the light is on after one step, but she does not know what she must do to ensure it! In game theory [32] we say that an agent has a "non-uniform strategy" for a given goal whenever for every state that the agent cannot distinguish from the current state, there is a strategy whose execution leads to the goal; and we say that an agent has a "uniform strategy" for a given goal whenever there is a strategy such that for every state that the agent cannot distinguish from the current one, its execution leads to the goal.
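The distinction between knowing that the goal is achievable and knowing how to achieve it is purely a matter of quantifier order. A small Python sketch of the two conditions, with names of our own choosing: `indist` is the set of worlds the agent cannot tell apart from the current one, and `wins(d, w)` says that action `d` ensures the goal from world `w`:

```python
def non_uniform(indist, actions, wins):
    # for every epistemic alternative there is SOME winning action
    return all(any(wins(d, w) for d in actions) for w in indist)

def uniform(indist, actions, wins):
    # there is ONE action that wins in EVERY epistemic alternative
    return any(all(wins(d, w) for w in indist) for d in actions)
```

In the Betty scenario, `tog` wins only where the light is off and `none` wins only where it is on, so Betty has a non-uniform but no uniform strategy: she knows she can ensure the goal, but not which action does it.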
In terms of our logic, agent i has a non-uniform strategy for ϕ at w just in case M, w |= K_i ∃δ∈Δ E_{δ|{i}}ϕ, whereas she has a uniform strategy for ϕ at w if and only if M, w |= ∃δ∈Δ K_i E_{δ|{i}}ϕ. This leads naturally to a definition of know-how for groups. A group G knows how to realise ϕ just in case there is a non-empty subgroup G′ of G such that (1) G′ can realise ϕ by performing a sequence of actions δ1|G′; . . . ; δn|G′ and (2) each member of G′ knows that δ1|G′; . . . ; δn|G′ will realise ϕ. In this case, no matter what the other members of the group do, each of the members of G′ knows how G can ensure that ϕ. Consequently, we define H_G ϕ := ∃G′∈2^G \ {∅} ∃δ∈Δ C_{G′} E_{δ|G′}ϕ, where C_G ϕ := ∀i∈G K_i ϕ represents that ϕ is universal knowledge for the group G.
In the following proposition and hereafter, H n G is a sequence of n operators H G. Explicitly, H n G ϕ abbreviates H G H G · · · H G ϕ, with n occurrences of H G. As the proposition shows, H n G ϕ expresses the claim that some part of group G knows how to ensure ϕ in n steps. In the case that n = 0, the claim is trivial, since there is nothing for the group to know how to do: either ϕ attains or it does not.

Proposition 2
The following are valid in CEDL.

Responsibility-as-obligation
In this section, we define an operator expressing a group's obligations, as in deontic logic (see [29]). We do so by adapting to our framework the simple, yet effective, idea of Anderson and Moore [4], as reformulated in [11]. The idea is that "ϕ is obligatory if and only if failure to do what is required to make ϕ true would lead to sanction." For example, if it is obligatory for some agent i that the light is on, then every possible world in which the light is off satisfies the proposition that agent i is sanctioned. The way static and dynamic obligations are introduced in PDL by d'Altan et al. [11] is based on ideas of Meyer [28], which are an important inspiration for our treatment of the subject.
From now on, we assume that the set of propositional variables of our language is P ∪ Vio, where Vio = {vio G | G ∈ 2 N \ {∅}}. That is, for each group of agents G, we introduce a new atomic formula vio G, representing the condition that "the group of agents G is in violation". In addition, we require that our models now be quadruples of the form ⟨W, K, T, V⟩, where W, K and T are defined as before but the domain of V is extended to the new set of propositional variables, i.e., V : (P ∪ Vio) → 2 W.
In addition, we introduce an operator O G, which expresses that a particular state of affairs ϕ is obligatory or, equivalently, that ¬ϕ entails a violation, and we define O G ϕ accordingly. Obligations satisfy some interesting properties.
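The reduction can be sketched directly. In this toy rendering (the worlds and valuations are hypothetical), a fact is obligatory for G exactly when every world falsifying it carries the violation atom:

```python
# Anderson-style reduction, sketched over a flat set of worlds: a fact is
# obligatory iff every world where it fails is a vio_G world.

worlds = {"w0", "w1", "w2"}
vio_G = {"w0"}                  # worlds where the violation atom holds
light_on = {"w1", "w2"}         # worlds where the obliged fact holds

def obligatory(fact, vio, universe):
    return all(w in vio for w in universe - fact)

print(obligatory(light_on, vio_G, worlds))   # True: only w0 falsifies it
print(obligatory(vio_G, vio_G, worlds))      # False: w1 is not a violation
```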

Proposition 3
The following theorems and rule of inference are valid in CEDL.
We note that agents do not necessarily know their obligations, i.e., O G ϕ does not imply K G O G ϕ. This is consistent with traditional views of morality, dating back to Aristotle's Nicomachean Ethics.
Also, our approach permits models where violations are unavoidable. For instance, we have that vio G →O G ⊥ is valid, which also means that we do not impose Axiom D (i.e., ¬O G ⊥), which is present in some deontic systems. In our view it just means that dilemmas are possible (cf. [40]). As in Sartre's classic example, one can have the obligation to stay at home to look after an elderly mother and, at the same time, have an obligation to join the resistance movement to fight the Nazis.
With this definition of obligation we formalize one kind of forward-looking responsibility, namely, responsibility-as-obligation: the "obligation to see to it that". The aim here is to augment our logic with operators R, where formulae of the form R n+1 G ϕ are to be read as "it is obligatory for G to see to it that ϕ is true after n steps." We use Goodin's ideas to define this meaning of responsibility; the argument advanced is the following quote from Goodin ([19], p. 51): The standard form of responsibility is that A see to it that ϕ. It is not enough that ϕ occurs. A must have "seen to it" that ϕ occurs. "Seeing to it that ϕ" requires, minimally: that A satisfy himself that there is some process (mechanism or activity) at work whereby ϕ will be brought about; that A check, from time to time, to make sure that that process is still at work, and is performing as expected; and that A take steps as necessary to alter or replace processes that no longer seem likely to bring about ϕ.
This is a complex definition, and it is unlikely that we will be able to capture all its details in our framework. Nonetheless, we may approximate it by an inductive definition: R 0 G ϕ is true at some world w if and only if ¬ϕ implies a violation for G. But formulae R n G ϕ, for n > 1, are more than mere obligations that one does what is necessary to ensure ϕ. They also include the "supervisory nature" of this kind of responsibility, because they also require that, after n steps, ¬H G ϕ implies a violation for G. Intuitively, the group is responsible for discovering a plan, that is, for knowing how to accomplish their task of ensuring ϕ. As a result, members of the group may be responsible for sharing the requisite knowledge to ensure that the group actually does know how to realise ϕ.

Lemma 3
The following theorem is valid in CEDL.
In the next two sections, we will discuss two meanings of backward-looking responsibility, namely responsibility as accountability and responsibility as blameworthiness. These responsibilities are backward-looking in the sense that they usually apply to something that has occurred.

Accountability
Let us start by recalling the conditions for "accountability" given in [39]: . . . an agent i is accountable for ϕ if i has the capacity to act responsibly (has moral agency), is somehow causally connected to the outcome (by an action or omission) and there is a reasonable suspicion that agent i did somehow something wrong.
The first condition for holding an agent accountable, "moral agency", is a tacit assumption in our framework. In fact, our logical framework also assumes that all agents are rational and that they are perfect reasoners. This, for instance, means that agents can foresee all the logical consequences of the facts they know. Again, these assumptions may be seen as very restrictive ones, but, in a logical framework like ours, the relaxation of such assumptions would make it much more difficult (if not impossible) to derive some interesting properties.
The other two conditions for holding an agent accountable are causality and wrong-doing. In the context of a responsibility to see to it that ¬ϕ, these two conditions must be related to each other, in the sense that being causally connected to a prohibited outcome is itself "something wrong" done by the agent. There are two distinct ways in which the agent's action can be causally connected to the outcome.
The more direct way is that the agent did something that ensured ϕ when ¬ϕ was obligatory. More explicitly, the actions that the agent took necessarily resulted in ϕ, no matter what others did. In this case, the agent is accountable for the fact that ϕ. We can characterize this condition by introducing a relation A E which expresses that the agent's (or group's) actions have necessitated ϕ. Let G be a group such that R n G ¬ϕ and let G′ ⊆ G. Then G′ is directly accountable for ϕ by doing δ1|G′; . . . ; δn|G′ iff A E δ1|G′;...;δn|G′ ϕ, that is, iff E δ1|G′ . . . E δn|G′ ϕ: doing δ1|G′; . . . ; δn|G′ will realise ϕ no matter what the rest of the group G does. In this case, G′ has really brought about ϕ all by themselves, so to speak.

An agent (or subgroup) can also be indirectly accountable for ϕ by failing to ensure that the group as a whole knows how to avoid ϕ. This can happen in two distinct ways. First, it may be that i could have ensured that the group G knows how to avoid ϕ if only i had shared some of her knowledge; for instance, if Alice fails to tell Betty that the light is off, then she is indirectly accountable in this sense. In such a case, if the actions the group actually pursues fail to avoid ϕ, then i is accountable for her failure to share information. On the other hand, it could be that the group currently knows how to realise ¬ϕ, but the agent does something unexpected, after which no one in the group knows how to recover (though recovery is possible), and the group instead realises ϕ. Again, in this case, the agent is accountable for the group's failure to realise ¬ϕ. Thus, under the same conditions as above, G′ is indirectly accountable for ϕ by doing δ1|G′; . . . ; δn|G′ iff A H δ1|G′;...;δn|G′ ϕ. Note that the base case is trivial: if one has a one-step responsibility that ¬ϕ, there is no responsibility regarding group knowledge in the next step.
More explicitly, in the case that n = 0, Lemma 3 reduces to a tautology, and so for one-step responsibilities, any accountability must be direct. The second disjunct in the inductive step requires that, if some subgroup G′ is in a situation in which G′ could have ensured that the group G had the know-how to ensure ¬ϕ, but G′'s action instead ensured that G lacked the requisite know-how, then G′ is accountable for this failure. Recall, of course, that when we say a group G has the know-how to ensure ¬ϕ, what we mean is that some subgroup of G has the know-how to do so. This is sufficient to ensure ¬ϕ.
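The necessitation at the heart of direct accountability can be sketched as quantification over the other agents' choices. The agents, actions and outcome function below are a hypothetical illustration, not the paper's model:

```python
# A subgroup's fixed choices "necessitate" the outcome iff the outcome holds
# under EVERY combination of the remaining agents' choices.
from itertools import product

CHOICES = {"a": ["tell", "skip"], "b": ["tog", "skip"]}

def phi(joint):
    # hypothetical bad outcome: phi holds whenever b does not toggle
    return joint["b"] != "tog"

def necessitates(fixed):
    others = [i for i in CHOICES if i not in fixed]
    return all(phi(dict(fixed, **dict(zip(others, combo))))
               for combo in product(*(CHOICES[i] for i in others)))

print(necessitates({"b": "skip"}))   # True: phi regardless of a's choice
print(necessitates({"a": "skip"}))   # False: b could still toggle
```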
We can now define accountability by merging these two conditions: a subgroup G′ ⊆ G is accountable for ϕ by doing δ1|G′; . . . ; δn|G′ iff A δ1|G′;...;δn|G′ ϕ, where A δ1|G′;...;δn|G′ ϕ is the disjunction of the direct and indirect conditions above. Note that the inclusion of indirect accountability (i.e., A H) is an important feature of accountability involving group coordination. A subgroup is accountable if it could have ensured that some portion of the larger group could know how to bring about ¬ϕ but failed to convey the relevant information. Accountability is not just about what one does in pursuit of ¬ϕ, but also about the information one could make available (chiefly, though not exclusively, by communication, as in our light bulb example). This aspect of accountability, which was also present as the "supervisory nature" condition in our account of forward-looking responsibility, is missing from other accounts of group accountability and blameworthiness, including the models found in [10] and [27], discussed in Section 6. As far as we know, this is an original feature of our model.
The following theorem confirms that our definition of accountability satisfies some of our most basic intuitions.
Theorem 2
The following are theorems of CEDL.
Item (1) says that a (sub)group is directly accountable for ϕ only if their actions necessitated ϕ, while Item (2) says that accountability propagates upward, assuming that the sequence δ 1 ; . . . ; δ n is uniform and executable by G .
Example 2 (Light Bulb and Light Switch with Violations (revisited)) To illustrate operator A, we reuse the scenario of Example 1 and the model in Fig. 2. The only difference in this variation is that worlds w 0 , w 1 and w 2 are violation states for the group. This scenario is depicted in Fig. 4.
The careful reader may wonder why w 1 is a violation state, given that M, w 1 |= p. In fact, this is a little "hack" to ensure that M, w 0 |= R 2 G p, as desired. By Lemma 3 and the definition of R 1 G p, this requires that every world reachable in one step from w 0 satisfies O G H 1 G p. Since w 1 does not satisfy H 1 G p, we must set it to a violation state. One can confirm that, in this model, w 0 does indeed satisfy R 2 G p (and, in fact, so do each of the other worlds in M).
One can confirm the following.
-M, w0 |= A δ0|a;δ0|a ¬p: In particular, it is easy to see that Alice is indirectly accountable for ¬p. -M, w0 |= ¬A δ2|a;δ0|a ¬p: Theorem 2(1) entails that Alice is not directly accountable in this case, and a careful examination shows that she is not indirectly accountable either.
Finally, let us consider the situation in which the group executes δ0|G; δ1|G in w0, resulting in the lucky happenstance that the light bulb ends up on, even though Betty did not know that this would be the case. Clearly, neither Alice nor Betty can be accountable for ¬p, because ¬p was not realised. Nonetheless, Alice does not escape all accountability. Applying Lemma 3, we conclude that M, w0 |= R 1 G H G p, and Alice is thereby accountable for ¬H G p after doing δ1|a. She had a (one-step) responsibility to tell Betty the state of the bulb, and she neglected to do so.
Let us consider an alteration of this example. Add a third person, Cecile, also standing outside and able to toggle a second switch. The light will switch from on to off or vice versa iff exactly one of Betty and Cecile toggles her switch; if both toggle, then nothing happens. Cecile's knowledge relation is the same as Betty's, and both Cecile and Betty can see what action the other has taken. The amended graph is shown in Fig. 5.
As before, the group is responsible to reach a p state in two steps. If the group executes δ0; δ0, then they will fail to realise their obligation, and the group is accountable for their failure.
Moreover, Alice will also be (indirectly) accountable in this case, since she failed to notify Betty and Cecile of the state of the light. But notice that neither Betty nor Cecile are accountable. Betty's actions did not necessitate ¬p-had Cecile toggled her switch once, they would have realised p. And Cecile is not accountable either, for symmetric reasons.
However, the subgroup {b, c} is accountable. The action sequence δ0|{b,c}; δ0|{b,c} does indeed necessitate ¬p. Thus, we have an example of a subgroup which is accountable, though no individual in the subgroup is accountable. In sum, one can confirm the following. -M, w0 |= A δ0|a;δ0|a ¬p: If Alice does nothing in either step, then she is (indirectly) accountable, since she failed to alert Betty regarding the state of the light. -M, w0 |= ¬A δ4|a;δ0|a ¬p: Alice alerted Betty in the first step, but Betty and Cecile did nothing. Alice is not accountable for the failure. -M, w0 |= ¬A δ4|b;δ0|b ¬p: Betty is not accountable for the failure to turn the light on, since her actions did not necessitate that the light was off after two steps. After all, Cecile could have turned on the light while Betty did nothing, ending in a desired state. -M, w0 |= A δ0|{b,c};δ0|{b,c} ¬p: The subgroup consisting of Betty and Cecile is accountable for ¬p, even though neither of the two is individually accountable.
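The subgroup phenomenon can be checked mechanically. The sketch below uses a one-step rendering of the toggle rule reconstructed from the text ("exactly one toggle flips the light"); it is an illustration, not the full two-step model of Fig. 5:

```python
# Joint inaction by {b, c} necessitates that the light stays off, while
# neither agent's inaction alone does: the other could still toggle.
from itertools import product

def next_on(on, b_tog, c_tog):
    # exactly one toggle flips the light; both or neither leaves it alone
    return (not on) if (b_tog != c_tog) else on

def inaction_necessitates_off(fixed):
    free = [x for x in ("b", "c") if x not in fixed]
    for vals in product([False, True], repeat=len(free)):
        move = dict(fixed, **dict(zip(free, vals)))
        if next_on(False, move["b"], move["c"]):   # light ends up on
            return False
    return True

print(inaction_necessitates_off({"b": False, "c": False}))  # True
print(inaction_necessitates_off({"b": False}))              # False
print(inaction_necessitates_off({"c": False}))              # False
```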

Accountability and Independent choice
Let us return to the topic of independent choice discussed in Section 2.4 (see also [2]), that is, the property that, if an agent i can perform an action a in world w, then the agent can perform that action no matter what other agents do; call this condition (IC). Intuitively, this is an implausible condition, especially where cooperation is required for certain joint actions. It is possible that Alice can execute α iff Betty executes β. Imagine Alice needs a boost to get over a fence and only Betty can provide that boost. Then Alice can perform scale iff Betty performs boost, thus violating (IC).
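The failure of (IC) in the fence scenario can be sketched as follows; the encoding of available joint actions is our own reading of the left-hand model of Fig. 6:

```python
# (IC) check: an action that is available to an agent must remain available
# under every available choice of the other agent, i.e. individually
# available moves must always combine into an executable joint action.
from itertools import product

MOVES = {"alice": ["scale", "wait"], "betty": ["boost", "wait"]}
# executable joint actions at w0: Alice can scale only with Betty's boost
available = {("scale", "boost"), ("wait", "boost"), ("wait", "wait")}

def independent_choice():
    for am, bm in product(MOVES["alice"], MOVES["betty"]):
        a_avail = any(j[0] == am for j in available)
        b_avail = any(j[1] == bm for j in available)
        if a_avail and b_avail and (am, bm) not in available:
            return False
    return True

print(independent_choice())   # False: scale + wait is not executable
```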
One might think that (IC) is a matter of perspective. After all, one might interpret the action scale as "Alice attempts to scale the fence." Under this interpretation, scale would be executable in every world but makes little or no difference in worlds where Betty does not cooperate. However, this interpretation leads to unfortunate consequences for accountability (and hence blameworthiness).
Consider the two situations illustrated in Fig. 6. The left-hand figure represents a model in which Alice can perform scale iff Betty chooses to boost. This model obviously does not satisfy (IC). The right-hand model is one in which we interpret scale in terms of attempting to cross the fence. If Betty doesn't cooperate, then Alice's attempt to cross the fence results in no change to the world. Suppose, further, that the group has an obligation to ensure Alice does not cross the fence (in one step).
However, notice that in w0, Alice is directly accountable for crossing the fence by executing scale. This is because every execution of scale from w0 ends in a violation state (w1). In the right-hand model, however, while the group {a, b} is accountable for crossing the fence if they execute the joint action {a : scale, b : boost}, neither Alice nor Betty is accountable for her individual actions. This does not seem plausible. The action scale aims at reaching w1 and will do so, as long as Betty cooperates. There is, indeed, no other reason to execute scale (attempting to scale the fence is a poor substitute for doing nothing!). An appropriate model would judge that Alice is accountable for choosing scale, Betty accountable for choosing boost and the group accountable for the joint action.

Fig. 6 The left-hand diagram shows a natural CEDL model for scaling a fence, while the right-hand shows the most plausible CEDL model satisfying Independent Choice
For this reason, we regard (IC) as inappropriate for the notion of accountability as we've sketched it here, and we take CEDL models to provide a more appropriate semantics for our theory than CEGMs.

Blameworthiness
The definition given in [39] for this meaning of responsibility is the following: agent i is blameworthy for consequence ϕ whenever i is accountable for ϕ and is not capable of giving an acceptable account for it. Two accounts (excuses) are considered acceptable. The first one is ignorance: the agent i is excused (and thus is not blameworthy) for ϕ if i can show that she could not know that her behavior would cause ϕ. The second is compulsion. A possible interpretation of compulsion is that the agent i is excused for ϕ if i can show that no behavior that does not cause a violation is possible.
We extend van de Poel's definition to include groups and their subgroups, not just individuals. Let G′ ⊆ G and suppose further that G′ has executed the sequence of actions δ1|G′; . . . ; δn|G′; we define B δ1|G′;...;δn|G′ ϕ accordingly. Note that this definition uses distributed knowledge rather than common knowledge: if the group as a whole recognizes that a particular plan would result in accountability, then the group itself is blameworthy.
In both the base case and the inductive case, one can explicitly see that, if one knows she is accountable and is under no compulsion to end up in a violation state, then one is blameworthy. The inductive step also includes a clause ensuring that one is blameworthy if they are blameworthy at any stage in executing the action sequence. We discuss this feature in Example 4 below.
Clearly, our definition allows one to be accountable but not blameworthy if he lacked knowledge that his actions would make him accountable. Similarly, if one's actions necessitated ¬ϕ, but nothing that he could do would avoid a violation state, then he is accountable, but not blameworthy.
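The two excuses can be sketched on a toy model. All valuations below (who is accountable where, where a violation-free plan exists, and the epistemic cells) are hypothetical:

```python
# Blame requires (i) knowledge of one's accountability across all worlds the
# agent cannot distinguish, and (ii) the existence of some plan avoiding the
# violation. Failing (i) is the ignorance excuse; failing (ii), compulsion.

accountable = {"w0": True, "w1": False, "w2": True, "w3": True}
can_avoid_vio = {"w0": True, "w1": True, "w2": False, "w3": True}
cell = {"w0": {"w0", "w1"}, "w1": {"w0", "w1"},
        "w2": {"w2"}, "w3": {"w3"}}          # indistinguishability cells

def blameworthy(w):
    knows_accountable = all(accountable[v] for v in cell[w])
    return knows_accountable and can_avoid_vio[w]

print(blameworthy("w0"))  # False: ignorance excuse (w1 looks the same)
print(blameworthy("w2"))  # False: compulsion excuse (no violation-free plan)
print(blameworthy("w3"))  # True: no excuse applies
```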
Our formalisation of blameworthiness is similar in spirit to the agentive responsibility discussed in [27]. Both accounts lean heavily on Aristotle's remarks on voluntary actions [5] and blameworthiness (recast as a kind of responsibility in Lorini et al.). For instance, our ignorance exception corresponds to the authors' epistemic condition, while the compulsion exception corresponds to the control condition, at least in the passive sense. Lorini et al., however, use atemporal group STIT in order to model agentive responsibility, which essentially amounts to focusing on blame for single actions rather than sequences of actions. Since organisations often must implement complicated plans in order to satisfy their obligations, we regard our framework as more appropriate for modeling blame within organisations.

A very different approach to responsibility and blame can be found in [10], in which Chockler and Halpern develop a model based on causality. As a result, for instance, their definition of responsibility is broad enough that rain can be responsible (but not blameworthy) for the cancellation of a picnic, whereas our definitions of responsibility, accountability and blame apply only to agents and groups. There is also a difference insofar as the authors' definition of responsibility (backward-looking and hence analogous to accountability in our terminology) includes the compulsion clause, whereas we reserve that clause for blameworthiness. One significant difference in Chockler and Halpern's account is that both responsibility and blame come in degrees, whereas on our account, one is either accountable (blameworthy, resp.) or not. Degrees of responsibility is a sensible feature, but one which has no obvious analog in our formal setting.
As was the case with accountability, we expect blame to propagate upwards. Because (distributed) knowledge is an essential part of blame, however, it will have to figure into the conditions that allow blame to apply to the supergroup of a blameworthy subgroup. The following example provides a situation where a subgroup is blameworthy, while the larger group is not, verifying the relevance of knowledge for upward propagation.
Example 3 Ethel and Fred are international spies currently being held by an international baddie. They are in a cell with two different doors, each with different lock mechanisms, and Fred has managed to lift a key from one of the guards. Unfortunately, neither Ethel nor Fred knows whether the key unlocks the left door or the right.
Their one chance at escape requires that Ethel distract the guards while Fred unlocks the proper door. As all international spies know, captured spies have a duty to attempt to escape. The situation is illustrated in Fig. 7. Ethel can either do nothing or distract the guard, while Fred can either unlock the left door or the right, depending on which key he has. Suppose that the actual world is w0, where Fred has the key to the right-hand door. In this world, the group {e, f} is responsible for p (escaping the cell) in one step.
We can confirm the following.
-M, w 0 |= A δ 0 | e ¬p: If Ethel does nothing, it is certain that the escape will fail and she is thereby accountable. -M, w 0 |= ¬A δ 0 | f ¬p: If Fred tries to unlock the right door, then he is not accountable for failure to escape, since he would have succeeded had Ethel distracted the guards. -M, w 0 |= A δ 0 | {e,f } ¬p: The group as a whole would be accountable for failure to escape if Ethel does nothing, because accountability propagates upward. -M, w 0 |= B δ 0 | e ¬p: Ethel would be blameworthy for doing nothing, since she knows the plan would not work unless she distracts the guards. -M, w 0 |= ¬B δ 0 | {e,f } ¬p: The group as a whole is not blameworthy if Ethel does nothing, since the group does not know that δ 0 | {e,f } is executable, even though δ 0 | e is known to be executable.
The point here is that, in w 0 , Ethel is blameworthy if she does nothing, because she knows that she will be accountable in that case. The group is not blameworthy for doing δ 0 , because the group does not know that this is executable and hence don't know that they will be accountable.
The following theorem gives explicit conditions that allow one to conclude that a supergroup is blameworthy on the basis that a subgroup is blameworthy.
The following are theorems of CEDL.
Example 4 (Light bulb and switch re-revisited) Consider again the example of Alice and Betty, shown in Fig. 4.
First, let us take this opportunity to explain the third disjunct in our definition of B. This clause ensures that, if there is a k such that δ1|G′; . . . ; δk|G′ necessarily leads to a state in which subgroup G′ is blameworthy for the tail of the sequence δk+1|G′; . . . ; δn+1|G′, then G′ is blameworthy for the whole sequence δ1|G′; . . . ; δn+1|G′. Our reasoning for this condition can be explained in terms of Alice and Betty. Consider the sequence δ2; δ0, where Alice alerts Betty regarding the state of the light, but Betty does not toggle the switch. In this case, Betty is accountable for ¬p, as we saw in Example 2. In addition, Betty will be blameworthy for executing δ0 in the second step. But we also regard Betty as blameworthy for executing the whole sequence δ2; δ0. The fact that she is ignorant in w0 is no excuse, since after one step, she knows what she ought to do. Ignorance at the beginning of the plan doesn't excuse knowingly failing to do what one ought later. This is precisely the motivation for including the clause (3). Without it, we would conclude that the group {a, b} is blameworthy for executing δ2; δ0, but neither a nor b is individually blameworthy. On the contrary, Alice did what she ought to have done and Betty did not, even though Betty knew better. Therefore, Betty is blameworthy here.
In addition, our model satisfies the following.
-M, w0 |= B δ0|a;δ0|a ¬p: We must show that M, w0 |= K a A δ0|a;δ0|a ¬p ∧ ∃ δ1|a;...;δ2|a p. The latter conjunct is, in fact, witnessed by δ0|a; δ0|a itself, since it would be possible to end in w3 if Alice did nothing at all (since Betty might toggle the switch on her own). Moreover, because K a (w0) = {w0}, Alice knows she will be accountable for executing δ0|a; δ0|a. Hence Alice is blameworthy.
In the three-person variation (shown in Fig. 5), we can confirm the following. Since the reasoning is familiar, we leave it to the reader.

Problem of many hands
The 'problem of many hands' (PMH) is a term coined by Thompson in [37] and studied in the context of organisations (e.g., by Bovens [8]). It is a problem that sometimes arises when we address collective responsibility: a group of individuals can reasonably be blamed for an undesirable outcome, while no member of the group can reasonably be blamed for that outcome. The core of this problem appears to be the potential gap between individual and collective responsibility. Following Van de Poel [39], we will use the following general characterisation of the PMH: the PMH occurs if a group G is blameworthy for ϕ, whereas none of the individuals of the group G is blameworthy for ϕ. We can formalize this with a formula PMH δ1|G;...;δn|G ϕ, which means that by performing δ1|G; . . . ; δn|G, the problem of many hands arises in group G with respect to the outcome ϕ. Analysing the definition of the operator B a bit further, we realise that the PMH may arise from two different sources: 1. the individuals do not have the ability to avoid the violation, or 2. the individuals do not have the necessary knowledge that they are accountable for the violation state.
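The PMH condition itself is easy to state computationally. The blame assignment below is hypothetical, mirroring the three-person light example:

```python
# PMH: the group is blameworthy while no member is individually blameworthy.

blame = {
    frozenset({"a", "b", "c"}): True,   # collective blame
    frozenset({"a"}): False,
    frozenset({"b"}): False,
    frozenset({"c"}): False,
}

def pmh(group):
    return blame[frozenset(group)] and not any(
        blame[frozenset({i})] for i in group)

print(pmh({"a", "b", "c"}))   # True: collective blame, no individual blame
```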
An example of the PMH is the three-person variation of the light bulb and switch scenario (see Fig. 5). One can show that if Alice, Betty and Cecile execute the sequence δ4|{a,b,c}; δ3|{a,b,c}, then the group {a, b, c} is blameworthy for the outcome ¬p, though none of the individuals is blameworthy. Now it might be objected that if the three individuals are collectively blameworthy for the light being off, each of them, as a member of the group, is also individually blameworthy. This argument, however, presupposes a stronger version of the reducibility thesis. The reducibility thesis states that blameworthiness of a group can always be analysed in terms of individual blameworthiness (see [30]). The stronger version is formulated as follows (cf. [39]): if a group is blameworthy for ϕ, then all individuals of the group are individually blameworthy. If we accept this strong version of reducibility, it follows that the PMH cannot occur. Another extreme position that avoids the PMH is that there is an individual who bears the ultimate responsibility for the actions of all the other individuals of the collective, as with ministerial responsibility. So, if there is waste, corruption, or any other misbehaviour found to have occurred within a ministry, the minister is blameworthy even if the minister had no knowledge of the actions. Work on multi-agent systems (MAS) tries to avoid the PMH by explicitly managing the interdependencies between the organisational activities [14], such as the delegation activity (the flow of obligations within an organisation) and the information activity (the flow of knowledge within the organisation).

Short-term and long-term blame
In our definition of blameworthiness, we have interpreted Aristotle's notion of compulsion in terms of being unable to avoid a state of violation. In some ways, this is misleading, since Aristotle defines compulsion as follows [5]: What is forced has an external principle, the sort of principle in which the agent or victim contributes nothing-if, for instance, a wind or human beings who control him were to carry him off.
This notion of compulsion is included in our definition of blame: if a wind carries one away, preventing him from realising ϕ, then it is not the case that he could have realised ϕ, i.e., ¬ δ1; . . . ; δn ¬vio G attains. Nonetheless, we are obviously more interested in the actions which Aristotle calls "mixed", actions in which the principle is internal, but which are similar enough to compelled actions that we pardon any violation.
Aristotle gives two examples of such mixed actions. In the first, a tyrant threatens the actor's family with harm if he does not do something "shameful". In the second, sailors are forced to throw cargo overboard in order to prevent the ship from foundering in a storm. In both cases, the actions are pardonable (perhaps even praiseworthy in the latter) since a greater harm was averted.
In the situation of the ship, for instance, the sailors face a choice between jettisoning the cargo or doing nothing. The safe passage of the cargo is their responsibility, and so they would be accountable for throwing the cargo overboard. But if they do not jettison the cargo, then all will be lost (including the cargo!). We pardon their decision to jettison the cargo, since the alternative is worse. In other words, while they are accountable, they are not blamed.
Of course, to compare the degree of harm (roughly, how bad each violation may be), we would have to complicate the model somewhat, providing a means of comparing violations in distinct worlds so that we may say whether jettisoning the cargo avoids a worse violation than not doing so. In essence, this amounts to adding defeasibility to our model, so that inferences of blame may be "blocked" by defeating obligations, as surveyed in [31]. Obviously, this requires significant work, which is beyond the scope of the present paper. But even without this technical fix, we can get a sense for how mixed actions may be pardonable.
Consider once more the light bulb and switch example, but suppose that Alice has signed an unfortunate non-disclosure agreement (NDA) regarding the state of the light bulb. Alice and Betty live in a funny old world. In order to reflect this commitment, we must amend the model so that M, w2 |= vio a and M, w3 |= vio a, since the only way to end up in these two worlds is if Alice breaks the NDA and tells Betty the state of the light bulb. Consequently, Alice is between a rock and a hard place. If Alice does nothing (i.e., chooses δ0|a), then she is blameworthy for failing to realise H G p. On the other hand, if she chooses to tell in the first step, she will be blameworthy for realising H G p. It is reasonable to ask whether we should blame her for either or both of these potential outcomes, and it is reasonable to think that our answer depends on the degree of violation. If G's responsibility for p takes precedence, then we may forgive Alice for violating her non-disclosure agreement, while if the agreement is more important, we may forgive Alice for her role in failing to achieve p, while still blaming G for this failure. In this case, while neither Alice nor Betty is blameworthy for realising ¬p, the group {a, b} is nonetheless blameworthy.
The situation is even more complicated in the case that a violation can be rectified. Suppose that Alice's non-disclosure agreement has a rectification clause: if she violates the agreement, she can discharge the violation by making amends (perhaps paying a penalty). The situation is shown in Fig. 8. Worlds w2 and w3 represent states in which vio a is true, since in these worlds, Alice has violated her agreement. However, once in these states, she can perform the action rectify in order to discharge the violation. Thus, by performing the sequence tell; rectify, Alice will be accountable neither for the group's responsibility that p nor for her own non-disclosure agreement, although she does enter a violation world (w2 or w3) temporarily. The question is whether "all's well that ends well." Should the temporary violation be ignored, even if (for instance) the non-disclosure agreement is more important than the group's obligation to turn on the light? It is clear that Aristotle's "mixed" actions require some care to represent properly in this formal setting, including the addition of a defeasibility condition to the models, as in [31]. In that case, the violation may be pardonable, even if it is not strictly speaking "forced". The more complicated situation, when a rectifiable but greater violation is temporarily incurred for the sake of a lesser responsibility, requires further consideration.

Fig. 8 In this variation on the light bulb and switch scenario, Alice has a non-disclosure agreement with a rectification clause. If she tells Betty the state of the light bulb, she will violate the agreement, but if she chooses rectify, she will realise a non-violation state. Consequently, worlds w0, w1, w2 and w4 are vio G states, while worlds w2 and w3 are vio a states

On Ibo Van De Poel's Four Propositions
Ibo van de Poel [39] has suggested four relations between backward-looking responsibility and forward-looking responsibility.

P-1 If agent i had a forward-looking responsibility-as-obligation for ϕ and ϕ did not occur and ¬ϕ is not caused by exceptional circumstances, then agent i is accountable for ¬ϕ.

P-2 If agent i is accountable for ϕ and has no appropriate excuse why ϕ is the case, then agent i is blameworthy for ϕ.

P-3 If an agent i is blameworthy for ϕ and ϕ is the case, then agent i is also accountable for ϕ.

P-4 If agent i is not accountable for ¬ϕ while ϕ is not the case and ¬ϕ is not caused by exceptional circumstances, then agent i had no forward-looking responsibility-as-moral-obligation for ϕ.
We should first note that (P-4) can be restated thus: If i had a forward-looking responsibility-as-moral-obligation for ϕ and ϕ is not the case and ¬ϕ is not caused by exceptional circumstances, then i is accountable for ¬ϕ.
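The claim that (P-4) is the contrapositive restatement of (P-1) can be checked mechanically at the propositional level. The sketch below treats "forward-looking responsibility", "ϕ occurred", "exceptional circumstances" and "accountable for ¬ϕ" as independent booleans (an abstraction of ours, ignoring the modal structure) and verifies that the two conditionals agree on every valuation.

```python
# Propositional check that P-1 and P-4 are contrapositives of each other.
# F = forward-looking responsibility for phi, occurred = phi occurred,
# exceptional = not-phi caused by exceptional circumstances,
# A = accountable for not-phi. (Our abstraction, not the paper's semantics.)
from itertools import product

def p1(F, occurred, exceptional, A):
    # P-1: F and failure and no exception  =>  accountable
    return (not (F and not occurred and not exceptional)) or A

def p4(F, occurred, exceptional, A):
    # P-4: not accountable and failure and no exception  =>  not F
    return (not ((not A) and not occurred and not exceptional)) or (not F)

# The two propositions coincide on all 16 valuations.
assert all(p1(*v) == p4(*v) for v in product([True, False], repeat=4))
```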
Thus, (P-4) is equivalent to (P-1), and so we will omit (P-4) from our considerations. The first issue in taking up (P-1) is to interpret the phrase "exceptional circumstances". For our purposes, non-uniformity (Section 3) is an exceptional circumstance. That is, if δ 1 | G ; . . . ; δ n+1 | G is not uniform at world w, it is possible to "cause" ¬ϕ by executing δ 1 | G ; . . . ; δ n+1 | G just by reaching a world in k steps where δ k+1 | G cannot be executed. Consider again Fig. 3, in which the sequence δ 1 | G ; δ 2 | G is non-uniform at w 0 . By executing this sequence, one may end up at world w 3 after one step. At w 3 , the action δ 2 | G "causes" ¬ϕ, for any formula ϕ, in the sense that M, w 3 |= [δ 2 | G ]¬ϕ holds vacuously, since δ 2 | G cannot be executed at w 3 . We regard this way of realising ¬ϕ as extraordinary.
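The non-uniformity check can be sketched as follows. A sequence is uniform at w if, along every path the sequence can generate from w, the next action is always executable. The transition relation below is an illustrative assumption in the spirit of Fig. 3, in which d2 is not executable at w3.

```python
# Uniformity check for an action sequence (Fig. 3-like illustrative model:
# rel[(world, action)] is the set of successor worlds; names are ours).
rel = {
    ("w0", "d1"): {"w1", "w3"},
    ("w1", "d2"): {"w2"},
    # d2 is NOT executable at w3, so d1; d2 is non-uniform at w0.
}

def uniform(world, seq):
    """True iff seq can be carried through along every path from `world`."""
    frontier = {world}
    for act in seq:
        # The next action must be executable at every reachable world.
        if any((w, act) not in rel for w in frontier):
            return False
        frontier = set().union(*(rel[(w, act)] for w in frontier))
    return True
```

Here `uniform("w0", ["d1", "d2"])` is false precisely because one possible outcome of d1 is w3, where d2 cannot be executed, which is the "extraordinary" way of realising ¬ϕ described above.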
Note that uniformity is essential here. Consider again the graph from Fig. 3 and assume that w 2 is not a violation state for G. The reader can confirm that M, w 0 |= R 2 G .

Let us turn our attention to (P-2). We have explicitly interpreted the phrase "G has an excuse for ϕ being the case" in terms of ignorance (lack of distributed knowledge that G would be accountable for ϕ) and compulsion (lack of ability to avoid vio G ). Accordingly, we may say that G has an excuse for ϕ in case the following proposition holds.
¬K G A δ 1 | G ;...;δ n | G ϕ ∨ ∀ δ 1 | G ;...;δ n | G [δ 1 | G ; . . . ; δ n | G ] vio G . Thus, if G is accountable for ϕ and has no excuse, then it follows that K G A δ 1 | G ;...;δ n | G ϕ ∧ ∃ δ 1 | G ;...;δ n | G ¬[δ 1 | G ; . . . ; δ n | G ] vio G . It is easy to see that B δ 1 | G ;...;δ n | G ϕ follows from this. Note that there are instances in which one has an excuse in the above sense, but is nonetheless blameworthy. This arises in our light bulb example (Example 4). Consider the situation in which the group G = {a, b} executes the sequence δ 2 ; δ 0 , as discussed in Example 4. Betty is blameworthy for failing to bring about p by executing this sequence, even though in world w 0 , Betty does not know that she is accountable for δ 2 ; δ 0 .
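At the propositional level, the excuse clause is a disjunction of ignorance and compulsion, and its negation is exactly the conjunction used above. The booleans below are our own abstraction of the modal conditions (knowledge of accountability, and existence of some sequence avoiding vio G ).

```python
# Propositional sketch of the excuse clause (our abstraction of the modal
# definition): G is excused iff it lacks (distributed) knowledge of its
# accountability, or every available sequence ends in a violation state.
def excused(knows_accountable, some_sequence_avoids_violation):
    ignorance = not knows_accountable
    compulsion = not some_sequence_avoids_violation
    return ignorance or compulsion

# Negating the excuse gives the conjunction used in the argument for P-2:
# knowledge of accountability AND some violation-avoiding sequence exists.
def no_excuse(knows_accountable, some_sequence_avoids_violation):
    return knows_accountable and some_sequence_avoids_violation
```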
Let us turn our attention to (P-3), where the situation is a bit subtler than one might prefer. This proposition requires that blameworthiness entail accountability, but in fact it is provable in our system only in the special case that G′ = G, i.e., one can prove |= B δ 1 | G ;...;δ n | G ϕ → A δ 1 | G ;...;δ n | G ϕ, but the general statement involving an arbitrary subgroup G′ ⊆ G does not hold. (The careful reader will notice that the presumption of A δ 1 | G ;...;δ n | G ϕ is not necessary to infer B δ 1 | G ;...;δ n | G ϕ here, since A δ 1 | G ;...;δ n | G ϕ can be inferred from K G A δ 1 | G ;...;δ n | G ϕ.) Let us first prove the theorem in the special case, and then discuss the failure of the general case.
The problem in the general case G′ ⊆ G is that one must show that E δ 1 | G A δ 2 | G ;...;δ n+1 | G ϕ → A δ 1 | G ;...;δ n+1 | G ϕ, where the E operator depends on the group G and the accountability operator depends on the subgroup G′. In essence, the incompatibility arises because of clause (3) discussed in Example 4: if, at any point in the execution of δ 1 | G ; . . . ; δ n+1 | G , the subgroup G′ becomes accountable without excuse, then they are blameworthy for their sequence of actions, even though possibly accountable only for the tail of δ 1 | G ; . . . ; δ n+1 | G . That said, it is still the case that there is a connection between blameworthiness and accountability, even for subgroups. If M, w |= B δ 1 | G ;...;δ n | G ϕ, then there is an i with 0 ≤ i ≤ n such that M, w |= E δ 1 | G . . . E δ i | G A δ i+1 | G ;...;δ n | G ϕ.
We regard this as quite intuitive. Suppose one recognizes their accountability in the middle of executing δ 1 | G ; . . . ; δ n | G and continues the same execution nonetheless. Then they are blameworthy for δ 1 | G ; . . . ; δ n | G . Thus, we regard it as a feature of our formalisation that one may be blameworthy for the whole sequence δ 1 | G ; . . . ; δ n | G , but accountable only for the tail.
Conclusion

In this paper, we have presented a logic that extends PDL with epistemic formulae of the form K i ϕ, expressing that agent i knows that ϕ (similarly to [20,23]), and with enacted actions, i.e., formulae of the form [δ| G ]ϕ, expressing that ϕ holds after the execution of δ by group G (similarly to [22,26,35,41]). It turns out that in our logic, formulae of the form E δ| G ϕ, expressing that G has the ability to ensure ϕ, can be defined as simple abbreviations. We can therefore express operators with the same properties as those found in other logics of agency, such as coalition logic [33,34], STIT [22] and ATL [3], but using a simpler semantics. As Ågotnes and Alechina [2] observe, PDL has the advantage of being theoretically well understood, with a range of mathematical and computational tools available, without the need for exotic new formalisms. Additionally, as discussed in Section 5.1, the property of Independent Choice (a feature of the CEGMs used to provide semantics for ATL, Coalition Logic and STIT) is inappropriate for our notion of accountability, while CEDL is an appropriate formal setting for capturing our intuitions. Finally, our logic enables us to give a solution to the problem of uniform strategies. With this tool in hand, we propose a formalisation of one notion of forward-looking responsibility, two notions of backward-looking responsibility, and the relations between them. We show that, by reasoning about obligations alone, agents are unable to decide to perform the actions that prevent them from reaching a violation state, even in cases where it is possible to do so. Using the concepts of forward-looking and backward-looking responsibility in the specification of multiagent normative systems is more appropriate: when agents reason with such concepts, they can undertake a course of action that is more likely to be successful.
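The two operators added to PDL can be sketched as a minimal model checker. The model below (valuation, transitions, epistemic partition) is an illustrative fragment of ours; for brevity we index transitions by the action alone, omitting the group annotation of [δ| G ].

```python
# Minimal model checker for the two operators added to PDL:
#   ("K", i, phi)      -- agent i knows phi
#   ("box", d, phi)    -- phi holds after every execution of action d
# Model ingredients are illustrative assumptions (light-bulb-flavoured).
V = {"w0": set(), "w1": {"p"}}                 # valuation: p holds only at w1
R = {("w0", "switch"): {"w1"}}                 # action transitions
E = {"betty": [{"w0", "w1"}]}                  # betty cannot tell w0 from w1

def holds(w, f):
    kind = f[0]
    if kind == "atom":
        return f[1] in V[w]
    if kind == "not":
        return not holds(w, f[1])
    if kind == "K":   # K_i phi: phi at every world i considers possible
        _, agent, sub = f
        cell = next(c for c in E[agent] if w in c)
        return all(holds(v, sub) for v in cell)
    if kind == "box": # [d] phi: phi at every d-successor (vacuously true
        _, action, sub = f          # when d is not executable, as in PDL)
        return all(holds(v, sub) for v in R.get((w, action), set()))
    raise ValueError(f"unknown operator: {kind}")
```

For example, [switch]p holds at w0, but Betty does not know p at w0, since she also considers w0 itself possible, which is the kind of incomplete-knowledge scenario the formalisation relies on.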
Our formalisation of forward-looking and backward-looking responsibility is based on basic "ingredients": agents' actions, abilities, obligations and knowledge. Earlier works on the formalisation of responsibility often do not deal with all of these ingredients. For instance, Santos and Carmo [36] deal only with obligations and agents' abilities. As we do here, they propose that responsibility be paraphrased as 'obligation to ensure'. Their formalisation uses a logic in which one can write formulae of the form OE i ϕ, standing for 'it is obligatory that i ensures ϕ'. The most interesting feature of this approach is the validity of the scheme E i E j ϕ → E i ϕ, which expresses that if agent i ensures that agent j ensures ϕ, then i ensures ϕ. This is a useful feature for modelling indirect agency that is not present in our framework. However, Santos and Carmo's logic is not appropriate for our problem, for two reasons: it cannot express agents' incomplete knowledge about the situation, and it does not have actions in its object language. The fact that Betty has incomplete knowledge is crucial in the example presented in Section 2.2, as are the actions, since the problem is precisely that she needs to decide which action to execute.
With the logic for agent organisation (LAO) [15], Dignum and Dignum propose to formalise responsibility in terms of agents' abilities. This seems a better alternative than Santos and Carmo's approach, because it avoids holding an agent backward-looking responsible for a failure the agent was unable to avoid. However, as in Santos and Carmo's approach, LAO does not have actions in its object language, which means that, again, we cannot address our example using this logic either.
The formalism proposed by Grossi et al. [20] is based on a combination of dynamic and epistemic logics. It therefore has actions in the object language and also allows one to express incomplete-knowledge scenarios. However, this logic cannot express agents' abilities. As mentioned above, abilities are important in the definition of responsibility, and also in the definition of obligations.
Possible future work includes several questions and improvements. For instance, we have not addressed the decidability, complexity and expressivity of our logic. In particular, a thorough comparison between our logic and other logics of agency, such as coalition logic, ATL and STIT [6], remains to be carried out. Another promising extension of our logic is the addition of temporal operators. When reasoning about responsibilities, one may want to express statements such as 'Betty must turn on the light before 7 p.m.', meaning that Betty may fulfil her task by turning on the light before the specified deadline. Such statements cannot be expressed in our language.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.