Ranking kinematics for revising by contextual information

Probability kinematics is a leading paradigm in probabilistic belief change. It is based on the idea that conditional beliefs should be independent of changes of their antecedents' probabilities. In this paper, we propose a re-interpretation of this paradigm for Spohn's ranking functions which we call Generalized Ranking Kinematics, as a new principle for the iterated revision of ranking functions by sets of conditional beliefs with respect to their specific subcontexts. By taking semantical independencies into account, we can reduce the complexity of the revision task to local contexts. We show that global belief revision can be assembled from revisions on the local contexts via a merging operator. Furthermore, we formalize a variant of the Ramsey Test based on the idea of local contexts, which connects conditional and propositional revision in a straightforward way. We extend the belief change methodology of c-revisions to strategic c-revisions, which will serve as a proof of concept.


Introduction
In multi-agent systems, it is crucial for agents working together to represent and to reason with beliefs coming from different contexts independently. Adding new beliefs to existing ones while maintaining consistency is called belief revision. The AGM theory [1], named after its inventors Alchourrón, Gärdenfors and Makinson, is by far the most influential formal account of belief revision, and it has developed over the years. Two main problems have been addressed in the past. First, AGM specifies revision in dependence of the prior belief set only rather than as a fully binary operator. This issue has been addressed in the seminal paper of Darwiche and Pearl [7], where AGM theory was extended to iterated belief revision, so that not only belief sets of an agent are taken into account but also epistemic states. Epistemic states provide adequate semantical structures on possible worlds, which allow us to perform AGM revision while respecting the connections between revision operators on different prior belief sets. The second issue, which has been addressed in various research papers, is that AGM revision is not local, meaning that beliefs which are entirely irrelevant to the new information may be affected. In terms of syntax, Kern-Isberner and Brewka introduced a powerful tool to incorporate relevance into belief revision called syntax splitting ([17]). In this paper, we deal with local revisions respecting the semantics of the new information. In other words, if in a multi-agent system each agent collects evidence from a different part of the world, then we require our revision operator to revise the information according to the specific subcontexts in which it plays a role. Jeffrey introduced in 1965 [11] a probabilistic revision method called Jeffrey's Rule which is able to deal with information coming from contexts that complement each other. Jeffrey's Rule is a generalization of Bayesian conditionalization.
The key to revising probabilistic beliefs with new evidence from different subcontexts is Probability Kinematics, which is an important issue in probabilistic belief revision in general. Briefly, it says that conditional probabilities should not change if the probabilities of the conditions change. Later on, Shore and Johnson [22] introduced Subset Independence as a crucial property of belief revision in a probabilistic framework. Subset Independence is a generalization of Probability Kinematics using a set of conditionals with antecedents implying exclusive and exhaustive cases. Generally, Subset Independence deals with the following belief revision problem involving epistemic, multiple, and conditional revision in probabilistic frameworks: (CondPS) Let * be a revision operator that takes epistemic states Ψ and sets of conditionals Δ resp. a disjunction of (plausible, probable etc.) propositions S as inputs and returns a revised epistemic state Ψ * (Δ ∪ {S}) as output. Let A 1 , . . . , A n be exhaustive and exclusive formulas, and let Δ = Δ 1 ∪ . . . ∪ Δ n be a set of conditionals with subsets Δ i whose premises imply A i . Furthermore, let S = ⋁ j∈J A j with ∅ ≠ J ⊆ {1, . . . , n} state which of the A i 's should be believed. How should Ψ be revised by Δ ∪ {S} such that both the different contexts A i and the information regarding the belief status of the A i 's are taken into account while respecting the basic idea of kinematics, i.e., that conditional beliefs in Ψ with respect to each A i should be changed only by information that is relevant to A i , respectively?
The acronym (CondPS) refers to the conditional beliefs in combination with Premise Splitting, which we will discuss later. The A i 's represent different (exclusive and exhaustive) cases, and the Δ i 's provide new information referring (only) to the respective cases. Moreover, S expresses a belief about which of the cases are plausible. The concept of Generalized Ranking Kinematics, which we introduced in [21] and will discuss in this paper, states that, first, the plausibility of cases (expressed by S) does not affect the conditional beliefs for each case A i , and second, that for the posterior conditional beliefs given A i only the respective new information Δ i is relevant. We give an example to illustrate (CondPS): Example 1 An agent with the current belief state Ψ is working in the warehouse of a big (Swedish) furniture company and has to oversee various departments. It is her task to coordinate the agents stocking the shelves in the warehouse and to ensure that everything is in stock. The agent receives information about a new delivery. Some of the items belong to the kitchen department (A 1 ) and involve some heavy lifting; therefore she needs agents who are capable of driving lift trucks to store the items. The rest of the delivery belongs to the bathroom department (A 2 ). While planning the coordination of the delivery, the agent needs to revise her beliefs about the location of the agents working for her and also about the storage capacity of the warehouse. Incorporating new conditional beliefs given A 1 resp. A 2 , together with the information that there are more bathroom items to be stored than goods for the kitchen department (making A 2 more plausible than A 1 ), should yield an epistemic state in which the conditional beliefs given A 1 resp. A 2 are influenced only by the corresponding new conditional resp. propositional information.
In [21], Sezgin and Kern-Isberner presented a property for revising epistemic states by conditionals in the framework of ranking functions which they called Generalized Ranking Kinematics (GRK) and which can be seen as an analogue to Subset Independence for qualitative belief revision. (GRK) connects Jeffrey's rule to the axioms of inductive inference introduced by Shore and Johnson and can be seen as an extension of Spohn's work on transferring Jeffrey's rule to the framework of ranking functions. The introduction of a strong and a weak version of (GRK) allows us to distinguish between two assertions of irrelevance, either taking the relevance of the exclusive subcontexts into account or not. Moreover, an algorithm to compute the finest premise splitting is introduced in [21], which ensures that the notion of locality in (GRK) is exploited in the best possible way. In [21], we showed that c-revisions, introduced by Kern-Isberner ([14], [13]), are a revision method that satisfies (GRK). Note that our framework goes far beyond the classical AGM belief revision theory [1] and the approach of iterated revision by Darwiche and Pearl [7] because it deals with revision by sets of conditionals, but it is fully compatible with these seminal frameworks. Building on the results of [21], we will further elaborate on (GRK) and the notion of locality that it introduces.
This article is a revised and largely extended version of the conference paper [21]. In summary, the novel contributions of this article are the following:
- We introduce strategic c-revisions to make the general concept of c-revisions [13] usable in a more elegant and axiomatic way.
- We present our main results from [21] - the proof of concept for Generalized Ranking Kinematics - in the framework of strategic c-revisions.
- We broaden the scope of the Ramsey Test to conditional revisions and explore the relationship between propositional and conditional revision for a special type of conditionals.
- We show that strategic c-revisions satisfy the Ramsey Test for conditional revision.
- We show how to reconstruct a full c-revised ranking function from the conditionalized c-revised sub-ranking functions by making use of non-normalized OCFs (which we call pre-OCFs) and a simple merging operator.
- We present a new revision operator for pre-OCFs which is based on strategic c-revision and called pre-c-revision, and investigate the relation between the revision on local contexts versus the global revision with the complete set of conditionals.
- We investigate pre-c-revisions in the light of postulates for revisions with sets of conditionals, which we derive from the postulates for conditional revision from [13].
The rest of the article is organized as follows: In Section 2 we present relevant formal preliminaries. Section 3 discusses aspects of probabilistic belief revision and related work. Generalized Ranking Kinematics is recalled in Section 4 in the framework of ranking functions, together with results concerning the splitting of premises. In Section 5, strategic c-revisions are introduced along with various properties incorporating the idea of locality into the revision process, and they are presented as a concrete example of a revision method that satisfies Generalized Ranking Kinematics. In Section 6, we compare propositional and conditional revision under the prerequisites of (GRK) and introduce a variant of the Ramsey Test for conditional revision. Before we conclude in Section 8, we reconstruct the global c-revisions from the local contexts in Section 7, completing the full picture of the connection between global and local revisions in the framework of c-revisions.

Formal preliminaries
Let L be a finitely generated propositional language over an alphabet Σ with atoms a, b, c, . . . and with formulas A, B, C, . . .. For conciseness of notation, we will omit the logical and-connector, writing AB instead of A ∧ B, and overlining formulas will indicate negation. The set of all propositional interpretations over Σ is denoted by Ω Σ . As the signature will be fixed throughout the paper, we will usually omit the subscript and simply write Ω. ω |= A means that the propositional formula A ∈ L holds in the possible world ω ∈ Ω; then ω is called a model of A, and the set of all models of A is denoted by Mod(A), as usual. By slight abuse of notation, we will use ω both for the model and the corresponding conjunction of all positive or negated atoms. This will allow us to ease notation a lot. Since ω |= A means the same for both readings of ω, no confusion will arise. L is extended to a conditional language (L |L ) by introducing a conditional operator |: (L |L ) = {(B|A) | A, B ∈ L }. (L |L ) is a flat conditional language; no nesting of conditionals is allowed. Conditionals with tautological premises can represent propositions A, i.e., revising with a proposition A yields the same result as revising with the conditional (A|⊤) (see [13]). In contrast to propositions, conditionals are three-valued logical entities (see [8]): a conditional (B|A) is verified in a world ω if ω |= AB, falsified if ω |= A¬B, and not applicable if ω |= ¬A. Conditionals are usually considered within richer semantic structures such as epistemic states.
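The three-valued evaluation can be made concrete with a small sketch (the function name evaluate and the dict representation of worlds are illustrative choices of ours, not from [8]):

```python
# Three-valued evaluation of a conditional (B|A) in a world w, following
# the de Finetti-style semantics above; names here are illustrative.

def evaluate(b, a, w):
    """Return 'verified', 'falsified', or 'not applicable' for (B|A) in w."""
    if not a(w):
        return "not applicable"   # the premise A does not hold in w
    return "verified" if b(w) else "falsified"

# World where a holds and b does not: (b|a) is falsified
w = {"a": True, "b": False}
print(evaluate(lambda w: w["b"], lambda w: w["a"], w))  # falsified
```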
Besides certain knowledge, epistemic states also allow for the representation of preferences, beliefs, assumptions etc. of an intelligent agent. We differentiate between qualitative and quantitative settings. In a qualitative setting, epistemic states can be represented by preorderings on L , whereas in a quantitative setting we can express degrees of plausibility, probability, possibility etc. We briefly describe a well-known representation of epistemic states, namely ordinal conditional functions, which are semi-quantitative, i.e., they use numbers and arithmetics but in a qualitative way: Ordinal conditional functions (OCFs, also called ranking functions) κ : Ω → N ∪ {∞} with κ −1 (0) ≠ ∅ express degrees of plausibility and were first introduced by Spohn [23]. We have κ(A) := min{κ(ω) | ω |= A}. Hence, due to κ −1 (0) ≠ ∅, at least one of κ(A), κ(¬A) must be 0. An OCF can be conditionalized by a proposition A through κ(ω|A) := κ(ω) − κ(A) for ω |= A. Note that κ(·|A) is an OCF on Mod(A). Instead of writing κ(·|A) we will use the shorter notation κ|A. The conditionalized OCF κ|A is only defined on the set Mod(A) and not for the worlds ω ∈ Mod(¬A). OCFs can be considered as a qualitative counterpart of probability distributions. For any set M, M = M 1 ∪ M 2 means that M is the union of the disjoint sets M 1 and M 2 . A proposition A is believed/accepted by a ranking function κ, i.e., κ |= A, if κ(¬A) > 0, which implies κ(A) = 0 via the properties of ranking functions. Conditionals are assigned a degree of plausibility by setting κ(B|A) = κ(AB) − κ(A), and they are accepted in an epistemic state represented by κ, written as κ |= (B|A), iff κ(AB) < κ(A¬B). Note that the acceptance of a conditional, in contrast to the acceptance of a proposition, does not include any information about the minimal layer of the OCF: the verification of the conditional just has to be more plausible than its falsification.
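The basic OCF operations just introduced can be sketched in a few lines of Python; the class and method names (Ocf, rank, accepts_conditional, conditionalize) are illustrative choices of ours, with worlds represented as tuples of truth values:

```python
# Sketch of ordinal conditional functions (OCFs); worlds are tuples of
# truth values over the atoms (k, c). Names are illustrative.

class Ocf:
    def __init__(self, ranks):
        # normalize so that kappa^-1(0) is nonempty
        m = min(ranks.values())
        self.ranks = {w: r - m for w, r in ranks.items()}

    def rank(self, prop):
        """kappa(A) = min{kappa(w) | w |= A} for a predicate prop on worlds."""
        vals = [r for w, r in self.ranks.items() if prop(w)]
        return min(vals) if vals else float("inf")

    def accepts_conditional(self, b, a):
        """kappa |= (B|A) iff kappa(AB) < kappa(A not-B)."""
        return self.rank(lambda w: a(w) and b(w)) < self.rank(lambda w: a(w) and not b(w))

    def conditionalize(self, a):
        """kappa|A : w -> kappa(w) - kappa(A), defined on models of A only."""
        ka = self.rank(a)
        return Ocf({w: r - ka for w, r in self.ranks.items() if a(w)})

# Ranks for the worlds kc, k(not c), (not k)c, (not k)(not c)
kappa = Ocf({(True, True): 2, (True, False): 1, (False, True): 0, (False, False): 0})
k = lambda w: w[0]
c = lambda w: w[1]
print(kappa.rank(k))                    # kappa(k) = 1
print(kappa.accepts_conditional(c, k))  # kappa(kc)=2 is not < kappa(k not-c)=1: False
```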
OCFs can be seen as a mediating framework between fully quantitative belief representation implemented by probability distributions and qualitative frameworks of belief representations. Since OCFs use ranks in natural numbers, they offer a convenient way of expressing conditional beliefs qualitatively while preserving many of the positive aspects of probability distributions, e.g., they can be conditionalized. Therefore, they are ideal candidates to transfer the idea of probability kinematics to a qualitative framework. Moreover, OCFs implement total preorders by ranks, thus they can cover many qualitative approaches to knowledge representation as well. In particular, all AGM revisions which have to be based on total preorders [12] can be expressed conveniently in terms of OCFs. Since they inherently use a discrete numerical scale with arithmetics, the underlying mechanisms of belief revision and representation methods can be made more transparent.

Generalized ranking kinematics for OCFs
In 1965, Jeffrey introduced a generalization of Bayesian conditionalization [11], later called Jeffrey's rule, which provides a multiple probabilistic revision method for a set of probabilistic facts. Crucial for this probabilistic revision method is a strong assumption called Probability Kinematics, claiming that assessing the probability of a fact should not influence the respective conditional probabilities. In 1980, Shore and Johnson [22] characterized probabilistic revision based on the principle of minimum cross-entropy by four basic axioms, one of them being Subset Independence, which can be seen as a generalization of Probability Kinematics. For a more in-depth discussion of the connection between Probability Kinematics and Subset Independence, please see [21]. Subset Independence concerns situations in which the new information decomposes naturally into information referring to exclusive contexts or cases, respectively, represented by the A i 's, and into information about the probabilities of these cases. For OCFs, we can also make use of conditionalization to split up prior and posterior epistemic states according to these cases. Moreover, it should be possible to take information on the plausibility of cases into account, which is dealt with in the strong version of the following axiom: Definition 1 (Generalized Ranking Kinematics (GRK) for OCFs) Let A 1 , . . . , A n be exhaustive and exclusive formulas. Let κ be an ordinal conditional function and Δ = Δ 1 ∪ . . . ∪ Δ n be a set of conditionals, with subsets Δ i whose premises imply A i , and S = ⋁ j∈J A j with ∅ ≠ J ⊆ {1, . . . , n}.
The revision operator * satisfies strong Generalized Ranking Kinematics (GRK strong ) iff the revised state, conditionalized on each case A i , coincides with the result of revising the conditionalized prior κ|A i by Δ i alone. We can also define weak Generalized Ranking Kinematics (GRK weak ) for OCFs, in which no statement about the plausibility of the A i 's is made. Generalized Ranking Kinematics expresses two strong irrelevance assertions: Firstly, if we condition the revised OCF by a case A i , then only the conditionals talking about this case are relevant. Secondly, the plausibility of a case A i is not relevant for revising the respectively conditionalized OCF. Whereas in the probabilistic case, S is the set which assigns posterior probabilities to the formulas A i , for OCFs we just take into account that some of the A i 's are deemed to be more plausible than others and revise with the most plausible ones. Note that exclusivity is the crucial property of the A i 's because exhaustiveness can always be obtained by taking the negation of the disjunction of all A i 's as an additional case, i.e., a set {A i } 1≤i≤n of exclusive but not exhaustive propositional formulas can always be extended by A n+1 = ¬(A 1 ∨ . . . ∨ A n ) to an exhaustive one. This implies that in the case where we just have Δ = Δ 1 and A 1 = A, i.e., A 2 = ¬A and Δ 2 = ∅, (GRK weak ) claims that revision and conditionalization commute: (κ * Δ)|A = (κ|A) * Δ.
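The two conditions of Definition 1, elided above, can be stated explicitly; the following formulation is our reconstruction from the surrounding discussion and [21], with (2) denoting the strong and (3) the weak condition:

```latex
% Reconstruction of the (GRK) conditions; notation follows the text.
\begin{align}
(\kappa * (\Delta \cup \{S\}))|A_i &= (\kappa|A_i) * \Delta_i
  && \text{for all } 1 \le i \le n \tag{2} \\
(\kappa * \Delta)|A_i &= (\kappa|A_i) * \Delta_i
  && \text{for all } 1 \le i \le n \tag{3}
\end{align}
```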
Note that neither in Jeffrey's rule nor in Subset Independence do the parts of the partitioned new information have to be nonempty; the same holds for (GRK), i.e., the subsets Δ i may be empty. Since the A i 's are exhaustive and, after conditionalizing with A i , A i is believed by κ|A i , the information in S is no longer relevant. Still, it holds that (2) is a more general statement than (3) because it takes into account the plausibility of the different cases A i . However, if * satisfies the property of Tautological Vacuity (TV), then (GRK strong ) clearly implies (GRK weak ) by setting S = ⋁ j∈J A j with J = {1, . . . , n}.
In the following, we will refer to the general idea of kinematics defined by (GRK strong ) and (GRK weak ) as (GRK) to ease notation. We continue Example 1 to illustrate (GRK): Example 2 (Continuation of Example 1) We extend the current knowledge of the agent overseeing the delivery for the warehouse by the fact that some of the items are part of the standard and some of the new collection. Items from the kitchen department's standard collection involve heavy lifting, items from the new collection do not. All items for the bathroom department are from the new collection. Let Σ = {k, l, c} be the signature describing the agent's current belief state Ψ, with the meanings given below. The coordination of the new delivery depends on the department and the type of collection the item belongs to. The information can be summed up using the following set of conditionals Δ = {(l|kc), (¬l|k¬c), (¬c|¬k)}. The premise of each conditional in Δ is one of the exclusive and exhaustive formulas A 1 = kc, A 2 = k¬c and A 3 = ¬k. In Example 1, it was mentioned that there are more bathroom than kitchen items, making A 3 more plausible than A 1 ∨ A 2 , thus S = A 3 . As we see, (GRK strong ) is applicable in this example, and the agent should revise her beliefs with respect to the relevant context the new information is coming from.

k: the item belongs to the kitchen department; ¬k: the item belongs to the bathroom department;
l: the agents are capable of driving lift trucks; ¬l: the agents are not capable of driving lift trucks;
c: the item is part of the standard collection; ¬c: the item is part of the new collection.
Now, we will investigate the preconditions of Generalized Ranking Kinematics in more detail. We shall first explain the notion of premise splitting, which is the main precondition for Subset Independence: the premises must imply formulas A i which are exclusive and exhaustive. This means the agent is able to distinguish between different cases; during the revision, the agent learns something about the consequences resulting from these cases. The definition of premise splitting is as follows: A premise splitting P Δ of a set of conditionals Δ with premises X 1 , . . . , X m is a set P Δ = {A 1 , . . . , A n } of exclusive and exhaustive formulas A 1 , . . . , A n such that every premise X k (k = 1, . . . , m) implies exactly one A i .
We continue Example 2 to explain the notion of premise splitting:

Example 3
The following set of conditionals models the agent's knowledge about the new delivery: Δ = {(l|kc), (¬l|k¬c), (¬c|¬k)}. Every premise in Δ implies one of the exclusive and exhaustive formulas A 1 = kc, A 2 = k¬c and A 3 = ¬k, which leads to the premise splitting P Δ = {kc, k¬c, ¬k}. This illustrates how sets of conditionals that satisfy the conditions of (GRK) model exclusive and exhaustive cases, meaning that they describe the agent's actions for every possible scenario.
A premise splitting induces a partitioning Δ = Δ 1 ∪ . . . ∪ Δ n of the set of conditionals, where Δ i = {(B|X) ∈ Δ | X implies A i }. We can find a premise splitting for each set of conditionals Δ by simply using the trivial premise splitting P Δ = {⊤}. But in order to maximise the benefits of (GRK), the premise splitting should be as fine as possible, meaning that the subsets Δ i should be as small and as numerous as possible.

Definition 3 (Refinement and specificity) Let Δ be a set of conditionals. For two premise splittings P 1 Δ = {A 1 , . . . , A n } and P 2 Δ = {B 1 , . . . , B l } of Δ, we say that P 1 Δ refines P 2 Δ , written P 1 Δ ⪯ P 2 Δ , iff every A i implies some B j .
This means that the A i 's are more specific than the B j 's. Two premise splittings P 1 Δ and P 2 Δ are equivalent iff P 1 Δ ⪯ P 2 Δ and P 2 Δ ⪯ P 1 Δ . We continue Example 3 to illustrate the refinement relation defined above: besides P Δ = {kc, k¬c, ¬k}, we could also use B 1 = k and B 2 = ¬k as exhaustive and exclusive formulas to define a premise splitting P' Δ = {k, ¬k}. Then P Δ ⪯ P' Δ , since kc and k¬c imply k, and ¬k implies ¬k.
A premise splitting which refines every other premise splitting for a set of conditionals Δ is called its finest premise splitting. The following theorem shows that for every Δ the finest premise splitting is unique up to semantic equivalences and permutations. To prove the theorem, we need to define a relation on the set of premises of Δ, which we denote by P rem(Δ) = {X ∈ L | (B|X) ∈ Δ}. We start by defining X ∼ Y iff XY ≢ ⊥ for X, Y ∈ P rem(Δ). ∼ is reflexive and symmetric, so the transitive closure ∼ * of ∼ is an equivalence relation on the elements of P rem(Δ).

Theorem 1 For a set of conditionals Δ there is a (unique) finest premise splitting (up to semantic equivalences and permutations).
Proof Let P rem(Δ) = {X ∈ L | (B|X) ∈ Δ}, and let ∼ * be the equivalence relation defined above.
For each equivalence class of P rem(Δ) under ∼ * , let A i be the disjunction of all premises in that class. Then P Δ = {A 1 , . . . , A n } (extended, if necessary, by ¬(A 1 ∨ . . . ∨ A n ) to ensure exhaustiveness) defines a premise splitting because for i ≠ j it holds that A i A j ≡ ⊥, and P Δ is unique up to permutation and semantic equivalences. But we still need to show that P Δ refines every other premise splitting: since two overlapping premises must imply the same block B j of any premise splitting (the B j 's are exclusive), all premises of one equivalence class imply the same B j by transitivity, and hence A i implies B j . To compute the finest premise splitting for an arbitrary finite set of conditionals, we present Algorithm 1.

Theorem 2 Algorithm 1 terminates and is correct in the sense that it computes the unique finest premise splitting for a finite set of conditionals Δ.
This theorem follows immediately from the constructive proof of Theorem 1 by observing that the transitive closure of ∼ is obtained by considering disjunctions in line 7 and adding these to the set of premises P rem in line 8. For premises X 1 , X 2 , X 3 with X 1 X 2 ≢ ⊥ and X 2 X 3 ≢ ⊥ but X 1 X 3 ≡ ⊥, we still have X 1 ∼ * X 3 , and the disjunction built in line 7 ensures that all three end up in the same block. The choice of an 'anchor' premise in line 4 of the algorithm does not affect the uniqueness of the finest premise splitting, since we take the disjunction of the whole equivalence class of these premises as part of the premise splitting; therefore, differences between splittings can only be syntactical. Moreover, the possibility of different (non-finest) splittings is a desirable feature when it comes to the determination of the induced subsets of Δ. Here, a balanced number of conditionals in each subset Δ i may be preferable in order to balance the revision task for each subpart of the belief state.
The running time of Algorithm 1 is determined by the SAT tests in lines 5 and 6, and therefore the problem is NP-complete. In the worst case, the equivalence classes determined in the while-loop are singletons and we obtain O(|Ω| 2 ), where |Ω| = 2^|Σ| represents the runtime of the SAT test in the worst case.
The following example illustrates the algorithm:

Algorithm 1 Finest premise splitting.
Require: Finite set of conditionals Δ
Ensure: Unique finest premise splitting P Δ of Δ
1: P rem ← P rem(Δ)
2: P Δ ← ∅
3: while P rem ≠ ∅ do
4:    Choose X ∈ P rem
5:    if there are Y ∈ P rem, Y ≠ X, with XY ≢ ⊥ then
6:       D ← {Y ∈ P rem | Y ≠ X, XY ≢ ⊥}
7:       build A ← X ∨ ⋁ Y∈D Y
8:       P rem ← (P rem \ ({X} ∪ D)) ∪ {A}
9:    else
10:      A ← X
11:      P Δ ← P Δ ∪ {A}
12:      P rem ← P rem \ {X}
13:   end if
14: end while
15: if ⋁ A∈P Δ A ≢ ⊤ then
16:   P Δ ← P Δ ∪ {¬ ⋁ A∈P Δ A}
17: end if
18: return P Δ

Let Σ = {a, b, c, d, e} be the signature for a set of conditionals Δ = {(c|ab), (d|abc), (e|ab), (b|a), (d|ae)}. We now want to compute the unique finest premise splitting using Algorithm 1: We initialize P rem = {ab, abc, ab, a, ae} and P Δ = ∅. In the first iteration, X = ab with A 1 = ab and therefore, P Δ = {ab} and P rem = {abc, ab, a, ae}. For the second iteration, X = abc with [X] = {abc, ab}, hence A 2 = ab and P rem = {ab, a, ae}. In the next iteration, X = ab and XY ≡ ⊥ for all other Y ∈ P rem, therefore P Δ = {ab, ab}. Then, X = a with [X] = {a, ae} and A 3 = a and therefore, P rem = {a}. In the last iteration, X = a = A 3 , and P Δ = {ab, ab, a}. We have ab ∨ ab ∨ a ≡ ⊤, thus the algorithm terminates and returns P Δ = {ab, ab, a}, which determines a partitioning of Δ. For the (GRK) property, we split Δ accordingly into the subsets induced by this premise splitting.
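The core of Algorithm 1 can be sketched extensionally, with formulas represented by their sets of models; the helper names (models, finest_premise_splitting) are ours, and the overlap test XY ≢ ⊥ becomes a nonempty intersection of model sets:

```python
from itertools import product

# Extensional sketch of Algorithm 1: formulas are model sets over a small
# signature; helper names are illustrative, not from the paper.

def worlds(atoms):
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def models(pred, atoms):
    return frozenset(i for i, w in enumerate(worlds(atoms)) if pred(w))

def finest_premise_splitting(premises, atoms):
    """Merge overlapping premises (transitive closure of ~) into disjoint
    blocks of worlds; an extra block ensures exhaustiveness if needed."""
    prem = [models(p, atoms) for p in premises]
    splitting = []
    while prem:
        x = prem.pop()
        overlapping = [y for y in prem if x & y]
        if overlapping:
            for y in overlapping:
                prem.remove(y)
            prem.append(x.union(*overlapping))  # re-add the disjunction
        else:
            splitting.append(x)
    rest = frozenset(range(2 ** len(atoms))) - frozenset().union(*splitting)
    if rest:
        splitting.append(rest)
    return splitting

# Premises kc, k(not c), (not k) over the atoms k, l, c (cf. Example 3)
atoms = ["k", "l", "c"]
prems = [lambda w: w["k"] and w["c"],
         lambda w: w["k"] and not w["c"],
         lambda w: not w["k"]]
parts = finest_premise_splitting(prems, atoms)
print(len(parts))  # the premises are already exclusive and exhaustive: 3
```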

Strategic c-revisions
The iterated revision problem (CondPS) described in the introduction can be solved by c-revisions [13], which provide a highly general framework for revising ranking functions by sets of conditionals while respecting the so-called principle of conditional preservation. Each c-revision is an iterated revision in the sense of Darwiche and Pearl [7] because the DP-postulates are implied by the principle of conditional preservation [14,15]. Note that in our framework, plausible propositions A are identified with conditionals (A|⊤) with tautological antecedents (see Section 2), so Δ may also contain plausible facts. This means that c-revisions can fully deal with (GRK) revision problems, as defined in Definition 1. For our purposes it will be sufficient to use a basic version of c-revisions.
Definition 4 (C-revisions for OCFs) Let κ be an OCF and Δ = {(B 1 |X 1 ), . . . , (B m |X m )} a set of conditionals. Then a c-revision of κ by Δ is an OCF κ * constructed from nonnegative integers η i assigned to each (B i |X i ) and an integer κ 0 such that κ * accepts Δ and is given by:
κ * (ω) = κ 0 + κ(ω) + Σ { η i | 1 ≤ i ≤ m, ω |= X i ¬B i }. (4)
The η i can be considered as impact factors of the single conditionals (B i |X i ) for falsifying the conditionals in Δ, and they have to be chosen so as to ensure success. The possible values for the η i can be specified by the constraint satisfaction problem CR(κ, Δ) consisting of the constraints
η i > min { κ(ω) + Σ j≠i, ω|=X j ¬B j η j | ω |= X i B i } − min { κ(ω) + Σ j≠i, ω|=X j ¬B j η j | ω |= X i ¬B i } for 1 ≤ i ≤ m, (5)
where the η i are constraint variables taking values in N.
Definition 5 A solution of CR(κ, Δ) is a vector #» η = (η 1 , . . . , η m ) of natural numbers. For a constraint satisfaction problem CSP, the set of solutions is denoted by Sol(CSP). Thus, with Sol(CR(κ, Δ)) we denote the set of all solutions of CR(κ, Δ). For any #» η ∈ N m , the function κ * as defined by (4) with impact vector #» η is denoted by κ * #» η .

Proposition 1 (Soundness and completeness of CR(κ, Δ)) Let κ be an OCF and Δ a (consistent) set of conditionals. Then κ * #» η is a c-revision of κ by Δ iff #» η ∈ Sol(CR(κ, Δ)).
The proof of Proposition 1 is a direct consequence of the propositions presented in [14]. The constraints given by (5) ensure that a c-revision κ * of κ by Δ as given by (4) accepts Δ, and κ 0 given by (6) is a normalization factor ensuring that κ * (ω) = 0 for at least one world ω. Since (4) and (5) provide a general schema for revision operators, many c-revisions are possible. Nevertheless, it might be useful to impose further constraints on the parameters η i . For instance, one option is to take minimal η i satisfying (5), ensuring that the resulting OCF ranks worlds as plausibly as possible. To be able to impose further restrictions on the impacts η i determining a c-revision, we will employ a selection strategy for choosing impact vectors.
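A minimal computational sketch of a c-revision with small impacts, assuming the standard constraint scheme (4)/(5); the fixed-point search and the function names are ours, and the search is only meant for small examples (in general, the constraints form a CSP):

```python
from itertools import product

# Sketch of a basic c-revision: kappa*(w) = kappa0 + kappa(w) + sum of
# impacts eta_i over conditionals (B_i|X_i) falsified in w.

def c_revise(kappa, conditionals):
    """kappa: dict world -> rank; conditionals: list of (B, X) predicate pairs.
    Returns a revised ranking accepting all conditionals."""
    eta = [0] * len(conditionals)

    def score(w, i):
        # rank of w plus impacts of all *other* conditionals falsified in w
        return kappa[w] + sum(eta[j] for j, (bj, xj) in enumerate(conditionals)
                              if j != i and xj(w) and not bj(w))

    changed = True
    while changed:                       # iterate until all constraints hold
        changed = False
        for i, (b, x) in enumerate(conditionals):
            ver = min(score(w, i) for w in kappa if x(w) and b(w))
            fal = min(score(w, i) for w in kappa if x(w) and not b(w))
            need = max(0, ver - fal + 1)  # eta_i > ver - fal ensures success
            if eta[i] < need:
                eta[i], changed = need, True

    raw = {w: kappa[w] + sum(eta[i] for i, (b, x) in enumerate(conditionals)
                             if x(w) and not b(w)) for w in kappa}
    k0 = min(raw.values())               # normalization: some world keeps rank 0
    return {w: r - k0 for w, r in raw.items()}

# Uniform prior over worlds (a, b); revise by the single conditional (b|a)
kappa = {w: 0 for w in product([True, False], repeat=2)}
rev = c_revise(kappa, [(lambda w: w[1], lambda w: w[0])])
print(rev[(True, False)])  # the falsifying world a(not b) gets rank 1
```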

Definition 6 (Selection strategy σ, strategic c-revision) A selection strategy (for c-revisions) is a function
σ : (κ, Δ) → #» η assigning to each pair of an OCF κ and a (consistent) set of conditionals Δ an impact vector #» η ∈ Sol(CR(κ, Δ)); the induced revision operator * σ is called a strategic c-revision. Just as c-revisions generalize c-representations [13], selection strategies for c-revisions generalize selection strategies for c-representations, which were introduced in [16], by also taking the prior κ into account. Note that the c-revision operator * σ determined by a selection strategy σ selects a single c-revision for each κ and each (consistent) Δ. By identifying (plausible) propositional statements A with conditionals having a tautological antecedent, i.e., A ≡ (A|⊤), this concept of selection-strategy-based c-revision operator * σ can also be applied in a straightforward way to propositional iterated revision and multiple iterated revision, where κ is revised by A resp. a set {A 1 , . . . , A m } of propositions (cf. [18]).
In order to focus on the core aspects of our approach and to avoid easy, but possibly lengthy and uninformative, case distinctions, we make the following assumptions for a set of conditionals Δ = {(B 1 |X 1 ), . . . , (B m |X m )} used for c-revisions. For this, we say that two conditionals (B|X) and (B'|X') are conditionally equivalent, denoted by (B|X) ≡ (B'|X'), if they have the same verification and the same falsification behaviour, i.e., if XB ≡ X'B' and X¬B ≡ X'¬B' [8,13]. Unless stated explicitly otherwise, we assume that Δ does not contain conditionally equivalent conditionals (i.e., (B i |X i ) ≢ (B j |X j ) for all i ≠ j), and that Δ is consistent (which can easily be checked by the well-known tolerance test [9]). Furthermore, when dealing with two sets Δ, Δ' of conditionals and enumerating their elements, we will tacitly assume that conditionally equivalent conditionals are given first and in the same order in Δ and in Δ'; thus, if Δ and Δ' share k conditionally equivalent conditionals, these appear as the first k elements of both enumerations. All results presented in this article can easily be extended to the case where any of these assumptions is dropped by properly taking the resulting special cases into account.
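The notion of conditional equivalence can be checked extensionally over an explicit set of worlds; the helper cond_equivalent below is an illustrative sketch of ours:

```python
from itertools import product

# Extensional check of conditional equivalence: (B|X) and (B'|X') are
# equivalent iff they have the same verifying and falsifying worlds.

def cond_equivalent(c1, c2, worlds):
    (b1, x1), (b2, x2) = c1, c2
    ver1 = {w for w in worlds if x1(w) and b1(w)}
    fal1 = {w for w in worlds if x1(w) and not b1(w)}
    ver2 = {w for w in worlds if x2(w) and b2(w)}
    fal2 = {w for w in worlds if x2(w) and not b2(w)}
    return ver1 == ver2 and fal1 == fal2

worlds = list(product([True, False], repeat=2))  # worlds over (a, b)
a = lambda w: w[0]
b = lambda w: w[1]
# (ab|a) behaves exactly like (b|a): same verification and falsification
print(cond_equivalent((lambda w: a(w) and b(w), a), (b, a), worlds))  # True
```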
In [3], the notion of elementwise equivalence for sets of conditionals Δ, Δ' is introduced, stating essentially that for every conditional in Δ there is a conditionally equivalent conditional in Δ', and vice versa. Using our general assumptions that neither Δ nor Δ' contains conditionally equivalent conditionals and that corresponding conditionals are enumerated in the same order in Δ and in Δ', we slightly modify elementwise equivalence in order to ensure a one-to-one correspondence between the conditionals in Δ and Δ'.
In the following, we specify conditions for selection strategies which aim at ensuring basic properties that we expect from belief revision implemented by c-revisions. First, we present a postulate requiring selection strategies to depend only on the solution space of the respective constraint satisfaction problem: (IP-SOL σ ) A selection strategy σ is impact preserving with respect to the solution space if for any two constraint satisfaction problems CR(κ, Δ), CR(κ', Δ') with Sol(CR(κ, Δ)) = Sol(CR(κ', Δ')), we have σ (κ, Δ) = σ (κ', Δ').
Note that if CR(κ, Δ) = CR(κ', Δ'), then a selection strategy fulfilling (IP-SOL σ ) will also trivially satisfy σ (κ, Δ) = σ (κ', Δ'). In particular, this also implies that the selection strategy σ and the resulting strategic c-revision * σ are syntax independent. The following observation is a direct consequence of syntax independence.
If #» η is an impact vector with impacts corresponding to the conditionals in Δ, then for Δ' ⊆ Δ, the subvector of #» η containing only the impacts related to the conditionals in Δ' is called the projection of #» η to Δ' and is denoted by #» η Δ' . Hence, if σ is a selection strategy, σ (κ, Δ) Δ' is the projection of σ (κ, Δ) to Δ'.
Using projections of constraint satisfaction problems for c-revisions, we can generalize the idea of preserving impacts, as expressed by (IP-SOL σ ) and (SI σ ), to equivalent subproblems. This is specified in the next axiom, which has far-reaching consequences.
Because a propositional tautology can be modelled by the self-fulfilling conditional (⊤|⊤), a direct consequence of Proposition 4 is that (IP-ESP σ ) ensures tautological vacuity.
Via a selection strategy σ , we choose σ (κ, Δ ∪ {S}) = (1, 1, 2, …). Characterizing c-revisions as solutions of a constraint satisfaction problem, as done here, has several advantages. Besides being the basis for the concept of selection strategies, the CSP characterization also paves the road to implementations exploiting the capabilities of constraint solvers. The set of c-representations can likewise be characterized as the solutions of a CSP; this observation also holds for skeptical c-inference, which takes all c-representations of a set of conditionals into account and was introduced in [2] and further elaborated in [4]. Solving these CSPs has been achieved by a constraint logic programming approach [5], employing the constraint solver provided by SICStus Prolog [6]. An extension of these implementations to Hansson's descriptor revision [10], instantiated to a conditional logic, is presented in [20], underlining the feasibility of implementing c-revisions by using a solver for CSPs, because revisions can be simulated straightforwardly within the framework of descriptor revision. Moreover, having selection strategies at hand will be fruitful in such an implementation. For instance, for a c-revision involving a syntax splitting, an impact preserving selection strategy breaks down larger CSPs into smaller local ones and reuses previous computation results.
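To make the CSP view concrete, the following minimal sketch brute-forces the solution space of such a constraint system for a toy signature. It is our own illustration, not the paper's implementation; the encodings of worlds and conditionals and the acceptance test are assumptions:

```python
from itertools import product

# Worlds over the signature {a, b}: each world is a dict of truth values.
WORLDS = [dict(zip("ab", bits)) for bits in product([True, False], repeat=2)]

def c_revise(kappa, deltas, etas, kappa0=0):
    """Schematic c-revision: add the impact eta_i whenever a world
    falsifies conditional i = (B_i | A_i); kappa0 renormalizes."""
    def rank(w):
        return kappa0 + kappa(w) + sum(
            e for (prem, conc), e in zip(deltas, etas)
            if prem(w) and not conc(w))
    return rank

def accepts(rank, prem, conc):
    """A ranking accepts (B|A) iff the best A∧B-world is strictly more
    plausible (lower rank) than the best A∧¬B-world."""
    vb = min((rank(w) for w in WORLDS if prem(w) and conc(w)), default=None)
    fb = min((rank(w) for w in WORLDS if prem(w) and not conc(w)), default=None)
    return fb is None or (vb is not None and vb < fb)

def solve_cr(kappa, deltas, bound=4):
    """Brute-force the solution space Sol(CR(kappa, Delta)): all impact
    vectors (entries up to `bound`) whose c-revision accepts every conditional."""
    sols = []
    for etas in product(range(bound + 1), repeat=len(deltas)):
        rank = c_revise(kappa, deltas, etas)
        if all(accepts(rank, p, c) for p, c in deltas):
            sols.append(etas)
    return sols

# Uniform prior, revise by Delta = {(b|a)}: any positive impact is a solution.
prior = lambda w: 0
delta = [(lambda w: w["a"], lambda w: w["b"])]
print(solve_cr(prior, delta, bound=2))  # impact vectors accepting (b|a)
```

A constraint solver replaces the brute-force loop in a real implementation; a selection strategy σ then picks one element of the computed solution space.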

GRK for strategic c-revisions
In a nutshell, Generalized Ranking Kinematics means that if the new information the agents receive can be split into different cases, then it should be possible to revise with the corresponding subsets independently, each on the respectively conditionalized prior epistemic state. In a purely quantitative framework, the principle of maximum entropy is a revision method which satisfies this property for probability distributions. In this section, we will show that (GRK) is satisfied by strategic c-revisions using (IP-ESP σ ).

Theorem 3 Let κ be a ranking function, let Δ = Δ 1 ∪ . . . ∪ Δ n be a set of conditionals with a premise splitting P = {A 1 , . . . , A n }, and let S = ⋁ j∈J A j for some J ⊆ {1, . . . , n}. If σ is a selection strategy that satisfies (IP-ESP σ ), then * σ is a strategic c-revision that satisfies (GRK strong ) and (GRK weak ).
Proof We first investigate the constraint satisfaction problems and then the general definition of the c-revision as in Definition 4. Every ω ∈ Ω satisfies ω |= A i for exactly one i. If ω |= A i , then all conditionals from Δ k (k ≠ i) are not applicable and hence irrelevant. Therefore, it holds for ω |= A i that CR(κ, Δ ∪ {S}) is defined by the following set of constraints for η i,j , i ∈ {1, . . . , n}, j ∈ {1, . . . , n i }, and η S , where η S occurs only if i ∈ J and either is cancelled out or does not appear at all. For η S , it holds that: Conditioning on A i yields for ω |= A i : Now we turn to the case where we first conditionalize κ and then c-revise the conditionalized OCF. Let ω |= A i ; then CR( κ|A i , Δ i ) is given by the set of the following inequalities for j ∈ {1, . . . , n i }: In order to separate the two revisions from each other prima facie, we used μ's for the impact factors of κ|A i * σ Δ i instead of the η's which we used for κ * σ (Δ ∪ {S}).
In the proof of Theorem 3, (IP-ESP σ ) guarantees that the same impact factors are selected locally and globally. But in order to show that κ * σ (Δ ∪ {S})|A i (ω) = κ|A i * σ Δ i (ω) for i = 1, . . . , n and ω ∈ Mod(A i ), equation (13) was crucial. Thus, we need to take the normalization constant and the conditionalization into consideration to achieve the same results for local and global revision. In Section 7, we will use this interaction between normalization and conditionalization to define a global revision via the local ones by simply omitting the normalization step.
Furthermore, S represents new evidence affecting the epistemic state of an agent. For Δ = ∅, (GRK strong ) implies that we strengthen the beliefs of the cases A j with j ∈ J without changing the respective conditional beliefs.
We give an example of (GRK weak ) for c-revisions; the prior OCF κ is displayed in Table 1, along with the schematic c-revised κ * Δ. From Theorem 3, it follows that CR(κ, Δ) Δ i = CR( κ|A i , Δ i ). Thus, applying (IP-ESP σ ), we can choose the same impact factors locally and globally. Note that the selection strategy σ chooses Pareto-minimal impact factors. If we compare the conditionalized strategic c-revision with the strategic c-revisions from Table 1, the results coincide. Note that we chose a rather technical example for (GRK weak ) in order to illustrate the interaction of impact values and the benefits of local revision. We kept the signature as small as possible in order to be able to display all relevant information in one table, focusing on the complexity of the (GRK) principle, not on the complexity of the example. In the following sections, we will focus on (GRK weak ) resp. choose S = ⊤, since the proposition S does not play a crucial role for the following results. As mentioned before, we use the general term (GRK) to refer to the idea of kinematics instead of (GRK strong ) resp. (GRK weak ).

In a very broad sense, ranking kinematics rules the interaction between conditionalization and revision. In this section, we focus on the special case where all conditionals in Δ have the same premise A. As mentioned in Section 3, conditionalization and revision commute in this case: κ * Δ|A = κ|A * Δ. Since κ|A already conditions on A, the premise A in the conditionals in Δ seems to be superfluous. From this, a further question arises: Can revision by conditionals be implemented by multiple propositional revision? This will be answered positively for strategic c-revisions in this section. A conditional revision approach can be used for propositional revision via representing propositions as conditionals with tautological premises. The converse shift, from revision with propositions to revision with conditionals, is harder to make, because conditionals, unlike propositions, are three-valued logical entities (see (1)).
A proposition can be true or not in a world ω ∈ Ω, whereas a conditional can be verified, falsified, or not applicable. This is also reflected in the different success conditions for revision with propositions resp. conditionals. A well-known connection between propositional revision and the acceptance of conditionals is displayed in the Ramsey Test (RT ocf ) for OCFs. In this section, we want to elaborate further on this connection and extend the Ramsey Test to revisions with conditionals using conditionalization; we will call this extension the Ramsey Test for conditional revision (CRT ocf ) for OCFs. This will help us to understand better the connection between propositional and conditional revision. The statement here is that if we revise an epistemic state which is conditionalized by A with a set of conditionals with antecedents A, then conditional revision can be reduced to (multiple) propositional revision. We will prove this reduction for strategic c-revisions.
As before, we use the framework of ranking functions, which is the general epistemic framework of this paper and which allows us, as before for (GRK), to use strategic c-revisions as a proof of concept. But clearly, (CRT ocf ) can be defined for general epistemic states. The key to the transition from propositional revision to conditional revision is conditionalization. Note that for (CRT ocf ), it is crucial that the premises of the conditionals are all the same, namely a proposition A. By conditionalizing the prior κ with this proposition, we can reduce the conditional revision to a propositional one. We benefit from the reduction of worlds from Ω to Mod(A) ⊆ Ω caused by conditionalization. Conditionalizing a ranking function with the premise of a conditional ( B j |A ) has the nice side effect that we eliminate the neutral case in the evaluation of conditionals. For a conditionalized ranking function κ|A , the conditional ( B j |A ) has the same response behaviour on worlds ω ∈ Mod(A) as the proposition B j , i.e. the conditional is either verified or falsified by worlds in Mod(A).
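This effect can be illustrated with a small sketch (our own encoding of worlds, OCFs, and conditionals; the concrete ranks are arbitrary): conditionalizing with the shared premise A removes the "not applicable" case, so (B|A) behaves like the plain proposition B on Mod(A).

```python
from itertools import product

# Worlds over the atoms {a, b}; an OCF maps each world to a rank.
worlds = [dict(zip("ab", bits)) for bits in product([True, False], repeat=2)]

def key(w):  # hashable key for a world
    return (w["a"], w["b"])

# A prior OCF kappa (the ranks are illustrative assumptions).
kappa = {key(w): r for w, r in zip(worlds, [1, 2, 0, 3])}

def rank_of(kappa, sat):
    """kappa(A) = minimal rank over the models of A."""
    return min(kappa[key(w)] for w in worlds if key(w) in kappa and sat(w))

def conditionalize(kappa, sat_A):
    """Spohn conditionalization: kappa|A(w) = kappa(w) - kappa(A),
    defined only on Mod(A)."""
    kA = rank_of(kappa, sat_A)
    return {key(w): kappa[key(w)] - kA for w in worlds
            if key(w) in kappa and sat_A(w)}

def eval_conditional(w, prem, conc):
    """Three-valued evaluation of (B|A) in a world."""
    if not prem(w):
        return "not applicable"
    return "verified" if conc(w) else "falsified"

A = lambda w: w["a"]
B = lambda w: w["b"]
k_A = conditionalize(kappa, A)

# On Mod(A) the neutral case disappears: (B|A) behaves like plain B.
for w in worlds:
    if key(w) in k_A:
        assert eval_conditional(w, A, B) == ("verified" if B(w) else "falsified")
print(k_A)  # ranks shifted so the best A-world gets rank 0
```

The loop at the end checks exactly the observation above: on Mod(A), the evaluation of (B|A) collapses to the two truth values of B.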
Note that the set of consequents B inherits consistency from the set of conditionals Δ with premise A, so both revisions in (CRT ocf ) are well defined if Δ is consistent. This is stated in the following lemma: Proof If Δ is consistent, then at least one ( B i |A ) ∈ Δ is tolerated by Δ, and thus there exists a world ω |= A B i that does not falsify any conditional in Δ, i.e., ω |= A B 1 . . . B n . Thus, {B 1 , . . . , B n } = B is consistent.
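The lemma's argument can be checked mechanically on a small signature. In this brute-force sketch (our own encoding; the premise and consequents are arbitrary choices), a world tolerating a conditional with shared premise A must satisfy A and every consequent, and is therefore a model of B:

```python
from itertools import product

# Worlds over three atoms; the conditionals (B_j|A) share the premise A = a.
worlds = [dict(zip("abc", bits)) for bits in product([True, False], repeat=3)]

A = lambda w: w["a"]
consequents = [lambda w: w["b"], lambda w: w["c"]]  # B_1 = b, B_2 = c

# A world witnessing tolerance verifies some conditional and falsifies none;
# with a shared premise A this forces a world satisfying A and every B_j.
tolerating = [w for w in worlds
              if A(w) and all(B(w) for B in consequents)]

# Any such world is in particular a model of {B_1, ..., B_n} = B,
# so B is consistent whenever Delta is.
print(len(tolerating) > 0)
```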

Theorem 4 Let κ be a ranking function. Let Δ = {( B j |A )} 1≤j≤n be a consistent set of conditionals with premise A and let B = {B j } 1≤j≤n be the set of consequents of the conditionals in Δ. If σ satisfies (IP-SOL σ ), then * σ is a strategic c-revision that satisfies (CRT ocf ).
Proof The constraint satisfaction problem CR( κ|A , B) is given by the set of constraints (15) for all j = 1, . . . , n. (15) holds since we revise the conditionalized κ|A , which is defined only on worlds in Mod(A); so, in this case, for all worlds ω |= B j it holds that ω |= AB j . The set of the constraints in (15) for all j = 1, . . . , n is the same as CR( κ|A , Δ). Thus, CR( κ|A , B) = CR( κ|A , Δ) and therefore σ ( κ|A , B) = σ ( κ|A , Δ) by (IP-SOL σ ).
Note that we crucially used CR( κ|A , B) = CR( κ|A , Δ) and (IP-SOL σ ) in the proof. The constraint system is a direct consequence of the success condition defining the revision. For the conditionalized OCF κ|A , the success conditions for revising with B and with Δ coincide, and therefore we get the same constraint system resp. impact vector for the propositional resp. conditional revision.
Note that σ chose Pareto-minimal parameters. The results of the revisions are displayed in Table 2. As we can see, κ|A * σ B(ω) = κ|A * σ Δ(ω) holds for all ω ∈ Mod(A).

From local contexts to global c-revisions
In this section, we want to investigate the relation between the c-revision of a ranking function with a full set of conditionals and the local revisions on the different contexts, while respecting the basic idea of kinematics. (Table 2 shows the conditionalized OCF κ|A and the c-revised OCFs κ|A * σ B resp. κ|A * σ Δ; we chose Pareto-minimal impact factors via a selection strategy σ and the normalization constant κ 0 = −2 for both revisions.) The concepts introduced before are powerful tools to reduce the complexity of the revision process. Throughout this section, we will assume that S = ⊤ when we use (GRK). We used Spohn's conditionalization to focus on certain parts of Ω. Due to the concept of premise splitting, we were able to characterize the revision for each part of Ω induced by the formulas A i individually. The question that should come to one's mind now is: how can we bring together the revised information from the different focuses? The major problem here is that the conditionalized κ|A i on Mod(A i ) do not determine the global κ on Ω uniquely. In this section, we show that under c-revision, we can solve this problem by introducing a new concept of ranking functions which prevents the loss of information induced by conditionalization. We transfer information from locally revised contexts to the global context via a merging operator. This will allow us to construct the global c-revision using only c-revisions on local contexts, without increasing the complexity of the problem. As mentioned after the proof of Theorem 3, the specific form of c-revisions supports this ideally by separating the impacts of conditionals under revision from pure normalization. This is due to the fact that the notion of c-revisions originates from the principle of Maximum Entropy resp. Minimum Cross Entropy, where the conditional revision of a probability distribution is also separated from normalization.
At the core of these fundamental principles of quantitative revision (revision of probability distributions) is the notion of Subset Independence, which we discussed at the beginning of the paper. It guarantees that for the revision of probability distributions in a special context, determined by conditionalization, only the conditionals talking about this case are relevant, similarly to what (GRK) does for the revision of ranking functions. Furthermore, (GRK) employs the idea of exploiting conditional independencies known from Bayesian networks. For Bayesian networks, we make use of conditional independence assertions to reduce the dimensionality of problems via the combination of local probability distributions. For (GRK), the disjoint cases implement conditional independencies, allowing us to revise locally. The results of this section show that these local revisions can be used to determine the globally revised epistemic state. Independencies induced by disjoint cases are also crucial for Jeffrey's rule, as Pearl showed in [19]. In the next section, we will show that, similar to information processing in Bayesian networks, we can skip normalization on local contexts to immediately obtain the globally revised OCF. (GRK) ensures that the local impact factors can (and should) be used for the global revision.
In the next subsection, we introduce pre-OCFs and a merging operator which allow us to circumvent the information loss due to conditionalization during the revision process. With these tools at hand, we can reconstruct the c-revision κ * Δ of the prior ranking function with the full set of conditionals by introducing a new revision operator following c-revisions in Section 7.2. In Section 7.3, we take a closer look at the properties of this new revision operator.

Merging and Pre-OCFs
Normalization is an artefact that turns general rankings in terms of natural numbers into OCFs. Note that for c-revisions (4), normalization is used as a meta-concept which is applied to a structural schema, and that normalization is not relevant at all for the constraint satisfaction problems (5). In the following, we will study OCFs and c-revisions which are not necessarily normalized. First, we define unnormalized OCFs as pre-OCFs. Every pre-OCF κ can be transformed into a classic ranking function via a function ocf which shifts all ranks by the minimal rank, ocf(κ)(ω) = κ(ω) − min ω′∈Ω κ(ω′). And every standard OCF is also a pre-OCF, so the two concepts can be transformed into each other. However, ocf is not injective; many pre-OCFs will yield the same κ under ocf. For a conditionalized OCF, a natural candidate for an associated pre-OCF is the prior OCF restricted to Mod(A i ) only, i.e. κ A i (ω) = κ(ω) for ω ∈ Mod(A i ). The kinematics principle defines the behaviour of a revision operator on exclusive and exhaustive subcontexts. Yet, there is no guiding principle on how to combine the information from the local contexts into the global context after revision. We introduce a merging operator based on premise splitting which solves the following merging problem (M): combine the sub-functions κ A i into one ranking function on Ω. Since the A i 's are exclusive and exhaustive, M({ κ A i } 1≤i≤n ) is well-defined. The merging operator M(·) merely connects the sub-OCFs to a global OCF on the whole set of possible worlds Ω. The exclusivity guarantees that the information does not interact, and the exhaustiveness guarantees that we can set up a pre-OCF over the full set of worlds Ω. Note that for every premise splitting P with conditionalized ranking functions κ|A i , we can now define a merged OCF M(κ) which solves the problem (M). If the sub-functions κ A i are ranking functions, then M(κ) is a ranking function, too.
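Both constructions are straightforward to sketch in code (the world encoding and the concrete ranks are our own assumptions): ocf shifts a pre-OCF by its minimal rank, and M glues sub-functions with pairwise disjoint domains together.

```python
def ocf(pre):
    """Normalize a pre-OCF (dict world -> rank) into a proper OCF by
    shifting every rank by the minimal one."""
    m = min(pre.values())
    return {w: r - m for w, r in pre.items()}

def merge(sub_ocfs):
    """Merging operator M: glue sub-(pre-)OCFs defined on exclusive and
    exhaustive sets of worlds Mod(A_i) into one pre-OCF on all of Omega.
    Exclusivity means the domains are pairwise disjoint."""
    merged = {}
    for sub in sub_ocfs:
        assert not (merged.keys() & sub.keys()), "contexts must be exclusive"
        merged.update(sub)
    return merged

# Example: premise splitting {a, ¬a} over worlds keyed by (a, b).
kappa_A1 = {(True, True): 1, (True, False): 2}    # prior kappa on Mod(a)
kappa_A2 = {(False, True): 0, (False, False): 3}  # prior kappa on Mod(¬a)

pre = merge([kappa_A1, kappa_A2])
print(ocf(pre))  # a proper OCF: the minimal rank is 0
```

Note that merge returns a pre-OCF in general; only the final ocf step enforces that some world has rank 0.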

Pre-c-revisions
Now, we examine the relation between the full c-revision κ * σ Δ of a ranking function with a set of conditionals and the local ranking functions κ|A i * σ Δ i , which are conditionalized and then c-revised with a subset of Δ. Throughout this section, we will assume that Δ satisfies the prerequisites of (GRK), namely, that there are subsets Δ = Δ 1 ∪ . . . ∪ Δ n with exclusive and exhaustive formulas A i such that the premises of the conditionals in Δ i imply A i . As we have seen before, (GRK) allows the reduction of the global revision to local contexts. Now, we will take a look at the transition from the local revisions κ|A i * σ Δ i to the globally revised ranking function κ * σ Δ. Therefore, we need to reconstruct the information lost by conditionalization and merge the revised information from the subsets Mod(A i ) back to Ω. We will call this problem (CondPS σ M ) for global c-revisions: (CondPS σ M ) Let κ be a ranking function and let Δ = Δ 1 ∪ . . . ∪ Δ n be a set of conditionals with disjoint subsets Δ i and a premise splitting P = {A 1 , . . . , A n }. How can we construct the globally c-revised ranking function κ * σ Δ via the locally revised ranking functions κ|A i * σ Δ i while taking advantage of the idea of kinematics?
Solving (CondPS σ M ) allows the agent to c-revise on local contexts and then rebuild the full epistemic state from the c-revised sub-epistemic states. The problem (CondPS σ M ) is defined for c-revisions so that the global revision satisfies the principle of conditional preservation, which ensures that the relation of worlds across the subsets Mod(A i ) is preserved. (CondPS σ M ) is defined for general c-revisions; we will show our results using strategic c-revisions, since they allow for a more elegant notation, but they could also be applied to general c-revisions.
For the case S = ⊤, the proof of Theorem 3 shows that all of κ * σ Δ, κ * σ Δ|A i , and κ|A i * σ Δ i use the same κ-values and impact factors; only the (re)normalization constants differ (see (7), (10) and (11)). Therefore, postponing normalization to the end of the revision process and working with pre-OCFs and pre-c-revisions before allows us to easily derive a globally revised κ from the local revisions on the contexts A i .
We illustrate (CondPS σ M ) via the following example, based on the prior OCF κ in Table 3. To save some space, we place the conditionalized OCFs beneath each other in one column. In Theorem 3, we showed that CR(κ, Δ) Δ i = CR( κ|A i , Δ i ), so using the ranking function κ from Table 3 we get a corresponding system of inequalities. From (IP-ESP σ ), it follows that we can choose a selection strategy σ such that the local and global impact factors coincide. Note that σ chose Pareto-minimal impact factors, s.t. (5) holds with η 1,1 , η 1,2 , η 2,1 > 1 and η 2,2 > 2. The results of the revisions κ * σ Δ and κ|A i * σ Δ i (i = 1, 2) and the normalization constants κ 0 resp. κ i,0 for κ|A i * σ Δ i (i = 1, 2) are displayed in Table 3. The normalization constants differ depending on whether we revise with the whole set Δ or with the subsets Δ i ; together with the prior conditionalization, this leads to different ranks for the revised c-revisions.
The normalization constant is the crucial point: even if we added κ(A i ) to the ranks of ω ∈ Mod(A i ) of the ranking functions κ|A i * σ Δ i , so as to reverse the conditionalization, we would not overcome the problem of the normalization constant.
The example above is rather technical in order to highlight the importance of the normalization constant for revision with strategic c-revisions. This also holds for the following examples in this section, since they are used to illustrate complex and rather technical concepts. (Table 3 shows the prior OCF κ and the conditionalized OCFs κ|A i with i = 1, 2 displayed in one column; the strategic c-revised OCFs κ * σ Δ and κ|A i * σ Δ i (i = 1, 2) are displayed as schemas and with the calculated ranks, again with conditionalized OCFs together in one column; the values of the normalization constants can be found in the last row.) We will discuss the relevance of our approach for applications at the end of this subsection. As a next step, we now define a revision operator similar to strategic c-revisions, except for the normalization constant. Thus, the strategically pre-c-revised ranking function will be a pre-OCF. We call this operator a strategic pre-c-revision.
Definition 12 (Strategic pre-c-revision) Let κ be an OCF and Δ = {(B 1 |X 1 ), . . . , (B m |X m )} a set of conditionals. Let σ be a selection strategy for c-revisions. Then a strategic pre-c-revision of κ by Δ is a pre-OCF κ * σ ,pre such that κ * σ ,pre accepts Δ and is given by the impact vector η⃗ = σ (κ, Δ) via κ * σ ,pre (ω) = κ(ω) + Σ_{ω |= X i ∧ ¬B i} η i . As for strategic c-revisions, the η i can be considered as impact factors for falsifying the single conditionals ( B i |X i ) in Δ. The possible values for the η i can be specified by the same constraint satisfaction problem as in Definition 5; neglecting the normalization constant has no effect on the values of the impact factors.
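A strategic pre-c-revision is thus just the c-revision schema without the normalization constant. A minimal sketch (our own encoding; the impact vector is assumed to be given by a selection strategy):

```python
def pre_c_revise(kappa, conditionals, etas):
    """Strategic pre-c-revision: add the impact eta_i to kappa(w) for
    every conditional (B_i|X_i) that w falsifies; no normalization,
    so the result is in general only a pre-OCF."""
    return {w: r + sum(e for (prem, conc), e in zip(conditionals, etas)
                       if prem(w) and not conc(w))
            for w, r in kappa.items()}

# Worlds keyed by (a, b); the impacts stand in for sigma(kappa, Delta).
kappa = {(True, True): 0, (True, False): 0, (False, True): 1, (False, False): 2}
delta = [(lambda w: w[0], lambda w: w[1])]   # the single conditional (b|a)
print(pre_c_revise(kappa, delta, etas=[2]))  # (a,¬b)-worlds shifted by 2
```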
Using the merging operator defined in Definition 11 and pre-c-revisions defined in Definition 12, we get the following result: Theorem 5 Let Δ = Δ 1 ∪ . . . ∪ Δ n be a set of conditionals with disjoint subsets Δ i = {( B i,j |A i,j )} 1≤j≤n i (n i = |Δ i |) for i = 1, . . . , n such that P = {A 1 , . . . , A n } is a premise splitting. Let κ be an OCF and let σ be a selection strategy that satisfies (IP-ESP σ ). Then, for strategic pre-c-revisions * pre σ resp. strategic c-revisions * σ and the merging operator M(·) as defined in Definition 11, it holds that κ * σ Δ(ω) = ocf ( M({ κ A i * pre σ Δ i } 1≤i≤n ) )(ω) for all ω ∈ Ω.
Proof The theorem follows directly from (GRK) and the definitions of strategic pre-c-revisions resp. c-revisions and the merging operator. For ω ∈ Mod(A i ), equation (21) holds. Note that since the A i 's are exclusive, the (pre-)c-revision with Δ yields the same results on Mod(A i ) as the revision with Δ i . (21) holds for all 1 ≤ i ≤ n, and therefore we obtain (18) for all ω ∈ Ω via the merging operator M(·). Thus, (22) follows from (18) via normalization for all ω ∈ Ω.
Theorem 5 shows that, under the prerequisites of (GRK), instead of revising the prior ranking function with the set of conditionals Δ, we can revise with the subsets Δ i of Δ on the local contexts and then merge the results; thus, the offsets κ|A i are sufficient to c-revise the prior ranking function κ with the set of conditionals Δ. Hence, global c-revisions can be constructed from c-revisions on local contexts. After normalizing the merged and revised pre-OCF, we get κ * Δ. This is a more efficient way to compute κ * Δ, because merging and normalizing both come at linear cost, and splitting into subcontexts reduces the exponential effort of the revision significantly.
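The resulting workflow (restrict the prior to each context, pre-c-revise locally, merge, normalize once at the end) can be sketched as follows. This is a toy illustration with hand-picked impact factors standing in for the selection strategy; the agreement with the direct global revision relies on choosing the same impacts locally and globally, which in the paper is guaranteed by (IP-ESP σ ):

```python
from itertools import product

# Worlds over {a, b}; the premise splitting is P = {a, ¬a}.
worlds = list(product([True, False], repeat=2))  # (a, b) pairs
kappa = dict(zip(worlds, [1, 2, 0, 3]))          # prior OCF

def restrict(k, sat):
    """The pre-OCF kappa_{A_i}: the prior ranks on Mod(A_i) only."""
    return {w: r for w, r in k.items() if sat(w)}

def pre_c_revise(k, delta, etas):
    """Pre-c-revision: add eta_i for every falsified (B_i|X_i)."""
    return {w: r + sum(e for (p, c), e in zip(delta, etas) if p(w) and not c(w))
            for w, r in k.items()}

def merge(subs):
    """Merging operator M over exclusive, exhaustive contexts."""
    out = {}
    for s in subs:
        out.update(s)
    return out

def ocf(pre):
    """Final normalization of the merged pre-OCF."""
    m = min(pre.values())
    return {w: r - m for w, r in pre.items()}

A1, A2 = (lambda w: w[0]), (lambda w: not w[0])
delta1 = [((lambda w: w[0]), (lambda w: w[1]))]          # Delta_1 = {(b|a)}
delta2 = [((lambda w: not w[0]), (lambda w: not w[1]))]  # Delta_2 = {(¬b|¬a)}

# Local pre-c-revisions on the restricted priors, then merge and normalize.
local1 = pre_c_revise(restrict(kappa, A1), delta1, [2])
local2 = pre_c_revise(restrict(kappa, A2), delta2, [4])
result = ocf(merge([local1, local2]))

# With the same impact factors, the direct global c-revision agrees.
direct = ocf(pre_c_revise(kappa, delta1 + delta2, [2, 4]))
print(result == direct)
```

Only the final ocf step touches all of Ω; everything else runs on the smaller local contexts Mod(A i ).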
To complete the picture, we now draw the connection between Theorem 5 and Theorem 4, using (GRK): Corollary 1 Let Δ = Δ 1 ∪ . . . ∪ Δ n be a consistent set of conditionals with subsets Δ i = {( B i,j |A i )} 1≤j≤n i with premise A i , and let P = {A 1 , . . . , A n } be a premise splitting. Let B i = {B i,j } 1≤j≤n i be the set of propositions given by the consequents of the conditionals in Δ i . Let κ be an OCF and let σ be a selection strategy that satisfies (IP-ESP σ ). Then, for strategic pre-c-revisions * pre σ resp. strategic c-revisions * σ and the merging operator M(·) as defined in Definition 11, it holds that κ * σ Δ(ω) = ocf ( M({ κ A i * pre σ B i } 1≤i≤n ) )(ω) for all ω ∈ Ω.
The corollary follows immediately from Theorems 3, 4 and 5, and connects our previous results. For sets of conditionals whose conditionals in Δ i all share the premise A i , we can revise with propositions rather than employing conditional revision on the local contexts; the global revision is determined by the merged revisions on the local contexts.

In general, there are some issues with this form of independence established by stability; for further reading, see [18]. (CR3) As we already mentioned in Section 4, by identifying propositional statements A with conditionals with tautological antecedents ( A|⊤ ), we can apply conditional revision for propositions in a straightforward way. Pre-c-revisions inherit this property from c-revisions, since the tautological antecedents of the conditionals in Δ are not affected by normalization. (CR4) Like (CR3), (CR4) is not affected by the fact that pre-c-revisions are not normalized, and pre-c-revisions inherit this property from standard c-revisions.

Conclusion
Generalized Ranking Kinematics aims to capture the intuition that information concerning exclusive cases should be revised independently. Jeffrey [11] implemented this idea in the probabilistic framework for sets of propositions. Shore and Johnson [22] extended this notion for sets of conditionals and showed that the ability to differentiate between different contexts is crucial for inductive inference in the probabilistic framework. In this paper, we elaborated on the idea of Shore and Johnson and introduced a general concept of kinematics.
The key to our approach is to split the premises of the set of conditionals Δ and thereby create independent subcontexts that enable us to deal with new information coming from different directions independently. We presented an algorithm to compute the premise splitting for an arbitrary finite set of conditionals and proved that we obtain the finest premise splitting, which maximises the benefits of (GRK). In the framework of ranking functions, we showed in [21] that c-revisions are a proof of concept for (GRK). In this paper, we defined strategic c-revisions to make the general concept of c-revisions usable in a more elegant and axiomatic way and reformulated the main result from [21]. Focussing on independent subcontexts is key to the (GRK) approach. We implemented this idea in the framework of ranking functions via Spohn's conditionalization [24]. Note that Spohn's conditionalization itself can be seen as a propositional revision method, and via the idea of kinematics, we established a variant of the standard Ramsey Test, the Ramsey Test for conditional revision, that implements a connection between propositional and conditional revision. Crucial for this connection is the special structure of the conditional knowledge base. It is part of our future work to generalize (CRT ocf ) to more general types of conditional knowledge bases by generalizing the notion of local contexts. The downside of conditionalization is the loss of information concerning the prior epistemic state, which makes the reconstruction of the globally c-revised ranking function difficult. We called this problem (CondPS) and provided a solution in this paper by defining a merging operator and introducing the concept of pre-OCFs. The merging operator allows us to set up a global epistemic state from local epistemic states. The idea of merging information from local to global contexts can also be found in the logics underlying Bayesian networks.
By constructing the globally revised posterior ranking function, using just the offsets of the conditionalized and locally revised OCFs, we utilize (GRK) to the fullest and show that strategic c-revisions establish a notion of relevance on a semantic level. Defining a merging operator for arbitrary contexts is part of our future work, in order to broaden the scope of the kinematics principle to non-exclusive semantic splittings.
In [13], c-revisions were devised as a qualitative counterpart to probabilistic revision via the principle of minimum cross-entropy (MinCEnt) and thus inherit many qualities of that revision operator. As we have mentioned above, (GRK) is inspired by Subset Independence, which is one of the characterizing axioms of MinCEnt according to Shore and Johnson [22]. We implemented and elaborated on this powerful principle for iterated revision. The concept of local revisions can also be found in the concept of syntax splitting. Instead of splitting the worlds semantically via exclusive and exhaustive formulas A i , syntax splitting induces a syntactical splitting of the signature.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.