Disjunctive logic programs, answer sets, and the cut rule

In Minker and Rajasekar (J Log Program 9(1):45–74, 1990), Minker proposed a semantics for negation-free disjunctive logic programs that offers a natural generalisation of the fixed point semantics for definite logic programs. We show that this semantics can be further generalised for disjunctive logic programs with classical negation, in a constructive modal-theoretic framework where rules are built from claims and hypotheses, namely, formulas of the form □φ and ◊□φ where φ is a literal, respectively, yielding a "base semantics" for general disjunctive logic programs. Model-theoretically, this base semantics is expressed in terms of a classical notion of logical consequence. It has a complete proof procedure based on a general form of the cut rule. Usually, alternative semantics of logic programs amount to a particular interpretation of nonclassical negation as "failure to derive." The counterpart in our framework is to complement the original program with a set of hypotheses required to satisfy specific conditions, and apply the base semantics to the resulting set. We demonstrate the approach for the answer set semantics. The proposed framework is purely classical in three main ways.
First, it uses classical negation as the unique form of negation. Second, it advocates the computation of logical consequences rather than of particular models. Third, it makes no reference to a notion of preferred or minimal interpretation.


Disjunctive logic programs, fixed points, and negation
As noted in [19], the field of disjunctive logic programming had its beginnings in 1982, but the first major semantics for disjunctive logic programs were proposed in 1990 in [17], which offered, in particular, a natural generalisation of the fixed point semantics for definite logic programs given in [27]. The key feature of that generalised semantics is that disjunctions of atoms are generated using a bottom-up approach applied to sets of rules, assumed to have (possibly empty) conjunctions of atoms as bodies and nonempty disjunctions of atoms as heads. The model-theoretic interpretation is easy and flexible, as the absence of negation in the rules allows one to interpret disjunction either constructively or not, and also to choose for intended interpretations either all structures, all standard (Herbrand) structures, or all minimal standard structures.
It has been observed, in [21] in particular, that the generation process at work in [17] is a form of application of the hyper-resolution rule, which involves a finite but arbitrary number of clauses, as opposed to classical resolution, which uses precisely two clauses as premises. Resolution, like modus ponens, is a form of the cut rule. We will refer to the cut rule to motivate and describe a generalisation of the immediate consequence operator associated with the fixed point semantics studied in [17]. The target will be the more general class of disjunctive logic programs with possibly empty heads, and more crucially, with possible occurrences of negation in the bodies and in the heads of the rules. Before we clarify what we actually mean by "occurrences of negation," let us recall that the field of logic programming has first focused on logic programs without negation, then logic programs with nonclassical negation in the bodies of the rules, and then logic programs with nonclassical negation in the bodies of the rules and classical negation in the bodies and the heads of the rules; see [2] for a survey. Some researchers have also proposed to consider more than two forms of negation [1].
Similarly to the situation with normal programs (sets of rules with no disjunction in the heads), different views on nonclassical negation have given rise to a large number of alternative semantics. "Failure to derive" is a notion that can be applied to sets of rules with no occurrence of (any form of) negation, and [17] already establishes a relationship between their fixed-point semantics and the generalised closed world assumption. When nonclassical negation is allowed to occur in the bodies of the rules, many interpretations become possible. But the various interpretations of nonclassical negation do not exhaust all issues raised by the presence of negation in logic programs: in [12], an argument is made that it is often desirable to be able to work with even more powerful disjunctive programs, in which classical negation can be used, and the answer set semantics is proposed as a natural alternative semantics able to deal with both classical and nonclassical negation. See [3], [7], [23] for other approaches and complexity results on disjunctive logic programs where both nonclassical and classical negation coexist.
Let us focus on classical negation first. What happens to the fixed point semantics in [17] when classical negation enters the stage? A technique to reduce programs with classical negation to programs without is proposed in both [12] and [24], and discussed in [18]; essentially, for every predicate symbol ℘, a new predicate symbol ℘¬ is introduced, that allows one to get rid of ¬ by replacing the occurrences of ¬℘ with ℘¬, and (taking ℘ to be nullary not to clutter the discussion) an integrity constraint ← ℘ ∧ ℘¬ is added to express mutual exclusivity between ℘ and ℘¬. The immediate consequence operator in [17], slightly generalised to accept rules with an empty head, can then be applied. For instance, the rules p ← and ¬p ∨ ¬q ← are transformed into p ← and p¬ ∨ q¬ ←, which, complemented with ← p ∧ p¬, allows the generalised immediate consequence operator to generate q¬; and if one added the rule ¬q ← q¬, then one would eventually generate ¬q, as required from a complete proof procedure. This solution is not fully satisfactory though, as one has obtained a complete proof procedure for an extension of the original set of rules, not for the set of rules itself. Our immediate consequence operator will be strongly related to the one in [17], but will deal gracefully with classical negation, without enriching the original language. Now let us focus on nonclassical negation. In close relationship to stable autoepistemic expansions ([14, 20]), [26] lets the language of disjunctive logic programs include a modal operator B that is used to capture nonclassical negation in one of two ways: not ℘ is expressed as either B¬℘ or ¬B℘, conveying that ¬℘ is believed or that ℘ is not believed, respectively. So classical negation is used in combination with B in one of the two proposed translations of not.
As for classical negation in the original program, it gives rise in [26] to a third form of negation, denoted ∼ and referred to as "strong" negation. Based on a relation of minimal model entailment, [5,26] define a notion of static expansion that yields a fixed-point semantics for disjunctive logic programs; the associated notion of logical consequence is not standard as it requires selecting, amongst all possible interpretations, those that happen to be minimal according to an underlying notion of closed-world assumption. In this paper, we will show that we can further exploit the power of modal operators and work in a classical framework, in which classical negation is the only form of negation, a theme which has been developed in depth in [15]. Disjunctive logic programs will be modal disjunctive sets of rules, and the least fixed point of such a set of rules F will consist precisely of the disjunctions that are logical consequences of F. In contrast to [26] in which the heads and bodies of rules are disjunctions and conjunctions of both formulas with no occurrence of B and formulas with occurrences of B, our "building blocks" will be formulas of the form □ϕ or ◊□ϕ where ϕ is a literal. More specifically, given an atom ℘, we will map ℘ to □℘, ¬℘ to □¬℘, not ℘ to ◊□¬℘ and not ¬℘ to ◊□℘. This mapping is studied in depth in the context of nondisjunctive logic programs in [16]. One of the key properties of the proposed syntax of rules is that disjunction is constructive. In [26], it is claimed that "as pointed out by several researchers, the form of negation proposed by Gelfond and Lifschitz does not represent real classical negation ¬F but rather its weaker form, denoted here by ∼F, which does not require the law of excluded middle F ∨ ∼F."
This paper adopts a different view, namely, that only classical negation is necessary, and that there are other ways to circumvent excluded middle than to introduce a weaker form of negation; rather than expressing that either F or ¬F holds, one can express that either F or ¬F has been derived, that is, □F ∨ □¬F, in accordance with Gödel's interpretation of intuitionistic logic in S4. Our representation of not allows disjunction to behave constructively with respect to both ¬ and not, as □p ∨ ◊□¬p will not be valid. In contrast, □p ∨ ◊¬p would be valid, so ◊□ cannot be replaced by ◊; our results will provide evidence that for our purposes, ◊□ is the right choice. There are many examples of frameworks where the "building blocks" are of the form □ϕ and ◊□ϕ, in particular within the setting of autoepistemic logic (e.g., [10]), but not only (e.g., [22]).

Answer sets and program transformation
Let us describe the answer set semantics with an example. Consider the following set of rules, R.
It has X = {¬p₁, p₂, p₄} as an answer set. Indeed, let us replace in (the bodies of the rules in) R all formulas of the form not ψ by true if ψ ∉ X, and by false otherwise, and simplify. This is the Gelfond-Lifschitz transformation [11]. It results in the following set of rules, R′: For all atoms α, replace in R′ all occurrences of ¬α by a new atom α′, resulting in a new set of rules, R″: The set of logical consequences of R″ is {p₁′, p₂, p₄}, and that set consists precisely of the members of X where all negated atoms of the form ¬α have been replaced by α′, completing the verification that X is an answer set for R.
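This verification can be replayed mechanically. Since the rules of R are not reproduced here, the program below is a hypothetical normal program with classical negation that happens to have the same answer set X = {¬p₁, p₂, p₄}; the encoding (literals as strings, `-` for ¬) and the helper names are ours, and the sketch ignores the special case of an inconsistent answer set.

```python
# Sketch of the Gelfond-Lifschitz transformation for a normal program with
# classical negation. A rule is (head, positive_body, not_body), where heads
# and bodies contain literals such as "p1" or "-p1" (classical negation).
# This program is a hypothetical illustration, not the paper's example R.

RULES = [
    ("p2",  [],     []),        # p2 <-
    ("-p1", ["p2"], ["p1"]),    # -p1 <- p2, not p1
    ("p4",  [],     ["p3"]),    # p4 <- not p3
]

def reduct(rules, X):
    """Drop rules with a premise 'not psi' where psi is in X; erase the
    remaining 'not' premises."""
    return [(h, pos) for (h, pos, nots) in rules
            if all(psi not in X for psi in nots)]

def consequences(positive_rules):
    """Least set of literals closed under the (now negation-free) rules."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if head not in derived and all(b in derived for b in pos):
                derived.add(head)
                changed = True
    return derived

def is_answer_set(rules, X):
    return consequences(reduct(rules, X)) == set(X)

print(is_answer_set(RULES, {"-p1", "p2", "p4"}))  # True
```

The renaming of ¬α into α′ is implicit here: `-p1` is simply treated as one more propositional symbol when the reduct is closed under its rules.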
The answer set semantics imposes a condition that, in our framework, translates into complementing the (modal version of the) logic program with a set of formulas of the form ◊□ϕ. More precisely: if ◊□ϕ occurs in the (modal version of the) logic program, then that program should be complemented with ◊□ϕ provided that the resulting set of formulas does not logically imply □¬ϕ (which would logically imply ◊□¬ϕ, hence be logically inconsistent with ◊□ϕ). One can think of this condition as: any possible hypothesis ψ (a formula of the form ◊□ϕ that occurs in the body of some rule) should be made, unless the resulting theory (the modal logic program complemented with the set of selected hypotheses) refutes ψ.² If we were dealing with normal programs, then no other machinery would be needed to capture the answer set semantics. But as we are dealing with logic programs whose heads are disjunctions of (one or more) literals, it is necessary to introduce one more component. Indeed, the cut rule will not be applied to F, but to a closure of F under an operation that creates more rules, while still preserving logical validity (the created rules will be logical consequences of those from which they originate). What is required is a form of program extension, a particular case of program transformation.

² Variations on the same idea can be used to capture other semantics besides the answer set semantics, and in particular the well-founded semantics; the key condition that captures the well-founded semantics would be expressed not in terms of "nonrefutation of hypothesis," but in terms of "confirmation of hypothesis," when adding ◊□ϕ (and possibly other hypotheses) to a modal logic program results in a theory that logically implies □ϕ. But these considerations go beyond the key purpose of this paper and are developed in depth in [16], hence we will not say anything more on the well-founded semantics.
Program transformation is a technique that has been widely used in logic programming in general, and for the particular purpose of defining semantics of disjunctive programs, for example in [4]. Essentially, the transformation we shall use will, from a rule of the form ψ → ℘₁ ∨ ⋯ ∨ ℘ₙ ∨ ℘ₙ₊₁ ∨ ⋯ ∨ ℘ₙ₊ₖ, produce rules of the following form.
In a different syntactic setting, this operation was studied in [6] to transform disjunctive programs into normal programs. In our framework, the original set of rules will not be replaced by, but complemented with, formulas obtained by left-shift. The answer set semantics lets literals assumed not to hold play a role in activating a rule by having them preceded with not in the body of the rule. Translated into our setting, the set of literals preceded with not in the bodies of some rules makes up a sufficient pool of hypotheses to cast the answer set semantics, provided the focus is on normal programs. But when dealing with disjunctive logic programs, this pool of literals is not sufficient, and the operation above, which left-shifts some formulas and turns their negations into hypotheses, makes it possible to expand that pool exactly as needed, no more, no less, and capture exactly the classical notion of an answer set for a disjunctive logic program, as defined for instance in [10]. Interestingly, this operation is needed not only in relation to the answer set semantics, but also for what we referred to above as the "base semantics", that is, essentially, the fixed point semantics in [17] generalised to disjunctive logic programs with classical negation.
There are alternative approaches to cast the answer set semantics in a logical framework where the rules of a logic program are modified or complemented with other formulas. The framework can be classical; see for instance [13], where "loop formulas" are added, in a setting restricted to logic programs without classical negation, and more importantly, limited to finite sets of finite rules (two conditions that we do not impose). The framework can be nonclassical; see for instance [25], in a setting that sits between intuitionistic logic and classical logic. There is a vast literature that examines how to compute answer sets; see [9] for a good example of work along those lines. Our study sheds no light on how to compute answer sets. Our framework is not classical in the sense that classical logical inference would make it possible to derive, from a single fixed theory, all answer sets and nothing but answer sets. Rather, its key features are the following.
• It allows one to formalise a very general class of logic programs, with mechanical translations into the chosen logical language.
• It characterises semantics of interest in the form of some formulas being logical consequences of some theory T, possibly together with conditions imposing that certain formulas be or not be logical consequences of T.
• It is a unifying framework in that the characterisations all rely on a fixed set of underlying notions and a fixed classical notion of logical consequence (a claim that is partially supported by the work described in this paper, but more strongly so thanks to the additional work presented in other papers).

Plan of the paper
We proceed as follows. In Sect. 2, we motivate the notions to be introduced in the rest of the paper. In Sect. 3, we fix the logical background, and in particular, define the modal language from which the bodies and heads of a disjunctive logic program will be made up, together with its semantics. In Sect. 4, we formalise the left shift operation and establish the relationship between this framework and the answer set semantics; the rest of the paper can then be applied to the answer set semantics as a particular case. In Sect. 5, we introduce an intermediate proof system, in the spirit of tableau proofs, and establish its completeness with respect to the class of disjunctive logic programs under consideration. In Sect. 6, we let the cut rule be a counterpart to the immediate consequence operator described in [17]; we then establish the completeness of the system of proof based on applications of this rule. Essentially, we convert a tableau proof into a proof by cuts, a technique which is interesting in its own right.

Objective
Consider the very simple sets of rules whose left hand sides are empty or positive propositional formulas (built from atoms using conjunction and disjunction) and whose right hand sides are atoms. Here is an example of such a set of rules P: One can describe the set of literals [P] that are logical consequences of this set of implications, namely, {p₁, p₂, p₃, p₅, p₈, p₁₀}, as the ⊆-minimal fixed point of P. One can also describe it as the result of
• first collecting the facts (the right hand sides of the rules with empty left hand sides), and
• then firing the rules that can be activated because the atoms generated so far validate their left hand sides, and collecting their right hand sides together with what has been previously generated.
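The bottom-up computation just described can be sketched as follows. Since the rules of P are not reproduced above, the set of rules below is a hypothetical stand-in whose least fixed point is the announced set {p₁, p₂, p₃, p₅, p₈, p₁₀}.

```python
# Bottom-up computation of [P] for rules whose left hand sides are positive
# propositional formulas and whose right hand sides are atoms. Formulas are
# nested tuples: an atom, ("and", [...]) or ("or", [...]). This P is a
# hypothetical example, not the paper's.

def holds(formula, generated):
    """A positive formula is validated by the atoms generated so far exactly
    when it evaluates to true with those atoms true and all others false."""
    if isinstance(formula, str):
        return formula in generated
    op, parts = formula
    if op == "and":
        return all(holds(f, generated) for f in parts)
    return any(holds(f, generated) for f in parts)   # "or"

TRUE = ("and", [])   # empty conjunction: the body of a fact

P = [
    (TRUE, "p1"), (TRUE, "p2"),
    (("and", ["p1", "p2"]), "p3"),
    (("or", ["p3", "p4"]), "p5"),
    (("and", ["p5", ("or", ["p2", "p9"])]), "p8"),
    (("and", ["p1", "p8"]), "p10"),
    (("and", ["p6", "p7"]), "p9"),   # never fires: p6, p7 are not generated
]

def least_fixed_point(rules):
    generated = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in generated and holds(body, generated):
                generated.add(head)
                changed = True
    return generated

print(sorted(least_fixed_point(P)))
# ['p1', 'p10', 'p2', 'p3', 'p5', 'p8']
```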
The theme of this paper is that properties (A)-(C) can be preserved for rules that are more interesting and general, and more particularly, for rules with disjunctions on the right hand side; moreover, an adapted form of the cut rule can operate on rules in a way that satisfies properties (A)-(C). The most elegant presentation of the cut rule is in the sequent calculus, and takes the form

(∗) from ϕ₁, …, ϕₙ ⊢ ψ₁, …, ψₘ, ξ and ξ, ϕ′₁, …, ϕ′ₙ′ ⊢ ψ′₁, …, ψ′ₘ′, infer ϕ₁, …, ϕₙ, ϕ′₁, …, ϕ′ₙ′ ⊢ ψ₁, …, ψₘ, ψ′₁, …, ψ′ₘ′

to express that the consequent is a logical consequence of the antecedents. We will go through intermediate sets of rules and intermediate adaptations of the cut rule till we reach the final form of the cut rule that can be satisfactorily applied to the sets of rules of the kind that we want to eventually be able to work with. Let us first adapt the cut rule so that it can deal with sets of rules of the form described above, in a way that satisfies properties (A)-(C) above: we let it take the form

(∗∗) from ⊢ ξ₀, …, ⊢ ξₖ and ϕ ⊢ ψ, infer ⊢ ψ

where k ∈ N, ξ₀, …, ξₖ, ψ are atoms, ϕ is a positive propositional formula with at least one occurrence of each of ξ₀, …, ξₖ, and ϕ[ξ₀/true, …, ξₖ/true], that is, the result of making all occurrences of ξ₀, …, ξₖ in ϕ true, is logically valid. This is a big modification of (∗), and in some way also a big simplification, more similar to a generalised modus ponens than to a full cut, but further adaptations will bring us closer to (∗) as we consider sets of rules with disjunction on the right hand side. As an example of an application of (∗∗) for the set of rules P defined above, p₁₀ is added to [P] by an application of (∗∗) to previously generated atoms and a rule of P whose left hand side they validate. Let us emphasise the key differences between (∗) and (∗∗), which will be applicable to all further adaptations of the cut rule.
• In (∗), the cut rule has two antecedents. In (∗∗), at least two sequents, but possibly more, make up the antecedents.
• In (∗), the antecedents of the cut rule are arbitrary sequents. In (∗∗), one antecedent of the cut rule is an arbitrary sequent, but all other antecedents are sequents with an empty left hand side.
• In (∗), the formula to which the cut is applied, namely ξ, is one of a number of formulas on the left hand side of a sequent that are implicitly conjoined. In (∗∗), the formulas ξ₀, …, ξₖ to which the cut is simultaneously applied occur in a formula ϕ that is not necessarily the conjunction of ξ₀, …, ξₖ, but that is logically implied by that conjunction.
• In (∗), the consequent is an arbitrary sequent. In (∗∗), it is a sequent with an empty left hand side.

Negation
To see whether negation is problematic, let P now denote the extension of the set of rules defined above with the following rules.
One might think that the contrapositive of the first extra rule should let p₁₁ join [P], breaking down property (A) above, and the last two extra rules taken together should let p₁₃ join [P], breaking down properties (B) and (C). But following standard practice in logic programming, we work in a paradigm where disjunction is constructive. For example, given literals ϕ₁, ϕ₂, ϕ₃ and ϕ, the intended meaning of the rule ϕ₁ ∧ (ϕ₂ ∨ ϕ₃) → ϕ is, in that paradigm: if ϕ₁ has been generated, and if at least one of ϕ₂ and ϕ₃ has been generated, then ϕ can be generated. A representation of our set of rules more faithful to that intended meaning uses the modal operator □ to capture the notion "has been generated" or "has been proved." And in accordance with the expected meaning of □, we will work in a logical setting where for all atoms ϕ, □ϕ ∧ □¬ϕ is inconsistent, while □ϕ ∨ □¬ϕ is satisfiable but not valid. Call claim any formula of the form □ϕ where ϕ is a literal, and claiming condition any formula obtained from the set of claims by arbitrary application of conjunction and disjunction. So we now consider sets of rules whose left hand sides are claiming conditions and whose right hand sides are claims.
Following on from our example, let F denote the modal version of the set of rules P. We also define [F] as the set of claims that are logical consequences of F. The contrapositive of the implication □¬p₁₁ → □¬p₃ is a formula that is logically equivalent to ◊p₃ → ◊p₁₁, which allows one to generate ◊p₁₁ but not the stronger □p₁₁, and as □p₁₂ ∨ □¬p₁₂ is not valid, □p₁₃ cannot be generated either. Hence, though it uses classical negation and no other form of negation, this modal representation and associated interpretation of a set of rules is suitable to model negation as failure and the main semantics of logic programs. It is easy to see that properties (A)-(C) above are preserved for the kind of rules now under consideration, thanks to the form of the cut rule that has been described as (∗∗), with the only difference that ξ₀, …, ξₖ, ψ are claims rather than atoms, and ϕ is a claiming condition rather than a positive propositional formula.
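A minimal sketch of this constructive reading follows, with the three extra rules reconstructed from the discussion above (an assumption on their exact shape). Since a claim only holds once it has been generated, neither □p₁₁ nor □p₁₃ is ever generated.

```python
# Constructive reading of claims: a rule fires only once the claim in its
# body has itself been generated, so neither the contrapositive of
# []-p11 -> []-p3 nor the pair []p12 -> []p13, []-p12 -> []p13 can fire.
# A claim []phi is written as the literal phi; the rules are reconstructed
# from the surrounding discussion, not quoted from the paper.

EXTRA = [
    ("-p11", "-p3"),   # []-p11 -> []-p3
    ("p12",  "p13"),   # []p12  -> []p13
    ("-p12", "p13"),   # []-p12 -> []p13
]

def generate(rules, generated):
    generated = set(generated)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body in generated and head not in generated:
                generated.add(head)
                changed = True
    return generated

# []p3 is in [F], yet neither []p11 nor []p13 can be generated: []p12 and
# []-p12 have not been generated ([]p12 v []-p12 is not valid).
print(generate(EXTRA, {"p3"}))   # {'p3'}
```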

Disjunction
Let us now allow disjunction on the right hand side of a rule. Call alternative any formula of the form ϕ₁ ∨ ⋯ ∨ ϕₙ where n ∈ N and ϕ₁, …, ϕₙ are pairwise distinct claims, with no a priori reference to any particular rule (when n = 0, the alternative is empty). So we now consider sets of rules whose left hand sides are claiming conditions and whose right hand sides are alternatives. Let us extend our running example so that F now denotes the set of rules above complemented with the following rules. These rules are expected to add □p₁₅, □p₁₇ and □p₂₀ to [F]. But the very idea of rules that read from left to right and fire individually seems to break down. Consider first the claims □p₁₅ and □p₁₇. Each of them is inferred from two generated alternatives (□¬p₃ ∨ □p₁₅ and □p₃, and □p₁₆ ∨ □p₁₇ and □¬p₁₆, respectively), breaking down property (C) above. This is still easily fixed by closing F under a left shift operation, that moves claims from right to left as exemplified below, in a way that will preserve the logical validity of the original set of rules and capture well the reasoning behind the inference of □p₁₅ from □¬p₃ ∨ □p₁₅ and □p₃, and the inference of □p₁₇ from □p₁₆ ∨ □p₁₇, □¬p₁₆ and □p₁₀. When we define more formally the syntax of a rule, we will, for good reasons, take disjunction and conjunction as operators on sets. This is why we assumed that all claims that occur in an alternative are pairwise distinct. For an illustration, the fact that → □p ∨ □p is not an admissible rule implies that □¬p → □p is not admissible either (it would have to be obtained from the former by left shift); that is good, as were □¬p → □p admissible, it could be demanded that it lets one derive □p, but □p cannot be derived from □¬p → □p in a way that satisfies properties (A)-(C) above. Also note that a left shift can move the whole right hand side of a rule to the left.
That will allow one to deal with inconsistent sets of rules and reduce any particular contradiction, involving two claims of the form □p and □¬p, to the "generic" contradiction that emerges when the empty alternative (disjunction without disjunct) is derived, as is the case for instance for a set of rules that contains both → □p and → □¬p, with □p → produced from the latter by left shift. Now consider the claim □p₂₀. Here it seems that in order to generate □p₂₀, it is necessary to consider, together with □p₁₈ ∨ □p₁₉, both rules □p₁₈ → □p₂₀ and □p₁₉ → □p₂₀, breaking down property (B) above. This is the point where the cut rule has to be generalised from (∗∗) to a form that brings it closer to (∗): we now let it take the form

(†) from the sequents ⊢ ξ_0^0, …, ξ_0^{n_0} through ⊢ ξ_k^0, …, ξ_k^{n_k} and the sequent ϕ ⊢ ψ₀, …, ψₙ, infer ⊢ ξ₁, …, ξₘ, ψ₀, …, ψₙ

where
• k, n, n_0, …, n_k ∈ N and, for all i ≤ k, ξ_i^0, …, ξ_i^{n_i} are pairwise distinct claims,
• ψ₀, …, ψₙ are pairwise distinct claims,
• ϕ is a claiming condition with at least one occurrence of each of ξ_0^{n_0}, …, ξ_k^{n_k},
• ϕ[ξ_0^{n_0}/true, …, ξ_k^{n_k}/true] is logically valid, and
• ξ₁, …, ξₘ are the pairwise distinct members of {ξ_i^j | i ≤ k, j < n_i}.

For instance, we can add □p₂₀ to [F] thanks to two applications of (†): from ⊢ □p₁₈ ∨ □p₁₉ and □p₁₈ ⊢ □p₂₀, infer ⊢ □p₁₉ ∨ □p₂₀; then from ⊢ □p₁₉ ∨ □p₂₀ and □p₁₉ ⊢ □p₂₀, infer ⊢ □p₂₀. One can see that the logically strongest members of [F] can be obtained by successive applications of (†) on the closure of F under left shift, validating properties (A)-(C) above.
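The two applications of (†) deriving □p₂₀ can be replayed with a small saturation procedure. The sketch below only covers the special case of rules whose bodies are conjunctions of claims, and the encoding (alternatives as frozensets of claims, written as plain literals) is ours.

```python
import itertools

# Alternatives are frozensets of claims (a claim []phi written as "phi").
# A rule is (body, head): a set of claims conjoined on the left, and an
# alternative on the right. Saturation applies a conjunctive-body special
# case of the cut rule (+): for each body claim, pick a derived alternative
# containing it; the leftover disjuncts pile up with the head.

RULES = [
    (set(),   frozenset({"p18", "p19"})),   # -> []p18 v []p19
    ({"p18"}, frozenset({"p20"})),          # []p18 -> []p20
    ({"p19"}, frozenset({"p20"})),          # []p19 -> []p20
]

def saturate(rules):
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if not body:                    # a fact: derive its head
                if head not in derived:
                    derived.add(head)
                    changed = True
                continue
            pools = [[alt for alt in derived if xi in alt] for xi in body]
            for choice in itertools.product(*pools):
                new = frozenset().union(
                    *(alt - {xi} for alt, xi in zip(choice, body))) | head
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

alts = saturate(RULES)
print(frozenset({"p20"}) in alts)   # True: first []p19 v []p20, then []p20
```

Note how treating alternatives as sets makes the final step immediate: cutting □p₁₉ out of □p₁₉ ∨ □p₂₀ and adding the head □p₂₀ yields the singleton {□p₂₀}, with no duplicate disjunct to worry about.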
One might object that the order of the claims on the right hand side of a sequent has to be taken into account, and that it is necessary to add a rule that permutes the various elements of a sequence so that the claims to which the cut is applied can always be last on the right hand side of the corresponding sequents. But this will not be necessary, again because disjunction will be treated as an operator over a set: the right hand side of a sequent in (†) is implicitly disjuncted as a set, and is therefore to be conceived of as some arbitrary enumeration of that set, the order of the enumeration being irrelevant. One could therefore write the right hand side of a sequent either as ⋁{ξ₁, …, ξₙ} or as {ξ₁, …, ξₙ} rather than as ξ₁, …, ξₙ, depending on whether one prefers to make the disjunction explicit or implicit.

Answer sets
Can we consider positive formulas over a set of formulas that is not constrained to contain claims only? As we work in a constructive paradigm, and as for all literals ϕ, □ϕ ∨ ◊¬ϕ is valid, we have to keep formulas of the form ◊ϕ out of both sides of the rules. But we will work in a logical framework where for all literals ϕ, ◊□ϕ is a logical consequence of □ϕ, is consistent with ◊□¬ϕ and is inconsistent with □¬ϕ, and where ◊□ϕ ∨ ◊□¬ϕ is not valid. Call hypothesis any formula of the form ◊□ϕ where ϕ is a literal (so a hypothesis is any formula of the form ◊ϕ where ϕ is a claim). We now explain why hypotheses are interesting and why it is worth allowing them to occur on the left hand side of the rules, based on relationships to the answer set semantics.
The usual presentation of the answer set semantics uses two kinds of negation: classical ¬ and nonclassical not. The former can only be applied to atoms, and the latter to atoms or classically negated atoms; moreover, not can only occur on the left hand side of a rule, and the right hand side of a rule can be either a literal or a disjunction of literals. Such sets of rules are referred to as extended-disjunctive programs, and as extended-normal programs in case disjunction does not occur on the right hand side of any rule. One can transform an extended-disjunctive program R into a set of rules R′ that uses modalities but not not, proceeding as follows.
• Precede all occurrences of literals preceded with neither not nor ¬ with □.
• Replace every occurrence of not which is not followed by ¬ with ◊□¬.
• Replace every occurrence of not ¬ with ◊□.
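The three clauses above amount to the mapping ℘ ↦ □℘, ¬℘ ↦ □¬℘, not ℘ ↦ ◊□¬℘, not ¬℘ ↦ ◊□℘, and can be sketched as a mechanical translation on literals, writing `[]` for □ and `<>` for ◊ (the string encoding is ours):

```python
# Mechanical translation of the literals of an extended-disjunctive program
# into the modal language: "p" becomes "[]p", "-p" becomes "[]-p",
# "not p" becomes "<>[]-p", and "not -p" becomes "<>[]p".

def translate(literal):
    if literal.startswith("not -"):
        return "<>[]" + literal[len("not -"):]   # not -p  ->  <>[]p
    if literal.startswith("not "):
        return "<>[]-" + literal[len("not "):]   # not p   ->  <>[]-p
    return "[]" + literal                        # covers both "p" and "-p"

print([translate(l) for l in ["p", "-p", "not p", "not -p"]])
# ['[]p', '[]-p', '<>[]-p', '<>[]p']
```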
For instance, this technique transforms the following extended-normal program R, already met in Sect. 1.2, into the following set of rules R′.
Call condition, with no a priori reference to any particular rule, any formula obtained from the set of claims and hypotheses by arbitrary application of conjunction and disjunction. We have now reached the final form of the rules we want to be able to work with: they have conditions as left hand sides, and alternatives as right hand sides; let us refer to them as conditional alternatives.
Given an extended-normal program R, let us examine the relationship between R and the associated set of conditional alternatives R′. It is easy to see that if R is the extended-normal program defined above, then R has a unique answer set. We will verify that more generally, given an extended-normal program R and a set of literals X, X is an answer set for R iff {□ϕ | ϕ ∈ X} is the set of all claims that are logical consequences of R′ ∪ H where H is a set of hypotheses with the following properties.
• R′ ∪ H is consistent;
• for all literals ψ, ◊□ψ belongs to H iff ◊□ψ occurs in (the conditions of the conditional alternatives in) R′ and □¬ψ is not a logical consequence of R′ ∪ H.
Obviously, the previous relationship between R and R′ does not generalise to extended-disjunctive programs. For instance, the extended-disjunctive program consisting of only → p ∨ q has {p} and {q} as answer sets, and neither {□p} nor {□q} is the set of claims that are logical consequences of {□p ∨ □q} complemented with some set of hypotheses. What we need is the left-shift operation described in the previous section, adapted to let claims that move from right to left become hypotheses rather than claims, which still preserves logical validity; let us talk about hypothetical left shift to refer to this form of the left shift operator. With → □p ∨ □q as example, the hypothetical left shift generates the three conditional alternatives that follow: ◊□¬q → □p, ◊□¬p → □q, and ◊□¬p ∧ ◊□¬q →, the last one having the empty alternative as right hand side.
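The hypothetical left shift and the hypothesis-selection condition can be sketched together on the program consisting of → □p ∨ □q. The search over sets of hypotheses below follows the characterisation stated above for extended-normal programs, adapted to this tiny example: the heads of the shifted rules are single claims or empty, so no cut machinery is needed, and the encoding (a literal string `phi` standing for the hypothesis ◊□phi) is ours.

```python
import itertools

# Hypothetical left shift applied to the single rule  -> []p v []q,
# followed by hypothesis selection: <>[]psi is selected iff []~psi is not
# a consequence, and the selected set must keep the program consistent.

def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

HEAD = {"p", "q"}   # the alternative []p v []q

def hypothetical_left_shift(head):
    """Every nonempty subset of the head moves left, as hypotheses."""
    rules = [(frozenset(), frozenset(head))]           # the original rule
    for r in range(1, len(head) + 1):
        for shifted in itertools.combinations(sorted(head), r):
            hyps = frozenset(neg(l) for l in shifted)  # <>[]~phi per moved claim
            rules.append((hyps, frozenset(head) - frozenset(shifted)))
    return rules

RULES = hypothetical_left_shift(HEAD)
HYPS = {h for hyps, _ in RULES for h in hyps}          # {"-p", "-q"}

def consequences(H):
    derived = set()
    for hyps, head in RULES:
        if hyps <= H and len(head) <= 1:
            derived |= head                 # a single claim (or nothing)
    inconsistent = any(hyps <= H and not head for hyps, head in RULES)
    return derived, inconsistent

answer_sets = []
for r in range(len(HYPS) + 1):
    for H in map(set, itertools.combinations(sorted(HYPS), r)):
        derived, bad = consequences(H)
        if bad:
            continue
        if all((psi in H) == (neg(psi) not in derived) for psi in HYPS):
            answer_sets.append(derived)

print(sorted(map(sorted, answer_sets)))   # [['p'], ['q']]
```

Selecting both hypotheses fires the rule with the empty alternative as head and is rejected as inconsistent; selecting neither fails the selection condition; each single hypothesis yields one of the two answer sets.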
We can now formulate the key question that is the object of this paper in full generality.
Let F be a set of conditional alternatives and H a set of hypotheses. Let [F, H ] denote the set of all alternatives that are logical consequences of F ∪ H . Is there a form of the cut rule that can be applied to a closure of F and H and generate [F, H ], in such a way that properties (A)-(C) discussed at the beginning of the paper hold?
We have claimed that this question can be positively answered in case H is empty and no hypothesis occurs in F, thanks to the first version of the left shift operator and the version of the cut rule given by (†). We will see that there is also a positive answer in the general case. The closure of F and H will be obtained by hypothetical left shift and replacement by true of all occurrences of the members of H in the conditions of the conditional alternatives so obtained. The version of the cut rule to be used is what has been described as (†), except that ϕ has to be assumed to be a condition rather than a claiming condition, and the requirement that ϕ[ξ_0^{n_0}/true, …, ξ_k^{n_k}/true] be logically valid has to be replaced by the requirement that ϕ[◊ξ_0^{n_0}/true, …, ◊ξ_k^{n_k}/true][ξ_0^{n_0}/true, …, ξ_k^{n_k}/true] be logically valid: we set to true, in the condition of the conditional alternative that is the target of the cut, all hypotheses and claims built from one of the k + 1 literals to which the cut applies.

Dealing properly with substitution and validity
Our last form of the cut rule still leaves something to be desired: eliminating the left hand side of the selected conditional alternative by turning it into a valid formula thanks to substitution of claims and hypotheses by true is not a mechanical, syntactic, proof-theoretic operation. But we will proceed in a way that addresses this issue satisfactorily. Recall that we intend to take disjunction and conjunction as operators on (possibly empty) sets. We have mentioned already that this has the advantage of making duplicate disjuncted claims a non-issue. Note now that there is no need to introduce a propositional constant true, as ⋀∅ is logically valid, while ⋁∅ is logically invalid. This will be useful to avoid empty left or right hand sides in a conditional alternative, which would be formally sloppy. But the key point is that we can replace the requirement that ϕ[◊ξ_0^{n_0}/true, …, ◊ξ_k^{n_k}/true][ξ_0^{n_0}/true, …, ξ_k^{n_k}/true] is logically valid by the requirement that ⋀∅ is the result of substituting all occurrences of ◊ξ_0^{n_0}, …, ◊ξ_k^{n_k} and all occurrences of ξ_0^{n_0}, …, ξ_k^{n_k} not preceded by ◊ in ϕ by ⋀∅, collapsing conjunctions, collapsing disjunctions, letting ⋀∅ absorb enclosing disjunctions, and letting ⋁∅ absorb enclosing conjunctions.
For an example, let ϕ be a condition in which the claims □p_1, □p_2, □p_4 and □p_7 occur. When one replaces in ϕ these claims by ⋀∅ and successively applies the transformations described above, one obtains successively simpler conditions and eventually ⋀∅. What we have described is a mechanical, syntactic, proof-theoretic way guaranteed to derive ⋀∅ from a condition in which the occurrences of some claims and hypotheses have been replaced by ⋀∅ whenever the resulting formula logically follows from the set of those claims and hypotheses.
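The four transformations are easily mechanised. The following Python sketch is ours, not the paper's: it assumes a hypothetical encoding of conditions as nested tuples, a claim □l as ('claim', l), a hypothesis ◊□l as ('hyp', l), and disjunctions and conjunctions as ('or', [...]) and ('and', [...]).

```python
# Hypothetical encoding (ours): ('claim', l), ('hyp', l),
# ('or', [c1, ...]) for a disjunction, ('and', [c1, ...]) for a conjunction.
TRUE = ('and', [])   # the empty conjunction: logically valid
FALSE = ('or', [])   # the empty disjunction: logically invalid

def simplify(cond, lits):
    """Substitute by the empty conjunction every claim and hypothesis built
    from a literal in `lits`, then collapse singletons and apply absorption."""
    tag, body = cond
    if tag in ('claim', 'hyp'):
        return TRUE if body in lits else cond
    parts = [simplify(c, lits) for c in body]
    if tag == 'and':
        parts = [p for p in parts if p != TRUE]   # drop valid conjuncts
        if FALSE in parts:                        # the empty disjunction absorbs
            return FALSE
    else:
        parts = [p for p in parts if p != FALSE]  # drop invalid disjuncts
        if TRUE in parts:                         # the empty conjunction absorbs
            return TRUE
    return parts[0] if len(parts) == 1 else (tag, parts)
```

Calling simplify with the literals to which a cut applies returns the empty conjunction exactly when the syntactic reduction described above succeeds.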

Claims, hypotheses, and disjunctive programs
N denotes the set of natural numbers and Ord the class of ordinals.

Definition 3.1 A vocabulary is a countable set of nullary predicate symbols.

Notation 3.2 We denote by V a vocabulary.
Members of V are called atoms (over V). Members of V and negations of members of V are called literals (over V). Given a literal ϕ, we let ∼ϕ denote ¬ϕ if ϕ is an atom, and ψ if ϕ is of the form ¬ψ.

Definition 3.3
The set of conditions (over V) is inductively defined as the smallest set that satisfies the following conditions.
• All expressions of the form □ϕ with ϕ a literal over V are conditions.
• All expressions of the form ◊□ϕ with ϕ a literal over V are conditions.
• All expressions of the form ⋁X with X a countable set of conditions over V are conditions.
• All expressions of the form ⋀X with X a finite set of conditions over V are conditions.

Definition 3.4 We call claim (over V) any condition over V of the form □ϕ. We call hypothesis (over V) any condition over V of the form ◊□ϕ. We call stance (over V) any claim or hypothesis over V.
A few remarks about Definition 3.3 are in order. First, note that all conditions are in negation normal form: negation can be applied to atoms only. Second, note that disjunction and conjunction can be applied to the empty set, yielding a logically invalid and a logically valid formula, respectively. Third, note that contrary to conjunction, disjunction can be applied to an infinite set. The motivation is that it will be formally advantageous to group together all rules that have a common alternative: rather than considering a set of rules of the form {□p_i → □q | i ∈ N}, we will prefer the single rule ⋁{□p_i | i ∈ N} → □q. Infinite vocabularies and infinite sets of rules are natural if one thinks of propositionalising a set of first-order (modal) rules. For instance, the previous set of rules could be obtained by propositionalising the first-order rule ∃x p(x) → q(0) in a setting where all intended interpretations are standard and the set of closed terms is equal to the set of numerals {n | n ∈ N}; then one would map p(n) to p_n for all n ∈ N, q(0) to q, and obtain the former set of rules as an alternative representation. If we worked in a first-order language with standard structures as intended interpretations, that language could sometimes be kept finite thanks to function symbols when infinitely many nullary predicate symbols are needed to perform the propositionalisation. As this would bring no significant difference in the results or in their proofs, we opt for the simpler formulation of an infinite propositional language. So infinite sets of rules are natural objects of study, and disjunctions that apply to infinite sets are natural tools. Moreover, disjunctions can be assumed to operate on countably infinite sets without affecting any of the formal developments. On the other hand, many results would break down if conjunction were allowed to operate on infinite sets.
As we know from Sect. 2, we will consider rules whose right hand sides are of the form ⋁D for some member D of Alt(V), where Alt(V) is defined next.
If, as explained before, one chooses to disjunct the conditions of all rules that have a common alternative, one is then led to define the sets of rules that are our object of study as follows, using either an "implicit" representation (Definition 3.7, mapping to a unique condition the set of literals that, once disjuncted, make up an alternative) or an "explicit" representation (Definition 3.9, putting the mapping in proper logical form). The theory associated with F is defined as the corresponding set of conditional alternatives.

Notation 3.10 Given a disjunctive program F, we let Th(F) denote the theory associated with F.
That would formalise an extended-disjunctive program having as rules p_1 ∧ p_2 → p_3 ∨ ¬p_4 and not p_4 → p_3 ∨ ¬p_4, and no other rule with p_3 ∨ ¬p_4 as head. All rules with the same head are grouped together (thanks to potentially infinite disjunctions) in F. In practice, for most members D of Alt(V), ϕ_D would be ⋁∅, as F would model an extended-disjunctive program with no rule with the disjunction of the members of D as head (with V = {p_1, p_2, p_3, p_4}, Alt(V) is of cardinality 2^8).

Semantics
The formulas that make up the theory associated with a disjunctive program are very specific: they are implications neither side of which contains an occurrence of a formula of the form ◊ϕ with ϕ a literal, in which no modal operator is in the scope of another modal operator except within hypotheses, etc. This implies that we do not need to develop a complete semantics for a set of formulas closed under modal and boolean operators. We opt for keeping the concepts to a minimum and, in particular, avoid resorting to Kripke frames or similar semantic objects. Instead, we restrict the notion of logical consequence we will work with to the part of the language that we strictly need. But it is perfectly possible to embed the notions defined in this section into a full-fledged semantics, and this is done in [16].

Definition 3.12
Let X be a set of stances. We say that X is consistent just in case for all literals ϕ, if □ϕ ∈ X then neither □∼ϕ nor ◊□∼ϕ belongs to X; otherwise we say that X is inconsistent.

Definition 3.13
A set X of stances is closed just in case it is consistent and for all literals ϕ, if □ϕ ∈ X then ◊□ϕ belongs to X.
Essentially, the notion of a closed set of stances provides a convenient, syntactic definition of logical consequence between a set of stances (used as antecedent) and a stance (used as consequent) which suffices for our purposes: we can think of a stance ψ as being a logical consequence of a set X of stances iff ψ belongs to the closure of X (that is, the ⊆-smallest set of stances which is closed and contains X).
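Definitions 3.12 and 3.13 suggest a direct computation of the closure of a finite set of stances. Below is a minimal sketch of ours, assuming stances are encoded as pairs ('claim', l) or ('hyp', l) and literals as strings with a leading '-' marking negation (this encoding is not the paper's).

```python
def neg(lit):
    """The complement ~phi: negate an atom, strip the negation of a negated atom."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def is_consistent(stances):
    """Definition 3.12: no claim whose complementary literal carries
    a claim or a hypothesis in the set."""
    return all(('claim', neg(l)) not in stances and ('hyp', neg(l)) not in stances
               for kind, l in stances if kind == 'claim')

def closure(stances):
    """The smallest closed superset (Definition 3.13): add the hypothesis
    matching every claim; None if the result is inconsistent."""
    closed = set(stances) | {('hyp', l) for kind, l in stances if kind == 'claim'}
    return closed if is_consistent(closed) else None
```

Note that, as in Definition 3.12, only claims are restricted: two complementary hypotheses may coexist in a consistent set.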
The first part of the next definition is motivated by the relationship this framework bears to the answer set semantics, as sketched in Sect. 2. The second part fulfils a different purpose: intuitively, a complete set of stances is the set of all stances that are true at a particular point in a suitable Kripke frame, which suffices to determine the truth of any condition or conditional alternative at that point, hence suffices to define a notion of logical consequence restricted to the language of hypotheses and of the theory associated with a disjunctive program.

Definition 3.14 Let H be a set of hypotheses. A set X of stances is H-complete just in case it is consistent and for all literals ϕ with ◊□ϕ ∈ H, either ◊□ϕ or □∼ϕ belongs to X. A set X of stances is complete just in case, denoting by H the set of all hypotheses, X is H-complete.

Notation 3.16 We denote by S the set of all closed sets of stances (over V).
The next definition exploits the remark that precedes Definition 3.14. If we were not in a modal setting, we could think of a member of S as the atomic diagram of a standard structure that determines the truth value of any sentence in that structure. Here we can think of a member of S as the "stance diagram" of a point of a Kripke frame that determines the truth value of any condition or conditional alternative at that point.

Definition 3.17 Let a member S of S be given.
For ϕ a condition, we say that S forces ϕ and write S ⊩ ϕ iff: • when ϕ is a stance, ϕ ∈ S; • when ϕ is of the form ⋁X, S forces some member of X; • when ϕ is of the form ⋀X, S forces all members of X.
For all conditional alternatives ϕ, we say that S forces ϕ, denoted S ⊩ ϕ, iff either S does not force the condition of ϕ or S forces the alternative of ϕ. If S does not force a condition or a conditional alternative ϕ then we write S ⊮ ϕ. Given a set T of conditions or conditional alternatives, we write S ⊩ T if S forces all members of T, and S ⊮ T otherwise.
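The forcing relation of Definition 3.17 is directly executable. A sketch of ours, assuming a hypothetical encoding of conditions as nested tuples ('claim', l), ('hyp', l), ('or', [...]) and ('and', [...]), which is not the paper's notation:

```python
def forces(S, cond):
    """S forces a condition (Definition 3.17), for S a set of stances."""
    tag, body = cond
    if tag in ('claim', 'hyp'):
        return cond in S            # a stance is forced iff it belongs to S
    if tag == 'or':
        return any(forces(S, c) for c in body)
    return all(forces(S, c) for c in body)  # 'and'

def forces_rule(S, condition, alternative):
    """S forces a conditional alternative iff it does not force its
    condition or it forces its alternative."""
    return not forces(S, condition) or forces(S, alternative)
```

In particular, the empty disjunction is never forced and the empty conjunction always is, matching the conventions on ⋁∅ and ⋀∅.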

Definition 3.18
Given two sets of conditions or conditional alternatives T and X, we say that X is a logical S-consequence of T, or that T logically S-implies X, and we write T ⊨_S X, just in case every member of S that forces T forces X.
If a set of conditions or conditional alternatives T does not logically S-imply a set of conditions or conditional alternatives X then we write T ⊭_S X. The same terminology and notation applies if one or both sets are replaced by a condition or a conditional alternative.

Definition 3.19
Given two sets of conditions or conditional alternatives T 1 and T 2 , we say that T 1 and T 2 are logically S-equivalent just in case T 1 logically S-implies T 2 and T 2 logically S-implies T 1 .

Definition 3.20
Let a disjunctive program F and a set H of hypotheses be given. We say that F is S-consistent with H just in case some member of S forces H ∪ Th(F); otherwise we say that F is S-inconsistent with H.

Definition 3.21 A condition is said to be S-valid iff it is logically S-equivalent to ⋀∅.

Remarks
Let a disjunctive program F = (ϕ_D)_{D∈Alt(V)} be given. First, suppose that ϕ_D = ⋁∅ for all members D of Alt(V) that contain at least one negated atom. Also suppose that for all members D of Alt(V), ϕ_D contains no occurrence of a hypothesis and no occurrence of a claim of the form □¬p. So F formalises a logic program whose rules are all of the form p_{i_1} ∧ · · · ∧ p_{i_n} → p_{j_1} ∨ · · · ∨ p_{j_m} where p_{i_1}, …, p_{i_n}, p_{j_1}, …, p_{j_m} are atoms. Such are the logic programs considered in [17]. They are given a semantics, in terms of the derivation of disjunctions of the form p_{k_1} ∨ · · · ∨ p_{k_p}, that is precisely captured by the notion of a disjunction of the form □p_{k_1} ∨ · · · ∨ □p_{k_p} being a logical S-consequence of Th(F). This is an immediate consequence of a very special case of Proposition 6.10, proved in Sect. 6: the proof system used to establish the validity of Proposition 6.10 corresponds to the procedure that defines the semantics presented in [17].
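In this hypothesis-free, negation-free case, logical S-consequence reduces to classical consequence over sets of atoms, which can be checked by brute force on small programs. A sketch of ours, with rules encoded as pairs of frozensets (body, head), an encoding that is not the paper's:

```python
from itertools import chain, combinations

def models(atoms, rules):
    """Yield every subset of `atoms` satisfying each rule body -> head,
    i.e. containing a head atom whenever it contains the whole body."""
    all_subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    for m in map(frozenset, all_subsets):
        if all(not body <= m or head & m for body, head in rules):
            yield m

def entails(atoms, rules, disjunction):
    """The disjunction is a consequence of the program iff every model
    of the rules meets it."""
    return all(m & disjunction for m in models(atoms, rules))
```

For instance, from p ∨ q, p → r and q → r, the atom r is entailed while p is not, since {q, r} is a model.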
Second, suppose that for all members D of Alt(V), ϕ_D contains no occurrence of a hypothesis. So F formalises a logic program whose rules are all of the form l_{i_1} ∧ · · · ∧ l_{i_n} → l_{j_1} ∨ · · · ∨ l_{j_m} where l_{i_1}, …, l_{i_n}, l_{j_1}, …, l_{j_m} are literals. Then the notion of a disjunction of the form □l_{k_1} ∨ · · · ∨ □l_{k_p} being a logical S-consequence of Th(F) also captures the semantics described in [17], but generalised to logic programs where classical negation is accepted in the bodies and heads of the rules. Moreover, for all sets H of hypotheses and for all disjunctions ϕ of the form □l_{k_1} ∨ · · · ∨ □l_{k_p}, ϕ is a logical S-consequence of Th(F) iff ϕ is a logical S-consequence of H ∪ Th(F): hypotheses do not increase logical power when dealing with disjunctive programs that formalise logic programs with classical negation but without not.
Third, imposing no condition on F, the following holds. Let F′ = (ϕ′_D)_{D∈Alt(V)} be the disjunctive program such that for all members D of Alt(V), ϕ′_D is ϕ_D with any occurrence of a hypothesis of the form ◊□l replaced by □l. Then for all disjunctions ϕ of the form □l_{k_1} ∨ · · · ∨ □l_{k_p}, ϕ is a logical S-consequence of Th(F) iff ϕ is a logical S-consequence of Th(F′): disjunctive programs that formalise logic programs in which not can occur in the bodies of the rules would not add logical power, if hypotheses were not also introduced as "complementary axioms". Proposition 6.10 will embody the full generality of the proposed framework.

Left shift completions and answer sets
The next definition captures the notion of hypothetical left shift discussed in Sect. 2, here defined up to logical S-equivalence, which suffices for our purposes. A left shift completion of a disjunctive program F should be thought of as the closure of F under hypothetical left shift. For instance, ϕ_{{¬p_2, ¬p_3, p_4}}, ϕ_{{p_1, ¬p_2, p_4}} and ϕ_{{p_1, ¬p_2, ¬p_3, p_4}} are different to ⋁∅ because F models an extended-disjunctive program that has rules with ¬p_2 ∨ ¬p_3 ∨ p_4 as head, rules with p_1 ∨ ¬p_2 ∨ p_4 as head, and rules with p_1 ∨ ¬p_2 ∨ ¬p_3 ∨ p_4 as head, and no other rule whose head has at least ¬p_2 and p_4 as disjuncts. Then the theory associated with a left shift completion of F would have a corresponding conditional alternative, up to logical S-equivalence.

Proposition 4.2 Two disjunctive programs such that one is a left shift completion of the other are logically S-equivalent.
Proof Let disjunctive programs F = (ϕ_D)_{D∈Alt(V)} and F′ = (ϕ′_D)_{D∈Alt(V)} be such that F′ is a left shift completion of F. Let D ∈ Alt(V) be given. Since ϕ′_D is logically S-equivalent to the disjunction of ϕ_D with conditions built from the conditions ϕ_{D′} and the hypotheses ◊□∼l, l ∈ D′ \ D, for the strict supersets D′ of D, we conclude that F and F′ are logically S-equivalent.
Recall the illustrated definition of an answer set at the beginning of Sect. 1.2. The definition of an answer set for an extended-disjunctive program is exactly the same, the fact that rule heads can be disjunctions rather than single literals making no difference. That is, given an extended-disjunctive program R, an answer set for R is a consistent set X of literals with the following property. Replace in (the bodies of the rules in) R all formulas of the form not ψ by true if ψ ∉ X, and by false otherwise, and simplify, resulting in a set of rules, say R′. Now for all atoms α, replace in R′ all occurrences of ¬α by a new atom α′, resulting in a new set of rules, say R″. Then the set X′ of members of X where all negated atoms of the form ¬α have been replaced by α′ is the set of logical consequences of R″.
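This property can be tested by brute force on small ground programs. The sketch below is ours, not the paper's: it uses the standard equivalent formulation of the test (X is an answer set iff X is a ⊆-minimal model of the reduct obtained by evaluating the not literals against X) rather than the renaming into fresh atoms, with literals encoded as strings and a leading '-' for ¬.

```python
from itertools import chain, combinations

def neg(l):
    return l[1:] if l.startswith('-') else '-' + l

def consistent(X):
    return all(neg(l) not in X for l in X)

def satisfies(X, positive_rules):
    # X satisfies head <- body iff body ⊆ X implies head ∩ X ≠ ∅
    return all(not body <= X or head & X for head, body in positive_rules)

def reduct(rules, X):
    # delete every rule with a defeated naf-literal, strip naf from the rest
    return [(head, body) for head, body, naf in rules if not naf & X]

def is_answer_set(X, rules):
    red = reduct(rules, X)
    if not (consistent(X) and satisfies(X, red)):
        return False
    proper_subsets = chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X)))
    return not any(satisfies(frozenset(Y), red) for Y in proper_subsets)
```

On the examples used below, not ¬p → q admits {q} as unique answer set, and {p → q, q → p, p ∨ q} admits {p, q}.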
In Sect. 2.4, we saw how an answer set program R can be put in correspondence with a disjunctive program. Clearly, every disjunctive program corresponds in this way to a unique extended-disjunctive program (generalised to allow countable disjunctions on the left hand sides of the rules). Hence answer sets can be defined on the basis of disjunctive programs rather than on the basis of extended-disjunctive programs, resulting in the definition that follows. It is easily verified that Definition 4.3 is equivalent to the standard definition, previously outlined, of an answer set. The verification amounts to rewriting an extended-disjunctive program R as a disjunctive program F, essentially rewriting an occurrence in R of a literal α not preceded by not as □α in F, and rewriting an occurrence in R of a formula of the form not α or not ¬α, with α an atom, as ◊□¬α or ◊□α in F, respectively. In effect, Definition 4.3 is nothing but the standard definition of an answer set modulo the mechanical translation, the mechanical rewriting, of an extended-disjunctive program as a disjunctive program.
To illustrate the need for the second condition in Definition 4.3 with a trivial example (that does not even need disjunction), the logic program not ¬p → q, which has {q} as unique answer set, corresponds in our framework to a disjunctive program F with Th(F) = {◊□p → □q} and Hyp(F) = {◊□p}. Then S = {◊□p, □q} satisfies both conditions of the previous definition, whereas {◊□p, □q, □p} and {◊□p, □q, □r} satisfy only the first one (and have to be ruled out as {q, p} and {q, r} are not answer sets for not ¬p → q). Note that the "standard" left shift completion of a disjunctive program moves all subsets of literals in the head of a given clause to the left. For instance, the logic program {p → q, q → p, p ∨ q} has a left shift completion that determines {p, q} as unique answer set. The correspondence, discussed in Sect. 2, between answer sets and the sets of claims that are logical S-consequences of the theory associated with an associated disjunctive program complemented with a set of hypotheses constrained in a particular way, can now be fully formalised and established: the following conditions are equivalent. • X is an answer set for F. • There exists a complete hypothetical extension H for F such that X is the set of claims that are logical S-consequences of Th(F′) ∪ H.
Proof Suppose that X is an answer set for F. Let S be a member of S that satisfies both items in Definition 4.3 and such that X is the set of claims in S. Let H be the set of all hypotheses ◊□ϕ in Hyp(F′) such that S ⊮ □∼ϕ. Note that H ∩ Hyp(F) is equal to the set of hypotheses in S ∩ Hyp(F). Hence S ∪ H forces Th(F) ∪ H and, by Proposition 4.2, also forces Th(F′) ∪ H. Moreover, for all T ∈ S that force Th(F′) ∪ H, T is Hyp(F)-complete, hence X ⊆ T. Hence X is the set of claims that are logical S-consequences of Th(F′) ∪ H. Conversely, let H be a complete hypothetical extension for F such that X is the set of claims that are logical S-consequences of Th(F′) ∪ H. Obviously, X ∪ H belongs to S, forces Th(F) by Proposition 4.2, and is Hyp(F)-complete since Hyp(F) is a subset of Hyp(F′). Also, every member T of S that forces Th(F) and contains H is such that all members of X are claims in T. Hence X is an answer set for F.

General strategy
We aim to show that given a disjunctive program F and a set H of hypotheses, a modification of the cut rule can be applied to H and any left shift completion of F to generate all alternatives that are logical S-consequences of Th(F) ∪ H, in such a way that properties (A)-(C) listed in Sect. 2 are satisfied. To this end, we first introduce an intermediate proof system and demonstrate that it is complete; we refer to a proof in this system as a tableau proof. Then we will see how a tableau proof can be translated into a proof by cuts.
Tableau proofs are best represented as trees. Say that we try to derive an alternative of the form ⋁D, D ∈ Alt(V), from a disjunctive program F and a set of hypotheses H. We build a tree T whose nodes are labelled with claims, except for the root and possibly some of the leaves. Let N be a node in T that has not been declared to be a leaf, and let X be the set of claims that label the nodes on the path from the root of T up to N. We try to select a conditional alternative R in the theory associated with a left shift completion of F whose condition is seen to be a logical S-consequence of H ∪ X. If R's alternative is the empty disjunction, then N is given a nonlabelled child that is declared to be a leaf. If R's alternative is of the form ⋁{ϕ_1, …, ϕ_n} for some n > 0 and pairwise distinct claims ϕ_1, …, ϕ_n, then N is given n children labelled ϕ_1, …, ϕ_n, and every child that receives a member of D as label is declared to be a leaf. If the construction eventually stops and results in a finite tree not consisting of its root only (this is guaranteed in case F is finite), then all leaves are unlabelled or labelled with nothing but members of D. For the tree to represent a successful tableau proof, D should be empty, in which case Th(F) is S-inconsistent with H, or at least one leaf should be labelled with a member of D. We will verify that it is always possible to build such a tree whenever ⋁D is a logical S-consequence of Th(F) ∪ H, hence that the tableau proof procedure is complete.
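The search for such a tree can be sketched as a simple recursion. The sketch below is a simplification of ours, not the paper's procedure: each conditional alternative is encoded as a triple of a set of body literals (its claims), a set of hypothesis literals, and a head, and a condition is deemed a logical S-consequence of H ∪ X when those sets are included in H and in the current path.

```python
def derivable(D, rules, H, path=()):
    """Try to close every branch below `path`: pick an applicable rule and
    require each literal of its alternative either to belong to the goal D
    (a successful leaf) or to head a closable subtree.  An empty alternative
    closes the branch outright."""
    for body, hyps, head in rules:
        if not (hyps <= H and body <= set(path)):
            continue                  # condition not granted by H and the path
        if any(l in path for l in head):
            continue                  # avoid re-branching on a literal of the path
        if all(l in D or derivable(D, rules, H, path + (l,)) for l in head):
            return True
    return False
```

Since the path never repeats a literal and the vocabulary is finite here, the recursion terminates; for instance, r is derivable from p ∨ q, p → r and q → r.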

Example
To illustrate both tableau proofs and proofs by cuts, consider a disjunctive program F over the vocabulary V = {p_0, …, p_10} such that Th(F) is logically S-equivalent to the set consisting of the conditional alternatives that follow.
It is easy to verify that □p_0 is a logical S-consequence of Th(F) (it will be demonstrated in the detailed illustration of Sect. 6.2).
Let (ϕ′_D)_{D∈Alt(V)} be a left shift completion of F. It is clear that there is no point in left shifting a claim, say ϕ, from the right hand side to the left hand side of one of Th(F)'s conditional alternatives if ∼ϕ does not occur on the right hand side of any other conditional alternative of Th(F). To simplify the matter further, let us ignore all left shifts that are not involved in the derivation of □p_0 from F thanks to the tableau proof tree we are about to present. This allows us not to describe ϕ′_D for all D ∈ Alt(V), but only to provide, for some members D of Alt(V), a condition that is logically S-equivalent to the disjunction of ϕ_D with the conditions of the form ϕ_{D′} ∧ ◊□∼(D′ \ D) for those strict supersets D′ of D, if any, for which the resulting conditional alternative turns out to be useful. These considerations lead to writing down 9 relations of logical S-consequence. The tree depicted next represents a tableau proof of □p_0. It differs slightly from the general description we sketched at the beginning of the section, in that leaves with no label are not represented, and additional information on the nodes is provided in the form of a finite set of numbers, whose meaning will be explained, and that will be needed to convert the tree into a proof by cuts. Also, it is technically more convenient to work with literals rather than claims. Finally, we stop referring to node labels, and rather adopt the usual definition of a tree, to be recalled shortly, under which a node is a finite sequence of entities, here literals; what was referred to above as the label of a node is now the last member of the sequence that defines that node. The first level of T is determined by (2), and expresses that one of p_1, ¬p_2, p_3, p_4 and ¬p_4 holds. The node (p_1, ¬p_2) is determined by (4); its member of index 0 is p_1, and {p_1} is a ⊆-minimal set of claims that logically S-implies ϕ′_{{¬p_2}}, from which ¬p_2 can be generated.
The node (p_1, ¬p_2, p_3) is determined by (5); its member of index 1 is ¬p_2, and {¬p_2} is a ⊆-minimal set of claims that logically S-implies ϕ′_{{p_3}}, from which p_3 can be generated. The node (p_1, ¬p_2, p_3) branches out into (p_1, ¬p_2, p_3, p_5), (p_1, ¬p_2, p_3, ¬p_5), (p_1, ¬p_2, p_3, p_6) and (p_1, ¬p_2, p_3, p_7) as determined by (7); p_3 is the element of index 2 of all those sequences, and {p_3} is a ⊆-minimal set of claims that logically S-implies ϕ′_{{p_5, ¬p_5, p_6, p_7}}, from which one of p_5, ¬p_5, p_6 and p_7 is known to hold. The nodes (p_1, ¬p_2, p_3, p_5, p_0), (p_1, ¬p_2, p_3, ¬p_5, p_0) and (p_1, ¬p_2, p_3, p_6, p_0) are all determined by (0); {p_3, p_5}, {p_3, ¬p_5} and {p_3, p_6} are the sets of elements of index 2 and 3 of these sequences, respectively, and if X denotes any of these sets then X is a ⊆-minimal set of claims that logically S-implies ϕ′_{{p_0}}, from which p_0 is known to hold. We skip a few nodes and move to (p_1, ¬p_2, p_3, p_7, p_8); its members of index 2, 3 and 4 are p_3, p_7 and p_8, and {p_3, p_7, p_8} is a ⊆-minimal set of claims that logically S-implies ϕ′_∅, indicating that p_1, ¬p_2, p_3, p_7 and p_8 cannot hold together and making (p_1, ¬p_2, p_3, p_7, p_8) a leaf of T. The subtrees of T rooted at (¬p_2), (p_3) and (¬p_4) duplicate the subtrees rooted at (p_1, ¬p_2), (p_1, ¬p_2, p_3) and (p_1, ¬p_2, p_3, p_7, ¬p_4), respectively, and are not explicitly represented; if they were depicted then of course, the integers associated with the nodes would have to be appropriately adapted. The tree T represents a tableau proof of □p_0 because it has at least one leaf that ends in p_0 (actually, it has 17 such leaves) and the remaining 5 leaves have no label.

Completeness of the system of tableau proofs
Let us introduce all the terminology and notation relative to sequences and trees that will be needed in the sequel.

Notation 5.1
Given a sequence σ, we denote by rng(σ) (the range of σ) the set of members of σ, by lt(σ) the length of σ, and, in case σ is not the empty sequence, written (), by lst(σ) the last element of σ. Given a sequence σ and an element x, we denote by σ x the concatenation of σ with (x). Given a sequence σ and n ∈ N with n ≤ lt(σ), we denote by σ|_n the initial segment of σ of length n. Given a sequence σ and n ∈ N with n < lt(σ), we denote by σ(n) the (n + 1)-st element of σ. Given a nonempty sequence σ, we write σ⁻ to denote σ truncated from its last element. Given two sequences σ and τ, we write σ ⊆ τ to express that σ is an initial segment of τ.

Definition 5.2
Let a set X be given.
A tree over X is a set of finite sequences of members of X that is closed under initial segments.
Let a tree T over X be given. Note that () is the root of T. An inner node of T is a member of T that has a child in T, namely, a member of T of the form σ x for some x ∈ X. A leaf of T is a member of T that is not an inner node of T. A branch of T is a ⊆-maximal subset B of T with the property that any two members of B are such that one is an initial segment of the other.
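Trees in the sense of Definition 5.2 are immediately representable as sets of tuples. A small sketch of ours, of the prefix-closure test and of the leaf/inner-node distinction:

```python
def is_tree(T):
    """A tree over X: a set of finite sequences closed under initial segments."""
    return all(t[:n] in T for t in T for n in range(len(t)))

def inner_nodes(T):
    # a node is inner iff it is the immediate prefix of some member of T
    return {t[:-1] for t in T if t}

def leaves(T):
    return set(T) - inner_nodes(T)
```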

Notation 5.3 Given a set X, a tree T over X, and a member σ of T, we let Succ_T(σ) denote the set of all x ∈ X such that σ x belongs to T.
It is time to fix the notation for substitution of stances in a condition by ⋀∅. We can now define tableau proofs in accordance with the semi-formal description given in Sect. 5.1. The next proposition shows that the tableau proof procedure is sound and complete.

Proof Let T be a tableau proof of ⋁D from F and H. Let n be the number of inner nodes of T and leaves of T that do not end in a member of D (note that n > 0 whether or not T consists only of the empty sequence). Let X_1, …, X_n be sets of nodes of T such that X_1 = Succ_T(()) and for all nonzero i < n, there exists σ ∈ X_i such that • either σ is an inner node of T and X_{i+1} = X_i ∪ {σ ξ | ξ ∈ Succ_T(σ)} \ {σ}, • or σ is a leaf of T, σ does not end in a member of D, and X_{i+1} = X_i \ {σ}.
Note that X_n is the set of leaves of T that end in a member of D; so in order to show that H ∪ Th(F) ⊨_S ⋁D, it suffices to prove, by Proposition 4.2, that H ∪ Th(F′) logically S-implies ⋁{⋀rng(σ) | σ ∈ X_n} (from which we can derive that F is S-inconsistent with H in case X_n is empty). We prove by induction that for all nonzero i ≤ n, H ∪ Th(F′) logically S-implies ⋁{⋀rng(σ) | σ ∈ X_i}. It is immediately verified that H ∪ Th(F′) logically S-implies ⋁{⋀rng(σ) | σ ∈ X_1} (including if X_1 = ∅, which can only be the case if F is S-inconsistent with H). Let a nonzero i < n be given, and assume that H ∪ Th(F′) ⊨_S ⋁{⋀rng(σ) | σ ∈ X_i}. Assume that F is S-consistent with H, and let S ∈ S force H ∪ Th(F′). Suppose that X_{i+1} is of the form X_i ∪ {τ ξ | ξ ∈ Succ_T(τ)} \ {τ} for some inner node τ of T. It is immediately verified that if S ⊮ ⋀rng(τ) then S ⊩ ⋁{⋀rng(σ) | σ ∈ X_i \ {τ}}, and if S ⊩ ⋀rng(τ) then S ⊩ ⋁{⋀rng(τ ξ) | ξ ∈ Succ_T(τ)}; hence S forces ⋁{⋀rng(σ) | σ ∈ X_{i+1}}. Suppose that X_{i+1} is of the form X_i \ {τ} for some leaf τ of T that does not end in a member of D. Since S does not force ⋀∅ → ⋁∅, we infer that S ⊮ ⋀rng(τ), hence again, S ⊩ ⋁{⋀rng(σ) | σ ∈ X_{i+1}}, as wanted.
For the converse, suppose that H ∪ Th(F′) logically S-implies ⋁D. Define a sequence (T_n)_{n∈N} of trees over the set of literals as follows. Set T_0 = {()}. Let n ∈ N be given, and assume that T_n has been defined. First, we let T_n ⊆ T_{n+1}. Let σ be a leaf of T_n. If rng(σ) ∩ D ≠ ∅ then no strict extension of σ belongs to T_{n+1}. Suppose that rng(σ) ∩ D = ∅. If there exists a least i ∈ N such that H ∪ rng(σ) ∪ D_i is consistent, ϕ′_{D_i}[H ∪ rng(σ)] is S-valid and there is no initial segment τ of σ such that {τ ξ | ξ ∈ D_i} ⊆ T_n, then the strict extensions of σ in T_{n+1} are precisely the sequences of the form σ ξ with ξ ∈ D_i; otherwise no strict extension of σ belongs to T_{n+1}. This completes the definition of T_{n+1}. Set T = ⋃_{n∈N} T_n. We are done if we show that T is a tableau proof of ⋁D from F and H. Let B be a branch of T. We first show the following.
Suppose for a contradiction that either B is infinite, or B is finite but (2) does not hold. Let X be the set of literals that occur in B. It is immediately verified that H ∪ X is consistent. Let S be the ⊆-minimal member of S that contains H ∪ X. Note that X contains no member of D whether B is finite or not, and so S does not force ⋁D, whether D is empty or not. Let i ∈ N be such that D = D_i, so S forces ϕ′_{D_i}. Let Y be the set of all literals ψ in D_i such that H ∪ X ∪ {ψ} is inconsistent. Let j ∈ N be such that D_j = D_i \ Y. Then by the choice of F′, S forces ϕ′_{D_j}, and H ∪ X ∪ D_j is obviously consistent. It is then easy to verify that, by assumption on B and by construction of (T_n)_{n∈N}, there exists a member τ of B with the following property. If D_j = ∅ then B clearly ends in τ, which contradicts the assumption that (2) above does not hold. If D_j ≠ ∅ then τ ξ belongs to B for some ξ ∈ D_j and S forces ⋁D_j. So if S does not force ϕ′_∅ then we infer that S forces H ∪ Th(F′), which contradicts the assumption that H ∪ Th(F′) ⊨_S ⋁D. So we have shown that B satisfies (A) and (B) above. In particular, we have shown that T contains no infinite branch, which, by König's lemma, implies that T is finite. Finally, from the construction of (T_n)_{n∈N} and the properties of T's branches demonstrated above, we conclude that T is a tableau proof of ⋁D from F and H. The next corollary emphasises that tableau proofs are suitable for refutation.

Corollary 5.7 Let a disjunctive program F, a left shift completion F′ of F, and a set H of hypotheses be given. Then the following conditions are equivalent.
• F is S-inconsistent with H. • There exists a tableau proof of ⋁∅ from F′ and H.
It is also well worth taking note of the two corollaries that follow, the second of which expresses the compactness of the tableau proof procedure.

General strategy
Let us further exploit the example given in the previous section and explain, on the basis of that example, how a tableau proof can be converted into a proof by cuts. Let T be the tree depicted in the previous section, which, recall, represents a tableau proof of □p_0. The strategy is to explore T depth first and label some nodes N in T with a member of Alt(V) determined by the (possibly empty) set of N's children and by the labels associated with N|_{i_1+1}, …, N|_{i_k+1} where {i_1, …, i_k} is the (possibly empty) set of numbers associated with N in T (those labels will necessarily exist). There might be subtrees of T that are skipped during this exploration; the nodes of those subtrees will then not be labelled. Also, some nodes might receive several labels over time: when a leaf gets labelled, the exploration of T proceeds by backtracking, and the label assigned to that leaf replaces the label (guaranteed to exist) that had been previously assigned to the node we backtrack to. The fact that we will eventually obtain a proof of □p_0 by cuts will be captured by the fact that {p_0} will be the label last assigned to a node. More generally, a proof by cuts of an alternative of the form ⋁D, D ∈ Alt(V), will require that the label last assigned to a node be a subset of D.
Let us explain in a little more detail how labels are determined. Let N be a node in T that has not received any label yet but whose parent, if any, has received some label. If N ends in p_0 (more generally, if we try to prove ⋁D for some D ∈ Alt(V) and N ends in a member of D) then N is necessarily a leaf, and it receives the label last assigned to its parent. Suppose that we are not in that situation. If N has a parent and the label last assigned to it does not contain the literal N ends in, then the subtree of T rooted at N is skipped and none of its nodes receives any label. Suppose that we are not in that situation either. Let k ∈ N and literals ξ_1, …, ξ_k be such that N has k children in T, those children being N ξ_1, …, N ξ_k. Let n ∈ N be the number of integers associated with N in T, and let i_1, …, i_n be those integers. Let ψ_1, …, ψ_n be the last elements of N|_{i_1+1}, …, N|_{i_n+1}, and let D_1, …, D_n be the labels that have (necessarily) been assigned to N|_{i_1+1}, …, N|_{i_n+1}, respectively (less precisely, ψ_1, …, ψ_n are the literals on the path from the root of T to N at positions i_1, …, i_n, and D_1, …, D_n are the labels currently associated with those positions, respectively). Then {ψ_1, …, ψ_n} is a ⊆-minimal set of claims that logically S-implies ϕ′_{{ξ_1,…,ξ_k}}, ψ_1 belongs to D_1, …, and ψ_n belongs to D_n. Hence the cut rule can be applied to ⋁D_1, …, ⋁D_n and ϕ′_{{ξ_1,…,ξ_k}} → ⋁{ξ_1, …, ξ_k}. If ⋁{χ_1, …, χ_m} is the consequent of that application of the cut rule, then {χ_1, …, χ_m} is the label we first (and possibly last) assign to N.

Completeness of the system of proofs by cuts
What can be derived from a disjunctive program and a set of hypotheses by applying the cut rule iteratively is defined in the notation that follows. It formalises the notation [F, H], which was informally introduced towards the end of Sect. 2.4, and essentially captures the notion of proof by application of a suitable form of the cut rule identified in our discussion of Sect. 2. We can finally formulate and prove the key result of this paper.

Proof. Set F = (ϕ_E)_{E∈Alt(V)}. By Proposition 5.6, let T be a tableau proof of D from F and H. Consider the set of members of T that contain no occurrence of any member of D. Let N ∈ ℕ be the cardinality of that set, and let (σ_0, …, σ_{N−1}) be an enumeration of its members such that σ_0 = () and, for all i < N, σ_i is a child in T of a member of {σ_j | j < i} of maximal length (so (σ_i)_{i<N} is a depth-first enumeration of the set). For all n ∈ ℕ and all members σ of the set of length n, we let spt(σ), the support of σ, denote a ⊆-minimal subset of {0, …, n − 1} such that ϕ_{Succ_T(σ)}[H ∪ {σ(i) | i ∈ spt(σ)}] is S-valid (it is immediately verified that such a set exists). We now inductively define a sequence ([σ_i])_{i<N} of members of Alt(V), each of length either 0 or lt(σ_i) + 1. Set [σ_0] = (Succ_T(σ_0)). Let a nonzero i < N be least such that [σ_i] has not yet been defined. Then, by construction, [σ_{i−1}] ≠ (). We now determine some integer j with i ≤ j < N and define [σ_k] for all k ∈ {i, …, j}. Let j be the least integer such that either j = N, or i ≤ j < N, lt(σ_j) ≤ lt(σ_i), and lst(σ_j) ∈ lst([σ_{i−1}]). Then for all k ∈ {i, …, j − 1}, set [σ_k] = (). If j = N then we are done with the construction, so suppose otherwise. Note that for all strict initial segments τ of σ_j, [τ] has been defined and is different from ().
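The support spt(σ) is a ⊆-minimal subset of {0, …, n − 1} satisfying a validity condition. Under the assumption that the condition is available as a black-box predicate (the predicate used below is only a placeholder standing in for the S-validity check), such a minimal subset can be computed by greedy shrinking:

```python
def minimal_subset(universe, holds):
    """Shrink `universe` to a subset that still satisfies `holds` and
    from which no single element can be removed without breaking it.
    `holds` is a black-box predicate, a stand-in for the S-validity
    condition defining spt(sigma)."""
    current = set(universe)
    assert holds(current), "the full set must satisfy the condition"
    changed = True
    while changed:
        changed = False
        for x in list(current):
            if x in current and holds(current - {x}):
                current -= {x}
                changed = True
    return current

# Placeholder condition: the subset must still cover {0, 2}.
spt = minimal_subset({0, 1, 2, 3}, lambda s: {0, 2} <= s)  # {0, 2}
```

Note that this yields *a* ⊆-minimal subset, not necessarily one of minimum cardinality, which matches the requirement on spt(σ) in the proof.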
We prove by induction that the following holds for all i < N with [σ_i] ≠ ().
(1) For all n < lt(σ_i), … Claims (1) and (2) are straightforward for i = 0. Let k < N be such that [σ_k] ≠ (), and assume that (1) and (2) hold for all i < k with [σ_i] ≠ (). Let i be the maximal integer smaller than k such that [σ_i] ≠ (). If lt(σ_k) > 1 then, using part (1) of the inductive hypothesis, the fact that (σ_j)_{j<N} is a depth-first enumeration of T, and the definition of [σ_k]|_{lt(σ_k)−1}, it is easy to verify the following.
(†) For all n < lt(σ_k) − 1, [σ_k](n) is included in … Using part (2) of the inductive hypothesis, the fact that (σ_j)_{j<N} is a depth-first enumeration of T, and the fact that the penultimate element of [σ_k] is lst([σ_i]), it is easy to verify the following, whether lt(σ_k) = lt(σ_i) + 1 or lt(σ_k) ≤ lt(σ_i).
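The enumeration (σ_0, …, σ_{N−1}) used in the proof is essentially a preorder depth-first traversal: each σ_i is a child of the longest already-enumerated node that still has unvisited children. A minimal sketch, encoding branches as tuples of literals (the tree below is illustrative only, not one from the paper):

```python
def depth_first(tree, node=()):
    """Preorder enumeration of a tree given as a child map
    {node: [children]}; the root is the empty sequence ()."""
    order = [node]
    for child in tree.get(node, []):
        order.extend(depth_first(tree, child))
    return order

# Illustrative tree: () has children (p,) and (q,); (p,) has (p, r).
tree = {(): [("p",), ("q",)], ("p",): [("p", "r")]}
order = depth_first(tree)  # [(), ("p",), ("p", "r"), ("q",)]
```

Such an enumeration always starts with σ_0 = (), as the construction requires.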

Conclusion
We have presented a classical, modal approach to disjunctive logic programs. It is classical in three respects. First, in that only classical negation is used. Second, in that a classical proof technique, based on a generalisation of the cut rule, is complete. Third, in that the semantics can be defined in terms of logical consequence, rather than in terms of minimal or preferred models. The semantics is flexible enough to capture the well-known semantics that have been proposed, by possibly expanding the set of rules with formulas referred to as hypotheses, required to satisfy certain conditions. This has been demonstrated for the answer set semantics.