An Abductive Question-Answer System for the Minimal Logic of Formal Inconsistency mbC

The aim of this paper is to define an Abductive Question-Answer System for the minimal logic of formal inconsistency mbC. As a proof-theoretical basis we employ the Socratic proofs method. The system produces abductive hypotheses: answers to abductive questions concerning the derivability of formulas from sets of formulas. We integrate the generation and the evaluation of hypotheses via consistency and significance constraints imposed on the system's rules.


Introduction
In abductive reasoning we aim at filling a certain gap between a knowledge base Γ and a puzzling phenomenon A, unattainable from Γ (cf. [15,23]). The commonly accepted schema is this: from an observation A and the known rule if H, then A, infer H (see [21, 5.189]). However, this schema may be and has been studied in detail in different ways, which leads to different models of abduction [12,24]. From a computational point of view, it is of particular appeal to consider an algorithmic perspective, according to which an abductive hypothesis H "is legitimately dischargeable to the extent to which it makes it possible to prove (or compute) from a database, a formula not provable (or computable) from it, as it is currently structured" [13, p. 88]. A brief overview of procedures for generation of abductive hypotheses defined in such a spirit can be found in [17].
There are four primary ingredients of the algorithmic account of abduction [17, p. 2]: (i) a basic logic (which determines the language of specification of A, H and Γ), (ii) a proof method (which determines the exact mechanics of the procedure of generation of abducibles), (iii) a hypotheses generation mechanism (which determines the way the chosen proof method is applied in order to generate abducibles), and (iv) an implementation of criteria for comparative evaluation of different abducibles.
In this paper we seek to introduce a framework for automated generation and evaluation of abductive hypotheses in the form of an Abductive Question-Answer System. Our proof-theoretical basis is set up by the Socratic proofs method [26]: we shall employ the concepts of an erotetic calculus and of a Socratic transformation of a question [28], which have proven to be effective in automated proof search [9,18,27,29]. As the reader will see, there are close affinities between erotetic calculi and sequent calculi with semantically invertible rules [28, p. 98]. The basic logic of choice is the minimal logic of formal inconsistency mbC, which is one of the logics of formal inconsistency [5,6]. On the one hand, these logics, being paraconsistent, are tools for reasoning under conditions which do not presuppose consistency [20, p. 465]. On the other hand, they "have a remarkable way of reintroducing consistency into the non-classical picture: they internalize the very notions of consistency and inconsistency at the object-language level." [6, p. 1]. Therefore, one can reconstruct classical logic inside paraconsistent logic. Moreover, the more expressive language allows for the formulation of abductive hypotheses that could not be obtained by means of classical logic. In the light of these arguments we can say that the abductive procedure we describe in this paper does not contradict procedures based on classical logic, but extends them.
We define an Abductive Question-Answer System for mbC in the form of an erotetic calculus augmented with abductive rules, which allow for a systematic search for answers to abductive questions, that is, abductive hypotheses. Consistency and significance constraints, imposed on applications of the abductive rules, warrant that the generated abductive hypotheses meet those basic proof-theoretical criteria. Thus, in the case of our system, the comparative evaluation of hypotheses is embedded into the procedure for their generation. This is worth noticing, as separating hypotheses generation and evaluation is by far the more popular approach within the algorithmic perspective (cf. [13, p. 47], [17]).
We start with introducing the minimal logic of formal inconsistency, mbC (Section 2) and its proof theory employing the Socratic proofs method (Section 3). On this basis we define our abductive procedure (Section 4), including an algorithm for generation of abductive hypotheses (p. 24).

Minimal Logic of Formal Inconsistency
Logics of formal inconsistency (LFIs) are paraconsistent logics which are able to recover classically valid reasoning by means of a special operator •. A formula of the form •A should be read "it is consistent that A" or "A behaves classically". The main intuition is the following: when we reason with a set of premises Γ = {A 1 , . . . , A n } such that no A i (1 ≤ i ≤ n) contains the consistency operator •, we use the deductive machinery of some paraconsistent logic. But when we obtain the information that some of these formulas are consistent, i.e. the • operator occurs somewhere in Γ, we can use the much stronger deductive machinery of classical logic to reason classically about fragments of Γ. Suppose we have the set of premises {p, ∼ p}, where ∼ is a paraconsistent negation. Using some paraconsistent logic, like CLuN [2] for example, one cannot deduce an arbitrary formula from this set. But if we have an additional premise which says that p is safe or consistent, formalized by the formula •p, we can deduce an arbitrary formula B, just as in classical logic.
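The contrast can be checked mechanically against the bivaluation semantics of mbC. The sketch below is our own illustration (all names are ours): it enumerates candidate bivaluations over the finitely many formulas involved, using the mbC clauses that v(A) = 0 forces v(∼A) = 1, and that v(•A) = 1 excludes A and ∼A being both true:

```python
from itertools import product

def mbc_valuations():
    """Enumerate candidate bivaluations over p, q, ∼p, •p (a finite fragment)."""
    for p, q, np, cp in product([0, 1], repeat=4):
        if p == 0 and np == 0:               # v(p) = 0 forces v(∼p) = 1
            continue
        if cp == 1 and p == 1 and np == 1:   # v(•p) = 1 excludes p, ∼p both true
            continue
        yield {'p': p, 'q': q, '∼p': np, '•p': cp}

def entails(premises, conclusion):
    """Γ entails A iff every valuation making Γ true makes A true."""
    return all(v[conclusion] == 1
               for v in mbc_valuations()
               if all(v[a] == 1 for a in premises))

print(entails(['p', '∼p'], 'q'))        # False: no explosion from p, ∼p
print(entails(['p', '∼p', '•p'], 'q'))  # True: with •p the premises trivialize
```

The first query fails because some bivaluation makes both p and ∼p true while q stays false; adding •p rules such valuations out, so the entailment holds vacuously.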
We use the language L mbC of the minimal logic of formal inconsistency, which consists of a countably infinite set Var = {p 1 , p 2 , . . .} of propositional variables and ¬ (classical negation), ∼ (paraconsistent negation), • (consistency operator), ∧ (conjunction), ∨ (disjunction), → (implication) as primitive connectives. The set of well-formed formulas (wffs for short) is defined as usual. By a literal we mean a propositional variable or its negation. Literals are denoted by l, k, m and so on. If l = p i , then l̄ denotes ¬p i , and if l = ¬p i , then l̄ = p i . The literals l and l̄ are called complementary literals. Naturally, the complement of l̄ is l.
The Hilbert-style system for mbC consists of the axioms and rules given in [6]. An mbC-semivaluation is a function which behaves in a standard way in the case of the classical connectives and satisfies additional conditions for ∼ and •. From a proof-theoretical point of view it is more convenient to work with the notion of mbC-valuation instead of semivaluation, for the following reason. The notion of mbC-semivaluation has a drawback: the truth value of some formulas is not determined by their subformulas (if v(A) = 1 then v(∼ A) is not determined). In order to assign values to formulas of the form ∼ A and •A in a consistent manner, we introduce a new assignment function λ. Using the notion of mbC-valuation we are able to give simple soundness and completeness theorems for our proof method.
The notion of mbC-semivaluation is described in [8] and in [6] (but under the name "bivaluation semantics for mbC"). The term semivaluation is sometimes used in a slightly different sense; see for example [19]. The notion of mbC-valuation is defined in [8].
Now we define a new language L + mbC which is an extension of the language L mbC . For proof-theoretical reasons we add to the latter the symbol χ; the set of well-formed formulas of L + mbC is defined accordingly. For the extended language L + mbC we define an extension of the mbC-valuation. Semantically, the introduced operator χ does not change the truth value of a given formula. It is only a syntactic device for identifying formulas preceded by the consistency operator and the paraconsistent negation, i.e. formulas whose truth value is not determined by their subformulas.
The extended valuation λ # behaves in the same manner on the set FOR mbC as λ, and additional conditions are satisfied for formulas containing χ. Note that in the extended language L + mbC we are able to express normal forms of formulas of the language L mbC . This fact has the following consequences: it is relatively easy to design invertible sequent-calculus rules for the non-classical connectives, and the completeness theorem for the introduced calculus can be based on the strategy of counter-model construction. These advantages will become apparent in the next section on the proof theory of mbC.

Proof Theory of mbC
There are several proof-theoretical descriptions of the logic mbC: a standard tableau method for several LFIs (where the rules operate on signed formulas) [4], and the KE tableau method (a variant of the standard tableau method in which some form of the cut rule is essential), which has also been implemented [20]. Moreover, there is a sequent calculus for mbC [10] and a system based on the resolution rule and grounded in Inferential Erotetic Logic [8]. Our approach is based on Inferential Erotetic Logic as well, and we use some techniques and concepts introduced in [8], but we have to forgo the resolution rule in order to obtain a simple and intuitive model of abductive reasoning. In fact, our system is akin to some version of a hypersequent calculus [27].
The language L ? mbC is an object-level language in which our erotetic calculi will be worded. The meaningful expressions of the language L ? mbC belong to two disjoint sets. The first one consists of declarative well-formed formulas (d-wffs for short). The second one is a set of erotetic well-formed formulas (e-wffs or simply questions).
To obtain the vocabulary of L ? mbC we add the following signs to the vocabulary of L + mbC : ⊢ (a turnstile, which intuitively stands for the derivability relation in mbC), ? (a question mark for constructing questions of L ? mbC ), , (a comma) and ; (a semicolon). Questions of L ? mbC are of the form ?(Φ), where Φ is a finite, non-empty sequence of sequents of L ? mbC .
Let Φ = Γ 1 ⊢ Δ 1 , . . . , Γ n ⊢ Δ n be a sequence of sequents of L ? mbC . The question ?(Φ) is interpreted as follows: is it the case that Δ 1 is entailed by Γ 1 and . . . and Δ n is entailed by Γ n ? The terms of Φ are called constituents of the question ?(Φ). If Γ entails Δ in mbC, then we say that the sequent Γ ⊢ Δ of L ? mbC is closed; otherwise it is open.
A sequent φ is basic iff φ is of one of the following forms, where B ∈ FOR + mbC and Γ, Γ′, Γ′′, Δ, Δ′, Δ′′ may be empty. Naturally, each basic sequent is closed. The erotetic calculus E mbC for the logic mbC consists of the rules for classical connectives and specific mbC rules.

Table 2. Rules for classical connectives

Table 3. Specific rules of mbC

An active sequent is specified in the premise of a given E mbC rule scheme. A principal sequent is specified in the conclusion of a given E mbC rule scheme. For each E mbC rule scheme exactly one sequent is active in the premise of that rule. A formula which is specified in an active sequent of a given rule scheme is called an active formula of that rule scheme. Formula(s) specified in the principal sequent of a given rule is (are) called principal formula(s) of that rule. Unanalyzable formulas are those that are not active formulas in any premise of any E mbC rule. This means that these formulas cannot be further decomposed by means of the introduced rules; however, they guarantee that the rules for the non-classical connectives are sound and invertible (see Lemma 2).
If a sequent consists of unanalyzable formulas only, it is called an atomic sequent. If each constituent of a question Q is atomic, then Q is called an atomic question. An s-transformation s is called complete iff the last term of s is an atomic question. There are two kinds of unanalyzable formulas: positive unanalyzable formulas are of the forms (i), (iii), (v); negative unanalyzable formulas are of the forms (ii), (iv), (vi). Complementary unanalyzable formulas are the pairs of formulas (i) and (ii), (iii) and (iv), (v) and (vi): the negative unanalyzable formula is obtainable by prefixing the positive unanalyzable formula with the classical negation, and, similarly, the positive unanalyzable formula is obtainable by removing the classical negation from the beginning of the negative unanalyzable formula.
We will say that the sequent Γ ⊢ Δ is valid iff there is no valuation λ # such that λ # (A) = 1 for every term A of Γ and λ # (B) = 0 for every term B of Δ.

Definition 9. (complexity) The complexity com(A) of a formula A ∈ FOR + mbC is defined inductively. This concept of formula complexity is not standard, because it does not simply measure the number of occurrences of propositional connectives in the formula (cf. items 2, 4 and 5). At the semantic level the connectives ∼ and • differ from the other ones by being slightly more complicated (their truth value is not always determined by the truth values of their subformulas), and the introduced notion of complexity reflects this difference. Note that if A is an unanalyzable formula, then com(A) ≤ 1.

Lemma 1. For every sequent Γ ⊢ Δ of L ? mbC there exists a finite complete s-transformation of the question ?(Γ ⊢ Δ).

Proof. If Γ ⊢ Δ is a sequent containing only unanalyzable formulas (an atomic sequent), then s 1 = ?(Γ ⊢ Δ) is an atomic question and s = s 1 is a finite complete s-transformation.
If the sequent Γ ⊢ Δ is not atomic, then from Lemma 3.1 we know that the rules which could be applied to ?(Γ ⊢ Δ) reduce the complexity of some formula in Γ or Δ. As both Γ and Δ are finite, by applying the rules of E mbC consecutively we obtain an atomic question.
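For intuition, the decompose-until-atomic idea can be sketched for the classical connectives alone (the specific mbC rules for ∼, • and χ from Table 3 are omitted here; the encoding and function names are our own). A formula is a string atom or a tuple; the function decomposes a sequent with invertible rules and, on atomic sequents, checks the basic-sequent condition of a shared formula:

```python
def closed(gamma, delta):
    """Decide a classical sequent Γ ⊢ Δ by invertible-rule decomposition."""
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest = gamma[:i] + gamma[i+1:]
            op = f[0]
            if op == 'and':   # left ∧: replace A∧B with A, B
                return closed(rest + [f[1], f[2]], delta)
            if op == 'or':    # left ∨: branch on the disjuncts
                return closed(rest + [f[1]], delta) and closed(rest + [f[2]], delta)
            if op == 'imp':   # left →: branch into Γ ⊢ Δ, A and Γ, B ⊢ Δ
                return closed(rest, delta + [f[1]]) and closed(rest + [f[2]], delta)
            if op == 'not':   # left ¬: move A to the succedent
                return closed(rest, delta + [f[1]])
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest = delta[:i] + delta[i+1:]
            op = f[0]
            if op == 'and':   # right ∧: branch on the conjuncts
                return closed(gamma, rest + [f[1]]) and closed(gamma, rest + [f[2]])
            if op == 'or':    # right ∨: replace A∨B with A, B
                return closed(gamma, rest + [f[1], f[2]])
            if op == 'imp':   # right →: move A left, keep B right
                return closed(gamma + [f[1]], rest + [f[2]])
            if op == 'not':   # right ¬: move A to the antecedent
                return closed(gamma + [f[1]], rest)
    return any(a in delta for a in gamma)   # atomic: basic iff a shared atom

print(closed(['p', ('imp', 'p', 'q')], ['q']))   # True (modus ponens)
print(closed([('imp', 'p', 'q')], ['q']))        # False
```

Each rule strictly reduces the complexity of one formula, which is exactly why the recursion, like the s-transformation above, terminates.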

Lemma 2. For every E mbC rule scheme the active sequent of that rule is valid iff the principal sequent(s) is (are) valid.
Proof. The proof goes by cases. Let us consider the R • rule scheme and assume that the sequent Γ ⊢ Δ, •A, Δ′ is valid. The only non-trivial case is when λ # (•A) = 1. By Definition 3 we then have that λ # (A) = 0 or λ # (χ ∼ A) = 0, and in each case the corresponding principal sequent is valid. Similarly, it is easy to see (in the light of Definition 3) that the validity of the principal sequents ensures the validity of the active sequent of the R • rule scheme.
The proof goes analogously for the rest of the E mbC rule schemes.

Lemma 3. (countermodel) Let Γ ⊢ Δ be an atomic and not basic sequent. Then there exists an extended mbC-valuation which invalidates Γ ⊢ Δ.
Proof. Note that the following clauses define an assignment which determines an extended mbC-valuation invalidating Γ ⊢ Δ.
Theorem. (completeness) If a sequent φ of L ? mbC is valid, then φ is provable in E mbC .

Proof. Assume that φ is valid but not provable in E mbC . Then, by Lemma 1, there exists a finite Socratic transformation s = s 1 , . . . , s n of the question ?(φ) such that at least one constituent ψ of the last question of s is an atomic but not basic sequent. By Lemma 3 there exists an extended mbC-valuation which invalidates ψ. Thus at least one constituent of the last question s n of s is not valid. Therefore, by Lemma 2, at least one constituent of the question s n−1 is not valid. By applying Lemma 2 consecutively we arrive at the conclusion that the sequent φ is not valid, contrary to the assumption.

Abduction in mbC
Following the algorithmic account of abduction, we interpret the abductive problem as a requirement to fill the deductive gap between the premises and the conclusion. In our Abductive Question-Answer System, such requests will be expressed through abductive questions.
Definition 10. (Abductive question) An abductive question (or abductive problem) has the following form: ?(Ψ), where Ψ is a non-empty sequence of sequents such that at least one term of Ψ is an open sequent of L ? mbC . • If Ψ = φ is a one-term sequence, then the question ?(Ψ) is called a simple abductive question.
An atomic abductive question ?(Γ ⊢ Δ) can be read as follows: which formulas close Γ ⊢ Δ?
A compound abductive question can be read as follows: which formulas close every term of Γ 1 ⊢ Δ 1 , . . . , Γ n ⊢ Δ n ? The fact that B is a correct analytic answer to a question Q will be denoted by B ∈ c(Q).
As an example let us consider a question Q * with:
• the set of all propositional variables occurring in Q * : V Q * = {p, q, r, s, z},
• exemplary direct analytic answers to Q * : p, q, p → q,
• exemplary correct analytic answers to Q * : ¬p, s, ¬p ∧ z.
A formula which closes each constituent of the last question of an s-transformation of ?(Γ ⊢ Δ) also closes Γ ⊢ Δ. The proof goes by induction on the length of the s-transformation s and by an inspection of the rules of E mbC . The case of length(s) = 1 is trivial. Assume that the property holds for an arbitrary s-transformation s, where length(s) = i. We will show that it also holds for an s-transformation s′, where length(s′) = i + 1. Let us assume that the last rule applied in s′ was L α . Let the formula C close each constituent of the last question of s′ (Q i+1 ). Let Γ 1 , α 1 , α 2 , Γ 2 ⊢ Δ be the conclusion sequent of L α . We know from the assumption that C closes this sequent, i.e. C, Γ 1 , α 1 , α 2 , Γ 2 ⊢ mbC Δ. Since v(α) = 1 iff v(α 1 ) = 1 and v(α 2 ) = 1, the formula C closes the premise sequent of the last applied rule, namely C, Γ 1 , α, Γ 2 ⊢ mbC Δ. The length of the s-transformation with the last question deleted equals i. A similar argument applies for the other rules.

Table 4. Examples of abductive rules
Definition 13. (Partial answer) Let Q be an abductive question with a constituent Γ i ⊢ Δ i that is open. A partial answer for Q is a formula A such that the addition of A to Γ i results in Γ i ⊢ Δ i becoming a closed sequent, or a sequent which, after transformation to an atomic sequent, is a closed one.
Note that it could be the case that a partial answer is also a correct analytic answer, i.e. that it closes all open sequents of Q.

Definition 14. (Abductive rule) Let Q be a minimal abductive question and let A be a partial answer for Q. The premise of an abductive rule is Q and the conclusion is A.

Each rule in Table 4 has a question as the premise and a declarative formula as the conclusion (which is a partial answer to the question-premise). A formula which is a conclusion, when added to the antecedent of an active sequent in an abductive rule, makes this antecedent inconsistent (in the classical sense) or generates a link between the antecedent and the succedent.
Most abductive procedures consist of two steps: abductive hypotheses generation and subsequent evaluation of generated hypotheses against predefined criteria [17]. There are several candidates for such criteria [1,8]. In this paper we are concerned with just two of them, which are of fundamental importance from the proof-theoretical point of view.
The first one is consistency: an abductive hypothesis should be consistent with the initial data or knowledge base. Note that the notion of consistency used here is relative to the logic mbC: an abductive hypothesis of the form ∼ p is consistent with a knowledge base of the form Γ = {p, q, r}, but not, for example, with the knowledge base Γ = {p, q, •p}. The reason for introducing the consistency criterion is that we do not want a knowledge base to become inconsistent or trivial in the sense that every formula could be inferred from it. The consistency criterion may also be called a non-triviality criterion.
The second one is significance: an abductive hypothesis should not allow one to derive Δ by itself; that is, in filling the deductive gap both the abductive hypothesis and the initial database should be significant.
Another point is that we do not want to carry out two separate steps for the generation and the evaluation of abductive hypotheses, but rather to build an abductive procedure that generates good abductive hypotheses directly. Therefore we do not implement the evaluation criteria as a separate stage; instead, we implement restrictions placed on the applications of the abductive rules which ensure that the constructed abductive hypotheses are consistent and/or significant.
The abductive rules from Table 4 generate partial answers that can be inconsistent with the knowledge base Γ, or it could be the case that such a generated partial answer is itself sufficient to derive Δ. As we would like to rule out such possibilities, we introduce downward (or Hintikka) saturated sets and dual downward saturated sets. Partial answers generated under the restriction concerning downward saturated sets (we will call it the consistency constraint) will be consistent with the knowledge base Γ. Similarly, partial answers generated under the restriction concerning dual downward saturated sets (we will call it the significance constraint) will not be too strong, i.e. Δ will not be obtainable from the partial answer alone.
Definition 15. (Downward saturated set) Let Γ be a sequence of formulas of L + mbC . By a downward saturated set (or Hintikka set) corresponding to a sequence Γ we mean a set U Γ which fulfils, apart from conditions (i)-(viii), the following closure condition: (ix) nothing more belongs to U Γ except those formulas which enter U Γ on the grounds of conditions (i)-(viii).

Definition 16. (Consistency property) By a consistency property corresponding to a sequence Γ we mean a finite set U c Γ = {U 1 Γ , . . . , U n Γ }, which contains all downward saturated sets for Γ that do not contain complementary unanalyzable formulas.

Lemma 4. (Hintikka's Lemma) Each set belonging to the consistency property of Γ is satisfiable.

The idea of the proof is to construct a valuation which sends each unanalyzable formula to 1. The next step is to show by induction that it can be extended so as to satisfy all formulas from Γ. For a detailed proof see for example [11].

Lemma 5. If a non-empty sequence of formulas Γ is satisfiable, then at least one downward saturated set corresponding to Γ belongs to the consistency property of Γ.
Proof. Assume that no downward saturated set corresponding to Γ belongs to the consistency property of Γ. Then all such sets contain complementary unanalyzable formulas. Since the construction of downward saturated sets reflects the conditions on mbC-valuations, Γ cannot be satisfiable, which contradicts the assumption.
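A rough sketch of how downward saturated sets and the consistency property can be computed for the classical connectives (the saturation conditions for ∼, • and χ are omitted, and the encoding and names are our own):

```python
def hintikka_sets(pending, done=frozenset()):
    """All downward saturated sets for `pending` (classical connectives only)."""
    if not pending:
        return [set(done)]
    f, rest = pending[0], pending[1:]
    done = done | {f}
    if not isinstance(f, tuple):                  # unanalyzable: nothing to add
        return hintikka_sets(rest, done)
    op = f[0]
    if op == 'and':                               # A∧B ∈ U ⇒ A, B ∈ U
        return hintikka_sets(rest + (f[1], f[2]), done)
    if op == 'or':                                # A∨B ∈ U ⇒ A ∈ U or B ∈ U
        return (hintikka_sets(rest + (f[1],), done)
                + hintikka_sets(rest + (f[2],), done))
    if op == 'imp':                               # A→B ∈ U ⇒ ¬A ∈ U or B ∈ U
        return (hintikka_sets(rest + (('not', f[1]),), done)
                + hintikka_sets(rest + (f[2],), done))
    g = f[1]                                      # op == 'not'
    if not isinstance(g, tuple):                  # negated atom: unanalyzable
        return hintikka_sets(rest, done)
    if g[0] == 'not':                             # ¬¬A ⇒ A
        return hintikka_sets(rest + (g[1],), done)
    if g[0] == 'and':                             # ¬(A∧B) ⇒ ¬A or ¬B
        return (hintikka_sets(rest + (('not', g[1]),), done)
                + hintikka_sets(rest + (('not', g[2]),), done))
    if g[0] == 'or':                              # ¬(A∨B) ⇒ ¬A, ¬B
        return hintikka_sets(rest + (('not', g[1]), ('not', g[2])), done)
    return hintikka_sets(rest + (g[1], ('not', g[2])), done)  # ¬(A→B) ⇒ A, ¬B

def consistency_property(formulas):
    """Keep only sets with no complementary unanalyzable formulas."""
    def no_clash(u):
        return not any(('not', a) in u for a in u if not isinstance(a, tuple))
    return [u for u in hintikka_sets(tuple(formulas)) if no_clash(u)]

cp = consistency_property([('imp', 'p', 'q'), 'p'])
print(len(cp), 'q' in cp[0])  # 1 True
```

The branch containing both p and ¬p is discarded, mirroring how sets with complementary unanalyzable formulas are excluded from the consistency property.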
The notion of downward saturated set can be dualized in order to tackle the problem of the significance restriction. A detailed study of such sets in the context of First-Order Logic can be found in [7]. Definition 17. (Dual downward saturated set) Let Δ be a sequence of formulas of L + mbC . By a dual downward saturated set (or dual Hintikka set) corresponding to a sequence Δ we mean a set W Δ which fulfils, apart from conditions (i)-(viii), the following closure condition: (ix) nothing more belongs to W Δ except those formulas which enter W Δ on the grounds of conditions (i)-(viii).
A dual Hintikka set W Δ is d-satisfied under extended mbC-valuation λ # iff at least one element of W Δ is true under λ # . A dual Hintikka set W Δ is d-satisfied by each extended mbC-valuation (W Δ is d-valid) iff there is no extended mbC-valuation λ # such that each formula in W Δ is false under λ # . If W Δ = ∅, then W Δ is d-inconsistent.

Corollary 4.3. A dual Hintikka set W Δ is d-satisfied by each extended mbC-valuation (W Δ is d-valid) iff for some unanalyzable formula A, both A and its complement belong to W Δ .
Definition 18. (Non-validity property) By a non-validity property corresponding to a sequence Δ we mean a finite set W nv Δ = {W 1 Δ , . . . , W n Δ }, which contains all dual Hintikka sets for Δ that do not contain complementary unanalyzable formulas.

Lemma 6. (Dual Hintikka's Lemma) For an arbitrary Δ, each set belonging to the non-validity property of Δ is not d-valid.
The idea of the proof of the dual Hintikka's lemma is analogous to that of Hintikka's lemma, but now we want to construct a valuation which sends each unanalyzable formula to 0. Then we can easily extend such a valuation to falsify all formulas from Δ.

Table 5 contains constraints on the rules from Table 4. The intuitions underlying those constraints are the following: in cases when we want to generate an abductive hypothesis that is consistent with the knowledge base Γ, we look for those formulas that are consistent with at least one Hintikka set. In other words, we are looking for a formula that is true under some mbC-valuation λ # under which all formulas from Γ are also true.
The constraints for the significance of the abductive hypothesis reflect similar notions. We are looking for those hypotheses that do not make at least one dual Hintikka set d-valid. In other words, when we extend such a dual Hintikka set by the negation of the formula which is our hypothesis, and turn the extended dual Hintikka set into a formula by linking all its elements by disjunctions, we do not want to obtain a formula which is true under every mbC-valuation λ # .
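For the classical fragment, the significance constraint admits a direct semantic reading: a hypothesis is significant iff some valuation makes it true while falsifying every formula of the succedent. A small sketch (our own encoding; the mbC-specific operators are omitted):

```python
from itertools import product

def holds(f, v):
    """Classical truth of a formula (atom string or tuple) under valuation v."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not': return not holds(f[1], v)
    if op == 'and': return holds(f[1], v) and holds(f[2], v)
    if op == 'or':  return holds(f[1], v) or holds(f[2], v)
    if op == 'imp': return (not holds(f[1], v)) or holds(f[2], v)

def significant(hyp, delta_formulas, atom_names):
    # hyp is significant iff it does not yield the succedent by itself:
    # some valuation makes hyp true and every Δ-formula false
    for bits in product([False, True], repeat=len(atom_names)):
        v = dict(zip(atom_names, bits))
        if holds(hyp, v) and not any(holds(d, v) for d in delta_formulas):
            return True
    return False

print(significant('q', ['q'], ['q']))                    # False: q derives q
print(significant(('imp', 'p', 'q'), ['q'], ['p', 'q'])) # True
```

This matches the intuition above: the disjunction of the dual Hintikka set extended by the negated hypothesis is valid exactly when no such falsifying valuation exists.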
Lemma 7. Let U Γ be a downward saturated set corresponding to some Γ which belongs to the consistency property of Γ. If the complement l̄ of an unanalyzable formula l does not belong to U Γ , then the set U Γ ∪ {l} is consistent.

Proof. U Γ is consistent by the definition of the consistency property. Let us assume that l̄ ∉ U Γ . If U Γ ∪ {l} were inconsistent, then l̄ ∈ U Γ ∪ {l}, and hence l̄ ∈ U Γ , which contradicts the assumption.
Lemma 8. Let U Γ be a downward saturated set corresponding to some Γ which belongs to the consistency property of Γ. If l ∉ U Γ or k̄ ∉ U Γ , then the set U Γ ∪ {l → k} is consistent.

Proof. The proof is analogous to the proof of Lemma 7.

Now we are going to prove that abductive hypotheses generated by the abductive rules used in accordance with the constraints are consistent with the initial knowledge base and significant. For that reason we introduce Algorithm 1 (see p. 24), which creates the s-transformation, the consistency property and the non-validity property for an initial question Q = ?(Γ ⊢ Δ), and then uses the abductive rules along with the restrictions to produce abductive hypotheses.

Theorem 4.4. Each abductive hypothesis generated by Algorithm 1 in which each abductive rule is applied with the consistency constraint is consistent with the initial knowledge base.
Proof. The proof follows from Lemmas 7 and 8 and from the construction of U c+ Γ .

Lemma 9. Let W Δ be a dual Hintikka set belonging to the non-validity property of Δ. If l̄ ∉ W Δ , then the set W Δ ∪ {l} is not d-valid.

Proof. We know that W Δ is not d-valid, i.e. there exists an extended mbC-valuation λ # such that each formula in W Δ is false under λ # . Since l̄ ∉ W Δ , we can assume that λ # (l) = 0. It follows that W Δ ∪ {l} is not d-valid.
An analogous lemma holds for the case of formulas of the form l → k; the proof is analogous to the proof of Lemma 9.

A formula l does not entail A 1 ∨ . . . ∨ A n in mbC (where l is a literal) if and only if the dual Hintikka set W = W Δ ∪ {l̄}, where W Δ = {A 1 , . . . , A n }, is not d-valid.
(→) There exists an extended mbC-valuation λ # such that λ # (l) = 1 and λ # (A 1 ∨ . . . ∨ A n ) = 0. In this case λ # (l̄) = 0 and each formula in W is false under λ # . Therefore W is not d-valid. (←) Assume that W is not d-valid. There exists an extended mbC-valuation λ # such that each formula in W is false under λ # . In this case λ # (l) = 1 and λ # (A 1 ∨ . . . ∨ A n ) = 0, hence l does not entail A 1 ∨ . . . ∨ A n .

Examples
We shall provide two examples in order to explain how Algorithm 1 generates abductive hypotheses. At the end of the section we provide some remarks about the way the algorithm works.
Example 1. Let us consider the abductive question Q = ?(p → (q → r), ¬(q → ∼ s) ⊢ z). The complete s-transformation of Q is assigned to s. The open sequents from the last term of s are assigned to Φ:
• ¬p, q, s, ¬χ ∼ s ⊢ z,
• r, q, s, ¬χ ∼ s ⊢ z.
The number of elements in Φ: x = 2, therefore Θ = {1, 2}. The consistency property contains the sets U 1 Γ , U 2 Γ and U 3 Γ , among them U 3 Γ = {p → (q → r), ¬(q → ∼ s), q, s, ¬χ ∼ s, q → r, r}. The non-validity property contains one set, W 1 Δ . Let us assume that we randomly get j = 1, and further that we randomly get r = 1, therefore R = R 1 abd , and the partial answer generated by means of the rule R is a = p. In the next step we have to cross out from the consistency property those Hintikka sets that are inconsistent with a, i.e. the sets U 1 Γ and U 2 Γ . After this step the consistency property contains only U 3 Γ . Similarly, we leave only those dual Hintikka sets in the non-validity property that are not d-valid with a. In this case nothing changes, because W 1 Δ is not d-valid with a. The set Ω is enlarged by the partial hypothesis a = p and j = 1 is removed from Θ, therefore Θ = {2}.
Θ still contains one element, which is now assigned to j = 2. At this stage the algorithm can randomly assign r = 1 again. However, there is no partial hypothesis that could be generated by means of R 1 abd in accordance with the consistency constraint for R 1 abd . The reason is that there is only one Hintikka set left in the consistency property, and it contains the complementary unanalyzable formulas of all the formulas that belong to the antecedent of the sequent φ. The algorithm cannot execute the instructions in the if block and the value for r is randomly assigned again.
Let us assume that r = 2 this time. Then R = R 2 abd and the generated partial answer can be a = q → z, since it is consistent with U 3 Γ and not d-valid with W 1 Δ . The set Ω is enlarged by a and j = 2 is removed from Θ, leaving Θ = ∅ as a result. Therefore the condition for breaking the while loop is fulfilled. The set Ω is transformed into the abductive hypothesis by linking all partial answers contained in it with conjunction: p ∧ (q → z). The addition of this hypothesis to the Γ from the initial abductive question Q results in the question Q * = ?(p ∧ (q → z), p → (q → r), ¬(q → ∼ s) ⊢ z), which is no longer an abductive one.
Example 2. Let us now consider an abductive question Q′ = ?(Γ′ ⊢ Δ′). The complete s-transformation of Q′ is assigned to s. The open sequents from the last term of s are assigned to Φ, among them:
• ¬p, ¬r ⊢ ¬z, χ ∼ z.
The number of elements in Φ: x = 3, therefore Θ = {1, 2, 3}. The consistency property and the non-validity property are constructed as in the previous example. Let us assume that we randomly get j = 1, and further that we randomly get r = 2, therefore R = R 2 abd ; the partial answer generated by means of the rule R is a = p → ¬z. In the next step we have to cross out from the consistency property those Hintikka sets that do not fulfil the consistency restriction for R. After this step the consistency property contains the same Hintikka sets. Similarly, we leave only those dual Hintikka sets in the non-validity property that are not d-valid with a. In this case nothing changes, because W 1 Δ is not d-valid with a. The set Ω is enlarged by the partial hypothesis a = p → ¬z and j = 1 is removed from Θ, therefore Θ = {2, 3}.
Θ still contains elements. Let us assume that we randomly get j = 2, and that r = 5 this time, so that R = R 5 abd and a partial answer a is generated by means of R. The following Hintikka sets do not fulfil the consistency restriction for R: U 1 Γ , U 2 Γ , U 5 Γ ; therefore they are removed from the consistency property. As in the case of the previous partial hypothesis, a is not d-valid with W 1 Δ and the non-validity property does not change. The set Ω is enlarged by a and j = 2 is removed from Θ, leaving Θ = {3} as a result. Therefore, in the next step, j = 3. There are only two abductive rules which can generate a partial hypothesis for φ, namely R 1 abd and R 2 abd . Assuming that r = 1, a partial hypothesis is generated by means of R 1 abd . The Hintikka sets U 3 Γ and U 6 Γ do not fulfil the consistency restriction for R and are removed from the consistency property. Since there is a Hintikka set left in the consistency property and a is not d-valid with the W 1 Δ set from the non-validity property, the constructed partial hypothesis is consistent and significant. Θ = ∅ and the while loop is broken. As in the previous example, the set Ω is transformed into the abductive hypothesis by linking all partial answers contained in it with conjunction. The question ?(Γ′ ∪ Ω ⊢ Δ′) is not an abductive one.
Our algorithm exhibits some weaknesses that should be mentioned. First of all, the algorithm is just a scheme intended to depict how the abductive procedure works, rather than an optimised implementation of an abductive hypotheses generator. Another point is that the algorithm will not terminate in cases when it is impossible to generate a partial answer by means of the proposed abductive rules used along with the constraints. There are at least two possible situations of this kind: the abductive goal is inconsistent with the knowledge base, or the already generated partial answers make it impossible to generate further partial answers.
The algorithm is also not optimised for finding the shortest possible abductive hypotheses; nevertheless, it is possible that it will find them. It is also easy to see that in some cases the algorithm will not recognise that a partial answer for one open sequent can be a partial answer for other open sequents.

Discussion
In this section we want to compare our model of abductive reasoning with two other approaches.

Carnielli's System
There was an earlier attempt, made by Carnielli, to model abductive reasoning in the context of paraconsistent logic. We will briefly compare his system to ours (for details, see [3]).
As we have mentioned at the beginning of the paper, four ingredients of the algorithmic approach to abduction can be distinguished: a basic logic (which gives us a formal language and a system of formulas considered valid), a proof method (the way in which this logic is given), a hypotheses generation mechanism, and criteria which rule out hypotheses that are not good enough. The logic used by Carnielli is the LFI1 system, an extension of mbC for which a simple 3-valued semantics exists. As a proof method, a signed version of analytic tableaux is used. Passing to the last two ingredients, the situation becomes more complicated: it seems to us that the procedure of generating hypotheses and the mechanism of hypotheses evaluation are interrelated. Such properties of hypotheses as consistency (non-triviality in Carnielli's system), analyticity or minimality are forced by the very definitions of abductive problem and abductive solution (see Definition 5.1 in [3]). This approach is very different from ours: we try to keep the definition of the hypotheses generation mechanism neutral with respect to the properties one may think of as desirable. We also deliver a simple implementation of the most frequently accepted properties, consistency and significance, but we let the users decide which properties they want to deploy.

Another major difference between the systems is the form of the hypotheses which can be generated. In Carnielli's system hypotheses are collections of atoms; this is determined by the proof method used. One of the consequences of this approach is that disjunctive hypotheses are impossible to obtain (we will say more on this when discussing some specific examples). Another is that only analytic hypotheses are accepted. Our approach is more general: we stipulate in what way an abductive rule, which enables hypotheses generation, has to function, and we provide two examples, one of them enabling law-like hypotheses (see [25]).
This approach is, in a way, open-ended: new abductive rules can be added which enable more interesting hypotheses, in particular non-analytic ones, which are impossible in Carnielli's system. Let us consider an eminently simple abductive problem: we want to obtain q from a knowledge base consisting solely of p. In the context of AQAS one can pose the abductive question ?(p ⊢ q). In Carnielli's system we start by constructing a tableau: in the root of the tree we list all formulas from the knowledge base, each with the symbol T assigned, and the abductive goal with F assigned:

T p
F q

There is no rule which can be applied to simplify the problem further. In Carnielli's system two abductive hypotheses can be generated: F p and T q. The first is (classically) inconsistent with the knowledge base. The second is certainly too strong: from T q it follows that q is true, but that is exactly what we were trying to explain.
In AQAS we start with the aforementioned question, which cannot be further simplified. By means of question-answer rules we can formulate three hypotheses: ¬p (where ¬ denotes classical negation), q and p → q. The first two hypotheses do not meet the consistency and significance restrictions, but the third one does.
This simple example shows that AQAS is able to generate good (in the above case the only good) hypotheses which are out of reach for Carnielli's system.
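The consistency and significance constraints at work in this example can be sketched with a brute-force truth-table evaluator. This is a hypothetical illustration of ours, restricted to the classical fragment; the function names are not part of AQAS:

```python
from itertools import product

# Knowledge base Gamma = {p}, abductive goal A = q.
# Candidate hypotheses: ~p, q, and p -> q (classical semantics).
ATOMS = ["p", "q"]

def entails(premises, goal):
    """Classical entailment by truth-table: every valuation that
    satisfies all premises also satisfies the goal."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in premises) and not goal(v):
            return False
    return True

def consistent(formulas):
    """Satisfiability by truth-table."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in formulas):
            return True
    return False

p = lambda v: v["p"]
q = lambda v: v["q"]
not_p = lambda v: not v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]

kb, goal = [p], q
for name, h in [("~p", not_p), ("q", q), ("p -> q", p_implies_q)]:
    fills_gap = entails(kb + [h], goal)   # Gamma + H yields A
    cons = consistent(kb + [h])           # consistency constraint
    signif = not entails([h], goal)       # significance: H alone must not yield A
    print(name, fills_gap, cons, signif)  # only p -> q passes all three checks
```

Running the check confirms the discussion above: ¬p fails consistency, q fails significance, and only p → q satisfies both constraints while filling the gap.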
In the next example we show that AQAS produces more good hypotheses than the other systems considered. Our knowledge base consists of p → q and q → r, and we want to derive r. The initial step is the transformation of the abductive question:

?(p → q, q → r ⊢ r)
?(¬p, q → r ⊢ r ; q, q → r ⊢ r)
?(¬p, ¬q ⊢ r ; ¬p, r ⊢ r ; q, q → r ⊢ r)
?(¬p, ¬q ⊢ r ; ¬p, r ⊢ r ; q, ¬q ⊢ r ; q, r ⊢ r)

There is only one open constituent of the last question, namely the sequent ¬p, ¬q ⊢ r, and it can be closed in AQAS (with the consistency and significance constraints) by any of the following formulas: p, q, ¬p → r, ¬q → r. In Carnielli's system only p and q can be obtained, because an abductive hypothesis is considered there to be a conjunction of literals; more complicated formulas are not reachable, which can result in the system producing no hypothesis at all even though one exists, as the previous example has shown.
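The decomposition steps of this transformation can be sketched as a miniature, hypothetical re-implementation of ours. Only the left-implication rule is included, formulas are nested tuples, and a sequent counts as closed when the goal or a complementary pair (classical negation) appears among its premises, as in the example; the real calculus for mbC is richer than this sketch:

```python
# Formulas: atoms are strings, ("not", A) is negation, ("imp", A, B)
# is implication. A sequent is a pair (premises_tuple, goal).

def neg(a):
    return ("not", a)

def imp(a, b):
    return ("imp", a, b)

def is_closed(seq):
    """Closed if the goal occurs on the left, or the left-hand side
    contains a formula together with its (classical) negation."""
    lhs, rhs = seq
    if rhs in lhs:
        return True
    return any(neg(a) in lhs for a in lhs)

def decompose(seq):
    """Left-implication rule: replace A -> B among the premises by
    two constituents, one with ~A and one with B."""
    lhs, rhs = seq
    for i, f in enumerate(lhs):
        if isinstance(f, tuple) and f[0] == "imp":
            rest = lhs[:i] + lhs[i + 1:]
            return [((neg(f[1]),) + rest, rhs), ((f[2],) + rest, rhs)]
    return None

def transform(question):
    """Apply decomposition until no rule applies; return the open
    constituents of the resulting question."""
    changed = True
    while changed:
        changed = False
        for i, seq in enumerate(question):
            if is_closed(seq):
                continue
            step = decompose(seq)
            if step:
                question = question[:i] + step + question[i + 1:]
                changed = True
                break
    return [s for s in question if not is_closed(s)]

start = [((imp("p", "q"), imp("q", "r")), "r")]
print(transform(start))  # one open constituent remains: ~q, ~p |- r
```

The single surviving sequent corresponds to ¬p, ¬q ⊢ r above, the constituent that the abductive rules then close with one of the four listed hypotheses.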

Abductive Logic Programming
The next abductive procedure with which we want to compare our approach is Abductive Logic Programming (ALP); details of the method can be found in [16]. The ALP framework consists of three ingredients: a logic program P (the knowledge base), a set of abducibles A (i.e. potential abductive hypotheses) and a set of integrity constraints IC, in which we can express knowledge additional to the logic program P. ALP is aimed at modeling the syllogistic perspective on abductive reasoning [12].
The logic used in ALP is the fragment of first-order logic employed in logic programming, in which only universally quantified implications (clauses) are used: the antecedent of an implication is a set of literals and its consequent is an atom. In ALP all variables are systematically substituted by constants from a finite set, so we use propositional examples for ALP. The proof method is standard SLD-resolution, i.e. backward reasoning in SLD fashion. The mechanisms for the generation and evaluation of abductive hypotheses are intertwined. One of the starting ingredients of ALP is the set of abducibles A, which is assumed to be given at the start. In [16] the set A contains either all atoms that occur in the logic program P, or only those atoms of P that do not appear in the consequent of any implication. Abductive hypotheses are defined as subsets of A. The latter method of generating A therefore imposes a minimality restriction on abductive hypotheses. In addition, the proof procedure, together with the above-mentioned method of generating the set of abducibles, guarantees that the obtained abductive hypotheses are consistent with the initial knowledge base represented by the logic program P and the set of integrity constraints IC.
Differences between the approach described in this paper and ALP lie in the foundations of both methods. In ALP abductive goals and hypotheses are restricted to literals, whereas our procedure imposes no such restriction on abductive goals, and the generated abductive hypotheses include law-like statements. Furthermore, we generate abductive hypotheses, while in the ALP framework they are picked from a set of abducibles given at the start. As a consequence, contrary to the ALP approach, we are able to produce abductive hypotheses for abductive goals containing information that does not belong to the initial knowledge base. There are many implementations of Abductive Logic Programming, e.g. in Prolog [22] or in a neuro-symbolic system [14].
Let us consider the same two examples as in the previous subsection. For the first abductive problem, where we have p as our knowledge base and q as the abductive goal, our procedure generates three abductive hypotheses, i.e. ¬p, q and p → q, with one of them (p → q) meeting both the consistency and the significance restriction. In this case the ALP procedure is not able to produce any hypothesis.
In the second example the knowledge base consists of the formulas p → q and q → r, and the abductive goal is r. Our approach generates the following four abductive hypotheses that are consistent and significant: p, q, ¬p → r, ¬q → r. For the ALP procedure we assume that P = {q ← p, r ← q} and that the set of integrity constraints IC is empty. According to the two ways of selecting the set of abducibles we have either A = {p, q} or A = {p}. In the first case ALP produces three hypotheses: {p}, {q} and {p, q}. In the second case we obtain only one abductive hypothesis, {p}.
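The ALP computation for this example can be sketched by brute force. This is a hypothetical illustration of ours, not an SLD implementation: derivability is checked by forward chaining over the program P = {q ← p, r ← q}, and every subset of the abducibles is tested:

```python
from itertools import chain, combinations

# Program as a map head -> list of bodies (each body is a list of atoms).
P = {"q": [["p"]], "r": [["q"]]}

def derivable(goal, facts):
    """Forward-chaining derivability of an atom from P plus facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in P.items():
            if head not in known and any(all(b in known for b in body)
                                         for body in bodies):
                known.add(head)
                changed = True
    return goal in known

def abductive_solutions(goal, abducibles):
    """All subsets D of the abducibles that, added as facts,
    make the goal derivable from P."""
    subsets = chain.from_iterable(
        combinations(abducibles, n) for n in range(len(abducibles) + 1))
    return [set(d) for d in subsets if derivable(goal, d)]

print(abductive_solutions("r", ["p", "q"]))  # three solutions: {p}, {q}, {p, q}
print(abductive_solutions("r", ["p"]))       # one solution: {p}
```

With A = {p, q} the sketch reproduces the three hypotheses listed above, and with A = {p} only {p}; note that no subset construction of this kind can ever yield a law-like hypothesis such as ¬q → r.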

Summary and Conclusion
In this paper we have introduced an Abductive Question-Answer System for the minimal logic of formal inconsistency mbC. The system produces abductive hypotheses, which are answers to abductive questions concerning derivability of formulas from sets of formulas. We integrated the generation and evaluation of hypotheses via consistency and significance constraints imposed on the system rules. Our further research will be concerned with optimization issues. We also plan a modular implementation of a more diverse set of evaluation criteria, which would allow for producing hypotheses exhibiting different characteristics, depending on the particular choice of criteria.
Additionally, we have compared our procedure with two alternative approaches. Generally speaking, abductive hypotheses in ALP and in Carnielli's system are conjunctions of literals, whereas in AQAS each hypothesis can be considered a conjunctive normal form which may consist of non-empty disjunctions (due to the interdefinability of implication and disjunction in the presence of classical negation). In both cases the model of abductive reasoning proposed in this paper is more flexible with regard to the admissible abductive goals and the creation of abductive hypotheses. In addition, our system clearly distinguishes between the generation and evaluation of abductive hypotheses, while in both other approaches this division is not clear-cut.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.