1 Introduction

Like most research carried out by the Polish School of Argumentation, our approach is rooted in the Polish tradition of methodology and pragmatics stemming from the Lvov-Warsaw School [cf. (Koszowy and Araszkiewicz 2014), this issue]. Using the concept of logical probability elaborated by Ajdukiewicz (1974), we adjust and develop the simple method for evaluating argument force presented by Tokarz (2006). The model of argument proposed in this paper consists of two components: (1) a formal model for identifying argument structure and (2) a formal model for computing the acceptability (credibility) of argument conclusions (the terms ‘credibility’ and ‘acceptability’ are used interchangeably).

First we present the background of our approach. We give a brief overview of elementary ways of expanding simple arguments into more complex argumentative structures, and discuss the evaluation method for them recommended by Tokarz (Sect. 2). Our contribution is the introduction of precise and, in comparison with other approaches, simple set-theoretical definitions of the notions used to describe these structures (Sect. 3). These definitions allow us to consider non-standard or fallacious structures such as divergent, incoherent or circular arguments. Next, we propose a numerical method for argument evaluation (Sect. 4). This method shows how, depending on the structure of an argument, the acceptability of its premises can be transformed into the acceptability of its conclusion. Specifically, our contribution is the introduction of an algorithm which allows us to evaluate convergent arguments. Finally, we discuss some related work on argument structure and evaluation, and some issues that can constitute subject matter for further studies (Sect. 5).

2 Background

Our research on formalization of argument structure was motivated by the methodological need to clarify the fundamental concepts of argumentation theory, as they were introduced in the Polish literature by Hołówka (1998) and by Tokarz (2006) and his collaborators (Szymanek et al. 2003).

Our approach rests on a conception of argument which is widespread in critical thinking and informal logic. The key feature of this conception is the distinction between linked and convergent arguments. It is not easy to ascertain who first made this distinction and who introduced the related method of diagramming, which is the graph-theoretical method of representing arguments in informal logic. Reed et al. (2007) indicate Whately (1836) as a forerunner of this method. As for the linked-convergent distinction and its representation, they refer to Beardsley (1950) and Freeman (1991); however, at least Thomas (1973) should be mentioned here as well.

Linked arguments are usually diagrammed using one arrow to represent the relationship of support between their joint premises and the conclusion (Fig. 1a). The premises of convergent arguments support the conclusion independently, so the diagrams represent each of them as connected with the conclusion by a separate arrow (Fig. 1b). Combining these two types of arguments, or series of them, yields more complex structures (Fig. 1c).

Fig. 1 Examples of linked, convergent and multilevel argumentative structures

The method of evaluation recommended by Tokarz can be briefly described as follows. Integers from 1 to 5 are assigned to the first premises and inferences of a given argument (the first premises are those propositions that are not conclusions of any inference). If the number 5 is assigned to a premise it means that the proposition is fully credible; 1 means that it is fully non-credible (i.e. its negation is fully credible); 3 stands for a neutral value; 2 and 4 are intermediate values. In the case of inferences, 5 means that the conclusion follows from the premises; 1 means that the negation of the conclusion follows from the premises; 3 that the conclusion is independent of the premises; 4 that relative to these premises the conclusion is more credible than its negation; and 2 that the reverse holds. In the next steps of the procedure we compute the value of the final conclusion of the argument, using the operations of minimum and maximum as follows. To determine the value of the conclusion of a single inference we take the minimum of the numbers assigned to it and to its linked premises, and to determine the value of the conclusion of many convergent inferences we take the maximum of the numbers calculated for each of the inferences separately. The entire argument is acceptable if the number assigned to its final conclusion is 4 or 5.
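
A minimal sketch of this 1–5 min/max procedure in Python; the data layout, function names and the toy argument are ours and purely illustrative:

```python
from typing import Dict, List, Tuple

# A sequent is a pair (premises, conclusion); an argument is a list of sequents.
Sequent = Tuple[Tuple[str, ...], str]

def tokarz_value(goal: str,
                 sequents: List[Sequent],
                 premise_score: Dict[str, int],
                 inference_score: Dict[Sequent, int]) -> int:
    """Value of `goal` on the 1-5 scale: minimum over a sequent's linked premises
    and its inference score, maximum over the convergent sequents for `goal`."""
    supporting = [s for s in sequents if s[1] == goal]
    if not supporting:                      # a first premise
        return premise_score[goal]
    return max(
        min([inference_score[s]] +
            [tokarz_value(p, sequents, premise_score, inference_score)
             for p in s[0]])
        for s in supporting
    )

# Illustrative argument: two convergent lines of support for 'c'.
seqs: List[Sequent] = [(("a1", "a2"), "c"), (("b1",), "c")]
premises = {"a1": 5, "a2": 3, "b1": 4}
inferences = {seqs[0]: 5, seqs[1]: 4}
print(tokarz_value("c", seqs, premises, inferences))  # max(min(5,5,3), min(4,4)) = 4
```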

The advantages of this method are its simplicity and intuitiveness. A closer analysis, however, reveals some difficulties. First of all, as there is only one intermediate value between 3 and 5 (and between 1 and 3), it is impossible to strengthen the argument by adding a new convergent reasoning which is acceptable but not rated at 5. Moreover, the operation of maximum selects only the strongest of the convergent arguments, while the rest are skipped over in the computation. This means that formulating additional convergent arguments is pointless. Note that the same idea (it can be called the ‘maximum principle’) is used in the Carneades argumentation system when the proof standard ‘preponderance of the evidence’ is employed (Gordon and Walton 2009). On the other hand, the operation of minimum seems to be too ‘liberal’ when applied to linked arguments with many premises that are not fully credible. Intuitively speaking, two doubts are more than one doubt (just as the probability of the occurrence of two separate events at the same time is lower than the probability of each of them individually). Furthermore, many doubts taken together regarding propositions that, considered separately, are to some degree credible, can make the set of these propositions not credible (as an improbable coincidence may consist of probable component events). A further undesirable consequence seems to be that what counts in the evaluation of linked arguments is actually the acceptability of the weakest premise, so that it is useless to improve such an argument by increasing the credibility of other premises (or of the inference).

These objections lead to the conclusion that the number of intermediate values should be unlimited and the operations of minimum and maximum should be revised. For this purpose we turn to the concept of logical probability as defined by Ajdukiewicz: ‘the logical probability of a statement A relative to a statement B is the highest degree of the certainty of acceptance of the statement A to which we are entitled by a fully certain and valid acceptance of the statement B’ (Ajdukiewicz 1974, p. 121). Thus, we will use the notion of logical probability to model the credibility of arguments in a formal way. To specify our understanding of the ‘degree of credibility’, let us cite one more characteristic, which accurately supplements Ajdukiewicz’s definition: ‘It is the degree of belief of a “perfectly rational being” who has precisely as much information as we do’ (Kemeny 1959, pp. 110–111).

3 Structure of Arguments

In this section we show how argumentative structures can be simply defined and described in terms of set theory. Since the basic notions introduced below are familiar to those who deal with the theory of argumentation, we will present them in a somewhat abbreviated form.

Let L be the set of the propositions of a given language.

Definition 1

(Sequent). A sequent is any ordered pair of the form ⟨P, α⟩, where P is a non-empty and finite subset of L and α ∈ L.

The sequent ⟨P, α⟩, where P = {α₁, α₂, …, αₙ}, will be denoted by P ▶ α or, equivalently, by (α₁, α₂, …, αₙ) ▶ α. Sequents correspond to single inferences represented by separate arrows in the diagrams.

Definition 2

(Sequent premises, Conclusion, Counterdomain, Range). The set of premises (or the domain) of the sequent P ▶ α is the set P; the proposition α is the conclusion; the set {α} is the counterdomain; and the set P ∪ {α} is the range.

The domain, counterdomain and range of a sequent S will be denoted by p(S), c(S) and r(S), respectively.

Definition 3

(Argument). An argument is any non-empty and finite set of sequents.

In other words, arguments are simply (non-empty and finite) relations between (non-empty and finite) sets of propositions and single propositions of L.

Definition 4

(Argument premises, Domain, Conclusion, Counterdomain, Range). The set of premises (or the domain) of an argument A is the set of all the premises of all its sequents—symbolically, α ∈ P(A) iff there exists a sequent S ∈ A such that α ∈ p(S); the set of conclusions (or the counterdomain) of A is the set of all the conclusions of all its sequents—symbolically, α ∈ C(A) iff there exists a sequent S ∈ A such that α ∈ c(S); the range of A is the set R(A) = P(A) ∪ C(A).

Definition 5

(First premises, Final conclusions, Intermediate conclusions). The first premises of an argument A are the elements of the set Fp(A) = P(A) − C(A); the final conclusions are the elements of the set Fc(A) = C(A) − P(A); the intermediate conclusions are the elements of the set Ic(A) = C(A) ∩ P(A).

The more complex structure represented in Fig. 1c can serve as an illustration. This argument, which we will call Δ, is denoted by the following expression: {(a₁) ▶ a₄; (a₂) ▶ a₄; (a₃) ▶ a₄; (a₄, a₅, a₆) ▶ a₈; (a₇) ▶ a₈; (a₈) ▶ a}. Furthermore, we have P(Δ) = {a₁, a₂, a₃, a₄, a₅, a₆, a₇, a₈}; C(Δ) = {a₄, a₈, a}; R(Δ) = {a₁, a₂, a₃, a₄, a₅, a₆, a₇, a₈, a}; Fp(Δ) = {a₁, a₂, a₃, a₅, a₆, a₇}; Fc(Δ) = {a}; and Ic(Δ) = {a₄, a₈}.
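
Definitions 1–5 translate almost directly into a few lines of set manipulation. A minimal Python sketch (the identifiers seq, P, C, R, Fp, Fc and Ic are ours), checked against the argument Δ above:

```python
from typing import FrozenSet, Set, Tuple

# Def. 1: a sequent <P, a> with a non-empty, finite premise set P and conclusion a.
Sequent = Tuple[FrozenSet[str], str]
# Def. 3: an argument is a non-empty, finite set of sequents.
Argument = Set[Sequent]

def seq(premises: Set[str], conclusion: str) -> Sequent:
    return (frozenset(premises), conclusion)

def P(A: Argument) -> Set[str]:   # Def. 4: domain (all premises)
    return set().union(*(p for p, _ in A))

def C(A: Argument) -> Set[str]:   # Def. 4: counterdomain (all conclusions)
    return {c for _, c in A}

def R(A: Argument) -> Set[str]:   # Def. 4: range
    return P(A) | C(A)

def Fp(A: Argument) -> Set[str]:  # Def. 5: first premises
    return P(A) - C(A)

def Fc(A: Argument) -> Set[str]:  # Def. 5: final conclusions
    return C(A) - P(A)

def Ic(A: Argument) -> Set[str]:  # Def. 5: intermediate conclusions
    return C(A) & P(A)

delta: Argument = {seq({"a1"}, "a4"), seq({"a2"}, "a4"), seq({"a3"}, "a4"),
                   seq({"a4", "a5", "a6"}, "a8"), seq({"a7"}, "a8"),
                   seq({"a8"}, "a")}
assert Fp(delta) == {"a1", "a2", "a3", "a5", "a6", "a7"}
assert Fc(delta) == {"a"} and Ic(delta) == {"a4", "a8"}
```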

Sequents, because they correspond to single inferences, are the real atoms of argumentation, so that sets consisting of only one sequent can be called atomic arguments. There are two kinds of atomic arguments: simple arguments with only one premise and linked arguments with many premises. Convergent arguments are not atomic. So, contrary to what seems to be vaguely suggested by the standard distinction, and followed by Vorobej (1995), linked and convergent arguments are structures that must be distinguished at two different levels of complexity (like atoms and molecules in chemistry). Thus convergent arguments are those that consist of many subarguments which have the same final conclusion (a subargument of A is any non-empty subset of A).

Unlike the arguments in the examples considered so far, some arguments can have more than one final conclusion. Furthermore, the set of final conclusions can be empty. The set of first premises can in some cases be empty too. In order to distinguish and describe these somewhat atypical structures we introduce some additional notions. If an argument consists of two or more separate (possibly even irrelevant) parts, each of them must have its own, different final conclusion. In order to characterize this kind of incoherence we will first define the relation of being (argumentatively) connected in a given structure.

Definition 6

(Connected propositions). Propositions α and β are connected in an argument A, symbolically Con_A(α, β), iff there exists a sequence of propositions δ₁, …, δₙ (n ≥ 2) such that for every k < n the propositions δₖ and δₖ₊₁ belong to the range of the same sequent S ∈ A, and furthermore δ₁ = α and δₙ = β.

Definition 7

(Coherence). An argument A is coherent iff Con_A(α, β) for all propositions α, β ∈ R(A). Otherwise, the argument is incoherent.

The relation of being connected is an equivalence relation. The analogous relation which holds between whole sequents of a given argument is also an equivalence relation. Therefore incoherent structures can be regarded as the sums of mutually disjoint, coherent arguments, i.e. the sums of their separate parts. For example, the incoherent argument {(a₁) ▶ a₂; (a₂) ▶ a₃; (b₁) ▶ b₂} is equal to the sum {(a₁) ▶ a₂; (a₂) ▶ a₃} ∪ {(b₁) ▶ b₂} of two coherent arguments.
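
Coherence (Definition 7) can be tested by viewing R(A) as an undirected graph in which propositions sharing the range of a sequent are adjacent; A is coherent iff this graph is connected. A sketch over the same encoding as above (function name ours):

```python
from collections import defaultdict, deque

def is_coherent(A) -> bool:
    """A is coherent iff every two propositions in R(A) are connected (Def. 6-7)."""
    # Undirected graph: propositions in the range of the same sequent are adjacent.
    adj = defaultdict(set)
    for prem, concl in A:
        rng = set(prem) | {concl}
        for x in rng:
            adj[x] |= rng - {x}
    nodes = set(adj)
    if not nodes:
        return True
    # Breadth-first search from an arbitrary proposition.
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for y in adj[queue.popleft()]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen == nodes

# The incoherent example above splits into two components:
A = {(frozenset({"a1"}), "a2"), (frozenset({"a2"}), "a3"), (frozenset({"b1"}), "b2")}
assert not is_coherent(A)
```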

An argument can also have many conclusions due to the divergence of its structure. In order to express this property precisely we must first define the relation of support between the premises and the conclusions of a given argument.

Definition 8

(Support relation). A proposition β is directly supported by a proposition α in an argument A iff there exists a sequent S ∈ A such that α ∈ p(S) and β ∈ c(S); β is indirectly supported by α iff there exists a sequence of propositions δ₁, δ₂, …, δₙ, where n ≥ 3, such that for every k < n, δₖ₊₁ is directly supported by δₖ in A, and furthermore δ₁ = α and δₙ = β; finally, β is supported by α, symbolically Sup_A(α, β), iff β is directly or indirectly supported by α in A.

Definition 9

(Divergence). An argument A is divergent iff there exist two different propositions α and β such that Con_A(α, β), but neither Sup_A(α, β) nor Sup_A(β, α) holds, and furthermore there exists no proposition γ such that Sup_A(α, γ) and Sup_A(β, γ).

Similarly to incoherent structures, divergent arguments can be regarded as the sums of non-divergent, but not necessarily mutually disjoint, arguments. For example, the divergent argument {(a₁) ▶ a₂; (a₂) ▶ a₃; (a₂) ▶ a₄} is equal to the sum {(a₁) ▶ a₂; (a₂) ▶ a₃} ∪ {(a₁) ▶ a₂; (a₂) ▶ a₄} of two non-divergent arguments.
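
Definitions 8 and 9 amount to reachability questions in the directed graph running from each premise of a sequent to its conclusion. A sketch (names ours; the divergence test assumes A is coherent, otherwise the Con_A test of Definition 6 has to be added):

```python
def supports(A):
    """Return the support relation Sup_A as a set of pairs (alpha, beta) (Def. 8)."""
    direct = {(a, c) for prem, c in A for a in prem}
    sup = set(direct)
    changed = True
    while changed:                      # transitive closure of direct support
        changed = False
        for (a, b) in list(sup):
            for (b2, c) in direct:
                if b == b2 and (a, c) not in sup:
                    sup.add((a, c))
                    changed = True
    return sup

def is_divergent(A) -> bool:
    """Def. 9 (assuming A is coherent, so any two propositions are connected):
    two propositions, neither supporting the other, with no commonly supported one."""
    sup = supports(A)
    rng = {x for prem, c in A for x in set(prem) | {c}}
    for a in rng:
        for b in rng:
            if a != b and (a, b) not in sup and (b, a) not in sup \
                    and not any((a, g) in sup and (b, g) in sup for g in rng):
                return True
    return False

A = {(frozenset({"a1"}), "a2"), (frozenset({"a2"}), "a3"), (frozenset({"a2"}), "a4")}
assert is_divergent(A)   # a3 and a4: neither supports the other, no common conclusion
```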

Incoherence and divergence result in an increased number of final conclusions. On the other hand, the number of final conclusions, as well as the number of first premises, can be reduced by a vicious circle, i.e. a cycle that may occur among the elements of an argument range. Since the relation of support is transitive, circularity of arguments can be easily defined as follows.

Definition 10

(Circularity). An argument A is circular iff there exists a proposition α such that Sup_A(α, α).

As the following examples show, circular arguments can have no first premises: {(a) ▶ b; (b) ▶ a; (a) ▶ c}; no final conclusion: {(a) ▶ b; (b) ▶ a; (c) ▶ b}; or neither first premises nor final conclusion: {(a) ▶ b; (b) ▶ a}. Thus circularity is traditionally regarded as a serious structural defect that makes evaluation of arguments impossible (petitio principii). Therefore we exclude such arguments from further analysis.
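
Since Sup_A is the transitive closure of direct support, circularity (Definition 10) is equivalent to a cycle in the direct-support graph. A small self-contained check using Kahn-style pruning (our naming):

```python
def is_circular(A) -> bool:
    """Def. 10: A is circular iff the direct-support graph contains a cycle
    (equivalently, Sup_A(alpha, alpha) holds for some proposition alpha)."""
    edges = {(a, c) for prem, c in A for a in prem}   # direct support: premise -> conclusion
    nodes = {x for e in edges for x in e}
    while nodes:
        # Repeatedly drop propositions with no incoming edge; a cycle blocks this.
        sources = {n for n in nodes if not any(c == n for _, c in edges)}
        if not sources:
            return True
        nodes -= sources
        edges = {(a, c) for (a, c) in edges if a not in sources}
    return False

assert is_circular({(frozenset({"a"}), "b"), (frozenset({"b"}), "a"), (frozenset({"a"}), "c")})
assert not is_circular({(frozenset({"a"}), "b"), (frozenset({"b"}), "c")})
```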

To conclude this section let us add that each coherent, non-divergent and non-circular argument can be transformed into a finite, multilevel structure (Selinger 2010). The first level is the set of all the sequents whose conclusion is the final conclusion of the whole argument. Each subsequent level consists of all the sequents whose conclusions are premises of the sequents from the previous level, and so on. Such an argument always has exactly one final conclusion and at least one first premise, and all the premises of its last level are first premises. These properties make it easier to provide a clear and effective procedure for evaluation.
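
The levelling just described can be made explicit. A sketch for coherent, non-divergent and non-circular arguments in the encoding used above (function name ours):

```python
def levels(A):
    """Split a coherent, non-divergent, non-circular argument into levels:
    level 1 = sequents concluding the final conclusion, level k+1 = sequents
    whose conclusions are premises at level k."""
    concl = {c for _, c in A}
    prem = {p for ps, _ in A for p in ps}
    (final,) = concl - prem            # exactly one final conclusion
    result, targets, remaining = [], {final}, set(A)
    while remaining:
        level = {s for s in remaining if s[1] in targets}
        result.append(level)
        targets = {p for ps, _ in level for p in ps}
        remaining -= level
    return result

delta = {(frozenset({"a1"}), "a4"), (frozenset({"a2"}), "a4"),
         (frozenset({"a3"}), "a4"), (frozenset({"a4", "a5", "a6"}), "a8"),
         (frozenset({"a7"}), "a8"), (frozenset({"a8"}), "a")}
for k, lv in enumerate(levels(delta), start=1):
    print(k, sorted(c for _, c in lv))
# 1 ['a']   2 ['a8', 'a8']   3 ['a4', 'a4', 'a4']
```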

4 Evaluation of Arguments

In this section we present a method for the numerical computation of argument force. We assume that arguments considered to be computable are not circular. For simplicity of presentation we also assume that each argument considered is coherent and non-divergent. We will show how to transform the values of the first premises of such an argument into the value of its conclusion. We consider these values to be the degrees of acceptability of propositions for a given, rational agent in a given epistemic situation, so the term epistemic values is appropriate here. By analogy to probability, epistemic values will be represented by the rational numbers from the closed interval [0, 1]. The natural order of the numbers in this interval reflects the order of the set of epistemic values: it has a greatest element, a smallest element, and an unlimited number of intermediate values, which allow us to strengthen or weaken arguments an unlimited number of times. Thus we have the following definition.

Definition 11

(Evaluation function). An evaluation function is any function mapping a set of propositions L′ ⊆ L into the closed interval [0, 1] of rational numbers.

Obviously, not every function of evaluation can be assigned to a rational agent. The issue of the canons (or postulates) of rationality is discussed systematically e.g. by Kaplan (1981).

Let us consider a single sequent. Its premises support its conclusion with some strength (or weight) that is to be measured by the numbers of our scale. We equate the strength of a given sequent with the acceptability of its conclusion under the condition that its premises are fully acceptable, so that by analogy to the conditional probability considered by Ajdukiewicz we call it conditional acceptability. For a given sequent S = (α₁, α₂, …, αₙ) ▶ α we will denote it by w(S) or, alternatively, by w(α/α₁, α₂, …, αₙ). Formally, w is assumed to be a predefined function mapping the sequents of L into the closed interval [0, 1] of rational numbers. This parameter corresponds to different types of inferences, which can be regarded as argumentation schemes. It is inversely proportional to the number of objections that can be raised against arguments of a given type by some methodical procedure such as that proposed by Walton (2012). If the conclusion of a sequent is the logical consequence of its premises, i.e. if the sequent is deductive, its conditional acceptability is 1.

Let \(v\) be an evaluation function. It will be convenient for further considerations to assume that our language L contains the connective of conjunction among its expressions. We also assume that if some propositions are elements of the domain of the function \(v\), then so is their conjunction. Thus we can equate the value of the (set of the) premises of a given linked argument with the value of their conjunction and compute it as a simple argument. In order to simplify further considerations we assume at this stage that the premises of the linked arguments are independent.

Definition 12

(Mutually independent propositions). Propositions α, β ∈ dom (\(v\)) are mutually independent iff w(α/β) = \(v\)(α) and w(β/α) = \(v\)(β).

If both conjuncts of a given conjunction are mutually independent, then its value is a simple, arithmetical multiplication of the values of the conjuncts. Thus the value of the entire conjunction is decreased proportionally to the values of its conjuncts, and for the evaluation function \(v\) we have the following:

Definition 13

(Conjunction value). If α, β ∈ dom(\(v\)), and they are mutually independent propositions, then \(v\)(α ∧ β) = \(v\)(α) · \(v\)(β).

Since multiplication is commutative and associative, we can use it to compute sequents with more than two premises, but first we need to expand the concept of independence of propositions to take into account entire sets rather than only pairs of propositions:

Definition 14

(Independent set of propositions). A finite, multi-element set of propositions A ⊆ dom(\(v\)) is independent iff every proposition α ∈ A and the conjunction of all the propositions belonging to A − {α} are mutually independent.

Thus, if α₁, α₂, …, αₙ ∈ dom(\(v\)), and if they form an independent set of propositions, then \(v\)(α₁ ∧ α₂ ∧ … ∧ αₙ) = \(v\)(α₁) · \(v\)(α₂) · … · \(v\)(αₙ).

Now we can define how to compute the value of the conclusion of a single sequent. We will denote this value by \(v_S(\alpha)\), where α is the conclusion of S. Formally, the function \(v_S\) is an extension of the function \(v\). Thus we assume that the conclusion of the sequent under consideration does not belong to the domain of \(v\). If the value of some proposition is less than ½, it means that the proposition is believed to be more likely false than true. Since drawing conclusions from false premises is a logical fallacy, a rational agent should not use such propositions in arguments. Therefore we also assume that, relative to a given evaluation function, the premises of a sequent under consideration are acceptable, i.e. the value of their conjunction is greater than ½. Let us note that an unacceptable proposition, or one not yet evaluated, might be used as a premise if we accept it potentially, i.e. if we construct a special evaluation function which assigns 1 to this proposition. Therefore the above assumption does not exclude the possibility of analysing different forms of a contrario reasoning within our model.

Definition 15

(Single sequent conclusion value). If \(v\) is an evaluation function, and S = (α₁, α₂, …, αₙ) ▶ α is a sequent such that α₁, α₂, …, αₙ ∈ dom(\(v\)) and α ∉ dom(\(v\)), and moreover \(v\)(α₁ ∧ α₂ ∧ … ∧ αₙ) > ½, then: \(v_S(\alpha)\) = \(v\)(α₁ ∧ α₂ ∧ … ∧ αₙ) · w(α/α₁, α₂, …, αₙ).

Thus the acceptability of the conclusion turns out to be the value of its premises, which is reduced proportionally to the conditional acceptability. The same holds in the case of probability, and our motivation for using simple multiplication here was simply to maintain conformity with the probabilistic interpretation.
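
A minimal sketch of Definition 15, together with a numeric illustration (the numbers are arbitrary; the premises are assumed mutually independent, so Definitions 13–14 reduce to a product):

```python
from math import prod

def sequent_value(premise_values, w):
    """Def. 15: v_S(alpha) = v(conjunction of premises) * w, defined only when the
    conjunction value exceeds 1/2 (independent premises assumed, Def. 13-14)."""
    conj = prod(premise_values)          # Def. 13-14: product over independent premises
    if conj <= 0.5:
        raise ValueError("premises not acceptable as a whole (conjunction <= 1/2)")
    return conj * w

print(sequent_value([0.9, 0.8], 0.9))    # 0.72 * 0.9 = 0.648
```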

The above definition enables us to compute the value of atomic arguments and of a series of them. Now we need to take into account the case of convergent reasoning. For simplicity let us first consider arguments consisting of only two sequents, say S₁ = (α₁, α₂, …, αₙ) ▶ γ and S₂ = (β₁, β₂, …, βₘ) ▶ γ, with mutually independent sets (i.e. conjunctions) of premises and with the same conclusion γ. We use \(v_{S_1, S_2}(\gamma)\) to denote the value of their common conclusion with respect to the premises of both sequents.

Definition 16

(Conclusion value of convergent sequents). If \(v_{S_1}(\gamma)\) > ½ and \(v_{S_2}(\gamma)\) > ½, and the sets of premises p(S₁), p(S₂) are mutually independent, then: \(v_{S_1, S_2}(\gamma)\) = \(v_{S_1}(\gamma)\) ⊕ \(v_{S_2}(\gamma)\), where x ⊕ y = 2x + 2y − 2xy − 1.

Thus the support distributed between both independent pieces of evidence is aggregated by the operation ⊕. Moreover, the value of one piece of evidence is increased proportionally to the value of the other with respect to the interval [½, 1] (not to the whole interval [0, 1], as for the algorithm x + y − xy proposed by Yanal (1991, p. 140)).

The intuition is that the uncertainty left by the first argument, represented by the section [x, 1] in Fig. 2, should be decreased proportionally to the certainty given by the second argument, represented by the section [½, y] (in Yanal’s algorithm the second value is represented by the section [0, y]). Thus, by Thales’ Theorem, we can read the following proportion:

Fig. 2 Calculation of the value of x ⊕ y

$$\frac{(x \oplus y) - x}{1 - x} = \frac{y - \tfrac{1}{2}}{\tfrac{1}{2}}$$
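
Solving this proportion for x ⊕ y yields exactly the formula of Definition 16:

$$x \oplus y = x + (1 - x)\,\frac{y - \tfrac{1}{2}}{\tfrac{1}{2}} = x + (1 - x)(2y - 1) = 2x + 2y - 2xy - 1.$$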

Important properties of the operation ⊕ are as follows:

$$x \oplus 1 = 1;$$
$$x \oplus \tfrac{1}{2} = x;$$
$$\text{if } \tfrac{1}{2} < x, y < 1, \text{ then } x \oplus y > y, \; x \oplus y > x \text{ and } x \oplus y < 1.$$

It follows from the second equation that if x = y = ½, then x ⊕ y = ½. This implication does not hold for Yanal’s algorithm, according to which we obtain ¾ in this case. This means that if we were to take a convergent argumentation with some completely irrelevant conclusion, we would have to accept it to a degree as high as ¾. Furthermore, by continuing to add irrelevant or very weak convergent arguments we would reach a value of almost 1 surprisingly quickly. Thus, Yanal’s algorithm overestimates the acceptability of convergent arguments. Moreover, since it allows both x and y to be smaller than ½, it can even happen that the convergent sum of unacceptable arguments (cf. Def. 17 below) will be acceptable itself. It is worth noting that in Yanal’s examples of such arguments (e.g. induction) premises should be interpreted as linked, so that there is no need to use many very weak convergent arguments (such as particular instances of induction).
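
A quick numeric comparison of the two aggregation rules (a sketch; the sample values are arbitrary):

```python
def oplus(x: float, y: float) -> float:
    """Aggregation of two independent convergent supports (Def. 16)."""
    return 2 * x + 2 * y - 2 * x * y - 1

def yanal(x: float, y: float) -> float:
    """Yanal's rule x + y - xy, given for comparison."""
    return x + y - x * y

print(oplus(0.5, 0.5), yanal(0.5, 0.5))   # 0.5 vs 0.75: neutral evidence adds nothing vs. a lot
print(oplus(0.6, 0.6), yanal(0.6, 0.6))   # 0.68 vs 0.84
print(oplus(0.9, 0.5), yanal(0.9, 0.5))   # 0.9 vs 0.95: only under ⊕ does a neutral second argument leave the value unchanged
```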

The operation ⊕ is commutative and associative, i.e. for every x, y and z,

$$x \oplus y = y \oplus x;$$
$$(x \oplus y) \oplus z = x \oplus (y \oplus z).$$

Therefore, if we have many convergent sequents S₁, S₂, …, Sₙ, for some n ≥ 2, with the same conclusion γ, and the values \(v_{S_1}(\gamma), v_{S_2}(\gamma), \ldots, v_{S_n}(\gamma)\) are greater than ½, and moreover the conjunctions of their premises form an independent set of propositions, then \(v_{S_1, S_2, \ldots, S_n}(\gamma)\) = \(v_{S_1}(\gamma)\) ⊕ \(v_{S_2}(\gamma)\) ⊕ ··· ⊕ \(v_{S_n}(\gamma)\).
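
Because ⊕ is commutative and associative, the n-ary case is simply a fold; a sketch:

```python
from functools import reduce

def aggregate(values):
    """Value of a conclusion supported by several independent convergent sequents,
    each individual value assumed to exceed 1/2 (Def. 16, iterated)."""
    return reduce(lambda x, y: 2 * x + 2 * y - 2 * x * y - 1, values)

print(aggregate([0.7, 0.6, 0.55]))   # 0.784: three modest convergent supports reinforce each other
```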

It is now clear how to compute the value of the conclusion of an entire argument by means of the above definitions. We will denote this value by \(v_{\mathbf{A}}(\alpha)\), where {α} = Fc(A), Fp(A) ⊆ dom(\(v\)) and C(A) ∩ dom(\(v\)) = ∅. If A is an atomic argument, i.e. if A = {S} for some sequent S, then \(v_{\mathbf{A}}(\alpha)\) = \(v_S(\alpha)\). If A is a direct argument consisting of many different sequents S₁, S₂, …, Sₙ with the same conclusion α, then \(v_{\mathbf{A}}(\alpha)\) = \(v_{S_1, S_2, \ldots, S_n}(\alpha)\). Finally, if an argument is a more complex structure with many levels, the value of its conclusion should be computed level by level, beginning with the last level, where all the premises are first premises.

It is easy to see that the calculated value can be given by a single formula. For example, the value of the final conclusion of the argument Δ = {(a₁) ▶ a₄; (a₂) ▶ a₄; (a₃) ▶ a₄; (a₄, a₅, a₆) ▶ a₈; (a₇) ▶ a₈; (a₈) ▶ a} (see Fig. 1c) is given by the following formula:

$$\begin{aligned} v_{\Delta}(a) = {} & \bigl\{\bigl[\bigl[\bigl(v(a_{1}) \cdot w(a_{4}/a_{1}) \oplus v(a_{2}) \cdot w(a_{4}/a_{2}) \oplus v(a_{3}) \cdot w(a_{4}/a_{3})\bigr) \cdot v(a_{5}) \cdot v(a_{6})\bigr] \cdot {} \\ & w(a_{8}/a_{4}, a_{5}, a_{6})\bigr] \oplus \bigl(v(a_{7}) \cdot w(a_{8}/a_{7})\bigr)\bigr\} \cdot w(a/a_{8}). \end{aligned}$$
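
The level-by-level procedure can also be written as a short recursion. A sketch that evaluates Δ with arbitrary sample values (all names and numbers are ours) and agrees with the closed formula above:

```python
from functools import reduce
from math import prod

# Δ from Fig. 1c: sequents as (premises, conclusion); w holds conditional acceptabilities.
delta = [(("a1",), "a4"), (("a2",), "a4"), (("a3",), "a4"),
         (("a4", "a5", "a6"), "a8"), (("a7",), "a8"), (("a8",), "a")]
w = {s: 0.9 for s in delta}                                  # uniform sequent strengths, arbitrary
v = {p: 0.9 for p in ("a1", "a2", "a3", "a5", "a6", "a7")}   # first premises, arbitrary

def oplus(x, y):
    return 2 * x + 2 * y - 2 * x * y - 1

def value(goal, A, v, w):
    """v_A(goal): product over linked premises times sequent strength (Def. 15),
    folded with ⊕ over convergent sequents (Def. 16); first premises read from v."""
    if goal in v:
        return v[goal]
    vals = [prod(value(p, A, v, w) for p in ps) * w[(ps, goal)]
            for ps, c in A if c == goal]
    return reduce(oplus, vals)

print(round(value("a", delta, v, w), 4))   # 0.8005 with these sample numbers
```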

The value of an argument may not be computable if the value of some of its first premises is smaller than ½. However, when such an argument contains a convergent reasoning, it can still be computable if another part of this reasoning is supported by acceptable first premises. On the other hand, even if all of the first premises are acceptable, the value of the conjunction of some linked premises (or the product of the conditional acceptability of some constituent sequent and the value of its premises) can be smaller than ½. This situation can make further computation impossible. Obviously, this kind of incomputability is evidence of a fallacy in the argument analysed (perhaps it would be appropriate to assign the value ½ to the conclusions of such arguments). In any case, an argument may be accepted only if the value of its conclusion is computable. Naturally, this value must be greater than ½, because otherwise the argument would justify the negation of its conclusion rather than the conclusion itself (or it would leave the conclusion undecided in the case of a value of ½). Thus the least restrictive criterion for the acceptability of arguments can be formulated as follows:

Definition 17

(Argument acceptability). An argument A such that Fc(A) = {α} is acceptable with respect to the evaluation function \(v\) iff \(v_{\mathbf{A}}(\alpha)\) > ½.

The operation ⊕ can also be used to recalculate the acceptability of the conclusion α of an argument A if α already belongs to the domain of an evaluation function \(v\). However, it must be assumed that the initial value \(v(\alpha)\) is greater than ½, and moreover that it is not assigned to α for reasons dependent on the premises of A. In this case we have \(v_{\mathbf{A}}(\alpha)\) = \(v(\alpha)\) ⊕ \(v'_{\mathbf{A}}(\alpha)\), where \(v'\) is the evaluation function obtained from \(v\) by reducing its domain to the set dom(\(v\)) − {α}. Thus the assumption C(A) ∩ dom(\(v\)) = ∅ is not necessary for the computation of \(v_{\mathbf{A}}(\alpha)\), but if it does not hold, then the argument acceptability criterion in Def. 17 should be replaced by the condition \(v_{\mathbf{A}}(\alpha)\) > \(v(\alpha)\).

Finally, let us discuss one limitation that must be faced in dealing with the theory of argumentation, namely the assumption that the premises are independent. It is problematic if a premise happens to follow from other premises (or if they are equivalent). In this case the value of the conclusion of a linked argument can be underestimated, since the value of the conjunction of its premises will be decreased by that of the dependent premises, which is in fact needlessly added. On the other hand, if the conjunctions of the premises of some convergent arguments are dependent, then the value of the conclusion can be overestimated, since dependent arguments will double the same content of argumentation (the double counting fallacy). Therefore we must be careful when we compute the value of a conclusion supported by dependent premises, and eliminate such needless parts of the argumentation before we begin calculations.

If some dependent premises cannot be eliminated, we can still avoid undesirable consequences when calculating the value of the premises of a single sequent. For this purpose we have to replace the simple multiplication in Def. 13 with a more general formula: \(v\)(α ∧ β) = \(v\)(α) · w(β/α). However, in this case the commutativity of conjunction must be ensured by a separate postulate, which can be regarded as a postulate of rationality: \(v\)(α) · w(β/α) = \(v\)(β) · w(α/β). If it holds, the associativity of conjunction can be expressed by the following postulate: \(v\)(α) · w(β/α) · w(γ/α, β) = \(v\)(β) · w(γ/β) · w(α/β, γ). With regard to the acceptability of convergent arguments with dependent premises, we can still use the operation ⊕ to calculate an upper bound for this value (Tokarz’s maximum principle determines a lower bound).

5 Some Related Work

This section explores the relationship between the proposed model and relevant work in argumentation theory with regard to (1) the structure of arguments and (2) argument evaluation. This comparison will lay the ground for discussing (3) how the proposed model can be extended in the future to take into account research directions suggested by some contemporary approaches. Namely, we sketch how to specify the attack relation and how to introduce conductive arguments into our model.

Formal aspects of argumentative structures are extensively investigated by those who deal with AI and defeasible reasoning, such as Pollock, who created OSCAR (1987), Vreeswijk (1997), and the creators of Deflog (Verheij 2003) and ASPIC+ (Prakken 2010). However, this research does not directly refer to the linked-convergent distinction which is fundamental to our model. On the other hand, there are also some software tools supporting the analysis of argument structure, such as Carneades (Gordon and Walton 2006), Rationale (van Gelder 2007) or Araucaria (Rowe et al. 2006; Budzynska 2011), which exploit this distinction substantially. Some of these models introduce highly developed structures, such as the argument graphs in Carneades (Gordon and Walton 2006), which result in a highly complex argument representation (the fact that premises and conclusions are the edges of these graphs is perhaps technically justified, but may be regarded as unintuitive). In contrast, the definition proposed in this paper, which treats an argument as a relation between sets of propositions and single propositions, seems to be very simple and intuitive, and cannot be further simplified. In particular, this relation cannot hold between individual, single propositions (cf. Budzynska 2011, p. 30), because it would not allow us to distinguish which of the premises are linked.

Since the numeric model of evaluation proposed here takes into account the linked-convergent distinction, it can be exploited in argument analysis performed using argumentation technologies such as Araucaria and Rationale. Among other numeric approaches, some, such as the Bayesian model proposed by Nielsen and Parsons (2006), do not consider convergent reasoning, while the remaining ones do not adequately reflect its cumulative nature. Tokarz’s method including the maximum principle (cf. Sect. 2) and the proof standards used in Carneades in order to select the better of two sets of arguments (either pro or contra conclusion) belong to the latter category. As Walton and Gordon state, ‘the proof standards […] modeled thus far in Carneades do not compare the set of pro arguments against the set of con arguments, but rather only compare each pro argument against each con argument’ (2013, p. 10). In contrast, our method allows us to compute the value of each set independently of the other, and thus to evaluate (and compare) them within a uniform and absolute scale. Furthermore, these values increase proportionally to the number and to the forces of all the convergent components. Yanal’s algorithm, on the other hand, reflects this ‘cumulative proportionality’, but overestimates the acceptability of convergent reasoning (cf. Sect. 4).

An argument in our model is regarded as a separate propositional structure that is extracted from some text or utterance, and it can be evaluated independently of possible attacks or counterarguments that may occur in a dialogue. Thus our approach should be distinguished from the abstract argumentation frameworks introduced by Dung (1995), as well as from other non-numerical approaches dealing with defeasible and non-monotonic reasoning. It is not the aim of this paper to develop our conception into a sort of analysis of an argumentative dialogue (cf. Kacprzak and Yaskorska 2014, this issue), but let us note that such a development seems possible, since formulation and expansion of arguments can be interpreted as acts of attack or defence. We will attempt to sketch briefly how to specify within our model some possible means of attacking arguments, which have been defined from ancient times through Schopenhauer (cf. 1988, vol. 3) to the contemporary literature on argumentation (see e.g. Walton 2011).

Let A be an attacked argument, and B an attacking argument. If A is attacked, as Schopenhauer would say, directly (cf. 1988, vol. 3), i.e. if it is undercut, then the conclusion of A is questioned. Thus the conclusion of B should be formulated simply as the proposition ‘A is not acceptable’, while the premises of B can state that (1) some premise of A is not acceptable, i.e. its acceptability is not greater than ½; or (2) it does not belong to the domain of the evaluation function; or (3) there is some sequent in A whose conditional acceptability is not greater than ½. The algorithms proposed in this paper let us distinguish some other possibilities, covered neither by Tokarz’s method nor by Carneades: (4) the premises of some sequent considered separately are acceptable, however their conjunction is not; (5) the premises of some sequent are acceptable and its conditional acceptability is greater than ½, but the product of these values is not. These are the ways in which A can be undercut. Let us note, however, that since we take convergent reasoning into account, not every attack of this kind results in a successful questioning of the whole argument.

An argument A can also be attacked indirectly (cf. Schopenhauer 1988, vol. 3), when its conclusion, say α, is either rebutted by a stronger argument or questioned by an equally strong one with the conclusion ~α. Thus, we can say that B attacks A if B is acceptable and \(v_{\mathbf{A}}(\alpha) \le v_{\mathbf{B}}(\sim\alpha)\). Furthermore, the value y − x + ½, for x, y ≥ ½, where x = \(v_{\mathbf{A}}(\alpha)\) and y = \(v_{\mathbf{B}}(\sim\alpha)\), can be taken as the final value of ~α (the value of α is then x − y + ½).
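
A sketch of this resolution rule (function name ours; both inputs are assumed to be at least ½):

```python
def resolve(x: float, y: float):
    """Final values of a conclusion alpha and its rebuttal ~alpha, given
    v_A(alpha) = x and v_B(~alpha) = y with x, y >= 1/2."""
    return x - y + 0.5, y - x + 0.5      # (value of alpha, value of ~alpha)

print(resolve(0.7, 0.8))   # (0.4, 0.6): the stronger counterargument prevails
```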

This algorithm can also be used to calculate the acceptability of the ‘conductive arguments’ evaluated in Carneades by means of proof standards (Walton and Gordon 2013). Such arguments, apart from normal pro-premises, also have contra-premises (exceptions) denying the conclusion. They can be introduced into our model by simply assigning a Boolean value to each sequent: true if the premises of a sequent are pro its conclusion, and false if they are contra (cf. Koszowy and Selinger 2013). Thus the algorithms proposed in this paper are an alternative to proof standards. On the other hand, an advantage of proof standards is that they let us avoid the double counting fallacy, so it seems that both methods could be used complementarily. Let us note that in this extended version of our model the attack relation can also be interpreted as the relation holding between individual propositions, namely between the conjunctions of the exceptions and the conclusions of contra-sequents.

The second type of indirect attack distinguished by Schopenhauer (cf. 1988, vol. 3) is called apagoge. It aims to reduce the conclusion α of an attacked argument A to an absurdity (ad absurdum) or to a falsehood (ad falsum). In our model, apagoge might be understood as finding an unacceptable proposition β such that the argument B = {(α → β, ~β) ▶ ~α} is acceptable and \(v_{\mathbf{A}}(\alpha) \le v_{\mathbf{B}}(\sim\alpha)\).

These methods of attack can be used simultaneously in one counterargument. For example, an intermediate conclusion of an attacked argument can be rebutted by a better argument, so that the whole attacked argument or part of it is questioned; also, some of its convergent parts could be undercut to facilitate the rebuttal of its final conclusion, and so on. These ideas, however, should be elaborated in more detail to reveal more relationships between our model and other models. Let us add only that the relationships between various models of argumentation are being intensively investigated. For example, Dung’s abstract frameworks have been developed and furnished with a numerical semantics (Brewka and Woltran 2010; Gabbay 2012), and the numerical formalism of Carneades has been translated into the formalism of ASPIC+ (van Gijzel and Prakken 2012).

To conclude, let us explicitly point out a more fundamental issue. As we have stressed, our model of argumentation is based on the distinction between linked and convergent arguments. Our approach can be said to modify it slightly, but the correctness of this distinction can be questioned more profoundly. For example, Vorobej (1995) considers ‘hybrid arguments’, which are neither linked nor convergent. It seems that they may be regarded simply as atomic arguments and computed in the same way, but the very fact that such arguments exist reveals the difficulties in recognizing the structure of an argument in practice.

6 Conclusion

The computational model proposed in the paper shows that the diagramming method can be formalized strictly and precisely (while still simply) by means of standard set-theoretical and arithmetical tools. Our model combines the benefits of other approaches to argument analysis: it recognizes the internal structure of arguments, allows infinitely many degrees of acceptability, and allows the interpretation of the attack relation. Moreover, it reflects the cumulative nature of convergent reasoning. It should also be stressed here that all of the proposed notions and operations are finitary, so that the logical force of an argument can be evaluated in a finite number of steps. Despite some limitations, the model can be applied to the evaluation of a fairly large class of arguments, and can serve as a framework for further in-depth study.