1 Introduction

In a seminal paper Dung (1995) demonstrated that various concepts of non-monotonic reasoning, logic programming, and game theory can be modeled via so-called abstract argumentation frameworks. The latter are directed graphs, where the vertices are identified with arguments and the edges represent attacks between arguments. Initially, arguments and attacks have been considered only in a binary (classical) setting: either they are present, accepted, rejected, etc., or not. More recently, proposals for generalizing to graded scenarios, where arguments and/or attacks can be of varying strength, have been made, see, e.g., Coste-Marquis et al. (2012), Dunne et al. (2011), Krause et al. (1995) and Matt and Toni (2008) for some important contributions along this line. Here, we will follow Dunne et al. (2011) in insisting that weights of attacks between arguments naturally give rise to degrees of acceptability of arguments and thus their claims.

Since fuzzy logics—in the sense of full-fledged truth-functional logics over truth values in [0, 1], see Cintula et al. (2015)—take truth to be graded, one might think that there is a straightforward connection between frameworks featuring graded arguments and fuzzy logics. However, even in the non-graded scenario, the relation between logical consequence and so-called semantics for Dung-style argumentation frames (see, e.g., Besnard and Hunter 2008) is delicate. One possibility for establishing such a connection is to focus on purely logical argumentation (Arieli and Straßer 2015), where the claims of arguments are assumed to be logically entailed by the support part of an argument. Note, however, that this amounts to a severe restriction of the types of arguments considered. Moreover, it has been pointed out, e.g., in Amgoud et al. (2016), that an argument that features its own claim also as its support is a paradigmatic case of an unacceptable argument, thus running head on into conflict with the most basic property of ordinary logical consequence relations, namely reflexivity. For this reason, we will follow another route here, that is not limited to purely logical argumentation and that considers not just explicit, but in particular also logically implicit arguments.

We look for principles that constrain the strength of implicit attacks on claims that either logically follow from claims of attacking arguments or that, conversely, logically entail claims of attacking arguments. For example, it seems natural to stipulate that an argument, which attacks a claim A with a given weight, implicitly attacks the claim \(A \wedge B\) with at least the same weight. Similarly, an attack on a disjunctive claim \(A \vee B\) may reasonably be assumed to entail attacks on A and on B that carry at least the same weight. A closely related scenario has recently been introduced by Corsi and Fermüller (2017) for the non-weighted case, investigating which sets of logical attack principles give rise to either classical logic or—more realistically from the argumentation point of view—to certain sub-classical logics that are induced by fragments of the classical sequent calculus LK.

In our endeavor to connect logical argumentation principles with fuzzy logics, we will first focus on Gödel logic, since the constraints on weights of attacks that can be systematically related to many-valued truth functions are arguably more transparent for this logic than for others. However, we will also consider Łukasiewicz and product logic in this vein and make a few remarks on more general connections between t-norm-based fuzzy logics and weighted versions of logical attack principles.

We want to emphasize that we do not want to suggest that fuzzy logics can be viewed as logics of argumentation in any straightforward sense. Rather, we are interested in determining which constraints are actually needed to establish the connection. Some of these principles, like the ones mentioned in the last paragraph, are probably uncontroversial, while others seem to be too demanding with respect to pre-formal intuitions about the relation between logical connectives and argument strength. Our aim is to provide a detailed picture regarding the respective necessity and sufficiency of a fairly large set of different principles for characterizing various t-norm-based fuzzy logics.

Another possible misunderstanding that we want to address right away concerns the very nature of argumentation-based reasoning: shouldn’t any ‘logic of argumentation’ be non-monotonic? So why do we attempt to characterize ordinary (monotonic) truth-functional logics as logics of argumentative reasoning? We certainly agree that a claim that is justified with respect to a given set of arguments may well have to be discarded, if further arguments are taken into account. Indeed, the information that can be extracted from what we will call a semi-abstract argumentation frame below, cannot be expected to grow or shrink monotonically, if we update the frame. But this does not mean that we cannot observe certain monotonic inference patterns, if we refer to the set of all possible semi-abstract argumentation frames that satisfy certain closure properties regarding implicit arguments and attacks. In fact, in this paper we restrict attention to logically complex propositions that are immune to attack according to various principles relating logical form to potential attack. The fact that such ‘argumentatively immune’ statements turn out to coincide with logically valid formulas according to certain monotonic non-classical logics is not in conflict with principles of non-monotonic reasoning.

Some readers may be disappointed that our concepts and results do not relate in any direct manner to the various extension-based ‘semantics’ or to other methods for singling out (in some appropriate sense) maximal conflict-free sets of coherently defensible arguments that have been developed in computational argumentation theory. However, as already indicated, our aim is to analyze to what extent t-norm-based fuzzy logics can be interpreted in a new semantic framework that talks about attacks of varying strength rather than about degrees of truth. Consequently, we seek to contribute to the literature on alternative semantics for fuzzy logics (see, e.g., Bennett et al. 2000; Giles 1982; Lawry 1998; Paris 1997, 2000; Ruspini 1991), rather than to computational argumentation theory. Whether such an endeavor can also have an impact on argumentation theory remains to be seen. Except for some very tentative remarks in the conclusion, no claim of relevance regarding argumentation theory is made here.

The rest of the paper is organized as follows. In Sect. 2 we quickly review some basic concepts from Corsi and Fermüller (2017) regarding classical attack principles. These notions and some of the principles are generalized to weighted (semi-abstract) argumentation frames in Sect. 3. Section 4 introduces our central new concept: argumentative immunity. In Sect. 5 we prove argumentative soundness and completeness for Gödel logic \(\mathsf{G}\) with respect to an appropriate collection of attack principles. Section 6 presents attack principles for Łukasiewicz logic \(\mathsf{\L }\) and for product logic \(\mathsf{P}\). In Sect. 8 we analyze the so-called prelinearity axiom, which is central for all t-norm-based fuzzy logics, from our argumentation-based perspective. In the conclusion (Sect. 10) we briefly look back at what we have achieved and suggest several directions for further research.

2 Attack principles for unweighted argumentation frames

We have introduced the concept of logical attack principles in Corsi and Fermüller (2017) for (unweighted) argumentation frames. Before dealing with weighted argumentation frames, we revisit central notions and previous results from Corsi and Fermüller (2017). To keep the paper self-contained, we also review the ideas and motivation guiding our approach.

Recall that Dung’s abstract argumentation frames (Dung 1995) are just finite directed graphs, where the vertices represent arguments and the edges represent attacks between arguments. The aim is to identify so-called admissible extensions, which are sets of arguments that are pairwise conflict free (i.e., there is no attack among them) and that defend every member attacked by some external argument by in turn attacking this attacking argument. Various conditions on admissible extensions lead to refined versions of extensions. We will not deal with extension-based semantics here, but refer the interested reader to, e.g., Besnard and Hunter (2008) for a thorough introduction into Dung-style argumentation theory.

Abstract argumentation frames (AFs) can be instantiated by attaching concrete arguments to the vertices and defining concrete types of attack between arguments. This can be done in various ways: important examples of systems for extracting argumentation frames from concrete data (formulas and rules) are Caminada and Amgoud (2007) and Modgil and Prakken (2013). In general, arguments consist in a support part and a claim. In this paper, we will not investigate fully instantiated AFs, but rather focus exclusively on the logical structure of the claim of a given argument. Consequently, we deal with a variant of Dung’s AFs that still abstracts away from the internal structure of arguments and attacks, but partly instantiates the graph by associating its vertices with propositional formulas that represent the claims of corresponding arguments.

Definition 1

A semi-abstract argumentation frame (SAF) is a directed graph \((\mathcal{A},R_{\rightarrow })\), where each vertex \(a\in \mathcal{A}\) is labeled by a propositional formula over the connectives \(\wedge \), \(\vee \), \(\supset \), \(\lnot \), and the constant \(\bot \). We say that F attacks G, and write \(F{\longrightarrow }G\), if there is an edge from a vertex labeled by F to one labeled by G.

\(F {\longrightarrow }G\) signifies that in some underlying (ordinary) argumentation frame there is an argument featuring claim F that attacks an argument with the claim G. As indicated, we will mostly drop the reference to (full) arguments and speak of attacks on the level of corresponding claims, i.e., propositional formulas.

Let us revisit an example of an SAF, originally presented in Corsi and Fermüller (2017).

Example 1

Consider the following statements:

  • “The overall prosperity in society increases.” (P)

  • “Warlike conflicts about energy resources arise.” (W)

  • “The level of \(\hbox {CO}_2\) emissions is getting dangerously high.” (C)

  • “Awareness about the need of environmental protection increases.” (E)

Moreover, consider an argumentation frame containing arguments, where the claims consist in some of these statements or in some simple logical compounds thereof.

Using the indicated abbreviations and identifying vertices with their labels, a concrete corresponding SAF \(S_E=({\mathcal{A}},{R_{\rightarrow }})\) is given by \(\mathcal{A}=\{P,E,W, P \supset C, C\vee E, P \wedge C \}\) and \(R_{\rightarrow }=\{E {\longrightarrow }P\supset C,\ W {\longrightarrow }C \vee E,\ W {\longrightarrow }P\wedge C,\ C\vee E {\longrightarrow }P\}\).
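For concreteness, \(S_E\) can also be written down as plain data; a minimal Python sketch (the tuple encoding of formulas and the variable names are our own choice, not part of Definition 1):

P, E, W, C = "P", "E", "W", "C"          # atomic claims
P_imp_C = ("imp", P, C)                  # P ⊃ C
C_or_E  = ("or", C, E)                   # C ∨ E
P_and_C = ("and", P, C)                  # P ∧ C

claims  = {P, E, W, P_imp_C, C_or_E, P_and_C}                        # vertices, labeled by claims
attacks = {(E, P_imp_C), (W, C_or_E), (W, P_and_C), (C_or_E, P)}     # the relation R_→

assert all(f in claims and g in claims for (f, g) in attacks)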

Note that the various statements that are put forward as claims in the above example may well be thought of as supported by additional statements that remain implicit here. Even without any access to such additional statements, one can identify certain logical connections between these claims that bear on the existence of further implicit arguments and attacks.

Example 2

Suppose we have an argument, say X, that attacks the claim that the majority of the population of some country strongly supports its government. Without analyzing X and even without knowing X, one can reasonably assume that X implicitly also attacks the conjunctive claim that the majority of the population strongly supports its government and believes that the economic situation is improving. Note that this observation does not assert any particular connection between support for the government and economic performance. It rather expresses the simple rationality principle that one cannot attack a statement A without implicitly thereby also attacking any conjunction of the form \(A \wedge B\).

Note that the observation made in Example 2 only refers to the logical form of claims. The corresponding principle can be formulated as follows

\({\mathbf{(A.}\wedge \mathbf{)}}\) :

If \(F{\longrightarrow }A\) or \(F{\longrightarrow }B\), then \(F{\longrightarrow }A \wedge B\).

This principle can be understood as a simple instance of the following general attack principle, where \(\models \) denotes logical consequence

(A.gen) :

If \(F{\longrightarrow }G\) and \(G'\models G\), then \(F{\longrightarrow }G'\).

As instances of (A.gen) we obtain not only \({\mathbf{(A.}\wedge \mathbf{)}}\), but also the following attack principles, referring to other logical connectives.

\({\mathbf{(A.}\vee \mathbf{)}}\) :

If \(F{\longrightarrow }A \vee B\), then \(F{\longrightarrow }A\) and \(F{\longrightarrow }B\).

\({\mathbf{(A.}\supset \mathbf{)}}\) :

If \(F{\longrightarrow }B\), but not \(F {\longrightarrow }A\), then \(F{\longrightarrow }A \supset B\).

\({\mathbf{(A.}\bot \mathbf{)}}\) :

\(F{\longrightarrow }\bot \), for every F.
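To see how such principles generate implicit attacks, the following sketch (our own illustration, not an algorithm taken from the cited work) closes a given attack relation of an SAF under \({\mathbf{(A.}\wedge \mathbf{)}}\), \({\mathbf{(A.}\vee \mathbf{)}}\), and \({\mathbf{(A.}\bot \mathbf{)}}\); \({\mathbf{(A.}\supset \mathbf{)}}\) is omitted here, since its negative premise (‘but not \(F {\longrightarrow }A\)’) would make a naive fixed-point computation order dependent.

# Sketch: close an attack relation under (A.∧), (A.∨), and (A.⊥).
# Atoms are strings ("bot" stands for ⊥); compounds are tuples (connective, left, right).
def close_attacks(claims, attacks):
    attacks = set(attacks)
    if "bot" in claims:
        attacks |= {(f, "bot") for f in claims}                  # (A.⊥)
    changed = True
    while changed:
        new = set()
        for (f, g) in attacks:
            if isinstance(g, tuple) and g[0] == "or":            # (A.∨): an attack on A ∨ B
                new |= {(f, g[1]), (f, g[2])}                    # yields attacks on A and on B
            for h in claims:                                     # (A.∧): an attack on A (or B)
                if isinstance(h, tuple) and h[0] == "and" and g in (h[1], h[2]):
                    new.add((f, h))                              # yields an attack on A ∧ B
        changed = not new <= attacks
        attacks |= new
    return attacks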

In Corsi and Fermüller (2017) we defined a notion of logical consequence (‘argumentative consequence’) that views attacks on formulas as forms of (weak) counterexamples and asked which attack principles are needed to recover classical consequence as argumentative consequence. As expected, it turns out the above mentioned principles are not sufficient to characterize classical logic. For that purpose we rather have to consider additional, stronger and arguably more problematic inverse principles like

\({\mathbf{(C.}\wedge \mathbf{)}}\) :

If \(F{\longrightarrow }A \wedge B\), then \(F{\longrightarrow }A\) or \(F{\longrightarrow }B\).

\({\mathbf{(C.}\vee \mathbf{)}}\) :

If \(F{\longrightarrow }A\) and \(F{\longrightarrow }B\), then \(F{\longrightarrow }A \vee B\).

\({\mathbf{(C.}\supset \mathbf{)}}\) :

If \(F{\longrightarrow }A \supset B\), then \(F{\longrightarrow }B\), but not \(F{\longrightarrow }A\).

At least some of these latter principles are hard to justify with respect to intuitions about rational constraints on (implicit) arguments. Therefore, Corsi and Fermüller (2017) introduces a simple modal interpretation of the attack relation that allows one to sort out certain attack principles as invalid in general. It is shown that the logic that results from just enforcing the remaining principles is characterized by a fragment of the classical sequent calculus LK, and consequently admits an alternative semantics in terms of non-deterministic matrices over the two classical truth values. While this type of interpretation can be considered as a form of many-valued semantics, it does not refer to intermediary truth values. This raises the question whether certain fuzzy logics can be characterized in a similar manner. For this purpose we will consider weighted (degree-based) versions of argumentation frames.

3 Attack principles for weighted attacks

For the graded scenario we generalize semi-abstract argumentation frames in a straightforward manner by attaching a value from the closed real unit interval to each edge (m, n) of the argumentation graph. This value is intended to model the (normalized) strength of the attack of the argument represented by node m on the argument represented by node n.

Definition 2

A weighted semi-abstract argumentation frame (WSAF) is a triple \((\mathcal{A},R_{\rightarrow },w)\), where \((\mathcal{A},R_{\rightarrow })\) is an SAF, and w is an assignment of weights \(\in [0,1]\) to the attacks, i.e., to the ordered pairs of elements in \(\mathcal{A}\). We write \(F {\mathop {\longrightarrow }\limits ^{w}} G\) if the weight w is assigned to \(F {\longrightarrow }G\).

Any WSAF combines an SAF according to Definition 1 with a WAF, as introduced in Dunne et al. (2011). For the unweighted case, i.e., for SAFs, we stipulated in Sect. 2 that \(R_{\rightarrow }\) arises from an underlying classical argumentation frame by setting \(F {\longrightarrow }G\) whenever there exists an argument with claim F that attacks an argument with claim G. In the weighted case, we have to take into account that different underlying attacks on G from arguments with claim F, which may carry different weights, might exist in a given WAF. We could, of course, stipulate that w in \(F {\mathop {\longrightarrow }\limits ^{w}} G\) is the supremum over all weights of attacks with corresponding claims. The precept that the attack with maximal weight should be decisive in case of multiple attacks of the same type is certainly adequate for specific application scenarios. However, we see no need to restrict the weight assignments in WSAFs in any specific manner here. Rather, we just stipulate that—as part of the abstraction process from concrete collections of weighted attacks to a WSAF—some systematic method is applied that maps the set of weights of attacks between arguments involving the same claims to a single weight.

Since we also allow attacks of weight 0, (unweighted) SAFs are just special cases of WSAFs, where each attack is either of weight 1 or 0. The latter, of course, amounts to ‘no attack at all’; i.e., where we previously simply had no edge from F to A, we now write \(F{\mathop {\longrightarrow }\limits ^{0}} A\).
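Computationally, a WSAF can thus be represented by a total weight function on ordered pairs of claims, with 0 as the default value; a minimal sketch (our own encoding):

from collections import defaultdict

# Sketch: a WSAF as a weight function; keys are pairs (attacker claim, attacked claim).
weight = defaultdict(float)                   # missing pairs count as weight 0, i.e., no attack
weight[("E", ("imp", "P", "C"))] = 1.0        # a former SAF edge becomes an attack of weight 1
weight[("W", ("and", "P", "C"))] = 0.4        # a graded attack
assert all(0.0 <= v <= 1.0 for v in weight.values())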

The attack principles for SAFs can be generalized to WSAFs in various ways, at least some of which are very straightforward, as indicated by the following example.

Example 3

We revisit Example 2 of Sect. 2, where we considered an argument X that attacks the claim A, expressing that the majority of the population of some country strongly supports its government. Let us now assume we have some information regarding the actual strength of this attack. Note that we are not, at least not in any direct way, attaching a degree of truth or belief to the claim A itself. Rather we only consider the given attack on the statement. Different meanings can be associated with ‘strength of the attack’, as emphasized, e.g., in Dunne et al. (2011). For example, it could simply reflect our (the modelers’) degree of belief in the validity of the attack. In a more sophisticated scenario, we can imagine a set of experts who are asked to judge whether the alleged attack of X on the argument claiming A is convincing or not. The weight of the attack could then be stipulated to equal the proportion of experts who find the attack convincing. Of course, many alternative interpretations of ‘weight’ are conceivable. But in any case it should be clear that the argument X should not attack any claim that is formed by conjunctively attaching a further claim B to A with a higher weight than A itself. As in the unweighted case, this expresses a simple rationality principle that only takes into account the logical form of the attacked claim. Neither the content of the involved argument, nor the nature of the attack, nor the particular interpretation of ‘strength’ or ‘weight’ matters when stipulating that any attack on a claim A (implicitly) attacks any claim of the form \(A \wedge B\) with at least the same weight.

In accordance with the above example we obtain the following generalization of principle \({\mathbf{(A.}\wedge \mathbf{)}}\):

\({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\) and \(F{\mathop {\longrightarrow }\limits ^{y}} B\), then \(F{\mathop {\longrightarrow }\limits ^{z}}A \wedge B\), where \(z\ge \max (x,y)\).

Actually, since we also consider attacks of weight 0 (equivalent to ‘no attack edge’ in SAFs), we may assume without loss of generality that the graph formed by \(R_{\rightarrow }\) in a WSAF is complete. But this means that the attack principle for conjunction can be reformulated as follows:

\({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z}}A \wedge B\), then \(z\ge \max (x,y)\).

In words: an attack against a conjunction carries a weight that is at least as large as that against any of its conjuncts.

Likewise, the following refinement of \({\mathbf{(A.}\vee \mathbf{)}}\) for (implicit) attacks on disjunctive claims should be intuitively uncontroversial.

\({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z}} A \vee B\), then \(z\le \min (x,y)\).

An attack against a disjunction entails attacks on both of its disjuncts of at least the same weight.

As explained in Corsi and Fermüller (2017), \({\mathbf{(C.}\vee \mathbf{)}}\), the inverse of \({\mathbf{(A.}\vee \mathbf{)}}\), can also be justified with respect to a particular formal interpretation of the attack relation. It straightforwardly generalizes to the weighted scenario as follows.

\({\mathbf{(C}^w\mathbf{.}\vee \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z}} A \vee B\), then \(z\ge \min (x,y)\).

An attack against a disjunction carries a weight that is at least as large as that against one of its disjuncts.

Example 4

Let us expand Examples 2 and 3 to disjunctive claims by considering the following statement: “(A) The majority of the population strongly supports its government or (B) believes that the economic situation is improving”. Assume that some argument X attacks this claim \((A \vee B)\) with some weight \(w \in [0,1]\). Then \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\) expresses the rationality principle that arguments claiming F cannot attack \(A \vee B\) with a greater weight than that of a corresponding attack on A or on B alone. Note that this makes sense independently of any concrete interpretation of weights of attacks, since \(A\vee B\) logically follows from A as well as from B, even if we move from classical logic to a many-valued one.

The inverse principle \({\mathbf{(C}^w\mathbf{.}\vee \mathbf{)}}\) is less obviously valid. However, if we adopt the interpretation of weights as reflecting degrees of belief in the validity of the proposed attacks, then the following principle seems reasonable: an agent who believes with degree x that arguments claiming F successfully attack A and believes with degree y that arguments claiming F also attack B successfully, should believe with a degree that is not lower than the minimum of x and y that those arguments, at least implicitly, also establish a valid attack on the disjunctive claim \(A \vee B\).

Bearing in mind that our principles are not intended to model actual attacks, but rather to suggest possibilities for ‘closing off’ given sets of coherent arguments with respect to simple logical consequence relations, the following version of \({\mathbf{(A.}\bot \mathbf{)}}\) should also be obvious.

\({\mathbf{(A}^w\mathbf{.}\bot \mathbf{)}}\) :

\(F{\mathop {\longrightarrow }\limits ^{1}} \bot \), for every F.

Every argument fully attacks (at least implicitly) the clearly false claim \(\bot \).

Note that \(\bot \) is intended to stand for any obviously false statement. Therefore no incoherence should arise from stipulating that any argument implicitly rejects an argument with claim \(\bot \) without qualification regarding the weight of the attack.

Justifying principles involving attacks on implicative claims is more delicate. It is important to keep in mind that we only want to consider material, even truth-functional implication here, and hence do not investigate proper (intensional) conditionals or counter-factual statements. This means that we (once more) look for principles that only refer to the weights of attacks on the claim and on its immediate subformulas, respectively. In light of the classical principles \({\mathbf{(A.}\supset \mathbf{)}}\) and \({\mathbf{(C.}\supset \mathbf{)}}\), at least the following candidates are worth considering.

\({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{0}} A\supset B\), then \(x \ge y\).

If an implication is not attacked at all, then the implying formula is attacked with at least the same weight as the implied formula.

\({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z>0}} A\supset B\), then \(x < y\).

If an implication is attacked with some positive weight, then the implying formula is attacked with a strictly smaller weight than the implied formula.

These two principles are equivalent to the following reformulations, respectively.

\({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(x < y\), then \(F{\mathop {\longrightarrow }\limits ^{z>0}} A\supset B\).

If the implied formula is attacked with a higher weight than the implying formula, then the implication is attacked with some positive weight.

\({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(x \ge y\), then \(F{\mathop {\longrightarrow }\limits ^{0}} A\supset B\).

If the implying formula is attacked with at least the same weight as the implied formula, then the implication is not attacked at all.

These reformulations make transparent that \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) jointly express the principle that an implication is attacked with some positive weight if and only if the implied formula is attacked with a higher weight than the implying formula. But no restriction is made on the amount of (positive) weight of the attack on the implication in relation to the weights of the attacks on its subformulas. The following principle bounds the weight of an attack on an implication by the weight of the corresponding attack on the implied formula. This seems reasonable, if we take into account that we aim at characterizing strictly material implication, here.

\({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{y}} B\) and \(F{\mathop {\longrightarrow }\limits ^{z}} A\supset B\), then \(z\le y\).

No implication is attacked to a higher degree than the implied formula.

Definition 3

The set of basic weighted attack principles\(\mathcal{P}_B\) consists of all principles mentioned in this section; i.e.,

$$\begin{aligned} {\mathcal{P}_B}= & {} \{\mathbf{(A}^w.\wedge \mathbf{)}, \mathbf{(A}^w.\vee \mathbf{)},\mathbf{(C}^w.\vee \mathbf{)}, \mathbf{(A}^w.\bot \mathbf{)},\mathbf{(A}^w.\supset \mathbf{)},\\&\mathbf{(C}^w.\supset \mathbf{)},\mathbf{(B}^w.\supset \mathbf{)}\}. \end{aligned}$$
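Read computationally, the principles in \(\mathcal{P}_B\) are constraints on the weights assigned by a single attacker; the following sketch (our own encoding, using the complete-graph convention introduced above, so that missing entries count as weight 0) checks them for one attacker:

# Sketch: check the principles of P_B for one fixed attacker.
# w maps attacked formulas to weights in [0,1]; atoms are strings ("bot" is ⊥),
# compounds are tuples (connective, A, B).
def satisfies_PB(w, claims):
    wt = lambda f: w.get(f, 0.0)
    for h in claims:
        if h == "bot" and wt(h) != 1.0:                  # (A^w.⊥)
            return False
        if not isinstance(h, tuple):
            continue
        op, a, b = h
        x, y, z = wt(a), wt(b), wt(h)
        if op == "and" and not z >= max(x, y):           # (A^w.∧)
            return False
        if op == "or" and not z == min(x, y):            # (A^w.∨) and (C^w.∨) jointly
            return False
        if op == "imp":
            if z > 0 and not x < y:                      # (C^w.⊃)
                return False
            if z == 0 and not x >= y:                    # (A^w.⊃)
                return False
            if not z <= y:                               # (B^w.⊃)
                return False
    return True

Running this check once per attacker, over all claims of a logically closed WSAF, amounts to checking whether the frame satisfies \(\mathcal{P}_B\).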

It is probably not surprising that the basic weighted principles, even if imposed jointly, do not suffice to determine any specific truth-functional semantics for the logical connectives. In other words: further principles will be needed to characterize particular fuzzy logics.

Example 5

Let us consider a scenario similar to Example 1, referring to the recent economic growth of China and current debates on policies about pollution. The involved statements are the following:

  • “Rapid economic growth occurs.” (G)

  • “Very high level of \(\hbox {CO}_2\) emissions occurs.” (C)

  • “Overall prosperity increases.” (P)

  • “Awareness about the need of environmental protection increases.” (E)

  • “Strict regulations concerning \(\hbox {CO}_2\) emissions are put in place.” (R)

  • “Industry invests in ‘green’ production methods.” (I)

Again, in addition to such statements, also certain logical compounds of these statements might well be considered as claims of arguments. Let a concrete corresponding SAF \(S_E=({\mathcal{A}},{R_{\rightarrow }})\) be given by \(\mathcal{A}=\{I, C, P, G, C\wedge P, P\vee I, E\supset R \}\) and \(R_{\rightarrow }=\{I {\longrightarrow }C, C{\longrightarrow }P\vee I, G{\longrightarrow }E\supset R\}\).

Imposing our attack principles for the unweighted frames results in additional (implicit) attacks that augment \(R_{\rightarrow }\); among them \(I {\longrightarrow }C\wedge P\), \(C {\longrightarrow }I\) and \(G{\longrightarrow }R\), using \({\mathbf{(A.}\wedge \mathbf{)}}\), \({\mathbf{(A.}\vee \mathbf{)}}\) and \({\mathbf{(C.}\supset \mathbf{)}}\), respectively.

So far, we have not yet considered any weights attached to the indicated attacks. However, it is perfectly conceivable that not all of the mentioned attacks are equally plausible or equally agreed upon among a group of experts. Of course, to systematically derive certain weights of particular attacks, we would have to analyze the underlying arguments and not just the claims of these arguments. (Remember that only the latter are recorded in SAFs.) But even without access to such information, it is plausible that, e.g., arguments claiming I are only considered partly successful in attacking arguments claiming C. Similarly, the two other attacks registered in the SAF \(S_E\) may also receive weights less than 1. Concretely, suppose that we have the following weights on the attack relations: \(I{\mathop {\longrightarrow }\limits ^{0.2}} C\), \(C{\mathop {\longrightarrow }\limits ^{0.7}} P \vee I\) and \(G{\mathop {\longrightarrow }\limits ^{0.5}} E \supset R\). Then the logical principles discussed in this section entail that further attacks than those considered explicitly so far should be taken into account. For example, principle \({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\), applied to \(I{\mathop {\longrightarrow }\limits ^{0.2}} C\), yields that arguments claiming I will attack arguments claiming \(C \wedge P\) with at least the same weight (0.2) with which they attack arguments claiming C. We record this by writing \(I \xrightarrow []{z_1 \ge 0.2} C \wedge P\). Similarly, we can apply \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\) to \(C{\mathop {\longrightarrow }\limits ^{0.7}} P \vee I\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) to \(G{\mathop {\longrightarrow }\limits ^{0.5}} E \supset R\) to obtain \(C \xrightarrow []{z_2 \ge 0.7} I\) and \(G\xrightarrow []{z_3 > 0} R\), respectively.
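The bounds derived in this example can also be obtained mechanically; a minimal sketch over the three given weights (the variable names are ours):

w_I_C   = 0.2    # I --0.2--> C
w_C_PvI = 0.7    # C --0.7--> P ∨ I
w_G_EiR = 0.5    # G --0.5--> E ⊃ R

lower_I_CandP = w_I_C          # (A^w.∧): I attacks C ∧ P with weight z1 ≥ 0.2
lower_C_I     = w_C_PvI        # (A^w.∨): C attacks I with weight z2 ≥ 0.7
G_attacks_R   = w_G_EiR > 0    # (C^w.⊃): G attacks R with some weight z3 > 0

print(lower_I_CandP, lower_C_I, G_attacks_R)   # 0.2 0.7 True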

In Dunne et al. (2011), three different ways of imposing weights are analyzed: weights can be interpreted as measures of votes in support of attacks, as a measure of inconsistency between arguments or, more generally, as rankings of different types of attack. Under all three interpretations at least some of our attack principles are straightforwardly justified. In particular, using the first interpretation it is easy to see that \({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\), \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\), \({\mathbf{(A}^w\mathbf{.}\bot \mathbf{)}}\), and \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) hold. For example, if x is the number of votes in support of the attack \(F {\longrightarrow }A\) and y is the number of votes in support of \(F {\longrightarrow }B\), then the number of votes in support of \(F {\longrightarrow }A \wedge B\) can be neither below x nor below y, because agents that support either \(F {\longrightarrow }A\) or \(F {\longrightarrow }B\), if acting rationally, will also support the attack \(F {\longrightarrow }A \wedge B\). \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\), \({\mathbf{(A}^w\mathbf{.}\bot \mathbf{)}}\), and \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) can be justified analogously.

We do not aim at an analysis of concrete arguments or at a new method for assigning weights to attacks between arguments. Rather we want to explore under which conditions given weighted argumentation frames can be used to extract a many-valued semantics for the involved claims. For this purpose we are now going to introduce a semantic notion that re-frames logical validity as immunity with respect to attacks that adhere to rationality principles like those discussed in this section, but also later, in Sects. 5, 6, and 8.

4 Argumentative immunity

Remember that we are actually not interested in concrete (weighted or unweighted) argumentation frameworks. Rather we want to relate fuzzy logics to the realm of all possible weighted argumentation frames that satisfy certain attack principles, like the ones discussed in the last section. Since we cannot expect any given WSAF to already contain explicitly all arguments and all attacks that are required in order to make it possible to satisfy such principles, we introduce the following closure operation.

Definition 4

A WSAF S is logically closed with respect to \(\varGamma \) if all formulas and subformulas of formulas in \(\varGamma \) occur as claims of some argument in S.

We will suppress the explicit reference to \(\varGamma \) whenever the context makes clear what formulas are expected to be available as claims of arguments in the relevant WSAF. For example, in speaking of an argumentation frame S that satisfies the principle \({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\) it is implicitly understood that S is closed at least with respect to \(\{A \wedge B\}\) and thus contains not only attacks (possibly of weight 0) on \(A \wedge B\), but also attacks on A and on B.

Definition 5

Let \(\mathcal P\) be a set of (weight-related) attack principles. We call a formula F \(\mathcal{P}\) argumentatively immune (\(\mathcal{P}\)-immune, for short) if there is no logically closed WSAF (with respect to \(\{F\}\)) that satisfies the principles in \(\mathcal P\) and contains an argument that attacks F with some weight \(>0\).

Argumentative immunity is intended as a notion that provides a new view on logical validity, which is not based on Tarski-style semantics, but rather only refers to claims of arguments (that may or may not be interpreted in the usual way) and to the weights of explicit or implicit attacks between them. To illustrate its use, consider the following proposition, which refers to the axiom of (pre-)linearity, an axiom that is characteristic of all t-norm-based fuzzy logics.
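Definition 5 quantifies over all logically closed WSAFs that satisfy the chosen principles, which suggests a simple, though incomplete, computational test: search a discretized grid of weight assignments for a single attacker over the subformulas of F and look for one that satisfies the principles while attacking F with positive weight. Failure to find a witness on the grid is of course only evidence, not a proof; the sketch below (our own encoding) carries this out for the prelinearity formula and the two principles used in Proposition 1 below.

from itertools import product

# Sketch: grid search for a counter-witness to the immunity of (A ⊃ B) ∨ (B ⊃ A)
# with respect to {(A^w.∨), (C^w.⊃)}.
A, B = "A", "B"
AiB, BiA = ("imp", A, B), ("imp", B, A)
lin = ("or", AiB, BiA)
subformulas = [A, B, AiB, BiA, lin]
grid = [i / 10 for i in range(11)]

def respects_principles(w):
    if not w[lin] <= min(w[AiB], w[BiA]):                # (A^w.∨)
        return False
    for (a, b, imp) in [(A, B, AiB), (B, A, BiA)]:
        if w[imp] > 0 and not w[a] < w[b]:               # (C^w.⊃)
            return False
    return True

found = any(w[lin] > 0 and respects_principles(w)
            for vals in product(grid, repeat=len(subformulas))
            for w in [dict(zip(subformulas, vals))])
print(found)   # False: no attacker on the grid attacks the formula with positive weight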

Proposition 1

The formula \((A \supset B) \vee (B \supset A)\) is \(\{\)\({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\),\({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\)\(\}\)-immune.

Proof

Let S be a WSAF that satisfies \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\). We proceed indirectly and assume that S contains an argument X that attacks \((A \supset B) \vee (B \supset A)\) with some positive weight z. By \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\), we obtain \(X {\mathop {\longrightarrow }\limits ^{x>0}} A\supset B\) and \(X {\mathop {\longrightarrow }\limits ^{y>0}} B\supset A\). Applying \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) to these attacks yields a contradiction: we obtain \(X {\mathop {\longrightarrow }\limits ^{u}} A\) and \( X {\mathop {\longrightarrow }\limits ^{v}} B\), where \(u<v\) because of \(X {\mathop {\longrightarrow }\limits ^{x>0}} A\supset B\) and \(v<u\) because of \(X {\mathop {\longrightarrow }\limits ^{y>0}} B\supset A\). \(\square \)

More generally, our aim is to investigate with respect to which collections of attack principles some fundamental fuzzy logics are argumentatively sound and complete, respectively. By argumentative soundness we mean that all valid formulas are argumentatively immune; argumentative completeness is the converse: all argumentatively immune formulas are logically valid.

Regarding argumentative soundness, the following observation is crucial: argumentative immunity is preserved under applications of modus ponens whenever \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) is satisfied. More precisely, the following holds.

Proposition 2

If G as well as \(G\supset F\) are argumentatively \(\mathcal P\)-immune, then F is also argumentatively \(\mathcal P\)-immune, provided that \(\mathcal P\) contains \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\).

Proof

Suppose that F is not argumentatively \(\mathcal P\)-immune. This means that there is a WSAF S that is logically closed (with respect to at least \(\{G\supset F\}\)), such that S satisfies all principles in \(\mathcal P\) and contains an argument X attacking F with positive weight (\(X {\mathop {\longrightarrow }\limits ^{z>0}} F\)). We make the following case distinction.

  (1) \(X {\mathop {\longrightarrow }\limits ^{x>0}} G\supset F\): this means that \(G\supset F\), too, is not \(\mathcal P\) argumentatively immune.

  (2) \(X {\mathop {\longrightarrow }\limits ^{0}} G\supset F\): then, according to principle \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\), we have \(X {\mathop {\longrightarrow }\limits ^{x}}G\) and \(X{\mathop {\longrightarrow }\limits ^{y}}F\), where \(x\ge y\). But by the assumption \(X {\mathop {\longrightarrow }\limits ^{z>0}} F\) we have \(y = z > 0\) and thus also \(x > 0\). In other words, in this case the first premise G is not \(\mathcal P\) argumentatively immune.

To sum up: we have shown, indirectly, that F is \(\mathcal P\) argumentatively immune if both G and \(G\supset F\) are \(\mathcal P\) argumentatively immune, assuming that \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) is among the principles collected in \(\mathcal P\). \(\square \)

5 Characterizing Gödel logic

Propositional finite-valued Gödel logics were introduced (implicitly) by Gödel (1933) to show that intuitionistic logic does not have a characteristic finite matrix. Dummett (1959) later generalized these to an infinite set of truth values and showed that the set of its tautologies is axiomatized by intuitionistic logic extended by the prelinearity axiom \((A \supset B) \vee (B \supset A)\). Hence infinite-valued Gödel logic \(\mathsf{G}\) is also called Gödel–Dummett logic or Dummett’s \(\mathsf{LC}\). Gödel logics naturally turn up in a number of different areas of logic and computer science. For instance, Dunn and Meyer (1971) pointed out their relation to relevance logics; Visser (1982) employed \(\mathsf{G}\) in investigations of the provability logic of Heyting arithmetic. Most importantly in our context, \(\mathsf{G}\) has been recognized as one of the most important formalizations of fuzzy logic (Hájek 2001).

We will first review the semantics and a Hilbert-style proof system for \(\mathsf{G}\) and then proceed in three steps.

  1. We introduce two further attack principles \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\) that have not been considered in Sect. 3.

  2. We show that all formulas that are derivable in the Hilbert-style system for Gödel logic are argumentatively immune with respect to \({\mathcal{P}_B} \cup \) {\({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\) }.

  3. Conversely, we show that all formulas that are argumentatively immune in this specific sense are also valid according to Gödel logic.

Recall the semantics of Gödel logic: every assignment \(I\) of truth values in [0, 1] to propositional variables is extended to non-atomic formulas as follows:

$$\begin{aligned}&\begin{array}{lcl} \Vert { A \wedge B}\Vert ^{\mathsf{G}}_{{I}} = \min \left( \Vert { A}\Vert ^{\mathsf{G}}_{{I}}, \Vert { B}\Vert ^{\mathsf{G}}_{{I}}\right) , \\ \Vert { A \vee B}\Vert ^{\mathsf{G}}_{{I}} = \max \left( \Vert { A}\Vert ^{\mathsf{G}}_{{I}}, \Vert { B}\Vert ^{\mathsf{G}}_{{I}}\right) , \end{array}\\&\begin{array}{c} \Vert { A \supset B}\Vert ^{\mathsf{G}}_{{I}} = {\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \Vert { A}\Vert ^{\mathsf{G}}_{{I}} \le \Vert { B}\Vert ^{\mathsf{G}}_{{I}} \\ \Vert { B}\Vert ^{\mathsf{G}}_{{I}} &{} \quad \text{ otherwise }. \end{array}\right. } \end{array} \end{aligned}$$

\(\lnot A\) is defined as \(A \supset \bot \), hence

$$\begin{aligned} \begin{array}{lc@{}l} \Vert { \lnot A}\Vert ^{\mathsf{G}}_{{I}} = {\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \Vert { A}\Vert ^{\mathsf{G}}_{{I}} = 0\\ 0 &{} \quad \text{ otherwise } \end{array}\right. } \end{array} \end{aligned}$$

For the atomic formula \(\bot \) we have \(\Vert { \bot }\Vert ^{\mathsf{G}}_{{I}} = 0\). F is \(\mathsf{G}\)-valid if \(\Vert { F}\Vert ^{\mathsf{G}}_{{I}}=1\) for all assignments \(I\).
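A direct implementation of these truth conditions may help to follow the constructions below; a minimal sketch (formulas encoded as in the earlier sketches, helper names are ours):

# Sketch: evaluating formulas in Gödel logic G.
# Atoms are strings ("bot" is ⊥); compounds are tuples (connective, A, B); ¬A is ("imp", A, "bot").
def goedel_val(f, I):
    if f == "bot":
        return 0.0
    if isinstance(f, str):
        return I[f]
    op, a, b = f
    va, vb = goedel_val(a, I), goedel_val(b, I)
    if op == "and":
        return min(va, vb)
    if op == "or":
        return max(va, vb)
    if op == "imp":
        return 1.0 if va <= vb else vb
    raise ValueError(op)

lin = ("or", ("imp", "A", "B"), ("imp", "B", "A"))     # the prelinearity axiom [Lin]
print(goedel_val(lin, {"A": 0.3, "B": 0.8}))           # 1.0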

Gödel logic can be axiomatized in various ways. Below, we will refer to the Hilbert-style system consisting of the following axioms:

$$\begin{aligned} \begin{array}{rl} {}[{\supset }\text{- }1]: &{} F \supset (G \supset F)\\ {}[{\supset }\text{- }2]: &{} (F \supset (G \supset H)) \supset ((F \supset G) \supset (F \supset H))\\ {}[{\wedge }\text{- }1]: &{} (F \wedge G) \supset F\\ {}[{\wedge }\text{- }2]: &{} (F \wedge G) \supset G\\ {}[{\wedge }\text{- }3]: &{} F\supset (G\supset (F \wedge G))\\ {}[{\vee }\text{- }1]: &{} F \supset (F \vee G)\\ {}[{\vee }\text{- }2]: &{} G \supset (F \vee G)\\ {}[{\vee }\text{- }3]: &{} (G \supset F) \supset ((H\supset F) \supset ((G\vee H) \supset F))\\ {}[{\bot }]: &{} \bot \supset F\\ {}[{Lin}]: &{} (F \supset G) \vee (G \supset F) \end{array} \end{aligned}$$

The only inference rule is modus ponens: from F and \(F\supset G\) infer G. Note that the only axiom that is not already valid in intuitionistic logic is [Lin]. The following fact has been established by Dummett (1959).

Theorem 1

The above Hilbert-style system is sound and complete for Gödel logic. In other words: a formula F is derivable in the system iff F is \(\mathsf{G}\)-valid.

To obtain a characterization of Gödel logic in terms of argumentative immunity, we have to consider the following additional principles for weighted attacks.

\({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\) and \(F{\mathop {\longrightarrow }\limits ^{y}} B\), where \(x < y\), then \(F{\mathop {\longrightarrow }\limits ^{y}} A\supset B\).

If the implying formula is attacked with a smaller weight than the implied formula, then the implication is attacked with the same weight as the implied formula.

\({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}}B\), \(F{\mathop {\longrightarrow }\limits ^{z}} A\wedge B\), then \(z\le \max (x,y)\).

An attack against a conjunction entails an attack on at least one of its conjuncts with an equal or higher weight.

Definition 6

\(\mathcal{P}_{\mathsf{G}} = {\mathcal{P}_B} \cup \{\mathbf{(G}^w.\supset \mathbf{)},\mathbf{(C}^w.\wedge \mathbf{)}\}\).

Note that in the presence of \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) amounts to a strengthening of \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\). In other words, \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) is redundant in \(\mathcal{P}_{\mathsf{G}}\). However, it is still interesting to see in which cases it suffices to refer to \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) instead of to the stronger principle \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\).
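To see the redundancy, consider an attacker F with \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z}} A\supset B\), and distinguish two cases:

$$\begin{aligned} x \ge y:&\quad {\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}} \text{ (in its reformulated version) yields } z = 0 \le y,\\ x < y:&\quad {\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}} \text{ yields } z = y \le y. \end{aligned}$$

In either case \(z \le y\), which is exactly what \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) demands.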

Theorem 2

(Argumentative soundness of \(\mathsf{G}\)) Every \(\mathsf{G}\)-valid formula is \(\mathcal{P}_{\mathsf{G}}\) argumentatively immune.

Proof

By Theorem 1 and Proposition 2, it remains to check that the axioms for Gödel logic are \(\mathcal{P}_{\mathsf{G}}\)-immune. In the following, we implicitly assume that all arguments occur in a WSAF that is logically closed with respect to the axiom in question. In each case we argue indirectly, deriving a contradiction from the assumption that there is an argument X that attacks the axiom in question with some positive weight.

\([\supset \)-1]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} F \supset (G\supset F)\), then by \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we obtain \(f < y\), where f is given by \(X {\mathop {\longrightarrow }\limits ^{f}} F\) and y is given by \(X {\mathop {\longrightarrow }\limits ^{y}} G\supset F\). On the other hand, applying \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) to the latter statement yields \(y \le f\), which is a contradiction.

\([\supset \)-2]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} (F \supset (G \supset H)) \supset ((F \supset G) \supset (F \supset H))\). Then by \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), we obtain \(x < y\), where \( X {\mathop {\longrightarrow }\limits ^{x}} F \supset (G \supset H)\) and \(X {\mathop {\longrightarrow }\limits ^{y}} (F \supset G) \supset (F \supset H)\). Since \(y>0\) we can apply \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) to obtain \(v<w\), where \(X {\mathop {\longrightarrow }\limits ^{v}} F \supset G\) and \(X {\mathop {\longrightarrow }\limits ^{w}} F \supset H\). Since \(w>0\), \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) implies \(f<h\), where \(X {\mathop {\longrightarrow }\limits ^{f}} F\) and \(X {\mathop {\longrightarrow }\limits ^{h}} H\). In addition, we write \(X {\mathop {\longrightarrow }\limits ^{g}} G\) and \(X {\mathop {\longrightarrow }\limits ^{u}} G \supset H\) for the attacks on the remaining subformulas. We can also apply \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\), justified by \(v < w\), to obtain \(y=w\). Applying \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) again, justified by \(f < h\), yields \(w=h\). For reference below, we assign the following labels to some of the facts established so far: (1) \(x<y\), (2) \(f<h\), and (3) \(y=h\).

We show that each of the following cases leads to a contradiction.

\(g<h\)::

By \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) this implies \(u=h\). By (2) we obtain \(f<u\) and thus can apply \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) to obtain \(x=u\). Jointly, this yields \(x=h\) and hence, by (3), also \(x=y\), which contradicts (1).

\(g\ge h\) and \(f<g\)::

By \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\), \(f<g\) yields \(g=v\). Because of (2), \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) also leads to \(w=h\). Therefore \(g\ge h\) implies \(w\le v\). By \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), the latter entails \(y=0\), which contradicts (1).

\(g\ge h\) and \(f\ge g\)::

By transitivity we have \(f \ge h\), which contradicts (2).

\([\wedge \)-1]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} (F \wedge G)\supset F\). Then by \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we obtain \(x<f\), where \(X {\mathop {\longrightarrow }\limits ^{x}} F\wedge G\) and \(X {\mathop {\longrightarrow }\limits ^{f}} F\). On the other hand, by \({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\), we obtain \(x \ge \max (f, g)\), where g is given by \(X {\mathop {\longrightarrow }\limits ^{g}} G\). This in particular implies \(x \ge f\). Thus we have a contradiction, since X cannot attack \(F \wedge G\) with a weight that is both smaller than f and greater than or equal to f.

\([\wedge \)-2]::

Analogous to \([\wedge \)-1].

\([\wedge \)-3]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} F\supset (G\supset (F \wedge G))\). By applying \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) twice, we first obtain \(f < y\) and then \(g < x\), where \(X {\mathop {\longrightarrow }\limits ^{y}} G\supset (F \wedge G)\), \(X {\mathop {\longrightarrow }\limits ^{x}}F \wedge G\), \(X {\mathop {\longrightarrow }\limits ^{f}} F\) and \(X {\mathop {\longrightarrow }\limits ^{g}} G\). On the other hand, by \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\), we obtain \(x \le f\) or \(x \le g\). The latter case clearly contradicts \(g<x\). To obtain a contradiction also in the first case, we apply \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\), justified by \(g<x\), to \(X {\mathop {\longrightarrow }\limits ^{g}} G\) and \(X {\mathop {\longrightarrow }\limits ^{x}}F \wedge G\) to obtain \(y=x\), and consequently \(f < y=x \le f\).

\([\vee \)-1]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} F \supset (F \vee G)\). Then by \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we obtain \(f<y\), where \(X {\mathop {\longrightarrow }\limits ^{f}} F\) and \(X {\mathop {\longrightarrow }\limits ^{y}} F \vee G\). On the other hand, by \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\), we have \(y \le f\), which contradicts the previous assertion.

\([\vee \)-2]::

Analogous to \([\vee \)-1].

\([\vee \)-3]::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} (G \supset F) \supset ((H \supset F) \supset ((G\vee H) \supset F))\). We first name the weights of attacks by X on subformulas: \(X {\mathop {\longrightarrow }\limits ^{x}} (H \supset F) \supset ((G\vee H) \supset F)\), \(X {\mathop {\longrightarrow }\limits ^{y}} G \supset F\), \(X {\mathop {\longrightarrow }\limits ^{u}} H \supset F\), \(X {\mathop {\longrightarrow }\limits ^{v}} (G\vee H) \supset F\), \(X {\mathop {\longrightarrow }\limits ^{w}} G\vee H\), \(X {\mathop {\longrightarrow }\limits ^{f}} F\), \(X {\mathop {\longrightarrow }\limits ^{g}} G\), and finally \(X {\mathop {\longrightarrow }\limits ^{h}} H\).

By successively applying \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), we obtain \(y< x\), \(u<v\), and \(w<f\).

Next, \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) yields \(v=f\), because \(w< f\), and \(x=v\), because \(u < v\). Below, we will refer to \(x=v=f\) as \((*)\).

Finally, we show that each of the following cases leads to a contradiction.

\(g<f\)::

By \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) this implies \(y=f\). This contradicts \(y<x\) combined with \((*)\).

\(h<f\)::

By \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) this implies \(u=f\). This contradicts \(u<v\) combined with \((*)\).

\(g\ge f\) and \(h\ge f\)::

This means that \(f\le \min (g,h)\). By applying \({\mathbf{(C}^w\mathbf{.}\vee \mathbf{)}}\) to \(X {\mathop {\longrightarrow }\limits ^{w}} G\vee H\) we obtain that \(g\le w\) or \(h \le w\). But above we have shown \(w<f\), and thus obtain a contradiction in both cases.

\([\bot {]}\)::

Assume that \(X {\mathop {\longrightarrow }\limits ^{z>0}} \bot \supset F\). By \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we obtain that \(x<f\), where \(X {\mathop {\longrightarrow }\limits ^{x}} \bot \) and \(X {\mathop {\longrightarrow }\limits ^{f}} F\). This directly contradicts principle \({\mathbf{(A}^w\mathbf{.}\bot \mathbf{)}}\), which requires that \(x=1\).

[Lin]::

By Proposition 1.

\(\square \)

We remark in passing that, to guarantee the argumentative immunity of \([\supset \)-2] and \([{\vee }\text{- }3]\), one cannot trade \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\) for any principle already contained in \(\mathcal{P}_B\). Likewise one can show that the ‘strong’ principle \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\) is indeed needed to render \([{\wedge }\text{- }3]\) argumentatively immune. All other axioms are already \(\mathcal{P}_B\) argumentatively immune.
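For instance, the following sketch (our own witness, not taken from the above proof) exhibits a single attacker whose weights satisfy all principles of \(\mathcal{P}_B \cup \{\mathbf{(G}^w.\supset \mathbf{)}\}\)—the disjunction and \(\bot \) principles hold vacuously—while attacking \([{\wedge }\text{- }3]\) with positive weight; the witness necessarily violates \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\).

# Sketch: an attacker X over the subformulas of [∧-3] = F ⊃ (G ⊃ (F ∧ G)).
F, G = "F", "G"
FaG   = ("and", F, G)
inner = ("imp", G, FaG)
ax    = ("imp", F, inner)
w = {F: 0.3, G: 0.2, FaG: 0.9, inner: 0.9, ax: 0.9}

imps = [(G, FaG, inner), (F, inner, ax)]                         # (implying, implied, implication)
checks = [
    w[FaG] >= max(w[F], w[G]),                                   # (A^w.∧)
    all(w[i] == 0 or w[a] < w[b] for (a, b, i) in imps),         # (C^w.⊃)
    all(w[i] > 0 or w[a] >= w[b] for (a, b, i) in imps),         # (A^w.⊃)
    all(w[i] <= w[b] for (a, b, i) in imps),                     # (B^w.⊃)
    all(w[a] >= w[b] or w[i] == w[b] for (a, b, i) in imps),     # (G^w.⊃)
]
print(all(checks), w[ax] > 0)          # True True
print(w[FaG] <= max(w[F], w[G]))       # False: (C^w.∧) fails, as it must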

Before showing the converse of Theorem 2—namely, argumentative completeness of \(\mathsf{G}\)—let us observe that classical logic is not argumentatively sound with respect to \(\mathcal{P}_{\mathsf{G}}\): \(F \vee \lnot F\) (i.e., \(F\vee (F\supset \bot )\)) is not \(\mathcal{P}_{\mathsf{G}}\)-immune. Consider a WSAF that just contains four arguments with claims \(\bot \), F, \(\lnot F\) (\(= F\supset \bot \)) and \(F\vee \lnot F\), respectively, and where the weights of attacks between these arguments are as specified in the following matrix (the entry in the row labeled X and the column labeled Y is the weight of the attack of X on Y):

$$\begin{aligned} \begin{array}{c|cccc} {\mathop {\longrightarrow }\limits ^{w}} &{} \bot &{} F &{} F \supset \bot &{} F \vee \lnot F \\ \hline \bot &{} 1 &{} 0 &{} 1 &{} 0 \\ F &{} 1 &{} 0.5 &{} 1 &{} 0.5 \\ F\supset \bot &{} 1 &{} 1 &{} 0 &{} 0 \\ F \vee \lnot F &{} 1 &{} 0 &{} 1 &{} 0\\ \end{array} \end{aligned}$$

It is straightforward to check that all principles of \(\mathcal{P}_{\mathsf{G}}\) are satisfied in this WSAF. Since \(F\vee \lnot F\) is attacked with weight 0.5 by F, it is not \(\mathcal{P}_{\mathsf{G}}\)-immune.
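This check can also be carried out mechanically; a small sketch (our own encoding, reading the matrix row-wise, i.e., with rows as attackers):

# Sketch: every attacker (row) of the matrix above satisfies the principles of P_G.
NEG_F = ("imp", "F", "bot")                     # ¬F, i.e., F ⊃ ⊥
EM = ("or", "F", NEG_F)                         # F ∨ ¬F
rows = {
    "bot":  {"bot": 1.0, "F": 0.0, NEG_F: 1.0, EM: 0.0},
    "F":    {"bot": 1.0, "F": 0.5, NEG_F: 1.0, EM: 0.5},
    NEG_F:  {"bot": 1.0, "F": 1.0, NEG_F: 0.0, EM: 0.0},
    EM:     {"bot": 1.0, "F": 0.0, NEG_F: 1.0, EM: 0.0},
}

def row_ok(w):
    ok = w["bot"] == 1.0                                     # (A^w.⊥)
    x, y, z = w["F"], w["bot"], w[NEG_F]                     # the implication F ⊃ ⊥
    ok &= (z == 0 or x < y) and (z > 0 or x >= y)            # (C^w.⊃), (A^w.⊃)
    ok &= z <= y and (x >= y or z == y)                      # (B^w.⊃), (G^w.⊃)
    a, b, d = w["F"], w[NEG_F], w[EM]                        # the disjunction F ∨ ¬F
    ok &= d == min(a, b)                                     # (A^w.∨) and (C^w.∨)
    return ok                                                # (A^w.∧), (C^w.∧): vacuous, no conjunctions

print(all(row_ok(w) for w in rows.values()))                 # True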

Theorem 3

(Argumentative completeness of \(\mathsf{G}\)) Every \(\mathcal{P}_{\mathsf{G}}\) argumentatively immune formula is \(\mathsf{G}\)-valid.

Proof

We proceed indirectly. Suppose that F is not \(\mathsf{G}\)-valid. This means that there is an assignment \(I\) such that \(\Vert { F}\Vert ^{\mathsf{G}}_{{I}}<1\). Taking \(I\) as a starting point, we construct a WSAF \(S_I\) that is logically closed with respect to \(\{F\}\) and satisfies the attack principles in \(\mathcal{P}_{\mathsf{G}}\) such that \(X{\mathop {\longrightarrow }\limits ^{z>0}} F\) for some (claim of an) argument X in \(S_I\).

We define \(S_I\) by assigning the weight \(1-\Vert { G}\Vert ^{\mathsf{G}}_{{I}}\) to each edge (H, G) of the attack relation of \(S_I\). In other words, we stipulate that every (claim of an) argument is attacked by every other argument and by itself with a weight that is inverse to its degree of truth in \(I\).
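For instance, for the formula \(F\vee \lnot F\) and an assignment with \(I(F)=0.5\), the construction can be spelled out as follows (a small sketch; the truth values of the subformulas are read off the Gödel truth conditions):

# Sketch: the canonical WSAF S_I for F ∨ ¬F under I(F) = 0.5.
truth = {"bot": 0.0, "F": 0.5, "not F": 0.0, "F or not F": 0.5}      # Gödel truth values under I

# Every claim attacks every claim (itself included) with weight 1 - truth(attacked claim).
weight = {(h, g): 1.0 - truth[g] for h in truth for g in truth}

print(weight[("F", "F or not F")])   # 0.5 > 0: F ∨ ¬F is attacked with positive weight

This yields another witness, besides the matrix displayed before Theorem 3, for the fact that \(F \vee \lnot F\) is not \(\mathcal{P}_{\mathsf{G}}\)-immune.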

It remains to check that all attack principles in \(\mathcal{P}_{\mathsf{G}}\) are satisfied in \(S_I\).

\({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\), \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\)::

Let \(A\wedge B\) be an argument in \(S_I\) and let \(\Vert { A\wedge B}\Vert ^{\mathsf{G}}_{{I}}=u\), \(\Vert { A}\Vert ^{\mathsf{G}}_{{I}}=v\), and \(\Vert { B}\Vert ^{\mathsf{G}}_{{I}}=w\). Then, by definition of \(S_I\), we have \(F {\mathop {\longrightarrow }\limits ^{1-u}} A\wedge B\), \(F {\mathop {\longrightarrow }\limits ^{1-v}} A\), and \(F {\mathop {\longrightarrow }\limits ^{1-w}} B\) for every argument F in \(S_I\). Moreover, since \(\Vert { A\wedge B}\Vert ^{\mathsf{G}}_{{I}}= \min (\Vert { A}\Vert ^{\mathsf{G}}_{{I}},\Vert { B}\Vert ^{\mathsf{G}}_{{I}})\), i.e., \(u=\min (v,w)\), we obtain \(1-{u}= \max (1-v,1-w)\), which satisfies both \({\mathbf{(A}^w\mathbf{.}\wedge \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\wedge \mathbf{)}}\).

\({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\),\({\mathbf{(C}^w\mathbf{.}\vee \mathbf{)}}\)::

Let \(A \vee B\) be an argument in \(S_I\) and let \(\Vert { A\vee B}\Vert ^{\mathsf{G}}_{{I}}=u\), \(\Vert { A}\Vert ^{\mathsf{G}}_{{I}}=v\), and \(\Vert { B}\Vert ^{\mathsf{G}}_{{I}}=w\). Then, by definition of \(S_I\), we have \(F {\mathop {\longrightarrow }\limits ^{1-u}} A\vee B\), \(F {\mathop {\longrightarrow }\limits ^{1-v}} A\), and \(F {\mathop {\longrightarrow }\limits ^{1-w}} B\) for every argument F in \(S_I\). Moreover, since \(\Vert { A\vee B}\Vert ^{\mathsf{G}}_{{I}}= \max (\Vert { A}\Vert ^{\mathsf{G}}_{{I}},\Vert { B}\Vert ^{\mathsf{G}}_{{I}})\), i.e., \(u=\max (v, w)\) we obtain \(1-u= \min (1-v,1-w)\). Consequently \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\vee \mathbf{)}}\) are satisfied.

\({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\),\({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\)::

Let \(A \supset B\) be an argument in \(S_I\) and let \(\Vert { A\supset B}\Vert ^{\mathsf{G}}_{{I}}=u\), \(\Vert { A}\Vert ^{\mathsf{G}}_{{I}}=v\), and \(\Vert { B}\Vert ^{\mathsf{G}}_{{I}}=w\). Then, by definition of \(S_I\), we have \(F {\mathop {\longrightarrow }\limits ^{1-u}} A\supset B\), \(F {\mathop {\longrightarrow }\limits ^{1-v}} A\), and \(F {\mathop {\longrightarrow }\limits ^{1-w}} B\) for every argument F in \(S_I\). By the definition of the truth function for implication in \(\mathsf{G}\), we obtain

$$\begin{aligned} u = {\left\{ \begin{array}{ll} 1 &{} \text{ if } v \le w \\ w &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

and consequently

$$\begin{aligned} 1- u = {\left\{ \begin{array}{ll} 0 &{} \text{ if } 1-w \le 1-v \\ 1- w &{} \text{ otherwise }. \end{array}\right. }. \end{aligned}$$

But this means that indeed all attack principles in \(\mathcal{P}_{\mathsf{G}}\) regarding implication are satisfied.

\({\mathbf{(A}^w\mathbf{.}\bot \mathbf{)}}\)::

By definition every argument in \(S_I\) attacks \(\bot \) with weight \(1 = 1- \Vert { \bot }\Vert ^{\mathsf{G}}_{{I}}\).

\(\square \)

Remark 1

It is well known that the truth of formulas in Gödel logic actually does not depend on the absolute values of the degrees of truth (other than 0 and 1) assigned to atomic propositions, but only on the relative order of these values. This fact has repercussions for the argumentation-based interpretation of Gödel logics discussed above. It means that argumentative immunity with respect to the principles \(\mathcal{P}_{\mathsf{G}}\) only concerns the relative order of weights. This in turn implies that we may focus on weighted argumentation frames, where the weights attached to attacks between arguments reflect rankings of attacks, which is one of three possible ways of assigning meaning to weights in Dunne et al. (2011).

6 Characterizing Łukasiewicz and product logic

Gödel logic \(\mathsf{G}\) is only one of three fundamental t-norm-based fuzzy logics (Hájek 2001). The other two are Łukasiewicz logic \(\mathsf{\L }\) and product logic \(\mathsf{P}\). In this section we explore attack principles with respect to which \(\mathsf{\L }\) and \(\mathsf{P}\) are argumentatively sound and complete. The discussion of possible interpretations of these principles is deferred to Sect. 7.

Both \(\mathsf{\L }\) and \(\mathsf{P}\) feature not only the ‘lattice conjunction’ or ‘weak conjunction’ \(\wedge \), specified by \(\min \) as in Gödel logic, but also a second, non-idempotent ‘strong conjunction’, which we will denote by \( \, \& \,\). It is specified by the Łukasiewicz and product t-norm, respectively. More precisely, the (standard) semantics for strong conjunction in \(\mathsf{\L }\) and \(\mathsf{P}\), respectively, is given by extending assignments \(I\) over [0, 1] as follows:

$$ \begin{aligned} \begin{array}{l} \Vert { A \, \& \,B}\Vert ^{\mathsf{\L }}_{{I}} = \max (0, \Vert { A}\Vert ^{\mathsf{\L }}_{{I}}+\Vert { B}\Vert ^{\mathsf{\L }}_{{I}}-1),\\ \Vert { A \, \& \,B}\Vert ^{\mathsf{P}}_{{I}} = \Vert { A}\Vert ^{\mathsf{P}}_{{I}}\cdot \Vert { B}\Vert ^{\mathsf{P}}_{{I}}. \end{array} \end{aligned}$$

In both cases, implication is given by the residuum of the respective t-norm, which amounts to

$$\begin{aligned} \begin{array}{lcl} \Vert { A \supset B}\Vert ^{\mathsf{\L }}_{{I}} = \min (1, 1- \Vert { A}\Vert ^{\mathsf{\L }}_{{I}}+\Vert { B}\Vert ^{\mathsf{\L }}_{{I}}),\\ \Vert { A \supset B}\Vert ^{\mathsf{P}}_{{I}} = {\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \Vert { A}\Vert ^{\mathsf{P}}_{{I}} \le \Vert { B}\Vert ^{\mathsf{P}}_{{I}} \\ {\frac{\Vert { B}\Vert ^{\mathsf{P}}_{{I}}}{\Vert { A}\Vert ^{\mathsf{P}}_{{I}}}} &{} \quad \text{ otherwise }. \end{array}\right. } \end{array} \end{aligned}$$

Negation can be defined by \(\lnot A = A \supset \bot \). Given \(\Vert { \bot }\Vert ^{\mathsf{\L }}_{{I}}=\Vert { \bot }\Vert ^{\mathsf{P}}_{{I}}=0\), this amounts to the following truth functions.

$$\begin{aligned} \begin{array}{lcl} \Vert { \lnot A}\Vert ^{\mathsf{\L }}_{{I}} = 1 - \Vert { A}\Vert ^{\mathsf{\L }}_{{I}}, \\ \Vert { \lnot A}\Vert ^{\mathsf{P}}_{{I}} = {\left\{ \begin{array}{ll} 1 &{} \quad \text{ if } \Vert { A}\Vert ^{\mathsf{P}}_{{I}} = 0\\ 0 &{} \quad \text{ otherwise }. \end{array}\right. } \end{array} \end{aligned}$$
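For reference, these truth functions can be transcribed directly into executable form; the following sketch is only a compact restatement of the equations above, with function names of our own choosing.

```python
# Standard [0,1]-valued truth functions of Lukasiewicz (luk_*) and product
# (prod_*) logic, as listed above.

def luk_and(a, b):        # strong conjunction &: Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def luk_impl(a, b):       # implication: residuum of the Lukasiewicz t-norm
    return min(1.0, 1.0 - a + b)

def luk_neg(a):           # negation, defined as implication into bottom
    return 1.0 - a

def prod_and(a, b):       # strong conjunction &: product t-norm
    return a * b

def prod_impl(a, b):      # implication: residuum of the product t-norm
    return 1.0 if a <= b else b / a

def prod_neg(a):          # negation, defined as implication into bottom
    return 1.0 if a == 0.0 else 0.0
```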

Attack principles that characterize strong conjunction for \(\mathsf{\L }\) and \(\mathsf{P}\) are obtained by stipulating that the weight of an attack on a conjunction is determined by the respective co-t-norm:

\( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \( F{\mathop {\longrightarrow }\limits ^{z}}A \, \& \,B\), then \(z= \min (1,x+y)\).

\( {\mathbf{(P}^w\mathbf{.}\, \& \,\mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \( F{\mathop {\longrightarrow }\limits ^{z}}A \, \& \,B\), then \(z= x+y-xy\).

Correspondingly, we obtain the following attack principles for implications:

\({\mathbf{(\L }^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \(F{\mathop {\longrightarrow }\limits ^{z}}A \supset B\), then \(z= \max (0,y-x)\).

\({\mathbf{(P}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), where \(x<y\), and \(F{\mathop {\longrightarrow }\limits ^{z}}A \supset B\), then \(z=\frac{y-x}{1-x}\).

The condition \(x < y\) in \({\mathbf{(P}^w\mathbf{.}\supset \mathbf{)}}\) indicates that we assume that the basic attack principles \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), which cover the case where \(x \ge y\), remain in place.
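Reading the weight of an attack on a claim as 1 minus its degree of truth, one can check numerically that the four principles above prescribe exactly \(1-\Vert { A \, \& \,B}\Vert \) and \(1-\Vert { A \supset B}\Vert \) for the respective logic. The following sketch is our own illustration, with ad hoc function names:

```python
import random

# Truth functions of L and P, as in the equations of this section.
luk_and   = lambda a, b: max(0.0, a + b - 1.0)
luk_impl  = lambda a, b: min(1.0, 1.0 - a + b)
prod_and  = lambda a, b: a * b
prod_impl = lambda a, b: 1.0 if a <= b else b / a

for _ in range(10_000):
    a, b = random.random(), random.random()   # degrees of truth of A and B
    x, y = 1.0 - a, 1.0 - b                   # induced attack weights on A and B
    assert abs((1.0 - luk_and(a, b)) - min(1.0, x + y)) < 1e-9        # (L.&)
    assert abs((1.0 - prod_and(a, b)) - (x + y - x * y)) < 1e-9       # (P.&)
    assert abs((1.0 - luk_impl(a, b)) - max(0.0, y - x)) < 1e-9       # (L.>)
    if x < y:                                                          # (P.>) only covers x < y
        assert abs((1.0 - prod_impl(a, b)) - (y - x) / (1.0 - x)) < 1e-9
print("attack principles match the truth functions on all samples")
```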

Definition 7

$$ \begin{aligned} \mathcal{P}_{\mathsf{\L }}= & {} {\mathcal{P}_B} \cup \{\mathbf{(\L }^w.\, \& \,\mathbf{)},\mathbf{(\L }^w.\supset \mathbf{)}\},\\ \mathcal{P}_{\mathsf{P}}= & {} {\mathcal{P}_B} \cup \{\mathbf{(P}^w.\, \& \,\mathbf{)},\mathbf{(P}^w.\supset \mathbf{)}\}. \end{aligned}$$

Given corresponding Hilbert-style proof systems, it is straightforward to show, in analogy to Theorem 2, that \(\mathsf{\L }\) and \(\mathsf{P}\) are argumentatively sound relative to \(\mathcal{P}_{\mathsf{\L }} \) and \(\mathcal{P}_{\mathsf{P}} \), respectively. Likewise, argumentative completeness can be checked in perfect analogy to the proof of Theorem 3. Since the proofs are routine, lengthy, but not very informative, we just state the corresponding results.

Theorem 4

(Argumentative soundness and completeness of \(\mathsf{\L }\)) A formula is \(\mathcal{P}_{\mathsf{\L }}\) argumentatively immune if and only if it is \(\mathsf{\L }\)-valid.

Theorem 5

(Argumentative soundness and completeness of \(\mathsf{P}\)) A formula is \(\mathcal{P}_{\mathsf{P}}\) argumentatively immune if and only if it is \(\mathsf{P}\)-valid.

7 Justifying attack principles for \(\mathsf{\L }\) and \(\mathsf{P}\)

While the basic attack principles \(\mathcal{P}_B\) introduced in Sect. 3, as well as the additional principles introduced in Sect. 5, are easy to grasp (Footnote 2) even without any specific knowledge of Gödel logic or of fuzzy logics in general, this is hardly the case for \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) and \({\mathbf{(\L }^w\mathbf{.}\supset \mathbf{)}}\) or for \( {\mathbf{(P}^w\mathbf{.}\, \& \,\mathbf{)}}\) and \({\mathbf{(P}^w\mathbf{.}\supset \mathbf{)}}\). Indeed, considering only what we have presented in Sect. 6, one may suspect that Theorems 4 and 5 amount to purely formal and in fact rather straightforward technical observations. It is therefore highly desirable to explore to which extent these results can be employed to establish connections between weighted argumentation and fuzzy logics that shed new light on the informal meaning of argument (attack) strength on the one hand and of degrees of truth or acceptability on the other hand.

Revisiting \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) from the perspective just mentioned, we suggest attaching the following informal reading to it:

\( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \( F{\mathop {\longrightarrow }\limits ^{z}}A \, \& \,B\), then \(z= \min (1,x+y)\).

A conjunction is attacked with the weight that results from summing up the weights of attacks on its conjuncts; but the sum is capped at the maximal weight.

Summing up weights of attack is certainly very reasonable if the underlying arguments are independent. At this point it is important to recall from Sect. 3 that formulas only denote claims of arguments, but—except in a degenerate case—are not already full arguments themselves. We stipulated that \(F{\mathop {\longrightarrow }\limits ^{x}} A\) means that x is the overall weight of attack that we obtain if we take into account all arguments with claim F that attack some argument with claim A in some specific way. This now provides a basis for a modeling scenario that is able to explain the difference in meaning between weak conjunction (\(\wedge \)) and strong conjunction (\( \, \& \,\)): \(A \wedge A\) is logically equivalent to A and consequently attacks on \(A \wedge A\) are treated as indistinguishable from attacks on A; however, determining the overall weight against the claim \( A \, \& \,A\) calls for exhibiting two independent attacks on A, unless we find that already A alone is attacked with maximal weight. More generally, according to the suggested interpretation of \(\mathcal{P}_{\mathsf{\L }}\)-argumentative immunity, \( F {\mathop {\longrightarrow }\limits ^{x}} A \, \& \,B\) means that x is the (truncated) sum of weights of independent attacks with claim F on A and B, respectively. The consideration of strong conjunction in the sense of Łukasiewicz logic thus seems to be justified only with respect to argumentation frames that are rich enough to contain (also) independent arguments against corresponding claims.
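To illustrate the difference with concrete numbers of our own choosing: suppose A is attacked with overall weight 0.3, so that \(\Vert { A}\Vert ^{\mathsf{\L }}_{{I}}=0.7\) under the induced reading. Then

$$ \begin{aligned} \Vert { A\wedge A}\Vert ^{\mathsf{\L }}_{{I}} = \min (0.7,0.7)=0.7, \qquad \Vert { A \, \& \,A}\Vert ^{\mathsf{\L }}_{{I}} = \max (0, 0.7+0.7-1)=0.4, \end{aligned}$$

so \(A\wedge A\) is attacked with weight 0.3, while \( A \, \& \,A\) is attacked with weight \(\min (1, 0.3+0.3)=0.6\), in accordance with \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\).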

Example 6

Recall Examples 2 and 3, where we considered two arguments with the following respective claims: (A) “The majority of the population strongly supports its government” and (B) “The majority of the population believes that the economy is growing”. Consider the further claim (X) “Many people are worried about their future”. There clearly is some tension between X and A and, likewise, between X and B. Assume that this tension is witnessed by attacking arguments involving these claims. Suppose that we have no direct access to these arguments, but that we are informed that the following weights arise for an SAF, i.e., when we abstract away from the underlying arguments: \(X {\mathop {\longrightarrow }\limits ^{0.7}} A\) and \(X {\mathop {\longrightarrow }\limits ^{0.9}} B\). On the basis of just this information, it is difficult to decide which weight one should assign to implicit attacks by arguments claiming X on arguments that claim the conjunction of A and B. But under the following two assumptions it seems reasonable to follow principle \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) and correspondingly assign the maximal weight to the implicit attack on the conjunctive claim: (1) The conjunction is understood in the strong sense, meaning that the degree of truth of the conjunction is, in general, strictly smaller than the degree of truth of each conjunct. (2) The (unknown) arguments that are represented in the abstraction as \(X {\mathop {\longrightarrow }\limits ^{0.7}} A\) and \(X {\mathop {\longrightarrow }\limits ^{0.9}} B\), respectively, are independent and therefore mutually reinforce each other. In other words, \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) yields \( X {\mathop {\longrightarrow }\limits ^{1}} A \, \& \,B\), since we assume that we have independent arguments against A and B, respectively, where the sum of the weights of these arguments is at least as high as the maximal value for individual weights.
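Spelled out, this instance of \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) amounts to the simple computation

$$\begin{aligned} z = \min (1, 0.7+0.9) = \min (1, 1.6) = 1. \end{aligned}$$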

The case for product logic \(\mathsf{P}\) seems to be more subtle than the one for \(\mathsf{\L }\). To assist the reader, we restate the corresponding attack principle for strong conjunction:

\( {\mathbf{(P}^w\mathbf{.}\, \& \,\mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{x}} A\), \(F{\mathop {\longrightarrow }\limits ^{y}} B\), and \( F{\mathop {\longrightarrow }\limits ^{z}}A \, \& \,B\), then \(z= x+y-xy\).

The crucial expression \(x+y-xy\) is not only the co-t-norm of the product t-norm, but is also known as the probabilistic sum, which hints at a suitable interpretation. To this end, we suggest identifying the weight of an attack on claim A by an argument with claim F with the conditional probability \(p(\overline{A}|F)\), i.e., with the probability that A does not hold, given that F holds (\(\overline{A}\) denotes the event that is complementary to that corresponding to proposition A). Arguably, this amounts to an intuitively sound interpretation of argument strength, or more appropriately: attack strength (Footnote 3). Similarly to the case of Łukasiewicz logic, let us assume that A and B correspond to two events that are independent conditionally on F. We then get \(p(\overline{A \wedge B}|F)=x+y-xy\) if \(x=p(\overline{A}|F)\) and \(y=p(\overline{B}|F)\), where F corresponds to any non-empty event.
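For the record, the computation behind this claim runs as follows, using the assumed conditional independence of A and B given F:

$$\begin{aligned} p(\overline{A \wedge B}\mid F) = 1 - p(A \wedge B\mid F) = 1 - p(A\mid F)\cdot p(B\mid F) = 1 - (1-x)(1-y) = x+y-xy. \end{aligned}$$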

Note that the above scenario does not directly support the interpretation of arbitrarily nested logically compound statements, since A, B, F refer to classical events (and moreover the event F has to be non-empty). The scenario, however, suggests the use of a two-tiered language: (1) at the inner level, formulas are built up from atomic formulas using the classical connectives \(\wedge \), \(\vee \), \(\lnot \), intended to denote events; (2) at the outer level, one may combine classical formulas using connectives from product logic. The intended meaning of formulas combined by strong (product) conjunction is then given via \( {\mathbf{(P}^w\mathbf{.}\, \& \,\mathbf{)}}\), interpreted as suggested. One might want to explore generalizations of this setting using fuzzy events (Yager 1982) and more general combinations of inner and outer language levels along the lines of Hájek et al. (1995) or Godo et al. (2003).

So far, we have only addressed the interpretation of (strong) conjunction. The corresponding principles for implication are uniquely determined if we stipulate that the truth function for implication is the residuum of the truth function for strong conjunction. In our context we can enforce residuation by the following attack principle.

\( {\mathbf{(R}^w\mathbf{.}{\supset }/{ \& }{} \mathbf{)}}\) :

\( F{\mathop {\longrightarrow }\limits ^{x}} (A \, \& \,B) \supset C\) if and only if \( F{\mathop {\longrightarrow }\limits ^{x}} A \supset (B \supset C)\)

In the presence of \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), \( {\mathbf{(R}^w\mathbf{.}{\supset }/{ \& }{} \mathbf{)}}\) ensures that \( {\mathbf{(\L }^w\mathbf{.}\, \& \,\mathbf{)}}\) entails \({\mathbf{(\L }^w\mathbf{.}\supset \mathbf{)}}\) and, likewise, that \( {\mathbf{(P}^w\mathbf{.}\, \& \,\mathbf{)}}\) entails \({\mathbf{(P}^w\mathbf{.}\supset \mathbf{)}}\).
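The residuation (import/export) law behind \( {\mathbf{(R}^w\mathbf{.}{\supset }/{ \& }{} \mathbf{)}}\), i.e., that \((A \, \& \,B) \supset C\) and \(A \supset (B \supset C)\) always receive the same degree of truth, can be confirmed numerically for both logics. The following sketch is our own illustration, with function names chosen ad hoc:

```python
import random

# Check on random samples that (A & B) > C and A > (B > C) get the same
# truth value under both the Lukasiewicz and the product semantics.
def check(strong_and, impl):
    for _ in range(10_000):
        a, b, c = random.random(), random.random(), random.random()
        lhs = impl(strong_and(a, b), c)   # (A & B) > C
        rhs = impl(a, impl(b, c))         # A > (B > C)
        assert abs(lhs - rhs) < 1e-9

check(lambda a, b: max(0.0, a + b - 1.0), lambda a, b: min(1.0, 1.0 - a + b))  # Lukasiewicz
check(lambda a, b: a * b, lambda a, b: 1.0 if a <= b else b / a)               # product
print("exportation/residuation confirmed on all samples")
```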

Once implication is fixed, all other connectives—negation, weak (lattice) conjunction and disjunction, but also strong disjunction, the dual of strong conjunction—are uniquely defined as well. It should be obvious by now how corresponding attack principles can be formulated.

8 An analysis of prelinearity

Recall that by Proposition 1 of Sect. 4 the formula \((F \supset G) \vee (G \supset F)\) (prelinearity) is {\({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\), \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\)}-immune. Given the centrality of prelinearity for t-norm-based fuzzy logics, it may be useful to emphasize that only two rather reasonable principles on implicit attacks are needed to render this axiom argumentatively immune.

  1.

    Corresponding to \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\): An implication is attacked with some positive weight only if the implying formula is attacked with less weight than the implied formula.

  2.

    Regarding \({\mathbf{(A}^w\mathbf{.}\vee \mathbf{)}}\): The proof of Proposition 1 shows that actually only a weak form of this principle is needed. Namely, if a claim F is not attacked at all, then neither is any (logically weaker) claim of the form \(F\vee G\). Equivalently: any positive attack on a disjunction entails positive attacks on both disjuncts.

These observations are certainly encouraging from the perspective of fuzzy logic, since they seem to indicate that rather mild conditions on implicit attacks already single out, as possible ‘logics of weighted argumentation’ (in our current sense), those logics that satisfy an axiom that can be considered a hallmark of all deductive fuzzy logics. (See, e.g., Běhounek and Cintula (2006) for a general characterization of fuzzy logics that focuses on prelinearity.) However, it is important to remember that prelinearity can also be expressed in a purely implicative form. In particular, the standard proof systems for Hájek’s \(\mathsf{BL}\), the logic of all continuous t-norms (Hájek 1998), feature the following version of the axiom:

$$\begin{aligned} \text{(PreLin)}\qquad ((F \supset G) \supset H) \supset (((G \supset F) \supset H)\supset H) \end{aligned}$$

Likewise, PreLin, rather than \((F \supset G) \vee (G \supset F)\), is among the axioms of \(\mathsf{MTL}\), the logic of all left-continuous t-norms (Esteva and Godo 2001). Therefore it is important to take note of the following fact.

Proposition 3

PreLin is not \(\mathcal{P}_B\) argumentatively immune.

Proof

Clearly, only the principles \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\), and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) of \(\mathcal{P}_B\) are relevant. Recall that \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\) and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) jointly express that an implication is attacked with a nonzero weight if and only if the implied formula is attacked with a higher weight than the implying formula. \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) bounds the weight of an attack on an implication by the weight of an attack on the implied formula. It is therefore straightforward to check that all three principles are satisfied, if, in a given WSAF, for an arbitrary (claim of an) argument X, the weights of corresponding attacks on the subformulas of PreLin are as follows:

$$\begin{aligned} \begin{array}{l} X {\mathop {\longrightarrow }\limits ^{0}} F,\\ X {\mathop {\longrightarrow }\limits ^{1}} G,\\ X {\mathop {\longrightarrow }\limits ^{1}} H,\\ X{\mathop {\longrightarrow }\limits ^{0.5}} F \supset G , \\ X{\mathop {\longrightarrow }\limits ^{0.5}} (F \supset G) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{0}} G \supset F, \\ X{\mathop {\longrightarrow }\limits ^{0.5}} (G\supset F) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{1}} ((G \supset F) \supset H)\supset H, \\ X{\mathop {\longrightarrow }\limits ^{1}} ((F \supset G) \supset H) \supset (((G \supset F) \supset H)\supset H). \\ \end{array} \end{aligned}$$

Since PreLin is attacked with weight 1, it is not \(\mathcal{P}_B\) argumentatively immune. \(\square \)
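The check is mechanical. In the following sketch (our own encoding, with \(\supset \) written as \(\texttt {>}\)), the three principles are coded exactly as paraphrased above and applied to each implicational subformula of PreLin together with PreLin itself:

```python
# (p, q, r) = (attack on the implying part, on the implied part, on the implication)
def ok(p, q, r):
    if q > p and not r > 0:   # (A.>): implied part attacked more heavily => positive attack
        return False
    if r > 0 and not p < q:   # (C.>): positive attack => implying part attacked less heavily
        return False
    return r <= q             # (B.>): attack bounded by the attack on the implied part

w = {"F": 0, "G": 1, "H": 1, "F>G": 0.5, "(F>G)>H": 0.5, "G>F": 0,
     "(G>F)>H": 0.5, "((G>F)>H)>H": 1, "PreLin": 1}
implications = [("F", "G", "F>G"), ("G", "F", "G>F"), ("F>G", "H", "(F>G)>H"),
                ("G>F", "H", "(G>F)>H"), ("(G>F)>H", "H", "((G>F)>H)>H"),
                ("(F>G)>H", "((G>F)>H)>H", "PreLin")]
assert all(ok(w[p], w[q], w[r]) for p, q, r in implications)
print("the displayed assignment satisfies (A.>), (B.>), and (C.>)")
```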

Let us make two observations about the assignment of weights to attacks used in the above proof. (1) Although the weights of attacks on F, G, and H are in \(\{0,1\}\), some implications involving only these subformulas are attacked with the intermediate weight 0.5. (2) Although the respective weights of attacks on the immediate subformulas of \((F \supset G) \supset H\) and of \(((G \supset F) \supset H)\supset H\) are identical (0.5 for the implying formula, and 1 for the implied formula), these formulas are attacked with different weights. This motivates the following definitions and a further observation.

Definition 8

A WSAF is compatible with the unweighted case if the weight of an attack on any formula whose subformulas are attacked with weights in \(\{0,1\}\) is also either 0 or 1.

Definition 9

A WSAF has a functional weight assignment if, for each logical connective, the weight of an attack on a compound formula only depends on the weights of attacks on its immediate subformulas.

Proposition 3 can be strengthened as follows.

Proposition 4

PreLin is not \(\mathcal{P}_B\) argumentatively immune, even if only WSAFs with functional weight assignments that are compatible with the unweighted case are considered.

Proof

It is straightforward to check that the following weight assignment is functional, compatible with the unweighted case, and still satisfies \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\), and \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\).

$$\begin{aligned} \begin{array}{l} X {\mathop {\longrightarrow }\limits ^{0.3}} F,\\ X {\mathop {\longrightarrow }\limits ^{0.6}} G,\\ X {\mathop {\longrightarrow }\limits ^{0.9}} H,\\ X{\mathop {\longrightarrow }\limits ^{0.4}} F \supset G , \\ X{\mathop {\longrightarrow }\limits ^{0.5}} (F \supset G) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{0}} G \supset F, \\ X{\mathop {\longrightarrow }\limits ^{0.7}} (G \supset F) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{0.8}} ((G \supset F) \supset H)\supset H, \\ X{\mathop {\longrightarrow }\limits ^{1}} ((F \supset G) \supset H) \supset (((G \supset F) \supset H)\supset H). \end{array} \end{aligned}$$

Since PreLin is attacked with weight 1, it is not \(\mathcal{P}_B\) argumentatively immune. \(\square \)

The question arises which further principles guarantee the argumentative immunity of PreLin. Of course, since \(\mathsf{G}\), \(\mathsf{\L }\), and \(\mathsf{P}\) are argumentatively sound, we know that each of \({\mathbf{(G}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(\L }^w\mathbf{.}\supset \mathbf{)}}\), and \({\mathbf{(P}^w\mathbf{.}\supset \mathbf{)}}\), separately, but in conjunction with \(\mathcal{P}_B\), suffices to render PreLin argumentatively immune with respect to the corresponding, pairwise incompatible, sets of attack principles. Motivated by the search for a general, not logic-specific, principle that suffices to justify PreLin, we suggest the following.

\({\mathbf{(D}^w\mathbf{.}\supset \mathbf{)}}\) :

If \(F{\mathop {\longrightarrow }\limits ^{0}} A\), \(F{\mathop {\longrightarrow }\limits ^{x}} B\) and \(F{\mathop {\longrightarrow }\limits ^{y}} A\supset B\), then \(y\ge x\).

If the implying formula is not attacked at all, then the implication is attacked with at least the same weight as the implied formula.

Proposition 5

PreLin is \({\mathcal{P}_B} \cup \{\)\({\mathbf{(D}^w\mathbf{.}\supset \mathbf{)}}\)\(\}\) argumentatively immune.

Proof

For some claim X, let the weights of the corresponding attacks on the subformulas of PreLin be as follows:

$$\begin{aligned} \begin{array}{l} X {\mathop {\longrightarrow }\limits ^{f}} F,\\ X {\mathop {\longrightarrow }\limits ^{g}} G,\\ X {\mathop {\longrightarrow }\limits ^{h}} H,\\ X{\mathop {\longrightarrow }\limits ^{w}} F \supset G , \\ X{\mathop {\longrightarrow }\limits ^{x}} (F \supset G) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{v}} G \supset F, \\ X{\mathop {\longrightarrow }\limits ^{u}} (G \supset F) \supset H, \\ X{\mathop {\longrightarrow }\limits ^{y}} ((G \supset F) \supset H)\supset H, \\ X{\mathop {\longrightarrow }\limits ^{z}} ((F \supset G) \supset H) \supset (((G \supset F) \supset H)\supset H). \end{array} \end{aligned}$$

Assume that \(z>0\); then \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) entails \(x<y\), hence \(y>0\), and a further application of \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) yields \(u<h\).

We now distinguish two cases.

\(g\ge f\)::

By \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we have \(v=0\), and can therefore apply \({\mathbf{(D}^w\mathbf{.}\supset \mathbf{)}}\) to obtain \(u\ge h\), which contradicts \(u<h\).

\(g<f\)::

By \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\) we have \(w=0\), and therefore, by \({\mathbf{(D}^w\mathbf{.}\supset \mathbf{)}}\), \(h\le x\). On the other hand, applying \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\) to the right subformula of PreLin yields \(y\le h\). Since \(x<y\), we obtain \(x<h\), contradicting \(h\le x\).

These contradictions imply that PreLin cannot be attacked with positive weight. \(\square \)
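Readers who prefer a mechanical cross-check can replay this argument by brute force. The sketch below is our own encoding (only the implicational principles matter for PreLin, paraphrased as in the proofs above): it searches a finite grid of weights for an assignment that satisfies \({\mathbf{(A}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(B}^w\mathbf{.}\supset \mathbf{)}}\), \({\mathbf{(C}^w\mathbf{.}\supset \mathbf{)}}\), and \({\mathbf{(D}^w\mathbf{.}\supset \mathbf{)}}\) on all subformulas and still attacks PreLin with positive weight, and it finds none.

```python
from itertools import product

# (p, q, r) = (attack on the implying part, on the implied part, on the implication)
def ok(p, q, r):
    if q > p and not r > 0:    # (A.>)
        return False
    if r > 0 and not p < q:    # (C.>)
        return False
    if not r <= q:             # (B.>)
        return False
    if p == 0 and not r >= q:  # (D.>)
        return False
    return True

GRID = [i / 4 for i in range(5)]   # candidate weights 0, 0.25, 0.5, 0.75, 1

# f, g, h: attacks on F, G, H;  w, v: on F>G, G>F;  x, u: on (F>G)>H, (G>F)>H;
# y: on ((G>F)>H)>H;  z: on PreLin itself
counterexamples = [
    (f, g, h, w, v, x, u, y, z)
    for f, g, h, w, v, x, u, y, z in product(GRID, repeat=9)
    if ok(f, g, w) and ok(g, f, v) and ok(w, h, x) and ok(v, h, u)
    and ok(u, h, y) and ok(x, y, z) and z > 0
]
print(counterexamples)   # prints []: no positive attack on PreLin is possible
```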

9 Remarks on related literature

To the best of our knowledge, the idea of exploring principles that constrain the weights of (implicit) attacks on logically complex claims in terms of the strength of attacks on corresponding subformulas is new. These logical attack principles for weighted argumentation frames generalize those introduced in Corsi and Fermüller (2017) for unweighted frames, intended to capture some plausible intuitions about implicit attacks that result from considering straightforward logical connections between attacked claims. We emphasize that this approach does not seek to improve argumentation-based reasoning per se, but rather is motivated by the problem of characterizing fuzzy logics in terms of graded concepts that do not simply take the notion of degrees of truth for granted. The challenge here is to derive truth functions for logical connectives from specific rationality principles, rather than to impose them directly. From this perspective our approach can be classified as an alternative to other attempts to derive fuzzy logics from various frames of interpretation, like voting semantics (Lawry 1998), acceptability semantics (Paris 1997), re-randomising semantics (Hisdal 1988), and in particular Giles’s game-based semantics for Łukasiewicz logic (Giles 1977, 1982).

From an argumentation perspective, quite different lines of literature appear to be related. As already mentioned in the introduction, various generalizations of ordinary argumentation frames to graded versions have been suggested, see, e.g., Coste-Marquis et al. (2012), Krause et al. (1995) and Matt and Toni (2008). In particular, an impressive group of experts joined forces to investigate weighted argument systems in Dunne et al. (2011). While Dunne et al. (2011) mainly focuses on computational aspects, different possible interpretations of weights of attacks are also discussed there. Moreover, the authors of Dunne et al. (2011) forcefully argue that weights on attacks should be considered as a primitive notion from which one can derive weights for (claims of) arguments, rather than the other way round. We have adopted this view in our own approach.

Janssen et al. (2008) introduced fuzzy argumentation frameworks to model the relative strength of attacks. They generalize Dung-style extensions from sets to fuzzy sets of arguments and establish a connection to fuzzy answer set programming. A more elaborate formalization of argumentative reasoning based on fuzzy logic is presented by Alsinet et al. (2008). The authors introduce \(\hbox {PGL}^+\), a possibilistic logic over Gödel logic, extended with fuzzy constants. \(\hbox {PGL}^+\) is then incorporated in a possibilistic defeasible logic programming language, intended to support argumentative reasoning in the presence of imprecise (fuzzy) information. More recently, Budán et al. (2017) suggested adding meta-level information to arguments using labels referring to fuzzy evaluations. These labels are propagated through an argumentative graph according to the relations of support, conflict and aggregation between arguments.

Finally, in light of Sect. 7, it should be mentioned that several papers investigate probabilistic versions of Dung’s argumentation frameworks. In particular, Li et al. (2012) introduced probabilistic argumentation frameworks that attach degrees of belief to arguments. A more specific use of probabilities for assumption-based argumentation in jury-based disputes is presented by Dung and Thang (2010). Hunter (2013) generalizes this concept to logic-based argumentation with uncertain arguments. However, none of the mentioned papers explicitly consider constraints like our weighted attack principles for implicit arguments.

10 Conclusion

We set out to explore the possibility of characterizing certain deductive fuzzy logics in terms of weighted argumentation frames. Both concepts refer to degrees or grades: in the first case to degrees of truth and in the latter to graded strength of attacks. As indicated in Sect. 9, different combinations of both concepts have been considered in the literature. However, the idea to connect the semantics of fuzzy logics to weighted attacks seems to be novel. Our main tools for establishing such a link are rationality principles (attack principles) that refer to the logical form of claims of attacked arguments. We introduced the notion of argumentative immunity with respect to given collections of attack principles. While some of these principles reflect general and natural desiderata concerning weights of attacks on logically compound claims, given weights of attacks on their immediate subformulas, other principles are quite specific and might well be questionable or outright inadequate for specific instances of the abstract framework. Our results reveal that not only basic principles of the first kind, but also specific principles of the latter kind are needed in order to characterize Gödel, Łukasiewicz and product logic in terms of argumentative immunity. This should neither come as a surprise, nor should it be interpreted as a largely negative result. We rather submit that our findings present specific characteristics of the various logics, as expressed by different axioms, from a new perspective, namely that of (possibly very strong and specific) rationality principles for weighted attacks between arguments. Like other semantic frameworks (Bennett et al. 2000; Giles 1982; Lawry 1998; Paris 1997, 2000; Ruspini 1991), the established connection may help to select or discard particular meanings that one can attach to ‘degrees of truth’ in given application scenarios. It remains to be explored which types of applications may benefit from the argumentation-based semantics of fuzzy logics suggested here.

As already emphasized in the introduction, we do not pretend to contribute to the practice or theory of abstract argumentation frameworks, at least not directly. We rather, conversely, ‘borrow’ some basic concepts from abstract weighted argumentation to assemble a new type of semantic framework for fuzzy logics—one that is based on attacks of graded strength, rather than on a direct assignment of degrees of truth. From the point of view of fuzzy logic, it is particularly encouraging that different versions of the central prelinearity axiom can be justified in various ways in this manner.

Our scenario calls for further research in several directions. While the focus on the three fundamental t-norm-based fuzzy logics \(\mathsf{G}\), \(\mathsf{\L }\), and \(\mathsf{P}\) seems natural for a first exploration of this new territory, our own results already indicate that one should probably consider weaker logics, like \(\mathsf{BL}\) or \(\mathsf{MTL}\), in order to identify more general and more robust links between attack principles and fuzzy logics.

The rather brief remarks in Sects. 3 and 6 regarding various possible interpretations of the attack relation and different options for constraining weights of combined attacks are only intended as first hints toward a more systematic investigation of coherent interpretations of argument strength. In particular, we recently joined forces with the cognitive scientist Niki Pfeifer to explore probabilistic interpretations with respect to concrete data from experimental psychology. A first assessment along this line can be found in Pfeifer and Fermüller (2018).

Also our investigation of prelinearity in Sect. 8 (and partly already in Sect. 4) is by no means definitive, but should rather be seen as a starting point for further work. For example, various principles regarding the monotonicity of attack weights with respect to weights of attacks on implied and implying subformulas come to mind as candidates for further attack principles that justify prelinearity. Moreover, also other characteristic properties and corresponding axioms of t-norm-based fuzzy logics, like residuation, seem worth exploring.

Finally, we recall that validity for important t-norm-based fuzzy logics, in particular for the three logics investigated here, is co-NP-complete. This means that corresponding forms of argumentative immunity can, presumably, be checked much more efficiently than semantic properties that naturally appear in the context of non-monotonic reasoning (Gottlob 1992). This might render such checks attractive as a kind of coherence check for argumentative claims with respect to logically implicit attacks.