
A probabilistic analysis of argument cogency

Published in: Synthese

Abstract

This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency (RSA). Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: (1) change in the commitment to the reason, (2) the reason’s sensitivity and selectivity to the claim, (3) one’s prior commitment to the claim, and (4) the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual (in)dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms; they also show how these formal and informal normative approaches can in fact align.


Notes

  1. Similarly important are the notions of argument strengthening, rebuttal, and counter-rebuttal, which however fall outside the scope of this paper.

  2. Recent work nevertheless applies such formal methods as computer models of defeasible inference to reasoning and argument (e.g., Walton and Gordon 2015).

  3. Following the introduction of the RSA criteria by Johnson and Blair in 1977 (Johnson and Blair 2006, p. 55), many informalists have adopted, modified, or augmented them (see Johnson and Blair 2002, p. 370). For instance, Govier (2010, p. 87ff.) calls sufficiency good grounds; Johnson (2000, p. 189ff.) added premise truth as a fourth criterion, situating this together with adequacy at the “dialectical tier;” Vorobej (2006, p. 49ff.) replaced acceptability with truth and added compactness as a fourth criterion to stipulate the absence of irrelevant premises.

  4. Spohn’s own ranking theory (Spohn 2012) also qualifies as an inductive logic. Pursuing a Baconian approach to probability, his theory is more general than the Pascalian approach we rely on. (For these terms, see Cohen 1989.) Ranking theory models the differential retractability of full rather than graded propositions, interpreted as belief-contents. By contrast, we speak of graded commitments to reasons or claims.

  5. The more common interpretation—‘the probability of a hypothesis, H, given evidence, E’—reflects the role of the probability calculus for the empirical sciences when gauging the (dis-)confirmatory effect of evidence on hypotheses, calculation of which relies on Bayes’ theorem. Several differences arise in contexts of defeasible inference and argumentation: First, scientific hypotheses typically have a predictive or explanatory relationship to the evidence. Second, the evidence here typically accumulates through independent instances (e.g., observations or test results), making the reliability of evidence expressible as long-run frequencies. Neither feature need hold between a claim and the reasons offered in its support. Finally, as Strevens (Strevens 2012, p. 23; notation adapted) writes: “a Bayesian conditionalizes on [some evidence] E—that is, applies Bayes’ rule to E—just when they ‘learn that E has occurred.’ In classical Bayesianism, to learn E is to have one’s subjective probability for E go to one [i.e., a probability value of 1, denoting certainty] as the result of some kind of observation.” By contrast, reasons appearing as premises of an argument need not be certain or unretractable (see Sect. 3.8). Rather, defeasible reasoning and argument involve making judgements about the acceptability of one’s premises. Sometimes an update will occur when a reasoner comes to find their reasons are more (or less) acceptable than they did previously.

  6. As Hahn and Oaksford (2006b, p. 3) acknowledge, rule-based procedural accounts of argumentation are nonetheless required for the stronger argument (as identified) to in fact “carry through” to the discussion outcome.

  7. For mutually consistent events, the law is: \( P({A\vee B})=P(A)+P(B)-P( {A \& B} ).\)

  8. We use ‘R’ and ‘the / a reason’ to indicate the conjunction of all the articulated premises of an argument or piece of reasoning.

  9. (3) says that the probability of event A occurring, given that some other event B does, equals the probability of both events occurring, divided by the probability that event B occurs anyway. Since the probability of any two events both occurring is never greater than the probability of either event occurring individually, \( P(A \& B)\le P(B)\) holds. This guarantees a probability value in the range \(0\le P(A{\vert }B)\le 1\), so long as \(P(B)>0\). If \(P(B)=0\), then \(P(A{\vert }B)\) is undefined.
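The ratio definition can be checked numerically; here is a minimal sketch (the function name and the event probabilities are our illustrative assumptions, not values from the paper):

```python
# Conditional probability as a ratio: P(A|B) = P(A & B) / P(B), defined only when P(B) > 0.
def conditional(p_a_and_b, p_b):
    """Return P(A|B) as the ratio of P(A & B) to P(B)."""
    if p_b == 0:
        raise ValueError("P(A|B) is undefined when P(B) = 0")
    if p_a_and_b > p_b:
        raise ValueError("P(A & B) can never exceed P(B)")
    return p_a_and_b / p_b

# Illustrative values: since P(A & B) <= P(B), the result always lies in [0, 1].
print(round(conditional(0.12, 0.40), 4))  # 0.3
```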

  10. In cases where A and B are independent, such that there is no systematic positive or negative correlation between them, it holds that: \(P(A{\vert }B)=P(A)\) and \(P(B{\vert }A)=P(B)\).

  11. (8) states that the posterior probability of C given R is the prior probability of C times the likelihood of the reason given the claim, P(R|C), over the probability of the reason, P(R). (The notion of likelihood is introduced in Sect. 3.7.)

  12. Joyce (2009, p. 5) notes that Carnap (1962, p. 466) identified i as the relevance quotient, or the probability ratio; Strevens (2012, p. 30) calls i the Bayes multiplier.

  13. (11) states the chance of event A occurring as the chance that A occurs given another event B does, times the chance that B occurs, plus the chance that A occurs given B fails to occur, times the chance that B fails to occur. The law presupposes conditionalization on exhaustive alternatives, which any claim B and its negation \({\sim } B\) of course are.

  14. (12) says that the posterior probability of a claim C given a reason R is the prior probability of C times the likelihood of R given C, divided by the sum of the likelihood of R given C times the probability of C, and the likelihood of R given not C times the probability of not C.
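The expansion in (12) can be verified directly; a sketch in Python (the function name is ours; the numerical values are those of Example 1’s first update, per the appendix):

```python
# Bayes' theorem with the denominator expanded by the law of total probability:
# P(C|R) = P(R|C) P(C) / [P(R|C) P(C) + P(R|~C) P(~C)]   (Eqs. 11 and 12)
def posterior(prior_c, lik_r_given_c, lik_r_given_not_c):
    p_r = lik_r_given_c * prior_c + lik_r_given_not_c * (1 - prior_c)  # Eq. (11)
    return lik_r_given_c * prior_c / p_r

# With the values of Example 1 (P(C) = 0.17, P(R|C) = 0.25, P(R|~C) = 0.15):
print(round(posterior(0.17, 0.25, 0.15), 4))  # 0.2545, matching the appendix
```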

  15. While Eq. (2) was a consequence of the logical truth \((A\vee {\sim } A)\), these two constraints are consequences of the metaphysical truth that, given event B occurs, any other event A will either occur, or not.

  16. Notice that, since \(P(R{\vert }{\sim } C)=1-P({\sim } R{\vert } {\sim } C)\), rather than using ‘the logical complement of specificity’ in order to refer to \(P(R{\vert }{\sim } C)\), we use the term ‘specificity’ alone.

  17. (14) says that the final commitment in C, updated on a partial commitment in R, is the commitment in C given R, times the final commitment in R, plus the commitment in C conditional on \({\sim } R\), times the final commitment in \({\sim } R\). When conditionalizing on a partial commitment in order to update, JC thus recognizes that a partial commitment in R comes with a complementary partial commitment in \({\sim } R\), the two summing to one.
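Jeffrey conditionalization as stated in (14) can be sketched as follows (the function name and the conditional commitments are illustrative assumptions, not figures from the paper):

```python
# Eq. (14), Jeffrey conditionalization: update C on a *partial* final commitment
# P_f(R), weighting P(C|R) and P(C|~R) by P_f(R) and P_f(~R) = 1 - P_f(R).
def jeffrey_update(p_c_given_r, p_c_given_not_r, p_f_r):
    return p_c_given_r * p_f_r + p_c_given_not_r * (1 - p_f_r)

# With full commitment, P_f(R) = 1, JC reduces to strict conditionalization:
print(jeffrey_update(0.8, 0.3, 1.0))            # 0.8
# With a partial commitment, the result lies between P(C|~R) and P(C|R):
print(round(jeffrey_update(0.8, 0.3, 0.6), 4))  # 0.6
```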

  18. (17) states that one’s final commitment in C conditionalized on one’s final commitment in R, where \(0<P_{f}(R)<1\), is one’s initial commitment in C, times the sum of the likelihoods that R given C, multiplied by the ratio of one’s final commitment to the reason over one’s initial commitment to it, and the likelihood of \({\sim } R\) given C, multiplied by the ratio of one’s final commitment that \({\sim } R\) over one’s initial commitment to it.

  19. Granted that sufficiency presupposes relevance, reasons, by contrast, can be relevant without being sufficient. Indeed, distinct relevance-based failures of arguments are identified by the fallacies of relevance (Johnson and Blair 2002, p. 370). Johnson (2000, p. 200) claims that the notion of relevance is itself “basic” and “a ground-floor notion that a reasoner must grasp.” See Powers (1995) for the view that all fallacies are nothing but relevance problems. Zenker (2016) provides an overview of what currently does (not) count as a fallacy.

  20. In cases of irrelevance, both the impact term and the likelihood ratio, \(P(R{\vert }C)/P(R{\vert }{\sim } C)\), equal 1 (see Korb 2004, p. 44; Hahn and Hornikx 2016, p. 1838). Where \(i=1\), it follows that \(P(R{\vert }C)=P(R)\); so, by the law of total probability (Eq. 11), \(P(R{\vert }C)=P(R{\vert }{\sim } C)\).
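This equivalence is easy to verify numerically; a small sketch (the prior values are arbitrary assumptions; equal likelihoods encode irrelevance):

```python
# Impact term i = P(R|C)/P(R), with P(R) given by the law of total probability (Eq. 11).
def impact(prior_c, lik_r_given_c, lik_r_given_not_c):
    p_r = lik_r_given_c * prior_c + lik_r_given_not_c * (1 - prior_c)
    return lik_r_given_c / p_r

# With equal likelihoods (likelihood ratio = 1), i = 1 regardless of the prior,
# so conditioning on R leaves P(C) unchanged:
for prior in (0.1, 0.17, 0.5, 0.9):
    print(round(impact(prior, 0.2, 0.2), 4))  # 1.0 each time
```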

  21. (24) says that the final probability of C is the posterior probability of C given R, which meets or exceeds the threshold, \(t_{s}\), and exceeds the prior probability of C.

  22. After all, if R entails C, then \( P(C \& R)=P(R)\), and given the definition of conditional probability (Eq. 3), we have it that \(P(C{\vert }R)=1\).

  23. For all calculations in Example 1, see the “Appendix”.

  24. As an anonymous reviewer has rightly pointed out, an informal logician might respond that a set of individually weakly supporting reasons-for-C can, when taken together, provide sufficient convergent support—i.e., individually insufficient but jointly sufficient reasons—to the claim. This might be thought to provide an answer to the challenge to a threshold approach to sufficiency posed by Example 1.

    Yet, on standard accounts, this requires (somehow) taking the independent reasons together all at once, rather than separately in succession. Example 1 shows that, on a probabilistic understanding of inferential sufficiency, this combining of reasons is not required. Rather, the probabilistic calculus offers a formal understanding of how sufficient support can incrementally accrue via a succession of individually insufficient arguments, and so obviously contradicts the “weakest link-principle,” also known as Theophrastus’ rule (see Hahn and Oaksford 2006a, b, p. 15).

    Moreover, since we operationalize inferential sufficiency as an acceptability threshold on the probability of the claim conditional on a reason, \(P( C{\vert }R)\), further issues arise with a similar threshold approach to acceptability that result from interpreting a threshold application of inferential sufficiency as an inference gate (see Examples 2 and 3; Sect. 4.5). For instance, suppose that the claim in Example 1, now abbreviated as \(C_{1}\), does itself provide a reason for another claim, \(C_{2}\). As Example 3 will show, even a small sub-threshold change in the acceptability of \(C_{1}\) could push \(C_{2}\) above a sufficiency threshold when \(C_{2}\) is subsequently conditionalized on \(C_{1}\). So rather than immediately update on each individually insufficient reason the moment the reason “comes in,” to instead collect several weakly supporting reasons-for-\(C_{1}\)—as if holding these in memory until they (somehow) jointly meet a sufficiency threshold—could preclude a commitment-update in \(C_{2}\) under some conditions, although a sufficiency threshold would have been met if the acceptability of \(C_{1}\) had been updated earlier.

    Future work should provide a probabilistic analysis of the linked vs. convergent support-distinction in informal logic, which we cannot provide here. One possible result pertains to the identity conditions of reasons and premises. For some natural language material may well be discernible as a distinct premise but not count as a distinct reason, namely if offering (or receiving) that premise to support the claim C would, by itself, fail to render \(P(C{\vert }R)> P(C)\). The inferential effect of R on P(C), or the lack thereof, could thus become a criterion for a premise to in fact act as a reason. Hence, the identity conditions of reasons and premises in the probabilistic and the informal logic approach might be the same, while their functional properties could diverge.

  25. The acceptability of a set of reasons is some function of the acceptability of constituent reasons. Those considering the acceptability of individual and conjoint reasons may find that different acceptability standards are in effect (e.g., a higher threshold for the deliverances of reason and sensation than for memory or testimony).

References

  • Bayes, T. (1763/1958). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53, 370–418. Reprinted in Biometrika, 45, 296–315.

  • Blair, J. A. (2011). Informal logic and its early historical development. Studies in Logic, 4, 1–16.


  • Blair, J. A. (2012). Relevance, acceptability and sufficiency today. In C. Tindale (Ed.), Groundwork in the theory of argumentation (pp. 87–100). Dordrecht: Springer.

  • Blair, J. A., & Johnson, R. (1987). The current state of informal logic. Informal Logic, 9, 147–151.


  • Bradley, S. (2015). Imprecise probabilities. In E. N. Zalta et al. (Eds.), Stanford encyclopedia of philosophy, 2015 edition. Stanford, CA: Center for Study of Language and Information. http://plato.stanford.edu/archives/sum2015/entries/imprecise-probabilities/

  • Cohen, J. L. (1989). An introduction to the philosophy of induction and probability. Oxford: Oxford UP.


  • Carnap, R. (1962). The logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.


  • Corner, A., & Hahn, U. (2013). Normative theories of argumentation: Are some norms better than others? Synthese, 190, 3579–3610.


  • Corner, A., Hahn, U., & Oaksford, M. (2006). The slippery slope argument: Probability, utility and category reappraisal. In R. Sun (Ed.), Proceedings of the 28th annual meeting of the cognitive science society (pp. 1145–1150). Mahwah, NJ: Erlbaum.

  • Cox, R. T. (1946). Probability, frequency and reasonable expectation. American Journal of Physics, 14, 1–10.


  • Cox, R. T. (1961). The algebra of probable inference. Baltimore, MD: Johns Hopkins UP.


  • Douven, I., & Schupbach, J. N. (2015). The role of explanatory considerations in updating. Cognition, 142, 299–311.


  • Evans, J. S. T. B. (2002). Logic and human reasoning: An assessment of the deduction paradigm. Psychological Bulletin, 128, 978–996.


  • Fitelson, B. (2001). Studies in Bayesian confirmation theory. Dissertation, University of Wisconsin at Madison. http://fitelson.org/thesis.pdf.

  • Godden, D. (2010). The importance of belief in argumentation: Belief, commitment and the effective resolution of a difference of opinion. Synthese, 172, 397–414.


  • Godden, D., & Walton, D. (2007). Advances in the theory of argumentation schemes and critical questions. Informal Logic, 27, 267–292.


  • Godden, D., & Zenker, F. (2015). Denying antecedents and affirming consequents: The state of the art. Informal Logic, 35, 88–134.


  • Govier, T. (2010). A practical study of argument (7th ed.). Belmont, CA: Wadsworth, Cengage Learning.


  • Hahn, U. (2014). The Bayesian boom: Good thing or bad? Frontiers in Psychology, 5, 765. doi:10.3389/fpsyg.2014.00765.


  • Hahn, U., & Hornikx, J. (2016). A normative framework for argument quality: Argumentation schemes with a Bayesian foundation. Synthese, 193, 1833–1873.


  • Hahn, U., & Oaksford, M. (2006a). A Bayesian approach to informal argument fallacies. Synthese, 152, 207–236.


  • Hahn, U., & Oaksford, M. (2006b). A normative theory of argument strength. Informal Logic, 26, 1–24.


  • Hahn, U., & Oaksford, M. (2007). The rationality of informal argumentation: A Bayesian approach to reasoning fallacies. Psychological Review, 114, 704–732.


  • Hahn, U., Oaksford, M., & Bayindir, H. (2005). How convinced should we be by negative evidence? In B. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th annual conference of the cognitive science society (pp. 887–892). Mahwah, NJ: Lawrence Erlbaum Associates.

  • Hahn, U., Oaksford, M., & Corner, A. (2005). Circular arguments, begging the question and the formalization of argument strength. In A. Russell, T. Honkela, K. Lagus, & M. Pöllä (Eds.), Proceedings of AMKLC’05, International symposium on adaptive models of knowledge, language and cognition (pp. 34–40). Espoo: Helsinki University of Technology.

  • Hajek, A. (2008). Dutch book arguments. In P. Anand, P. Pattanaik, & C. Puppe (Eds.), The Oxford handbook of rational and social choice (pp. 173–195). Oxford, UK: Oxford University Press.

  • Hamblin, C. (1970). Fallacies. London: Methuen.


  • Harris, A. J. L., Hahn, U., Madsen, J. K., & Hsu, A. (2015). The appeal to expert opinion: Quantitative support for a Bayesian network approach. Cognitive Science. doi:10.1111/cogs.12276.

  • Hertwig, R., Ortmann, A., & Gigerenzer, G. (1997). Deductive competence: A desert devoid of content and context. Current Psychology of Cognition, 16, 102–107.


  • Howson, C., & Urbach, P. (2006). Scientific reasoning: The Bayesian approach (3rd ed.). La Salle, IL: Open Court.


  • Ikuenobe, P. (2004). On the theoretical unification and nature of the fallacies. Argumentation, 18, 189–211.


  • Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago Press.


  • Johnson, R. (2000). Manifest rationality: A pragmatic theory of argument. Mahwah, NJ: Lawrence Erlbaum.


  • Johnson, R. H. (2006). Making sense of informal logic. Informal Logic, 26, 231–258.


  • Johnson, R. (2011). Informal logic and deductivism. Studies in Logic, 4, 17–37.


  • Johnson, R., & Blair, J. A. (2002). Informal logic and the reconfiguration of logic. In D. Gabbay, R. Johnson, H. Ohlbach, & J. Woods (Eds.), Handbook of the logic of argument and inference: Turn towards the practical (pp. 340–396). Amsterdam: Elsevier.


  • Johnson, R., & Blair, J. A. (2006). Logical self defense (3rd ed.). New York: International Debate Education Association (First edition 1977, Toronto: McGraw-Hill Ryerson).

  • Joyce, J. (2009). Bayes’ theorem. In E. N. Zalta et al. (Eds.), Stanford encyclopedia of philosophy, 2009 edition (pp. 1–47). Stanford, CA: Center for Study of Language and Information. http://plato.stanford.edu/archives/spr2009/entries/bayes-theorem/

  • Kolmogorov, A. N. (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung. Ergebnisse der Mathematik. Berlin: Springer (translated as: (1950). Foundations of the theory of probability. New York: Chelsea Publishing Company).

  • Korb, K. (2004). Bayesian informal logic and fallacy. Informal Logic, 24, 41–70.


  • Oaksford, M., & Hahn, U. (2004). A Bayesian approach to the argument from ignorance. Canadian Journal for Experimental Psychology, 58, 75–85.


  • Pfeifer, N. (2013). On argument strength. In F. Zenker (Ed.), Bayesian argumentation: The practical side of probability (pp. 185–193). Dordrecht: Springer.


  • Pinto, R. C. (2001). Argument, inference and dialectic. Dordrecht: Kluwer.


  • Powers, L. H. (1995). The one fallacy theory. Informal Logic, 17(2), 303–314.


  • Ramsey, F. (1926/1931). Truth and probability. In R. Braithwaite (Ed.), The foundations of mathematics and other essays (pp. 156–198). London: Routledge & Kegan Paul.

  • Spohn, W. (2012). The laws of belief: Ranking functions and their applications. Oxford: Oxford UP.


  • Strevens, M. (2012). Notes on Bayesian confirmation theory. http://www.strevens.org/bct/

  • Talbott, W. (2011). Bayesian epistemology. In E. N. Zalta et al. (Eds.), Stanford encyclopedia of philosophy, 2011 edition (pp. 1–34). Stanford, CA: Center for Study of Language and Information. http://plato.stanford.edu/archives/sum2011/entries/epistemology-bayesian/

  • van Eemeren, F. H., Garssen, B., Krabbe, E. C. W., Snoeck Henkemans, A. F., Verheij, B., & Wegemans, J. (2014). Handbook of argumentation theory. Dordrecht: Springer.


  • Vorobej, M. (2006). A theory of argument. New York: Cambridge UP.


  • Walton, D., & Gordon, T. (2015). Formalizing informal logic. Informal Logic, 35, 508–538.


  • Walton, D., Reed, C., & Macagno, F. (2008). Argumentation schemes. Cambridge: Cambridge UP.


  • Woods, J. (2000). How philosophical is informal logic? Informal Logic, 20, 139–167.


  • Woods, J., & Walton, D. (2007). Fallacies: Selected papers 1972–1982. London: College Publications.


  • Zenker, F. (2013). Bayesian argumentation: The practical side of probability. In F. Zenker (Ed.), Bayesian argumentation: The practical side of probability (pp. 1–11). Dordrecht: Springer.


  • Zenker, F. (2016). The polysemy of ‘fallacy’—or ‘bias’, for that matter. In P. Bondy & L. Benaquista (Eds.), Argumentation, objectivity and bias: Proceedings of the 11th conference of the Ontario Society for the Study of Argumentation, 18–21 May 2016 (pp. 1–14). Windsor, ON: OSSA. http://scholar.uwindsor.ca/ossaarchive/OSSA11/


Acknowledgements

We consider this joint work; our names are listed in alphabetical order. For comments that helped improve an earlier version of this paper, we thank Mike Oaksford as well as an anonymous reviewer (the latter particularly on the issue briefly discussed in Sect. 4.4, footnote 24). A version of this paper was presented at the workshop on Argument Strength hosted by the Research Group for Non-monotonic Logics and Formal Argumentation at the Institute of Philosophy II, Ruhr-University Bochum, Germany, 30 November–2 December, 2016. We thank that audience for their comments and discussion. Frank Zenker acknowledges a European Union Marie Sklodowska Curie COFUND fellowship (1225/02/03) as well as funding from the Volkswagen Foundation (90 531) and the Ragnar Söderberg Foundation.


Corresponding author

Correspondence to David Godden.

Appendix: Calculations for Example 1 (Sect. 4.4)


The law of total probability (Eq. 11) serves to calculate the initial priors on each of the reasons as follows:

$$\begin{aligned} P({R_n})= & {} P({R_n |C})\times P( C )+P( {R_n |{\sim } C} )\times P({{\sim } C})\\= & {} 0.25\times 0.17+0.15\times 0.83\\= & {} 0.167 \end{aligned}$$

Using BT (Eq. 8) to successively update on each reason, \(R_{1}\) to \(R_{4}\), for the first update:

$$\begin{aligned} P_f (C)= & {} \frac{P({R_1 |C})}{P({R_1})}\times P(C)\\= & {} \frac{0.25}{0.167}\times 0.17\\= & {} 0.2545 \end{aligned}$$

Update 1 fails to satisfy Eq. 28, since

$$\begin{aligned} \frac{P(R_1 |C)}{P( {R_1 } )}<\frac{t_S }{P( C )}= & {} \frac{0.25}{0.167}<\frac{0.5001}{0.17}=1.497<2.942 \end{aligned}$$

Given the updated value for P(C), we then recalculate the prior on each remaining reason:

$$\begin{aligned} P({R_n })= & {} P({R_n |C})\times P(C)+P({R_n |{\sim } C})\times P({{\sim } C})\\= & {} 0.25\times 0.2545+0.15\times 0.7455\\= & {} 0.1755 \end{aligned}$$

For the second update, on \(R_{2}\), we find:

$$\begin{aligned} P_f ( C )= & {} \frac{P( {R_2 |C} )}{P( {R_2 } )}\times P( C )\\= & {} \frac{0.25}{0.1755}\times 0.2545\\= & {} 0.3625 \end{aligned}$$

So the second update also fails to satisfy Eq. 28, since

$$\begin{aligned} \frac{P(R_2 |C)}{P( {R_2 } )}<\frac{t_S }{P( C )}=\frac{0.25}{0.1755}<\frac{0.5001}{0.2545}=1.425<1.965 \end{aligned}$$

Again recalculating the priors on the remaining reasons:

$$\begin{aligned} P({R_n})= & {} P({R_n |C})\times P( C )+P({R_n |{\sim } C})\times P({{\sim } C})\\= & {} 0.25\times 0.3625+0.15\times 0.6375\\= & {} 0.1863 \end{aligned}$$

We find for the third update, on \(R_{3}\):

$$\begin{aligned} P_f ( C )= & {} \frac{P({R_3 |C})}{P({R_3})}\times P(C)\\= & {} \frac{0.25}{0.1863}\times 0.3625\\= & {} 0.4864 \end{aligned}$$

So the third update also fails to satisfy Eq. 28, since

$$\begin{aligned} \frac{P(R_3 |C)}{P( {R_3 } )}<\frac{t_S }{P( C )}=\frac{0.25}{0.1863}<\frac{0.5001}{0.3625}=1.342<1.380 \end{aligned}$$

Finally recalculating the prior on the remaining reason, \(R_{4}\):

$$\begin{aligned} P( {R_n } )= & {} P( {R_n |C} )\times P( C )+P( {R_n |{\sim } C} )\times P( {{\sim } C} )\\= & {} 0.25\times 0.4864+0.15\times 0.5136\\= & {} 0.1986 \end{aligned}$$

For the fourth update we find:

$$\begin{aligned} P_f ( C )= & {} \frac{P( {R_4 |C} )}{P( {R_4 } )}\times P( C )\\= & {} \frac{0.25}{0.1986}\times 0.4864\\= & {} 0.6123 \end{aligned}$$

Therefore, had the first three updates already taken place, a threshold application of sufficiency would permit the fourth update, since

$$\begin{aligned} \frac{P(R_4 |C)}{P( {R_4 } )}\ge \frac{t_S }{P( C )}=\frac{0.25}{0.1986}\ge \frac{0.5001}{0.4864}=1.259\ge 1.028 \end{aligned}$$
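The successive updates above can be reproduced mechanically; a sketch under the assumptions of Example 1 (P(C) = 0.17, P(Rₙ|C) = 0.25, P(Rₙ|∼C) = 0.15, t_S = 0.5001). The function name is ours, and full precision is carried between steps, so figures may differ from the rounded printed ones in the last decimal:

```python
# Successive Bayesian updates for Example 1: after each update, P(C) becomes the
# new prior, and the prior on the next reason is recomputed by total probability.
def update_sequence(prior_c, lik_r_given_c, lik_r_given_not_c, n_updates):
    posteriors = []
    p_c = prior_c
    for _ in range(n_updates):
        p_r = lik_r_given_c * p_c + lik_r_given_not_c * (1 - p_c)  # Eq. (11)
        p_c = (lik_r_given_c / p_r) * p_c                          # Eq. (8), BT
        posteriors.append(p_c)
    return posteriors

t_s = 0.5001
for k, p in enumerate(update_sequence(0.17, 0.25, 0.15, 4), 1):
    print(f"update {k}: P(C) = {p:.4f}, meets t_S: {p >= t_s}")
# The first three updates fall short of t_S; only the fourth crosses the threshold.
```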


Cite this article

Godden, D., Zenker, F. A probabilistic analysis of argument cogency. Synthese 195, 1715–1740 (2018). https://doi.org/10.1007/s11229-016-1299-2
