
Veridical and Misleading Evidence

Belief, Evidence, and Uncertainty

Part of the book series: SpringerBriefs in Philosophy ((BRIEFSPHILOSC))


Abstract

Like the error-statisticians, Glymour, and us, Peter Achinstein rejects an account of evidence in traditional Bayesian terms. Like the error-statisticians and Glymour, but unlike us, his own account of evidence incorporates what we have called the “true-model” assumption, that there is a conceptual connection between the existence of evidence for a hypothesis and having a good reason to believe that the hypothesis is true. In this connection, and unlike any of the other views surveyed, Achinstein does not so much analyze the concept of evidence per se as provide a taxonomy of conceptions of evidence—subjective, objective, potential, “epistemic-situational,” and “veridical.” He then both argues that only “veridical evidence” captures exemplary scientific practice and explains why this should be the case. In the first half of this chapter, we set out Achinstein’s criticisms of the Bayesian account, then develop and in turn criticize his own account of “veridical evidence.” In the second half of the chapter, we contrast “veridical” with “misleading” evidence and show how two key theorems concerning the probability of misleading evidence allow us to measure it.


Notes

  1.

    Achinstein does not make our distinction between data and evidence. In what follows, we have collapsed the distinction between them and used his “e” to stand for both. It is, of course, our position that a distinction between them should be observed.

  2.

    Achinstein is working here with the notion of an objective epistemic probability, independent of what people happen to believe. Our criticisms of his position, and the examples on which they sometimes depend, do not turn on questions concerning the correct interpretation of the probability operator. See Achinstein (2010), where he discusses Mayo’s work on this topic.

  3.

    Although we take issue with some of its main themes, Cartwright (1983) contains a detailed and very influential discussion of the respects in which physical laws are not true, and of the implications thereof.

  4.

    In her highly readable and very influential book, Science as Social Knowledge, Helen Longino disagrees with our dual emphasis on the normative and descriptive aspects of philosophy of science. She addresses the evidential issues in which she is interested descriptively, in a “nonphilosophical discussion of evidence.” She chides philosophers for their use of formal and normative models “that are never realized in practice” (Longino 1990, emphasis ours). We differ from her in at least two ways. First, we think that without formal and normative accounts, it is difficult to assess a variety of scientific and philosophical issues. It is only against the background of such an account, for example, that we have been able to contend that across-the-board application of the collapsibility principle leads to the apparently paradoxical nature of Simpson’s Paradox (Bandyopadhyay et al. 2011). Second, there are many counter-examples to her claim, among which is the enormously influential work of Lewis and Sally Binford on the theory and practice of archeology; see Binford and Binford (1968). In the Appendix to the last chapter of this monograph, one of its authors, a working ecologist, shows how our formal/normative accounts of confirmation and evidence play a role in current ecological science and beyond.

  5.

    Longino takes issue with the claim, which we share with Achinstein, that evidence is independent of any agent’s beliefs and thus objective. According to her, all evidential relations are “always determined by background assumptions” (ibid., p. 60, emphasis ours), and therefore the same data can provide different evidential support for two competing theories simply in virtue of the fact that they make different background assumptions. This leads her to brand her account as “a contextualist analysis of evidence.” Three points are worth mentioning. First, our account is also contextualist, but in a very different and more minimal way: data provide evidence for one hypothesis (and its auxiliaries) against a rival hypothesis (and its auxiliaries) if and only if the ratio of the likelihood of the data on the first hypothesis to its likelihood on the second is greater than one. Our account of evidence is local, precise, and comparative. Her account is general, rather open-ended as to what counts as a “background assumption,” and not explicitly comparative. There is a clear sense in which our account, unlike hers, is objective and agent-independent. Second, even though many of the auxiliary assumptions (a.k.a. background assumptions) of Newtonian and Einsteinian theory differ, the derivation of Mercury’s observed perihelion shift is nonetheless considered evidence for the latter as against the former (see our discussion of the “old evidence” problem in Chap. 9). If we were to take her account seriously, a serious shift in scientific practice and in the assessment of the evidential support for theories would result. Third, in company with most other philosophical analyses of them, she tends to identify the concepts of “evidence” and “confirmation.” After subjecting the Hempelian account of confirmation to severe criticism for its syntactic character, she goes on to write that the right question to ask is: “[c]an this definition of the confirmation relation be the source of the relation between evidence and hypothesis?” (ibid., p. 24). But as the tuberculosis example makes clear, the definition of “confirmation” is not the source of the (properly understood) relation between “evidence” and hypothesis; “confirmation” and “evidence” can vary dramatically.
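
Stated compactly, in notation we introduce here only for illustration (D for the data, H1 and H2 for the rival hypotheses, A1 and A2 for their respective auxiliaries), the comparative criterion in the first point reads:

```latex
% Compact restatement of the comparative evidence condition in this note.
% D = data; H_1, H_2 = rival hypotheses; A_1, A_2 = their auxiliaries.
\[
  D \text{ is evidence for } H_1 \text{ as against } H_2
  \;\iff\;
  \frac{\Pr(D \mid H_1 \wedge A_1)}{\Pr(D \mid H_2 \wedge A_2)} > 1 .
\]
```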

  6.

    Suppose, as could well be the case, that the test has been improved so that its sensitivity and specificity are now both 0.999. Pr(H | D & B) is now 0.085, whereas Pr(H) is, as before, 0.000093. The posterior probability is very low, and yet the LR is 999. Confirmation and evidence pull us in very different directions. In fact, the probability of misleading evidence this strong is only 0.001 (Taper and Lele 2011). At this point, a doctor and patient might wonder whether we need to know more about the patient. Has she recently been abroad? If so, then it might not be wise to identify her with the general population, in which the prior probability of TB for a randomly selected individual is 0.000093. A wedge between confirmation and evidence is easily driven.
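
The figures in this note can be checked with the following minimal sketch (the Python is ours; the sensitivity, specificity, and prior are the values given in the note, and the 1/LR bound is the Taper and Lele result cited):

```python
# Minimal sketch reproducing the arithmetic in this note.
# Assumed inputs, taken from the note itself:
sensitivity = 0.999          # Pr(positive test | TB)
specificity = 0.999          # Pr(negative test | no TB)
prior = 0.000093             # Pr(H): prior probability of TB in the general population

likelihood_ratio = sensitivity / (1 - specificity)          # 999
prior_odds = prior / (1 - prior)
posterior_odds = likelihood_ratio * prior_odds
posterior = posterior_odds / (1 + posterior_odds)           # Pr(H | D & B) ~ 0.085

# Bound on the probability of misleading evidence this strong:
# Pr(LR >= k | H false) <= 1/k (Taper and Lele 2011); here 1/999 ~ 0.001.
prob_misleading_bound = 1 / likelihood_ratio

print(f"LR = {likelihood_ratio:.0f}")
print(f"posterior Pr(H | D & B) = {posterior:.3f}")
print(f"bound on Pr(misleading evidence) = {prob_misleading_bound:.4f}")
```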

  7.

    Notice that the evidence is the same whether one computes the LR for the two models on the basis of all 25 throws or as the product of the LRs for the five sequences.
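
The identity asserted here holds for any independent data. The following minimal sketch illustrates it; the two Bernoulli models (p = 0.5 vs p = 0.6) and the simulated throws are our own illustrative assumptions, not the example used in the chapter:

```python
# Sketch: for independent outcomes, the LR over all 25 throws equals the
# product of the LRs of the five sequences of five.
import random

random.seed(1)
throws = [random.random() < 0.5 for _ in range(25)]   # 25 independent throws

def likelihood(outcomes, p):
    """Probability of the outcome sequence under a Bernoulli(p) model."""
    prob = 1.0
    for success in outcomes:
        prob *= p if success else (1 - p)
    return prob

p1, p2 = 0.5, 0.6

# LR computed once, over all 25 throws.
lr_all = likelihood(throws, p1) / likelihood(throws, p2)

# LR computed as the product of the LRs of the five sequences of five.
lr_product = 1.0
for i in range(0, 25, 5):
    block = throws[i:i + 5]
    lr_product *= likelihood(block, p1) / likelihood(block, p2)

print(lr_all, lr_product)   # the two agree, up to floating-point error
```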

  8.

    See Royall (1997, 2004): “The positive result on our diagnostic test, with a likelihood ratio (LR) of 0.94/0.02 = 47, constitutes strong evidence that the subject has the disease. This interpretation of the test result is correct, regardless of that subject’s actual disease status. If she does not have the disease, then the evidence is misleading. We have not made an error—we have interpreted the evidence correctly. It is the evidence itself that is misleading.”

  9.

    A famous case of the former involved the French physicist René Blondlot’s continued perception of “N-rays” in his experimental set-up even after the American physicist Robert Wood had surreptitiously removed parts of the apparatus necessary to see them. See Nye (1980). Cases of fraud that occasionally come to light involve the deliberate fabrication of data.

  10.

    One of the few philosophers to take misleading evidence seriously is Earl Conee (2004), although his account (and what he styles an “evidentialist” approach to epistemological issues) differs very much from our own.

  11.

    There is a distinction, not discussed here, between the pre-data (1/k) and post-data (1/LR) probability of misleading evidence (see Taper and Lele 2011). While acknowledging that the pre-data probability of misleading evidence is useful in conceptualization and study design, Royall rejected the use of the probability of misleading evidence post-data. Post-data, the evidence is either misleading or it is not; there is no probability involved. We modestly disagree with Royall on the utility of the post-data probability of misleading evidence. First, there is no established scale for presenting evidence: the medical field often uses log10(LR), ecology often uses ln(LR), and there are other formats. A post-data probability of misleading evidence, recognized as a counterfactual probability rather than a belief-probability, can be useful in communicating the strength of evidence to scientists trained in classical error statistics. Second, as we have pointed out in Chap. 6, recognizing the implicit post-data error probability makes the relationship between evidential statistics and severe testing easier to understand. Finally, in some situations, such as evidence in the presence of nuisance parameters or multiple comparisons, the post-data probability of misleading evidence may to some extent become uncoupled from the likelihood realization.
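
A minimal sketch of the scale conversions and of the pre-data (1/k) and post-data (1/LR) bounds mentioned here; the particular LR and threshold values are illustrative assumptions of ours, not figures from the text:

```python
# Sketch: reporting an observed LR on different scales and computing the
# pre-data (1/k) and post-data (1/LR) misleading-evidence bounds.
import math

lr = 47.0            # an observed likelihood ratio (illustrative)
k = 8.0              # a pre-chosen evidence threshold (illustrative)

# Different fields report the same evidence on different scales.
log10_lr = math.log10(lr)    # common in medicine
ln_lr = math.log(lr)         # common in ecology

# Pre-data: before the data are seen, Pr(LR >= k | wrong model) <= 1/k.
pre_data_bound = 1 / k

# Post-data: the analogous counterfactual bound evaluated at the realized LR.
post_data_bound = 1 / lr

print(f"log10(LR) = {log10_lr:.2f}, ln(LR) = {ln_lr:.2f}")
print(f"pre-data bound 1/k = {pre_data_bound:.3f}")
print(f"post-data bound 1/LR = {post_data_bound:.3f}")
```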

  12.

    Royall (2004).

References

  • Achinstein, P. (2010). Induction and severe testing: Mill’s Sins or Mayo’s error? In D. G. Mayo & A. Spanos (Eds.), Error and inference (pp. 170–188). Cambridge: Cambridge University Press.

  • Achinstein, P. (2001). The book of evidence. Oxford: Oxford University Press.

  • Achinstein, P. (Ed.). (1983). The concept of evidence. Oxford: Oxford University Press.

  • Achinstein, P. (2005). Scientific evidence. Baltimore: The Johns Hopkins University Press.

  • Bandyopadhyay, P., Nelson, D., Greenwood, M., Brittan, G., & Berwald, J. (2011). The logic of Simpson’s paradox. Synthese, 181, 185–208.

  • Binford, S., & Binford, L. (Eds.). (1968). New perspectives on archeology. Chicago: Aldine Publishing Company.

  • Cartwright, N. (1983). How the laws of physics lie. Oxford: Clarendon Press.

  • Conee, E. (2004). Heeding misleading evidence. In E. Conee & R. Feldman (Eds.), Evidentialism. Oxford: Oxford University Press.

  • Longino, H. (1990). Science as social knowledge. Princeton: Princeton University Press.

  • Nye, M. (1980). N-rays: An episode in the history of science and psychology of science. Historical Studies in the Physical Sciences, 11, 125–156.

  • Royall, R. (1997). Statistical evidence: A likelihood paradigm. New York: Chapman & Hall.

  • Royall, R. (2004). The likelihood paradigm for statistical evidence. In M. L. Taper & S. R. Lele (Eds.), The nature of scientific evidence. Chicago: University of Chicago Press.

  • Taper, M., & Lele, S. (2011). Evidence, evidence functions, and error-probabilities. In P. Bandyopadhyay & M. Forster (Eds.), Handbook of statistics. Amsterdam: Elsevier, North-Holland.


Author information

Corresponding author

Correspondence to Prasanta S. Bandyopadhyay.


Copyright information

© 2016 The Author(s)

About this chapter

Cite this chapter

Bandyopadhyay, P. S., Brittan, G., & Taper, M. L. (2016). Veridical and Misleading Evidence. In: Belief, Evidence, and Uncertainty. SpringerBriefs in Philosophy. Springer, Cham. https://doi.org/10.1007/978-3-319-27772-1_8

