This paper connects veritistic teleological epistemology, VTE, with the epistemological dimension of the scientific realism debate. VTE sees our epistemic activities as a tradeoff between believing truths and avoiding error. I argue that van Fraassen’s epistemology is not suited to give a justification for a crucial presupposition of his Bad Lot objection to inference to the best explanation (IBE), the presupposition that believing that p is linked to p being more likely to be true. This makes him vulnerable to a counterargument, tailored after Musgrave’s defense of IBE, which would result in a stalemate between them about presuppositions of rationality. I will, however, show that switching to VTE can justify van Fraassen’s presupposition. This leads to a dismissal of common IBE arguments for realism as presented by Boyd and Musgrave, but I also argue that a more cautious version of realism can be rescued from the Bad Lot objection. Finally, I raise some worries about epistemic risk-attitude consistency for constructive empiricists and develop an alternative anti-realist position.1
The goal of the paper is to bridge a gap between epistemology and philosophy of science by applying a currently widely discussed epistemological framework, i.e., teleological epistemology, to the scientific realism debate.
The scientific realism debate revolves around the relation between our scientific theories and the world. One can separate a metaphysical, a semantic and an epistemological dimension of scientific realism (cf. Psillos 1999; Chakravartty 2017). This paper focuses on epistemological realism – the view that we have some form of justified beliefs about the world through scientific investigation. In the current literature the concern is mostly with justified beliefs about unobservable entities and/or structures such as electrons, quarks or proteins, and I adopt this focus as well without questioning the distinction between observables and unobservables. I analyze the debate from the perspective of veritistic teleological epistemology [VTE] because I think that the implications of what has been termed “the value-turn in epistemology” (Riggs 2006; Pritchard 2007) are not yet sufficiently appreciated in philosophy of science and will advance the realism debate significantly. Even though I will give a quick motivation for VTE, a full defense is not within the scope of this paper. The goal is rather to show the consequences of applying the framework to the scientific realism debate. Specifically, I will apply VTE (a) to the Bad Lot objection for realists and (b) to the question of epistemic risk attitudes for anti-realists.
Section 2 is dedicated to an overview of teleological epistemology. In Sect. 3, I present the Bad Lot objection and argue that van Fraassen’s epistemology is not suited to give a justification for a crucial presupposition of his Bad Lot objection, that believing that p is linked to p being more likely to be true. In Sect. 4, I argue that VTE can deliver such a justification. In Sect. 5, I show that this leads to a dismissal of common IBE arguments for realism as presented by Boyd and Musgrave but also argue that their arguments can be modified and a more cautious version of realism can be saved. In Sect. 6, I present the consequences of VTE for anti-realism and argue that anti-realists cannot be mere sceptics and that constructive empiricists might have a problem with epistemic risk consistency. Furthermore, I formulate an alternative anti-realist position based on high epistemic risk aversion.
2 Teleological Epistemology
In this section, I want to give a brief introduction to teleological epistemology.Footnote 1 Recently, Chakravartty (2018) developed an analysis of the realism debate in terms of the Jamesian truth goal (cf. Chakravartty 2018, 232; named after James 1896).Footnote 2 The Jamesian truth goal (henceforth the truth goal) is characterized by two sub-values: believing truths and avoiding error. One central feature of this characterization is that the two parts pull in opposite directions. Different ways of balancing them result in different epistemic standards. The more we value the goal of avoiding false beliefs at the expense of the goal of believing truths, the higher our epistemic standards will rise and the more often we will withhold belief. Conversely, the more we value the goal of believing truths at the expense of the goal of avoiding falsehoods, the lower our epistemic standards will get and the less likely it will be that we withhold belief. This can be called the fundamental epistemic trade-off, and I will call the choice of how to make the trade-off the question of balancing.
For the purpose of clarity, I will introduce the following conventionFootnote 3:
Let Tv be the value of having true beliefs.
Let Fv be the value of avoiding having false beliefs.
Let Tv (p) be the value of a true belief on the question whether p, if p is true.
Let Fv (p) be the value of avoiding a false belief on the question whether p, if p is false.
Let >, <, =, ≥, ≤ express the relations between those values.
On the question of how to balance, Chakravartty, for instance, calls for liberalism:
While all defensible stances must pass the test of rationality, there is no one answer to the question of what a responsible epistemic agent should value. Epistemological values are variable and though an individual may change her mind about them, she need not. (Chakravartty 2018, 233)
From this Chakravartty (2018, 232) concludes for stance-choice to “subtract the usual judgment that at most one party to these disputes is, in fact, correct”. As such, Chakravartty ends up with a very liberal attitude towards stance-choiceFootnote 4, resembling some of van Fraassen’s own voluntarism. Contrary to van Fraassen, however, this voluntarism is now clearly cashed out in a trade-off between believing truths and avoiding error. Chakravartty’s paper merely concludes with this liberal solution, but I think the Jamesian picture can deliver much more, at least after it is spelled out explicitly in VTE. I will now introduce such a framework. It can be divided into four pillars:
1. The Axiological Pillar: The Good is Identified with Some x.
The first step is to identify the good. Most epistemologists today favor veritism, i.e., some variation of the view that the fundamental epistemic good is to believe truths and to avoid believing falsehoods.Footnote 5 This paper works within such a veritistic framework of teleological epistemology (VTE).
2. The Teleological Pillar: The Good is the End that We Want to Achieve.
This is the explicit teleological pillar: the good is what we are aiming for or trying to achieve. I will speak about the truth goal when referring to the already mentioned veritist goal.
3. The Deontic Pillar: What is Right is Generally Explained via the Good.
We generally believe in the right way if it promotes the epistemic good (the goal), and we generally believe in the wrong way if it impedes the epistemic good. Here, ‘right way of believing’ and ‘wrong way of believing’ are simply other ways of saying ‘justified believing’ and ‘unjustified believing’. As Ronzoni (2010, 455) puts it: “[T]he good tells us what our general direction and our ultimate target should be, whereas the right tells us what are the legitimate options to get there”. Consider, for instance, evidentialism. Framed in a teleological framework, such an account would propose that the right way of believing is (roughly speaking) believing based on evidence, because this is the best way of getting to our ultimate goal of believing truths and avoiding error. It is important to recognize here that this makes justification merely instrumental in reaching the epistemic good. We develop our methods of justification as a mere instrument to achieve our epistemic goals. One way of spelling out objective principles of epistemic justification is as followsFootnote 6:
Principle of Direct Objective Epistemic Justification: For all subjects S and propositions p: Believing that p is epistemically justified for S if and only if believing that p maximizes epistemic value.
Principle of Indirect Objective Epistemic Justification: For all methods of justification M: M is epistemically justified if and only if following M maximizes epistemic value and for all subjects S and propositions p: Believing that p is epistemically justified for S if and only if believing that p follows M.
4. The Normative Pillar: Norms of Belief are Explained Entirely via the Right.
The final pillar explains the path from right and wrong believing to obligations and permissions to believe. A very straightforward way of spelling this connection out (but by no means the only one) is as follows:
Norm of Belief Formation and Sustenance. For all subjects S and propositions p: S is epistemically obligated to believe that p if and only if S is justified in believing that p. S is epistemically obligated not to believe that p if and only if S is not justified in believing that p.
Note that a variety of epistemologists (e.g., Alston 1988; Plantinga 1986) think that there are no obligations to believe; they skip the fourth pillar. They can still be teleological epistemologists, so the fourth pillar is optional. In the context of the realism debate this would translate as follows: Suppose believing in the existence of electrons maximizes epistemic value, then it is epistemically justified to believe that electrons exist but that does not already imply that you are obligated to believe that electrons exist.Footnote 7
These four steps roughly characterize the basic structure of teleological epistemology: We identify the good. We set the good as our goal. We explain the right via the good and we explain norms of belief via the right. This is a very broad characterization and all I want to refer to with the term ‘teleological epistemology’.
Current epistemologists differentiate between two (sometimes competing) veritist frameworks: one with full belief, the other with degrees of belief (i.e., credences). Van Fraassen originally framed his Bad Lot objection in terms of full beliefs, and I mostly want to focus on such a framework here. Later, van Fraassen also considered the use of credence and subjective probabilities, but he still sees the value of the notion of ‘belief’ in his constructive empiricism as follows: “First of all, it seems to me that there is a good place within the epistemic enterprise for having one picture of which you just say “that is the way things are”” (van Fraassen 2001, 165).
This is how I understand a system of full belief—as a description of the way one thinks things are. Of course, this characterization does not exclude more fine-grained notions, such that I can be (or should be) more confident in some of my full beliefs than in others. Note further that various epistemologists propose bridge principles between full beliefs and credences. This is associated with the Lockean Thesis, claiming that I ought to believe that p iff I have a high credence in p.Footnote 8
The Lockean Thesis is contentious, and it is not within the scope of the paper to fully adjudicate its legitimacy, but I want to give some specifications in order to better understand some of my later remarks concerning a credence framework. One main problem for Lockeans is to provide a non-arbitrary credence threshold for belief obligations. Here, I am sympathetic to Dorst’s (2019) view of a variable threshold. He argues that the magnitudes of the values of true and false belief determine the threshold and that this threshold varies with context and the proposition in question. I am sympathetic to this view because it pairs naturally with VTE. First, it incorporates the two-sided truth goal as the basis for the threshold. Second, it acknowledges that epistemic reasons alone cannot settle balancing, so there is no fixed threshold. And third, it has a natural solution for balancing the truth goal on contextual grounds. It also solves many classical problems for Lockeans, such as the Lottery Paradox.
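To fix ideas, the variable-threshold idea can be put in expected-value terms. The following is my own reconstruction, not Dorst’s exact formulation; it assumes that the expected epistemic value of believing that p is the credence-weighted balance of Tv (p) and Fv (p):

```latex
% Sketch of a variable Lockean threshold (my reconstruction, not Dorst's
% exact formulation). Let c(p) be one's credence in p. Believing that p
% has positive expected epistemic value iff
%   c(p) * Tv(p) - (1 - c(p)) * Fv(p) > 0,
% which rearranges to the threshold condition
\[
  c(p) \;>\; \frac{Fv(p)}{Tv(p) + Fv(p)} .
\]
% The threshold thus varies with the magnitudes of Tv(p) and Fv(p) for
% the proposition in question.
```

On this sketch, the threshold exceeds 0.5 exactly when Fv (p) > Tv (p): a risk-averse balancing of the truth goal translates directly into a demanding standard for full belief.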
There is another group of objections to Lockeanism grounded in the analysis of knowledge ascriptions (see especially Shaffer 2018). Plausibly, Lockeans are committed to the following view: knowing that p implies that one’s degree of rational credence must meet or exceed some threshold. However, it is easy to construct cases where one’s degree of rational credence in some p is extremely high, p is true, but p is false in a nearby world. In such cases, believing that p is not safe. Given that knowledge implies safety, the threshold view of knowledge must be false. One potential path here is the following: if I can at least sometimes be permitted or obligated to believe that p while my belief in p is not safe, then I can be obligated to believe without having knowledge. Such a view is only problematic if I were to think that one should only believe what one knows (cf. Williamson 2002; Littlejohn 2012). This only touches on the problems with knowledge and a threshold view, and I will have to come back to it later when I talk about approximate truth; other than that, I will mostly focus on norms of belief here and leave the analysis of knowledge behind.
I also want to say a few words about the alternatives to VTE to put the present paper into context. In my proposed broad sense, VTE is the most common value-theoretical framework in epistemology. Value-theoretical thinking was long under the surface of the discussion, as Berker (2013) recognizes. He (Berker 2013, 355, footnote 27) lists 63 papers from 35 epistemologists of all orientations who adhere to teleological epistemology, though mostly only implicitly, to show how widespread this view is—and this even though my conception of teleological epistemology is wider than his. This prevalence alone warrants investigating the applications of VTE to epistemological issues of the scientific realism debate. The two main rival non-teleological accounts are: first, purely coherentist, subjectivist accounts of justification.Footnote 9 Secondly, recent explicitly deontological accounts. Littlejohn (2012) referred to his account as a deontological conception of justification but recognized later (Littlejohn 2018) that it is better understood as a form of teleological non-consequentialism. In fact, his theory has all four features of my categorization of a teleological theory. Very recently, however, epistemologists have started to develop truly deontological frameworks. Most notably, Sylvan (2020) takes Wood’s (1999) interpretation of Kant—that the good is not to be aimed at but to be respected—to explicate his epistemic Kantianism. For a Kantian solution to van Fraassen’s Bad Lot objection against IBE, see also Shaffer (2019). Both the purely subjectivist and the explicitly deontological views of justification are currently minority positions in epistemology. Saying this is not to judge them; doing so is beyond the scope of this paper. My concern is simply to apply the most common value-theoretical framework of epistemology, i.e., a teleological framework, to two specific issues in the scientific realism debate.
One further remark on the presuppositions of VTE. By speaking about truth, VTE does not reject anti-realism from the outset. Van Fraassen, for instance, still explicitly formulates a truth-goal for science. He simply restricts it to observables—the goal is empirical adequacy, or truth about the phenomena. My claim merely amounts to the following: if the epistemic goals of science are best viewed in terms of VTE, what follows for the realism debate?
3 The Bad Lot
Realists often rely on an inference to the best explanation [IBE] for their claims. One current major objection against IBE is the argument of the Bad Lot due to van Fraassen. He writes:
[O]ur selection may well be the best of a bad lot. To believe is at least to consider more likely to be true, than not. […] For me to take it that the best of set X will be more likely to be true than not, requires a prior belief that the truth is already more likely to be found in X, than not. (van Fraassen 1989, 142-143)
Van Fraassen’s Bad Lot objection presupposes that believing that p is linked to p being more likely to be true. This is a presupposition (i.e., an epistemic value decision) that van Fraassen never justifies. Consider a possible alternative. Start with James’ (1896) argument and suppose that we have equally balanced evidence between the belief that God exists and the belief that God does not exist. Could it not be rational to believe in the existence of God after all, even though it is not more likely to be true than not? James responds to Clifford’s risk-averse epistemology as follows: “It is like a general informing his soldiers that it is better to keep out of battle forever than to risk a single wound.” And why should we not be even more risk-tolerant? Could it not also be better to fight a battle that is very likely lost, in a case where the slight chance of victory would be extremely valuable? Could it not be rational to hold a belief even if it is likely to be false, given that, if it were true, holding that belief would be highly valuable? If believing truths were much more valuable than avoiding error, then this would seem reasonable.
Whatever one finds prima facie reasonable here, at least van Fraassen’s presupposition of a risk-averse balancing needs some justification. Van Fraassen famously defends a voluntarist epistemology. Its main cornerstone is that there are only two requirements for rationality: consistency and probabilistic coherence. Since the rationality requirements are so thin, various competing epistemic systems count as rational. Given these thin criteria, there is no straightforward way in which the Jamesian picture just painted violates either of them. For instance, by believing in God in a case of equally balanced evidence, I am not inconsistent as long as I do not also believe that God does not exist. Furthermore, I am not probabilistically incoherent in believing so. As long as I keep my credences in the God hypothesis and its negation at 0.5, no Dutch Book can be made against me in this case. (I need to give up a straightforward version of the Lockean Thesis, however.) As such, van Fraassen’s voluntarist epistemology is not suited to justify the > 0.5 threshold for belief. In the next section, I will argue that while van Fraassen cannot establish that avoiding error needs to be valued higher than believing truths, VTE can. However, I will also argue that this leaves a door open for saving a form of realism from the Bad Lot.
4 Balancing the Truth Goal
This section addresses how the truth-goal should be balanced. I will argue that for all propositions p that we consider, Fv (p) > Tv (p). This argument is based on the following principle of logical inconsistency avoidance:
Principle of Logical Inconsistency Avoidance (LI): For all subjects S and propositions p, S is not permitted to believe both that p and that ¬p at the same time.
It might be tempting to view LI as a basic norm of belief. However, in VTE every epistemological principle has to be judged by whether it is conducive to the truth goal. LI is no exception. Prima facie, LI seems to fulfill this condition: We know that believing that p and that ¬p simultaneously will necessarily lead to believing a falsehood. Thus, since it is part of the correct epistemic goal to avoid believing falsehoods, we should not believe p and ¬p at the same time. LI is justified.
However, this argument succeeds only given a specific balancing decision. Consider a case with equally balanced evidence between p and ¬p (and no non-evidential reasons for preferring one over the other).Footnote 10 Believing just one of the two propositions would be completely epistemically arbitrary and cannot be correct. Thus, there are two options left: believing none or both. If we believe none, then we succeed in our goal of avoiding error but not in our goal of believing the truth about whether p. Therefore, this option prioritizes the goal of error-avoidance over the goal of truth-acquisition. Conversely, if we believe both, then we succeed in our goal of believing the truth about whether p, but we cannot avoid believing a falsehood too. In this case, we prioritize the goal of truth-acquisition. Furthermore, if we prioritize neither of those two goals, then both options, believing both and believing neither, have the same value. If one option maximizes value, then the other does too. Thus, both options are permissible, which, again, makes believing a contradiction permissible and violates LI.
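The structure of this argument can be illustrated with a small calculation. The following sketch is mine, not the paper’s: it assumes, purely for illustration, that the expected value of a single belief is its probability of truth times Tv minus its probability of falsehood times Fv, and that the values of several beliefs simply add up.

```python
# Illustrative sketch (not from the paper): expected epistemic value of the
# doxastic options in a case of equally balanced evidence, P(p) = 0.5.
# Assumes value of one belief = P(true) * Tv - P(false) * Fv, additively.

def belief_value(prob_true, tv, fv):
    """Expected epistemic value of holding a single belief."""
    return prob_true * tv - (1 - prob_true) * fv

def options(tv, fv, p=0.5):
    """Values of: believing neither, believing only p, believing both."""
    believe_neither = 0.0
    believe_p = belief_value(p, tv, fv)
    believe_both = belief_value(p, tv, fv) + belief_value(1 - p, tv, fv)
    return believe_neither, believe_p, believe_both

# Equal weighting (Tv = Fv): believing both and believing neither tie,
# so both maximize value and LI would be permissibly violated.
print(options(1, 1))   # (0.0, 0.0, 0.0)

# Risk-averse weighting (Fv > Tv): suspending wins; LI is upheld.
print(options(1, 2))   # (0.0, -0.5, -1.0)

# Risk-seeking weighting (Tv > Fv): believing both wins, violating LI.
print(options(2, 1))   # (0.0, 0.5, 1.0)
```

The calculation mirrors the argument in the text: only a weighting with Fv > Tv makes suspension the uniquely value-maximizing option.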
This has the following consequences: In VTE, LI holds only if for all propositions p, Fv (p) > Tv (p). We are thus left with two options in VTE: endorsing risk-averse balancing or giving up LI. So, how about giving up LI? One view that straightforwardly necessitates a violation of LI is Dialetheism (Priest, Routley and Norman 1989) – the view that there are Dialetheia, i.e., that for some propositions p, both p and ¬p are true. Given VTE, for such propositions believing that p and believing that ¬p will maximize value. This implies a violation of LI. Dialetheism is usually motivated by various logical paradoxes. But even if Dialetheism is false, various paradoxes point towards violations of LI that seem to be motivated by VTE. One such case is the Preface Paradox (Makinson 1965). In short, assume you have written a book carefully. It might be reasonable to believe of each sentence in the book that it is true but also to believe that the conjunction of all the sentences in the book is false, and thus to explicitly believe a contradiction. A similar case is Harman’s Paradox (Harman 1986, 158): you infer that of all your beliefs some will most likely be inconsistent, but you do not know which. It might not be reasonable to abandon a large number of your beliefs just to be consistent. Again, this violates LI. Such reasoning is plausibly motivated by VTE: if you had to give up a large set of true beliefs just to avoid one or a few inconsistencies (and with them a few false beliefs), you would tilt the balancing of the truth goal unreasonably heavily in favor of error-avoidance. Thus, it might be epistemically better to keep those beliefs. Note that this VTE-based motivation for violating LI is not based on Dialetheism. This shows that even if Dialetheism is false, VTE can motivate violating LI. This, however, necessitates paraconsistency: the logical consequence relation ⊢ cannot allow for Explosion, i.e., a contradiction cannot entail everything. A paraconsistent logic that at least denies Explosion allows a violation of LI without a commitment to believing everything.
Still, even if we grant for all such cases that violating LI is permissible, avoiding the conclusion that for all propositions p, Fv (p) > Tv (p) would require an even more radical violation of LI. LI would have to be violated in all cases of equally balanced evidence between the truth of a proposition and its negation. Such cases, however, are far too common, be it in everyday life or in science. Holding all those contradictory beliefs cannot be reasonable and goes way beyond paradoxical cases such as the Preface Paradox. Nor can it be motivated by Dialetheism, since Dialetheism is usually a response to logical paradoxes and not to contingent epistemic situations. After all, it would be very curious why for all propositions p for which it happens that the evidence is equally balanced, it also happens that p and ¬p are true. As such, it cannot be that for all propositions p, Tv (p) ≥ Fv (p). This argument is a reductio of such a balancing. The only remaining option is that for all propositions p, Fv (p) > Tv (p).
One quick objection. Some scenarios might appear to contradict this conclusion. Consider Jeremy Goodman’s example (cf. Hawthorne, Rothschild and Spectre 2016, 1400) where horse A has a 45% chance to win a race, horse B 28% and horse C 27%. If you ask me: “Which horse do you think will win?”, it seems rational for me to respond: “I believe horse A will win”. Thus, it seems as if I am permitted to believe a proposition that is likely to be false – that horse A will win. Now, this might tell us something interesting about how we use the word ‘belief’. However, this answer is merely indicative of my belief that horse A has a higher chance of winning than the other options. If I were asked whether I believe that horse A wins or that it loses, there can be at best two rational responses: either I believe that A loses, since it is more likely that it loses, or I believe neither that it wins nor that it loses, since the evidence is so slim. The latter would be the more risk-averse answer. Examples such as this are thus no counterexamples to the risk-averse weighting of the truth goal.
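The horse case can be run through the same illustrative expected-value model (again my own sketch, with the Tv and Fv magnitudes chosen arbitrarily): believing a proposition pays only if one’s credence in it exceeds Fv/(Tv + Fv).

```python
# Illustrative sketch (my own model, not from the paper): Goodman's horse
# case under an expected-value reading of the truth goal. Believing p pays
# iff P(p) * Tv - (1 - P(p)) * Fv > 0, i.e. P(p) > Fv / (Tv + Fv).

def lockean_threshold(tv, fv):
    """Credence above which believing maximizes expected epistemic value."""
    return fv / (tv + fv)

p_a_wins = 0.45       # horse A: best single chance, but below 0.5
p_a_loses = 1 - p_a_wins  # 0.55

# Clearly risk-averse weighting, e.g. Fv twice Tv: threshold = 2/3.
t = lockean_threshold(tv=1, fv=2)
print(p_a_wins > t)   # False: do not believe "A wins"
print(p_a_loses > t)  # False: do not believe "A loses" either -> suspend

# Mildly risk-averse weighting (Fv only slightly above Tv): threshold ~0.545.
t = lockean_threshold(tv=1, fv=1.2)
print(p_a_loses > t)  # True: believe "A loses"
```

Both rational responses from the text appear, depending on how strongly error-avoidance is weighted; believing “A wins” at 0.45 is never licensed as long as Fv > Tv.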
5 Consequences for the Realist
In the last section, I established that for all propositions p, Fv (p) > Tv (p). In this section, I will show that this weighting of the truth goal rules out a widely held version of classical realism based on IBE. As examples, I will refer to two prominent accounts: that of Alan Musgrave and that of Richard Boyd. I then consider some modifications to make those accounts compatible with VTE.
Let us start by considering Musgrave’s influential defense of IBE. As already stated, van Fraassen (1989) objects to IBE with the argument of the Bad Lot. In a sense, Musgrave recognizes a version of this objection already in his “The Ultimate Argument for Scientific Realism” (Musgrave 1988, 238–239). He concedes that the best explanation might be a “perfectly lousy one” (Musgrave 1988, 239). He thus strengthens the premise of IBE and contends that the explanation ought to be “satisfactory”: “It is reasonable to accept a satisfactory explanation of any fact, which is also the best available explanation of that fact, as true” (Musgrave 1988, 239). Similarly, Lipton calls for a “good enough” (Lipton 1993), or “sufficiently good” (Lipton 2004) explanation. As Dellsén (2017) rightfully points out, since Musgrave never describes what “satisfactory” exactly amounts to, his attempt lacks clarity. It raises the suspicion that any explication of “satisfactory”, “sufficiently good”, or “good enough” risks making the whole argument circular. Sure, if the explanation is satisfactory or good enough then we buy the conclusion, but van Fraassen could simply respond to Musgrave or Lipton that an explanation involving unobservables can never be satisfactory.
It is less recognized (e.g., it is absent from Dellsén’s (2017) and Schupbach’s (2014, 56) discussions of Musgrave) that Musgrave later changed his explication of IBE. In Musgrave (2007), the satisfactory condition disappears when he explicates IBE as follows: “It is reasonable to believe that the best available explanation of any fact is true.” This is surely no mistake. It is rather an advancement—most likely a reaction to criticism. The advantage of leaving the satisfactory condition out is that the charge of circular arguing disappears. However, if Musgrave removes the satisfactory condition, what about the Bad Lot? He does not address it in this paper. It seems to me that Musgrave’s response is simply to bite the bullet. We should take him at his word on how he now explicates IBE. If it turns out that we have a bad lot, then we believe the best explanation anyway. That is the implication. How can that move be reasonable? Remember conjectural realism (cf. Worrall 1982), which parallels Musgrave’s account in many regards. The argument was that our current best scientific theories are the best guesses about the truth. Worrall (1989, 150, footnote) explicitly stated that this does not imply that we should also believe in the truth of our best guesses. Musgrave’s account is merely a realist extension of conjectural realism. He argues that we should also believe in the best guesses, even if they have a low degree of warrant.
This is what rationality is for Musgrave. He even calls it an “absurd metaphysical principle” to claim that “‘The best available explanation of any fact is true (or probably true, or approximately true)’”. But if the best explanation should be believed without even securing probably true belief, then Musgrave has clearly separated rational believing from true believing altogether. And this is exactly what he intended to do, because “[c]ritical rationalists reject justificationism”.
From the perspective of van Fraassen’s original Bad Lot objection, there is not much van Fraassen could respond to Musgrave. At best he could contend that they disagree about what rationality is. Even that hardly works. As explained earlier, van Fraassen’s requirements for rationality are consistency and probabilistic coherence. It is not clear how exactly Musgrave violates them. Separating full belief from likeliness of truth is neither inconsistent nor probabilistically incoherent. Van Fraassen smuggles into his principles of rationality the assumption that believing that p is linked to p being more likely to be true, and Musgrave simply disagrees that this is the only reasonable response. Thus, we end up in a stalemate about what is reasonable. Here VTE comes in.
Consider a case where an explanation E is likely to be false but still the best explanation at hand. Musgrave’s considerations imply that it is then reasonable to believe in the truth of E since it is the best explanation. However, in believing that E, one would be more likely to commit an error than to believe a truth. From the perspective of VTE, this could only be justified if believing truths is more valuable than avoiding error. Otherwise, the disvalue from the high number of errors one incurs would lead to a lower epistemic value compared to not following this practice and not believing that E. As shown in the last section, the problem with such a risk-tolerant weighting of the truth goal is, however, that it would epistemically obligate one to believe an unreasonable number of contradictions in all cases of equally balanced evidence between p and ¬p. It is even worse for Musgrave, because one could set up the example such that the evidential support for the best explanation in question is extremely low and the error rate extremely high. This would be an extremely risk-seeking strategy, leading to the belief in contradictions in even more cases than those of equally balanced evidence. That cannot be reasonable. Consequently, the real reason why the Bad Lot argument hits is that Musgrave’s version of IBE leads to believing too many contradictions and is thus unreasonable.
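How risk-seeking Musgrave’s rule would have to be can be quantified under the same illustrative assumptions (mine, not Musgrave’s): on an expected-value reading, believing the best explanation E maximizes value only if the ratio Tv/Fv exceeds (1 − P(E))/P(E).

```python
# Illustrative sketch (not from the paper): the minimum ratio Tv/Fv at which
# "believe the best explanation" still maximizes expected epistemic value,
# assuming value of belief = P(E) * Tv - (1 - P(E)) * Fv.

def required_value_ratio(prob_best):
    """Minimum Tv/Fv ratio for believing the best explanation to pay off."""
    return (1 - prob_best) / prob_best

for p in (0.5, 0.3, 0.1):
    print(p, round(required_value_ratio(p), 2))
# At P(E) = 0.5, truths need only equal weight with error-avoidance (1.0);
# at P(E) = 0.3, they must be worth more than twice as much (~2.33);
# at P(E) = 0.1, the weighting must be extremely risk-seeking (~9.0).
```

The lower the evidential support for the best explanation, the more extreme the risk-seeking weighting that Musgrave’s rule presupposes, which is the escalation the text describes.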
It is now tempting to think that this simply vindicates van Fraassen’s argument. It seems that Musgrave just runs into believing many contradictions, so he violates consistency after all. He does, but only if we presuppose VTE. Van Fraassen’s rationality requirements are much more liberal, and from that perspective there is no inconsistency. Specifically, van Fraassen’s rationality criteria cannot secure believing that p and not believing that ¬p while the evidence points towards p.
Before considering how Musgrave might respond, we should analyze another prominent defense of IBE—the one by Boyd. I will argue that it has problems very similar to Musgrave’s. Boyd stands here as exemplary of revisionary approaches to IBE that try to weaken the conclusion from truth to approximate truth.Footnote 11 He writes:
If the fact that a theory provides the best explanation for some important phenomenon is not a justification for believing that the theory is at least approximately true, then it is hard to see how intellectual inquiry could proceed. (Boyd 1983, 74)
This statement diverges from Musgrave’s in at least one important detail. Boyd does not state that it is reasonable or justified to believe that the best explanation is true, but only that it is “at least approximately true”. Setting aside the problem that ‘approximately true’ is a notoriously dodgy term, this makes Boyd’s claim weaker than Musgrave’s, since he is only committed to inferring approximate truths.Footnote 12
However, this argument still cannot escape the VTE argument. Consider a case where the evidence is such that the probability of any of multiple explanations of a phenomenon being approximately true is < 0.5. According to Boyd, it would still be justified to believe that the best explanation amongst them is approximately true, even though it would probably be false that the explanation is approximately true. Again, this requires a weighting of the truth goal where for all p, Tv (p) ≥ Fv (p). This is unreasonable, and Boyd-style realism is ruled out as well, given the considerations so far.
It is worth exploring how this argument would play out in a partial belief model. Suppose the probability of the best explanation p being true is 0.3. Given that we tailor our credences to the probability of truth, our credence in p ought to be 0.3. Our credences then reflect the degree of empirical or explanatory success of p. In a Lockean framework, we would need a very low threshold to bridge from credences to full beliefs in p (and ¬p) in this situation. We are probabilistically coherent, but we violate LI massively, which goes against VTE. Note that I do not want to suggest here that realists and anti-realists simply assign the same credences to propositions about unobservables, with the anti-realist merely requiring a higher threshold to bridge from credences to full belief, so that this is why they end up at diverging full beliefs. In fact, it is precisely the discussions around the Bad Lot that indicate a different picture. Anti-realists, such as van Fraassen, do not assign evidential value to explanations in the realm of unobservables. As such, the discovery that the existence of some unobservable A has strong explanatory power will not increase the credences of an anti-realist in the proposition that A exists, while it will increase the credences of a realist such as Musgrave. Realists and anti-realists will thus frequently end up at differing credences. It is not simply a different threshold that separates realists from anti-realists. Still, the threshold plays some role. As a consequence of Musgrave’s version of IBE, he is committed to the view that the best explanation warrants full belief even if such an explanation has a credence of 0.3. Van Fraassen, on the other hand, requires a threshold of > 0.5.
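The threshold contrast described here can be sketched numerically. The following is a toy illustration of the Lockean bridge under different weightings of the truth goal; the specific numbers and weighting values are hypothetical, not taken from Musgrave or van Fraassen:

```python
def expected_epistemic_value(credence, true_value=1.0, false_disvalue=1.0):
    """Expected value of fully believing a proposition held with `credence`,
    weighting believing truths by `true_value` and committing errors by
    `false_disvalue` (cf. the Tv/Fv balancing discussed in the text)."""
    return credence * true_value - (1 - credence) * false_disvalue

def lockean_belief(credence, threshold):
    """A Lockean bridge: full belief is licensed iff the credence meets the threshold."""
    return credence >= threshold

# Equal weighting: believing a 0.3-credence explanation has negative value.
balanced = expected_epistemic_value(0.3)                       # 0.3 - 0.7 = -0.4
# Risk-tolerant weighting (truths valued 3x): the same belief comes out positive.
risk_tolerant = expected_epistemic_value(0.3, true_value=3.0)  # 0.9 - 0.7 = 0.2

# A van Fraassen-style threshold (> 0.5) blocks full belief at credence 0.3;
# Musgrave's IBE would need a very low threshold to license it.
print(lockean_belief(0.3, 0.51))  # False
print(lockean_belief(0.3, 0.25))  # True
```

The point of the sketch is only that a belief with credence 0.3 can be made to “pay” either by inflating the value of true belief or by lowering the Lockean threshold, which is exactly the move the text attributes to Musgrave-style IBE.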
Notice the similarities to the horse-case. Boyd might be able to put forward an argument that the best explanation is more likely to be approximately true than all alternative explanations. For this, he has to argue that the best explanation of a phenomenon is the best evidence one can have in favor of a metaphysical assumption and, furthermore, that believing based on evidence is the method of justification that maximizes epistemic value. Then, it is always justified to believe that the best explanation is more likely to be approximately true than all alternative explanations. However, it is not always justified to believe that the best explanation is true (or approximately true), in the same sense in which it is not always justified to believe that the horse with the best chances of winning actually wins. Therefore, this solution would only result in a form of conjectural realism, but not the more substantial form of realism intended by Boyd. It is correct that the most probable theory in a bad lot of theories is still epistemically superior to its competitors. It must enjoy some degree of empirical/explanatory success (otherwise it would not be the most probable theory). A credence model can reflect that by assigning higher credences to the best explanation than to the competitors and, ultimately, tailoring the credences to the probability of truth. However, whatever the exact relation between credences and full belief is, one necessary condition for full belief is that the credences are > 0.5.
Next, I will consider two modifications to Musgrave- and Boyd-style realism in order to reconcile parts of their view with VTE. The first option is to set the epistemic standards higher than Musgrave or Boyd. The route for the classical realist is to judge whether our best scientific theories are generally more likely to be (approximately) true than not in their claims regarding unobservables. If the probability is generally > 0.5, then the beliefs in unobservables based on our current best scientific theories are justified. This can be framed either in terms of expected value or in terms of actual value, which translates in epistemology to subjective vs. objective justification. For objective justification, the beliefs of the realist in unobservables actually have to have a truth/falsity-ratio of > 0.5 in order to be justified. Objective justification is compatible with inaccessible justification. We could be justified in being realists without knowing that we are justified. This is a very externalist way of looking at it. For subjective justification, we need some accessibility to justification. This depends on what we think the evidential basis for the arguments in the realism debate is. One option would be to consider the historical meta-evidence. Then the probability of truth will be assessed in accordance with such evidence. Notice that this procedure of the realist does not consider every single proposition by judging whether it maximizes value. It rather follows a general method of belief acquisition and sustenance and judges whether that general method maximizes value, whereas the justification of beliefs is achieved only indirectly by whether they are formed and sustained according to said method. This equates to the epistemological structure of indirect justification as laid out in Sect. 2, pillar 3. There are, of course, intermediary options possible as well, but these details are not important here.
It is central to recognize that fulfilling the threshold condition is a more demanding task for the realist than merely relying on IBE. If Laudan (1981, 35) really demonstrated that “one could find half a dozen once successful theories which we now regard as substantially non-referring” and one could take that as a straightforward inductive argument (cf. Psillos 1999, 97), then the realist would be in serious trouble.
The route for the selective realist is to select only those ontological commitments of our current best scientific theories that are most likely to be true relative to the evidential basis and then believe selectively only in those. This procedure might even be selective to such an extent that one individually judges for every single proposition whether it maximizes value. Selective realism can therefore also follow the structure of direct justification.
A Boyd-style realist might modify the arguments in any of those directions. As for Musgrave, however, such a move would contradict his general falsificationist approach, since he explicitly does not want to link reasonable belief to the probability of that belief being true. This makes Musgrave’s approach largely incompatible with VTE. There is, however, a second option for realists such as Musgrave or Boyd that makes defending Tv (p) ≥ Fv (p) possible. It relies on a modification of the truth goal. We will see that this still necessitates some major deviations from Musgrave’s account but can preserve some of its central components. The modification to the truth goal is to include a relevance condition, which can be formulated as follows:
Relevance Condition. It is only valuable to believe truths that are relevant. Believing irrelevant truths has no value.Footnote 13
One basic motivation for this modification is the phonebook case (cf. Goldman 1999; Alston 2005): it cannot be an epistemic obligation to memorize a phonebook merely because doing so is a reliable means to acquire true beliefs and avoid error.
Let us go back to the problematic cases for Musgrave’s account. Consider an explanation E for a phenomenon P, where E is the best explanation for P but is most likely to be false. From the perspective of VTE without considering relevance, it would be better to believe that ¬E than to believe that E, since doing so has more epistemic value. However, one has to recognize that in many cases ¬E does not explain anything, while E does. For instance, the existence of neutrinos might explain the mass loss of a collapsing star. That neutrinos do not exist might not explain anything. In this case, believing that E is relevant but believing that ¬E is not. If the correct truth goal is the relevance truth goal, then believing that E can have more value than believing that ¬E even if believing that ¬E is more likely to be true.Footnote 14
This argument has the following implication for balancing. For all p, where p is a relevant truth, Tv (p) ≥ Fv (p) does not contradict LI iff believing that ¬p is either irrelevant or at least has such low relevance that believing that ¬p has either disvalue or no value. Since this implies that believing such a proposition ¬p is unjustified, and thus ¬p should not be believed, believing that p will not contradict LI even if the probability of it being true is ≤ 0.5. This solution demands, however, that we give up even the variable Lockean Thesis. It is also clear why this must be the case. If we include fundamental epistemic values other than believing truths and avoiding error in our epistemic goal, then not only credences plus the threshold but also, in this case, relevance informs our belief choice.
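One way to formalize this implication (my own sketch; the relevance weight r(p) ∈ [0, 1] is not defined in the original) is to scale the expected value of believing by relevance:

```latex
V(\text{believe } p) \;=\; r(p)\Bigl[\,P(p)\,\mathrm{Tv}(p) \;-\; \bigl(1 - P(p)\bigr)\,\mathrm{Fv}(p)\Bigr]
```

If r(¬p) = 0, believing that ¬p yields no value regardless of how probable ¬p is, so ¬p is never justified; believing that p remains value-maximizing whenever P(p) · Tv(p) > (1 − P(p)) · Fv(p), which a sufficiently truth-weighted balancing can satisfy even for P(p) ≤ 0.5. Since p and ¬p are then never both licensed, LI is not violated.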
The question is then, of course, how much one can balance the truth goal in favor of believing truths in such a case. In order to save Musgrave’s dictum that it is reasonable to believe that the best explanation is true no matter what, believing truths would have to be infinitely more valuable than avoiding error in such explanatory cases. After all, given the evidence, the chance for the best explanation to be true could be infinitely small. On the flipside, the value of believing the negation of the explanation has to approach 0, even though the likelihood that it is true could approach 1. Even if this way of balancing does not contradict LI, such balancing seems unreasonable from the perspective of VTE as well. It basically amounts to putting all value on relevance and no value on the main goals of believing truths and avoiding error when it comes to explanations. The only function that the goal of believing truths and avoiding error would play is as an arbiter in cases of multiple explanations. Here, the truth goal makes the one that is the most likely to be true justified. Other than that, it does not matter what the likelihood of p or ¬p is. This is not to say that Musgrave could not bite the bullet again (and, arguably, he already did by disconnecting reasonable belief from truth altogether). However, showing the consequences of that view should make us skeptical about such a move. Instead, I argue for reconciliation: if one shares Musgrave’s risk-tolerant starting point, then one can get the best of both worlds. One can (a) save the epistemic core of Boyd’s and Musgrave’s realism, namely their very risk-tolerant epistemological considerations, which can be captured by balancing the truth goal as Tv (p) ≥ Fv (p), and (b) avoid abandoning VTE, which saves the connection between explanation and the likelihood of that explanation being true.
In this subsection, I showed that Musgrave’s and Boyd’s realism can lead to a wide contradiction of LI. I considered two modifications. First, a modification of balancing, leading to the additional condition that only those explanations of a phenomenon should be believed which also have a probability of > 0.5 of being true. Second, modifying the truth goal to the relevance truth goal, thus saving the basis of VTE by preserving the connection between explanation and the likelihood of that explanation being true, while also being risk-tolerant without contradicting LI.
6 Consequences for the Anti-Realist
I argued that realists can be too risk-tolerant. Can anti-realists be too risk-averse? Consider the following argument for anti-realism about unobservables:
Radical Skeptic Argument against Unobservables.
(1) Believing in unobservables goes beyond immediate sense perceptions.
(2) For all S and all p, if believing that p goes beyond the immediate sense perceptions of S, then there is no rational ground on which S could judge between p and ¬p.Footnote 15
(3) For all S and all p, if there is no rational ground on which S can judge between p and ¬p, then S is not justified to believe that p.
(4) Believing in unobservables is not justified. (from 1–3)
(5) For all S, S ought to have only justified beliefs.
(6) For all S, S ought not to believe in unobservables.Footnote 16 (from 4–5)
The argument is valid. Is it sound? The first problem with the Radical Skeptic Argument against Unobservables and similar ones concerns the aim of the whole research program surrounding the realism debate. It is an essential feature of the realism debate that anti-realism cannot merely be radical skepticism cooked up in philosophy of science. The arguments of the scientific realism debate need to bring something new to the table and not simply reiterate the epistemological debate on external world skepticism in new clothing.Footnote 17 In the Radical Skeptic Argument against Unobservables, the term ‘unobservable’ is exchangeable with any other term that refers to some x that goes beyond immediate sense perceptions, resulting in an anti-realism about any such x. This insight is not based on any evidence, nor any understanding of the sciences, nor any investigations into the history of science, nor any data at all. It merely gives a principled argument about the (allegedly) grim status of all epistemic agents everywhere. But scientific anti-realism is supposed to be a challenge to realist interpretations of theoretical sciences. As such, it must have something to do with science and cannot be a mere iteration of the present general argument. For scientific anti-realism to be a relevant position, it must be defensible even if we bracket the radical skeptic. This is generally accepted by contemporary anti-realists.Footnote 18 Bracketing the radical skeptic gives the anti-realist a specific task. There must be specific reasons, based on specific evidence about the sciences themselves, that make scientific knowledge claims about unobservables problematic without repudiating ordinary knowledge claims.
Second, there is also an argument to be made from the perspective of VTE why one should not use the high epistemic standards of radical skeptics in philosophy of science or anywhere else. Consider a Cartesian theory of justification. Descartes, with his method of radical doubt, wants to secure infallible beliefs. His epistemic goal is certainty, which is merely a limiting case of the truth goal. It is the maximally risk-averse version: valuing the avoidance of falsehood infinitely over believing truths. Almost every contemporary theory of justification has moved past such an overly stringent Cartesian methodology. From the perspective of VTE it is clear why. By only believing certainties, one believes hardly anything. In all non-skeptical worlds, this robs one of almost all true beliefs, whereas in most skeptical worlds (especially Descartes’ demon type worlds) one is in an epistemically hopeless position anyway, no matter what one believes. Such an attitude undermines the truth goal – to aim at avoiding error and believing truths – and is self-defeating.
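Put in trade-off terms (again, my own illustrative formalization using the paper’s Tv/Fv weights): believing p has positive expected epistemic value only when P(p) exceeds Fv(p)/(Tv(p) + Fv(p)), and this threshold tends to 1 as error-avoidance is weighted ever more heavily:

```latex
P(p)\,\mathrm{Tv}(p) - \bigl(1 - P(p)\bigr)\,\mathrm{Fv}(p) > 0
\;\Longleftrightarrow\;
P(p) > \frac{\mathrm{Fv}(p)}{\mathrm{Tv}(p) + \mathrm{Fv}(p)}
\;\longrightarrow\; 1
\quad\text{as } \mathrm{Fv}(p)/\mathrm{Tv}(p) \to \infty .
```

In the Cartesian limit, only certainties clear the bar, which is precisely the self-defeating position just described.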
For these two reasons, the anti-realist philosopher of science cannot merely adopt an argument that is a version of the Radical Skeptic Argument against Unobservables. In more general terms: every anti-realist position that relies on epistemic standards so high as to lead to radical skepticism has to be rejected.
There is the suspicion that at least some versions of instrumentalism do simply that. I leave it up to the reader to decide whether this is the case for historical instrumentalist positions. Here, I will focus on constructive empiricism, which seems to have a response at hand. Van Fraassen is very aware that the anti-realist must move beyond the radical skeptic, and he makes two claims that do precisely that: as he puts it, “I am sticking my neck out” (van Fraassen 1980, 72). First, he writes: “Let’s agree in addition, right from the start, that we have no doubts about the reality of our bodies, our brains” (van Fraassen 2002, 5; also cf. van Fraassen 1980, 71). Van Fraassen accepts metaphysical realism and is also a direct realist about observable objects. Secondly, he proposes that “the claim of empirical adequacy relates to the future as well” (cf. van Fraassen 1980, 72).Footnote 19 Such inferences go beyond the data and present phenomena, which van Fraassen is well aware of, since he wants neither a “nothing goes” skepticism nor an “anything goes” radical voluntarism (cf. van Fraassen 2000).
The question is whether such a selective choice of inferences is consistent with VTE. Here, a passage by Devitt, who problematizes how van Fraassen gives two categories of inferred entities—unobserved observables and unobservables—different epistemic status, is enlightening:
Constructive empiricism […] must rest on the principle that though we are often justified in inferring the existence of a particular (sort of) unobserved observable, we are never justified in inferring the existence of a particular (sort of) unobservable. […] [T]his principle is strange, for the inferences in the two cases seem to be epistemically the same: they seem to take us beyond the evidence in the same way. (Devitt 1996, 143)
Similarly, Psillos (1997) contends that inferences to unobserved observables are in need of IBE as well. He argues that, by denying IBE, van Fraassen cannot consistently stick his neck out as much as he says he does. Instead, van Fraassen will end up at Humean skepticism, believing in present observables only. How can van Fraassen give unobservables such a different standing than unobserved observables? Devitt suggests the following principle:
In the past, when we have posited unobserved entities, we have mostly discovered later that we were right if the entities were observable; but have never later discovered this if they were unobservable. For the only way to discover this is to observe the entity. […] The maximum degree of belief in the existence of an entity is justified only if one has observed it; belief in an unobserved entity can never be as justified as belief in an observed one. (Devitt 1996, 144)
I think Devitt’s answer is the best attempt to get van Fraassen out of Devitt’s and Psillos’ charge. Given this epistemological principle, the (voluntarist) anti-realist can now reply that the belief in unobserved observables can be epistemically obligatory while the belief in unobservables is never obligatory. Devitt’s answer works if we associate different levels of epistemic risk with unobservables and unobserved observables. But this argument is more in line with VTE and its fundamental epistemological trade-off than with a categorical exclusion of unobservables from one’s picture of the world. Such anti-realism is based on a commitment to valuing the avoidance of error over believing truths, to such an extent that all beliefs in unobservables become unjustified, but not so strongly as to disallow justified beliefs in unobserved observables or future empirical adequacy.
Whether such a different epistemological standing of unobservables and unobserved observables holds up to the evidence is in part a question of diachronic evidence. One could make a Laudan’s List II for unobserved observables—a list of cases where claims about future empirical adequacy turned out to be false—and then compare it to the record for unobservables. Consider three examples:
The gas laws were once empirically adequate, but it turned out that they are, for instance, not empirically adequate near critical points in phase transitions. This was an unforeseeable false generalization.
All data present at Newton’s time showed the empirical adequacy of Euclidean geometry. It turns out that by modeling the observable phenomena in larger regions of space, we observe non-Euclidean phenomena.
At Newton’s time, absolute simultaneity was empirically adequate. It turns out that with more precise measuring instruments, especially at high speeds, one can observe the relativity of simultaneity and time dilation effects.
Should we conclude from these and similar instances, by pessimistic meta-induction, that our current theories will turn out not to be empirically adequate in the future if we take them as generalizations that reach out to yet unobserved observable portions of reality? At least van Fraassen does not draw this conclusion. It is true that he does not rely on the Pessimistic Induction for his anti-realism, nor does he believe in a general inductive method, but he must think that the general preservation of empirical adequacy is a sufficient reason to justify claims about future empirical adequacy.Footnote 20
Now one wonders why van Fraassen does not allow similar inferences for unobservables. I think these considerations suggest that van Fraassen has a harder task than usually assumed in walking the fine line between being so risk-averse that he rules out beliefs in all unobservables while at the same time still being risk-tolerant enough to allow for beliefs in unobserved observables and future empirical adequacy. Since beliefs about future empirical adequacy and about observables also come with varying degrees of confidence based on their evidential support, some of those beliefs will have such a low degree of confidence that they need to be abandoned. For van Fraassen, the line of sufficient support for believing has to be drawn at least high enough to also rule out all beliefs in unobservables in order to preserve the commitments of constructive empiricism. Here, one starts to wonder why one would want to draw the line of epistemic risk exactly there.
Contrary to building anti-realism on a priori empiricist principles, as constructive empiricists do, I suggest identifying anti-realism with a very risk-averse attitude relative to the empirical basis from the get-go. I will outline this proposal by supposing that the relevant empirical basis is historical meta-evidence (as realists such as Psillos (1999) or anti-realists such as Laudan (1981) argue), but this approach can easily be transferred to a different evidential basis as well. Such anti-realism can be formulated as follows:
(i) Value Claim. Ontological commitments should adhere to very high epistemic standards. (These cannot be so high as to lead to radical skepticism.)
(ii) Factual Claim. Given (i), current historical meta-evidence is not sufficient for epistemic obligationsFootnote 21 to believe in any unobservables (be they entities or structures), even if postulated by our current most successful scientific theories.
(iii) Responsiveness Claim. It is possible that new diachronic evidence comes up that is strong enough to overturn (ii). If it is evidence for an entity or structure x, then, at least, one should give up anti-realism with regard to x.
From the perspective of VTE, factual anti-realism is advantageous compared to constructive empiricism. It can give a very simple answer to Devitt’s worry about why one would draw the line of justified believing exactly at unobservables. For factual anti-realism, this is a mixture of a value decision and a factual claim. Factual anti-realism just takes high risk-aversion as its fundamental epistemic value; withholding beliefs in unobservables then merely follows from the epistemic risk attitude and the factual evidence. Note that this strategy is also responsive to new empirical evidence. In scenarios with a very solid historical track record, anti-realists do not have to bite the bullet of remaining anti-realist no matter what the evidence shows. They will give up being anti-realists based on historical meta-evidence. By linking anti-realism to the historical meta-evidence, we get a stance that does not unreasonably block itself from any evidence. Anti-realism will become empirically testable by a long-term research project in philosophy of science that is concerned with continuities and discontinuities in theory change. In a recent publication (Pils 2023), I provided a comprehensive explanation of why it is erroneous for both anti-realists and realists to disregard historical evidence.
Also note that this view is not compatible with the Lockean Thesis and a fixed threshold. It is, however, compatible with the already mentioned contextually variable Lockean Thesis. One such contextual factor is that Factual Anti-Realists require a higher threshold of credences for full belief than realists do.
I argued that van Fraassen’s epistemology is not suited to give a justification for a crucial presupposition of his Bad Lot argument—the presupposition that believing that p is linked to p being more likely to be true. I argued that VTE can give such a justification. However, against that background, many classical realists, exemplified by Musgrave and Boyd, run into a wide contradiction of the Principle of Logical Inconsistency Avoidance, LI. I considered two modifications of such realism. First, modifying balancing, leading to the additional condition that the realist’s beliefs in unobservables generally need to have a probability of > 0.5 of being true relative to the evidence (subjective justification) or actually have a > 0.5 truth/falsity-ratio (objective justification). Second, modifying the truth goal to the relevance truth goal, thus saving very risk-tolerant balancing while neither contradicting LI nor giving up VTE.
Furthermore, I ruled out all positions with epistemic standards so high as to lead to radical skepticism. Additionally, I raised worries about whether constructive empiricists can walk the thin line between being so risk-averse as to rule out all beliefs in unobservables while at the same time still being risk-tolerant enough to allow for beliefs in unobserved observables and future empirical adequacy. As an alternative, I proposed Factual Anti-Realism, a position that relies on high risk-aversion relative to the historical meta-evidential basis instead of on a priori empiricist principles.
The broad use of the term ‘teleological epistemology’ I will introduce is similarly used in epistemology by Littlejohn (2018) and Wedgwood (2018). It is also sometimes called the instrumental conception of epistemic rationality. Furthermore, many will recognize my explication of teleological epistemology as epistemic consequentialism.
This can be seen as a progression from Wylie’s (1986) analysis in terms of epistemic risk.
Here, I am following a somewhat similar convention by Kolodny (2007, 234).
The widest variety of epistemologists, such as Alston (1985, 83–84), BonJour (1985, 7–8), Goldman (1979, 29–30), or Lehrer (1990, 112) explicitly explain justification by it aiming at truth and thus adhere to VTE. The term ‘veritism’ is introduced in Goldman (1999). It is also sometimes called, ‘the Jamesian goal’ after William James (1896), “the twin cognitive good” (Carter, Jarvis and Rubin 2014), “value t-monism” (Pritchard 2010), or “veritistic value monism” (Ahlstrom-Vij 2012). For a detailed motivation of VTE and an instrumental analysis of epistemology see David (2001), for a naturalist motivation of VTE see Kornblith (2018). Also, cf. Williams (1988, 21).
Without going into the details, for direct justification these are modified versions based on considerations by Briesen (2016, 281), Chisholm (1966), Feldman (1988, 248) and Klausen (2009, 163), adding some considerations by Feldman (1988, 253) and Hooker (2016, Sect. 6.1) for indirect justification. Moreover, it is commonly assumed that the connection between rightness and goodness is a maximization relationship. However, I argued that a satisficing relationship may hold several advantages over this approach (Pils 2022).
One problem with obligations to believe is that it seems that we do not have voluntary control to choose our beliefs at will and there cannot be obligations for something we do not have control over. This worry goes back to Williams (1973) and is already found in James (1896). In the end, I agree with Feldman (1988) that even if we had no control over our process of belief formation, this is not a problem for a theory of belief obligations.
For the (classic) reliabilist, this argument would run by considering a belief and its negation, where both are produced by sufficiently reliable belief-forming processes of equal reliability.
There are also more radical revisionary responses, such as Dellsén’s (2017), which weakens the IBE conclusion even further to a mere heuristic. Even though this might be an interesting function of IBE, for realists I consider this to surrender too much to van Fraassen. I am concerned with such IBE arguments that, at least, infer approximate truth.
There is another worry lingering in the background. If we can only infer approximate truth, but knowledge requires full truth, then we can never have knowledge about unobservables. This poses serious problems for all accounts of scientific realism that are spelled out in terms of knowledge. An example is Boyd’s (1983) own view. Recently, Buckwalter and Turri (2020) developed an account of knowledge with approximate truth. This view, however, might have quite unattractive consequences (see Shaffer 2021). One way to get around these problems with approximate truth for the scientific realist is to replace approximate truth with the full truth of only parts of a theory, as advocated by Musgrave (2007).
Reference to the relevance truth goal is, for instance, found in Susan Haack (1993, 199). An early lengthy discussion about the relevance condition is due to Gilbert Harman (1986). Explicitly developed for veritism, see Briesen (2016). In philosophy of science, Khalifa (2020) recently argued for a form of relevance condition as a modification to (pure) veritism. I will not go into the details about the problems of how to precisely formulate a relevance condition. For some worries about how to do that see Harman (1986) as well, famous counter-cases are due to Grimm (2008, 742).
Note that relevance can also be on a scale. This is just a technical complication but does not affect the main argument.
Premise (2) is usually supported by the empiricist internalist evidentialist claim that for all judgements p that go beyond immediate sense perceptions, the evidential justifiers would look exactly the same whether one is in a skeptical scenario, such as Descartes’ demon world, and p is false, or in a non-skeptical world and p is true. Since for all such judgments p, evidence cannot decide whether p, (2) follows.
Note that some radical empiricist skeptics would be content merely to infer (4). Hume, for instance, does not accept (5) and thus does not accept (6), but he does accept that there is no epistemic justification for anything that goes beyond immediate sense perception.
For this argument, see Churchland (1979: 2); Wylie (1986, 287-288).
As a historical sidenote, verificationists took exactly this problem of inductive generalizations very seriously, and it was one motivation for their views. Van Fraassen, by rejecting verificationism while remaining a strict empiricist, now faces exactly this problem again.
This is spelled out in terms of belief obligations because the voluntarist anti-realist will agree with the realist that such evidence can be sufficient for justification but will disagree that it is sufficient for belief obligations.
Ahlstrom-Vij, Kristoffer. 2012. In defense of Veritistic Value Monism. Pacific Philosophical Quarterly 94 (1): 19–40.
Alston, William P. 1989. Concepts of Epistemic Justification. The Monist 68 (2); re-issued in Epistemic Justification: Essays in the Theory of Knowledge, 81–114. Ithaca, New York: Cornell University Press.
Alston, William P. 1988. The Deontological Conception of Epistemic Justification. Philosophical Perspectives 2: 257–299.
Alston, William P. 2005. Beyond “Justification”: Dimensions of Epistemic Evaluation. Ithaca, New York: Cornell University Press.
Berker, Selim. 2013. Epistemic Teleology and the Separateness of Propositions. Philosophical Review 122 (3): 337–393.
BonJour, Laurence. 1985. The Structure of Empirical Knowledge. Cambridge: Harvard University Press.
Boyd, Richard N. 1983. On the current status of the issue of scientific realism. Erkenntnis 19 (1/3): 45–90.
Briesen, Jochen. 2016. Epistemic Consequentialism: Its Relation to Ethical Consequentialism and the Truth-Indication Principle. In Epistemic Reasons, Norms and Goals, eds. Martin Grajner and Pedro Schmechtig, 277–306. Berlin: De Gruyter.
Buckwalter, Wesley, and John Turri. 2020. Knowledge and Truth: A Sceptical Challenge. Pacific Philosophical Quarterly 101 (1): 93–101.
Carter, Adam J., Benjamin W. Jarvis, and Katherine Rubin. 2014. Varieties of Cognitive Achievement. Philosophical Studies 172 (6): 1603–1623.
Chakravartty, Anjan. 2007. Six Degrees of Speculation: Metaphysics in Empirical Contexts. In Images of Empiricism. Essays on Science and Stances, ed. Bradley Monton, 183–208. Oxford: Oxford University Press.
Chakravartty, Anjan. 2013. On the Prospects of Naturalized Metaphysics. In Scientific Metaphysics, eds. Don Ross, James Ladyman, and Harold Kincaid, 27–50. Oxford: Oxford University Press.
Chakravartty, Anjan. 2017. Scientific Realism. In The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), ed. Edward N. Zalta. https://plato.stanford.edu/entries/scientific-realism/. [06.04.2020].
Chakravartty, Anjan. 2018. Realism, Antirealism, Epistemic Stances, and Voluntarism. In The Routledge Handbook of Scientific Realism, ed. Juha Saatsi, London/New York: Routledge.
Chisholm, Roderick M. 1989. Theory of Knowledge, 3rd edition. Englewood Cliffs, New Jersey: Prentice Hall.
David, Marian. 2001. Truth as the Epistemic Goal. In Knowledge, Truth, and Duty. Essays on Epistemic Justification, Responsibility, and Virtue, ed. Matthias Steup. Oxford: Oxford University Press.
Dellsén, Finnur. 2017. Reactionary Responses to the Bad Lot Objection. Studies in History and Philosophy of Science Part A 61: 32–40.
Demey, Lorenz. 2013. Contemporary Epistemic Logic and the Lockean Thesis. Foundations of Science 18 (4): 599–610.
Devitt, Michael. 1996. Realism and Truth. Second Edition, with a new afterword. Princeton: Princeton University Press.
Dorst, Kevin. 2019. Lockeans Maximize Expected Accuracy. Mind 128 (509): 175–211.
Feldman, Richard. 1988. Epistemic Obligations. Philosophical Perspectives. Vol. 2, Epistemology: 235–256.
Foley, Richard. 2009. Beliefs, Degrees of Belief, and the Lockean Thesis. In Degrees of Belief, eds. Franz Huber and Christoph Schmidt-Petri, 37–47. Dordrecht: Springer.
Goldman, Alvin I. 1979. What is Justified Belief? In Justification and Knowledge, ed. George S. Pappas, 1–25. Boston: D. Reidel.
Goldman, Alvin I. 1999. Knowledge in a Social World. Oxford: Clarendon Press.
Grimm, Stephen R. 2008. Epistemic Goals and Epistemic Values. Philosophy and Phenomenological Research 77: 725–744.
Haack, Susan. 1993. Evidence and Inquiry. Towards Reconstruction in Epistemology. Oxford: Blackwell.
Harman, Gilbert. 1986. Change in View: Principles of Reasoning. Cambridge, Massachusetts: MIT Press.
Hawthorne, John, Daniel Rothschild, and Levi Spectre. 2016. Belief is weak. Philosophical Studies 173 (5): 1393–1404.
Hooker, Brad. 2016. Rule Consequentialism. In The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), ed. Edward N. Zalta. https://plato.stanford.edu/archives/win2016/entries/consequentialism-rule/. [14.01.2019].
James, William. 2013. The Will to Believe. In Reason and Responsibility: Readings in Some Basic Problems of Philosophy, eds. Joel Feinberg and Russ Shafer-Landau, 129–137.
Khalifa, Kareem. 2020. Understanding, Truth, and Epistemic Goals. Philosophy of Science 87 (5): 944–956.
Klausen, Søren H. 2009. Two Notions of Epistemic Normativity. Theoria 75: 161–178.
Kolodny, Niko. 2007. How Does Coherence Matter? Proceedings of the Aristotelian Society 107 (1): 229–263.
Kvanvig, Jonathan L. 2014. Truth Is Not the Primary Epistemic Goal. In Contemporary Debates in Epistemology, Second Edition, eds. Matthias Steup, John Turri, and Ernest Sosa, 352–362. Sussex: Wiley Blackwell.
Laudan, Larry. 1981. A Confutation of Convergent Realism. Philosophy of Science 48 (1): 19–49.
Lehrer, Keith. 1990. Theory of Knowledge. Boulder: Westview.
Leitgeb, Hannes. 2014. The Stability Theory of Belief. Philosophical Review 123 (2): 131–171.
Levi, Isaac. 1967. Gambling with Truth. Massachusetts: MIT Press.
Lipton, Peter. 1993. Is the Best Good Enough? Proceedings of the Aristotelian Society 93: 89–104.
Lipton, Peter. 2004. Epistemic Options. Philosophical Studies 121: 147–158.
Littlejohn, Clayton. 2012. Justification and the truth-connection. Cambridge: Cambridge University Press.
Littlejohn, Clayton. 2018. The Right in the Good. A Defense of Teleological Non-Consequentialism. In Epistemic Consequentialism, eds. Kristoffer Ahlstrom-Vij, and Jeffrey Dunn, 23–47. Oxford: Oxford University Press.
Maher, Patrick. 1993. Betting on theories. Cambridge: Cambridge University Press.
Makinson, David C. 1965. The Paradox of the Preface. Analysis 25 (6): 205–207.
Musgrave, Alan. 1988. The Ultimate Argument for Scientific Realism. In Relativism and Realism in Science, ed. Robert Nola, 229–252. Dordrecht: Kluwer.
Musgrave, Alan E. 2007. The ‘Miracle Argument’ for Scientific Realism. The Rutherford Journal. http://www.rutherfordjournal.org/article020108.html. [02.03.2020].
Pils, Raimund. 2022. A Satisficing Theory of Epistemic Justification. The Canadian Journal of Philosophy 52 (4): 450–467.
Pils, Raimund. 2023. Scientific Realism and Blocking Strategies. International Studies in the Philosophy of Science 36 (1): 1–17. https://doi.org/10.1080/02698595.2022.2133418.
Plantinga, Alvin. 1986. Chisholmian Internalism, presented at Brown University, November 1986.
Priest, Graham, Richard Routley, and Jean Norman, eds. 1989. Paraconsistent Logic: Essays on the Inconsistent. München: Philosophia Verlag.
Pritchard, Duncan. 2007. Recent work on Epistemic Value. American Philosophical Quarterly 44 (2): 85–110.
Pritchard, Duncan. 2010. The Value Problem for Knowledge. Oxford: Oxford University Press.
Psillos, Stathis. 1997. How not to Defend Constructive Empiricism: A Rejoinder. The Philosophical Quarterly 47 (188): 369–372.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. London: Routledge.
Riggs, Wayne. 2006. The Value Turn in Epistemology. In New Waves in Epistemology, eds. Vincent F. Hendricks and Duncan Pritchard, 300–323. Aldershot: Ashgate.
Ronzoni, Miriam. 2010. Teleology, Deontology, and the Priority of the Right: On Some Unappreciated Distinctions. Ethical Theory and Moral Practice 13 (4): 453–472.
Schupbach, Jonah N. 2014. Is the Bad Lot Objection Just Misguided? Erkenntnis 79 (1): 55–64.
Shaffer, Michael. 2018. Foley’s threshold view of belief and the Safety Condition on Knowledge. Metaphilosophy 49: 589–594.
Shaffer, Michael. 2019. The availability Heuristic and Inference to the best explanation. Logos & Episteme 10: 409–432.
Shaffer, Michael. 2021. Can Knowledge really be non-factive? Logos & Episteme 12: 215–226.
Staffel, Julia. 2016. Beliefs, buses and lotteries: why rational belief can’t be stably high credence. Philosophical Studies 173: 1721–1734.
Stalnaker, Robert. 1984. Inquiry. Cambridge: Cambridge University Press.
Stanford, Kyle P. 2006. Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford: Oxford University Press.
Sturgeon, Scott. 2008. Reason and the Grain of Belief. Noûs 42 (1): 139–165.
Sylvan, Kurt. 2020. An epistemic non-consequentialism. The Philosophical Review 129 (1): 1–51.
Van Fraassen, Bas C. 1980. The scientific image. Oxford: Oxford University Press.
Van Fraassen, Bas C. 1989. Laws and symmetry. Oxford: Oxford University Press.
Van Fraassen, Bas C. 2000. The false hopes of traditional epistemology. Philosophy and Phenomenological Research 60 (2): 253–280.
Van Fraassen, Bas C. 2001. Constructive Empiricism Now. Philosophical Studies 106 (1): 151–170.
Van Fraassen, Bas C. 2002. The Empirical Stance. New Haven: Yale University Press.
Wedgwood, Ralph. 2018. Epistemic Teleology: Synchronic and Diachronic. In Epistemic Consequentialism, eds. Kristoffer Ahlstrom-Vij and Jeffrey Dunn, 85–112. Oxford: Oxford University Press.
Williams, Bernard. 1973. Deciding to Believe. In Problems of the Self, ed. Bernard Williams, 136–151. Cambridge: Cambridge University Press.
Williams, Bernard. 1988. Consequentialism and Integrity. In Consequentialism and its Critics, ed. Samuel Scheffler, 20–50. Oxford: Oxford University Press.
Williamson, Timothy. 2002. Knowledge and its limits. Oxford: Oxford University Press.
Wood, Allen. 1999. Kant’s Ethical Thought. Cambridge: Cambridge University Press.
Worrall, John. 1982. Scientific Realism and Scientific Change. The Philosophical Quarterly 32 (128): 201–231.
Worrall, John. 1989. Structural Realism: The Best of Both Worlds? Dialectica 43 (1–2): 99–124.
Wylie, Alison. 1986. Arguments for Scientific Realism: The Ascending Spiral. American Philosophical Quarterly 23 (3): 287–297.
Open access funding provided by Paris Lodron University of Salzburg.
1 I want to thank Charlotte Werndl and Roman Frigg for feedback on an early draft of this paper and a reviewer of the Journal for General Philosophy of Science for helpful suggestions.
Pils, R. Veritistic Teleological Epistemology, the Bad Lot, and Epistemic Risk Consistency. J Gen Philos Sci (2023). https://doi.org/10.1007/s10838-023-09650-9