1 Introduction

Epistemic trust in science has become an important issue in philosophy of science. In many situations, laypersons trust the claims scientists produce, for example about the safety of vaccines or the outcomes of climate measures. A breakdown of this trust can have problematic consequences. Understanding what epistemic trust in science is and why it can reasonably break down is relevant for improving scientists’ trustworthiness.

Broadly speaking, philosophers of science have given two types of answers to the question of what appropriate trust in science is (Sect. 2). According to the first one, trust in science is based on objectivity. According to the second one, trust in science is based on shared values. But which one is correct? This is the question I am concerned with in this paper. Much of the philosophy of science debate is focused on shared values. And some authors like Koskinen (2020) claim that objectivity can only ground reliance on science, but not trust (Sect. 3). I will argue that this view is mistaken and that in fact both answers are correct: Genuine trust in science can be based on objectivity as well as on shared values.

Arguments that deemphasize the role of objectivity often rest on a central distinction from the philosophy of trust literature: between genuine trust and mere reliance. This distinction goes back to Baier’s (1986) influential paper Trust and Antitrust. Baier claims that in contrast to reliance, trust involves the possibility of betrayal: Trust involves an important normative dimension that is missing in cases of mere reliance.

This distinction matters for trust in science. Understanding the normative dimension of trust not only helps in understanding what trust in science is, it also helps in understanding the strong reactive attitudes people can have when their trust in what scientists say breaks down. However, there is little overlap between the two bodies of literature beyond Baier. Philosophers of science refer to Baier’s account, in particular when arguing for the view that shared values are required for trust or that reliance based on objectivity is missing a normative dimension (Baghramian & Caprioglio Panizza, 2022; Irzik & Kurtulmus, 2019; Koskinen, 2020; Rolin, 2020; Wilholt, 2013). But they usually do not engage with the broader debate on the nature of trust.Footnote 1 Since Baier’s analysis, moral philosophers and epistemologists have proposed several accounts of the nature of trust, trying to capture the difference between genuine trust and mere reliance. Not including these other trust accounts leads to an overly narrow view of trust in science and its normative dimension.

In this paper, I will show that there are plural forms of genuine trust in science. Trust in science can be based on shared values, but also on objectivity. I will link the trust in science debate to the literature on interpersonal trust, thus providing an integrated treatment of what the normative elements of trust in science are. Interpreted in a pluralist way (Sect. 5), the different accounts of interpersonal trust are helpful in understanding the different versions of trust in science. I will use two accounts in particular: Baier’s (1986) goodwill account and Hawley’s (2014, 2019) commitment account. The goodwill account is able to capture trust in science based on shared values (Sect. 6), whereas the commitment account is suitable for explaining trust in science based on objectivity (Sect. 7). The two types of trust in science are of different normative thickness (Sect. 8): Trust based on objectivity is less thick, yet distinguishable from mere reliance. Placing more emphasis on other grounds for trusting science besides shared values allows us to see how science can be trustworthy in cases where values do not align or are not known.

2 Trust in science

Trust in science either refers to trust within science—the trust members of research groups or scientific communities can have towards each other, which facilitates teamwork, collaboration, and division of labor—or to public trust in science, that is, trust which is invested by people outside of science (Rolin, 2020). I will mostly focus on public trust in science, although the majority of accounts are in principle applicable to both.

Philosophers who are concerned with problems of trust in science usually consider epistemic trust. In this paper, I will deal with epistemic trust in a narrow sense: trust in what someone says, that is, trust in a claim that someone makes. (For other notions see e.g. Hardwig, 1991, p. 697; John, 2020, p. 51; Wilholt, 2013, p. 233.) Two things will become important later on. First, scientific claims can be value-laden. And second, the collective level of the scientific community plays an important role in producing those claims.

The question I am interested in is: What is appropriate trust in science? One can find two types of answers in the literature: According to the first, trust in science is based on objectivity. According to the second, trust in science is based on shared values. I will briefly introduce both as if they were clearly distinct. However, this distinction will later turn out to be more nuanced. For one, the two types of accounts can be hard to disentangle, for example because objectivity is often linked to values. And, importantly, my own pluralist view of trust allows the different forms of trust to overlap and be present at the same time (I will explain the latter in Sect. 8).

I will start with the first type, trust in science based on objectivity. Obviously, there are several notions of objectivity. Douglas (2009, p. 116) lists seven senses of objectivity, all of which are compatible with rejecting the value-free ideal. She claims that these senses of objectivity can form a basis for trust: Knowledge claims are trustworthy because they are produced by objective processes. A claim can for example be objective and thus trustworthy because it can reliably be used in multiple additional experimental contexts; or it can be objective because it is the result of a critical discussion between diverse members of the scientific community. The different forms of objectivity can be combined to strengthen one’s sense of a claim’s objectivity and thus provide a stronger basis for trust (Douglas, 2009, p. 132). I agree with Douglas that all forms of objectivity are important for trust, but I will mostly focus on those that have to do with the social and institutional practices of science.

Other authors have endorsed similar views, albeit with more specific accounts of objectivity in mind (see also Furman, 2020; John, 2021a; Koskinen, 2020). Oreskes (2019) for example argues that we should trust science when there is an expert consensus emerging in a scientific community that is characterized by diversity and opportunities for debate and criticism. Others stress the importance of inclusive and responsive dialogue (Rolin, 2002) or knowledge-sharing with marginalized communities (Grasswick, 2010) for facilitating objectivity and thus trust.

According to the second and more prominent type of answer, trust in science is based on shared values.Footnote 2 Again, this has to do with turning away from the value-free ideal: Since philosophers have argued at length that science cannot be value-free, the value-free ideal cannot do the job of grounding trust (e.g. Holman & Wilholt, 2022). It seems, then, that the values which end up in science should at least not be alien to the values held by the trustors.

Wilholt (2013) argues that because balancing inductive risks requires making value judgements, a coordination problem arises: How can scientists rely on each other’s claims? According to Wilholt, the problem of reliance can be solved via shared standards of the scientific community. However, trust in science requires more: it requires shared values, or in Wilholt’s (2013, p. 248) terminology “an enhanced kind of epistemic reliance”. Epistemic trust in this sense is important in the case of public trust, where scientists need to base their methodological decisions on the evaluations expected by the public (Wilholt, 2013, p. 250).

Irzik and Kurtulmus (2019, p. 1153) follow similar lines but use the distinction between “basic epistemic trust in science” and “enhanced epistemic trust in science”. According to them, only enhanced epistemic trust requires balancing inductive risks, and therefore an alignment of values, which is important in cases of policy-relevant science. Others take the alignment or representation of values to be the distinguishing factor between trust within science and public trust in science, with shared values needed only for the latter (Bueter, 2021).

Other authors deal less explicitly with the question of what appropriate trust in science is and instead try to find mechanisms that enable trust in a value-laden science. However, the view that trust is based on shared values often lingers in the background. Many authors argue for instance for transparency of values, so that decision makers or members of the public can decide whether they want to rest their decisions on a particular scientific claim (Carrier, 2022; Elliott, 2022; Elliott & Resnik, 2014). Similarly, Schroeder (2021, p. 556), who proposes to ground trust in democratic values, deals with what he calls the “alien values concern”. This seems to imply that a misalignment of values is problematic for trust.

We have thus seen two different ways of spelling out appropriate trust in science. However, note that there is some overlap. First, objectivity is clearly linked to values. Douglas (2009) for example argues for detached objectivity, which constrains values to their indirect role: An individual researcher can legitimately make value judgements only when assessing whether the evidence or reasons for accepting or rejecting a claim are sufficient. Whereas this sense of objectivity can easily be combined with an alignment of values, this gets more complicated with other senses of objectivity: Interactive objectivity for instance requires discussion among scientists with diverse values, which seems to run counter to an alignment of values.

Second, value transparency and democratic values are also discussed as means to facilitate objectivity (e.g. Elliott & Resnik, 2014; Intemann, 2015). It is therefore possible to argue for grounding trust in these mechanisms even if one does not think trust requires shared values.

And third, there are philosophers who deal with trust in science but incorporate both types of answers into their accounts. Examples of such hybrid views can be found in Rolin (2021) and John (2018, 2021b). I will come back to some of these hybrid accounts in Sect. 9 and argue why a pluralist account of trust in science is preferable.

Importantly, even if the different types of trust in science are sometimes hard to disentangle or go hand in hand, they focus on different aspects. If we want to base trust on shared values, the focus lies on the trust relationship and the things both the trustor and the trusted scientist care about. If we want to base trust on scientific objectivity, the focus rather lies on the mechanics of science: its social and epistemic processes that underlie the production of scientific claims (Furman, 2020).

This difference has led some philosophers to argue that reliance on science based on objectivity is not actually trust. Their argument rests on the distinction between trust and reliance I already introduced. In the next section, I will explain both in a little more depth.

3 Reliance, cars, and science

In her paper Trust and Antitrust, Baier (1986) argues that trust is a form of reliance, but not mere relianceFootnote 3: We often rely on persons or things to do something or behave in a certain way, but this does not necessarily mean that trust is involved. Baier (1986, p. 235) illustrates this with the example of Kant’s neighbors. They may have relied on his regular habits to set their clocks, but it would be wrong to think that they trusted him to walk by every day at the same time. What distinguishes trust from reliance is the possibility of betrayal: “The trusting can be betrayed, or at least let down, and not just disappointed” (Baier, 1986, p. 235). If Kant had slept in one day, his neighbors could have felt disappointed, but not betrayed. Similarly, I can rely on my car and be disappointed if it breaks down, but my car cannot betray me (Holton, 1994). In contrast to trust, reliance is missing a normative dimension.

Philosophers of science draw on this distinction from the philosophy of trust debate in order to argue that objectivity does not ground genuine trust in science. Koskinen (2020, p. 1194) makes this the most explicit, claiming that “objectivity indicates a shared basis for reliance rather than trust.” She gives three related reasons for this claim: First, an increase in warranted trust does not necessarily mean that objectivity has increased, which indicates a missing link between the two. Second, Koskinen argues, since processes cannot betray us, we can only rely on them but not trust them. And third, if we have a thorough understanding of the workings of science ourselves, we rely without trust.

Koskinen makes an interesting point that others have picked up on as well (e.g. Baghramian & Caprioglio Panizza, 2022; Boulicault & Schroeder, 2021; Furman, 2020; John, 2020): The philosophy of science literature does not explain why objectivity should ground genuine trust rather than just mere reliance. If “trust” based on objectivity focuses on the mechanics of science, it seems to be missing the normative aspects necessary for genuine trust. Science becomes much more like a car. Or, as Furman (2020, pp. 3–4) characterizes this view: “My relationship to science is more like my relationship to my car, which I rely on and I am let down when it stops working, but I am never betrayed by my car. Betrayal is for lovers, not for cars or science.”Footnote 4

However, as I will argue, my relationship to science is not like my relationship to my car. Reliance on science based on objectivity can rightly be called trust. In the following, I will argue why the trust I have in science based on objectivity has something normative to it which the reliance on my car lacks. I will first motivate this idea by referring to analyses of vaccine hesitancy (Sect. 4), and then draw on different accounts from the philosophy of trust in order to explain what the respective normative elements of trust in science are (Sects. 5, 6 and 7).

4 Trust in science, betrayed

Let me start with two analyses of the same vaccine hesitancy case: the MMR vaccine for measles, mumps and rubella. In a Lancet paper from the late 90s, Andrew Wakefield suggested a causal link between the MMR vaccine and autism (Wakefield et al., 1998). The paper was later retracted, Wakefield was ousted from the scientific community, and the majority of scientists reject the MMR–autism hypothesis. However, the paper is still referred to in anti-vaccination debates. Many philosophers have used this case for analyzing distrust in science, in particular following Goldenberg (2016). I will draw here on the analyses by Irzik and Kurtulmus (2019) and John (2020).

Irzik and Kurtulmus (2019, p. 1161) reconstruct distrust in vaccine safety in terms of values. They argue that there is a possible misalignment of values held by scientists and parents who consider vaccinating their child. Whereas scientists primarily care about public health, parents primarily care about their child’s health. Therefore, scientists and parents might judge the inductive risks associated with the claim “There is no causal link between MMR and autism” differently. From the parents’ perspective, the scientists making such a claim are not trustworthy because they do not take their values into account. Here, the MMR vaccine case supports the view that trust in science requires shared values.

John (2020, p. 54) on the other hand points out that there are several reasons for distrust in vaccine safety which can be appropriate. One appears in the case of the “sceptic-about-consensus anti-vaxxer” (John, 2020, p. 55). Imagine a parent who in general thinks one should accept claims that meet the standards of biomedical science. She also thinks that a scientific community is solely driven by a concern for the truth and takes all views seriously, even very farfetched ones. She then reads an article about the biomedical community’s treatment of Wakefield and thus decides that the biomedical community does not work as it should. In John’s view, this is a case of distrust that is appropriate relative to the parent’s perspective: From her point of view, it seems reasonable to distrust the biomedical community’s claim that the MMR vaccine is safe. Her distrust rests, as John puts it, on a false “folk philosophy of science” regarding the social practices of science (see also John, 2018).

This can also be framed in terms of objectivity. The parent holds certain views about the objectivity of science, in particular that science is objective if all dissent is tolerated—which is similar to interactive objectivity (Douglas, 2009, p. 127), but a false folk notion as it lacks any requirement for shared standards for discussion. Since actual science is not “objective” in the parent’s folk sense, her trust in science breaks down. John’s “sceptic-about-consensus anti-vaxxer” is a case of distrust, even though it is based on objectivity considerations, not on value considerations.

I think both Irzik and Kurtulmus (2019) and John (2020) point out important aspects of trust (or distrust, respectively). I also agree with John’s analysis: It does not seem right to reconstruct the “sceptic-about-consensus anti-vaxxer” case as a case of mere nonreliance instead of distrust; or to reconstruct the attitude the parent had prior to the Wakefield controversy as mere reliance and not as genuine trust. Take the moment the parent learned about the treatment of Wakefield. It would seem right to not just feel disappointed but to feel betrayed: The parent thought that claims about the safety of the MMR vaccine were produced in a proper, objective way, but now it turns out—from her perspective, of course—they were not.

We can take three things from this. First, the two analyses do not need to contradict each other. They can describe different types of trust in science. Second, the vaccine hesitancy case motivates the idea that trust in science can be betrayed: It seems reasonable to feel betrayed if we think scientists care about the things we value but it turns out they do not. But it also seems reasonable to feel betrayed if it turns out that there is something wrong with the social and epistemic processes underlying the production of scientific claims. Third, this case shows why the trust-reliance distinction is relevant for science: It can explain instances where a stronger reactive attitude than mere disappointment in science is reasonable.

I will defend this view in the remainder of this paper. My argument is twofold: I will first show that there is a plurality of trust notions. Second, I will apply these trust accounts to the two types of trust in science I identified in Sect. 2. I will argue that each type of trust in science is normative in a different sense, which explains why trust in science can be betrayed in different ways. This not only helps in understanding the nature of trust in science, it also helps in understanding how science can be trustworthy in cases where values do not align or are not known. Because the philosophy of science debate is mostly concerned with trust based on shared values, it misses other important forms of trust in science.

5 A plurality of trust accounts

As I explained in Sect. 3, one of the central distinctions in the philosophy of trust literature is the distinction between trust and reliance. Many accounts of the nature of trust try to capture this distinction, often pointing to failures of other accounts in this regard. I will roughly sketch the general debate (for more extensive overviews see e.g. Dormandy, 2020; McLeod, 2021) and say a few more words about Baier’s (1986) goodwill account as well as Hawley’s (2014, 2019) commitment account, since I will come back to both in the next sections. Following Simpson (2012), I will then argue that we can make much more sense of the trust debate if we recognize that the different accounts do not actually describe the same form of trust.

Starting much of this debate, Baier (1986, p. 234) argues that well-placed trust is reliance on another person’s competence and “reliance on another’s good will”. Thus, a trustworthy person not only needs to be competent but also motivated by goodwill, that is, she needs to care about the goods or things the trustor values. Trust therefore implies a “special vulnerability” (Baier, 1986, p. 239) that reliance is lacking. In trusting, the trustor makes herself vulnerable because the trusted person has the power to harm the valued thing—and because the trusted person can act out of the wrong motives. If the trusted person acts out of negligence or ill will, the trustor can rightly feel betrayed, not just disappointed.

Other authors, however, have pointed to problems with Baier’s account. For example, goodwill seems neither necessary nor sufficient for trust (Holton, 1994). I might trust my doctor to operate on me just because it is her job, not because she is motivated by goodwill (O’Neill, 2002).

Furthermore, as Hawley (2014) argues, goodwill cannot explain cases where neither trust nor distrust is appropriate. Trust can be a burden: It may be wrong to make the fact that my partner reliably cooks me dinner every night a matter of trust, even if he is motivated by goodwill towards me or my nutrition. It is also unclear whether the goodwill account is suitable for capturing distrust. Distrust seems to require that the distrusted person be motivated by a lack of goodwill or even ill will. But imagine a very honorable person who campaigns for imprisoning a criminal. Although the honorable person is motivated by ill will towards the criminal, the criminal does not need to distrust her: Hawley claims that in such cases, neither trust nor distrust is appropriate. I will come back to this example when applying Hawley’s account to trust in science (Sect. 7), since it shows interesting parallels to cases of value misalignment.

Several philosophers have proposed different accounts in order to deal with these problems. Some try to refine Baier’s goodwill account or propose different motives than goodwill (e.g. Jones, 1996; McLeod, 2000), whereas others do not think that identifying a particular motive is the right way to capture the difference between reliance and trust (e.g. Holton, 1994; Jones, 2012).

Hawley (2019, p. 9) proposes such an account that is not motives-based—and which she claims is able to capture trust and distrust alike. According to her commitment account, “to trust someone to do something is to believe that she has a commitment to doing it, and to rely upon her to meet that commitment. To distrust someone to do something is to believe that she has a commitment to doing it, and yet not rely upon her to meet that commitment.” Being trustworthy then requires living up to one’s commitments (Hawley, 2019, p. 73).

Hawley can thus explain those cases where neither trust nor distrust seems appropriate: It is not appropriate to trust my partner to cook me dinner every night if I do not believe that he has a commitment to do so. Likewise, it is not appropriate to distrust the honorable person who is campaigning to imprison the criminal if she never made any commitments to help the criminal. Hawley’s account is less restrictive than Baier’s goodwill account; it seems, for example, appropriate to trust my doctor because I believe she has a commitment to operate properly on me and I rely on her to do so.

Yet others have pointed to counter-examples to Hawley’s account as well (e.g. D’Cruz, 2020; Kelp & Simion, 2023; Tallant, 2022), for instance with regard to situations in which the trustor relies on a person to not fulfill their commitments, or in which somebody makes a bad commitment. This also has to do with the fact that Hawley (2014, p. 11) takes commitment to be a very broad notion: “Commitments can be implicit or explicit, weighty or trivial, conferred by roles and external circumstances, default or acquired, welcome or unwelcome.”

It appears that every account has some advantages but also some disadvantages, and is subject to different counter-examples. The ease with which these can be found leads Simpson (2012) to the conclusion that there is a plurality of concepts used for the word “trust”. He starts by noting that philosophers tend to view trust as something that has to be analysed conceptually in the form of “trust is X”. This approach failsFootnote 5: “There is a strong prima facie case for supposing that there is no single phenomenon that ‘trust’ refers to, nor that our folk concept has determinate rules of use. Nonetheless, this does not mean that there is no philosophical understanding of the concept to be had” (Simpson, 2012, p. 551). Trust deserves philosophical reflection because it tells us something about the way humans interact with each other.

Simpson (2012, p. 558) thus proposes a genealogical account, with an “Ur-notion of trust” as reliance on someone’s freely cooperative behavior. Depending on the contexts of use, this Ur-trust can be amplified or altered, turning into different, richer notions of trust (Simpson, 2012, p. 560). However, we still use the term “trust”, even though it refers to different phenomena. Repeated use hardens this into discrete notions, resulting in plural forms of trust. Because the different trust accounts take different contexts and relational situations as paradigmatic, they are only able to describe specific forms of trust. Importantly, Simpson (2012, p. 566) does not take the different accounts to be useless; as soon as there is clarity about their actual scope, there is nothing wrong with analysing a particular form of trust.

Note that Simpson’s Ur-trust is not restricted to trust in the normative sense I have sketched above; it encompasses what philosophers following Baier have framed mere reliance too. Since I am interested in the normative elements, I will keep the trust-reliance distinction and reserve “trust” for the richer notions. But I will take Simpson’s pluralist view as a starting point. We can make much more sense of the trust debate if we recognize that the different accounts do not refer to the same phenomenon. Depending on the context, different forms of trust are significant.

In the following, I will show that the goodwill account is suitable for describing trust in science based on shared values, but has its problems with trust in science based on objectivity. The commitment account, on the other hand, can capture trust in science based on objectivity much better. This helps us to see where the respective normative elements of both versions of trust come from.

6 Shared values and goodwill

Let me start with Baier’s goodwill account. A trustworthy person is motivated by goodwill—she is, in Baier’s (1986, p. 235) terms, expected not to harm the “goods or things one values or cares about.” In order to link Baier’s account to epistemic trust, we first need to take a closer look at the direction of goodwill. Baier is not entirely clear about whether she takes goodwill to be directed at the trustor herself, or whether trust involves goodwill towards the valued things, that is, the objects of trust. Holton (1994, p. 65; see also Mullin 2005, p. 317) argues that neither alone can be right: Two ex-spouses who have few friendly feelings left can still trust each other to take care of their child; on the other hand, if I trust another person with my car, she obviously does not need to have goodwill towards it.

However, I do not see why these things should be treated disjunctively. The trust between the two ex-spouses requires at least some amount of goodwill towards each other: If one parent does not think that the other takes care of their child in a way she feels is appropriate, it would be odd to describe this as a trust relationship. This already shows the connection between values and goodwill: Not harming the objects of trust requires taking the trustor’s values into account.

In the case of epistemic trust, the valued goods are epistemic goods: things like knowledge, evidence, true belief, or inquiry (Dormandy, 2020, p. 16). A trustworthy speaker is expected to care about these epistemic goods. When she does not, for example when she lies or disrespects the evidence, the hearer may rightly feel betrayed.

Now, if scientists inevitably make non-epistemic value-judgements, the goodwill requirement extends to other valued things as well. Take Irzik and Kurtulmus’s (2019, p. 1161) analysis of the MMR vaccine hesitancy: From the parents’ perspective, the scientists making a claim about the safety of the vaccine are not trustworthy because they do not care enough about the individual child’s health and thus weigh the inductive risks differently.

Dormandy (2020, p. 19; see also Hinchman, 2017) makes a similar point: “A hearer trusts a speaker for more than just knowledge; he trusts her for knowledge tailored to his specific epistemic needs.” Consider a hearer who has a nut allergy and asks about the content of a snack bowl. A trustworthy speaker needs to take the consequences of false information into account and thus adopt particularly high evidential standards.

This broad understanding of epistemic care allows us to see how the goodwill account is able to capture trust in science based on shared values: Being motivated by goodwill—and thus being trustworthy—requires caring for other things the hearer values, not just knowledge. A trustworthy scientist needs to take the audience’s values into account because this shows goodwill towards the audience.

Thus, Baier’s account and its language of goodwill and care seem to explain quite naturally why trust in science can be based on shared values. However, it does not seem fitting for trust in science based on objectivity. This is because the objectivity account places the focus elsewhere (see Sect. 2): With shared values, the focus is on the trust relationship and the things both parties care about, whereas objectivity focuses on the social and epistemic processes that underlie the production of scientific claims. The goodwill account does not capture the collective nature of science that easily.

Baier is concerned with interpersonal trust, and thus locates trust on the individual level. However, if a hearer trusts in what a scientist says based on the processes underlying the production of the scientist’s claim, trustworthiness is also linked to the collective level. Baier’s account has problems with this in two ways. First, for trust based on objectivity, it is not enough to be motivated by goodwill towards the hearer or towards epistemic goods like knowledge. Something like goodwill towards the collective scientific enterprise is needed as well: A trustworthy scientist needs to adhere to the norms and standards of her scientific community. But this does not follow immediately from Baier’s goodwill account. In fact, philosophers of science who refer to Baier sometimes talk of integrity instead of or in addition to goodwill (Irzik & Kurtulmus, 2019, p. 1154; Wilholt, 2013, p. 248; see also McLeod, 2000 on trust and integrity), or they try to spell out goodwill in terms of “commitment to the ethical norms of their trade” (Irzik & Kurtulmus, 2019, p. 1148).

Second, at least for the social notions of objectivity, it cannot be only the individual scientist who needs to be motivated by goodwill. Imagine a scientist who cares about knowledge, follows the norms and standards of her community, and makes honest claims with respect to the available evidence. But the community itself is biased in collecting the evidence, for example because it is dominated by male scientists and their values. In Baier’s account, the honest scientist would count as trustworthy, even though her claims are not based on objective processes. This is also what Rolin (2002, p. 101) points out in her critique of Hardwig’s (1991) trust account: The community plays an important role in achieving the trustworthy moral and epistemic character of an individual scientist, as well as in detecting careless or fraudulent research. Since goodwill is not something a community or organization can have, such cases are beyond the scope of Baier’s account.Footnote 6 I will now show that Hawley’s commitment account is able to capture trust in science based on objectivity much more easily.

7 Objectivity and commitments

According to Hawley (2019, p. 9), “to trust someone to do something is to believe that she has a commitment to doing it, and to rely upon her to meet that commitment”; being trustworthy requires avoiding unfulfilled commitments. Hawley (2014, p. 16) explicitly considers epistemic trust. She argues that “trusting in what someone says” is a special case of trusting someone to do something if we understand assertion in terms of commitment: When making an assertion, a speaker makes a commitment to assert properly. Thus, a trustworthy speaker is someone who either fulfills this commitment, that is, she speaks sincerely or truthfully—or who refrains from making an assertion at all, for instance by presenting a claim as mere speculation. Note that in contrast to Baier’s account, competence is here not just an additional requirement but embedded in the nature of trust. Avoiding any unfulfilled commitments requires competence because one needs to assess whether and how specific commitments can be met.

Relevant for trust in scientists and their claims are what Hawley (2019, p. 77) calls “meta-commitments”. She argues that earlier commitments often involve “an implicit commitment to take on further, more specific commitments in the future.” Meta-commitments occur for example in intimate relationships: If someone makes a commitment to be another’s monogamous girlfriend, then this involves specific commitments like not cheating or getting them a birthday present. But they also occur in professional roles or relationships: Accepting a job contract often involves taking on projects in the future—with some degree of flexibility since the employee may choose which projects she adopts.

Likewise, there are “meta-commitments with respect to assertion: things we say, roles we accept, or relationships we enter into may commit us to later volunteering whatever information we may have, or to speaking out in various situations” (Hawley, 2019, p. 89). This, I argue, translates to science. Being a scientist involves other, more specific commitments with respect to scientific claims. This concerns making claims at all; working as a scientist means generating at least some hypotheses. It also concerns the way the claims are produced: how to conduct studies, assess the available evidence, judge the theories and hypotheses others have made previously, and so on.

Sometimes these commitments are explicit, like those we find in guidelines for safeguarding good scientific practice (e.g. DFG, 2019). They can also be explicit with respect to scientific advice: For example, the German National Academy of Sciences Leopoldina (2014) publishes guidelines for advising policymakers, which include principles to not have “any preconceived views as to the outcome”, or to produce statements that are “the result of an open-minded discussion process.” Many other commitments made by scientists are implicit and perhaps more flexible, like following the respective conventional standards (Wilholt, 2009) of some scientific community.

The important thing is that Hawley’s commitment account allows us to conceptualize trust in science based on objectivity: In accepting their role as scientists (or scientific advisors), scientists make a meta-commitment to produce claims in a certain way—thus to be trustworthy, they have to fulfill these other commitments. In other words, scientists have made a commitment to produce claims in an objective way, however that is spelled out, and we trust them if we rely on them to meet this commitment.

In contrast to Baier’s goodwill account, Hawley’s account applies to the collective levelFootnote 7: It is not only possible for an individual scientist to make a commitment to follow the norms and standards of the scientific community, it is also possible for the scientific community or a scientific organization to have certain, perhaps implicit commitments.Footnote 8

Hawley’s account can thus explain the distrust in John’s (2020) “sceptic-about-consensus anti-vaxxer” case. The parent believes that, as part of their roles as scientists, the members of the biomedical community have made a commitment to take even very farfetched views seriously. But since the parent has learned that Wakefield was ousted from the biomedical community, she no longer relies on them to meet that commitment—and consequently, she distrusts the biomedical community’s claim that the MMR vaccine is safe. Nonetheless, the members of the biomedical community are in fact trustworthy: They never made a commitment to take every view seriously. The “false folk philosophy of science” (John, 2018) among the audience can be explained by Hawley’s account as well: Disagreement about meta-commitments, Hawley (2019, p. 77) notes, can lead to problems arising from a mismatch of expectations, because meta-commitments often involve some degree of flexibility about what particular set of later commitments to take on.

The commitment account captures trust based on objectivity quite well. But it has its limits with regard to trust based on shared values. This is not immediately obvious, since we can link commitments to values. Advisory bodies in particular often make explicit commitments; Germany’s public health institute RKI states for example: “Our mission is to protect and improve the health of the population” (RKI, 2023). Such mission statements certainly involve value commitments.

However, this does not imply that the values need to be shared. If the other person never made a commitment to value the things I value, she can still be trustworthy: She has no unfulfilled commitments. These cases amount to neither trust nor distrust. Consider Irzik and Kurtulmus’s (2019) analysis of the MMR vaccine case in terms of shared values again. As I noted above, this would be a case of distrust in Baier’s sense. But according to Hawley, it would count neither as appropriate trust nor as appropriate distrust: The parent believes that the scientist making a claim about vaccine safety has made a commitment to value public health; the scientist is furthermore trustworthy because she meets this and other commitments.

But the parent does not rely on the scientist in the relevant sense: The parent does not plan on the supposition that the scientist will fulfill her commitment to value public health, since the parent herself holds different values.Footnote 9 This situation is similar to that of the honorable person campaigning to imprison the criminal (Sect. 5): In both scenarios, there is a person who has no unfulfilled commitments, but the other person does not rely on her to meet those commitments. Thus, Hawley’s account cannot explain cases where a misalignment of values leads to distrust.

8 The normativity of trust in science

I have argued that it is plausible to assume plural forms of trust. I have also shown that Baier’s goodwill account can be applied to trust in science based on shared values, whereas Hawley’s commitment account captures trust in science based on objectivity. The two forms of trust in science have different normative elements: If I trust a scientist based on shared values, I rely on her goodwill and the fact that she cares about the things I care about. If I trust a scientist based on objectivity, I rely on her meeting her commitments—those she explicitly made, but also implicit commitments that come with her professional role.

This explains why trust in science can be betrayed—and not just trust in science based on shared values, but also trust in science based on objectivity. If a scientist does not produce her claims in a way I think she should and thus does not meet the commitments I think she made as part of her professional role, I may rightly feel betrayed. However, this is a different kind of betrayal than the betrayal I feel when trust based on shared values breaks down. Shared values ground a thicker version of trust than objectivity because their respective normative aspects are of a different strength: Being motivated by goodwill is thicker than meeting one’s commitments. This is something John (2020, pp. 50–51) notes, too: “[S]cientists who are untrustworthy breach a professional obligation and, as such, it is (often) right to feel ethical indignation at them in a way in which we do not (or should not) feel indignant when our computers fail us. However, this is not the same kind of personal betrayal we feel when, say, a lover treats us badly.”

John does not adopt a pluralist view on trust in science (I will deal with his view in Sect. 9), so he does not distinguish trust in science any further. And, as I have demonstrated in the last three sections, the literature on trust and reliance is helpful in elaborating this general observation. But John is right in pointing out that different forms of trust can be betrayed in different ways and that this leads to thinner and thicker versions of trust.

There is a second but related way in which Hawley’s account is thinner than Baier’s account: As Hawley (2019, p. 7) notes herself, she uses a thin notion of vulnerability. In trusting, a person makes herself vulnerable to betrayal, but this does not need to imply “substantial risk or genuine vulnerability.” For instance, I can trust a friend to bring enough lunch for our picnic, but nevertheless bring food myself because I do not want to appear ungenerous.

Baier’s account, in contrast, involves a much stronger notion of vulnerability—trusting means leaving the things one values or cares about within the power of others and thus putting them at risk. This translates to trust in science: Trust in science based on objectivity is not only thinner because it involves a different kind of betrayal, one which has to do with failing to meet professional commitments; it is also thinner because it does not require genuine vulnerability.

These differences in thickness help us understand how plural forms of trust in science relate to each other in a specific situation. Take the MMR vaccine case again, where the different forms of trust lead to different outcomes: The parent neither trusts nor distrusts the scientists in Hawley’s sense, but distrusts them in Baier’s sense. Likewise, the scientists are trustworthy in Hawley’s sense, but untrustworthy in Baier’s sense. In relying on the scientists, the parent makes herself genuinely vulnerable: There is a possibility of harm to her child. Therefore, a thicker version of trust, one requiring shared values as well, may be called for. I do not think this is strictly necessary; some parents might still not distrust the scientists even though they hold different values. But distrust based on such a misalignment of values would be appropriate as well.

In certain situations, there just is no all-or-nothing recipe for being trustworthy. Scientists can be trustworthy in a thin sense if they make objective claims. And perhaps this is enough for members of the public to act based on these claims. But depending on the context, value (mis)alignments may play a role; for example, when the audience’s sense of vulnerability is strong, or perhaps when the values in question are of particular importance to the audience.

Now, one might object that it is unsatisfactory to have no clear conditions for when distrust is appropriate or not, or to have no single account of trustworthiness. One could argue that this is a disadvantage of a pluralist approach to trust in science, and that we should instead try to integrate shared values and objectivity into one account. In the next section, I will deal with those hybrid accounts of trust, in particular John’s (2018, 2021b) two-step account of trust and Rolin’s (2021) social responsibility account, and argue why a pluralist approach is preferable.

9 Pluralist versus hybrid views

Let me start with Rolin’s (2021) account. She turns the relation between trust and objectivity around and argues that our understanding of trust has implications for our understanding of objectivity. According to Rolin, objectivity grounds trust in science: it gives us permission to trust scientific claims. But because social responsibility is one aspect of trustworthiness, it becomes an important aspect of objectivity, too. This moral-political dimension of objectivity tells us “when we can trust scientists to be socially responsible, that is, to follow ‘sound’ moral and social values in different stages of scientific inquiry” (Rolin, 2021, p. 530). Thus, for Rolin, finding the proper values that are legitimate for members of the public and policymakers is one important aspect of trustworthiness; and in turn, if objectivity grounds trust in science, shared values can in this way become an important dimension of objectivity.

There are two ways to interpret Rolin’s view. On the first interpretation, our definition of trustworthiness narrows down what counts as objectivity: Objectivity always requires social responsibility, and only this aspect of objectivity gives us reason to trust scientists and their claims. However, this would be a very restricted view of objectivity. Take the “sceptic-about-consensus anti-vaxxer” case: Rolin’s view could not really explain situations where something goes wrong with certain social processes, which may have nothing to do with finding the proper values. On the second interpretation, social responsibility is only one aspect of objectivity that gives us permission to trust. But this would be unsatisfactory from the perspective of trying to understand trust: Rolin’s view does not tell us anything about other aspects of objectivity and their relation to trust.

Another hybrid view is John’s (2018, 2021b) two-step account of trust, which includes a sociological premise as well as an epistemological premise. The sociological premise roughly captures objectivity considerations: “Institutional structures are such that the best explanation for the factual content of some claim (...) is that this claim meets some set of ‘epistemic standards’ for proper acceptance” (John, 2021b, p. 5). The epistemological premise, on the other hand, states: “If some claim meets scientific epistemic standards for proper acceptance, then I should accept that claim as well.” This premise does not specify what epistemic standards must look like to be acceptable to the trustor. But it allows for standards that are based on shared values: If a claim meets standards that are based on values held by the public, it is reasonable for the public to accept this claim. The epistemological premise is thus able to capture value alignment considerations.

However, John’s two-step account assumes that for trust, both premises have to be met. If the second premise requires shared values, this would be too restrictive: Scientists rarely know their audience’s values, and most of the time, values are not shared by all members of the audience. This is in fact why John (2015) himself argues for high epistemic standards instead of floating standards that are based on the respective values of the audience: It would be very hard to reach widespread trust if trust always required shared values. On the other hand, adopting John’s very broad framing of the second premise cannot really explain the significance of shared values for trust.

Hybrid views are thus either too restrictive, in that they work only with a narrow sense of objectivity or trust, or they do not really explain how trust relates to objectivity as well as to shared values. Adopting a pluralist view allows us to understand different forms of trust in science and their normative elements. I believe this also has practical benefits: A pluralist understanding of trust allows scientists, for instance, to be trustworthy in a more nuanced way. They can try to be generally trustworthy in a thin sense, and strive for thick trustworthiness—which requires knowing the audience’s values and taking them into account properly—only in certain contexts. Not being able to include the audience’s values does not make scientists generally untrustworthy, as they can still be trustworthy in a thin sense.

10 Conclusion

I have argued that both objectivity and shared values can ground trust in science. There are different forms of trust in science, with different normative elements: Trust in science based on shared values can be linked to goodwill, whereas with trust based on objectivity, the normative aspects can be explained through commitments. I have thus shown that objectivity grounds not only mere reliance but also genuine trust. Or, to use the comparison between cars and science again: Science is not like my car with respect to trust. My car cannot make commitments, therefore I only rely on it. If it breaks down, I can feel disappointed but not betrayed. Scientists do have commitments with regard to their claims, explicit and implicit ones. If they do not meet their commitments and do not produce their claims in a proper way, it would be right to feel betrayed.

In philosophy of science, the emphasis often lies on trust based on shared values. This has to do with the fact that there is not much engagement with the philosophy of trust literature beyond Baier’s goodwill account. One aim of my paper is to change this and to draw on insights from both areas of philosophy. We can gain important philosophical understanding by linking these debates: The philosophy of trust literature helps to understand why the different forms of trust are in fact trust, and what exactly is doing the normative work there.

As a further consequence, my analysis shows that philosophers of science may want to put more emphasis on trust based on objectivity. This is a thinner version of trust, but it nevertheless can be important if we want the public to act on scientific results and recommendations—for example in situations where the values of the audience are unknown, or where members of the audience hold very different values; or in order to establish some “default trust” in science that is not tied to specific claims.