1 Introduction

In 1969 a tobacco industry executive infamously wrote, “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy” (Brown & Williamson, 1969). This executive knew something that science communication researchers would find empirical evidence to support decades later: That people tend to distrust science communicated as in disagreement (see e.g. Gustafson & Rice, 2020). So, tobacco executives kept the controversy alive about whether smoking had negative health effects. They did so in large part by hijacking journalism, specifically its norm of giving opposing views equal weight in reporting. Other industries would replicate this tactic to deceive the public and delay action on acid rain, the ozone hole, and global warming. As Oreskes and Conway argue in their seminal work on this topic, this tactic worked because the public has an image of scientists as depositories of “cold, hard, definite facts,” so any uncertainties or disagreements lead the public to conclude “the science is muddled” (2010, p. 34).

The moral many scholars of science took from this story is that we should emphasize consensus in science communication if we care about maintaining public trust in science, especially policy-relevant science: If journalists had only communicated the robust consensus on smoking and other issues, then the public would have trusted scientists and the epistemic and tangible harm caused by their distrust could have been minimized (see e.g. Slater et al., 2022). The rationale for what I call the consensus model of trust in science is this: Scientists possess a ‘body of fact,’ as the tobacco executive said, where these facts constitute value-free claims that meet high epistemic standards; in other words, knowledge.[1] If the public trusts scientists’ claims, then the public will likely gain knowledge, which can help them live according to their own values. Consensus is an indicator of knowledge attainment that the public can easily grasp. Thus, the public is warranted in trusting claims backed by robust scientific consensus (see e.g. Goldman, 2001; Beatty & Moore, 2010; Anderson, 2011; Miller, 2013; Oreskes, 2019; John, 2022).[2]

The problem is that most policy-relevant science has not met the epistemic and non-epistemic standards of knowledge, and, thus, has not reached what I am calling robust consensus. Does this mean that the public cannot be warranted in trusting most policy-relevant science? Hopefully not, as this would have grave consequences for the role of science in democracy. How then should we communicate such science to show the public when it warrants their trust? Journalists and other communicators faced this question during the COVID-19 pandemic. In the early days of the crisis, scientists disagreed about the efficacy of broadscale masking in curbing the virus’ spread (see e.g. McDonald, 2020). They went on to disagree about the efficacy of prolonged lockdowns and vaccine boosters, among other issues (see e.g. Lewis, 2022; Mandavilli, 2021). Along with epistemic debates over interpreting the data, these disagreements were grounded in non-epistemic value differences, including how to best protect the public’s health and rights. With all this disagreement, it may come as no surprise that the public lost trust in science during the pandemic, at least in countries like the United States (Kennedy et al., 2022). Unfortunately, scientists have only continued to lose the American public’s trust since then (Kennedy & Tyson, 2023).[3]

This paper has two aims. First, it aims to establish, in the context of communication, what could in theory warrant public trust in policy-relevant science that has not met the standards of knowledge. Second, it aims to investigate, with the help of relevant empirical literature, how journalists might contribute to cultivating warranted public trust in such science in practice without violating the norms of their discipline. To achieve these aims, I first show that it is often difficult for journalists to adhere to these norms when communicating policy-relevant science in disagreement, especially when compared to reporting on science that has been in robust consensus. I do so in Sects. 2 and 3 by comparing reporting on science related to COVID-19 and smoking, among other issues. Resolving the norm conflict journalists confronted when reporting on smoking was relatively straightforward because the science on smoking had effectively met the standards of knowledge, and, thus, reached robust consensus. However, the norm conflict journalists faced when reporting on COVID-19 was not so easy to resolve, I will argue. Unlike reporting on robust consensus, reporting on scientific disagreement leads journalists to face the ethical conundrum scientists themselves often face in policy-relevant situations, including during the pandemic: the problem of inductive risk; that is, having to weigh the risks of error that come with accepting or rejecting a hypothesis (see e.g. Rudner, 1953; Jeffrey, 1956; Douglas, 2009).
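
To see the structure of this problem, it helps to give inductive risk a rough decision-theoretic rendering (the formalization below is an illustrative sketch of the standard point, not one these authors explicitly endorse). Suppose wrongly accepting a hypothesis $H$ would cause harm of magnitude $c_a$, while wrongly rejecting it would cause harm $c_r$. A communicator who aims to minimize expected harm given the evidence $E$ should accept $H$ just in case

$$p(H \mid E) > \frac{c_a}{c_a + c_r}.$$

Because $c_a$ and $c_r$ measure non-epistemic harms, the evidential threshold for acceptance cannot be fixed on epistemic grounds alone; this is the sense in which resolving inductive risk requires non-epistemic value judgments.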

Ultimately, I will argue that reporting on scientific disagreement would be better served by an alternative model of trust in science. This model helps journalists mitigate the risks associated with reporting scientific disagreement by instructing them to communicate reasons, beyond robust consensus, that warrant the public’s trust in scientists. Specifically, when journalists report disagreement, they can minimize potential risks by explaining why trust in scientists’ claims is warranted when scientists exhibit a critical responsiveness to evidence. Additionally, the model has a non-epistemic component such that scientists must be responsive to the public’s values to warrant their trust. In Sect. 4, I sketch how what I call the responsiveness model of trust in science might resolve the issues with inductive risk journalists face when communicating policy-relevant scientific disagreement. In this section, I also outline how journalists must help put the responsiveness model into practice by, first, holding scientists to account if they are not acting and communicating in an epistemically responsive manner and, second, by bringing the public’s values to scientists’ attention and testing their responsiveness to them.[4] I conclude with a brief discussion of how the responsiveness model could cultivate public trust in journalists alongside cultivating trust in scientists, especially in policy-relevant contexts.

To be clear, I do not mean to argue here that journalistic communication alone can fix the multifaceted problem of public trust in science. Among other actors, science educators and scientists themselves must participate as well, not to mention social media companies.[5] Still, journalists are often the adult public’s central source of scientific information, such that communication by scientists is often filtered through the media (Funk et al., 2017). Thus, for better or worse, journalists play a significant role in shaping the public’s image of science, and, accordingly, public trust in science (Ophir & Jamieson, 2021). Yet philosophers of science have ignored or, at best, underestimated journalistic communication in their inquiries about trust in science.[6] The present paper aims to help fill this gap in research by folding journalism into discussions about public trust in science with the hope that others will also examine journalism’s role in both creating and resolving this problem.[7]

Before moving on, I must make one clarification: When I refer to trust in science, what exactly do I mean by trust? Here I use a weak conception of trust, what is often called ‘mere reliance’ or deference, including both epistemic and practical reliance. That is, by ‘trust’ I mean ‘to believe and to act on’ a scientific claim. Accordingly, by ‘distrust’ I mean ‘to disbelieve and to not act on’ a scientific claim. There are also responses that fall between these two extremes: Members of the public may suspend judgment about a claim, and, hence, not act on it. They could also moderate their credences, and then decide to act or not act on that credence depending on its strength. I group the former under distrust and the latter under trust or distrust depending on how the public acts. Therefore, for the purposes of this paper, trust (or distrust) largely aligns with practical action (or inaction).

Following Schroeder (2021), I use this conception of trust for two reasons. First, its emphasis on practical action captures why many “lament the lack of trust in science,” namely the harm that can come from failing to follow scientific guidance, including vaccinating children, curbing CO2 emissions, and, in the central case I discuss, wearing masks. Along these lines, when journalists are evaluating how and whether they should communicate scientific disagreement, it is the public’s action (or inaction), moderated by their beliefs, with which they are most concerned. Second, the sense of trust I use here is arguably a “necessary condition of trustworthiness in a more robust sense,” which means it “will likely prove useful to philosophers looking to define scientific trustworthiness in a more robust sense” (Schroeder, 2021, p. 549).[8] In future research, I plan to explore how the responsiveness model might be reconfigured with a more robust sense of trust in mind.

2 Reporting on robust scientific consensus

To control the public image of the science on smoking, tobacco executives needed a megaphone, and that megaphone was journalists. How did they hijack the press to do their bidding? Journalists adhere to various norms when reporting the news, including ‘fairness,’ ‘accuracy’ and ‘minimizing harm’ (see e.g. SPJ, 2014; NPR, 2012). I discuss the latter two briefly in this section and in more detail later; here I concentrate on fairness. Until relatively recently, journalists interpreted their fairness norm as having an obligation to give equal weight to opposing claims. So, when reporting on the negative health effects of smoking, journalists gave equal weight to the claims of industry spokespeople and the scientific community. As Oreskes and Conway note, “The industry’s position was that there was ‘no proof’ that tobacco was bad, and they fostered that position by manufacturing a ‘debate,’ convincing the mass media that responsible journalists had an obligation to present ‘both sides’ of it” (2010, p. 16).

Giving equal weight to both sides was both epistemically and ethically problematic: Not only did it lead to inaccurate reporting of the science, but it also caused the public undue harm, thereby violating journalists’ accuracy and harm norms. It led to inaccurate reporting because it gave the public the impression that scientists were still in disagreement when they were in robust consensus. It led to harmful reporting because the communication of that disagreement likely led members of the public to distrust the science on tobacco and maintain the status quo: keep smoking. It also likely led to harmful delays in policy action on tobacco, much like it did with global warming and other issues (Oreskes & Conway, 2010).

Once journalists became aware of this problem, they responded by revising their fairness norm to align it with reporting the science accurately. Journalists now interpret reporting fairly as giving accurate weight to opposing claims instead of equal weight, namely weight proportional to the evidence that supports a claim. National Public Radio’s (NPR) Ethics Handbook exhibits this shift: “To tell the truest story possible, it is essential that we treat those we interview and report on with scrupulous fairness…especially [in] matters of controversy.” However, journalists’ goal is not “to produce stories that create the appearance of balance, but to seek the truth” (2012). Accordingly, journalists now give accurate weight to claims about smoking, which means they report the science as in robust consensus. Their reporting on global warming has also shifted in this manner: So-called ‘consensus reporting’ seems to be journalists’ main strategy for combating public misperceptions of climate science (Slater et al., 2022).

Consensus reporting could only ground public trust in science if a particular image of science permeated popular culture. As the tobacco executive aptly put it, science is a “‘body of fact’ that exists in the minds of the general public” (Brown & Williamson, 1969). This image of science is embedded in how many think about scientific literacy: In school we teach students that science produces indisputable, value-free facts about the world, and science communication researchers evaluate the public’s knowledge of those facts in their studies (see e.g. Douglas, 2021).[9] Importantly, because journalistic reporting often emphasizes this product of science and not its process, it has also historically promoted this image of science (see e.g. Slater et al., 2021).

This image grounds the model of trust in science in play here, what I call the consensus model: In short, scientists possess knowledge, where knowledge entails value-free claims that meet high epistemic standards. If the public trusts scientists’ claims, then they will likely gain such knowledge, which can help them live according to their own values. Since consensus is an easily graspable indicator of such knowledge attainment, the public is warranted in trusting science that has reached consensus. In other words, the consensus model tells us that we should trust scientific consensus because it is grounded in epistemic reasons relating to knowledge production (i.e. high epistemic standards), not non-epistemic reasons many assume are not relevant to knowledge (i.e. moral or political values that some members of the public may not share with scientists). In this way, the consensus model perpetuates the value-free ideal.

We can also see echoes of this model in philosophical work on trust in science. For example, Goldman defines an expert, in part, as “someone who possesses an extensive fund of knowledge (true belief)” and a layperson’s task in deciding which expert to trust as aiming to “learn a true answer to the target question” (2001, p. 92). Anderson also argues that “citizens can make reliable second-order assessments of the consensus of trustworthy scientific experts” (2011, p. 144). However, “Before a consensus, the best course for laypersons is to suspend judgment,” she writes (2011, p. 149). Likewise, John argues that “in extending trust” to scientists who uphold “high evidential standards…we are gaining true beliefs without running a significant risk of coming to believe false beliefs” (2022, p. 48). Notably, by demanding ‘proof,’ tobacco industry executives were also demanding that the science meet the highest of epistemic standards to warrant action.

This model of trust works well to warrant the public’s trust when scientists have obtained knowledge, but obtaining knowledge is not easy. When it comes to its criterion of value-freedom, some even argue it is impossible. As Betz (2013) writes, the ‘logical’ critique of the value-free ideal in science says value-freedom is never attainable because the evidence underdetermines scientists’ conclusions, no matter how much evidence has been collected. For this reason, scientists must always use non-epistemic values to bridge the inductive gap between the evidence and their conclusions.[10] However, following John (2015) and Betz (2013), I grant that, at least for practical purposes, it is sometimes possible to attain value-freedom. As Betz rightly explains, some scientific claims can be taken for “granted as plain facts” in decision-making contexts; for example, that “CO2 is a greenhouse gas” or that “the atmosphere comprises oxygen.” Even if such claims might still be found false for “trivial reasons,” such as many people falling prey to the same fallacy or illusion, he adds, they have been “thoroughly empirically tested” and “acted upon millions of times” and should be deemed “established beyond a reasonable doubt” (2013, p. 215). In other words, it is for epistemic reasons, not non-epistemic reasons, that scientists reach robust consensus.

The problem is that, unlike the claims that smoking causes cancer or that increased CO2 in the atmosphere causes global warming, most policy-relevant claims have not garnered robust consensus. Why? Because these claims have not met the high epistemic standards the consensus model deems relevant for acceptance. This includes most claims made about COVID-19 during the pandemic. For many policy-relevant claims, decisions about whether the evidence is sufficient to accept or reject them must be made on non-epistemic grounds about which there may be reasonable disagreement (see e.g. Douglas, 2009). This means that in such cases we cannot use the consensus model of trust, especially insofar as it appeals to the value-free ideal, to explain why policy-relevant claims warrant public trust. In the next section, I show why this lingering legacy of the value-free ideal in the minds of the public, and the high epistemic standards that come with it, cause ethical problems for journalists when they are tasked with communicating policy-relevant scientific disagreement.

3 Reporting on scientific disagreement

In the early days of the pandemic, scientists disagreed about the efficacy of broadscale masking in curbing COVID-19’s spread. That is, they disagreed about whether everyone should wear a mask or whether masks, which were a limited resource at the time, should be reserved for people who were working in high-risk environments, such as healthcare workers. If individuals could be contagious while asymptomatic, many scientists reasoned at the time, then broadscale masking could be effective. Others argued the effect would be negligible, especially for permeable cloth masks and if the virus could be transmitted via tiny aerosols. Worse, if people touched their faces more or stopped social distancing while masked, then broadscale masking might increase transmission, some worried (see e.g. McDonald, 2020). As a result, the uncertainty of the evidence gave scientists some wiggle room to reasonably differ on how to best protect the public’s health and rights. In other words, scientists resolved the problem of inductive risk with diverging non-epistemic value judgments. However, in April 2020 the U.S. Centers for Disease Control and Prevention recommended that everyone wear masks, which led many businesses to mandate mask-wearing, despite lingering scientific disagreement (Chavez et al., 2020).

When journalists reported on the science related to smoking and global warming, their fairness norm (as originally conceived) came into conflict with their norms of accuracy and harm. This conflict was relatively easy to resolve because the science on smoking and global warming had effectively met the epistemic and non-epistemic standards of knowledge and, thus, was backed by robust consensus, as I showed in the previous section. When it came to reporting on the science on masking during the pandemic, the ethical issues journalists faced were not so straightforward to resolve, I will argue: In short, journalists faced the problem of inductive risk, or, rather, what I call second-order inductive risk, which entails weighing the harms of error when accepting or rejecting a scientific claim that influences how another scientific claim should be communicated. I come to this conclusion through an examination of a potential conflict between journalists’ norms of accuracy and harm when communicating the scientific disagreement over masking.

I will start by outlining how what I call the accuracy-harm norm conflict could manifest. To do so, we first need a more detailed understanding of how journalists conceptualize these norms. On the accuracy norm, Brown writes of the Society of Professional Journalists’ (SPJ) Code of Ethics, “[T]here is nothing more important for a journalist than to come as close to the ‘truth’ as possible,” adding, “[P]eople have different ideas about the truth of almost everything…but accuracy is less debatable” (SPJ, 2014). Hence, the norm says journalists should strive to maximize accuracy, given the slipperiness of the truth. How can journalists maximize accuracy? NPR’s standards state, “For more accurate stories, seek diverse perspectives,” as it “allows for a much more nuanced report” (NPR, 2012).

On the harm norm, the SPJ states journalists should, “Balance the public’s need for information against potential harm or discomfort.” However, the SPJ adds, “While it is important for journalists to be succinct– oversimplification that removes integral facts, or is in the service of manipulation is a violation of some of the Society’s basic principles” (SPJ, 2014). Seaman (2015) also notes that “journalists often inflict some level of harm to serve the greater good,” so the expectation is not that their reporting should be harm-free. In sum, this norm tells journalists to “minimize undue harm,” but to refrain from manipulating the public in the process (NPR, 2012).

Two bodies of evidence fuel the accuracy-harm norm conflict in the case I describe: The first suggests that members of the public tend to distrust scientific claims when they are communicated as in disagreement (see e.g. Gustafson & Rice, 2020). The second suggests broadscale masking is effective in preventing the spread of COVID-19 (see e.g. McDonald, 2020). On the former, the science on communicating disagreement admittedly has not met the high epistemic standards of knowledge. Still, Gustafson and Rice’s (2020) review of the literature found that, across multiple studies, communicating disagreement was the only ‘uncertainty frame’ that consistently led to negative effects on people’s beliefs and behavioral intentions.[11] Based on their review, the authors conclude that communicators “should be cautious about expressing [scientific disagreement] unless it is appropriate to their context and goals” (Gustafson & Rice, 2020, p. 627). This is not to mention that the tobacco and fossil fuel industries, among others, successfully used the public’s distrust of science in disagreement as a tactic to stifle policy action on tobacco, climate change, and other issues (Oreskes & Conway, 2010). Thus, it is arguably safe to say the weight of the evidence supports the claim that people tend to distrust scientific claims in many contexts when they are communicated as in disagreement. As I have already explained, the weight of the evidence also suggested early in the pandemic that broadscale masking curbs the spread of COVID-19.

Let us now consider a hypothetical yet realistic case of a journalist I call Natalie: It is early in the coronavirus pandemic and Natalie’s editor has assigned her to write an article on whether broadscale masking would be effective at curbing COVID-19’s spread. After reviewing the scientific literature and interviewing experts, she accurately concludes the science is in disagreement: While most of the evidence suggests broadscale masking curbs transmission, some evidence suggests it makes no difference and meager evidence suggests it could increase transmission. If Natalie judges that the weight of the evidence on communicating disagreement and on masking lies as I have described, this situation puts her in a tricky position as a journalist: The accuracy norm tells her to report the scientific disagreement, while the harm norm seems to give her conflicting advice, some of which is in tension with the accuracy norm’s advice. How so?

If Natalie is to report the science in a maximally accurate way, then she should clearly report all three hypotheses. Recall NPR’s (2012) guidance: “For more accurate stories” a journalist should “seek diverse perspectives.” By reporting all three hypotheses, Natalie would be providing more diverse viewpoints within the scientific community and, hence, a more accurate picture of the state of the science. When she reports the science in this way, some people will likely still wear masks because they think the potential benefits of doing so outweigh the potential harms, given where the weight of the evidence lies and given that they deem wearing a mask a minor inconvenience.

Yet by reporting the scientific disagreement over the issue, Natalie is concerned that she will cause other members of the public epistemic and non-epistemic harm: Given where the evidence on communicating disagreement stands, she worries some individuals may disbelieve (or suspend judgment or hold low credences) that masking is effective at stemming COVID-19’s spread, and, accordingly, not wear masks. Likewise, given where the evidence on masking stands, she worries that not wearing masks could increase their risk of catching the virus and spreading it to others. Now, Natalie reasons that if she only reported the leading hypothesis, thereby avoiding the communication of scientific disagreement, this would presumably lead at least some of these individuals to believe (or hold high credences) that masking works and wear masks. While this option would not maximize the accuracy of her reporting, since it would not entail reporting the scientific disagreement, it would likely minimize the epistemic and non-epistemic harm that her reporting causes, at least when it comes to individuals who respond negatively to disagreement. So, in one sense, the harm norm instructs her to not report the scientific disagreement, which conflicts with the accuracy norm’s advice.

However, recall that the SPJ (2014) states that “oversimplification that removes integral facts or is in the service of manipulation is a violation of some of the Society’s basic principles.” Arguably, leaving out all but the leading hypothesis does involve the removal of integral facts, namely that the science is in a state of disagreement, even if the weight of the evidence supports the claim that masks work. We should also consider what motivates Natalie’s exclusion of the less-supported hypotheses: To prevent the public from distrusting the science that says masks work with the aim of getting them to wear masks. For many journalists this would veer too close to manipulation: Even if they should minimize harm when reporting, journalists should not directly aim to influence people to act a certain way, especially at the expense of reporting accurately.[12] Accordingly, Natalie reasons that the curtailing of people’s rights in this way is also a harm, particularly if the public were to find out that they were being manipulated, as this could lead to distrust of journalists. So, in another sense, the harm norm cautions against leaving out the disagreement, thereby giving conflicting advice. Thus, the real confusion lies in how Natalie should adhere to the harm norm and whether she should sacrifice some of the accuracy of her reporting to do so.

Some might think the solution to Natalie’s conundrum is simple: What if she explained why most of the evidence suggests masks work, but why there is still debate? This would translate to reporting the hypotheses in an epistemically fair or balanced manner, which is another norm Natalie should factor into her decision. Unfortunately, this solution will not work for many members of the public because it requires people to respond rationally to evidence, and the whole point is that some, if not many, individuals will not do so: When people distrust science that is in disagreement, they let such disagreement weigh more heavily on their credences and beliefs than it rationally should. In fact, research suggests even “small skeptical scientific minorities can cast significant doubt among the general public on the existence of an environmental problem and reduce support for addressing it” (Aklin & Urpelainen, 2014, p. 173).[13] However, I will return to this solution later, as it will be crucial to showing why, with the responsiveness model of trust in place, we can resolve Natalie’s conundrum without violating any journalistic norms.

Now at this point, one might wonder: Is this ethical issue that journalists face particular to the pandemic or is it pervasive? If it is not pervasive, why care? Policy-relevant science in general is especially susceptible to this issue, I argue, because it often possesses two characteristics. First, many scientific findings pertinent to policy, such as those about human health and the environment, bear on members of the public’s practical decisions. This means that journalistic reporting of this science can impact the public’s decisions, which is required for this ethical issue to arise. Second, in much policy-relevant science the weight of the evidence lies in one direction while the science remains in disagreement, given the complex nature of the systems under study and the ethical limits on experimenting on them. For example, claims about the effects of chemicals on children’s development often cannot reach the status of knowledge because scientists cannot ethically experiment on children. The same goes for pregnant people. Instead, evidence must come from correlational studies and experiments on other animals, both of which have significant epistemic limitations, even if they can be suggestive of the truth. In many, if not most, cases, it will also be journalists’ responsibility to communicate the existing science regarding these topics before scientists obtain knowledge, and, accordingly, reach robust consensus.

So, this ethical issue is pervasive and warrants our concern. How then can we resolve it? Given how journalists resolved the norm conflict they faced when reporting on smoking and global warming, one natural conclusion is that the resolution of the accuracy-harm norm conflict also lies in revising one or both norms. Revising the harm norm seems to be the more natural option here since it is the norm that does not provide clear guidance in Natalie’s case. It is also unclear how the accuracy norm could be revised in a way that does not substantially compromise journalists’ ‘most important’ imperative: maximizing accuracy. That is, either journalists communicate the disagreement or they do not, and if they do not, then they are not reporting the science sufficiently accurately. Crucially, this means that we would need to revise the harm norm, such that it allows the communication of scientific disagreement.

One path to revising the harm norm in this way is to examine whether journalists should be permitted by this norm to suspend judgment about the evidence that suggests their reporting might cause harm when that evidence has not met high epistemic standards. In other words, when journalists are not relatively sure, given the evidence, that their reporting will cause harm, they are exempt from considering that harm. In the kinds of cases central to this paper, this would mean suspending judgment on the evidence that suggests communicating scientific disagreement leads to distrust in science. Suspending judgment on this evidence would mean the harm norm would tell journalists to report the disagreement since the only harm they would be considering would be the harm of manipulation. Seeing that the accuracy norm also instructs journalists to report the disagreement, this solution would avoid the accuracy-harm norm conflict in relevant cases.

Those who view trust in science through the lens of the consensus model would deem this a reasonable tactic. As Anderson writes, “Before a consensus, the best course for laypersons is to suspend judgment” (2011, p. 149). In other words, this model requires scientists’ claims to meet high epistemic standards to warrant acceptance (and, thus, trust). Given that the claim that communicating scientific disagreement causes distrust has not met such standards, we should suspend judgment on it, proponents of this view would say. However, while it could be construed as epistemically permissible for journalists to suspend judgment in this case, I argue it is not morally permissible for them to do so. It is not morally permissible for the same reasons it is not morally permissible for scientists to ignore the potential harms of error when making claims. That is, I argue journalists, much like scientists, face the problem of inductive risk, the resolution of which morally requires them to make non-epistemic value judgments about the threshold of evidence required to accept a scientific claim.[14]

Now before walking through why this is the case, it is worth pointing out how the situation scientists face differs slightly from that of journalists. The literature on inductive risk, particularly the ‘moral critique’ of value-free science, primarily outlines how scientists face what I will call first-order inductive risk, where they are morally required to consider the harms of error when making a claim (Douglas, 2009; Betz, 2013). What my discussion of the accuracy-harm norm conflict brings to the fore is that journalists often face what I call second-order inductive risk, which entails weighing the harms of error when accepting or rejecting a scientific claim that influences how another scientific claim should be communicated.[15] That is, when journalists evaluate the evidence that suggests communicating scientific disagreement causes distrust in the public, they face second-order inductive risk. To be clear, I am not claiming that all cases of the accuracy-harm norm conflict entail journalists facing the problem of second-order inductive risk. Rather, my claim is that second-order inductive risk gives rise to the case of the accuracy-harm norm conflict I describe here.[16] For this reason, the literature on inductive risk can help us understand and resolve the situation journalists are in. So, what does it say?
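
The nesting at issue can be pictured schematically (writing $D$ for the claim that communicating disagreement causes distrust and $M$ for the claim that broadscale masking works; the diagram is an illustrative rendering, not a formalism from the inductive risk literature):

$$\underbrace{\text{accept/reject } D}_{\text{second-order judgment}} \;\Longrightarrow\; \underbrace{\text{withhold or report the disagreement over } M}_{\text{communication choice}} \;\Longrightarrow\; \underbrace{\text{public belief and action on } M}_{\text{locus of potential harm}}$$

An error at the first stage propagates: wrongly rejecting $D$ means the disagreement is reported and distrust follows, while wrongly accepting $D$ means integral facts are withheld; either way, the harm is realized through the public’s uptake of $M$ rather than through $D$ itself.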

In her seminal work on how scientists face the problem of inductive risk, Douglas writes that “all of us, as general moral agents, have a responsibility to consider the consequences of error when deliberating over choices, and in particular when deciding upon which empirical claims to make” (2009, p. 66). This includes both the intended and unintended consequences of error, where the potential harm that comes from communicating disagreement would fall under the latter. Douglas argues that our moral responsibility for the unintended consequences of error when making claims parallels our concern over reckless or negligent behavior. For behavior to be deemed reckless or negligent, one has to be more than causally responsible for it: One also has to be morally responsible, where the latter necessitates praise or blame and the former does not, and where praise or blame is warranted if one could have reasonably done otherwise. One could do otherwise if the potential unintended consequences of error are “reasonably foreseeable…even if not foreseen by the individual, due to negligence, or even if ignored, due to recklessness,” she writes (Douglas, 2009, p. 71).

This leaves us with the following questions when it comes to the proposed solution to the relevant cases of the accuracy-harm norm conflict, namely suspending judgment on the science that suggests reporting scientific disagreement leads to public distrust: Are Natalie and journalists generally being reckless by ignoring the potential harm their reporting might cause when communicating scientific disagreement? This hinges on the possibility that journalists could have reasonably done otherwise, so could they have? Do they have another option? If they don’t, then it would not be reasonable to deem them morally responsible for the harm their reporting might cause, only causally responsible, which would mean that they would be morally permitted to suspend judgment in these kinds of cases. I argue journalists do have a better option: Accept the claim that communicating disagreement can cause harm and satisfy the harm norm by contributing to the cultivation of an alternative image of science in the minds of the public. In other words, the harm norm requires the utilization of an alternative model of trust that accommodates the communication of science’s often disagreement- and value-laden nature.[17]

I outline why this is the better option in the next section, but before doing so I must explore one last option. Fine, journalists might say, we grant that we are morally required to make a value judgment about the evidence that suggests communicating scientific disagreement leads to distrust in science: We judge that the threshold of evidence required to accept this claim has not been met. With this solution journalists also do not need to consider the harm that communicating scientific disagreement might cause, thereby resolving this case of the accuracy-harm norm conflict in much the same way as suspending judgment on the science did. To justify their actions journalists might reason that the very real harm of manipulating the public is more important than the merely potential harm of instigating public distrust in science that could lead to suffering and death from COVID-19. While this is an option, I argue it is still not the best option. Why? Because the solution I outline in the next section avoids the harm of manipulating people while still considering the harm that reporting scientific disagreement might cause, all while maximizing the accuracy of reporting. In short, journalists can have their cake and eat it, too.

4 The responsiveness model of trust in science

To ground trust in policy-relevant science in disagreement, I propose that journalists should work towards shifting the public’s image of science by refocusing science communication on science’s process, not its product, namely knowledge. In providing a sketch of what I call the responsiveness model of trust in science, I concentrate on one, if not the, central characteristic of scientific process: a critical responsiveness to evidence; or epistemic responsiveness, for short. What exactly do I mean by epistemic responsiveness? It is often some form of epistemic responsiveness that scholars argue distinguishes trustworthy from untrustworthy consensus. Following Longino (1990), Oreskes (2019) argues that we should trust science when it has reached consensus in a diverse scientific community through criticism and peer review. Drawing on Mill’s famous work on free speech, Beatty and Moore also point to this characteristic in their defense of a form of consensus they call “deliberative acceptance.” The “quality” of a theory, they write, “depends on how well it is defended against alternatives that are themselves well defended” (2010, p. 202). In other words, epistemic responsiveness is the characteristic of scientific process that eventually gets us knowledge. This is a testament to its centrality in scientific practice, giving us prima facie reason to ground trust in it.

Some also directly tie what I am calling epistemic responsiveness to trust. For example, Goldman considers what he calls “indirect argumentative justification,” or providing counterarguments and responding to them, as a reason to trust an expert (2001, p. 95). But how could this aspect of science be used to ground trust in science in disagreement in particular? Let us start with one obvious reason: It does not require robust consensus. This means that if we ground trust in science’s epistemic responsiveness, then the existence of disagreement should not give us prima facie reason to distrust science. More positively, when the science remains in disagreement in these cases, epistemically responsive scientists provide us with the best available evidence because they are constantly updating their views based on evidence gathered directly from the world and indirectly from debates with fellow scientists (John, 2022). So, we are warranted in trusting them when they make claims because, crucially, there is no better alternative. Thus, it is the process by which scientists come to their claims, not the possession of knowledge, that could warrant our trust in science in disagreement.

In some sense, the public could be construed as rational when they distrust science communicated as in disagreement, given their image of scientists as depositories of knowledge. Their argument goes like this: One should trust science only when it is in consensus because that indicates that scientists possess knowledge. Scientists are not in consensus on X, so one is not warranted in trusting the science on X. However, here is what happens when we replace the image of scientists as depositories of knowledge with an image of scientists as epistemically responsive: One is warranted in trusting scientists when they are epistemically responsive, namely when they explain what the best available evidence says and why. Despite some disagreement, the best available evidence says X. Therefore, one should trust the science on X.[18]
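
Put formally (an illustrative regimentation of the two arguments just given), write $C(X)$ for ‘there is robust consensus on X,’ $R(X)$ for ‘scientists are epistemically responsive about X,’ and $T(X)$ for ‘one is warranted in trusting the science on X’:

$$\text{Consensus model: } T(X) \rightarrow C(X), \quad \neg C(X), \quad \therefore \neg T(X)$$

$$\text{Responsiveness model: } R(X) \rightarrow T(X), \quad R(X), \quad \therefore T(X)$$

The first schema withholds trust by modus tollens whenever consensus is absent; the second extends trust by modus ponens whenever responsiveness is displayed, so the mere existence of disagreement carries no independent force against trust.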

This brings me to a solution to the accuracy-harm norm conflict I outlined in the previous section: What if Natalie explained why most of the evidence suggests broadscale masking works, but why there is still debate? My point there was that this solution might not work because of some people’s likely reaction to the communication of science in disagreement. My point here is that people are not warranted in reacting to disagreement in this way if we replace their image of scientists as depositories of knowledge with one of scientists as epistemically responsive. If the public saw scientists in this way, then journalists would be able to communicate scientific disagreement and retain the public’s trust in science without having to manipulate them. In short, reporting the science fairly could, in theory, resolve the accuracy-harm norm conflict I outline, but, crucially, only if the public’s image of science revolved around epistemic responsiveness, at least in addition to robust consensus.

However, it is one thing to provide an argument for why trusting epistemically responsive claims is warranted in theory, which was the first aim of this paper. It is another thing to find a way in practice to retain the public’s trust when communicating why claims are epistemically responsive, which was this paper’s second aim. Notably, empirical research lends support to the efficacy of the responsiveness model of trust in science in practice. Weisberg and colleagues (2021) found that the more individuals’ epistemic thinking style accorded with what they call ‘evaluativism,’ the more likely they were to have beliefs that aligned with the scientific community on climate change, vaccines, and evolution. The authors define evaluativism as the idea that scientific claims have “degrees of certainty” and “must undergo a continual process of evaluation” (2021, p. 123). They found the same positive relationship between individuals’ knowledge of scientific process generally and beliefs that align with the scientific community, a relationship that held regardless of identity factors, including political ideology and religiosity. To be clear, I do not cite this research to suggest that the responsiveness model will definitively work– more empirical research must be done to confirm its efficacy– but this gives us reason to believe the solution is worth taking seriously.

Assuming what this evidence suggests is correct, I offer three ways in which journalists can help put the epistemic component of the responsiveness model into practice. Following this, I will then discuss the non-epistemic component of the model. First, journalists should ask scientists questions aimed at testing their claims for epistemic responsiveness. A good example is a question that Douglas (2021) proposes when discussing the communication of value-laden science: What evidence would you need to change your mind? If a scientist cannot come up with an answer, that gives members of the public an indication that the scientist’s claims are not epistemically responsive, and, thus, should be deemed untrustworthy.

It is worth mentioning that the outcome of this tactic might make the claims of scientists once thought of as trustworthy– claims by scientists affiliated with prestigious universities, for example– appear untrustworthy to the public. The hope is that high-ranking scientists, given their training, would have it in their communication repertoire to make epistemically responsive claims by responding to such questions undogmatically, but it should come as no surprise that some elite thinkers fall prey to dogmatism. My point here is that, when it comes to deciding on trustworthiness, displays of epistemic responsiveness should take precedence over hierarchical metrics because epistemic responsiveness is a better means of holding those in power– which includes scientists– to account, as it does not merely defer to those in power but tests them.[19]

It is also worth noting that this tactic of testing scientists’ epistemic responsiveness through the question of what would change their minds can be useful when reporting on both robust scientific consensus and scientific disagreement. For example, O’Leary (2023) asked a virologist this question about the disagreement over the origins of the COVID-19 pandemic, to which the virologist replied with evidence to support her own view and a response to those who support other theories. While working as a journalist, I also used a form of this tactic to debunk the false claim that the theory of climate change is pseudoscience because it cannot be disproven. I explained how it could be disproven, but why the “chances are slim” (Schipani, 2016). On this note, those producing misinformation related to climate change and other topics are untrustworthy, at least in part, because they are not being epistemically responsive to the evidence. This means that the public, especially with journalists’ help, could often use epistemic responsiveness as a litmus test to protect themselves against misinformation.

Now one might imagine that there are situations in which a corrupt scientist might be able to trick the public and journalists into thinking they are making epistemically responsive claims when they are not, much like industry-backed scientists tricked the public and journalists into thinking scientific disagreement over smoking and climate change persisted when it did not. In other words, one might agree that epistemically responsive scientists should warrant the public’s trust– thus, resolving, in theory, the case of the accuracy-harm norm conflict I outline– but object that, if it is easy to feign epistemic responsiveness, then my solution falters on practical grounds. Goldman discusses this problem in relation to indirect argumentative justification, writing that it may be difficult for the public to distinguish “stylistic polish” from genuine expertise and that only an understanding of direct argumentative justification– or an understanding of the science itself– could protect the public from this deception (2001, p. 95). However, this is precisely the expert understanding the public does not have, which is why they must trust scientists in the first place.

Here, again, is where journalists come in, I argue. In the model of trust I propose, journalists should act as intermediaries between scientists, who understand more about their field of science than journalists, and the public, who understand less than journalists. This intermediary position makes journalists better able to verify whether the scientists they interview are, indeed, being epistemically responsive. That is, it is their job not only to ask scientists what evidence they would need to change their mind, but also to interrogate and judge their responses to that question, and questions like it. On that note, the problem that arose when journalists were reporting on climate change, tobacco, and other issues is, in part, that they did not have this intermediary understanding of the science. They were often just as oblivious to the science as the general public (Oreskes & Conway, 2010). Unfortunately, this problem largely still exists today when it comes to many scientific issues. If the responsiveness model is to work in practice, then this would have to change. However, this is not an unachievable feat: Some journalists already possess this understanding of science. Those who do not could receive further training.[20]

If journalists were to possess this intermediary understanding, it is also worth mentioning that they would be well-positioned to distinguish what Miller (2013) calls ‘knowledge-based consensus’ (or what I call robust consensus) from ‘mere agreement,’ the former of which grounds the consensus model of trust. They would do so by evaluating the evidence (or lack thereof) that supports the consensus, thereby verifying whether it is grounded in epistemic reasons (i.e. meeting high epistemic standards) or non-epistemic reasons (i.e. moral or political values). If scientists are using consensus to garner the public’s trust, but journalists uncover that scientists’ consensus is grounded in non-epistemic reasons, then the public would be warranted in distrusting scientists (until, that is, scientists switch tactics and use epistemic responsiveness, and, as I will show, non-epistemic responsiveness, to garner the public’s trust). Crucially, this means that, regardless of what is grounding public trust in scientists’ claims, journalists can and should play a role in interrogating when trust is warranted and when it is not.

My second recommendation to journalists based on the responsiveness model is a basic one: Journalists should regularly and explicitly point out that epistemic responsiveness defines science and is the characteristic of scientific process that eventually leads scientists to gain knowledge and reach robust consensus. They can do so by devoting more articles solely to explaining scientific processes as well as by integrating more discussion of process into pieces about specific issues. In the article “Science Isn’t Broken,” Aschwanden (2015) writes that science is no “magic wand that turns everything it touches to truth,” adding that, “whatever we know now is only our best approximation of the truth.” Similarly, in an article about scientific disagreements over COVID-19, McDonald (2022) writes, “More than anything, science is a process, a method for moving closer to truth.” She follows this up with a reminder to readers that “the science on a given topic can change– an important message, given the habit of some to apply today’s knowledge to the past to undermine public health guidance.” Unfortunately, explicit journalistic reporting on scientific process is relatively rare (Slater et al., 2021). To shift the public’s image of science and reconcile the cases of the accuracy-harm norm conflict that I outline, this must also change, I argue.

Undoubtedly, it will take time for the public’s image of science to shift, no matter how often journalists belabor the point that science is, at its core, epistemically responsive. Accordingly, in the interim, the communication of scientific disagreement will likely continue to cause harm. However, I argue this is the short-term cost of the long-term benefit of a more scientifically literate public. As Seaman (2015) put it, “[J]ournalists often inflict some level of harm to serve the greater good.” The greater good here, I argue, is a public who responds rationally to scientific disagreement, who understands that while science is no ‘magic wand’ of truth, it is still the best tool we have to act in the face of uncertainty. What this means is that a public image of science as epistemically responsive could resolve the cases of the accuracy-harm norm conflict I outline, but likely only over the long term.

Crucially, communicating scientific disagreement would not be ethically warranted by the harm norm if the frame of epistemic responsiveness did not accompany it, as the short-term harm of communicating the disagreement would persist, but the long-term benefit of changing the public’s image of science would not. In other words, what the responsiveness model of trust in science recommends is that the harm norm should require the communication of scientific process, namely the epistemic responsiveness of science, especially in cases of communicating disagreement. This brings the harm norm in line with the accuracy norm by accommodating the communication of disagreement, thereby resolving the conflict.

However, the potential short-term harm incurred in the interim should not be taken lightly. To temper this harm, I offer a third recommendation that, we will soon find, relates to putting the non-epistemic component of the responsiveness model into practice: Journalists should find ways to make members of the public more epistemically responsive to the evidence in their articles– though, crucially, not at the cost of reporting science accurately. How can this be done? Research suggests framing articles about scientific disagreements according to the norm of reciprocity, a form of responsiveness, may be a promising way to accomplish this. Notably, this is likely especially the case when non-epistemic values influence scientific disagreements, as they often do in policy-relevant cases.

For example, Xu and Petty (2022) found that people who viewed mask mandates as a moral infringement on their rights were more open to wearing a mask when communicators acknowledged why one might hold the listeners’ view, but also explained why masking is more justified. Thus, they not only provided empirical evidence to justify their claims but also engaged with the public’s values. In explaining why this happens, Xu and Petty hypothesize that “acknowledging the target’s opinion is conceptually similar to doing a favor” because both acts are “[b]ased on the social influence principle of reciprocity.” That is, “if a speaker seems open to the target’s position, the target should reciprocate by being more open to the speaker’s view” (2022, p. 1152). To be clear, openness to the speaker’s position does not entail trust, but it is arguably a step in the right direction.

Post and Bienzeisler (2024) also found that ‘honest brokers’ garnered more trust from the public than ‘epistocrats’ when it came to communicating policy advice on various subjects. Whereas epistocrats claimed that their science dictated certain policy choices, honest brokers effectively displayed the reciprocity or responsiveness that Xu and Petty (2022) pinpoint. In some ways, the epistocrat’s communication is akin to attempting to garner the public’s trust via consensus reached by non-epistemic means. As the authors write, this “scientist blurs the distinction between scientific and political claims, purporting to ‘prove’ a policy and thereby precluding a societal debate over values and policy priorities” (Post & Bienzeisler, 2024, p. 1). In addition, the authors note that the honest broker’s communication style garnered trust even in those who most strongly opposed the scientist’s policy advice, thereby reducing polarization among members of the public. Importantly, this research, much like Xu and Petty’s (2022) work, suggests that being responsive to the public’s values can help cultivate the public’s epistemic responsiveness to scientists’ claims, especially among groups that have experienced the most drastic recent declines in trust in science, such as Republicans (Kennedy & Tyson, 2023).

Finally, this brings me to the non-epistemic component of the responsiveness model: To warrant the public’s trust, scientists must also be responsive to the public’s values in the ways that Xu and Petty (2022) and Post and Bienzeisler (2024) outline. This aspect of the model is important to resolving the case of the accuracy-harm norm conflict I outline in practice because communicating science as value-laden in the way these researchers suggest may ironically help build public trust in the face of scientific disagreement and, crucially, in a way that maximizes accuracy and avoids manipulation.

But why exactly should non-epistemic responsiveness warrant the public’s trust in science in theory? Non-epistemic responsiveness is embedded in non-instrumental arguments for democracy, such as those that see a policy’s legitimacy as originating from public justification: By explaining to the public why they hold certain values and not others that members of the public may hold, scientists justify their choices to the public.[21] This is in contrast to the consensus model, which grounds warranted trust in science’s value-freedom. Value-freedom is important to other non-instrumental arguments for democracy, such as those that center on liberty, or the public’s right to live according to their own values (see e.g. Christiano & Bajaj, 2022). My point is that democratic theory provides multiple paths to warranting trust in science, some of which accommodate value-laden science and some of which do not (see e.g. Lusk, 2021).

Note that my recommendation is different from other recommendations in the literature on trust in science as it pertains to non-epistemic values: I am not merely arguing that scientists should be transparent about their values, as some have argued.[22] What I suggest is something different: I am asking scientists to not only be transparent about what values factored into their reasoning and why, but also to respond to the values that others, including members of the public, might hold that would lead them to come to different conclusions about the science. I am also not claiming that scientists should align their values with those of the public to warrant public trust, as others have argued.[23] My recommendation does not forbid scientists from using their own non-epistemic values in their evaluations of inductive risk or elsewhere in the scientific process. Instead, it asks scientists to consider the public’s values in their reasoning and in the communication of their reasoning by explaining why they may not share some members of the public’s values and, if they do not, why other values guide them instead. As research by Xu and Petty (2022) and Post and Bienzeisler (2024) suggests, this may be enough to move the public in the direction of trusting scientific claims that are plagued with disagreement.

However, for scientists to be responsive to the public’s values, they must know what those values are to begin with. Given the high demands of their work, it is arguably unreasonable to expect scientists to seek out the public’s values in all their nuance. Therefore, when interviewing scientists, I argue journalists should be responsible for bringing the public’s values to scientists’ attention and then communicating scientists’ responses back to the public.[24] If a scientist has no response to the public’s values, then that scientist can reasonably be deemed unresponsive in the non-epistemic sense, and, as a result, at least partially untrustworthy until they have provided a response. Unlike for scientists, seeking out the public’s values is central to journalists’ job description: they already perform this task when they interview politicians. This is part of how they hold politicians to account, and, I argue, it is how they should hold scientists to account as well. Thus, in the same way that journalists can act as intermediaries between scientists and the public when it comes to testing scientists’ epistemic responsiveness, they can and should also perform this role when it comes to scientists’ non-epistemic responsiveness, at least if we want to cultivate trust in science in disagreement.

5 Conclusion

As Oreskes and Conway (2010) point out, journalists have helped cultivate a scientifically ill-informed and distrustful public through their reporting on tobacco, climate change, and other issues. These scholars and others see journalists as part of the problem of public trust in science, not part of its solution. It is undeniable that journalists have done damage to public trust in science. For this reason and others, the public, at least in the United States, has deemed journalists less trustworthy than scientists, which may raise issues with the solutions I propose here (Kennedy et al., 2022).

How could the public regain trust in journalists when they make claims about science? Perhaps, much like scientists, they must earn, not merely expect, the public’s trust.[25] In the context of this paper, earning the public’s trust when it comes to reporting on science follows from the interrogation of the epistemic and non-epistemic responsiveness of scientists. This means that the responsiveness model of trust has built into it a route for warranting trust in journalists as well as trust in scientists. In other words, journalists can and should earn the public’s trust by properly doing their job: cultivating a well-informed public and holding those in power to account– and those in power include scientists.